| id (string, length 64) | published (string, length 19-25) | title (string, length 7-262) | description (string, length 6-54.4k) | link (string, length 31-227) | category (6 classes) | image (string, length 3-247) |
|---|---|---|---|---|---|---|
d0998d2a6f4e23d75924132e74222f05d6ab41e47e848129f3e96a12d46cb025
|
2026-01-21T00:00:00-05:00
|
PREFAB: PREFerence-based Affective Modeling for Low-Budget Self-Annotation
|
arXiv:2601.13904v1 Announce Type: new Abstract: Self-annotation is the gold standard for collecting affective state labels in affective computing. Existing methods typically rely on full annotation, requiring users to continuously label affective states across entire sessions. While this process yields fine-grained data, it is time-consuming, cognitively demanding, and prone to fatigue and errors. To address these issues, we present PREFAB, a low-budget retrospective self-annotation method that targets affective inflection regions rather than full annotation. Grounded in the peak-end rule and ordinal representations of emotion, PREFAB employs a preference-learning model to detect relative affective changes, directing annotators to label only selected segments while interpolating the remainder of the stimulus. We further introduce a preview mechanism that provides brief contextual cues to assist annotation. We evaluate PREFAB through a technical performance study and a 25-participant user study. Results show that PREFAB outperforms baselines in modeling affective inflections while mitigating workload (and conditionally mitigating temporal burden). Importantly, PREFAB improves annotator confidence without degrading annotation quality.
|
https://arxiv.org/abs/2601.13904
|
Academic Papers
|
svg
|
e79b66521ee2aaecd60b7fd450f8d829ed1d89bdc7fc626ad29317c4a6c4f7f7
|
2026-01-21T00:00:00-05:00
|
Decentralized Infrastructure for Digital Notarizing, Signing and Sharing Files using Blockchain
|
arXiv:2601.13907v1 Announce Type: new Abstract: Traditional paper-based document management has long posed challenges related to security, authenticity, and efficiency. Despite advances in digitalization, official documents remain vulnerable to forgery, loss, and unauthorized access. This thesis proposes a decentralized infrastructure for digital notarization, signing, and sharing of documents using blockchain technology. The research addresses key issues of transparency, immutability, and feasibility by defining system requirements, evaluating existing solutions, and proposing a novel architecture based on distributed systems. By combining cryptographic techniques with decentralized storage, this research contributes to the development of a more secure and efficient framework for managing official documents. The findings highlight the potential of blockchain-based digital notarization to streamline bureaucratic processes, mitigate security risks, and enhance user trust in digital document management.
|
https://arxiv.org/abs/2601.13907
|
Academic Papers
|
svg
|
0c9e5f1e98c989c6a655c60604bb90452a98eda1e4e04537cb1ac1d57f6e5186
|
2026-01-21T00:00:00-05:00
|
Improving the local solution of the DG predictor of the ADER-DG method for solving systems of ordinary differential equations and its applicability to systems of differential-algebraic equations
|
arXiv:2601.13908v1 Announce Type: new Abstract: An improved local numerical solution for the ADER-DG numerical method with a local DG predictor for solving the initial value problem for a first-order ODE system is proposed. The improved local numerical solution demonstrates a convergence order one higher than that of the local numerical solution of the original ADER-DG numerical method and has the property of continuity at grid nodes. Rigorous proofs of the approximation orders of the local numerical solution and the improved local numerical solution are presented. Obtaining the proposed improved local numerical solution does not require significant changes to the structure of the ADER-DG numerical method. Therefore, all conclusions regarding the convergence orders of the numerical solution at grid nodes, the resulting superconvergence, and the high stability of the ADER-DG numerical method remain unchanged. A wide range of applications of the ADER-DG numerical method is presented for solving specific initial value problems for ODE systems over a wide range of polynomial degrees. The obtained results provide strong confirmation for the developed rigorous theory. The improved local numerical solution is shown to exhibit both higher accuracy and improved smoothness and point-wise comparability. Empirical convergence orders of all individual numerical solutions were calculated for a wide range of error norms and agree well with the expected convergence orders. A rigorous proof, based on the $\epsilon$-embedding method, of the applicability of the ADER-DG numerical method with a local DG predictor to solving DAE systems is presented.
|
https://arxiv.org/abs/2601.13908
|
Academic Papers
|
svg
|
57a898fdbb6e8c4b32c7535c33fd6bdeffc4c45f5e1e69de7ed7c56b21a088fd
|
2026-01-21T00:00:00-05:00
|
On the Role of Rotation Equivariance in Monocular 3D Human Pose Estimation
|
arXiv:2601.13913v1 Announce Type: new Abstract: Estimating 3D from 2D is one of the central tasks in computer vision. In this work, we consider the monocular setting, i.e. single-view input, for 3D human pose estimation (HPE). Here, the task is to predict a 3D point set of human skeletal joints from a single 2D input image. While by definition this is an ill-posed problem, recent work has presented methods that solve it with up to several-centimetre error. Typically, these methods employ a two-step approach, where the first step is to detect the 2D skeletal joints in the input image, followed by the step of 2D-to-3D lifting. We find that common lifting models fail when encountering a rotated input. We argue that learning a single human pose along with its in-plane rotations is considerably easier and more geometrically grounded than directly learning a point-to-point mapping. Furthermore, our intuition is that endowing the model with the notion of rotation equivariance without explicitly constraining its parameter space should lead to a more straightforward learning process than one with equivariance by design. Utilising the common HPE benchmarks, we confirm that the 2D rotation equivariance per se improves the model performance on human poses akin to rotations in the image plane, and can be efficiently and straightforwardly learned by augmentation, outperforming state-of-the-art equivariant-by-design methods.
|
https://arxiv.org/abs/2601.13913
|
Academic Papers
|
svg
|
8064c6be9a25c424d33946901463a65efda6fdef1465d8dd01d18179bc8b4153
|
2026-01-21T00:00:00-05:00
|
AgentEHR: Advancing Autonomous Clinical Decision-Making via Retrospective Summarization
|
arXiv:2601.13918v1 Announce Type: new Abstract: Large Language Models have demonstrated profound utility in the medical domain. However, their application to autonomous Electronic Health Records (EHRs) navigation remains constrained by a reliance on curated inputs and simplified retrieval tasks. To bridge the gap between idealized experimental settings and realistic clinical environments, we present AgentEHR. This benchmark challenges agents to execute complex decision-making tasks, such as diagnosis and treatment planning, requiring long-range interactive reasoning directly within raw and high-noise databases. In tackling these tasks, we identify that existing summarization methods inevitably suffer from critical information loss and fractured reasoning continuity. To address this, we propose RetroSum, a novel framework that unifies a retrospective summarization mechanism with an evolving experience strategy. By dynamically re-evaluating interaction history, the retrospective mechanism prevents long-context information loss and ensures unbroken logical coherence. Additionally, the evolving strategy bridges the domain gap by retrieving accumulated experience from a memory bank. Extensive empirical evaluations demonstrate that RetroSum achieves performance gains of up to 29.16% over competitive baselines, while significantly decreasing total interaction errors by up to 92.3%.
|
https://arxiv.org/abs/2601.13918
|
Academic Papers
|
svg
|
2de761a41f675e4c4f66d7c04ada9349099440ef2e3b09c9436d5dbb8aab39a7
|
2026-01-21T00:00:00-05:00
|
HyperWalker: Dynamic Hypergraph-Based Deep Diagnosis for Multi-Hop Clinical Modeling across EHR and X-Ray in Medical VLMs
|
arXiv:2601.13919v1 Announce Type: new Abstract: Automated clinical diagnosis remains a core challenge in medical AI, which usually requires models to integrate multi-modal data and reason across complex, case-specific contexts. Although recent methods have advanced medical report generation (MRG) and visual question answering (VQA) with medical vision-language models (VLMs), they predominantly operate under a sample-isolated inference paradigm, processing cases independently without access to longitudinal electronic health records (EHRs) or structurally related patient examples. This paradigm limits reasoning to image-derived information alone, which ignores external complementary medical evidence for potentially more accurate diagnosis. To overcome this limitation, we propose HyperWalker, a Deep Diagnosis framework that reformulates clinical reasoning via dynamic hypergraphs and test-time training. First, we construct a dynamic hypergraph, termed iBrochure, to model the structural heterogeneity of EHR data and implicit high-order associations among multimodal clinical information. Within this hypergraph, a reinforcement learning agent, Walker, navigates to and identifies optimal diagnostic paths. To ensure comprehensive coverage of diverse clinical characteristics in test samples, we incorporate a linger mechanism, a multi-hop orthogonal retrieval strategy that iteratively selects clinically complementary neighborhood cases reflecting distinct clinical attributes. Experiments on MRG with MIMIC and medical VQA on EHRXQA demonstrate that HyperWalker achieves state-of-the-art performance. Code is available at: https://github.com/Bean-Young/HyperWalker
|
https://arxiv.org/abs/2601.13919
|
Academic Papers
|
svg
|
a58d7bec09e84027881150e12cfb4161dbacc1e67c8b0964dac5de525f8010a3
|
2026-01-21T00:00:00-05:00
|
Asymmetric regularization mechanism for GAN training with Variational Inequalities
|
arXiv:2601.13920v1 Announce Type: new Abstract: We formulate the training of generative adversarial networks (GANs) as a Nash equilibrium seeking problem. To stabilize the training process and find a Nash equilibrium, we propose an asymmetric regularization mechanism based on the classic Tikhonov step and on a novel zero-centered gradient penalty. Under smoothness and a local identifiability condition induced by a Gauss-Newton Gramian, we obtain explicit Lipschitz and (strong)-monotonicity constants for the regularized operator. These constants ensure last-iterate linear convergence of a single-call Extrapolation-from-the-Past (EFTP) method. Empirical simulations on an academic example show that, even when strong monotonicity cannot be achieved, the asymmetric regularization is enough to converge to an equilibrium and stabilize the trajectory.
|
https://arxiv.org/abs/2601.13920
|
Academic Papers
|
svg
|
23419ceea0694064421282cd71b8520e61fa89fffd62e9bee942859af6605415
|
2026-01-21T00:00:00-05:00
|
Automatic Prompt Optimization for Dataset-Level Feature Discovery
|
arXiv:2601.13922v1 Announce Type: new Abstract: Feature extraction from unstructured text is a critical step in many downstream classification pipelines, yet current approaches largely rely on hand-crafted prompts or fixed feature schemas. We formulate feature discovery as a dataset-level prompt optimization problem: given a labelled text corpus, the goal is to induce a global set of interpretable and discriminative feature definitions whose realizations optimize a downstream supervised learning objective. To this end, we propose a multi-agent prompt optimization framework in which language-model agents jointly propose feature definitions, extract feature values, and evaluate feature quality using dataset-level performance and interpretability feedback. Instruction prompts are iteratively refined based on this structured feedback, enabling optimization over prompts that induce shared feature sets rather than per-example predictions. This formulation departs from prior prompt optimization methods that rely on per-sample supervision and provides a principled mechanism for automatic feature discovery from unstructured text.
|
https://arxiv.org/abs/2601.13922
|
Academic Papers
|
svg
|
535a9890edb46000e55099f9c651854786eb03fc7a46b3a285ad633b382f9947
|
2026-01-21T00:00:00-05:00
|
Proactive Coded Caching Scheme for D2D Networks
|
arXiv:2601.13929v1 Announce Type: new Abstract: Coded caching and device-to-device (D2D) communication are two effective techniques for alleviating network traffic. Secure transmission and file privacy have also become critical concerns in these domains. However, prevailing coded caching schemes typically assume that a user's cached content is inaccessible to others, overlooking the risk of file privacy leakage due to attacks targeting the cache itself. In this paper, we propose a secure coded caching scheme for D2D networks that guarantees both file privacy and secure delivery. We demonstrate that the proposed scheme achieves order-optimal performance when the file size is sufficiently large and the cache memory is ample.
|
https://arxiv.org/abs/2601.13929
|
Academic Papers
|
svg
|
835a35815cc3aabf738cce3111d88f8dad7a330e08c556d1d96a58f436cc72af
|
2026-01-21T00:00:00-05:00
|
Towards Effective Negation Modeling in Joint Audio-Text Models for Music
|
arXiv:2601.13931v1 Announce Type: new Abstract: Joint audio-text models are widely used for music retrieval, yet they struggle with semantic phenomena such as negation. Negation is fundamental for distinguishing the absence (or presence) of musical elements (e.g., "with vocals" vs. "without vocals"), but current systems fail to represent this reliably. In this work, we investigate and mitigate this limitation by training CLAP models from scratch on the Million Song Dataset with LP-MusicCaps-MSD captions. We introduce negation through text augmentation and a dissimilarity-based contrastive loss, designed to explicitly separate original and negated captions in the joint embedding space. To evaluate progress, we propose two protocols that frame negation modeling as retrieval and binary classification tasks. Experiments demonstrate that both methods, individually and combined, improve negation handling while largely preserving retrieval performance.
|
https://arxiv.org/abs/2601.13931
|
Academic Papers
|
svg
|
f7ded84b9a9198578fa95e137a680696cdf778daf824ae53d0c9b0dde5e0c47d
|
2026-01-21T00:00:00-05:00
|
VulnResolver: A Hybrid Agent Framework for LLM-Based Automated Vulnerability Issue Resolution
|
arXiv:2601.13933v1 Announce Type: new Abstract: As software systems grow in complexity, security vulnerabilities have become increasingly prevalent, posing serious risks and economic costs. Although automated detection tools such as fuzzers have advanced considerably, effective resolution still often depends on human expertise. Existing automated vulnerability repair (AVR) methods rely heavily on manually provided annotations (e.g., fault locations or CWE labels), which are often difficult and time-consuming to obtain, while overlooking the rich, naturally embedded semantic context found in issue reports from developers. In this paper, we present VulnResolver, the first LLM-based hybrid agent framework for automated vulnerability issue resolution. VulnResolver unites the adaptability of autonomous agents with the stability of workflow-guided repair through two specialized agents. The Context Pre-Collection Agent (CPCAgent) adaptively explores the repository to gather dependency and contextual information, while the Safety Property Analysis Agent (SPAAgent) generates and validates the safety properties violated by vulnerabilities. Together, these agents produce structured analyses that enrich the original issue reports, enabling more accurate vulnerability localization and patch generation. Evaluations on the SEC-bench benchmark show that VulnResolver resolves 75% of issues on SEC-bench Lite, achieving the best resolution performance. On SEC-bench Full, VulnResolver also significantly outperforms the strongest baseline, the agent-based OpenHands, confirming its effectiveness. Overall, VulnResolver delivers an adaptive and security-aware framework that advances end-to-end automated vulnerability issue resolution through workflow stability and the specialized agents' capabilities in contextual reasoning and property-based analysis.
|
https://arxiv.org/abs/2601.13933
|
Academic Papers
|
svg
|
726f98455998e6673405e2f375ee54ff6402aa84aaf04ad480962ac818d0f97d
|
2026-01-21T00:00:00-05:00
|
TrackletGPT: A Language-like GPT Framework for White Matter Tract Segmentation
|
arXiv:2601.13935v1 Announce Type: new Abstract: White Matter Tract Segmentation is imperative for studying brain structural connectivity, neurological disorders and neurosurgery. This task remains complex, as tracts differ among themselves, across subjects and conditions, yet have similar 3D structure across hemispheres and subjects. To address these challenges, we propose TrackletGPT, a language-like GPT framework which reintroduces sequential information in tokens using tracklets. TrackletGPT generalises seamlessly across datasets, is fully automatic, and encodes granular sub-streamline segments, Tracklets, scaling and refining GPT models in Tractography Segmentation. Based on our experiments, TrackletGPT outperforms state-of-the-art methods on average DICE, Overlap and Overreach scores on TractoInferno and HCP datasets, even on inter-dataset experiments.
|
https://arxiv.org/abs/2601.13935
|
Academic Papers
|
svg
|
88870d7f08e9ace1e8b76c79ac7fc6fde99fcd55ac53ec426f0a23f79954b0b2
|
2026-01-21T00:00:00-05:00
|
Impact Matters! An Audit Method to Evaluate AI Projects and their Impact for Sustainability and Public Interest
|
arXiv:2601.13936v1 Announce Type: new Abstract: The rapid overall increase in artificial intelligence (AI) use is linked to various initiatives that propose AI 'for good'. However, the goals of such projects lack transparency, and their actual impacts on society and the planet go unevaluated. We close this gap by proposing public interest and sustainability as a dual regulatory concept that together creates the necessary framework for just and sustainable development and can be operationalized and utilized for the assessment of AI systems. Based on this framework, and building on existing work in auditing, we introduce the Impact-AI-method, a qualitative audit method to evaluate concrete AI projects with respect to public interest and sustainability. The interview-based method captures a project's governance structure, its theory of change, AI model and data characteristics, and social, environmental, and economic impacts. We also propose a catalog of assessment criteria to rate the outcome of the audit as well as to create an accessible output that can be debated broadly by civil society. The Impact-AI-method, developed in a transdisciplinary research setting together with NGOs and a multi-stakeholder research council, is intended as a reusable blueprint that both informs public debate about AI 'for good' claims and supports the creation of transparency of AI systems that purport to contribute to a just and sustainable development.
|
https://arxiv.org/abs/2601.13936
|
Academic Papers
|
svg
|
87af0868124cb7408eb108eab5377160333440e8828cd28c870c80c9fdbe77e4
|
2026-01-21T00:00:00-05:00
|
IF-GEO: Conflict-Aware Instruction Fusion for Multi-Query Generative Engine Optimization
|
arXiv:2601.13938v1 Announce Type: new Abstract: As Generative Engines revolutionize information retrieval by synthesizing direct answers from retrieved sources, ensuring source visibility becomes a significant challenge. Improving it through targeted content revisions is a practical strategy termed Generative Engine Optimization (GEO). However, optimizing a document for diverse queries presents a constrained optimization challenge where heterogeneous queries often impose conflicting and competing revision requirements under a limited content budget. To address this challenge, we propose IF-GEO, a "diverge-then-converge" framework comprising two phases: (i) mining distinct optimization preferences from representative latent queries; (ii) synthesizing a Global Revision Blueprint for guided editing by coordinating preferences via conflict-aware instruction fusion. To explicitly quantify IF-GEO's objective of cross-query stability, we introduce risk-aware stability metrics. Experiments on multi-query benchmarks demonstrate that IF-GEO achieves substantial performance gains while maintaining robustness across diverse retrieval scenarios.
|
https://arxiv.org/abs/2601.13938
|
Academic Papers
|
svg
|
9572f04ed7411e4fad77f9a6b8b6c0194e673a91b1f9155385805b7cee6f04ee
|
2026-01-21T00:00:00-05:00
|
Glance-or-Gaze: Incentivizing LMMs to Adaptively Focus Search via Reinforcement Learning
|
arXiv:2601.13942v1 Announce Type: new Abstract: Large Multimodal Models (LMMs) have achieved remarkable success in visual understanding, yet they struggle with knowledge-intensive queries involving long-tail entities or evolving information due to static parametric knowledge. Recent search-augmented approaches attempt to address this limitation, but existing methods rely on indiscriminate whole-image retrieval that introduces substantial visual redundancy and noise, and lack deep iterative reflection, limiting their effectiveness on complex visual queries. To overcome these challenges, we propose Glance-or-Gaze (GoG), a fully autonomous framework that shifts from passive perception to active visual planning. GoG introduces a Selective Gaze mechanism that dynamically chooses whether to glance at global context or gaze into high-value regions, filtering irrelevant information before retrieval. We design a dual-stage training strategy: Reflective GoG Behavior Alignment via supervised fine-tuning instills the fundamental GoG paradigm, while Complexity-Adaptive Reinforcement Learning further enhances the model's capability to handle complex queries through iterative reasoning. Experiments across six benchmarks demonstrate state-of-the-art performance. Ablation studies confirm that both Selective Gaze and complexity-adaptive RL are essential for effective visual search. We will release our data and models for further exploration soon.
|
https://arxiv.org/abs/2601.13942
|
Academic Papers
|
svg
|
8b9e99ac73b476009e61dff08effef0ce05f2761e31f855b79771dc5ec57441a
|
2026-01-21T00:00:00-05:00
|
RepoGenesis: Benchmarking End-to-End Microservice Generation from Readme to Repository
|
arXiv:2601.13943v1 Announce Type: new Abstract: Large language models and agents have achieved remarkable progress in code generation. However, existing benchmarks focus on isolated function/class-level generation (e.g., ClassEval) or modifications to existing codebases (e.g., SWE-Bench), neglecting complete microservice repository generation that reflects real-world 0-to-1 development workflows. To bridge this gap, we introduce RepoGenesis, the first multilingual benchmark for repository-level end-to-end web microservice generation, comprising 106 repositories (60 Python, 46 Java) across 18 domains and 11 frameworks, with 1,258 API endpoints and 2,335 test cases verified through a "review-rebuttal" quality assurance process. We evaluate open-source agents (e.g., DeepCode) and commercial IDEs (e.g., Cursor) using Pass@1, API Coverage (AC), and Deployment Success Rate (DSR). Results reveal that despite high AC (up to 73.91%) and DSR (up to 100%), the best-performing system achieves only 23.67% Pass@1 on Python and 21.45% on Java, exposing deficiencies in architectural coherence, dependency management, and cross-file consistency. Notably, GenesisAgent-8B, fine-tuned on RepoGenesis (train), achieves performance comparable to GPT-5 mini, demonstrating the quality of RepoGenesis for advancing microservice generation. We release our benchmark at https://github.com/pzy2000/RepoGenesis.
|
https://arxiv.org/abs/2601.13943
|
Academic Papers
|
svg
|
d36ad546e1fbce4b348957354222d9a75fdff50d18dd40ded880b0f48636cd97
|
2026-01-21T00:00:00-05:00
|
Efficient Coordination with the System-Level Shared State: An Embodied-AI Native Modular Framework
|
arXiv:2601.13945v1 Announce Type: new Abstract: As Embodied AI systems move from research prototypes to real-world deployments, they must evolve rapidly while remaining reliable under workload changes and partial failures. In practice, many deployments are only partially decoupled: middleware moves messages, but shared context and feedback semantics are implicit, causing interface drift, cross-module interference, and brittle recovery at scale. We present ANCHOR, a modular framework that makes decoupling and robustness explicit system-level primitives. ANCHOR separates (i) Canonical Records, an evolvable contract for the standardized shared state, from (ii) a communication bus for many-to-many dissemination and feedback-oriented coordination, forming an inspectable end-to-end loop. We validate closed-loop feasibility on a de-identified workflow instantiation, characterize latency distributions under varying payload sizes and publish rates, and demonstrate automatic stream resumption after hard crashes and restarts even with shared-memory loss. Overall, ANCHOR turns ad-hoc integration glue into explicit contracts, enabling controlled degradation under load and self-healing recovery for scalable deployment of closed-loop AI systems.
|
https://arxiv.org/abs/2601.13945
|
Academic Papers
|
svg
|
28c2c987e31d7d0639ed02f4438329002fddbf82988b58b1520120048af4ae67
|
2026-01-21T00:00:00-05:00
|
VTONGuard: Automatic Detection and Authentication of AI-Generated Virtual Try-On Content
|
arXiv:2601.13951v1 Announce Type: new Abstract: With the rapid advancement of generative AI, virtual try-on (VTON) systems are becoming increasingly common in e-commerce and digital entertainment. However, the growing realism of AI-generated try-on content raises pressing concerns about authenticity and responsible use. To address this, we present VTONGuard, a large-scale benchmark dataset containing over 775,000 real and synthetic try-on images. The dataset covers diverse real-world conditions, including variations in pose, background, and garment styles, and provides both authentic and manipulated examples. Based on this benchmark, we conduct a systematic evaluation of multiple detection paradigms under unified training and testing protocols. Our results reveal each method's strengths and weaknesses and highlight the persistent challenge of cross-paradigm generalization. To further advance detection, we design a multi-task framework that integrates auxiliary segmentation to enhance boundary-aware feature learning, achieving the best overall performance on VTONGuard. We expect this benchmark to enable fair comparisons, facilitate the development of more robust detection models, and promote the safe and responsible deployment of VTON technologies in practice.
|
https://arxiv.org/abs/2601.13951
|
Academic Papers
|
svg
|
fc367803b85749910cc49eb226d7c3834f362cce5bc783b41e62a97c0953401a
|
2026-01-21T00:00:00-05:00
|
Differentiable Logic Synthesis: Spectral Coefficient Selection via Sinkhorn-Constrained Composition
|
arXiv:2601.13953v1 Announce Type: new Abstract: Learning precise Boolean logic via gradient descent remains challenging: neural networks typically converge to "fuzzy" approximations that degrade under quantization. We introduce Hierarchical Spectral Composition, a differentiable architecture that selects spectral coefficients from a frozen Boolean Fourier basis and composes them via Sinkhorn-constrained routing with column-sign modulation. Our approach draws on recent insights from Manifold-Constrained Hyper-Connections (mHC), which demonstrated that projecting routing matrices onto the Birkhoff polytope preserves identity mappings and stabilizes large-scale training. We adapt this framework to logic synthesis, adding column-sign modulation to enable Boolean negation -- a capability absent in standard doubly stochastic routing. We validate our approach across four phases of increasing complexity: (1) For n=2 (16 Boolean operations over 4-dim basis), gradient descent achieves 100% accuracy with zero routing drift and zero-loss quantization to ternary masks. (2) For n=3 (10 three-variable operations), gradient descent achieves 76% accuracy, but exhaustive enumeration over 3^8 = 6561 configurations proves that optimal ternary masks exist for all operations (100% accuracy, 39% sparsity). (3) For n=4 (10 four-variable operations over 16-dim basis), spectral synthesis -- combining exact Walsh-Hadamard coefficients, ternary quantization, and MCMC refinement with parallel tempering -- achieves 100% accuracy on all operations. This progression establishes (a) that ternary polynomial threshold representations exist for all tested functions, and (b) that finding them requires methods beyond pure gradient descent as dimensionality grows. All operations enable single-cycle combinational logic inference at 10,959 MOps/s on GPU, demonstrating viability for hardware-efficient neuro-symbolic logic synthesis.
|
https://arxiv.org/abs/2601.13953
|
Academic Papers
|
svg
|
f7749f17d336ad541b8f435086b8aeb679b915e606031daa26b65efe8c06c1f7
|
2026-01-21T00:00:00-05:00
|
DExTeR: Weakly Semi-Supervised Object Detection with Class and Instance Experts for Medical Imaging
|
arXiv:2601.13954v1 Announce Type: new Abstract: Detecting anatomical landmarks in medical imaging is essential for diagnosis and intervention guidance. However, object detection models rely on costly bounding box annotations, limiting scalability. Weakly Semi-Supervised Object Detection (WSSOD) with point annotations proposes annotating each instance with a single point, minimizing annotation time while preserving localization signals. A Point-to-Box teacher model, trained on a small box-labeled subset, converts these point annotations into pseudo-box labels to train a student detector. Yet, medical imagery presents unique challenges, including overlapping anatomy, variable object sizes, and elusive structures, which hinder accurate bounding box inference. To overcome these challenges, we introduce DExTeR (DETR with Experts), a transformer-based Point-to-Box regressor tailored for medical imaging. Built upon Point-DETR, DExTeR encodes single-point annotations as object queries, refining feature extraction with the proposed class-guided deformable attention, which guides attention sampling using point coordinates and class labels to capture class-specific characteristics. To improve discrimination in complex structures, it introduces CLICK-MoE (CLass, Instance, and Common Knowledge Mixture of Experts), decoupling class and instance representations to reduce confusion among adjacent or overlapping instances. Finally, we implement a multi-point training strategy which promotes prediction consistency across different point placements, improving robustness to annotation variability. DExTeR achieves state-of-the-art performance across three datasets spanning different medical domains (endoscopy, chest X-rays, and endoscopic ultrasound) highlighting its potential to reduce annotation costs while maintaining high detection accuracy.
|
https://arxiv.org/abs/2601.13954
|
Academic Papers
|
svg
|
2234ddab9c70dbb6169fdf6d87878d5cbeedb3cce7312c686bf91d5bf3d96199
|
2026-01-21T00:00:00-05:00
|
Where to Place a Heavy Payload on a Multirotor UAV for Best Control Performance
|
arXiv:2601.13958v1 Announce Type: new Abstract: This paper studies the impact of rigidly attached heavy payload placement - where the payload mass significantly influences the UAV's dynamics - on the stability and control performance of a multirotor unmanned aerial vehicle (UAV). In particular, we focus on how the position of such a payload relative to the vehicle's Center of Gravity (CoG) affects the stability and control performance at an arbitrary point of interest on the UAV, such as the payload position, and on how this position can be optimized. Our conclusions are based on two key contributions. First, we analyze the stability of the zero-dynamics of a complete nonlinear model of the UAV with payload. We demonstrate that the stability of the zero dynamics depends on the vertical signed distance in the body-fixed frame between the controlled output position and the combined CoG of the UAV with payload. Specifically, positioning the output below the CoG yields unstable zero dynamics, while the linearized zero dynamics are marginally stable when placing it above, indicating reduced sensitivity to input disturbances. Second, we analyze the performance of the linearized UAV model with payload by providing an analytical expression for the H2-norm, from which we can quantify the system's attenuation to white noise input disturbances. We conclude that less control authority leads to a higher optimal position of the controlled output with respect to the CoG for closed-loop white-noise disturbance rejection capabilities, also when the heavy payload is the controlled output. The results are illustrated through numerical examples.
|
https://arxiv.org/abs/2601.13958
|
Academic Papers
|
svg
|
d89c8e9a51bbe464256adeea2a4476c8b9bdddbb56a49436b69f180d1135f55e
|
2026-01-21T00:00:00-05:00
|
RL-BioAug: Label-Efficient Reinforcement Learning for Self-Supervised EEG Representation Learning
|
arXiv:2601.13964v1 Announce Type: new Abstract: The quality of data augmentation serves as a critical determinant for the performance of contrastive learning in EEG tasks. Although this paradigm is promising for utilizing unlabeled data, static or random augmentation strategies often fail to preserve intrinsic information due to the non-stationarity of EEG signals where statistical properties change over time. To address this, we propose RL-BioAug, a framework that leverages a label-efficient reinforcement learning (RL) agent to autonomously determine optimal augmentation policies. While utilizing only a minimal fraction (10%) of labeled data to guide the agent's policy, our method enables the encoder to learn robust representations in a strictly self-supervised manner. Experimental results demonstrate that RL-BioAug significantly outperforms the random selection strategy, achieving substantial improvements of 9.69% and 8.80% in Macro-F1 score on the Sleep-EDFX and CHB-MIT datasets, respectively. Notably, this agent mainly chose optimal strategies for each task -- for example, Time Masking with a 62% probability for sleep stage classification and Crop & Resize with a 77% probability for seizure detection. Our framework suggests its potential to replace conventional heuristic-based augmentations and establish a new autonomous paradigm for data augmentation. The source code is available at https://github.com/dlcjfgmlnasa/RL-BioAug.
|
https://arxiv.org/abs/2601.13964
|
Academic Papers
|
svg
|
85ac8cc26ac670bb88d5691a89c6af6bb7b75849f4f2cdae3561c327cb09de13
|
2026-01-21T00:00:00-05:00
|
Autonomous Knowledge Graph Exploration with Adaptive Breadth-Depth Retrieval
|
arXiv:2601.13969v1 Announce Type: new Abstract: Retrieving evidence for language model queries from knowledge graphs requires balancing broad search across the graph with multi-hop traversal to follow relational links. Similarity-based retrievers provide coverage but remain shallow, whereas traversal-based methods rely on selecting seed nodes to start exploration, which can fail when queries span multiple entities and relations. We introduce ARK: Adaptive Retriever of Knowledge, an agentic KG retriever that gives a language model control over this breadth-depth tradeoff using a two-operation toolset: global lexical search over node descriptors and one-hop neighborhood exploration that composes into multi-hop traversal. ARK alternates between breadth-oriented discovery and depth-oriented expansion without depending on a fragile seed selection, a pre-set hop depth, or requiring retrieval training. ARK adapts tool use to queries, using global search for language-heavy queries and neighborhood exploration for relation-heavy queries. On STaRK, ARK reaches 59.1% average Hit@1 and 67.4 average MRR, improving average Hit@1 by up to 31.4% and average MRR by up to 28.0% over retrieval-based and agentic training-free methods. Finally, we distill ARK's tool-use trajectories from a large teacher into an 8B model via label-free imitation, improving Hit@1 by +7.0, +26.6, and +13.5 absolute points over the base 8B model on AMAZON, MAG, and PRIME datasets, respectively, while retaining up to 98.5% of the teacher's Hit@1 rate.
|
https://arxiv.org/abs/2601.13969
|
Academic Papers
|
svg
|
ea38ae3a2846e0e47b61e70d538f861a4271c1730a228a447465ee94d8675d4d
|
2026-01-21T00:00:00-05:00
|
The Transparency Paradox in Explainable AI: A Theory of Autonomy Depletion Through Cognitive Load
|
arXiv:2601.13973v1 Announce Type: new Abstract: Objective: This paper develops a theoretical framework explaining when and why AI explanations enhance versus impair human decision-making. Background: Transparency is advocated as universally beneficial for human-AI interaction, yet identical AI explanations improve decision quality in some contexts but impair it in others. Current theories--trust calibration, cognitive load, and self-determination--cannot fully account for this paradox. Method: The framework models autonomy as a continuous stochastic process influenced by information-induced cognitive load. Using stochastic control theory, autonomy evolution is formalized as geometric Brownian motion with information-dependent drift, and optimal transparency is derived via Hamilton-Jacobi-Bellman equations. Monte Carlo simulations validate theoretical predictions. Results: Mathematical analysis generates five testable predictions about disengagement timing, working memory moderation, autonomy trajectory shapes, and optimal information levels. Computational solutions demonstrate that dynamic transparency policies outperform both maximum and minimum transparency by adapting to real-time cognitive state. The optimal policy exhibits threshold structure: provide information when autonomy is high and accumulated load is low; withhold when resources are depleted. Conclusion: Transparency effects depend on dynamic cognitive resource depletion rather than static design choices. Information provision triggers metacognitive processing that reduces perceived control when cognitive load exceeds working memory capacity. Application: The framework provides design principles for adaptive AI systems: adjust transparency based on real-time cognitive state, implement information budgets respecting capacity limits, and personalize thresholds based on individual working memory capacity.
|
https://arxiv.org/abs/2601.13973
|
Academic Papers
|
svg
|
da1bfc60e6df30a714799ea0ca8897876c81314c93079b95a69c58b48391ebc0
|
2026-01-21T00:00:00-05:00
|
STEC: A Reference-Free Spatio-Temporal Entropy Coverage Metric for Evaluating Sampled Video Frames
|
arXiv:2601.13974v1 Announce Type: new Abstract: Frame sampling is a fundamental component in video understanding and video-language model pipelines, yet evaluating the quality of sampled frames remains challenging. Existing evaluation metrics primarily focus on perceptual quality or reconstruction fidelity, and are not designed to assess whether a set of sampled frames adequately captures informative and representative video content. We propose Spatio-Temporal Entropy Coverage (STEC), a simple, reference-free metric for evaluating the effectiveness of video frame sampling. STEC builds upon Spatio-Temporal Frame Entropy (STFE), which measures per-frame spatial information via entropy-based structural complexity, and evaluates sampled frames based on their temporal coverage and redundancy. By jointly modeling spatial information strength, temporal dispersion, and non-redundancy, STEC provides a principled and lightweight measure of sampling quality. Experiments on the MSR-VTT test-1k benchmark demonstrate that STEC clearly differentiates common sampling strategies, including random, uniform, and content-aware methods. We further show that STEC reveals robustness patterns across individual videos that are not captured by average performance alone, highlighting its practical value as a general-purpose evaluation tool for efficient video understanding. We emphasize that STEC is not designed to predict downstream task accuracy, but to provide a task-agnostic diagnostic signal for analyzing frame sampling behavior under constrained budgets.
|
https://arxiv.org/abs/2601.13974
|
Academic Papers
|
svg
|
57d1eb0f7a51ab7c6d6dfcc5bcf5695cb7bd37933c5234ccf1a81d2bdb197068
|
2026-01-21T00:00:00-05:00
|
Harmonizing the Deep: A Unified Information Pipeline for Robust Marine Biodiversity Assessment Across Heterogeneous Domains
|
arXiv:2601.13975v1 Announce Type: new Abstract: Marine biodiversity monitoring requires scalability and reliability across complex underwater environments to support conservation and invasive-species management. Yet existing detection solutions often exhibit a pronounced deployment gap, with performance degrading sharply when transferred to new sites. This work establishes the foundational detection layer for a multi-year invasive species monitoring initiative targeting Arctic and Atlantic marine ecosystems. We address this challenge by developing a Unified Information Pipeline that standardises heterogeneous datasets into a comparable information flow and evaluates a fixed, deployment-relevant detector under controlled cross-domain protocols. Across multiple domains, we find that structural factors, such as scene composition, object density, and contextual redundancy, explain cross-domain performance loss more strongly than visual degradation such as turbidity, with sparse scenes inducing a characteristic "Context Collapse" failure mode. We further validate operational feasibility by benchmarking inference on low-cost edge hardware, showing that runtime optimisation enables practical sampling rates for remote monitoring. The results shift emphasis from image enhancement toward structure-aware reliability, providing a democratised tool for consistent marine ecosystem assessment.
|
https://arxiv.org/abs/2601.13975
|
Academic Papers
|
svg
|
76a4a650dff250a1d9f6197cb85479137bab175ed23734d0ba71383f366c5ee1
|
2026-01-21T00:00:00-05:00
|
FantasyVLN: Unified Multimodal Chain-of-Thought Reasoning for Vision-Language Navigation
|
arXiv:2601.13976v1 Announce Type: new Abstract: Achieving human-level performance in Vision-and-Language Navigation (VLN) requires an embodied agent to jointly understand multimodal instructions and visual-spatial context while reasoning over long action sequences. Recent works, such as NavCoT and NavGPT-2, demonstrate the potential of Chain-of-Thought (CoT) reasoning for improving interpretability and long-horizon planning. Moreover, multimodal extensions like OctoNav-R1 and CoT-VLA further validate CoT as a promising pathway toward human-like navigation reasoning. However, existing approaches face critical drawbacks: purely textual CoTs lack spatial grounding and easily overfit to sparse annotated reasoning steps, while multimodal CoTs incur severe token inflation by generating imagined visual observations, making real-time navigation impractical. In this work, we propose FantasyVLN, a unified implicit reasoning framework that preserves the benefits of CoT reasoning without explicit token overhead. Specifically, imagined visual tokens are encoded into a compact latent space using a pretrained Visual AutoRegressor (VAR) during CoT reasoning training, and the model jointly learns from textual, visual, and multimodal CoT modes under a unified multi-CoT strategy. At inference, our model performs direct instruction-to-action mapping while still enjoying reasoning-aware representations. Extensive experiments on LH-VLN show that our approach achieves reasoning-aware yet real-time navigation, improving success rates and efficiency while reducing inference latency by an order of magnitude compared to explicit CoT methods.
|
https://arxiv.org/abs/2601.13976
|
Academic Papers
|
svg
|
9843e901592f8cc8779f5d7314e93b5f32194bfce02cb70d4a5082a8b034ffdb
|
2026-01-21T00:00:00-05:00
|
Active Cross-Modal Visuo-Tactile Perception of Deformable Linear Objects
|
arXiv:2601.13979v1 Announce Type: new Abstract: This paper presents a novel cross-modal visuo-tactile perception framework for the 3D shape reconstruction of deformable linear objects (DLOs), with a specific focus on cables subject to severe visual occlusions. Unlike existing methods relying predominantly on vision, whose performance degrades under varying illumination, background clutter, or partial visibility, the proposed approach integrates foundation-model-based visual perception with adaptive tactile exploration. The visual pipeline exploits SAM for instance segmentation and Florence for semantic refinement, followed by skeletonization, endpoint detection, and point-cloud extraction. Occluded cable segments are autonomously identified and explored with a tactile sensor, which provides local point clouds that are merged with the visual data through Euclidean clustering and topology-preserving fusion. A B-spline interpolation driven by endpoint-guided point sorting yields a smooth and complete reconstruction of the cable shape. Experimental validation using a robotic manipulator equipped with an RGB-D camera and a tactile pad demonstrates that the proposed framework accurately reconstructs both simple and highly curved single or multiple cable configurations, even when large portions are occluded. These results highlight the potential of foundation-model-enhanced cross-modal perception for advancing robotic manipulation of deformable objects.
|
https://arxiv.org/abs/2601.13979
|
Academic Papers
|
svg
|
36fef0698cfece4b4d7a4b062c735bf3e88a1c0aeac655dfa6930c8f26d22e48
|
2026-01-21T00:00:00-05:00
|
VirtualCrime: Evaluating Criminal Potential of Large Language Models via Sandbox Simulation
|
arXiv:2601.13981v1 Announce Type: new Abstract: Large language models (LLMs) have shown strong capabilities in multi-step decision-making, planning and actions, and are increasingly integrated into various real-world applications. A pressing concern is whether their strong problem-solving abilities may be misused for crime. To address this gap, we propose VirtualCrime, a sandbox simulation framework based on a three-agent system to evaluate the criminal capabilities of models. Specifically, this framework consists of an attacker agent acting as the leader of a criminal team, a judge agent determining the outcome of each action, and a world manager agent updating the environment state and entities. Furthermore, we design 40 diverse crime tasks within this framework, covering 11 maps and 13 crime objectives such as theft, robbery, kidnapping, and riot. We also introduce a human player baseline for reference to better interpret the performance of LLM agents. We evaluate 8 strong LLMs and find that (1) all agents in the simulation environment compliantly generate detailed plans and execute intelligent crime processes, with some achieving relatively high success rates; (2) in some cases, agents take severe actions that inflict harm on NPCs to achieve their goals. Our work highlights the need for safety alignment when deploying agentic AI in real-world settings.
|
https://arxiv.org/abs/2601.13981
|
Academic Papers
|
svg
|
9ecc7c211c91c9ec6c4285526871f8668e1cf0bbef91787c1150fd2f5f9949bd
|
2026-01-21T00:00:00-05:00
|
Equivariant Learning for Unsupervised Image Dehazing
|
arXiv:2601.13986v1 Announce Type: new Abstract: Image Dehazing (ID) aims to produce a clear image from an observation contaminated by haze. Current ID methods typically rely on carefully crafted priors or extensive haze-free ground truth, both of which are expensive or impractical to acquire, particularly in the context of scientific imaging. We propose a new unsupervised learning framework called Equivariant Image Dehazing (EID) that exploits the symmetry of image signals to restore clarity to hazy observations. By enforcing haze consistency and systematic equivariance, EID can recover clear patterns directly from raw, hazy images. Additionally, we propose an adversarial learning strategy to model unknown haze physics and facilitate EID learning. Experiments on two scientific image dehazing benchmarks (including cell microscopy and medical endoscopy) and on natural image dehazing have demonstrated that EID significantly outperforms state-of-the-art approaches. By unifying equivariant learning with modelling haze physics, we hope that EID will enable more versatile and effective haze removal in scientific imaging. Code and datasets will be published.
|
https://arxiv.org/abs/2601.13986
|
Academic Papers
|
svg
|
9fbdb6fcce3ed676602e71632d15c94033a7f3bb2012e59fb65d7207f1af64a3
|
2026-01-21T00:00:00-05:00
|
A universal linearized subspace refinement framework for neural networks
|
arXiv:2601.13989v1 Announce Type: new Abstract: Neural networks are predominantly trained using gradient-based methods, yet in many applications their final predictions remain far from the accuracy attainable within the model's expressive capacity. We introduce Linearized Subspace Refinement (LSR), a general and architecture-agnostic framework that exploits the Jacobian-induced linear residual model at a fixed trained network state. By solving a reduced direct least-squares problem within this subspace, LSR computes a subspace-optimal solution of the linearized residual model, yielding a refined linear predictor with substantially improved accuracy over standard gradient-trained solutions, without modifying network architectures, loss formulations, or training procedures. Across supervised function approximation, data-driven operator learning, and physics-informed operator fine-tuning, we show that gradient-based training often fails to access this attainable accuracy, even when local linearization yields a convex problem. This observation indicates that loss-induced numerical ill-conditioning, rather than nonconvexity or model expressivity, can constitute a dominant practical bottleneck. In contrast, one-shot LSR systematically exposes accuracy levels not fully exploited by gradient-based training, frequently achieving order-of-magnitude error reductions. For operator-constrained problems with composite loss structures, we further introduce Iterative LSR, which alternates one-shot LSR with supervised nonlinear alignment, transforming ill-conditioned residual minimization into numerically benign fitting steps and yielding accelerated convergence and improved accuracy. By bridging nonlinear neural representations with reduced-order linear solvers at fixed linearization points, LSR provides a numerically grounded and broadly applicable refinement framework for supervised learning, operator learning, and scientific computing.
|
https://arxiv.org/abs/2601.13989
|
Academic Papers
|
svg
|
ed7485ee05829940a23aa3417dbcd66de22312e8b5e5339335bda32123728e85
|
2026-01-21T00:00:00-05:00
|
Generating Functions Meet Occupation Measures: Invariant Synthesis for Probabilistic Loops (Extended Version)
|
arXiv:2601.13991v1 Announce Type: new Abstract: A fundamental computational task in probabilistic programming is to infer a program's output (posterior) distribution from a given initial (prior) distribution. This problem is challenging, especially for expressive languages that feature loops or unbounded recursion. While most of the existing literature focuses on statistical approximation, in this paper we address the problem of mathematically exact inference. To achieve this for programs with loops, we rely on a relatively underexplored type of probabilistic loop invariant, which is linked to a loop's so-called occupation measure. The occupation measure associates program states with their expected number of visits, given the initial distribution. Based on this, we derive the notion of an occupation invariant. Such invariants are essentially dual to probabilistic martingales, the predominant technique for formal probabilistic loop analysis in the literature. A key feature of occupation invariants is that they can take the initial distribution into account and often yield a proof of positive almost sure termination as a by-product. Finally, we present an automatic, template-based invariant synthesis approach for occupation invariants by encoding them as generating functions. The approach is implemented and evaluated on a set of benchmarks.
|
https://arxiv.org/abs/2601.13991
|
Academic Papers
|
svg
|
4db5e26be3dc4c2bb65324eaf0f259eb13e7ce886934690b60701c654248aee2
|
2026-01-21T00:00:00-05:00
|
"The Whole Is Greater Than the Sum of Its Parts": A Compatibility-Aware Multi-Teacher CoT Distillation Framework
|
arXiv:2601.13992v1 Announce Type: new Abstract: Chain-of-Thought (CoT) reasoning empowers Large Language Models (LLMs) with remarkable capabilities but typically requires prohibitive parameter scales. CoT distillation has emerged as a promising paradigm to transfer reasoning prowess into compact Student Models (SLMs), but existing approaches often rely on a solitary teacher, capping the student's potential since individual LLMs often exhibit distinct capability biases and may suffer from catastrophic forgetting. While leveraging diverse teachers seems appealing, effectively fusing their supervisions remains challenging: teacher-student incompatibility risks amplifying hallucinations, and passive supervision fails to ensure genuine logic internalization. To address this, we introduce COMPACT, a framework that adaptively fuses supervisions from different teachers by dynamically weighting teacher gradients based on the student's real-time compatibility evaluated by a multi-dimensional metric: (1) Graph-based Consensus to filter misleading rationales by identifying mainstream reasoning paths; (2) Mutual-Information-based Adaptability to detect "epiphany moments" for genuinely understanding the reasoning process rather than merely imitating; and (3) Loss-based Difficulty to assess student receptivity to the teacher's guidance and prevent negative transfer. Extensive experiments and latent space analysis demonstrate that COMPACT effectively integrates diverse reasoning capabilities without damaging the model's original knowledge structure, achieving state-of-the-art performance on various benchmarks while mitigating catastrophic forgetting.
|
https://arxiv.org/abs/2601.13992
|
Academic Papers
|
svg
|
0c73e90365cf642d4da67edd7ef64f7ca858dee5ce3236548e684599c6e1aec8
|
2026-01-21T00:00:00-05:00
|
Capacity and Energy Trade-Offs in FR3 6G Networks Using Real Deployment Data
|
arXiv:2601.13993v1 Announce Type: new Abstract: This article presents a data-driven system-level analysis of multi-layer 6G networks operating in the upper mid-band (FR3: 7-24 GHz). Unlike most prior studies based on 3rd Generation Partnership Project (3GPP) templates, we leverage real-world deployment and traffic data from a commercial 4G/5G network in China to evaluate practical 6G strategies. Using Giulia, a deployment-informed system-level heterogeneous network model, we show that 6G can boost median throughput by up to 9.5x over heterogeneous 4G+5G deployments, but also increases power usage by up to 59%. Critically, co-locating 6G with existing sites delivers limited gains while incurring high energy cost. In contrast, non-co-located, traffic-aware deployments achieve superior throughput-to-watt efficiency, highlighting the need for strategic, user equipment (UE) hotspot-focused 6G planning.
|
https://arxiv.org/abs/2601.13993
|
Academic Papers
|
svg
|
1cebe6f3e7b6da36fec22bd4b15c6ff17ddba82cd8bb6070dfd78de0099bf344
|
2026-01-21T00:00:00-05:00
|
torch-sla: Differentiable Sparse Linear Algebra with Adjoint Solvers and Sparse Tensor Parallelism for PyTorch
|
arXiv:2601.13994v1 Announce Type: new Abstract: Industrial scientific computing predominantly uses sparse matrices to represent unstructured data -- finite element meshes, graphs, point clouds. We present torch-sla, an open-source PyTorch library that enables GPU-accelerated, scalable, and differentiable sparse linear algebra. The library addresses three fundamental challenges: (1) GPU acceleration for sparse linear solves, nonlinear solves (Newton, Picard, Anderson), and eigenvalue computation; (2) multi-GPU scaling via domain decomposition with halo exchange, reaching a 400 million DOF linear solve on 3 GPUs; and (3) adjoint-based differentiation achieving O(1) computational graph nodes (for autograd) and O(nnz) memory -- independent of solver iterations. torch-sla supports multiple backends (SciPy, cuDSS, PyTorch-native) and seamlessly integrates with PyTorch autograd for end-to-end differentiable simulations. Code is available at https://github.com/walkerchi/torch-sla.
|
https://arxiv.org/abs/2601.13994
|
Academic Papers
|
svg
|
4bfefbb5fda1fb9d1eda1b42ccf5d37ec6946a33604eb4be9155f92c94460102
|
2026-01-21T00:00:00-05:00
|
From Tags to Trees: Structuring Fine-Grained Knowledge for Controllable Data Selection in LLM Instruction Tuning
|
arXiv:2601.13995v1 Announce Type: new Abstract: Effective and controllable data selection is critical for LLM instruction tuning, especially with massive open-source datasets. Existing approaches primarily rely on instance-level quality scores, or diversity metrics based on embedding clusters or semantic tags. However, constrained by the flatness of embedding spaces or the coarseness of tags, these approaches overlook fine-grained knowledge and its intrinsic hierarchical dependencies, consequently hindering precise data valuation and knowledge-aligned sampling. To address this challenge, we propose Tree-aware Aligned Global Sampling (TAGS), a unified framework that leverages a knowledge tree built from fine-grained tags, thereby enabling joint control of global quality, diversity, and target alignment. Using an LLM-based tagger, we extract atomic knowledge concepts, which are organized into a global tree through bottom-up hierarchical clustering. By grounding data instances onto this tree, a tree-aware metric then quantifies data quality and diversity, facilitating effective sampling. Our controllable sampling strategy maximizes tree-level information gain and enforces leaf-level alignment via KL-divergence for specific domains. Extensive experiments demonstrate that TAGS significantly outperforms state-of-the-art baselines. Notably, it surpasses the full-dataset model by +5.84% using only 5% of the data, while our aligned sampling strategy further boosts average performance by +4.24%.
|
https://arxiv.org/abs/2601.13995
|
Academic Papers
|
svg
|
b3ff2a0411bcf8b6e80f65348f1f22c8aed76358084ccd54219f7d48c209a762
|
2026-01-21T00:00:00-05:00
|
Software Testing in the Quantum World
|
arXiv:2601.13996v1 Announce Type: new Abstract: Quantum computing offers significant speedups for simulating physical, chemical, and biological systems, and for optimization and machine learning. As quantum software grows in complexity, the classical simulation of quantum computers, which has long been essential for quality assurance, becomes infeasible. This shift requires new quality-assurance methods that operate directly on real quantum computers. This paper presents the key challenges in testing large-scale quantum software and offers software engineering perspectives for addressing them.
|
https://arxiv.org/abs/2601.13996
|
Academic Papers
|
svg
|
fe8a83de3775dd7a4ea14db09e95396c6a59192dbc7db151d78390b1033de953
|
2026-01-21T00:00:00-05:00
|
Group-Invariant Unsupervised Skill Discovery: Symmetry-aware Skill Representations for Generalizable Behavior
|
arXiv:2601.14000v1 Announce Type: new Abstract: Unsupervised skill discovery aims to acquire behavior primitives that improve exploration and accelerate downstream task learning. However, existing approaches often ignore the geometric symmetries of physical environments, leading to redundant behaviors and sample inefficiency. To address this, we introduce Group-Invariant Skill Discovery (GISD), a framework that explicitly embeds group structure into the skill discovery objective. Our approach is grounded in a theoretical guarantee: we prove that in group-symmetric environments, the standard Wasserstein dependency measure admits a globally optimal solution comprised of an equivariant policy and a group-invariant scoring function. Motivated by this, we formulate the Group-Invariant Wasserstein dependency measure, which restricts the optimization to this symmetry-aware subspace without loss of optimality. Practically, we parameterize the scoring function using a group Fourier representation and define the intrinsic reward via the alignment of equivariant latent features, ensuring that the discovered skills generalize systematically under group transformations. Experiments on state-based and pixel-based locomotion benchmarks demonstrate that GISD achieves broader state-space coverage and improved efficiency in downstream task learning compared to a strong baseline.
|
https://arxiv.org/abs/2601.14000
|
Academic Papers
|
svg
|
495aeec312214c0c1fc81d4af2dcc5ba2797c30bb82af2cb9b7d98ae28b23c2c
|
2026-01-21T00:00:00-05:00
|
Auditory Brain Passage Retrieval: Cross-Sensory EEG Training for Neural Information Retrieval
|
arXiv:2601.14001v1 Announce Type: new Abstract: Query formulation from internal information needs remains fundamentally challenging across all Information Retrieval paradigms due to cognitive complexity and physical impairments. Brain Passage Retrieval (BPR) addresses this by directly mapping EEG signals to passage representations without intermediate text translation. However, existing BPR research exclusively uses visual stimuli, leaving critical questions unanswered: Can auditory EEG enable effective retrieval for voice-based interfaces and visually impaired users? Can training on combined EEG datasets from different sensory modalities improve performance despite severe data scarcity? We present the first systematic investigation of auditory EEG for BPR and evaluate cross-sensory training benefits. Using dual encoder architectures with four pooling strategies (CLS, mean, max, multi-vector), we conduct controlled experiments comparing auditory-only, visual-only, and combined training on the Alice (auditory) and Nieuwland (visual) datasets. Results demonstrate that auditory EEG consistently outperforms visual EEG, and cross-sensory training with CLS pooling achieves substantial improvements over individual training: 31% in MRR (0.474), 43% in Hit@1 (0.314), and 28% in Hit@10 (0.858). Critically, combined auditory EEG models surpass BM25 text baselines (MRR: 0.474 vs 0.428), establishing neural queries as competitive with traditional retrieval whilst enabling accessible interfaces. These findings validate auditory neural interfaces for IR tasks and demonstrate that cross-sensory training addresses data scarcity whilst outperforming single-modality approaches. Code: https://github.com/NiallMcguire/Audio_BPR
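For reference, the first three pooling strategies named above can be written in a few lines over a batch of token embeddings; the shapes and the choice of framework are assumptions for illustration, and the multi-vector variant (late interaction over all token embeddings) is only noted in a comment.

```python
import torch

def pool(hidden: torch.Tensor, strategy: str = "mean") -> torch.Tensor:
    """Collapse a (batch, seq_len, dim) sequence of embeddings into one vector.

    "cls" keeps the first token, while "mean" and "max" aggregate over the sequence.
    A multi-vector strategy would instead keep all token embeddings and score
    query/passage pairs by late interaction (not shown here).
    """
    if strategy == "cls":
        return hidden[:, 0]
    if strategy == "mean":
        return hidden.mean(dim=1)
    if strategy == "max":
        return hidden.max(dim=1).values
    raise ValueError(f"unknown pooling strategy: {strategy}")
```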
|
https://arxiv.org/abs/2601.14001
|
Academic Papers
|
svg
|
3724234dcbac4b014ee2644909707488fb3bfc7ba73bfc421cddd77a3d332199
|
2026-01-21T00:00:00-05:00
|
Consensus Stability of Community Notes on X
|
arXiv:2601.14002v1 Announce Type: new Abstract: Community-based fact-checking systems, such as Community Notes on X (formerly Twitter), aim to mitigate online misinformation by surfacing annotations judged helpful by contributors with diverse viewpoints. While prior work has shown that the platform's bridging-based algorithm effectively selects helpful notes at the time of display, little is known about how evaluations change after notes become visible. Using a large-scale dataset of 437,396 community notes and 35 million ratings from over 580,000 contributors, we examine the stability of helpful notes and the rating dynamics that follow their initial display. We find that 30.2% of displayed notes later lose their helpful status and disappear. Using interrupted time series models, we further show that note display triggers a sharp increase in rating volume and a significant shift in rating leaning, but these effects differ across rater groups. Contributors with viewpoints similar to note authors tend to increase supportive ratings, while dissimilar contributors increase negative ratings, producing systematic post-display polarization. Counterfactual analyses suggest that this post-display polarization, particularly from dissimilar raters, plays a substantial role in note disappearance. These findings highlight the vulnerability of consensus-based fact-checking systems to polarized rating behavior and suggest pathways for improving their resilience.
|
https://arxiv.org/abs/2601.14002
|
Academic Papers
|
svg
|
d36197c2e5fa2daabf7234ea11b404b9615ea056d39c0510676e3f5c8c341dfe
|
2026-01-21T00:00:00-05:00
|
Locate, Steer, and Improve: A Practical Survey of Actionable Mechanistic Interpretability in Large Language Models
|
arXiv:2601.14004v1 Announce Type: new Abstract: Mechanistic Interpretability (MI) has emerged as a vital approach to demystify the opaque decision-making of Large Language Models (LLMs). However, existing reviews primarily treat MI as an observational science, summarizing analytical insights while lacking a systematic framework for actionable intervention. To bridge this gap, we present a practical survey structured around the pipeline: "Locate, Steer, and Improve." We formally categorize Localizing (diagnosis) and Steering (intervention) methods based on specific Interpretable Objects to establish a rigorous intervention protocol. Furthermore, we demonstrate how this framework enables tangible improvements in Alignment, Capability, and Efficiency, effectively operationalizing MI as an actionable methodology for model optimization. The curated paper list of this work is available at https://github.com/rattlesnakey/Awesome-Actionable-MI-Survey.
|
https://arxiv.org/abs/2601.14004
|
Academic Papers
|
svg
|
f34d228f0ecec0e76e1190e9d4be98ca7fb34725335a91cf8cfe8de8cbc20bf0
|
2026-01-21T00:00:00-05:00
|
BACH-V: Bridging Abstract and Concrete Human-Values in Large Language Models
|
arXiv:2601.14007v1 Announce Type: new Abstract: Do large language models (LLMs) genuinely understand abstract concepts, or merely manipulate them as statistical patterns? We introduce an abstraction-grounding framework that decomposes conceptual understanding into three capacities: interpretation of abstract concepts (Abstract-Abstract, A-A), grounding of abstractions in concrete events (Abstract-Concrete, A-C), and application of abstract principles to regulate concrete decisions (Concrete-Concrete, C-C). Using human values as a testbed - given their semantic richness and centrality to alignment - we employ probing (detecting value traces in internal activations) and steering (modifying representations to shift behavior). Across six open-source LLMs and ten value dimensions, probing shows that diagnostic probes trained solely on abstract value descriptions reliably detect the same values in concrete event narratives and decision reasoning, demonstrating cross-level transfer. Steering reveals an asymmetry: intervening on value representations causally shifts concrete judgments and decisions (A-C, C-C), yet leaves abstract interpretations unchanged (A-A), suggesting that encoded abstract values function as stable anchors rather than malleable activations. These findings indicate LLMs maintain structured value representations that bridge abstraction and action, providing a mechanistic and operational foundation for building value-driven autonomous AI systems with more transparent, generalizable alignment and control.
|
https://arxiv.org/abs/2601.14007
|
Academic Papers
|
svg
|
d7230a154967f7045af662456ed86330f9e74bc9bb143f909ca6b58e4a6561a6
|
2026-01-21T00:00:00-05:00
|
MANATEE: A DevOps Platform for xApp Lifecycle Management and Testing in Open RAN
|
arXiv:2601.14009v1 Announce Type: new Abstract: The shift to disaggregated 5G architectures introduces unprecedented flexibility but also significant complexity in Beyond 5G Radio Access Networks (RANs). Open RAN enables programmability through xApps, yet deploying and validating these applications is critical given the nature of the systems they aim to control. Current Open RAN ecosystems lack robust lifecycle management of xApps that enable automated testing, seamless migration, and production-grade observability, resulting in slow, error-prone xApp delivery. To address these issues, DevOps practices can streamline the xApp lifecycle by integrating Continuous Integration/Continuous Deployment (CI/CD) pipelines with advanced traffic management and monitoring, such as leveraging service mesh technologies to enable progressive deployment strategies (e.g., canary releases and A/B testing) to ensure fine-grained observability and resilience. The solution presented in this article, MANATEE (Mesh Architecture for Radio Access Network Automation and TEsting Ecosystems), is the first platform that combines these principles to simplify xApp delivery into production, accelerate innovation, and guarantee performance across heterogeneous O-RAN environments. We prototyped MANATEE on a Kubernetes cluster integrated with the O-RAN Software Community Near-Real Time RAN Intelligent Controller (RIC), as well as with service mesh technologies, to facilitate testing of xApps across simulated, emulated, and real testbed environments. Our experimental results demonstrate that service mesh integration introduces minimal overhead (below 1 ms latency), while enabling reliable canary deployments with fine-grained traffic control and conflict-free A/B testing through circuit-breaking mechanisms.
|
https://arxiv.org/abs/2601.14009
|
Academic Papers
|
svg
|
0af984cdd81f328e4f2af76f8d949d59c8b758839d401214ad27e9c46d86ef4d
|
2026-01-21T00:00:00-05:00
|
Numerical solution of Smoluchowski coagulation equation combined with Ostwald ripening
|
arXiv:2601.14011v1 Announce Type: new Abstract: The processes of simultaneous coagulation and Ostwald ripening of particles in the concluding stage of phase transformation are considered. We solve the integro-differential system of Smoluchowski-type kinetic and mass balance equations using a computationally efficient numerical algorithm based on low-rank matrices. We compare our numerical solutions for different initial particle-volume distributions with the universal distribution function for combined coagulation and Ostwald ripening. Our calculations confirm that a particulate ensemble asymptotically approaches the universal particle-volume distribution after a sufficiently long time, regardless of the initial particle-volume distribution.
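As background for the kinetic system being solved, the toy script below integrates only the discrete coagulation part of the Smoluchowski equation with a constant kernel and explicit Euler time stepping; the kernel, grid, and the absence of the Ostwald ripening and mass-balance terms are deliberate simplifications, and nothing here reflects the paper's low-rank algorithm.

```python
import numpy as np

def coagulation_step(n, dt, K=1.0):
    """One explicit-Euler step of the discrete Smoluchowski coagulation equation.

    n[k-1] is the concentration of k-mers; with a constant kernel K the rate is
      dn_k/dt = 0.5 * K * sum_{i+j=k} n_i n_j  -  K * n_k * sum_j n_j.
    """
    N = len(n)
    gain = np.zeros(N)
    for k in range(2, N + 1):
        i = np.arange(1, k)
        gain[k - 1] = 0.5 * K * np.sum(n[i - 1] * n[k - i - 1])
    loss = K * n * n.sum()
    return n + dt * (gain - loss)

# Monodisperse initial condition: only 1-mers present.
n = np.zeros(64)
n[0] = 1.0
for _ in range(1000):
    n = coagulation_step(n, dt=1e-3)
```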
|
https://arxiv.org/abs/2601.14011
|
Academic Papers
|
svg
|
430471469e4370897d84c5b27f54e42ac6e9f8eb446cc38189bd6e2ad2ed7dfe
|
2026-01-21T00:00:00-05:00
|
BallotRank: A Condorcet Completion Method for Graphs
|
arXiv:2601.14015v1 Announce Type: new Abstract: We introduce BallotRank, a ranked preference aggregation method derived from a modified PageRank algorithm. It is a Condorcet-consistent method without damping, and empirical examination of nearly 2,000 ranked choice elections and over 20,000 internet polls confirms that BallotRank always identifies the Condorcet winner at conventional values of the damping parameter. We also prove that the method satisfies many of the same social choice criteria as other well-known Condorcet completion methods, but it has the advantage of being a natural social welfare function that provides a full ranking of the candidates.
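The abstract does not spell out the modified-PageRank construction, but the general flavour of ranking candidates via a random walk over pairwise preferences can be sketched as follows; the edge weighting, damping handling, and normalization below are assumptions for illustration only, not BallotRank itself.

```python
import numpy as np

def preference_pagerank(ballots, candidates, damping=0.85, iters=1000, tol=1e-12):
    """PageRank-style aggregation over pairwise ballot preferences.

    ballots: list of rankings, each a list of candidates from most to least preferred.
    Returns the candidates sorted by their stationary scores (a full ranking).
    """
    n = len(candidates)
    idx = {c: k for k, c in enumerate(candidates)}
    W = np.zeros((n, n))  # W[loser, winner]: ballots preferring `winner` over `loser`
    for ballot in ballots:
        for pos, winner in enumerate(ballot):
            for loser in ballot[pos + 1:]:
                W[idx[loser], idx[winner]] += 1.0
    out = W.sum(axis=1, keepdims=True)                           # outgoing weight per node
    P = np.where(out > 0, W / np.maximum(out, 1e-12), 1.0 / n)   # row-stochastic transitions
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r_new = (1.0 - damping) / n + damping * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return sorted(zip(candidates, r), key=lambda kv: -kv[1])

# Example: a tiny election with three ballots.
# preference_pagerank([["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]], ["A", "B", "C"])
```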
|
https://arxiv.org/abs/2601.14015
|
Academic Papers
|
svg
|
4e7f1c19b011ce8a24573e61652296ed27a25e66483e2f15997b8ba7123c6298
|
2026-01-21T00:00:00-05:00
|
A Security Framework for Chemical Functions
|
arXiv:2601.14019v1 Announce Type: new Abstract: In this paper, we introduce chemical functions, a unified framework that models chemical systems as noisy challenge--response primitives, and formalize the associated chemical function infrastructure. Building on the theory of physical functions, we rigorously define robustness, unclonability, and unpredictability for chemical functions in both finite and asymptotic regimes, and specify security games that capture the adversary's power and the security goals. We instantiate the framework with two existing DNA-based constructions (operable random DNA and Genomic Sequence Encryption) and derive quantitative bounds for robustness, unclonability, and unpredictability. Our analysis develops maximum-likelihood verification rules under sequencing noise and partial-edit models, and provides high-precision estimates based on binomial distributions to guide parameter selection. The framework, definitions, and analyses yield a reproducible methodology for designing chemically unclonable authentication mechanisms. We demonstrate applications to in-product authentication and to shared key generation using standard extraction techniques.
|
https://arxiv.org/abs/2601.14019
|
Academic Papers
|
svg
|
b7844a76bbcec49f6e155003abe7c7e128b0ccae21c6e4bdc9db411f138d8e14
|
2026-01-21T00:00:00-05:00
|
OAMAC: Origin-Aware Mandatory Access Control for Practical Post-Compromise Attack Surface Reduction
|
arXiv:2601.14021v1 Announce Type: new Abstract: Modern operating systems provide powerful mandatory access control mechanisms, yet they largely reason about who executes code rather than how execution originates. As a result, processes launched remotely, locally, or by background services are often treated equivalently once privileges are obtained, complicating security reasoning and enabling post-compromise abuse of sensitive system interfaces. We introduce origin-aware mandatory access control (OAMAC), a kernel-level enforcement model that treats execution origin -- such as physical user presence, remote access, or service execution -- as a first-class security attribute. OAMAC mediates access to security-critical subsystems based on execution provenance rather than identity alone, enabling centralized governance over multiple attack surfaces while significantly reducing policy complexity. We present a deployable prototype implemented entirely using the Linux eBPF LSM framework, requiring no kernel modifications. OAMAC classifies execution origin using kernel-visible metadata, propagates origin across process creation, and enforces origin-aware policies on both sensitive filesystem interfaces and the kernel BPF control plane. Policies are maintained in kernel-resident eBPF maps and can be reconfigured at runtime via a minimal userspace tool. Our evaluation demonstrates that OAMAC effectively restricts common post-compromise actions available to remote attackers while preserving normal local administration and system stability. We argue that execution origin represents a missing abstraction in contemporary operating system security models, and that elevating it to a first-class concept enables practical attack surface reduction without requiring subsystem-specific expertise or heavyweight security frameworks.
|
https://arxiv.org/abs/2601.14021
|
Academic Papers
|
svg
|
2e2029f81d420ed9ea6cb6f5eaab432e9c2c7f883358e6f6aa6f2abd5d5fd854
|
2026-01-21T00:00:00-05:00
|
Credible CO2 Comparisons: A Machine Learning Approach to Vehicle Powertrain Assessment
|
arXiv:2601.14022v1 Announce Type: new Abstract: Decarbonizing road transport requires consistent and transparent methods for comparing CO2 emissions across vehicle technologies. This paper proposes a machine learning-based framework for like-for-like operational assessment of internal combustion engine vehicles (ICEVs) and electric vehicles (EVs) under identical, real-world driving conditions. The approach isolates technology-specific effects by holding the observed speed profile and environmental context fixed, enabling direct comparison of powertrain performance. Recurrent neural network models are trained independently for each domain to learn the mapping from contextual driving variables (speed, acceleration, temperature) to internal actuation variables (torque, throttle) and instantaneous CO2-equivalent emission rates. This structure allows the construction of counterfactual scenarios that answer: What emissions would an EV have generated if it had followed the same driving profile as an ICEV? By aligning both vehicle types on a unified instantaneous emissions metric, the framework enables fair and reproducible evaluation of powertrain technologies. It offers a scalable foundation for credible, data-driven assessments of vehicle carbon performance under real-world operating conditions.
|
https://arxiv.org/abs/2601.14022
|
Academic Papers
|
svg
|
6d20ab8682f5163b5bad6fed707f7ad791a237b024b566c81f61523cf4671f61
|
2026-01-21T00:00:00-05:00
|
Universal Approximation Theorem for Input-Connected Multilayer Perceptrons
|
arXiv:2601.14026v1 Announce Type: new Abstract: We introduce the Input-Connected Multilayer Perceptron (IC-MLP), a feedforward neural network architecture in which each hidden neuron receives, in addition to the outputs of the preceding layer, a direct affine connection from the raw input. We first study this architecture in the univariate setting and give an explicit and systematic description of IC-MLPs with an arbitrary finite number of hidden layers, including iterated formulas for the network functions. In this setting, we prove a universal approximation theorem showing that deep IC-MLPs can approximate any continuous function on a closed interval of the real line if and only if the activation function is nonlinear. We then extend the analysis to vector-valued inputs and establish a corresponding universal approximation theorem for continuous functions on compact subsets of $\mathbb{R}^n$.
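A minimal sketch of the Input-Connected MLP idea (each hidden layer sees both the previous layer's output and a direct affine map of the raw input) is given below in PyTorch; the layer sizes, activation, and the additive way the skip term is combined are assumptions rather than the paper's exact construction.

```python
import torch
import torch.nn as nn

class ICMLP(nn.Module):
    """Input-Connected MLP: every hidden layer also receives a direct affine
    map of the raw input x (combined here by addition, one possible choice)."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int, depth: int = 3):
        super().__init__()
        self.act = nn.Tanh()
        self.hidden = nn.ModuleList()
        self.input_links = nn.ModuleList()
        prev = in_dim
        for _ in range(depth):
            self.hidden.append(nn.Linear(prev, hidden_dim))
            self.input_links.append(nn.Linear(in_dim, hidden_dim))  # affine connection from x
            prev = hidden_dim
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x
        for layer, link in zip(self.hidden, self.input_links):
            h = self.act(layer(h) + link(x))  # previous layer output plus raw-input term
        return self.out(h)

# A univariate instance as in the paper's first setting: R -> R.
model = ICMLP(in_dim=1, hidden_dim=16, out_dim=1)
y = model(torch.linspace(-1.0, 1.0, 100).unsqueeze(-1))
```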
|
https://arxiv.org/abs/2601.14026
|
Academic Papers
|
svg
|
f309401f0caceb8d685081bf216eee43077a17423714a0e324df449fc252a4b0
|
2026-01-21T00:00:00-05:00
|
Numina-Lean-Agent: An Open and General Agentic Reasoning System for Formal Mathematics
|
arXiv:2601.14027v1 Announce Type: new Abstract: Agentic systems have recently become the dominant paradigm for formal theorem proving, achieving strong performance by coordinating multiple models and tools. However, existing approaches often rely on task-specific pipelines and trained formal provers, limiting their flexibility and reproducibility. In this paper, we propose the paradigm that directly uses a general coding agent as a formal math reasoner. This paradigm is motivated by (1) A general coding agent provides a natural interface for diverse reasoning tasks beyond proving, (2) Performance can be improved by simply replacing the underlying base model, without training, and (3) MCP enables flexible extension and autonomous calling of specialized tools, avoiding complex design. Based on this paradigm, we introduce Numina-Lean-Agent, which combines Claude Code with Numina-Lean-MCP to enable autonomous interaction with Lean, retrieval of relevant theorems, informal proving and auxiliary reasoning tools. Using Claude Opus 4.5 as the base model, Numina-Lean-Agent solves all problems in Putnam 2025 (12 / 12), matching the best closed-source system. Beyond benchmark evaluation, we further demonstrate its generality by interacting with mathematicians to successfully formalize the Brascamp-Lieb theorem. We release Numina-Lean-Agent and all solutions at https://github.com/project-numina/numina-lean-agent.
|
https://arxiv.org/abs/2601.14027
|
Academic Papers
|
svg
|
c3a8a612b188f54de79f2de14d441add03e123fe41170ccd585f86f5db047b97
|
2026-01-21T00:00:00-05:00
|
Likelihood-Separable Diffusion Inference for Multi-Image MRI Super-Resolution
|
arXiv:2601.14030v1 Announce Type: new Abstract: Diffusion models are the current state-of-the-art for solving inverse problems in imaging. Their impressive generative capability allows them to approximate sampling from a prior distribution, which alongside a known likelihood function permits posterior sampling without retraining the model. While recent methods have made strides in advancing the accuracy of posterior sampling, the majority focuses on single-image inverse problems. However, for modalities such as magnetic resonance imaging (MRI), it is common to acquire multiple complementary measurements, each low-resolution along a different axis. In this work, we generalize common diffusion-based single-image inverse problem solvers to multi-image super-resolution (MISR) MRI. We show that the DPS likelihood correction allows an exactly-separable gradient decomposition across independently acquired measurements, enabling MISR without constructing a joint operator, modifying the diffusion model, or increasing network function evaluations. We derive MISR versions of DPS, DMAP, DPPS, and diffusion-based PnP/ADMM, and demonstrate substantial gains over single-image super-resolution (SISR) across $4\times/8\times/16\times$ anisotropic degradations. Our results achieve state-of-the-art super-resolution of anisotropic MRI volumes and, critically, enable reconstruction of near-isotropic anatomy from routine 2D multi-slice acquisitions, which are otherwise highly degraded in orthogonal views.
|
https://arxiv.org/abs/2601.14030
|
Academic Papers
|
svg
|
92ecf441092ec644f84f04a21dc3609c1a94bc16deaf14bc33ec584d1c20aec1
|
2026-01-21T00:00:00-05:00
|
RM-Distiller: Exploiting Generative LLM for Reward Model Distillation
|
arXiv:2601.14032v1 Announce Type: new Abstract: Reward models (RMs) play a pivotal role in aligning large language models (LLMs) with human preferences. Due to the difficulty of obtaining high-quality human preference annotations, distilling preferences from generative LLMs has emerged as a standard practice. However, existing approaches predominantly treat teacher models as simple binary annotators, failing to fully exploit the rich knowledge and capabilities for RM distillation. To address this, we propose RM-Distiller, a framework designed to systematically exploit the multifaceted capabilities of teacher LLMs: (1) Refinement capability, which synthesizes highly correlated response pairs to create fine-grained and contrastive signals. (2) Scoring capability, which guides the RM in capturing precise preference strength via a margin-aware optimization objective. (3) Generation capability, which incorporates the teacher's generative distribution to regularize the RM to preserve its fundamental linguistic knowledge. Extensive experiments demonstrate that RM-Distiller significantly outperforms traditional distillation methods both on RM benchmarks and reinforcement learning-based alignment, proving that exploiting multifaceted teacher capabilities is critical for effective reward modeling. To the best of our knowledge, this is the first systematic research on RM distillation from generative LLMs.
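As one way to picture the margin-aware optimization objective mentioned under the scoring capability, the snippet below uses a Bradley-Terry-style pairwise loss in which the reward gap must exceed a teacher-assigned score margin; this is a generic sketch, not necessarily RM-Distiller's exact objective, and it omits the generative regularization term.

```python
import torch
import torch.nn.functional as F

def margin_aware_preference_loss(r_chosen: torch.Tensor,
                                 r_rejected: torch.Tensor,
                                 teacher_margin: torch.Tensor) -> torch.Tensor:
    """Pairwise reward-model loss with a teacher-provided preference-strength margin.

    The reward assigned to the chosen response should exceed the rejected one
    by at least `teacher_margin`; larger margins encode stronger preferences.
    """
    return -F.logsigmoid(r_chosen - r_rejected - teacher_margin).mean()
```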
|
https://arxiv.org/abs/2601.14032
|
Academic Papers
|
svg
|
c999a856d4573088960b8b7f5c5d90bdd1858a712c6baad2a3d7dddd94382f89
|
2026-01-21T00:00:00-05:00
|
PAC-Private Responses with Adversarial Composition
|
arXiv:2601.14033v1 Announce Type: new Abstract: Modern machine learning models are increasingly deployed behind APIs. This renders standard weight-privatization methods (e.g. DP-SGD) unnecessarily noisy at the cost of utility. While model weights may vary significantly across training datasets, model responses to specific inputs are much lower dimensional and more stable. This motivates enforcing privacy guarantees directly on model outputs. We approach this under PAC privacy, which provides instance-based privacy guarantees for arbitrary black-box functions by controlling mutual information (MI). Importantly, PAC privacy explicitly rewards output stability with reduced noise levels. However, a central challenge remains: response privacy requires composing a large number of adaptively chosen, potentially adversarial queries issued by untrusted users, where existing composition results on PAC privacy are inadequate. We introduce a new algorithm that achieves adversarial composition via adaptive noise calibration and prove that mutual information guarantees accumulate linearly under adaptive and adversarial querying. Experiments across tabular, vision, and NLP tasks show that our method achieves high utility at extremely small per-query privacy budgets. On CIFAR-10, we achieve 87.79% accuracy with a per-step MI budget of $2^{-32}$. This enables serving one million queries while provably bounding membership inference attack (MIA) success rates to 51.08% -- the same guarantee of $(0.04, 10^{-5})$-DP. Furthermore, we show that private responses can be used to label public data to distill a publishable privacy-preserving model; using an ImageNet subset as a public dataset, our model distilled from 210,000 responses achieves 91.86% accuracy on CIFAR-10 with MIA success upper-bounded by 50.49%, which is comparable to $(0.02,10^{-5})$-DP.
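To give a feel for how a mutual-information budget can drive noise calibration and accumulate across queries, the sketch below uses the Gaussian-channel bound 0.5 * log(1 + signal_var / noise_var) together with a linearly accumulating accountant; both the calibration rule and the accounting (and the choice of nats as the unit) are simplified assumptions, not the paper's algorithm.

```python
import math

def gaussian_noise_std(output_std: float, mi_budget_nats: float) -> float:
    """Noise scale so that 0.5 * log(1 + output_std**2 / noise_std**2) <= budget."""
    return output_std / math.sqrt(math.expm1(2.0 * mi_budget_nats))

class LinearMIAccountant:
    """Tracks a total mutual-information budget spent linearly across queries."""

    def __init__(self, total_budget_nats: float):
        self.remaining = total_budget_nats

    def spend(self, per_query_nats: float) -> float:
        if per_query_nats > self.remaining:
            raise RuntimeError("mutual-information budget exhausted")
        self.remaining -= per_query_nats
        return self.remaining

# Example: one million queries at a per-query budget of 2**-32.
accountant = LinearMIAccountant(total_budget_nats=1_000_000 * 2.0 ** -32)
sigma = gaussian_noise_std(output_std=0.05, mi_budget_nats=2.0 ** -32)
```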
|
https://arxiv.org/abs/2601.14033
|
Academic Papers
|
svg
|
34bb2236d4354d0acc84f98886180749cf0304db93103593b7a8cd7e9a84bc8b
|
2026-01-21T00:00:00-05:00
|
Analyzing the Availability of E-Mail Addresses for PyPI Libraries
|
arXiv:2601.14034v1 Announce Type: new Abstract: Open Source Software (OSS) libraries form the backbone of modern software systems, yet their long-term sustainability often depends on maintainers being reachable for support, coordination, and security reporting. In this paper, we empirically analyze the availability of contact information - specifically e-mail addresses - across 686,034 Python libraries on the Python Package Index (PyPI) and their associated GitHub repositories. We examine how and where maintainers provide this information, assess its validity, and explore coverage across individual libraries and their dependency chains. Our findings show that 81.6% of libraries include at least one valid e-mail address, with PyPI serving as the primary source (79.5%). When analyzing dependency chains, we observe that up to 97.8% of direct and 97.7% of transitive dependencies provide valid contact information. At the same time, we identify over 698,000 invalid entries, primarily due to missing fields. These results demonstrate strong maintainer reachability across the ecosystem, while highlighting opportunities for improvement - such as offering clearer guidance to maintainers during the packaging process and introducing opt-in validation mechanisms for existing e-mail addresses.
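A rough, self-contained way to reproduce the flavour of this availability check is to query the public PyPI JSON API and inspect the author/maintainer e-mail fields; the regex validation below is much cruder than the study's methodology, and the field coverage shown is an assumption for illustration.

```python
import json
import re
import urllib.request

EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

def pypi_contact_emails(package: str) -> list[str]:
    """Return syntactically plausible contact e-mails from a package's PyPI metadata."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        info = json.load(resp)["info"]
    candidates = [info.get("author_email"), info.get("maintainer_email")]
    return [e.strip() for e in candidates if e and EMAIL_RE.fullmatch(e.strip())]

# Example: print(pypi_contact_emails("requests"))
```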
|
https://arxiv.org/abs/2601.14034
|
Academic Papers
|
svg
|
0ce0e9d921431b67e49a0b2f01a367385dddfba3b6ac20f01236df67bab9d11f
|
2026-01-21T00:00:00-05:00
|
Human detectors are surprisingly powerful reward models
|
arXiv:2601.14037v1 Announce Type: new Abstract: Video generation models have recently achieved impressive visual fidelity and temporal coherence. Yet, they continue to struggle with complex, non-rigid motions, especially when synthesizing humans performing dynamic actions such as sports, dance, etc. Generated videos often exhibit missing or extra limbs, distorted poses, or physically implausible actions. In this work, we propose a remarkably simple reward model, HuDA, to quantify and improve human motion in generated videos. HuDA integrates human detection confidence for appearance quality, and a temporal prompt alignment score to capture motion realism. We show that this simple reward function, which leverages off-the-shelf models without any additional training, outperforms specialized models finetuned with manually annotated data. Using HuDA for Group Reward Policy Optimization (GRPO) post-training of video models, we significantly enhance video generation, especially when generating complex human motions, outperforming state-of-the-art models like Wan 2.1 with a win rate of 73%. Finally, we demonstrate that HuDA improves generation quality beyond just humans, for instance, significantly improving generation of animal videos and human-object interactions.
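Since the reward simply combines two off-the-shelf signals, a hypothetical minimal version is easy to write down; the weighting, the per-frame aggregation, and the function names below are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def huda_style_reward(det_confidences, prompt_alignment, w_appearance=0.5, w_motion=0.5):
    """Combine per-frame human-detection confidence with a temporal prompt-alignment score.

    det_confidences: iterable of detector confidences in [0, 1], one per frame.
    prompt_alignment: scalar temporal alignment score between the video and its prompt.
    """
    appearance = float(np.mean(det_confidences))
    return w_appearance * appearance + w_motion * float(prompt_alignment)
```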
|
https://arxiv.org/abs/2601.14037
|
Academic Papers
|
svg
|
8cec323f776561f07079f59edc114b31d8f2ddd062f94db0e93bc364447dec95
|
2026-01-21T00:00:00-05:00
|
Correcting and Quantifying Systematic Errors in 3D Box Annotations for Autonomous Driving
|
arXiv:2601.14038v1 Announce Type: new Abstract: Accurate ground truth annotations are critical to supervised learning and evaluating the performance of autonomous vehicle systems. These vehicles are typically equipped with active sensors, such as LiDAR, which scan the environment in predefined patterns. 3D box annotation based on data from such sensors is challenging in dynamic scenarios, where objects are observed at different timestamps, hence different positions. Without proper handling of this phenomenon, systematic errors are prone to being introduced in the box annotations. Our work is the first to discover such annotation errors in widely used, publicly available datasets. Through our novel offline estimation method, we correct the annotations so that they follow physically feasible trajectories and achieve spatial and temporal consistency with the sensor data. For the first time, we define metrics for this problem; and we evaluate our method on the Argoverse 2, MAN TruckScenes, and our proprietary datasets. Our approach increases the quality of box annotations by more than 17% in these datasets. Furthermore, we quantify the annotation errors in them and find that the original annotations are misplaced by up to 2.5 m, with highly dynamic objects being the most affected. Finally, we test the impact of the errors in benchmarking and find that the impact is larger than the improvements that state-of-the-art methods typically achieve with respect to the previous state-of-the-art methods; showing that accurate annotations are essential for correct interpretation of performance. Our code is available at https://github.com/alexandre-justo-miro/annotation-correction-3D-boxes.
|
https://arxiv.org/abs/2601.14038
|
Academic Papers
|
svg
|
39590b0c270c223ccd62f5468b3f353af634b0aeea4c3466101b3dfb7060a807
|
2026-01-21T00:00:00-05:00
|
Generalizing Abstention for Noise-Robust Learning in Medical Image Segmentation
|
arXiv:2601.14039v1 Announce Type: new Abstract: Label noise is a critical problem in medical image segmentation, often arising from the inherent difficulty of manual annotation. Models trained on noisy data are prone to overfitting, which degrades their generalization performance. While a number of methods and strategies have been proposed to mitigate noisy labels in the segmentation domain, this area remains largely under-explored. The abstention mechanism has proven effective in classification tasks by enhancing the capabilities of Cross Entropy, yet its potential in segmentation remains unverified. In this paper, we address this gap by introducing a universal and modular abstention framework capable of enhancing the noise-robustness of a diverse range of loss functions. Our framework improves upon prior work with two key components: an informed regularization term to guide abstention behaviour, and a more flexible power-law-based auto-tuning algorithm for the abstention penalty. We demonstrate the framework's versatility by systematically integrating it with three distinct loss functions to create three novel, noise-robust variants: GAC, SAC, and ADS. Experiments on the CaDIS and DSAD medical datasets show our methods consistently and significantly outperform their non-abstaining baselines, especially under high noise levels. This work establishes that enabling models to selectively ignore corrupted samples is a powerful and generalizable strategy for building more reliable segmentation models. Our code is publicly available at https://github.com/wemous/abstention-for-segmentation.
|
https://arxiv.org/abs/2601.14039
|
Academic Papers
|
svg
|
96309752f7aca2486e5077fe2cb84d9c67c925a7c6f5db98e2594f93bbf9cdbc
|
2026-01-21T00:00:00-05:00
|
Top 10 Open Challenges Steering the Future of Diffusion Language Model and Its Variants
|
arXiv:2601.14041v1 Announce Type: new Abstract: The paradigm of Large Language Models (LLMs) is currently defined by auto-regressive (AR) architectures, which generate text through a sequential ``brick-by-brick'' process. Despite their success, AR models are inherently constrained by a causal bottleneck that limits global structural foresight and iterative refinement. Diffusion Language Models (DLMs) offer a transformative alternative, conceptualizing text generation as a holistic, bidirectional denoising process akin to a sculptor refining a masterpiece. However, the potential of DLMs remains largely untapped as they are frequently confined within AR-legacy infrastructures and optimization frameworks. In this Perspective, we identify ten fundamental challenges ranging from architectural inertia and gradient sparsity to the limitations of linear reasoning that prevent DLMs from reaching their ``GPT-4 moment''. We propose a strategic roadmap organized into four pillars: foundational infrastructure, algorithmic optimization, cognitive reasoning, and unified multimodal intelligence. By shifting toward a diffusion-native ecosystem characterized by multi-scale tokenization, active remasking, and latent thinking, we can move beyond the constraints of the causal horizon. We argue that this transition is essential for developing next-generation AI capable of complex structural reasoning, dynamic self-correction, and seamless multimodal integration.
|
https://arxiv.org/abs/2601.14041
|
Academic Papers
|
svg
|
7314ac098894300203756db339e27ed69cc5c4ade3906535b498780f678c1860
|
2026-01-21T00:00:00-05:00
|
Federated Balanced Learning
|
arXiv:2601.14042v1 Announce Type: new Abstract: Federated learning is a paradigm of joint learning in which clients collaborate by sharing model parameters instead of data. However, in the non-iid setting, the global model experiences client drift, which can seriously affect the final performance of the model. Previous methods tend to correct an already-deviated global model based on the loss function or gradient, overlooking the impact of the client samples. In this paper, we rethink the role of the client side and propose Federated Balanced Learning, i.e., FBL, to prevent this issue from the beginning through sample balance on the client side. Technically, FBL allows unbalanced data on the client side to achieve sample balance through knowledge filling and knowledge sampling using edge-side generative models, under the constraint of a fixed number of data samples on clients. Furthermore, we design a Knowledge Alignment Strategy to bridge the gap between synthetic and real data, and a Knowledge Drop Strategy to regularize our method. Meanwhile, we scale our method to real and complex scenarios, allowing different clients to adopt various methods, and extend our framework to further improve performance. Numerous experiments show that our method outperforms state-of-the-art baselines. The code will be released upon acceptance.
|
https://arxiv.org/abs/2601.14042
|
Academic Papers
|
svg
|
a05f2b6bb6a19e8ffd474e53c7466ea38adeb9124135ec79e71e1ba013f5528d
|
2026-01-21T00:00:00-05:00
|
Weather-R1: Logically Consistent Reinforcement Fine-Tuning for Multimodal Reasoning in Meteorology
|
arXiv:2601.14044v1 Announce Type: new Abstract: While Vision Language Models (VLMs) show advancing reasoning capabilities, their application in meteorology is constrained by a domain gap and a reasoning faithfulness gap. Specifically, mainstream Reinforcement Fine-Tuning (RFT) can induce Self-Contradictory Reasoning (Self-Contra), where the model's reasoning contradicts its final answer, which is unacceptable in such a high-stakes domain. To address these challenges, we construct WeatherQA, a novel multimodal reasoning benchmark in meteorology. We also propose Logically Consistent Reinforcement Fine-Tuning (LoCo-RFT), which resolves Self-Contra by introducing a logical consistency reward. Furthermore, we introduce Weather-R1, the first reasoning VLM with logical faithfulness in meteorology, to the best of our knowledge. Experiments demonstrate that Weather-R1 improves performance on WeatherQA by 9.8 percentage points over the baseline, outperforming Supervised Fine-Tuning and RFT, and even surpassing the original Qwen2.5-VL-32B. These results highlight the effectiveness of our LoCo-RFT and the superiority of Weather-R1. Our benchmark and code are available at https://github.com/Marcowky/Weather-R1.
|
https://arxiv.org/abs/2601.14044
|
Academic Papers
|
svg
|
46d65a78f766ef937f2ab0a7f9de128095f0c8733c2e24b9a86c60373c23b8f2
|
2026-01-21T00:00:00-05:00
|
PRiSM: Benchmarking Phone Realization in Speech Models
|
arXiv:2601.14046v1 Announce Type: new Abstract: Phone recognition (PR) serves as the atomic interface for language-agnostic modeling for cross-lingual speech processing and phonetic analysis. Despite prolonged efforts in developing PR systems, current evaluations only measure surface-level transcription accuracy. We introduce PRiSM, the first open-source benchmark designed to expose blind spots in phonetic perception through intrinsic and extrinsic evaluation of PR systems. PRiSM standardizes transcription-based evaluation and assesses downstream utility in clinical, educational, and multilingual settings with transcription and representation probes. We find that diverse language exposure during training is key to PR performance, encoder-CTC models are the most stable, and specialized PR models still outperform Large Audio Language Models. PRiSM releases code, recipes, and datasets to move the field toward multilingual speech models with robust phonetic ability: https://github.com/changelinglab/prism.
|
https://arxiv.org/abs/2601.14046
|
Academic Papers
|
svg
|
f572159a67241e39d019d4330459a4a4e1165eba1981a1d43f67a715c97e129d
|
2026-01-21T00:00:00-05:00
|
Collective intelligence in science: direct elicitation of diverse information from experts with unknown information structure
|
arXiv:2601.14047v1 Announce Type: new Abstract: Suppose we need a deep collective analysis of an open scientific problem: there is a complex scientific hypothesis and a large online group of mutually unrelated experts with relevant private information of a diverse and unpredictable nature. This information may be results of experts' individual experiments, original reasoning of some of them, results of AI systems they use, etc. We propose a simple mechanism based on a self-resolving play-money prediction market entangled with a chat. We show that such a system can easily be brought to an equilibrium where participants directly share their private information on the hypothesis through the chat and trade as if the market were resolved in accordance with the truth of the hypothesis. This approach will lead to efficient aggregation of relevant information in a completely interpretable form even if the ground truth cannot be established and experts initially know nothing about each other and cannot perform complex Bayesian calculations. Finally, by rewarding the experts with some real assets proportionally to the play money they end up with, we can get an innovative way to fund large-scale collaborative studies of any type.
|
https://arxiv.org/abs/2601.14047
|
Academic Papers
|
svg
|
523a4d4c9325fce095d7e18d1073e927b19aa85b310d3de8bb7c5d8c2a2d1849
|
2026-01-21T00:00:00-05:00
|
Understanding Multilingualism in Mixture-of-Experts LLMs: Routing Mechanism, Expert Specialization, and Layerwise Steering
|
arXiv:2601.14050v1 Announce Type: new Abstract: Mixture-of-Experts (MoE) architectures have shown strong multilingual capabilities, yet the internal mechanisms underlying performance gains and cross-language differences remain insufficiently understood. In this work, we conduct a systematic analysis of MoE models, examining routing behavior and expert specialization across languages and network depth. Our analysis reveals that multilingual processing in MoE models is highly structured: routing aligns with linguistic families, expert utilization follows a clear layerwise pattern, and high-resource languages rely on shared experts while low-resource languages depend more on language-exclusive experts despite weaker performance. Layerwise interventions further show that early and late MoE layers support language-specific processing, whereas middle layers serve as language-agnostic capacity hubs. Building on these insights, we propose a routing-guided steering method that adaptively guides routing behavior in middle layers toward shared experts associated with dominant languages at inference time, leading to consistent multilingual performance improvements, particularly for linguistically related language pairs. Our code is available at https://github.com/conctsai/Multilingualism-in-Mixture-of-Experts-LLMs.
|
https://arxiv.org/abs/2601.14050
|
Academic Papers
|
svg
|
bfc6fc53adeaf3ad4a0fc945450f5e9343677874a8478c93a31499a7afd3f3ee
|
2026-01-21T00:00:00-05:00
|
Kakugo: Distillation of Low-Resource Languages into Small Language Models
|
arXiv:2601.14051v1 Announce Type: new Abstract: We present Kakugo, a novel and cost-effective pipeline designed to train general-purpose Small Language Models (SLMs) for low-resource languages using only the language name as input. By using a large teacher model to generate synthetic prompts and translate instruction datasets, we produced training data and SLMs for 54 low-resource languages. Evaluations across a diverse set of general natural language processing tasks, including translation, classification, and question answering, demonstrate that our pipeline consistently improves performance over base models. With a total generation and training cost of under $50 per language, Kakugo offers an accessible method for communities to develop language-specific AI.
|
https://arxiv.org/abs/2601.14051
|
Academic Papers
|
svg
|
d02b529bf116c842ca27fc24c1f505a6eccd629cc78b376a697a394408232882
|
2026-01-21T00:00:00-05:00
|
Vision Also You Need: Navigating Out-of-Distribution Detection with Multimodal Large Language Model
|
arXiv:2601.14052v1 Announce Type: new Abstract: Out-of-Distribution (OOD) detection is a critical task that has garnered significant attention. The emergence of CLIP has spurred extensive research into zero-shot OOD detection, often employing a training-free approach. Current methods leverage expert knowledge from large language models (LLMs) to identify potential outliers. However, these approaches tend to over-rely on knowledge in the text space, neglecting the inherent challenges involved in detecting out-of-distribution samples in the image space. In this paper, we propose a novel pipeline, MM-OOD, which leverages the multimodal reasoning capabilities of MLLMs and their ability to conduct multi-round conversations for enhanced outlier detection. Our method is designed to improve performance in both near OOD and far OOD tasks. Specifically, (1) for near OOD tasks, we directly feed ID images and corresponding text prompts into MLLMs to identify potential outliers; and (2) for far OOD tasks, we introduce the sketch-generate-elaborate framework: first, we sketch outlier exposure using text prompts, then generate corresponding visual OOD samples, and finally elaborate by using multimodal prompts. Experiments demonstrate that our method achieves significant improvements on widely used multimodal datasets such as Food-101, while also validating its scalability on ImageNet-1K.
|
https://arxiv.org/abs/2601.14052
|
Academic Papers
|
svg
|
00bdd0c98a4403c65dc9e9ca3844412e4b078ab10fd2f56772b0aae32fef86ff
|
2026-01-21T00:00:00-05:00
|
LLMOrbit: A Circular Taxonomy of Large Language Models -From Scaling Walls to Agentic AI Systems
|
arXiv:2601.14053v1 Announce Type: new Abstract: The field of artificial intelligence has undergone a revolution from foundational Transformer architectures to reasoning-capable systems approaching human-level performance. We present LLMOrbit, a comprehensive circular taxonomy navigating the landscape of large language models spanning 2019-2025. This survey examines over 50 models across 15 organizations through eight interconnected orbital dimensions, documenting architectural innovations, training methodologies, and efficiency patterns defining modern LLMs, generative AI, and agentic systems. We identify three critical crises: (1) data scarcity (9-27T tokens depleted by 2026-2028), (2) exponential cost growth ($3M to $300M+ in 5 years), and (3) unsustainable energy consumption (22x increase), establishing the scaling wall limiting brute-force approaches. Our analysis reveals six paradigms breaking this wall: (1) test-time compute (o1, DeepSeek-R1 achieve GPT-4 performance with 10x inference compute), (2) quantization (4-8x compression), (3) distributed edge computing (10x cost reduction), (4) model merging, (5) efficient training (ORPO reduces memory 50%), and (6) small specialized models (Phi-4 14B matches larger models). Three paradigm shifts emerge: (1) post-training gains (RLHF, GRPO, pure RL contribute substantially, DeepSeek-R1 achieving 79.8% MATH), (2) efficiency revolution (MoE routing 18x efficiency, Multi-head Latent Attention 8x KV cache compression enables GPT-4-level performance at <$0.30/M tokens), and (3) democratization (open-source Llama 3 88.6% MMLU surpasses GPT-4 86.4%). We provide insights into techniques (RLHF, PPO, DPO, GRPO, ORPO), trace evolution from passive generation to tool-using agents (ReAct, RAG, multi-agent systems), and analyze post-training innovations.
|
https://arxiv.org/abs/2601.14053
|
Academic Papers
|
svg
|
a01aa817457d4428cdd11ffd0ae3a0c039e5069872dc5808b9985c70b4ba1aec
|
2026-01-21T00:00:00-05:00
|
SecureSplit: Mitigating Backdoor Attacks in Split Learning
|
arXiv:2601.14054v1 Announce Type: new Abstract: Split Learning (SL) offers a framework for collaborative model training that respects data privacy by allowing participants to share the same dataset while maintaining distinct feature sets. However, SL is susceptible to backdoor attacks, in which malicious clients subtly alter their embeddings to insert hidden triggers that compromise the final trained model. To address this vulnerability, we introduce SecureSplit, a defense mechanism tailored to SL. SecureSplit applies a dimensionality transformation strategy to accentuate subtle differences between benign and poisoned embeddings, facilitating their separation. With this enhanced distinction, we develop an adaptive filtering approach that uses a majority-based voting scheme to remove contaminated embeddings while preserving clean ones. Rigorous experiments across four datasets (CIFAR-10, MNIST, CINIC-10, and ImageNette), five backdoor attack scenarios, and seven alternative defenses confirm the effectiveness of SecureSplit under various challenging conditions.
|
https://arxiv.org/abs/2601.14054
|
Academic Papers
|
svg
|
47201dcdf8ae150a2f68d0be2ad82e9879242abbf4a3bb1d689a104a4e7ae184
|
2026-01-21T00:00:00-05:00
|
Decoder-Free Supervoxel GNN for Accurate Brain-Tumor Localization in Multi-Modal MRI
|
arXiv:2601.14055v1 Announce Type: new Abstract: Modern vision backbones for 3D medical imaging typically process dense voxel grids through parameter-heavy encoder-decoder structures, a design that allocates a significant portion of its parameters to spatial reconstruction rather than feature learning. Our approach introduces SVGFormer, a decoder-free pipeline built upon a content-aware grouping stage that partitions the volume into a semantic graph of supervoxels. Its hierarchical encoder learns rich node representations by combining a patch-level Transformer with a supervoxel-level Graph Attention Network, jointly modeling fine-grained intra-region features and broader inter-regional dependencies. This design concentrates all learnable capacity on feature encoding and provides inherent, dual-scale explainability from the patch to the region level. To validate the framework's flexibility, we trained two specialized models on the BraTS dataset: one for node-level classification and one for tumor proportion regression. Both models achieved strong performance, with the classification model achieving an F1-score of 0.875 and the regression model an MAE of 0.028, confirming the encoder's ability to learn discriminative and localized features. Our results establish that a graph-based, encoder-only paradigm offers an accurate and inherently interpretable alternative for 3D medical image representation.
|
https://arxiv.org/abs/2601.14055
|
Academic Papers
|
svg
|
2978efb84dd0c2083a3bb76157bf4260b3993d125d216967de00fa0bfb8a88ce
|
2026-01-21T00:00:00-05:00
|
POCI-Diff: Position Objects Consistently and Interactively with 3D-Layout Guided Diffusion
|
arXiv:2601.14056v1 Announce Type: new Abstract: We propose a diffusion-based approach for Text-to-Image (T2I) generation with consistent and interactive 3D layout control and editing. While prior methods improve spatial adherence using 2D cues or iterative copy-warp-paste strategies, they often distort object geometry and fail to preserve consistency across edits. To address these limitations, we introduce a framework for Positioning Objects Consistently and Interactively (POCI-Diff), a novel formulation for jointly enforcing 3D geometric constraints and instance-level semantic binding within a unified diffusion process. Our method enables explicit per-object semantic control by binding individual text descriptions to specific 3D bounding boxes through Blended Latent Diffusion, allowing one-shot synthesis of complex multi-object scenes. We further propose a warping-free generative editing pipeline that supports object insertion, removal, and transformation via regeneration rather than pixel deformation. To preserve object identity and consistency across edits, we condition the diffusion process on reference images using IP-Adapter, enabling coherent object appearance throughout interactive 3D editing while maintaining global scene coherence. Experimental results demonstrate that POCI-Diff produces high-quality images consistent with the specified 3D layouts and edits, outperforming state-of-the-art methods in both visual fidelity and layout adherence while eliminating warping-induced geometric artifacts.
|
https://arxiv.org/abs/2601.14056
|
Academic Papers
|
svg
|
1cc628375b75b5e3be6937884a577ec746e9633ad183a511bcfefec2cf741b84
|
2026-01-21T00:00:00-05:00
|
Verifying Floating-Point Programs in Stainless
|
arXiv:2601.14059v1 Announce Type: new Abstract: We extend the Stainless deductive verifier with floating-point support, providing the first automated verification support for floating-point numbers for a subset of Scala that includes polymorphism, recursion and higher-order functions. We follow the recent approach in the KeY verifier to axiomatise reasoning about mathematical functions, but go further by supporting all functions from Scala's math API, and by verifying the correctness of the axioms against the actual implementation in Stainless itself. We validate Stainless' floating-point support on a new set of benchmarks sampled from real-world code from GitHub, showing that it can verify specifications about, e.g., ranges of output or absence of special values for most supported functions, or produce counter-examples when the specifications do not hold.
|
https://arxiv.org/abs/2601.14059
|
Academic Papers
|
svg
|
d1708d93bf522dc25a2433edadfaa94f2932b6a02222c6fdba908f7bff300d7f
|
2026-01-21T00:00:00-05:00
|
Fine-Grained Zero-Shot Composed Image Retrieval with Complementary Visual-Semantic Integration
|
arXiv:2601.14060v1 Announce Type: new Abstract: Zero-shot composed image retrieval (ZS-CIR) is a rapidly growing area with significant practical applications, allowing users to retrieve a target image by providing a reference image and a relative caption describing the desired modifications. Existing ZS-CIR methods often struggle to capture fine-grained changes and integrate visual and semantic information effectively. They primarily rely on either transforming the multimodal query into a single text using image-to-text models or employing large language models for target image description generation, approaches that often fail to capture complementary visual information and complete semantic context. To address these limitations, we propose a novel Fine-Grained Zero-Shot Composed Image Retrieval method with Complementary Visual-Semantic Integration (CVSI). Specifically, CVSI leverages three key components: (1) Visual Information Extraction, which not only extracts global image features but also uses a pre-trained mapping network to convert the image into a pseudo token, combining it with the modification text and the objects most likely to be added. (2) Semantic Information Extraction, which involves using a pre-trained captioning model to generate multiple captions for the reference image, followed by leveraging an LLM to generate the modified captions and the objects most likely to be added. (3) Complementary Information Retrieval, which integrates information extracted from both the query and database images to retrieve the target image, enabling the system to efficiently handle retrieval queries in a variety of situations. Extensive experiments on three public datasets (e.g., CIRR, CIRCO, and FashionIQ) demonstrate that CVSI significantly outperforms existing state-of-the-art methods. Our code is available at https://github.com/yyc6631/CVSI.
|
https://arxiv.org/abs/2601.14060
|
Academic Papers
|
svg
|
008f05092ca9a73b153928fa14913cc0d244f70fc7ee6fc6b94c6f0e7481693f
|
2026-01-21T00:00:00-05:00
|
XCR-Bench: A Multi-Task Benchmark for Evaluating Cultural Reasoning in LLMs
|
arXiv:2601.14063v1 Announce Type: new Abstract: Cross-cultural competence in large language models (LLMs) requires the ability to identify Culture-Specific Items (CSIs) and to adapt them appropriately across cultural contexts. Progress in evaluating this capability has been constrained by the scarcity of high-quality CSI-annotated corpora with parallel cross-cultural sentence pairs. To address this limitation, we introduce XCR-Bench, a Cross(X)-Cultural Reasoning Benchmark consisting of 4.9k parallel sentences and 1,098 unique CSIs, spanning three distinct reasoning tasks with corresponding evaluation metrics. Our corpus integrates Newmark's CSI framework with Hall's Triad of Culture, enabling systematic analysis of cultural reasoning beyond surface-level artifacts and into semi-visible and invisible cultural elements such as social norms, beliefs, and values. Our findings show that state-of-the-art LLMs exhibit consistent weaknesses in identifying and adapting CSIs related to social etiquette and cultural reference. Additionally, we find evidence that LLMs encode regional and ethno-religious biases even within a single linguistic setting during cultural adaptation. We release our corpus and code to facilitate future research on cross-cultural NLP.
|
https://arxiv.org/abs/2601.14063
|
Academic Papers
|
svg
|
3417b52b6f2fd68b78dd3cb56503909ee671835f8edffc914ff2bda837293461
|
2026-01-21T00:00:00-05:00
|
VERIDAH: Solving Enumeration Anomaly Aware Vertebra Labeling across Imaging Sequences
|
arXiv:2601.14066v1 Announce Type: new Abstract: The human spine commonly consists of seven cervical, twelve thoracic, and five lumbar vertebrae. However, enumeration anomalies may result in individuals having eleven or thirteen thoracic vertebrae and four or six lumbar vertebrae. Although the identification of enumeration anomalies has potential clinical implications for chronic back pain and operation planning, the thoracolumbar junction is often poorly assessed and rarely described in clinical reports. Additionally, even though multiple deep-learning-based vertebra labeling algorithms exist, there is a lack of methods to automatically label enumeration anomalies. Our work closes that gap by introducing "Vertebra Identification with Anomaly Handling" (VERIDAH), a novel vertebra labeling algorithm based on multiple classification heads combined with a weighted vertebra sequence prediction algorithm. We show that our approach surpasses existing models on T2w TSE sagittal (98.30% vs. 94.24% of subjects with all vertebrae correctly labeled, p < 0.001) and CT imaging (99.18% vs. 77.26% of subjects with all vertebrae correctly labeled, p < 0.001) and works in arbitrary field-of-view images. VERIDAH correctly labeled the presence of thoracic enumeration anomalies in 87.80% and 96.30% of T2w and CT images, respectively, and lumbar enumeration anomalies in 94.48% and 97.22% for T2w and CT, respectively. Our code and models are available at: https://github.com/Hendrik-code/spineps.
|
https://arxiv.org/abs/2601.14066
|
Academic Papers
|
svg
|
2906d6d6e410c56be6475819cf34d36c1429a0c47525f6032f0f4316451c8d6c
|
2026-01-21T00:00:00-05:00
|
Modular Attractor Acceleration in Infinite-State Games (Full Version)
|
arXiv:2601.14068v1 Announce Type: new Abstract: Infinite-state games provide a framework for the synthesis of reactive systems with unbounded data domains. Solving such games typically relies on computing symbolic fixpoints, particularly symbolic attractors. However, these computations may not terminate, and while recent acceleration techniques have been proposed to address this issue, they often rely on acceleration arguments of limited expressiveness. In this work, we propose an approach for the modular computation of acceleration arguments. It enables the construction of complex acceleration arguments by composing simpler ones, thereby improving both scalability and flexibility. In addition, we introduce a summarization technique that generalizes discovered acceleration arguments, allowing them to be efficiently reused across multiple contexts. Together, these contributions improve the efficiency of solving infinite-state games in reactive synthesis, as demonstrated by our experimental evaluation.
|
https://arxiv.org/abs/2601.14068
|
Academic Papers
|
svg
|
fbc3f2a17e60577a0fb0d5f3108b864952eb01e09b64f0d44fd8313959c51659
|
2026-01-21T00:00:00-05:00
|
Unsupervised Video Class-Incremental Learning via Deep Embedded Clustering Management
|
arXiv:2601.14069v1 Announce Type: new Abstract: Unsupervised video class-incremental learning (uVCIL) represents an important learning paradigm for learning video information without forgetting, and without considering any data labels. Prior approaches have focused on supervised class-incremental learning, relying on knowledge of labels and task boundaries, which is costly, requires human annotation, or is simply not a realistic option. In this paper, we propose a simple yet effective approach to address uVCIL. We first consider a deep feature extractor network, providing a set of representative video features during each task without assuming any class or task information. We then progressively build a series of deep clusters from the extracted features. During successive task learning, the model updated from the previous task is used as an initial state in order to transfer knowledge to the current learning task. We perform in-depth evaluations on three standard video action recognition datasets, including UCF101, HMDB51, and Something-Something V2, by ignoring the labels from the supervised setting. Our approach significantly outperforms other baselines on all datasets.
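The task-by-task clustering idea can be pictured with a small sketch. The snippet below is a loose, assumption-laden simplification rather than the paper's method: it uses plain k-means from scikit-learn instead of deep embedded clustering, and the warm-start scheme and per-task cluster budget are illustrative choices.

```python
# Loose sketch (an assumption-laden simplification, not the paper's method):
# label-free clustering across sequential tasks, warm-starting each task's
# k-means from the previous task's centroids and adding a few new clusters.
import numpy as np
from sklearn.cluster import KMeans

def incremental_clusters(task_features, new_clusters_per_task=5, seed=0):
    """task_features: list of (n_i, d) arrays, one per sequentially seen task."""
    rng = np.random.default_rng(seed)
    centroids = None
    for feats in task_features:
        # seed a few fresh centroids from the current task's (unlabeled) features
        extra = feats[rng.choice(len(feats), new_clusters_per_task, replace=False)]
        init = extra if centroids is None else np.vstack([centroids, extra])
        km = KMeans(n_clusters=len(init), init=init, n_init=1).fit(feats)
        centroids = km.cluster_centers_       # carried over to the next task
    return centroids

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tasks = [rng.standard_normal((200, 32)) + i for i in range(3)]  # 3 toy tasks
    print(incremental_clusters(tasks).shape)  # (15, 32): 5 clusters per task
```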
|
https://arxiv.org/abs/2601.14069
|
Academic Papers
|
svg
|
c470f75d4a45bc35bcadbbc2500f384d476be8437f5d6543118bafc81f85a542
|
2026-01-21T00:00:00-05:00
|
On the optimal shape parameter for kernel methods: Sharp direct and inverse statements
|
arXiv:2601.14070v1 Announce Type: new Abstract: The search for the optimal shape parameter for Radial Basis Function (RBF) kernel approximation has been an outstanding research problem for decades. In this work, we establish a theoretical framework for this problem by leveraging a recently established theory on sharp direct, inverse and saturation statements for kernel based approximation. In particular, we link the search for the optimal shape parameter to superconvergence phenomena. Our analysis is carried out for finitely smooth Sobolev kernels, thereby covering large classes of radial kernels used in practice, including those emerging from current machine-learning methodologies. Our results elucidate how approximation regimes, kernel regularity, and parameter choices interact, thereby clarifying a question that has remained unresolved for decades.
|
https://arxiv.org/abs/2601.14070
|
Academic Papers
|
svg
|
d4c8ce6dfae2d4266466b868019e0cb06556d3e685358c24ad40f98c4e867fd2
|
2026-01-21T00:00:00-05:00
|
Utilizing the Perceived Age to Maximize Freshness in Query-Based Update Systems
|
arXiv:2601.14075v1 Announce Type: new Abstract: Query-based sampling has become an increasingly popular technique for monitoring Markov sources in pull-based update systems. However, most of the contemporary literature on this topic assumes an exponential distribution for query delay and often relies on the assumption that the feedback or replies to the queries are instantaneous. In this work, we relax both of these assumptions and find optimal sampling policies for monitoring continuous-time Markov chains (CTMCs) under generic delay distributions. In particular, we show that one can obtain significant gains in terms of mean binary freshness (MBF) by employing a waiting-based strategy for query-based sampling.
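To make the waiting idea concrete, here is a small Monte Carlo sketch (not taken from the paper): a two-state CTMC is monitored by sampling either immediately when a query arrives or a fixed time after it, and the fraction of time the monitor's estimate matches the true state serves as a stand-in for mean binary freshness. The rates, the non-exponential query delay, and the waiting time are illustrative choices, and whether waiting pays off depends on them.

```python
# Minimal Monte Carlo sketch (not from the paper): a two-state CTMC monitored
# under query-based sampling, comparing "sample as soon as a query arrives"
# against a waiting-based policy that samples a fixed time w after each query.
import random

def simulate_mbf(wait, lam01=0.2, lam10=1.0, horizon=5000.0, dt=0.01, seed=0):
    rng = random.Random(seed)
    state, estimate = 0, 0
    next_query = rng.uniform(0.5, 3.5)      # generic (here: uniform) query delay
    pending_sample = None                   # time at which the next sample is taken
    fresh_time, t = 0.0, 0.0
    while t < horizon:
        rate = lam01 if state == 0 else lam10
        if rng.random() < rate * dt:        # CTMC transition in this small step
            state = 1 - state
        if t >= next_query:                 # query arrives: schedule a sample
            pending_sample = t + wait
            next_query = t + rng.uniform(0.5, 3.5)
        if pending_sample is not None and t >= pending_sample:
            estimate = state                # take the scheduled sample
            pending_sample = None
        fresh_time += dt * (estimate == state)
        t += dt
    return fresh_time / horizon             # empirical mean binary freshness

if __name__ == "__main__":
    print("MBF, sample on query:", round(simulate_mbf(wait=0.0), 3))
    print("MBF, wait w = 2.0   :", round(simulate_mbf(wait=2.0), 3))
```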
|
https://arxiv.org/abs/2601.14075
|
Academic Papers
|
svg
|
c0713323f7554511ba5732702f7850c3a317e4c4db10bfa5b3d6423bddf36eb5
|
2026-01-21T00:00:00-05:00
|
From Trees to Tree-Like: Distribution and Synthesis for Asynchronous Automata
|
arXiv:2601.14078v1 Announce Type: new Abstract: We revisit constructions for distribution and synthesis of Zielonka's asynchronous automata in restricted settings. We show first a simple, quadratic, distribution construction for asynchronous automata, where the process architecture is tree-like. An architecture is tree-like if there is an underlying spanning tree of the architecture and communications are local on the tree. This quadratic distribution result generalizes the known construction for tree architectures and improves on an older, exponential construction for triangulated dependence alphabets. Lastly we consider the problem of distributed controller synthesis and show that it is decidable for tree-like architectures. This extends the decidability boundary from tree architectures to tree-like keeping the same $\text{Tower}_d(n)$ complexity bound, where $n$ is the size of the system and $d \ge 0$ the depth of the process tree.
|
https://arxiv.org/abs/2601.14078
|
Academic Papers
|
svg
|
f5bb7c5fa5de32e4eb0d761e2a0399669fc520b8b907cf33634c56228153252d
|
2026-01-21T00:00:00-05:00
|
VENI: Variational Encoder for Natural Illumination
|
arXiv:2601.14079v1 Announce Type: new Abstract: Inverse rendering is an ill-posed problem, but priors, such as illumination priors, can simplify it. Existing work either disregards the spherical and rotation-equivariant nature of illumination environments or does not provide a well-behaved latent space. We propose a rotation-equivariant variational autoencoder that models natural illumination on the sphere without relying on 2D projections. To preserve the SO(2)-equivariance of environment maps, we use a novel Vector Neuron Vision Transformer (VN-ViT) as encoder and a rotation-equivariant conditional neural field as decoder. In the encoder, we reduce the equivariance from SO(3) to SO(2) using a novel SO(2)-equivariant fully connected layer, an extension of Vector Neurons. We show that our SO(2)-equivariant fully connected layer outperforms standard Vector Neurons when used in our SO(2)-equivariant model. Compared to previous methods, our variational autoencoder enables smoother interpolation in latent space and offers a better-behaved latent space.
|
https://arxiv.org/abs/2601.14079
|
Academic Papers
|
svg
|
9f6f08dbf9edb5546b7bb637ff252cbd595ccbdc9d0bc26ea30c3efb828edd14
|
2026-01-21T00:00:00-05:00
|
Feature-Aware Test Generation for Deep Learning Models
|
arXiv:2601.14081v1 Announce Type: new Abstract: As deep learning models are widely used in software systems, test generation plays a crucial role in assessing the quality of such models before deployment. To date, the most advanced test generators rely on generative AI to synthesize inputs; however, these approaches remain limited in providing semantic insight into the causes of misbehaviours and in offering fine-grained semantic controllability over the generated inputs. In this paper, we introduce Detect, a feature-aware test generation framework for vision-based deep learning (DL) models that systematically generates inputs by perturbing disentangled semantic attributes within the latent space. Detect perturbs individual latent features in a controlled way and observes how these changes affect the model's output. Through this process, it identifies which features lead to behavior shifts and uses a vision-language model for semantic attribution. By distinguishing between task-relevant and irrelevant features, Detect applies feature-aware perturbations targeted for both generalization and robustness. Empirical results across image classification and detection tasks show that Detect generates high-quality test cases with fine-grained control, reveals distinct shortcut behaviors across model architectures (convolutional and transformer-based), and uncovers bugs that are not captured by accuracy metrics. Specifically, Detect outperforms a state-of-the-art test generator in decision boundary discovery and a leading spurious feature localization method in identifying robustness failures. Our findings show that fully fine-tuned convolutional models are prone to overfitting on localized cues, such as co-occurring visual traits, while weakly supervised transformers tend to rely on global features, such as environmental variances. These findings highlight the value of interpretable and feature-aware testing in improving DL model reliability.
|
https://arxiv.org/abs/2601.14081
|
Academic Papers
|
svg
|
a0bd1cbaa2c97326f9d78744622c55fe500ae44a97588b448f355ed7bdcc19dd
|
2026-01-21T00:00:00-05:00
|
DermaBench: A Clinician-Annotated Benchmark Dataset for Dermatology Visual Question Answering and Reasoning
|
arXiv:2601.14084v1 Announce Type: new Abstract: Vision-language models (VLMs) are increasingly important in medical applications; however, their evaluation in dermatology remains limited by datasets that focus primarily on image-level classification tasks such as lesion recognition. While valuable for recognition, such datasets cannot assess the full visual understanding, language grounding, and clinical reasoning capabilities of multimodal models. Visual question answering (VQA) benchmarks are required to evaluate how models interpret dermatological images, reason over fine-grained morphology, and generate clinically meaningful descriptions. We introduce DermaBench, a clinician-annotated dermatology VQA benchmark built on the Diverse Dermatology Images (DDI) dataset. DermaBench comprises 656 clinical images from 570 unique patients spanning Fitzpatrick skin types I-VI. Using a hierarchical annotation schema with 22 main questions (single-choice, multi-choice, and open-ended), expert dermatologists annotated each image for diagnosis, anatomic site, lesion morphology, distribution, surface features, color, and image quality, together with open-ended narrative descriptions and summaries, yielding approximately 14,474 VQA-style annotations. DermaBench is released as a metadata-only dataset to respect upstream licensing and is publicly available at Harvard Dataverse.
|
https://arxiv.org/abs/2601.14084
|
Academic Papers
|
svg
|
5c4e3833f480637ed1c9b5a5c4babe60a8db39bfde9bfe883136eaa8f52af6dc
|
2026-01-21T00:00:00-05:00
|
Two-Stream temporal transformer for video action classification
|
arXiv:2601.14086v1 Announce Type: new Abstract: Motion representation plays an important role in video understanding and has many applications, including action recognition, robot and autonomous guidance, and others. Lately, transformer networks, through their self-attention mechanisms, have proved their efficiency in many applications. In this study, we introduce a new two-stream transformer video classifier, which extracts spatio-temporal information from frame content and from optical flow representing movement information. The proposed model identifies self-attention features across the joint optical flow and temporal frame domain and represents their relationships within the transformer encoder mechanism. The experimental results show that our proposed methodology provides excellent classification results on three well-known video datasets of human activities.
|
https://arxiv.org/abs/2601.14086
|
Academic Papers
|
svg
|
8c70974cf5e529b87f26cc7897c3d6a25024cfc3e43aaa941692ad5c149069c8
|
2026-01-21T00:00:00-05:00
|
'1'-bit Count-based Sorting Unit to Reduce Link Power in DNN Accelerators
|
arXiv:2601.14087v1 Announce Type: new Abstract: Interconnect power consumption remains a bottleneck in Deep Neural Network (DNN) accelerators. While ordering data based on '1'-bit counts can mitigate this via reduced switching activity, practical hardware sorting implementations remain underexplored. This work proposes the hardware implementation of a comparison-free sorting unit optimized for Convolutional Neural Networks (CNNs). By leveraging approximate computing to group population counts into coarse-grained buckets, our design achieves hardware area reductions while preserving the link power benefits of data reordering. Our approximate sorting unit achieves up to 35.4% area reduction while maintaining a 19.50% BT reduction, compared to the 20.42% of the precise implementation.
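The bucketing idea can be sketched in a few lines of Python. This is a behavioral illustration only, not the proposed RTL design: words are grouped into coarse population-count buckets before transmission, and the bit transitions between consecutive words are compared against the original order. The zero-heavy, activation-like data distribution is an assumption chosen to make the effect visible.

```python
# Behavioral illustration only (not the proposed hardware sorting unit):
# order a block of words by a coarse '1'-bit-count bucket before sending it
# over a link, and compare the bit transitions (toggles) against the original order.
import random

WORD_BITS = 8

def toggles(words):
    # total number of bit flips between consecutive words on the link
    return sum(bin(a ^ b).count("1") for a, b in zip(words, words[1:]))

def bucket_order_by_popcount(words, bucket_width=2):
    # approximate, comparison-free sorting: map each popcount to a coarse bucket
    # and keep the original order inside each bucket
    buckets = [[] for _ in range(WORD_BITS // bucket_width + 1)]
    for w in words:
        buckets[bin(w).count("1") // bucket_width].append(w)
    return [w for b in buckets for w in b]

if __name__ == "__main__":
    rng = random.Random(0)
    # sparse, ReLU-like 8-bit activations: many zeros, otherwise arbitrary values
    block = [0 if rng.random() < 0.6 else rng.randrange(1, 256) for _ in range(512)]
    print("toggles, original order :", toggles(block))
    print("toggles, bucketed order :", toggles(bucket_order_by_popcount(block)))
```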
|
https://arxiv.org/abs/2601.14087
|
Academic Papers
|
svg
|
4496a3df95bf3ed70546f22f350fb35fb465ee01640fc7be9bffd0674d20441f
|
2026-01-21T00:00:00-05:00
|
Near Optimal Code Construction for the Adversarial Torn Paper Channel With Edit Errors
|
arXiv:2601.14088v1 Announce Type: new Abstract: Motivated by DNA storage systems and 3D fingerprinting, this work studies the adversarial torn paper channel with edit errors. This channel first applies at most $t_e$ edit errors (i.e., insertions, deletions, and substitutions) to the transmitted word and then breaks it into $t+1$ fragments at arbitrary positions. In this paper, we construct a near optimal error correcting code for this channel, which will be referred to as a $t$-breaks $t_e$-edit-errors resilient code. This code enables reconstructing the transmitted codeword from the $t+1$ noisy fragments. Moreover, we study list decoding of the torn paper channel by deriving bounds on the size of the list (of codewords) obtained from cutting a codeword of a $t$-breaks resilient code $t'$ times, where $t' > t$.
|
https://arxiv.org/abs/2601.14088
|
Academic Papers
|
svg
|
67fda2380976241c9fcbb23859f108f020094d940167da8bcb8755b45436c5da
|
2026-01-21T00:00:00-05:00
|
Data-Driven Safe Output Regulation of Strict-Feedback Linear Systems with Input Delay
|
arXiv:2601.14089v1 Announce Type: new Abstract: This paper develops a data-driven safe control framework for linear systems possessing a known strict-feedback structure, but with most plant parameters, external disturbances, and input delay being unknown. By leveraging Koopman operator theory, we utilize Krylov dynamic mode decomposition (DMD) to extract the system dynamics from measured data, enabling the reconstruction of the system and disturbance matrices. Concurrently, the batch least-squares identification (BaLSI) method is employed to identify other unknown parameters in the input channel. Using control barrier functions (CBFs) and backstepping, we first develop a full-state safe controller. Based on this, we build an output-feedback controller by performing system identification using only the output data and actuation signals as well as constructing an observer to estimate the unmeasured plant states. The proposed approach achieves: 1) finite-time identification of a substantial set of unknown system quantities, and 2) exponential convergence of the output state (the state furthest from the control input) to a reference trajectory while rigorously ensuring safety constraints. The effectiveness of the proposed method is demonstrated through a safe vehicle platooning application.
|
https://arxiv.org/abs/2601.14089
|
Academic Papers
|
svg
|
b1663680c090ca606ac3ed62ad462247605135b52da76cf958f21a2b4b6be96f
|
2026-01-21T00:00:00-05:00
|
Zero-shot adaptable task planning for autonomous construction robots: a comparative study of lightweight single and multi-AI agent systems
|
arXiv:2601.14091v1 Announce Type: new Abstract: Robots are expected to play a major role in the future construction industry but face challenges due to high costs and difficulty adapting to dynamic tasks. This study explores the potential of foundation models to enhance the adaptability and generalizability of task planning in construction robots. Four models are proposed and implemented using lightweight, open-source large language models (LLMs) and vision language models (VLMs). These models include one single agent and three multi-agent teams that collaborate to create robot action plans. The models are evaluated across three construction roles: Painter, Safety Inspector, and Floor Tiling. Results show that the four-agent team outperforms the state-of-the-art GPT-4o in most metrics while being ten times more cost-effective. Additionally, teams with three and four agents demonstrate improved generalizability. By discussing how agent behaviors influence outputs, this study enhances the understanding of AI teams and supports future research in diverse unstructured environments beyond construction.
|
https://arxiv.org/abs/2601.14091
|
Academic Papers
|
svg
|
7af8383381a4794451ca9d04c8be7a17e2b764d14638ea0606e03541a2b5029e
|
2026-01-21T00:00:00-05:00
|
Optimizing Energy and Data Collection in UAV-aided IoT Networks using Attention-based Multi-Objective Reinforcement Learning
|
arXiv:2601.14092v1 Announce Type: new Abstract: Due to their adaptability and mobility, Unmanned Aerial Vehicles (UAVs) are becoming increasingly essential for wireless network services, particularly for data harvesting tasks. In this context, Artificial Intelligence (AI)-based approaches have gained significant attention for addressing UAV path planning tasks in large and complex environments, bridging the gap with real-world deployments. However, many existing algorithms suffer from limited training data, which hampers their performance in highly dynamic environments. Moreover, they often overlook the inherently multi-objective nature of the task, treating it in an overly simplistic manner. To address these limitations, we propose an attention-based Multi-Objective Reinforcement Learning (MORL) architecture that explicitly handles the trade-off between data collection and energy consumption in urban environments, even without prior knowledge of wireless channel conditions. Our method develops a single model capable of adapting to varying trade-off preferences and dynamic scenario parameters without the need for fine-tuning or retraining. Extensive simulations show that our approach achieves substantial improvements in performance, model compactness, sample efficiency, and most importantly, generalization to previously unseen scenarios, outperforming existing RL solutions.
|
https://arxiv.org/abs/2601.14092
|
Academic Papers
|
svg
|
f46055e74c200ba41443aa40771f6f588e80c471b0b673e9c70af913eb5e76db
|
2026-01-21T00:00:00-05:00
|
Remapping and navigation of an embedding space via error minimization: a fundamental organizational principle of cognition in natural and artificial systems
|
arXiv:2601.14096v1 Announce Type: new Abstract: The emerging field of diverse intelligence seeks an integrated view of problem-solving in agents of very different provenance, composition, and substrates. From subcellular chemical networks to swarms of organisms, and across evolved, engineered, and chimeric systems, it is hypothesized that scale-invariant principles of decision-making can be discovered. We propose that cognition in both natural and synthetic systems can be characterized and understood by the interplay between two equally important invariants: (1) the remapping of embedding spaces, and (2) the navigation within these spaces. Biological collectives, from single cells to entire organisms (and beyond), remap transcriptional, morphological, physiological, or 3D spaces to maintain homeostasis and regenerate structure, while navigating these spaces through distributed error correction. Modern Artificial Intelligence (AI) systems, including transformers, diffusion models, and neural cellular automata enact analogous processes by remapping data into latent embeddings and refining them iteratively through contextualization. We argue that this dual principle - remapping and navigation of embedding spaces via iterative error minimization - constitutes a substrate-independent invariant of cognition. Recognizing this shared mechanism not only illuminates deep parallels between living systems and artificial models, but also provides a unifying framework for engineering adaptive intelligence across scales.
|
https://arxiv.org/abs/2601.14096
|
Academic Papers
|
svg
|
e02fb0775b0baf53439d5b22977ec24d2a490569486db03a20732492730b4975
|
2026-01-21T00:00:00-05:00
|
A flexible language model-assisted electronic design automation framework
|
arXiv:2601.14098v1 Announce Type: new Abstract: Large language models (LLMs) are transforming electronic design automation (EDA) by enhancing design stages such as schematic design, simulation, netlist synthesis, and place-and-route. Existing methods primarily confine these optimisations to isolated open-source EDA tools and often lack the flexibility to handle multiple domains, such as analogue, digital, and radio-frequency design. In contrast, modern systems must interface with commercial EDA environments, adhere to tool-specific operation rules, and incorporate feedback from design outcomes while supporting diverse design flows. We propose a versatile framework that uses LLMs to generate files compatible with commercial EDA tools and optimise designs using power-performance-area reports. This is accomplished by guiding the LLMs with tool constraints and feedback from design outputs to meet tool requirements and user specifications. Case studies on operational transconductance amplifiers, microstrip patch antennas, and FPGA circuits show that the framework is effective as an EDA-aware assistant, handling diverse design challenges reliably.
|
https://arxiv.org/abs/2601.14098
|
Academic Papers
|
svg
|
51189015b972b9b2814036b7648321a514119af1db6fe584717adb4b1893fc69
|
2026-01-21T00:00:00-05:00
|
Causal feature selection framework for stable soft sensor modeling based on time-delayed cross mapping
|
arXiv:2601.14099v1 Announce Type: new Abstract: Soft sensor modeling plays a crucial role in process monitoring. Causal feature selection can enhance the performance of soft sensor models in industrial applications. However, existing methods ignore two critical characteristics of industrial processes. Firstly, causal relationships between variables always involve time delays, whereas most causal feature selection methods investigate causal relationships in the same time dimension. Secondly, variables in industrial processes are often interdependent, which contradicts the decorrelation assumption of traditional causal inference methods. Consequently, soft sensor models based on existing causal feature selection approaches often lack sufficient accuracy and stability. To overcome these challenges, this paper proposes a causal feature selection framework based on time-delayed cross mapping. Time-delayed cross mapping employs state space reconstruction to effectively handle interdependent variables in causality analysis, and considers varying causal strength across time delays. Time-delayed convergent cross mapping (TDCCM) is introduced for total causal inference, and time-delayed partial cross mapping (TDPCM) is developed for direct causal inference. Then, in order to achieve automatic feature selection, an objective feature selection strategy is presented. The causal threshold is automatically determined based on the model performance on the validation set, and the causal features are then selected. Two real-world case studies show that TDCCM achieves the highest average performance, while TDPCM improves soft sensor stability and performance in the worst scenario. The code is publicly available at https://github.com/dirge1/TDPCM.
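The following sketch illustrates the general flavour of time-delayed cross mapping on a toy coupled system; it is not the authors' TDCCM/TDPCM code, and the embedding parameters, neighbour weighting, and coupled logistic maps are illustrative assumptions. The cross-map skill is scanned over candidate lags, and for a causal link it should peak at or just above the true coupling delay.

```python
# Illustrative sketch of time-delayed cross mapping on a toy coupled system
# (not the authors' TDCCM/TDPCM implementation).
import numpy as np

def delay_embed(y, dim=3, tau=1):
    # state space reconstruction: rows are (y[t-2*tau], y[t-tau], y[t]) for dim=3
    n = len(y) - (dim - 1) * tau
    return np.stack([y[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

def cross_map_skill(x, y, lag, dim=3, tau=1):
    """Skill (correlation) of predicting x(t - lag) from the delay embedding of y."""
    emb = delay_embed(y, dim, tau)
    times = np.arange(len(emb)) + (dim - 1) * tau     # time index of each embedding
    keep = times - lag >= 0
    emb, times = emb[keep], times[keep]
    target = x[times - lag]
    preds = np.empty_like(target)
    for i in range(len(emb)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf
        nn = np.argsort(d)[:dim + 1]                  # dim+1 nearest neighbours
        w = np.exp(-d[nn] / (d[nn][0] + 1e-12))
        preds[i] = np.sum(w * target[nn]) / np.sum(w)
    return float(np.corrcoef(preds, target)[0, 1])

if __name__ == "__main__":
    # toy system: chaotic x drives y with a coupling delay of 3 steps
    n, delay = 500, 3
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = 0.4, 0.2
    for t in range(1, n):
        x[t] = x[t - 1] * (3.8 - 3.8 * x[t - 1])
        drive = x[t - delay] if t >= delay else 0.3
        y[t] = y[t - 1] * (3.5 - 3.5 * y[t - 1] - 0.1 * drive)
    skills = {lag: round(cross_map_skill(x, y, lag), 3) for lag in range(7)}
    print(skills)   # skill should peak at or just above the true coupling delay
```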
|
https://arxiv.org/abs/2601.14099
|
Academic Papers
|
svg
|
bb399b9197ecb5e75144fe15858d0cccdcc5bacc79fef9879a0d2dca737fff1a
|
2026-01-21T00:00:00-05:00
|
Curriculum-Based Strategies for Efficient Cross-Domain Action Recognition
|
arXiv:2601.14101v1 Announce Type: new Abstract: Despite significant progress in human action recognition, generalizing to diverse viewpoints remains a challenge. Most existing datasets are captured from ground-level perspectives, and models trained on them often struggle to transfer to drastically different domains such as aerial views. This paper examines how curriculum-based training strategies can improve generalization to unseen real aerial-view data without using any real aerial data during training. We explore curriculum learning for cross-view action recognition using two out-of-domain sources: synthetic aerial-view data and real ground-view data. Our evaluation of training order (fine-tuning on synthetic aerial data vs. real ground data) leads to two strategies that both end with fine-tuning on real ground data but differ in how they transition from synthetic to real. The first uses a two-stage curriculum with direct fine-tuning, while the second applies a progressive curriculum that expands the dataset in multiple stages before fine-tuning. We evaluate both methods on the REMAG dataset using SlowFast (CNN-based) and MViTv2 (Transformer-based) architectures. Results show that combining the two out-of-domain datasets clearly outperforms training on a single domain, whether real ground-view or synthetic aerial-view. Both curriculum strategies match the top-1 accuracy of simple dataset combination while offering efficiency gains. With the two-step fine-tuning method, SlowFast achieves up to a 37% reduction in iterations and MViTv2 up to a 30% reduction compared to simple combination. The multi-step progressive approach further reduces iterations, by up to 9% for SlowFast and 30% for MViTv2, relative to the two-step method. These findings demonstrate that curriculum-based training can maintain comparable performance (top-1 accuracy within 3% range) while improving training efficiency in cross-view action recognition.
|
https://arxiv.org/abs/2601.14101
|
Academic Papers
|
svg
|
1a9b91c33ea8c8d64bb1f8206d50014126e5580aade2050972a85b9695912802
|
2026-01-21T00:00:00-05:00
|
Interp3D: Correspondence-aware Interpolation for Generative Textured 3D Morphing
|
arXiv:2601.14103v1 Announce Type: new Abstract: Textured 3D morphing seeks to generate smooth and plausible transitions between two 3D assets, preserving both structural coherence and fine-grained appearance. This ability is crucial not only for advancing 3D generation research but also for practical applications in animation, editing, and digital content creation. Existing approaches either operate directly on geometry, limiting them to shape-only morphing while neglecting textures, or extend 2D interpolation strategies into 3D, which often causes semantic ambiguity, structural misalignment, and texture blurring. These challenges underscore the necessity to jointly preserve geometric consistency, texture alignment, and robustness throughout the transition process. To address this, we propose Interp3D, a novel training-free framework for textured 3D morphing. It harnesses generative priors and adopts a progressive alignment principle to ensure both geometric fidelity and texture coherence. Starting from semantically aligned interpolation in condition space, Interp3D enforces structural consistency via SLAT (Structured Latent)-guided structure interpolation, and finally transfers appearance details through fine-grained texture fusion. For comprehensive evaluations, we construct a dedicated dataset, Interp3DData, with graded difficulty levels and assess generation results from fidelity, transition smoothness, and plausibility. Both quantitative metrics and human studies demonstrate the significant advantages of our proposed approach over previous methods. Source code is available at https://github.com/xiaolul2/Interp3D.
|
https://arxiv.org/abs/2601.14103
|
Academic Papers
|
svg
|
ca46f23097f400cbfca780b4c25ca6c7e9a4e4d45f8ba6f804933ad9c79b84c1
|
2026-01-21T00:00:00-05:00
|
Diffusion-Guided Backdoor Attacks in Real-World Reinforcement Learning
|
arXiv:2601.14104v1 Announce Type: new Abstract: Backdoor attacks embed hidden malicious behaviors in reinforcement learning (RL) policies and activate them using triggers at test time. Most existing attacks are validated only in simulation, while their effectiveness in real-world robotic systems remains unclear. In physical deployment, safety-constrained control pipelines such as velocity limiting, action smoothing, and collision avoidance suppress abnormal actions, causing strong attenuation of conventional backdoor attacks. We study this previously overlooked problem and propose a diffusion-guided backdoor attack framework (DGBA) for real-world RL. We design small printable visual patch triggers placed on the floor and generate them using a conditional diffusion model that produces diverse patch appearances under real-world visual variations. We treat the robot control stack as a black-box system. We further introduce an advantage-based poisoning strategy that injects triggers only at decision-critical training states. We evaluate our method on a TurtleBot3 mobile robot and demonstrate reliable activation of targeted attacks while preserving normal task performance. Demo videos and code are available in the supplementary material.
|
https://arxiv.org/abs/2601.14104
|
Academic Papers
|
svg
|
f3d6c460bff79c93db431dde4cab73b05d3e7ee1fd9814194b62ca92bc4fdd1e
|
2026-01-21T00:00:00-05:00
|
Truth with a Twist: The Rhetoric of Persuasion in Professional vs. Community-Authored Fact-Checks
|
arXiv:2601.14105v1 Announce Type: new Abstract: This study presents the first large-scale comparison of persuasion techniques present in crowd- versus professionally-written debunks. Using extensive datasets from Community Notes (CNs), EUvsDisinfo, and the Database of Known Fakes (DBKF), we quantify the prevalence and types of persuasion techniques across these fact-checking ecosystems. Contrary to the prior hypothesis that community-produced debunks rely more heavily on subjective or persuasive wording, we find no evidence that CNs contain a higher average number of persuasion techniques than professional fact-checks. We additionally identify systematic rhetorical differences between CNs and professional debunking efforts, reflecting differences in institutional norms and topical coverage. Finally, we examine how the crowd evaluates persuasive language in CNs and show that, although notes with more persuasive elements receive slightly higher overall helpfulness ratings, crowd raters are effective at penalising the use of particular problematic rhetorical means.
|
https://arxiv.org/abs/2601.14105
|
Academic Papers
|
svg
|
4f54ef52377d21f6e57221ec548270ec18ecf41a39f51f3fa664723493cabfe2
|
2026-01-21T00:00:00-05:00
|
Communication Technologies for Intelligent Transportation Systems: From Railways to UAVs and Beyond
|
arXiv:2601.14106v1 Announce Type: new Abstract: This white paper aims to comprehensively analyze and consolidate the state of the art in communication technologies supporting modern and future Intelligent Transportation Systems (ITS). Its primary objective is to establish a common understanding of how communication solutions enable automation, safety, and efficiency across multiple transport domains, including railways, road vehicles, aircraft, and unmanned aerial vehicles. The document seeks to identify key communication requirements and technological enablers necessary for interoperable and reliable ITS operation. It also assesses the limitations of current systems and proposes pathways for integrating emerging technologies such as 5G, Sixth Generation (6G), and Artificial Intelligence (AI)-driven network control. The white paper also intends to support harmonization between different transport modes through a unified framework for communication modeling, testing, and standardization. It highlights the importance of accurate channel modeling and empirical validation to design efficient, robust, and scalable systems. Another objective is to explore the use of reconfigurable intelligent surfaces, integrated sensing and communication, and digital twin concepts within ITS. The document emphasizes the role of spectrum management and standardization efforts in ensuring interoperability among diverse communication systems. Finally, the paper seeks to stimulate collaboration among academia, industry, and standardization bodies to advance the design of resilient and adaptive communication infrastructures for future transportation systems.
|
https://arxiv.org/abs/2601.14106
|
Academic Papers
|
svg
|
918a36310b9d74beea1de97ddd86ca044a8b37cf544389a6febd48dd240f4c71
|
2026-01-21T00:00:00-05:00
|
AttackMate: Realistic Emulation and Automation of Cyber Attack Scenarios Across the Kill Chain
|
arXiv:2601.14108v1 Announce Type: new Abstract: Adversary emulation tools facilitate scripting and automated execution of cyber attack chains, thereby reducing costs and manual expert effort required for security testing, cyber exercises, and intrusion detection research. However, because existing tools typically rely on agents installed on target systems, they leave suspicious traces that make it easy to distinguish their activities from those of real human attackers. Moreover, these tools often lack relevant capabilities, such as handling of interactive prompts, and are unsuitable for emulating specific stages of the kill chain, such as initial access. This paper thus introduces AttackMate, an open-source attack scripting language and execution engine designed to mimic behavior patterns of actual attackers. We validate the tool in a case study covering common attack steps including privilege escalation, information gathering, and lateral movement. Our results indicate that log artifacts resulting from AttackMate's activities resemble those produced by human attackers more closely than those generated by standard adversary emulation tools.
|
https://arxiv.org/abs/2601.14108
|
Academic Papers
|
svg
|
009942a658b1151765788cdcb4fc0bb4b757bf2ef74d77a4be20d6978c8a410c
|
2026-01-21T00:00:00-05:00
|
TLSQL: Table Learning Structured Query Language
|
arXiv:2601.14109v1 Announce Type: new Abstract: Table learning, which lies at the intersection of machine learning and modern database systems, has recently attracted growing attention. However, existing frameworks typically require explicit data export and extensive feature engineering, creating a high barrier for database practitioners. We present TLSQL (Table Learning Structured Query Language), a system that enables table learning directly over relational databases via SQL-like declarative specifications. TLSQL is implemented as a lightweight Python library that translates these specifications into standard SQL queries and structured learning task descriptions. The generated SQL queries are executed natively by the database engine, while the task descriptions are consumed by downstream table learning frameworks. This design allows users to focus on modeling and analysis rather than low-level data preparation and pipeline orchestration. Experiments on real-world datasets demonstrate that TLSQL effectively lowers the barrier to integrating machine learning into database-centric workflows. Our code is available at https://github.com/rllmproject/tlsql/.
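As a rough illustration of the declarative idea, the sketch below turns a small spec into a standard SQL query for the database engine plus a task description for a downstream table-learning framework. The spec fields, class names, and generated SQL are hypothetical and do not reflect TLSQL's actual syntax or API.

```python
# Hypothetical sketch of a declarative table-learning spec (not TLSQL's API):
# translate a spec into (a) a SQL query executed natively by the database and
# (b) a structured task description for a downstream table-learning framework.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LearningSpec:
    table: str
    target: str
    features: List[str] = field(default_factory=list)
    task: str = "classification"
    where: Optional[str] = None

def to_sql(spec: LearningSpec) -> str:
    cols = ", ".join(spec.features + [spec.target])
    sql = f"SELECT {cols} FROM {spec.table}"
    if spec.where:
        sql += f" WHERE {spec.where}"
    return sql + ";"

def to_task_description(spec: LearningSpec) -> dict:
    return {"task": spec.task, "target": spec.target, "features": spec.features}

if __name__ == "__main__":
    spec = LearningSpec(table="customers", target="churned",
                        features=["age", "plan", "monthly_spend"],
                        where="signup_year >= 2020")
    print(to_sql(spec))               # query run by the database engine
    print(to_task_description(spec))  # consumed by the table-learning backend
```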
|
https://arxiv.org/abs/2601.14109
|
Academic Papers
|
svg
|
1185a04780975e8cf62b3a54e86eee9823890828b7354a29110588c3b2650de7
|
2026-01-21T00:00:00-05:00
|
PMCE: Probabilistic Multi-Granularity Semantics with Caption-Guided Enhancement for Few-Shot Learning
|
arXiv:2601.14111v1 Announce Type: new Abstract: Few-shot learning aims to identify novel categories from only a handful of labeled samples, where prototypes estimated from scarce data are often biased and generalize poorly. Semantic-based methods alleviate this by introducing coarse class-level information, but they are mostly applied on the support side, leaving query representations unchanged. In this paper, we present PMCE, a Probabilistic few-shot framework that leverages Multi-granularity semantics with Caption-guided Enhancement. PMCE constructs a nonparametric knowledge bank that stores visual statistics for each category as well as CLIP-encoded class name embeddings of the base classes. At meta-test time, the most relevant base classes are retrieved based on the similarities of class name embeddings for each novel category. These statistics are then aggregated into category-specific prior information and fused with the support set prototypes via a simple MAP update. Simultaneously, a frozen BLIP captioner provides label-free instance-level image descriptions, and a lightweight enhancer trained on base classes optimizes both support prototypes and query features under an inductive protocol with a consistency regularization to stabilize noisy captions. Experiments on four benchmarks show that PMCE consistently improves over strong baselines, achieving up to 7.71% absolute gain over the strongest semantic competitor on MiniImageNet in the 1-shot setting. Our code is available at https://anonymous.4open.science/r/PMCE-275D
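A minimal numeric sketch of the prototype-fusion step is shown below, under stated assumptions: the retrieval by class-name-embedding similarity, the Gaussian MAP form, and the pseudo-count kappa are illustrative choices, not PMCE's exact formulation.

```python
# Minimal numeric sketch (assumptions, not PMCE's actual formulation): fuse a
# few-shot support prototype with a class-specific prior retrieved from a bank
# of base-class statistics via a simple Gaussian MAP update.
import numpy as np

def retrieve_prior(novel_name_emb, base_name_embs, base_means, top_k=3):
    # cosine similarity between the novel class name embedding and base classes
    sims = base_name_embs @ novel_name_emb / (
        np.linalg.norm(base_name_embs, axis=1) * np.linalg.norm(novel_name_emb) + 1e-8)
    top = np.argsort(sims)[-top_k:]
    w = np.exp(sims[top]) / np.exp(sims[top]).sum()
    return (w[:, None] * base_means[top]).sum(axis=0)   # aggregated prior mean

def map_prototype(support_feats, prior_mean, kappa=5.0):
    """MAP estimate of the class mean with a Gaussian prior of pseudo-count kappa."""
    n = len(support_feats)
    return (support_feats.sum(axis=0) + kappa * prior_mean) / (n + kappa)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_base = 16, 10
    base_name_embs = rng.standard_normal((n_base, d))
    base_means = rng.standard_normal((n_base, d))
    novel_name_emb = base_name_embs[2] + 0.1 * rng.standard_normal(d)  # close to class 2
    support = rng.standard_normal((1, d))                # 1-shot support features
    prior = retrieve_prior(novel_name_emb, base_name_embs, base_means)
    proto = map_prototype(support, prior)
    print("prototype pulled toward prior:",
          np.linalg.norm(proto - prior) < np.linalg.norm(support.mean(0) - prior))
```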
|
https://arxiv.org/abs/2601.14111
|
Academic Papers
|
svg
|
5cb0cd9698ff9a02c8e0c67c607e984f4a42b511e24fca6195933a193f7153ae
|
2026-01-21T00:00:00-05:00
|
Learning to Explain: Supervised Token Attribution from Transformer Attention Patterns
|
arXiv:2601.14112v1 Announce Type: new Abstract: Explainable AI (XAI) has become critical as transformer-based models are deployed in high-stakes applications including healthcare, legal systems, and financial services, where opacity hinders trust and accountability. Transformers' self-attention mechanisms have proven valuable for model interpretability, with attention weights successfully used to understand model focus and behavior (Xu et al., 2015); (Wiegreffe and Pinter, 2019). However, existing attention-based explanation methods rely on manually defined aggregation strategies and fixed attribution rules (Abnar and Zuidema, 2020a); (Chefer et al., 2021), while model-agnostic approaches (LIME, SHAP) treat the model as a black box and incur significant computational costs through input perturbation. We introduce the Explanation Network (ExpNet), a lightweight neural network that learns an explicit mapping from transformer attention patterns to token-level importance scores. Unlike prior methods, ExpNet discovers optimal attention feature combinations automatically rather than relying on predetermined rules. We evaluate ExpNet in a challenging cross-task setting and benchmark it against a broad spectrum of model-agnostic methods and attention-based techniques spanning four methodological families.
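The mapping from attention patterns to token importance can be sketched as follows; the per-token feature construction and the tiny MLP are assumptions made for illustration and are not the paper's ExpNet architecture.

```python
# Illustrative sketch only (feature choices and architecture are assumptions,
# not the paper's ExpNet): build per-token features from a transformer's
# attention tensors and regress token importance scores with a small MLP.
import torch
import torch.nn as nn

def attention_features(attentions):
    """attentions: list of (batch, heads, seq, seq) tensors, one per layer.
    Returns (batch, seq, layers*heads) features: attention each token receives,
    averaged over query positions, kept separately per layer and head."""
    feats = [att.mean(dim=2) for att in attentions]       # (batch, heads, seq) each
    return torch.cat(feats, dim=1).transpose(1, 2)         # (batch, seq, layers*heads)

class ExplanationNet(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feats):                # (batch, seq, in_dim) -> (batch, seq)
        return self.mlp(feats).squeeze(-1)

if __name__ == "__main__":
    layers, heads, seq, batch = 4, 8, 12, 2
    attentions = [torch.softmax(torch.randn(batch, heads, seq, seq), dim=-1)
                  for _ in range(layers)]
    feats = attention_features(attentions)
    model = ExplanationNet(in_dim=layers * heads)
    target = torch.rand(batch, seq)          # stand-in supervision signal
    loss = nn.functional.mse_loss(model(feats), target)
    loss.backward()
    print("feature shape:", tuple(feats.shape), "loss:", float(loss))
```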
|
https://arxiv.org/abs/2601.14112
|
Academic Papers
|
svg
|
55bcb43542cc65798d118e503b42f5f775ac39b6845fc912042f7a905aa8e058
|
2026-01-21T00:00:00-05:00
|
Partial Reductions for Kleene Algebra with Linear Hypotheses
|
arXiv:2601.14114v1 Announce Type: new Abstract: Kleene algebra (KA) is an important tool for reasoning about general program equivalences, with a decidable and complete equational theory. However, KA cannot always prove equivalences between specific programs. For this purpose, one adds hypotheses to KA that encode program-specific knowledge. Traditionally, a map on regular expressions called a reduction then lets us lift decidability and completeness to these more expressive systems. Explicitly constructing such a reduction requires significant labour. Moreover, due to regularity constraints, a reduction may not exist for all combinations of expression and hypothesis. We describe an automaton-based construction to mechanically derive reductions for a wide class of hypotheses. These reductions can be partial, in which case they yield partial completeness: completeness for expressions in their domain. This allows us to automatically establish the provability of more equivalences than what is covered in existing work.
|
https://arxiv.org/abs/2601.14114
|
Academic Papers
|
svg
|