id (string, 64 chars) | published (string, 19-25 chars) | title (string, 7-262 chars) | description (string, 6-54.4k chars) | link (string, 31-227 chars) | category (6 classes) | image (string, 3-247 chars)
c9e123a970a81f481ccab2519cc1af26a1819d6207bfca10de3c9c1234d42013 | 2026-01-21T00:00:00-05:00 | BlockSecRT-DETR: Decentralized Privacy-Preserving and Token-Efficient Federated Transformer Learning for Secure Real-Time Object Detection in ITS | arXiv:2601.12693v1 Announce Type: new Abstract: Federated real-time object detection using transformers in Intelligent Transportation Systems (ITS) faces three major challenges: (1) missing-class non-IID data heterogeneity from geographically diverse traffic environments, (2) latency constraints on edge hardware for high-capacity transformer models, and (3) privacy and security risks from untrusted client updates and centralized aggregation. We propose BlockSecRT-DETR, a BLOCKchain-SECured Real-Time Object DEtection TRansformer framework for ITS that provides a decentralized, token-efficient, and privacy-preserving federated training solution using the RT-DETR transformer, incorporating a blockchain-secured update validation mechanism for trustworthy aggregation. In this framework, challenges (1) and (2) are jointly addressed through a unified client-side design that integrates RT-DETR training with a Token Engineering Module (TEM). TEM prunes low-utility tokens, reducing encoder complexity and latency on edge hardware, while aggregated updates mitigate non-IID data heterogeneity across clients. To address challenge (3), BlockSecRT-DETR incorporates a decentralized blockchain-secured update validation mechanism that enables tamper-proof, privacy-preserving, and trust-free authenticated model aggregation without relying on a central server. We evaluated the proposed framework under a missing-class non-IID partition of the KITTI dataset and conducted a blockchain case study to quantify security overhead. TEM reduces inference latency by 17.2% and encoder FLOPs by 47.8%, while maintaining global detection accuracy (89.20% mAP@0.5). The blockchain integration adds 400 ms per round, and the ledger size remains under 12 KB due to metadata-only on-chain storage. | https://arxiv.org/abs/2601.12693 | Academic Papers | svg
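
The Token Engineering Module above prunes low-utility encoder tokens to cut FLOPs and latency, but the abstract does not give the pruning rule. The following is a minimal NumPy sketch of utility-scored top-k token pruning; the L2-norm utility score, the `keep_ratio` parameter, and the function name are illustrative assumptions, not the authors' design.

```python
import numpy as np

def prune_tokens(tokens: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Keep the highest-utility fraction of encoder tokens.

    tokens: (num_tokens, dim) array of encoder token embeddings.
    Utility here is the token's L2 norm, an assumed stand-in for
    whatever learned utility score the TEM actually uses.
    """
    utility = np.linalg.norm(tokens, axis=1)       # (num_tokens,)
    k = max(1, int(len(tokens) * keep_ratio))      # tokens to retain
    keep = np.argsort(utility)[-k:]                # indices of top-k utility
    return tokens[np.sort(keep)]                   # preserve original order

# Toy usage: 100 tokens of dim 64 -> 50 survive. Encoder self-attention cost
# scales quadratically in token count, so FLOPs drop ~4x at keep_ratio=0.5.
tokens = np.random.default_rng(0).normal(size=(100, 64))
print(prune_tokens(tokens, keep_ratio=0.5).shape)  # (50, 64)
```
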
1a812d31940e63df4cd06825e948b4311282bb17186565f5c9dbb87b726d6d4a | 2026-01-21T00:00:00-05:00 | Closed-loop Uplink Radio Resource Management in CF-O-RAN Empowered 5G Aerial Corridor | arXiv:2601.12694v1 Announce Type: new Abstract: In this paper, we investigate the uplink (UL) radio resource management for 5G aerial corridors with an open-radio access network (O-RAN)-enabled cell-free (CF) massive multiple-input multiple-output (mMIMO) system. Our objective is to maximize the minimum spectral efficiency (SE) by jointly optimizing unmanned aerial vehicle (UAV)-open radio unit (O-RU) association and UL transmit power under quality-of-service (QoS) constraints. Owing to its NP-hard nature, the formulated problem is decomposed into two tractable sub-problems solved via alternating optimization (AO) using two computationally efficient algorithms. We then propose (i) a QoS-driven and multi-connectivity-enabled association algorithm incorporating UAV-centric and O-RU-centric criteria with targeted refinement for weak UAVs, and (ii) a bisection-guided fixed-point power control algorithm achieving global optimality with significantly reduced complexity, hosted as an xApp at the near-real-time (near-RT) RAN intelligent controller (RIC) of O-RAN. Solving the resource-allocation problem requires global channel state information (CSI), which incurs substantial measurement and signaling overhead. To mitigate this, we leverage a channel knowledge map (CKM) within the O-RAN non-RT RIC to enable efficient environment-aware CSI inference. Simulation results show that the proposed framework achieves up to a 440% improvement in minimum SE and 100% QoS satisfaction and fairness, while reducing runtime by up to 99.7% compared to an interior point solver-based power allocation solution, thereby enabling O-RAN compliant real-time deployment.
 | https://arxiv.org/abs/2601.12694 | Academic Papers | svg
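
The power-control component above combines bisection over a common SINR target with a fixed-point feasibility check, a classical pattern for max-min power control (maximizing the minimum SINR also maximizes the minimum SE, since SE is monotone in SINR). Below is a minimal sketch of that pattern; the channel matrix, noise level, and power cap are synthetic, and the paper's actual xApp update rule may differ.

```python
import numpy as np

def feasible(G, noise, gamma, p_max, iters=500, tol=1e-9):
    """Fixed-point check: can every user reach SINR target gamma within p_max?

    G[k, j]: channel gain from transmitter j to receiver k (G[k, k] is the
    direct link). This is the classical interference-function fixed point;
    it converges monotonically from zero whenever gamma is feasible.
    """
    p = np.zeros(len(noise))
    for _ in range(iters):
        interference = G @ p - np.diag(G) * p + noise   # per-user interference
        p_new = gamma * interference / np.diag(G)
        if np.any(p_new > p_max):
            return False, p_new
        if np.max(np.abs(p_new - p)) < tol:
            return True, p_new
        p = p_new
    return True, p

def max_min_sinr(G, noise, p_max, lo=0.0, hi=100.0, steps=40):
    """Bisection-guided search for the largest common SINR target."""
    p_best = np.zeros(len(noise))
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        ok, p = feasible(G, noise, mid, p_max)
        if ok:
            lo, p_best = mid, p
        else:
            hi = mid
    return lo, p_best

rng = np.random.default_rng(1)
G = rng.uniform(0.01, 0.1, (4, 4)) + np.diag(rng.uniform(1.0, 2.0, 4))
gamma, p = max_min_sinr(G, noise=np.full(4, 0.1), p_max=1.0)
print(f"max-min SINR target {gamma:.3f}, powers {np.round(p, 3)}")
```
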
1e0ac78fa5acc8350a8459e080129556bac25e56716e448341aeb4ffddaffd59 | 2026-01-21T00:00:00-05:00 | From Noise to Knowledge: System Identification with Systematic Polytope Construction via Cyclic Reformulation | arXiv:2601.12695v1 Announce Type: new Abstract: Model-based control requires accurate mathematical models to guarantee control performance and stability. However, obtaining accurate models is challenging due to process and sensor noise. This paper proposes a novel identification algorithm that derives polytopic uncertainty models by interpreting noise-induced parameter fluctuations as intrinsic uncertainty. The method applies cyclic reformulation with period N to linear time-invariant systems, yielding N parameter sets with slight variations that serve as polytope vertices. This enables systematic polytopic model construction from a single identification experiment. Simulation results demonstrate significant improvements: the proposed method achieves higher parameter estimation accuracy and reduces prediction errors by approximately half compared to conventional approaches. The vertex count N provides systematic control over the precision of uncertainty representation. | https://arxiv.org/abs/2601.12695 | Academic Papers | svg
39613efb29252c7d9aa4c8415860997bbd547290da2f7357a1826de950c0cb0c | 2026-01-21T00:00:00-05:00 | UbuntuGuard: A Culturally-Grounded Policy Benchmark for Equitable AI Safety in African Languages | arXiv:2601.12696v1 Announce Type: new Abstract: Current guardian models are predominantly Western-centric and optimized for high-resource languages, leaving low-resource African languages vulnerable to evolving harms, cross-lingual safety failures, and cultural misalignment. Moreover, most guardian models rely on rigid, predefined safety categories that fail to generalize across diverse linguistic and sociocultural contexts. Robust safety, therefore, requires flexible, runtime-enforceable policies and benchmarks that reflect local norms, harm scenarios, and cultural expectations. We introduce UbuntuGuard, the first African policy-based safety benchmark built from adversarial queries authored by 155 domain experts across sensitive fields, including healthcare. From these expert-crafted queries, we derive context-specific safety policies and reference responses that capture culturally grounded risk signals, enabling policy-aligned evaluation of guardian models. We evaluate 13 models, comprising six general-purpose LLMs and seven guardian models across three distinct variants: static, dynamic, and multilingual. Our findings reveal that existing English-centric benchmarks overestimate real-world multilingual safety, cross-lingual transfer provides partial but insufficient coverage, and dynamic models, while better equipped to leverage policies at inference time, still struggle to fully localize African-language contexts. These findings highlight the urgent need for multilingual, culturally grounded safety benchmarks to enable the development of reliable and equitable guardian models for low-resource languages. Our code is available at https://github.com/hemhemoh/UbuntuGuard. | https://arxiv.org/abs/2601.12696 | Academic Papers | svg
6a325245df613f5fac54163b31ceb321c9e9ac4063a99dc8c8b33e1016448742 | 2026-01-21T00:00:00-05:00 | Fusing in 3D: Free-Viewpoint Fusion Rendering with a 3D Infrared-Visible Scene Representation | arXiv:2601.12697v1 Announce Type: new Abstract: Infrared-visible image fusion aims to integrate infrared and visible information into a single fused image. Existing 2D fusion methods focus on fusing images from fixed camera viewpoints, neglecting a comprehensive understanding of complex scenarios, which results in the loss of critical information about the scene. To address this limitation, we propose a novel Infrared-Visible Gaussian Fusion (IVGF) framework, which reconstructs scene geometry from multimodal 2D inputs and enables direct rendering of fused images. Specifically, we propose a cross-modal adjustment (CMA) module that modulates the opacity of Gaussians to solve the problem of cross-modal conflicts. Moreover, to preserve the distinctive features from both modalities, we introduce a fusion loss that guides the optimization of CMA, thus ensuring that the fused image retains the critical characteristics of each modality. Comprehensive qualitative and quantitative experiments demonstrate the effectiveness of the proposed method. | https://arxiv.org/abs/2601.12697 | Academic Papers | svg
6ec44d56276bf0f4c48fd2b0478ca7518c9a77741e4db126359fa32c7229f948 | 2026-01-21T00:00:00-05:00 | A Two-Stage GPU Kernel Tuner Combining Semantic Refactoring and Search-Based Optimization | arXiv:2601.12698v1 Announce Type: new Abstract: GPU code optimization is a key performance bottleneck for HPC workloads as well as large-model training and inference. Although compiler optimizations and hand-written kernels can partially alleviate this issue, achieving near-hardware-limit performance still relies heavily on manual code refactoring and parameter tuning. Recent progress in LLM-agent-based kernel generation and optimization has been reported, yet many approaches primarily focus on direct code rewriting, where parameter choices are often implicit and hard to control, or require human intervention, leading to unstable performance gains. This paper introduces a template-based rewriting layer on top of an agent-driven iterative loop: kernels are semantically refactored into explicitly parameterizable templates, and template parameters are then optimized via search-based autotuning, yielding more stable and higher-quality speedups. Experiments on a set of real-world kernels demonstrate speedups exceeding 3x in the best case. We extract representative CUDA kernels from SGLang as evaluation targets; the proposed agentic tuner iteratively performs templating, testing, analysis, and planning, and leverages profiling feedback to execute constrained parameter search under hardware resource limits. Compared to agent-only direct rewriting, the template-plus-search design significantly reduces the randomness of iterative optimization, making the process more interpretable and enabling a more systematic approach toward high-performance configurations. The proposed method can be further extended to OpenCL, HIP, and other backends to deliver automated performance optimization for real production workloads. | https://arxiv.org/abs/2601.12698 | Academic Papers | svg
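
To make the template-plus-search idea concrete, here is a sketch of constrained random search over template parameters. The `benchmark` function is a synthetic stand-in for compiling and timing a templated kernel, and the parameter names (block size, tile size, unroll factor) and resource limits are illustrative; the paper's agentic tuner drives a real profiler instead.

```python
import random

# Illustrative template parameter space for a tiled CUDA kernel.
SPACE = {
    "block": [32, 64, 128, 256],
    "tile": [2, 4, 8],
    "unroll": [1, 2, 4, 8],
}

def respects_limits(cfg, max_threads=1024, max_smem=48 * 1024):
    """Hardware-resource constraints, pruned before any benchmarking."""
    smem = cfg["tile"] * cfg["block"] * 4 * 2   # bytes: two float tiles
    return cfg["block"] <= max_threads and smem <= max_smem

def benchmark(cfg):
    """Synthetic stand-in for compile-and-time; lower is better."""
    return (abs(cfg["block"] - 128) / 128 + abs(cfg["tile"] - 4) / 4
            + abs(cfg["unroll"] - 4) / 4 + random.random() * 0.05)

def tune(budget=30, seed=0):
    random.seed(seed)
    best_cfg, best_t = None, float("inf")
    for _ in range(budget):
        cfg = {k: random.choice(v) for k, v in SPACE.items()}
        if not respects_limits(cfg):
            continue                      # skip configs that cannot launch
        t = benchmark(cfg)
        if t < best_t:
            best_cfg, best_t = cfg, t
    return best_cfg, best_t

print(tune())
```
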
13b360b6f22c31c0fe09a18a9be8d6526cf5350ef4b9b58c236e7e7b3f0484ed | 2026-01-21T00:00:00-05:00 | Resource-Conscious RL Algorithms for Deep Brain Stimulation | arXiv:2601.12699v1 Announce Type: new Abstract: Deep Brain Stimulation (DBS) has proven to be a promising treatment for Parkinson's Disease (PD). DBS involves stimulating specific regions of the brain's Basal Ganglia (BG) using electric impulses to alleviate symptoms of PD such as tremors, rigidity, and bradykinesia. Although most clinical DBS approaches today use a fixed frequency and amplitude, they suffer from side effects (such as slurring of speech) and shortened battery life of the implant. Reinforcement learning (RL) approaches have been used in recent research to perform DBS in a more adaptive manner to improve overall patient outcome. These RL algorithms are, however, too complex to be trained in vivo due to their long convergence times and high computational resource requirements. We propose a new Time & Threshold-Triggered Multi-Armed Bandit (T3P MAB) RL approach for DBS that is more effective than existing algorithms. Further, our T3P agent is lightweight enough to be deployed in the implant, unlike current deep-RL strategies, and even forgoes the need for an offline training phase. Additionally, most existing RL approaches have focused on modulating only frequency or amplitude, and the possibility of tuning them together remains greatly unexplored in the literature. Our RL agent can tune both frequency and amplitude of DBS signals to the brain with better sample efficiency and requires minimal time to converge. We implement an MAB agent for DBS for the first time on hardware to report energy measurements and prove its suitability for resource-constrained platforms. Our T3P MAB algorithm is deployed on a variety of microcontroller unit (MCU) setups to show its efficiency in terms of power consumption compared to other RL approaches used in recent work. | https://arxiv.org/abs/2601.12699 | Academic Papers | svg
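
The abstract names the agent but not its update rule, so the following is a minimal epsilon-greedy bandit over joint (frequency, amplitude) arms with a time-or-threshold trigger, matching the "Time & Threshold-Triggered" idea at sketch level. The reward model, trigger condition, and arm grid are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = [60, 100, 140, 180]          # Hz, illustrative stimulation grid
amps = [1.0, 2.0, 3.0]               # mA
arms = [(f, a) for f in freqs for a in amps]

counts = np.zeros(len(arms))
values = np.zeros(len(arms))         # running mean reward per arm

def reward(f, a, tremor):
    """Assumed reward: symptom suppression minus an energy penalty."""
    suppression = tremor * np.exp(-((f - 130) / 60) ** 2) * (a / 3.0)
    energy_cost = 0.05 * f * a ** 2 / 1000
    return suppression - energy_cost + rng.normal(0, 0.01)

eps, period, last_update = 0.1, 50, 0
for t in range(2000):
    tremor = 0.5 + 0.5 * rng.random()        # simulated biomarker
    # Time & threshold trigger: act every `period` steps or on symptom spikes.
    if t - last_update < period and tremor < 0.9:
        continue
    last_update = t
    i = rng.integers(len(arms)) if rng.random() < eps else int(np.argmax(values))
    r = reward(*arms[i], tremor)
    counts[i] += 1
    values[i] += (r - values[i]) / counts[i]  # incremental mean update

print("best arm (freq Hz, amp mA):", arms[int(np.argmax(values))])
```
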
1549f248db05a045fc217e73ea9e75a1ed22afdfa85b10ffcae3b6fa4fb12b57 | 2026-01-21T00:00:00-05:00 | RPT*: Global Planning with Probabilistic Terminals for Target Search in Complex Environments | arXiv:2601.12701v1 Announce Type: new Abstract: Routing problems such as the Hamiltonian Path Problem (HPP) seek a path that visits all the vertices in a graph while minimizing the path cost. This paper studies a variant, HPP with Probabilistic Terminals (HPP-PT), where each vertex has a probability representing the likelihood that the robot's path terminates there, and the objective is to minimize the expected path cost. HPP-PT arises in target object search, where a mobile robot must visit all candidate locations to find an object, and prior knowledge of the object's location is expressed as vertex probabilities. While routing problems have been studied for decades, few of them consider uncertainty as required in this work. The challenge lies not only in optimally ordering the vertices, as in standard HPP, but also in handling history dependency: the expected path cost depends on the order in which vertices were previously visited. This makes many existing methods inefficient or inapplicable. To address the challenge, we propose a search-based approach RPT* with solution optimality guarantees, which leverages dynamic programming in a new state space to bypass the history dependency and novel heuristics to speed up the computation. Building on RPT*, we design a Hierarchical Autonomous Target Search (HATS) system that combines RPT* with either Bayesian filtering for lifelong target search with noisy sensors, or autonomous exploration to find targets in unknown environments. Experiments in both simulation and on a real robot show that our approach can naturally balance between exploitation and exploration, thereby finding targets more quickly on average than baseline methods. | https://arxiv.org/abs/2601.12701 | Academic Papers | svg
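
The expected-cost objective admits a clean reformulation that removes the history dependency: an edge traversed after visiting vertex set S is paid only with the probability mass not yet covered, 1 - sum of p(v) for v in S. The Held-Karp-style dynamic program below exploits exactly this identity on a tiny instance. It is a brute-force sketch of the objective, not the authors' RPT* search with heuristics, and the distance matrix and probabilities are invented.

```python
from itertools import combinations

def expected_cost_path(dist, p, start):
    """Min expected-cost Hamiltonian path with termination probabilities p.

    Key identity: E[cost] = sum over consecutive edges (i -> j) of
    dist[i][j] * (1 - P(vertices visited before j)), so a DP over
    (visited set, last vertex) suffices; no full visit history is needed.
    """
    n = len(p)
    others = [v for v in range(n) if v != start]
    f = {(frozenset([start]), start): 0.0}
    for size in range(1, len(others) + 1):
        for subset in combinations(others, size):
            S = frozenset(subset) | {start}
            for j in subset:
                prev = S - {j}
                mass_left = 1.0 - sum(p[v] for v in prev)
                f[(S, j)] = min(
                    f[(prev, i)] + mass_left * dist[i][j]
                    for i in prev if (prev, i) in f
                )
    full = frozenset(range(n))
    return min(f[(full, j)] for j in others)

# Tiny instance: vertex 0 is the start (p=0); the object is equally likely
# to be at any of the other three candidate locations.
dist = [[0, 2, 9, 4], [2, 0, 6, 3], [9, 6, 0, 5], [4, 3, 5, 0]]
p = [0.0, 1 / 3, 1 / 3, 1 / 3]
print(f"min expected search cost: {expected_cost_path(dist, p, 0):.3f}")
```
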
f771acab1e1a7b9721b3ec9506a73631e49b82e89dfc6ed637bb86d09dfca07f | 2026-01-21T00:00:00-05:00 | Towards Spectroscopy: Susceptibility Clusters in Language Models | arXiv:2601.12703v1 Announce Type: new Abstract: Spectroscopy infers the internal structure of physical systems by measuring their response to perturbations. We apply this principle to neural networks: perturbing the data distribution by upweighting a token $y$ in context $x$, we measure the model's response via susceptibilities $\chi_{xy}$, which are covariances between component-level observables and the perturbation computed over a localized Gibbs posterior via stochastic gradient Langevin dynamics (SGLD). Theoretically, we show that susceptibilities decompose as a sum over modes of the data distribution, explaining why tokens that follow their contexts "for similar reasons" cluster together in susceptibility space. Empirically, we apply this methodology to Pythia-14M, developing a conductance-based clustering algorithm that identifies 510 interpretable clusters ranging from grammatical patterns to code structure to mathematical notation. Compared to sparse autoencoders (SAEs), 50% of our clusters match SAE features, validating that both methods recover similar structure. | https://arxiv.org/abs/2601.12703 | Academic Papers | svg
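
The measurement pattern above (perturb, sample from a localized Gibbs posterior with SGLD, read off a covariance) can be illustrated on a toy two-parameter quadratic loss. The observable, perturbation, and inverse temperature below are arbitrary choices for illustration; the paper applies this machinery to component-level observables in Pythia-14M, and its exact estimator may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 50.0                                # inverse temperature of the posterior

def base_loss_grad(w):
    return 2.0 * w                         # gradient of L(w) = w1^2 + w2^2

def perturb_loss(w):
    return (w[0] - 1.0) ** 2               # loss of the upweighted (x, y) pair

def sgld_samples(eps=1e-3, n=20000, burn=2000):
    """SGLD around the base minimum: samples from a localized Gibbs
    posterior proportional to exp(-beta * L(w))."""
    w = np.zeros(2)
    out = []
    for t in range(n):
        noise = rng.normal(size=2) * np.sqrt(2.0 * eps / beta)
        w = w - eps * base_loss_grad(w) + noise
        if t >= burn:
            out.append(w.copy())
    return np.array(out)

ws = sgld_samples()
observable = ws[:, 0] ** 2                 # a "component-level" observable g(w)
ell = np.array([perturb_loss(w) for w in ws])
# Under the tilted posterior exp(-beta * (L + t * ell)), the response
# d E[g] / d t at t = 0 equals -beta * Cov(g, ell): the susceptibility.
chi = -beta * np.cov(observable, ell)[0, 1]
print(f"estimated susceptibility chi = {chi:.4f}")
```
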
37ac5d6c9706895fbe50442ca5073b8309ccd778cebfe10e765ce7fc4f25e129 | 2026-01-21T00:00:00-05:00 | Adaptively trained Physics-informed Radial Basis Function Neural Networks for Solving Multi-asset Option Pricing Problems | arXiv:2601.12704v1 Announce Type: new Abstract: The present study investigates the numerical solution of the Black-Scholes partial differential equation (PDE) for option valuation with multiple underlying assets. We develop a physics-informed (PI) machine learning algorithm based on a radial basis function neural network (RBFNN) that concurrently optimizes the network architecture and predicts the target option price. The physics-informed radial basis function neural network (PIRBFNN) combines the strengths of the traditional radial basis function collocation method and the physics-informed neural network machine learning approach to effectively solve PDE problems in the financial context. By employing a PDE residual-based technique to adaptively refine the distribution of hidden neurons during the training process, the PIRBFNN facilitates accurate and efficient handling of multidimensional option pricing models featuring non-smooth payoff conditions. The validity of the proposed method is demonstrated through a set of experiments encompassing a single-asset European put option, a double-asset exchange option, and a four-asset basket call option. | https://arxiv.org/abs/2601.12704 | Academic Papers | svg
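
The core loop (fit an RBF network, then insert new centers where the residual is largest) can be sketched compactly. The version below does residual-driven center refinement for plain 1D function approximation with a linear least-squares solve, on a kinked target mimicking an option payoff; the paper instead minimizes a Black-Scholes PDE residual, so treat this only as an illustration of the adaptive-refinement idea, with all widths and counts invented.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: phi[i, j] = exp(-((x_i - c_j) / width)^2)."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

def fit_adaptive_rbf(x, y, n_rounds=5, width=0.3):
    centers = np.linspace(x.min(), x.max(), 4)        # coarse initial centers
    for _ in range(n_rounds):
        phi = rbf_design(x, centers, width)
        w, *_ = np.linalg.lstsq(phi, y, rcond=None)   # linear solve for weights
        residual = np.abs(y - phi @ w)
        # Residual-driven refinement: add a neuron at the worst-fit point.
        centers = np.append(centers, x[np.argmax(residual)])
    phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return centers, w, width

# Non-smooth target mimicking a call payoff kink: max(x - 0.5, 0).
x = np.linspace(0, 1, 200)
y = np.maximum(x - 0.5, 0.0)
centers, w, width = fit_adaptive_rbf(x, y)
pred = rbf_design(x, centers, width) @ w
print(f"{len(centers)} centers, max abs error {np.abs(pred - y).max():.4f}")
```
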
30b0e99ea5c37713e03c0842fdb27a97efcfe31613907ceaafec3a79de6e1779 | 2026-01-21T00:00:00-05:00 | How do the Global South Diasporas Mobilize for Transnational Political Change? | arXiv:2601.12705v1 Announce Type: new Abstract: This paper examines how non-resident Bangladeshis mobilized during the 2024 quota-reform turned pro-democracy movement, leveraging social platforms and remittance flows to challenge state authority. Drawing on semi-structured interviews, we identify four phases of their collective action: technology-mediated shifts to active engagement, rapid transnational network building, strategic execution of a remittance boycott that reframed economic dependence as political leverage, and adaptive responses to government surveillance and information blackouts. We extend postcolonial computing by introducing the idea of "diasporic superposition," which shows how diasporas can exercise political and economic influence from hybrid positionalities that both contest and complicate power asymmetries. We reframe diaspora engagement by highlighting how migrants participate in and reshape homeland politics, beyond narratives of integration in host countries. We advance the scholarship on financial technologies by foregrounding their relationship with moral economies of care, state surveillance, regulatory constraints, and uneven international economic power dynamics. Together, these contributions theorize how transnational activism and digital technologies intersect to mobilize political change in Global South contexts. | https://arxiv.org/abs/2601.12705 | Academic Papers | svg
55ffd237517ea765c2dd4a466a764345c4187d6859812f323a56c363398d0ec9 | 2026-01-21T00:00:00-05:00 | Trend-Adjusted Time Series Models with an Application to Gold Price Forecasting | arXiv:2601.12706v1 Announce Type: new Abstract: Time series data play a critical role in various fields, including finance, healthcare, marketing, and engineering. A wide range of techniques (from classical statistical models to neural network-based approaches such as Long Short-Term Memory (LSTM)) have been employed to address time series forecasting challenges. In this paper, we reframe time series forecasting as a two-part task: (1) predicting the trend (directional movement) of the time series at the next time step, and (2) forecasting the quantitative value at the next time step. The trend can be predicted using a binary classifier, while quantitative values can be forecasted using models such as LSTM and Bidirectional Long Short-Term Memory (Bi-LSTM). Building on this reframing, we propose the Trend-Adjusted Time Series (TATS) model, which adjusts the forecasted values based on the predicted trend provided by the binary classifier. We validate the proposed approach through both theoretical analysis and empirical evaluation. The TATS model is applied to a volatile financial time series (the daily gold price) with the objective of forecasting the next day's price. Experimental results demonstrate that TATS consistently outperforms standard LSTM and Bi-LSTM models by achieving significantly lower forecasting error. In addition, our results indicate that commonly used metrics such as MSE and MAE are insufficient for fully assessing time series model performance. Therefore, we also incorporate trend detection accuracy, which measures how effectively a model captures trends in a time series. | https://arxiv.org/abs/2601.12706 | Academic Papers | svg
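
The two-part reframing above is easy to prototype: one model classifies next-step direction, another regresses the next value, and the forecast is adjusted when the two disagree. In the sketch below the adjustment rule (reflect the forecast across the last observed value on disagreement), the majority-move trend "classifier", and the synthetic price path are all illustrative assumptions; the paper's actual TATS adjustment and LSTM/Bi-LSTM forecasters are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged(series, window):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

series = np.cumsum(rng.normal(0.1, 1.0, 500)) + 100   # synthetic price path
window = 10
X, y = lagged(series, window)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# (1) Value forecaster: ordinary least squares on lagged values.
coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X_tr)), X_tr], y_tr, rcond=None)
y_hat = np.c_[np.ones(len(X_te)), X_te] @ coef

# (2) Trend classifier: recent-move comparison, a stand-in for the paper's
# trained binary classifier.
trend_up = X_te[:, -1] > X_te[:, -3]

# (3) Trend adjustment: if the forecast direction disagrees with the
# predicted trend, reflect the forecast across the last observed value.
last = X_te[:, -1]
disagree = (y_hat > last) != trend_up
y_adj = np.where(disagree, 2 * last - y_hat, y_hat)

mae = lambda a: np.mean(np.abs(a - y_te))
trend_acc = np.mean((y_adj > last) == (y_te > last))
print(f"MAE base {mae(y_hat):.3f} vs adjusted {mae(y_adj):.3f}; "
      f"trend accuracy {trend_acc:.2%}")
```
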
ded9e55d5d53770c45cc72526710c2a4216b9486eac05e7262d725024a7b2749 | 2026-01-21T00:00:00-05:00 | Decoding Rewards in Competitive Games: Inverse Game Theory with Entropy Regularization | arXiv:2601.12707v1 Announce Type: new Abstract: Estimating the unknown reward functions driving agents' behaviors is of central interest in inverse reinforcement learning and game theory. To tackle this problem, we develop a unified framework for reward function recovery in two-player zero-sum matrix games and Markov games with entropy regularization, where we aim to reconstruct the underlying reward functions given observed players' strategies and actions. This task is challenging due to the inherent ambiguity of inverse problems, the non-uniqueness of feasible rewards, and limited observational data coverage. To address these challenges, we establish the reward function's identifiability using the quantal response equilibrium (QRE) under linear assumptions. Building upon this theoretical foundation, we propose a novel algorithm to learn reward functions from observed actions. Our algorithm works in both static and dynamic settings and is adaptable to incorporate different methods, such as Maximum Likelihood Estimation (MLE). We provide strong theoretical guarantees for the reliability and sample efficiency of our algorithm. Further, we conduct extensive numerical studies to demonstrate the practical effectiveness of the proposed framework, offering new insights into decision-making in competitive environments. | https://arxiv.org/abs/2601.12707 | Academic Papers | svg
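
For the matrix-game case, the entropy-regularized equilibrium (QRE) is a softmax fixed point, and its log-linear structure is what makes the reward identifiable from observed strategies. The damped fixed-point solver below computes a QRE for a toy zero-sum game and numerically checks the identity tau * (log x_i - log x_j) = (Ay)_i - (Ay)_j that an MLE-style inverse procedure can exploit; the game instance, temperature, and damping schedule are illustrative, not the paper's algorithm.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def qre(A, tau=1.0, damping=0.1, iters=5000):
    """Damped fixed-point iteration for the entropy-regularized QRE of a
    zero-sum matrix game: x = softmax(A y / tau), y = softmax(-A^T x / tau)."""
    m, n = A.shape
    x, y = np.full(m, 1 / m), np.full(n, 1 / n)
    for _ in range(iters):
        x = (1 - damping) * x + damping * softmax(A @ y / tau)
        y = (1 - damping) * y + damping * softmax(-A.T @ x / tau)
    return x, y

A = np.array([[1.0, -2.0, 0.5],
              [-1.0, 3.0, -0.5],
              [0.0, -1.0, 2.0]])
tau = 1.0
x, y = qre(A, tau)

# Identifiability check: at a QRE, strategy log-ratios pin down differences
# of expected rewards, tau * (log x_i - log x_0) = (A y)_i - (A y)_0.
lhs = tau * (np.log(x) - np.log(x[0]))
rhs = A @ y - (A @ y)[0]
print("max identity violation:", np.max(np.abs(lhs - rhs)))
```
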
3766d3b293b8806695a4b7646e757c4dae46ece0977dcadce661ef57edf3deba | 2026-01-21T00:00:00-05:00 | Neurosymbolic LoRA: Why and When to Tune Weights vs. Rewrite Prompts | arXiv:2601.12711v1 Announce Type: new Abstract: Large language models (LLMs) can be adapted either through numerical updates that alter model parameters or symbolic manipulations that operate on discrete prompts or logical constraints. While numerical fine-tuning excels at injecting new factual knowledge, symbolic updates offer flexible control of style and alignment without retraining. We introduce a neurosymbolic LoRA framework that dynamically combines these two complementary strategies. Specifically, we present a unified monitoring signal and a reward-based classifier to decide when to employ LoRA for deeper factual reconstruction and when to apply TextGrad for token-level edits. Our approach remains memory-efficient by offloading the symbolic transformations to an external LLM only when needed. Additionally, the refined prompts produced during symbolic editing serve as high-quality, reusable training data, an important benefit in data-scarce domains like mathematical reasoning. Extensive experiments across multiple LLM backbones show that neurosymbolic LoRA consistently outperforms purely numerical or purely symbolic baselines, demonstrating superior adaptability and improved performance. Our findings highlight the value of interleaving numerical and symbolic updates to unlock a new level of versatility in language model fine-tuning. | https://arxiv.org/abs/2601.12711 | Academic Papers | svg
aab9f737881d36c5b71b6c2f3850f9401267b8a750b899756bd347f954b729a3 | 2026-01-21T00:00:00-05:00 | Dynamic Detection of Inefficient Data Mapping Patterns in Heterogeneous OpenMP Applications | arXiv:2601.12713v1 Announce Type: new Abstract: With the growing prevalence of heterogeneous computing, CPUs are increasingly being paired with accelerators to achieve new levels of performance and energy efficiency. However, data movement between devices remains a significant bottleneck, complicating application development. Existing performance tools require considerable programmer intervention to diagnose and locate data transfer inefficiencies. To address this, we propose dynamic analysis techniques to detect and profile inefficient data transfer and allocation patterns in heterogeneous applications. We implemented these techniques into OMPDataPerf, which provides detailed traces of problematic data mappings, source code attribution, and assessments of optimization potential in heterogeneous OpenMP applications. OMPDataPerf uses the OpenMP Tools Interface (OMPT) and incurs only a 5% geometric-mean runtime overhead. | https://arxiv.org/abs/2601.12713 | Academic Papers | svg
d2fd03cdf3d762efc63e77d2f7102e02af691c5faa436a8d2889f80e2ffeed01 | 2026-01-21T00:00:00-05:00 | P2L-CA: An Effective Parameter Tuning Framework for Rehearsal-Free Multi-Label Class-Incremental Learning | arXiv:2601.12714v1 Announce Type: new Abstract: Multi-label Class-Incremental Learning aims to continuously recognize novel categories in complex scenes where multiple objects co-occur. However, existing approaches often incur high computational costs due to full-parameter fine-tuning and substantial storage overhead from memory buffers, or they struggle to address feature confusion and domain discrepancies adequately. To overcome these limitations, we introduce P2L-CA, a parameter-efficient framework that integrates a Prompt-to-Label module with a Continuous Adapter module. The P2L module leverages class-specific prompts to disentangle multi-label representations while incorporating linguistic priors to enforce stable semantic-visual alignment. Meanwhile, the CA module employs lightweight adapters to mitigate domain gaps between pre-trained models and downstream tasks, thereby enhancing model plasticity. Extensive experiments across standard and challenging MLCIL settings on MS-COCO and PASCAL VOC show that P2L-CA not only achieves substantial improvements over state-of-the-art methods but also demonstrates strong generalization in CIL scenarios, all while requiring minimal trainable parameters and eliminating the need for memory buffers. | https://arxiv.org/abs/2601.12714 | Academic Papers | svg
f84471f7467731fe8a3f88b5658d7c0e519d898a95f51a2a5a333bd1adbb9bd3 | 2026-01-21T00:00:00-05:00 | RSOD: Reliability-Guided Sonar Image Object Detection with Extremely Limited Labels | arXiv:2601.12715v1 Announce Type: new Abstract: Object detection in sonar images is a key technology in underwater detection systems. Compared to natural images, sonar images contain fewer texture details and are more susceptible to noise, making it difficult for non-experts to distinguish subtle differences between classes. This prevents them from providing precise annotations for sonar images. Therefore, designing effective object detection methods for sonar images with extremely limited labels is particularly important. To address this, we propose a teacher-student framework called RSOD, which aims to fully learn the characteristics of sonar images and develop a pseudo-label strategy suitable for these images to mitigate the impact of limited labels. First, RSOD calculates a reliability score by assessing the consistency of the teacher's predictions across different views. To leverage this score, we introduce an object mixed pseudo-label method to tackle the shortage of labeled data in sonar images. Finally, we optimize the performance of the student by implementing a reliability-guided adaptive constraint. By taking full advantage of unlabeled data, the student can perform well even in situations with extremely limited labels. Notably, on the UATD dataset, our method, using only 5% of the labeled data, achieves results competitive with those of our baseline algorithm trained on 100% of the labeled data. We also collected a new dataset to provide more valuable data for sonar research. | https://arxiv.org/abs/2601.12715 | Academic Papers | svg
017cf5c362c34579b005dba2784811c5f37e2ef3fd60a233b639246d3e01a751 | 2026-01-21T00:00:00-05:00 | CellularSpecSec-Bench: A Staged Benchmark for Evidence-Grounded Interpretation and Security Reasoning over 3GPP Specifications | arXiv:2601.12716v1 Announce Type: new Abstract: Cellular networks are critical infrastructure supporting billions of users worldwide and safety- and mission-critical services. Vulnerabilities in cellular networks can therefore cause service disruption, privacy breaches, and broad societal harm, motivating growing efforts to analyze 3GPP specifications that define required device and operator behavior. While large language models (LLMs) have demonstrated the capability for reading technical documents, cellular specifications impose unique challenges: faithful interpretation of normative language, reasoning across cross-referenced clauses, and verifiable conclusions grounded in multimodal evidence such as tables and figures. To address these challenges, we propose CellSpecSec-ARI, a unified Adapt-Retrieve-Integrate framework for systematic understanding and standard-driven security analysis of 3GPP specifications, and CellularSpecSec-Bench, a staged benchmark containing newly constructed high-quality datasets with expert-verified and corrected subsets from prior open-source resources. Together, they establish an accessible and reproducible foundation for quantifying progress in specification understanding and security reasoning in the cellular network security domain. | https://arxiv.org/abs/2601.12716 | Academic Papers | svg
946e42b6b2a128870a7de05425ba9f034f086d41db009b83ba3412744ee60eef | 2026-01-21T00:00:00-05:00 | Dataset of GenAI-Assisted Information Problem Solving in Education | arXiv:2601.12718v1 Announce Type: new Abstract: Information Problem Solving (IPS) is a critical competency for academic and professional success in education, work, and life. The advent of Generative Artificial Intelligence (GenAI), particularly tools like ChatGPT, has introduced new possibilities for supporting students in complex IPS tasks. However, empirical insights into how students engage with GenAI during IPS and how these tools can be effectively leveraged for learning remain limited. Moreover, differences in background, shaped by cultural and socioeconomic factors, pose additional challenges to the equitable integration of GenAI in educational contexts. To address this gap, we present an open-source dataset collected from 279 students at a public Australian university. The dataset was generated through students' use of FLoRA, a GenAI-powered educational platform that is widely adopted in the field of learning analytics. Within FLoRA, students interacted with an embedded GenAI chatbot to gather information and synthesize it into data science project proposals. The dataset captures fine-grained, multi-dimensional records of GenAI-assisted IPS processes, including: (i) student-GenAI dialogue transcripts; (ii) writing process log traces; (iii) final project proposals with human-assigned assessment scores; (iv) surveys of biographic and prior knowledge in data science and AI; and (v) surveys capturing students' GenAI experience and perceptions of GenAI's effectiveness in supporting IPS. This dataset provides a valuable resource for advancing our understanding of GenAI's role in educational IPS and informing the design of adaptive, inclusive AI-powered learning tools. | https://arxiv.org/abs/2601.12718 | Academic Papers | svg
50a2704ea21ce8ef4fbe3bce1dac4b6153b0733eb815344586922b378ddc80aa | 2026-01-21T00:00:00-05:00 | S2DiT: Sandwich Diffusion Transformer for Mobile Streaming Video Generation | arXiv:2601.12719v1 Announce Type: new Abstract: Diffusion Transformers (DiTs) have recently improved video generation quality. However, their heavy computational cost makes real-time or on-device generation infeasible. In this work, we introduce S2DiT, a Streaming Sandwich Diffusion Transformer designed for efficient, high-fidelity, and streaming video generation on mobile hardware. S2DiT generates more tokens but maintains efficiency with novel efficient attention mechanisms: a mixture of LinConv Hybrid Attention (LCHA) and Stride Self-Attention (SSA). Based on this, we uncover the sandwich design via a budget-aware dynamic programming search, achieving superior quality and efficiency. We further propose a 2-in-1 distillation framework that transfers the capacity of large teacher models (e.g., Wan 2.2-14B) to the compact few-step sandwich model. Together, S2DiT achieves quality on par with state-of-the-art server video models, while streaming at over 10 FPS on an iPhone. | https://arxiv.org/abs/2601.12719 | Academic Papers | svg
5b17d2e30ce5977f7cdae411ecf2a29d412066fc4d879aea8ad239f7785996a4 | 2026-01-21T00:00:00-05:00 | Teaching Large Reasoning Models Effective Reflection | arXiv:2601.12720v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) have recently shown impressive performance on complex reasoning tasks, often by engaging in self-reflective behaviors such as self-critique and backtracking. However, not all reflections are beneficial: many are superficial, offering little to no improvement over the original answer and incurring computational overhead. In this paper, we identify and address the problem of superficial reflection in LRMs. We first propose Self-Critique Fine-Tuning (SCFT), a training framework that enhances the model's reflective reasoning ability using only self-generated critiques. SCFT prompts models to critique their own outputs, filters high-quality critiques through rejection sampling, and fine-tunes the model using a critique-based objective. Building on this strong foundation, we further introduce Reinforcement Learning with Effective Reflection Rewards (RLERR). RLERR leverages the high-quality reflections initialized by SCFT to construct reward signals, guiding the model to internalize the self-correction process via reinforcement learning. Experiments on two challenging benchmarks, AIME2024 and AIME2025, show that SCFT and RLERR significantly improve both reasoning accuracy and reflection quality, outperforming state-of-the-art baselines. All data and codes are available at https://github.com/wanghanbinpanda/SCFT. | https://arxiv.org/abs/2601.12720 | Academic Papers | svg
9684b13eee329b1c54941df17814a1d004fadba0ec67fd4b3de7cde9c8d6280b | 2026-01-21T00:00:00-05:00 | An Evolutionary Framework for Automatic Optimization Benchmark Generation via Large Language Models | arXiv:2601.12723v1 Announce Type: new Abstract: Optimization benchmarks play a fundamental role in assessing algorithm performance; however, existing artificial benchmarks often fail to capture the diversity and irregularity of real-world problem structures, while benchmarks derived from real-world problems are costly and difficult to construct. To address these challenges, we propose an evolutionary automatic benchmark generation framework that leverages a large language model (LLM) as a generative operator, termed the LLM-driven evolutionary benchmark generator (LLM-EBG). In this framework, the LLM serves as an evolutionary operator that generates and evolves benchmark problems within a flexible, expressive representation space. As a case study, we generate unconstrained single-objective continuous minimization problems represented as mathematical expressions designed to induce significant performance differences between a genetic algorithm (GA) and differential evolution (DE). Experimental results show that LLM-EBG successfully produces benchmark problems in which the designated target algorithm consistently outperforms the comparative algorithm in more than 80% of trials. Furthermore, exploratory landscape analysis reveals that benchmarks favoring GA are highly sensitive to variable scaling, demonstrating that the proposed framework can generate problems with distinct geometric characteristics that reflect the intrinsic search behaviors of different optimization algorithms. | https://arxiv.org/abs/2601.12723 | Academic Papers | svg
a734998f3eb660537d6865b71531dd44c0cf76d1849a5334b413d219df109287 | 2026-01-21T00:00:00-05:00 | Explicit Entropic Constructions for Coverage, Facility Location, and Graph Cuts | arXiv:2601.12724v1 Announce Type: new Abstract: Shannon entropy is a polymatroidal set function and lies at the foundation of information theory, yet the class of entropic polymatroids is strictly smaller than the class of all submodular functions. In parallel, submodular and combinatorial information measures (SIMs) have recently been proposed as a principled framework for extending entropy, mutual information, and conditional mutual information to general submodular functions, and have been used extensively in data subset selection, active learning, domain adaptation, and representation learning. This raises a natural and fundamental question: are the monotone submodular functions most commonly used in practice entropic? In this paper, we answer this question in the affirmative for a broad class of widely used polymatroid functions. We provide explicit entropic constructions for set cover and coverage functions, facility location, saturated coverage, concave-over-modular functions via truncations, and monotone graph-cut-type objectives. Our results show that these functions can be realized exactly as Shannon entropies of appropriately constructed random variables. As a consequence, for these functions, submodular mutual information coincides with classical mutual information, conditional gain specializes to conditional entropy, and submodular conditional mutual information reduces to standard conditional mutual information in the entropic sense. These results establish a direct bridge between combinatorial information measures and classical information theory for many of the most common submodular objectives used in applications. | https://arxiv.org/abs/2601.12724 | Academic Papers | svg
83f7010e03c817e48053ca4c2e151c065b3fb2ab6dc42739137be20d0096a223 | 2026-01-21T00:00:00-05:00 | AI-exhibited Personality Traits Can Shape Human Self-concept through Conversations | arXiv:2601.12727v1 Announce Type: new Abstract: Recent Large Language Model (LLM) based AI can exhibit recognizable and measurable personality traits during conversations to improve user experience. However, as humans' understanding of their own personality traits can be affected by their interaction partners' traits, a potential risk is that AI traits may shape and bias users' self-concept of their own traits. To explore this possibility, we conducted a randomized behavioral experiment. Our results indicate that after conversations about personal topics with an LLM-based AI chatbot using GPT-4o default personality traits, users' self-concepts aligned with the AI's measured personality traits. The longer the conversation, the greater the alignment. This alignment led to increased homogeneity in self-concepts among users. We also observed that the degree of self-concept alignment was positively associated with users' conversation enjoyment. Our findings uncover how AI personality traits can shape users' self-concepts through human-AI conversation, highlighting both risks and opportunities. We provide important design implications for developing more responsible and ethical AI systems. | https://arxiv.org/abs/2601.12727 | Academic Papers | svg
d825bc6bc30886a8586be4bebf844b90090b3ae366830147caca48ff4f1cd854 | 2026-01-21T00:00:00-05:00 | DC-VLAQ: Query-Residual Aggregation for Robust Visual Place Recognition | arXiv:2601.12729v1 Announce Type: new Abstract: One of the central challenges in visual place recognition (VPR) is learning a robust global representation that remains discriminative under large viewpoint changes, illumination variations, and severe domain shifts. While visual foundation models (VFMs) provide strong local features, most existing methods rely on a single model, overlooking the complementary cues offered by different VFMs. However, exploiting such complementary information inevitably alters token distributions, which challenges the stability of existing query-based global aggregation schemes. To address these challenges, we propose DC-VLAQ, a representation-centric framework that integrates the fusion of complementary VFMs and robust global aggregation. Specifically, we first introduce a lightweight residual-guided complementary fusion that anchors representations in the DINOv2 feature space while injecting complementary semantics from CLIP through a learned residual correction. In addition, we propose the Vector of Local Aggregated Queries (VLAQ), a query-residual global aggregation scheme that encodes local tokens by their residual responses to learnable queries, resulting in improved stability and the preservation of fine-grained discriminative cues. Extensive experiments on standard VPR benchmarks, including Pitts30k, Tokyo24/7, MSLS, Nordland, SPED, and AmsterTime, demonstrate that DC-VLAQ consistently outperforms strong baselines and achieves state-of-the-art performance, particularly under challenging domain shifts and long-term appearance changes. | https://arxiv.org/abs/2601.12729 | Academic Papers | svg
c783a36518947f51b1c861cf59974df93ceeb07ad9407884af534dc0e6a6f0cc
|
2026-01-21T00:00:00-05:00
|
Distribution-Centric Policy Optimization Dominates Exploration-Exploitation Trade-off
|
arXiv:2601.12730v1 Announce Type: new Abstract: The exploration-exploitation (EE) trade-off is a central challenge in reinforcement learning (RL) for large language models (LLMs). With Group Relative Policy Optimization (GRPO), training tends to be exploitation driven: entropy decreases monotonically, samples convergence, and exploration fades. Most existing fixes are \textbf{sample-centric}: they seek or bonus rare samples, assuming exploration comes from novel trajectories and tokens. These heuristics depend on the "luck" of informative samples, lack principled control of the policy, and often yield limited or inconsistent gains. In this work, we are the first to introduce a \textbf{distribution-centric} perspective for RL, in which exploration is always guided by a "better" target distribution, and reveal that a policy's ability to resist entropy collapse is governed by the distribution itself rather than individual samples. Building on this insight, we propose Distribution-Centric Policy Optimization (DCPO), which reformulates entropy regulation as distribution-level regularization. DCPO achieves controllable entropy fully on-policy without sampling from external distributions, enabling efficient exploration while maintaining training stability. Across multiple models and seven benchmarks, DCPO improves over GRPO by about 20\% on average. Overall, DCPO replaces sample-level heuristics with distribution-level principles, offering a theoretically grounded and flexible framework for controllable exploration and a stronger EE trade-off. The code is available in https://github.com/597358816/DCPO.
|
https://arxiv.org/abs/2601.12730
|
Academic Papers
|
svg
|
6567ddae2313c96d8eb1fdf5188bf71eb7fb549f344cfb10809f47e45792be04
|
2026-01-21T00:00:00-05:00
|
A Shared Geometry of Difficulty in Multilingual Language Models
|
arXiv:2601.12731v1 Announce Type: new Abstract: Predicting problem-difficulty in large language models (LLMs) refers to estimating how difficult a task is according to the model itself, typically by training linear probes on its internal representations. In this work, we study the multilingual geometry of problem-difficulty in LLMs by training linear probes using the AMC subset of the Easy2Hard benchmark, translated into 21 languages. We found that difficulty-related signals emerge at two distinct stages of the model internals, corresponding to shallow (early-layers) and deep (later-layers) internal representations, that exhibit functionally different behaviors. Probes trained on deep representations achieve high accuracy when evaluated on the same language but exhibit poor cross-lingual generalization. In contrast, probes trained on shallow representations generalize substantially better across languages, despite achieving lower within-language performance. Together, these results suggest that LLMs first form a language-agnostic representation of problem difficulty, which subsequently becomes language-specific. This closely aligns with existing findings in LLM interpretability showing that models tend to operate in an abstract conceptual space before producing language-specific outputs. We demonstrate that this two-stage representational process extends beyond semantic content to high-level meta-cognitive properties such as problem-difficulty estimation.
|
https://arxiv.org/abs/2601.12731
|
Academic Papers
|
svg
|
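
The probing setup above is standard: fit a linear probe per layer on hidden representations, then test transfer across languages. Here is a compact scikit-learn sketch in which synthetic "representations" with a shared difficulty direction plus a language-specific component reproduce the qualitative shallow-vs-deep contrast; all data, dimensions, and the two-language setup are fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, n = 64, 400
shared = rng.normal(size=dim)              # language-agnostic difficulty axis
lang_axes = {"en": rng.normal(size=dim), "sw": rng.normal(size=dim)}

def fake_reps(lang, layer_mix):
    """Synthetic layer activations: early layers lean on the shared axis,
    late layers on a language-specific one (layer_mix in [0, 1])."""
    hard = rng.integers(0, 2, n)
    axis = (1 - layer_mix) * shared + layer_mix * lang_axes[lang]
    X = rng.normal(size=(n, dim)) + np.outer(2 * hard - 1, axis)
    return X, hard

for layer_mix, name in [(0.1, "shallow"), (0.9, "deep")]:
    X_en, y_en = fake_reps("en", layer_mix)
    X_sw, y_sw = fake_reps("sw", layer_mix)
    probe = LogisticRegression(max_iter=1000).fit(X_en, y_en)
    print(f"{name}: within-language acc {probe.score(X_en, y_en):.2f}, "
          f"cross-lingual acc {probe.score(X_sw, y_sw):.2f}")
```
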
1d18dcfc8caace5b2d297edc36c8d2c1d60ba2695e280c086031000425ed18ab
|
2026-01-21T00:00:00-05:00
|
Optimal Error Estimates of a Linearized Backward Euler Localized Orthogonal Decomposition for the Landau-Lifshitz Equation
|
arXiv:2601.12734v1 Announce Type: new Abstract: We introduce a novel spatial discretization technique for the reliable and efficient simulation of magnetization dynamics governed by the Landau-Lifshitz (LL) equation. The overall discretization error is systematically decomposed into temporal and spatial components. The spatial error analysis is conducted by formulating the LL equation within the framework of the Localized Orthogonal Decomposition (LOD) method. Numerical examples are presented to validate the accuracy and approximation properties of the proposed scheme.
|
https://arxiv.org/abs/2601.12734
|
Academic Papers
|
svg
|
9ef78446391a8b3596f5b4c62bbedd3056f18f7f2552b9078cf78455418d5888
|
2026-01-21T00:00:00-05:00
|
OpenAI for OpenAPI: Automated generation of REST API specification via LLMs
|
arXiv:2601.12735v1 Announce Type: new Abstract: REST APIs, based on the REpresentational State Transfer (REST) architecture, are the primary type of Web API. The OpenAPI Specification (OAS) serves as the de facto standard for describing REST APIs and is crucial for multiple software engineering tasks. However, developers face challenges in writing and maintaining OAS. Although static analysis shows potential for OAS generation, it is limited to specific programming languages and development frameworks. The powerful code understanding capabilities of LLMs offer new opportunities for OAS generation, yet they are constrained by context limitations and hallucinations. To address these challenges, we propose the OpenAI OpenAPI Project Scanner (OOPS), the first technology-agnostic LLM-based static analysis method for OAS generation, requiring fewer technology-specific rules and less human expert intervention. OOPS is implemented as an LLM agent workflow comprising two key steps: endpoint method extraction and OAS generation. By constructing an API dependency graph, it establishes necessary file associations to address LLMs' context limitations. Through multi-stage generation and self-refine, it mitigates both syntactic and semantic hallucinations during OAS generation. We evaluated OOPS on 12 real-world REST APIs spanning 5 programming languages and 8 development frameworks. Experimental results demonstrate that OOPS accurately generates high-quality OAS for REST APIs implemented with diverse technologies, achieving an average F1-score exceeding 98% for endpoint method inference, 97% for both request parameter and response inference, and 92% for parameter constraint inference. The input tokens average below 5.6K with a maximum of 16.2K, while the output tokens average below 0.9K with a maximum of 7.7K.
|
https://arxiv.org/abs/2601.12735
|
Academic Papers
|
svg
|
bf55acf77576eef97c980cf2dd1c2f4e9d88e7d6216a830dd4019b1337c57006
|
2026-01-21T00:00:00-05:00
|
KaoLRM: Repurposing Pre-trained Large Reconstruction Models for Parametric 3D Face Reconstruction
|
arXiv:2601.12736v1 Announce Type: new Abstract: We propose KaoLRM to re-target the learned prior of the Large Reconstruction Model (LRM) for parametric 3D face reconstruction from single-view images. Parametric 3D Morphable Models (3DMMs) have been widely used for facial reconstruction due to their compact and interpretable parameterization, yet existing 3DMM regressors often exhibit poor consistency across varying viewpoints. To address this, we harness the pre-trained 3D prior of LRM and incorporate FLAME-based 2D Gaussian Splatting into LRM's rendering pipeline. Specifically, KaoLRM projects LRM's pre-trained triplane features into the FLAME parameter space to recover geometry, and models appearance via 2D Gaussian primitives that are tightly coupled to the FLAME mesh. The rich prior enables the FLAME regressor to be aware of the 3D structure, leading to accurate and robust reconstructions under self-occlusions and diverse viewpoints. Experiments on both controlled and in-the-wild benchmarks demonstrate that KaoLRM achieves superior reconstruction accuracy and cross-view consistency, while existing methods remain sensitive to viewpoint variations. The code is released at https://github.com/CyberAgentAILab/KaoLRM.
|
https://arxiv.org/abs/2601.12736
|
Academic Papers
|
svg
|
3500a990a6e26e2a09368633020b4ce6507fa23dc4f54191fbacb1cba942aefb
|
2026-01-21T00:00:00-05:00
|
TreeWriter: AI-Assisted Hierarchical Planning and Writing for Long-Form Documents
|
arXiv:2601.12740v1 Announce Type: new Abstract: Long documents pose many challenges to current intelligent writing systems. These include maintaining consistency across sections, sustaining efficient planning and writing as documents become more complex, and effectively providing and integrating AI assistance to the user. Existing AI co-writing tools offer either inline suggestions or limited structured planning, but rarely support the entire writing process that begins with high-level ideas and ends with polished prose, in which many layers of planning and outlining are needed. Here, we introduce TreeWriter, a hierarchical writing system that represents documents as trees and integrates contextual AI support. TreeWriter allows authors to create, save, and refine document outlines at multiple levels, facilitating drafting, understanding, and iterative editing of long documents. A built-in AI agent can dynamically load relevant content, navigate the document hierarchy, and provide context-aware editing suggestions. A within-subject study (N=12) comparing TreeWriter with Google Docs + Gemini on long-document editing and creative writing tasks shows that TreeWriter improves idea exploration/development, AI helpfulness, and perceived authorial control. A two-month field deployment (N=8) further demonstrated that hierarchical organization supports collaborative writing. Our findings highlight the potential of hierarchical, tree-structured editors with integrated AI support and provide design guidelines for future AI-assisted writing tools that balance automation with user agency.
|
https://arxiv.org/abs/2601.12740
|
Academic Papers
|
svg
|
99563d4d596d997f85cf1df9cc61b8923afc48769199e2f0c63c4f5a82b406f6
|
2026-01-21T00:00:00-05:00
|
An Introduction to Razborov's Flag Algebra as a Proof System for Extremal Graph Theory
|
arXiv:2601.12741v1 Announce Type: new Abstract: Razborov's flag algebra forms a powerful framework for deriving asymptotic inequalities between induced subgraph densities, underpinning many advances in extremal graph theory. This survey introduces flag algebra to computer scientists working in logic, programming languages, automated verification, and formal methods. We take a logical perspective on flag algebra and present it in terms of syntax, semantics, and proof strategies, in a style closer to formal logic. One popular proof strategy derives valid inequalities by first proving inequalities in a labelled variant of flag algebra and then transferring them to the original unlabelled setting using the so-called downward operator. We explain this strategy in detail and highlight that its transfer mechanism relies on the notion of what we call an adjoint pair, reminiscent of Galois connections and categorical adjunctions, which appear frequently in work on automated verification and programming languages. Along the way, we work through representative examples, including Mantel's theorem and Goodman's bound on Ramsey multiplicity, to illustrate how mathematical arguments can be carried out symbolically in the flag algebra framework.
|
https://arxiv.org/abs/2601.12741
|
Academic Papers
|
svg
|
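
The worked examples mentioned above (Mantel's theorem and Goodman's bound) can be sanity-checked by exhaustive enumeration on small graphs, a useful companion to the symbolic flag-algebra derivations. The brute-force check below verifies, for n = 6, that triangle-free graphs have at most floor(n^2/4) edges and that every 2-coloring of the edges of K_6 contains at least two monochromatic triangles; this is only a finite check, not the asymptotic flag-algebra proof.

```python
from itertools import combinations

n = 6
pairs = list(combinations(range(n), 2))
triples = list(combinations(range(n), 3))

def triangles(edges):
    """Count triples whose three pairs are all present in the edge set."""
    return sum(all(frozenset(e) in edges for e in combinations(t, 2))
               for t in triples)

max_tri_free_edges = 0
min_mono = float("inf")
for mask in range(1 << len(pairs)):
    edges = {frozenset(pairs[i]) for i in range(len(pairs)) if mask >> i & 1}
    co_edges = {frozenset(p) for p in pairs} - edges
    t = triangles(edges)
    if t == 0:                                    # Mantel: triangle-free case
        max_tri_free_edges = max(max_tri_free_edges, len(edges))
    # Goodman: monochromatic triangles of the 2-coloring (edges, co_edges).
    min_mono = min(min_mono, t + triangles(co_edges))

print(f"max edges, triangle-free on {n} vertices: {max_tri_free_edges} "
      f"(Mantel bound {n * n // 4})")
print(f"min monochromatic triangles in a 2-colored K_{n}: {min_mono}")
```
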
6ea5eda455e7bb3148bdf2b88830e3172d36b6161ef3f8e66f54bfebe3b76358
|
2026-01-21T00:00:00-05:00
|
AirHunt: Bridging VLM Semantics and Continuous Planning for Efficient Aerial Object Navigation
|
arXiv:2601.12742v1 Announce Type: new Abstract: Recent advances in large Vision-Language Models (VLMs) have provided rich semantic understanding that empowers drones to search for open-set objects via natural language instructions. However, prior systems struggle to integrate VLMs into practical aerial systems due to orders-of-magnitude frequency mismatch between VLM inference and real-time planning, as well as VLMs' limited 3D scene understanding. They also lack a unified mechanism to balance semantic guidance with motion efficiency in large-scale environments. To address these challenges, we present AirHunt, an aerial object navigation system that efficiently locates open-set objects with zero-shot generalization in outdoor environments by seamlessly fusing VLM semantic reasoning with continuous path planning. AirHunt features a dual-pathway asynchronous architecture that establishes a synergistic interface between VLM reasoning and path planning, enabling continuous flight with adaptive semantic guidance that evolves through motion. Moreover, we propose an active dual-task reasoning module that exploits geometric and semantic redundancy to enable selective VLM querying, and a semantic-geometric coherent planning module that dynamically reconciles semantic priorities and motion efficiency in a unified framework, enabling seamless adaptation to environmental heterogeneity. We evaluate AirHunt across diverse object navigation tasks and environments, demonstrating a higher success rate with lower navigation error and reduced flight time compared to state-of-the-art methods. Real-world experiments further validate AirHunt's practical capability in complex and challenging environments. Code and dataset will be made publicly available before publication.
|
https://arxiv.org/abs/2601.12742
|
Academic Papers
|
svg
|
98cd906e2e1253ca608e4a4b2882d1f100cae795dce58b1ad9dd1948ee2c5f8b
|
2026-01-21T00:00:00-05:00
|
Vision Language Models for Optimization-Driven Intent Processing in Autonomous Networks
|
arXiv:2601.12744v1 Announce Type: new Abstract: Intent-Based Networking (IBN) allows operators to specify high-level network goals rather than low-level configurations. While recent work demonstrates that large language models can automate configuration tasks, a distinct class of intents requires generating optimization code to compute provably optimal solutions for traffic engineering, routing, and resource allocation. Current systems assume text-based intent expression, requiring operators to enumerate topologies and parameters in prose. Network practitioners naturally reason about structure through diagrams, yet whether Vision-Language Models (VLMs) can process annotated network sketches into correct optimization code remains unexplored. We present IntentOpt, a benchmark of 85 optimization problems across 17 categories, evaluating four VLMs (GPT-5-Mini, Claude-Haiku-4.5, Gemini-2.5-Flash, Llama-3.2-11B-Vision) under three prompting strategies on multimodal versus text-only inputs. Our evaluation shows that visual parameter extraction reduces execution success by 12-21 percentage points (pp), with GPT-5-Mini dropping from 93% to 72%. Program-of-thought prompting decreases performance by up to 13 pp, and open-source models lag behind closed-source ones, with Llama-3.2-11B-Vision reaching 18% compared to 75% for GPT-5-Mini. These results establish baseline capabilities and limitations of current VLMs for optimization code generation within an IBN system. We also demonstrate practical feasibility through a case study that deploys VLM-generated code to network testbed infrastructure using Model Context Protocol.
|
https://arxiv.org/abs/2601.12744
|
Academic Papers
|
svg
|
b77261960475961c224fc17168f49725236d0cf781af8cf6f231f33d31ecacd6
|
2026-01-21T00:00:00-05:00
|
A Graph Prompt Fine-Tuning Method for WSN Spatio-Temporal Correlation Anomaly Detection
|
arXiv:2601.12745v1 Announce Type: new Abstract: Anomaly detection over multi-temporal modal data in Wireless Sensor Networks (WSNs) provides an important guarantee for reliable network operation. Existing anomaly detection methods in such scenarios suffer from insufficient extraction of spatio-temporal correlation features, high annotation cost for anomaly categories, and imbalance of anomaly samples. In this paper, a graph neural network anomaly detection backbone incorporating spatio-temporal correlation features and a multi-task self-supervised "pre-training - graph prompting - fine-tuning" training strategy are designed for the characteristics of WSN graph-structured data. First, the backbone network is built by improving the Mamba model with a multi-scale strategy and an inter-modal fusion method, and combining it with a variational graph convolution module, enabling it to fully extract spatio-temporal correlation features in the multi-node, multi-temporal modal scenarios of WSNs. Second, we design a "pre-training" method with three self-supervised subtasks -- negative-free contrastive learning, prediction, and reconstruction -- to learn generic features of WSN data from unlabeled samples, and a "graph prompting - fine-tuning" mechanism that guides the pre-trained self-supervised model through parameter fine-tuning, thereby reducing training cost and enhancing detection generalization. The F1 scores obtained on a public dataset and an actually collected dataset reach up to 91.30% and 92.31%, respectively, demonstrating better detection performance and generalization ability than existing methods.
|
https://arxiv.org/abs/2601.12745
|
Academic Papers
|
svg
|
96526ae299ca04d112564821484e805f1661dd065e19569cd8f1858aaaa8a42e
|
2026-01-21T00:00:00-05:00
|
SSPFormer: Self-Supervised Pretrained Transformer for MRI Images
|
arXiv:2601.12747v1 Announce Type: new Abstract: The pre-trained transformer demonstrates remarkable generalization ability in natural image processing. However, directly transferring it to magnetic resonance images faces two key challenges: the inability to adapt to the specificity of medical anatomical structures and the limitations brought about by the privacy and scarcity of medical data. To address these issues, this paper proposes a Self-Supervised Pretrained Transformer (SSPFormer) for MRI images, which effectively learns domain-specific feature representations of medical images by leveraging unlabeled raw imaging data. To tackle the domain gap and data scarcity, we introduce inverse frequency projection masking, which prioritizes the reconstruction of high-frequency anatomical regions to enforce structure-aware representation learning. Simultaneously, to enhance robustness against real-world MRI artifacts, we employ frequency-weighted FFT noise enhancement that injects physiologically realistic noise into the Fourier domain. Together, these strategies enable the model to learn domain-invariant and artifact-robust features directly from raw scans. Through extensive experiments on segmentation, super-resolution, and denoising tasks, the proposed SSPFormer achieves state-of-the-art performance, fully verifying its ability to capture fine-grained MRI image fidelity and adapt to clinical application requirements.
|
https://arxiv.org/abs/2601.12747
|
Academic Papers
|
svg
|
8666bb73a7bea5e4298c4fb91001bcd5b041ba5d3c076d0ab9a457df0e9d58a2
|
2026-01-21T00:00:00-05:00
|
Towards Robust Process Reward Modeling via Noise-aware Learning
|
arXiv:2601.12748v1 Announce Type: new Abstract: Process Reward Models (PRMs) have achieved strong results in complex reasoning, but are bottlenecked by costly process-level supervision. A widely used alternative, Monte Carlo Estimation (MCE), defines process rewards as the probability that a policy model reaches the correct final answer from a given reasoning step. However, step correctness is an intrinsic property of the reasoning trajectory, and should be invariant to policy choice. Our empirical findings show that MCE produces policy-dependent rewards that induce label noise, including false positives that reward incorrect steps and false negatives that penalize correct ones. To address these challenges, we propose a two-stage framework to mitigate noisy supervision. In the labeling stage, we introduce a reflection-aware label correction mechanism that uses a large language model (LLM) as a judge to detect reflection and self-correction behaviors related to the current reasoning step, thereby suppressing overestimated rewards. In the training stage, we further propose a \underline{\textbf{N}}oise-\underline{\textbf{A}}ware \underline{\textbf{I}}terative \underline{\textbf{T}}raining framework that enables the PRM to progressively refine noisy labels based on its own confidence. Extensive experiments show that our method substantially improves step-level correctness discrimination, achieving up to a 27\% absolute gain in average F1 over PRMs trained with noisy supervision.
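For readers unfamiliar with MCE, the sketch below estimates a step-level reward as the fraction of policy rollouts from a reasoning prefix that reach the correct final answer. `policy` and `extract_answer` are hypothetical stand-ins for a real LLM and answer parser; the toy policy makes the policy-dependence of the resulting labels easy to see.

```python
# Minimal Monte Carlo Estimation of a process reward, assuming stand-in
# policy and answer-extraction callables.
import random

def mce_step_reward(prefix, correct_answer, policy, extract_answer, n_rollouts=16):
    """Estimate P(correct final answer | reasoning prefix) by sampling."""
    hits = 0
    for _ in range(n_rollouts):
        completion = policy(prefix)               # sample a continuation
        if extract_answer(completion) == correct_answer:
            hits += 1
    return hits / n_rollouts

# Toy demo with a fake "policy" that answers correctly 70% of the time,
# illustrating how the estimated reward depends on the policy -- the source
# of the policy-dependent label noise analyzed above.
toy_policy = lambda prefix: "42" if random.random() < 0.7 else "7"
reward = mce_step_reward("step 1: ...", "42", toy_policy, lambda s: s)
print("estimated step reward:", reward)
```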
|
https://arxiv.org/abs/2601.12748
|
Academic Papers
|
svg
|
d233dc691e196b61a7bf6b75f5f29b600acaeec7f18c028318f2432cab83b16a
|
2026-01-21T00:00:00-05:00
|
Efficient Local-to-Global Collaborative Perception via Joint Communication and Computation Optimization
|
arXiv:2601.12749v1 Announce Type: new Abstract: Autonomous driving relies on accurate perception to ensure safe driving. Collaborative perception improves accuracy by mitigating the sensing limitations of individual vehicles, such as limited perception range and occlusion-induced blind spots. However, collaborative perception often suffers from high communication overhead due to redundant data transmission, as well as increasing computation latency caused by excessive load as connected and autonomous vehicle (CAV) participation grows. To address these challenges, we propose a novel local-to-global collaborative perception framework (LGCP) to achieve collaboration in a communication- and computation-efficient manner. The road of interest is partitioned into non-overlapping areas, each of which is assigned a dedicated CAV group to perform localized perception. A designated leader in each group collects and fuses perception data from its members, and uploads the perception result to the roadside unit (RSU), establishing a link between local perception and global awareness. The RSU aggregates perception results from all groups and broadcasts a global view to all CAVs. LGCP employs a centralized scheduling strategy via the RSU, which assigns CAV groups to each area, schedules their transmissions, aggregates area-level local perception results, and propagates the global view to all CAVs. Experimental results demonstrate that the proposed LGCP framework achieves an average 44-fold reduction in the amount of data transmitted, while maintaining or even improving the overall collaborative performance.
|
https://arxiv.org/abs/2601.12749
|
Academic Papers
|
svg
|
ea756d14cde26f98474ca55a8d66a800713a28ba5e9bfbd3e51eb04e6d29b8c4
|
2026-01-21T00:00:00-05:00
|
Approximation Schemes for Sequential Hiring Problems
|
arXiv:2601.12750v1 Announce Type: new Abstract: The main contribution of this paper resides in providing novel algorithmic advances and analytical insights for the sequential hiring problem, a recently introduced dynamic optimization model where a firm adaptively fills a limited number of positions from a pool of applicants with known values and acceptance probabilities. While earlier research established a strong foundation -- notably an LP-based $(1 - \frac{e^{-k}k^k}{k!})$-approximation by Epstein and Ma (Operations Research, 2024) -- the attainability of superior approximation guarantees has remained a central open question. Our work addresses this challenge by establishing the first polynomial-time approximation scheme for sequential hiring, proposing an $O(n^{O(1)} \cdot T^{2^{\tilde{O}(1/\epsilon^{2})}})$-time construction of semi-adaptive policies whose expected reward is within factor $1 - \epsilon$ of optimal. To overcome the constant-factor optimality loss inherent to earlier literature, and to circumvent intrinsic representational barriers of adaptive policies, our approach is driven by the following innovations: -- The block-responsive paradigm: We introduce block-responsive policies, a new class of decision-making strategies, selecting ordered sets (blocks) of applicants rather than single individuals, while still allowing for internal reactivity. -- Adaptivity and efficiency: We prove that these policies can nearly match the performance of general adaptive policies while utilizing polynomially-sized decision trees. -- Efficient construction: By developing a recursive enumeration-based framework, we resolve the problematic ``few-positions'' regime, bypassing a fundamental hurdle that hindered previous approaches.
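The LP-based guarantee quoted above is easy to evaluate numerically; the short script below computes the factor 1 - e^(-k) k^k / k! for several position counts k, confirming it equals 1 - 1/e ≈ 0.632 at k = 1 and approaches 1 as k grows.

```python
# Numerical check of the approximation factor from Epstein and Ma's
# LP-based guarantee, as a function of the number of positions k.
import math

for k in [1, 2, 5, 10, 50]:
    factor = 1.0 - math.exp(-k) * k**k / math.factorial(k)
    print(f"k = {k:3d}: approximation factor = {factor:.4f}")
```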
|
https://arxiv.org/abs/2601.12750
|
Academic Papers
|
svg
|
a7372b710b0a2c2bb5440d80b300e9680bee445b787984062227fcc973796322
|
2026-01-21T00:00:00-05:00
|
A Boolean Function-Theoretic Framework for Expressivity in GNNs with Applications to Fair Graph Mining
|
arXiv:2601.12751v1 Announce Type: new Abstract: We propose a novel expressivity framework for Graph Neural Networks (GNNs) grounded in Boolean function theory, enabling a fine-grained analysis of their ability to capture complex subpopulation structures. We introduce the notion of \textit{Subpopulation Boolean Isomorphism} (SBI) as an invariant that strictly subsumes existing expressivity measures such as Weisfeiler-Lehman (WL), biconnectivity-based, and homomorphism-based frameworks. Our theoretical results identify Fourier degree, circuit class (AC$^0$, NC$^1$), and influence as key barriers to expressivity in fairness-aware GNNs. We design a circuit-traversal-based fairness algorithm capable of handling subpopulations defined by high-complexity Boolean functions, such as parity, which break existing baselines. Experiments on real-world graphs show that our method achieves low fairness gaps across intersectional groups where state-of-the-art methods fail, providing the first principled treatment of GNN expressivity tailored to fairness.
|
https://arxiv.org/abs/2601.12751
|
Academic Papers
|
svg
|
c1d8a9719132218417f619803a604beed5fa94a0fb57f0d89d017827a1d27da0
|
2026-01-21T00:00:00-05:00
|
SoundPlot: An Open-Source Framework for Birdsong Acoustic Analysis and Neural Synthesis with Interactive 3D Visualization
|
arXiv:2601.12752v1 Announce Type: new Abstract: We present SoundPlot, an open-source framework for analyzing avian vocalizations through acoustic feature extraction, dimensionality reduction, and neural audio synthesis. The system transforms audio signals into a multi-dimensional acoustic feature space, enabling real-time visualization of temporal dynamics in 3D using web-based interactive graphics. Our framework implements a complete analysis-synthesis pipeline that extracts spectral features (centroid, bandwidth, contrast), pitch contours via probabilistic YIN (pYIN), and mel-frequency cepstral coefficients (MFCCs), mapping them to a unified timbre space for visualization. Audio reconstruction employs the Griffin-Lim phase estimation algorithm applied to mel spectrograms. The accompanying Three.js-based interface provides dual-viewport visualization comparing original and synthesized audio trajectories with independent playback controls. We demonstrate the framework's capabilities through comprehensive waveform analysis, spectrogram comparisons, and feature space evaluation using Principal Component Analysis (PCA). Quantitative evaluation shows mel spectrogram correlation scores exceeding 0.92, indicating high-fidelity preservation of perceptual acoustic structure. SoundPlot is released under the MIT License to facilitate research in bioacoustics, audio signal processing, and computational ethology.
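The analysis-synthesis round trip described above can be sketched with standard librosa calls: compute a mel spectrogram, reconstruct audio from it via Griffin-Lim phase estimation, and correlate the original and reconstructed spectrograms. The synthetic chirp below is a stand-in for a real birdsong recording, and the pipeline is a simplification of the full SoundPlot feature set.

```python
# Mel-spectrogram analysis and Griffin-Lim resynthesis round trip,
# using a synthetic chirp as a stand-in for birdsong. Requires librosa.
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = np.sin(2 * np.pi * (2000 + 1500 * t) * t).astype(np.float32)  # rising chirp

# Analysis: mel spectrogram of the original signal.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)

# Synthesis: invert the mel spectrogram (Griffin-Lim phase estimation inside).
y_hat = librosa.feature.inverse.mel_to_audio(mel, sr=sr)

# Evaluation: correlation between original and reconstructed mel spectrograms,
# analogous to the >0.92 correlation scores reported in the abstract.
mel_hat = librosa.feature.melspectrogram(y=y_hat, sr=sr, n_mels=128)
n = min(mel.shape[1], mel_hat.shape[1])
corr = np.corrcoef(mel[:, :n].ravel(), mel_hat[:, :n].ravel())[0, 1]
print(f"mel spectrogram correlation: {corr:.3f}")
```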
|
https://arxiv.org/abs/2601.12752
|
Academic Papers
|
svg
|
2b74664e3246f4c89d415e5d831b896a63674dd4dc37f234d6c2ef93d9a5884e
|
2026-01-21T00:00:00-05:00
|
PAIR-SAFE: A Paired-Agent Approach for Runtime Auditing and Refining AI-Mediated Mental Health Support
|
arXiv:2601.12754v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used for mental health support, yet they can produce responses that are overly directive, inconsistent, or clinically misaligned, particularly in sensitive or high-risk contexts. Existing approaches to mitigating these risks largely rely on implicit alignment through training or prompting, offering limited transparency and runtime accountability. We introduce PAIR-SAFE, a paired-agent framework for auditing and refining AI-generated mental health support that integrates a Responder agent with a supervisory Judge agent grounded in the clinically validated Motivational Interviewing Treatment Integrity (MITI-4) framework. The Judgeaudits each response and provides structuredALLOW or REVISE decisions that guide runtime response refinement. We simulate counseling interactions using a support-seeker simulator derived from human-annotated motivational interviewing data. We find that Judge-supervised interactions show significant improvements in key MITI dimensions, including Partnership, Seek Collaboration, and overall Relational quality. Our quantitative findings are supported by qualitative expert evaluation, which further highlights the nuances of runtime supervision. Together, our results reveal that such pairedagent approach can provide clinically grounded auditing and refinement for AI-assisted conversational mental health support.
|
https://arxiv.org/abs/2601.12754
|
Academic Papers
|
svg
|
17eda7325082f9f9f1056d751425683660efee92820b30434bcc879d5a2e6231
|
2026-01-21T00:00:00-05:00
|
VISPA: Pluralistic Alignment via Automatic Value Selection and Activation
|
arXiv:2601.12758v1 Announce Type: new Abstract: As large language models are increasingly used in high-stakes domains, it is essential that their outputs reflect not merely the average human preference, but rather a range of varying perspectives. Achieving such pluralism, however, remains challenging. Existing approaches consider limited values or rely on prompt-level interventions, lacking value control and representation. To address this, we introduce VISPA, a training-free pluralistic alignment framework that enables direct control over value expression through dynamic value selection and internal model activation steering. Across extensive empirical studies spanning multiple models and evaluation settings, we show VISPA is performant across all pluralistic alignment modes in healthcare and beyond. Further analysis reveals VISPA is adaptable to different steering initiations, models, and/or values. These results suggest that pluralistic alignment can be achieved through internal activation mechanisms, offering a scalable path toward language models that serve all.
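Activation steering of the kind VISPA builds on can be sketched with a PyTorch forward hook that shifts a hidden layer's activations along a value direction at inference time, with no training. The toy two-layer model and random steering vector below are hypothetical; a real system would derive the direction from contrastive prompts for the selected value.

```python
# Minimal activation-steering sketch: a forward hook adds a "value"
# direction to one layer's activations. Model and vector are toy stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

steer = torch.randn(32)            # stand-in value direction
alpha = 2.0                        # steering strength

def add_steering(module, inputs, output):
    # Returning a tensor from a forward hook replaces the layer's output.
    return output + alpha * steer

handle = model[0].register_forward_hook(add_steering)

x = torch.randn(1, 16)
with torch.no_grad():
    print("steered logits:", model(x))
handle.remove()                    # restore the unsteered model
```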
|
https://arxiv.org/abs/2601.12758
|
Academic Papers
|
svg
|
70c5e5d33c3cbbff59198050b594559b3a17cb535c1c0350f65d5af0fe95f0e4
|
2026-01-21T00:00:00-05:00
|
Moaw: Unleashing Motion Awareness for Video Diffusion Models
|
arXiv:2601.12761v1 Announce Type: new Abstract: Video diffusion models, trained on large-scale datasets, naturally capture correspondences of shared features across frames. Recent works have exploited this property for tasks such as optical flow prediction and tracking in a zero-shot setting. Motivated by these findings, we investigate whether supervised training can more fully harness the tracking capability of video diffusion models. To this end, we propose Moaw, a framework that unleashes motion awareness for video diffusion models and leverages it to facilitate motion transfer. Specifically, we train a diffusion model for motion perception, shifting its modality from image-to-video generation to video-to-dense-tracking. We then construct a motion-labeled dataset to identify features that encode the strongest motion information, and inject them into a structurally identical video generation model. Owing to the homogeneity between the two networks, these features can be naturally adapted in a zero-shot manner, enabling motion transfer without additional adapters. Our work provides a new paradigm for bridging generative modeling and motion understanding, paving the way for more unified and controllable video learning frameworks.
|
https://arxiv.org/abs/2601.12761
|
Academic Papers
|
svg
|
2bbec37bf2664de63297a24f46f29d7a8e64975a242dfe8a956e2079666d7d29
|
2026-01-21T00:00:00-05:00
|
Teaching LLMs to Learn Tool Trialing and Execution through Environment Interaction
|
arXiv:2601.12762v1 Announce Type: new Abstract: Equipping Large Language Models (LLMs) with external tools enables them to solve complex real-world problems. However, the robustness of existing methods remains a critical challenge when confronting novel or evolving tools. Existing trajectory-centric paradigms primarily rely on memorizing static solution paths during training, which limits the ability of LLMs to generalize tool usage to newly introduced or previously unseen tools. In this paper, we propose ToolMaster, a framework that shifts tool use from imitating golden tool-calling trajectories to actively learning tool usage through interaction with the environment. To optimize LLMs for tool planning and invocation, ToolMaster adopts a trial-and-execution paradigm, which trains LLMs to first imitate teacher-generated trajectories containing explicit tool trials and self-correction, followed by reinforcement learning to coordinate the trial and execution phases jointly. This process enables agents to autonomously explore correct tool usage by actively interacting with environments and forming experiential knowledge that benefits tool execution. Experimental results demonstrate that ToolMaster significantly outperforms existing baselines in terms of generalization and robustness across unseen or unfamiliar tools. All code and data are available at https://github.com/NEUIR/ToolMaster.
|
https://arxiv.org/abs/2601.12762
|
Academic Papers
|
svg
|
9017a2472e80476e5dda84e372154b7da293e130c0db25b0f9048a6cd2e16c85
|
2026-01-21T00:00:00-05:00
|
Towards Unbiased Source-Free Object Detection via Vision Foundation Models
|
arXiv:2601.12765v1 Announce Type: new Abstract: Source-Free Object Detection (SFOD) has garnered much attention in recent years by eliminating the need for source-domain data in cross-domain tasks, but existing SFOD methods suffer from the Source Bias problem, i.e., the adapted model remains skewed towards the source domain, leading to poor generalization and error accumulation during self-training. To overcome this challenge, we propose Debiased Source-free Object Detection (DSOD), a novel VFM-assisted SFOD framework that can effectively mitigate source bias with the help of powerful VFMs. Specifically, we propose Unified Feature Injection (UFI) module that integrates VFM features into the CNN backbone through Simple-Scale Extension (SSE) and Domain-aware Adaptive Weighting (DAAW). Then, we propose Semantic-aware Feature Regularization (SAFR) that constrains feature learning to prevent overfitting to source domain characteristics. Furthermore, we propose a VFM-free variant, termed DSOD-distill, for computation-restricted scenarios through a novel Dual-Teacher distillation scheme. Extensive experiments on multiple benchmarks demonstrate that DSOD outperforms state-of-the-art SFOD methods, achieving 48.1% AP on Normal-to-Foggy weather adaptation, 39.3% AP on Cross-scene adaptation, and 61.4% AP on Synthetic-to-Real adaptation.
|
https://arxiv.org/abs/2601.12765
|
Academic Papers
|
svg
|
f20ac2085a396b50bd9baa484e1f992fd8e32a72fb37941d59e34fafd70efe1e
|
2026-01-21T00:00:00-05:00
|
Spatial-VLN: Zero-Shot Vision-and-Language Navigation With Explicit Spatial Perception and Exploration
|
arXiv:2601.12766v1 Announce Type: new Abstract: Zero-shot Vision-and-Language Navigation (VLN) agents leveraging Large Language Models (LLMs) excel in generalization but suffer from insufficient spatial perception. Focusing on complex continuous environments, we categorize key perceptual bottlenecks into three spatial challenges: door interaction, multi-room navigation, and ambiguous instruction execution, where existing methods consistently suffer high failure rates. We present Spatial-VLN, a perception-guided exploration framework designed to overcome these challenges. The framework consists of two main modules. The Spatial Perception Enhancement (SPE) module integrates panoramic filtering with specialized door and region experts to produce spatially coherent, cross-view consistent perceptual representations. Building on this foundation, our Explored Multi-expert Reasoning (EMR) module uses parallel LLM experts to address waypoint-level semantics and region-level spatial transitions. When discrepancies arise between expert predictions, a query-and-explore mechanism is activated, prompting the agent to actively probe critical areas and resolve perceptual ambiguities. Experiments on VLN-CE demonstrate that Spatial-VLN achieves state-of-the-art performance using only low-cost LLMs. Furthermore, to validate real-world applicability, we introduce a value-based waypoint sampling strategy that effectively bridges the Sim2Real gap. Extensive real-world evaluations confirm that our framework delivers superior generalization and robustness in complex environments. Our codes and videos are available at https://yueluhhxx.github.io/Spatial-VLN-web/.
|
https://arxiv.org/abs/2601.12766
|
Academic Papers
|
svg
|
8b68bcec18ac1e8c1234b9171879daf77f8446904ef6ea793b74afd7709b6f00
|
2026-01-21T00:00:00-05:00
|
Delving Deeper: Hierarchical Visual Perception for Robust Video-Text Retrieval
|
arXiv:2601.12768v1 Announce Type: new Abstract: Video-text retrieval (VTR) aims to locate relevant videos using natural language queries. Current methods, often based on pre-trained models like CLIP, are hindered by video's inherent redundancy and their reliance on coarse, final-layer features, limiting matching accuracy. To address this, we introduce the HVP-Net (Hierarchical Visual Perception Network), a framework that mines richer video semantics by extracting and refining features from multiple intermediate layers of a vision encoder. Our approach progressively distills salient visual concepts from raw patch-tokens at different semantic levels, mitigating redundancy while preserving crucial details for alignment. This results in a more robust video representation, leading to new state-of-the-art performance on challenging benchmarks including MSRVTT, DiDeMo, and ActivityNet. Our work validates the effectiveness of exploiting hierarchical features for advancing video-text retrieval. Our codes are available at https://github.com/boyun-zhang/HVP-Net.
|
https://arxiv.org/abs/2601.12768
|
Academic Papers
|
svg
|
1b2dcf0c1a38cf03e81379a09a138bf5b204775f1dff6ef4d70b71952347ffb6
|
2026-01-21T00:00:00-05:00
|
Generalizable and Animatable 3D Full-Head Gaussian Avatar from a Single Image
|
arXiv:2601.12770v1 Announce Type: new Abstract: Building 3D animatable head avatars from a single image is an important yet challenging problem. Existing methods generally collapse under large camera pose variations, compromising the realism of 3D avatars. In this work, we propose a new framework to tackle the novel setting of one-shot 3D full-head animatable avatar reconstruction in a single feed-forward pass, enabling real-time animation and simultaneous 360$^\circ$ rendering views. To facilitate efficient animation control, we model 3D head avatars with Gaussian primitives embedded on the surface of a parametric face model within the UV space. To obtain knowledge of full-head geometry and textures, we leverage rich 3D full-head priors within a pretrained 3D generative adversarial network (GAN) for global full-head feature extraction and multi-view supervision. To increase the fidelity of the 3D reconstruction of the input image, we take advantage of the symmetric nature of the UV space and human faces to fuse local fine-grained input image features with the global full-head textures. Extensive experiments demonstrate the effectiveness of our method, achieving high-quality 3D full-head modeling as well as real-time animation, thereby improving the realism of 3D talking avatars.
|
https://arxiv.org/abs/2601.12770
|
Academic Papers
|
svg
|
4829df39ef051dd5d099671aa7b138c63b11a6ecd6ea97429a6e8170cff20fec
|
2026-01-21T00:00:00-05:00
|
Who Does This Name Remind You of? Nationality Prediction via Large Language Model Associative Memory
|
arXiv:2601.12771v1 Announce Type: new Abstract: Large language models (LLMs) possess extensive world knowledge, yet methods for effectively eliciting this knowledge remain underexplored. Nationality and region prediction tasks require understanding of not only linguistic features but also cultural and historical background, making LLM world knowledge particularly valuable. However, conventional LLM prompting methods rely on direct reasoning approaches, which have limitations in applying abstract linguistic rules. We propose LLM Associative Memory Agents (LAMA), a novel framework that leverages LLM world knowledge as associative memory. Rather than directly inferring nationality from names, LAMA recalls famous individuals with the same name and aggregates their nationalities through indirect reasoning. A dual-agent architecture comprising a Person Agent and a Media Agent, specialized in different knowledge domains, recalls famous individuals in parallel, generating Top-1 predictions through voting and Top-K predictions through conditional completion. On a 99-country nationality prediction task, LAMA achieved 0.817 accuracy, substantially outperforming conventional LLM prompting methods and neural models. Our experiments reveal that LLMs exhibit higher reliability in recalling concrete examples than in abstract reasoning, that recall-based approaches are robust to low-frequency nationalities independent of data frequency distributions, and that the dual-agent architecture functions complementarily to produce synergistic effects. These results demonstrate the effectiveness of a new multi-agent system that retrieves and aggregates LLM knowledge rather than prompting reasoning.
|
https://arxiv.org/abs/2601.12771
|
Academic Papers
|
svg
|
636031575d235058af40ddbaacc6c5dbc5588d2bdce6940d175e58e20ec2ab35
|
2026-01-21T00:00:00-05:00
|
SDN-Blockchain Based Security Routing for UAV Communication via Reinforcement Learning
|
arXiv:2601.12774v1 Announce Type: new Abstract: The unmanned aerial vehicle (UAV) network plays an important role in emergency communications. However, it is challenging to design reliable routing strategies that ensure low latency, energy efficiency, and security in dynamic and attack-prone environments. To this end, we design a secure routing architecture integrating software-defined networking (SDN) for centralized control and blockchain for tamper-proof trust management. In particular, a novel security degree metric is introduced to quantify the UAV trustworthiness. Based on this architecture, we propose a beam search-proximal policy optimization (BSPPO) algorithm, where beam search (BS) pre-screens the high-security candidate paths, and proximal policy optimization (PPO) performs hop-by-hop routing decisions to support dynamic rerouting upon attack detection. Finally, extensive simulations under varying attack densities, packet sizes, and rerouting events demonstrate that BSPPO outperforms PPO, BS-Q learning, and BS-actor critic in terms of delay, energy consumption, and transmission success rate, showing outstanding robustness and adaptability.
|
https://arxiv.org/abs/2601.12774
|
Academic Papers
|
svg
|
e22f5f45c0a1b5474e707b9a76dd31009077d1adcb734be4515471fbb10aff1c
|
2026-01-21T00:00:00-05:00
|
Eddy-Resolving Global Ocean Forecasting with Multi-Scale Graph Neural Networks
|
arXiv:2601.12775v1 Announce Type: new Abstract: Research on data-driven ocean models has progressed rapidly in recent years; however, the application of these models to global eddy-resolving ocean forecasting remains limited. The accurate representation of ocean dynamics across a wide range of spatial scales remains a major challenge in such applications. This study proposes a multi-scale graph neural network-based ocean model for 10-day global forecasting that improves short-term prediction skill and enhances the representation of multi-scale ocean variability. The model employs an encoder-processor-decoder architecture and uses two spherical meshes with different resolutions to better capture the multi-scale nature of ocean dynamics. In addition, the model incorporates surface atmospheric variables along with ocean state variables as node inputs to improve short-term prediction accuracy by representing atmospheric forcing. Evaluation using surface kinetic energy spectra and case studies shows that the model accurately represents a broad range of spatial scales, while root mean square error comparisons demonstrate improved skill in short-term predictions. These results indicate that the proposed model delivers more accurate short-term forecasts and improved representation of multi-scale ocean dynamics, thereby highlighting its potential to advance data-driven, eddy-resolving global ocean forecasting.
|
https://arxiv.org/abs/2601.12775
|
Academic Papers
|
svg
|
dd47681b2287e094f0be31243125c971f73648601aad015ad1202ca3689f6b1c
|
2026-01-21T00:00:00-05:00
|
High-order Lagrange multiplier schemes for general Hamiltonian PDEs
|
arXiv:2601.12776v1 Announce Type: new Abstract: In this paper, we introduce a Lagrange multiplier approach to construct linearly implicit energy-preserving schemes of arbitrary order for general Hamiltonian PDEs. Unlike the widely used auxiliary variable methods, this novel approach does not require the nonlinear part of the energy to be bounded from below, thereby offering broader applicability. Moreover, this approach preserves the original energy exactly at both the continuous and discrete levels, as opposed to a modified energy preserved by the auxiliary variable methods. Rigorous proofs are provided for the energy conservation and numerical accuracy of all derived schemes. The trade-off for these advantages is the need to solve a nonlinear algebraic equation to determine the Lagrange multiplier. Nevertheless, numerical experiments show that the associated computational cost is generally not dominant, indicating that the new schemes retain computational efficiency comparable to the auxiliary variable-based schemes. Numerical results demonstrate the efficiency, accuracy, and structure-preserving properties of the proposed schemes.
|
https://arxiv.org/abs/2601.12776
|
Academic Papers
|
svg
|
b6671e967d51a5134ab034f606c09b5f7edfae9cae512ce726c9f3ea98e534f8
|
2026-01-21T00:00:00-05:00
|
Open Vocabulary Panoptic Segmentation With Retrieval Augmentation
|
arXiv:2601.12779v1 Announce Type: new Abstract: Given an input image and set of class names, panoptic segmentation aims to label each pixel in an image with class labels and instance labels. In comparison, Open Vocabulary Panoptic Segmentation aims to facilitate the segmentation of arbitrary classes according to user input. The challenge is that a panoptic segmentation system trained on a particular dataset typically does not generalize well to unseen classes beyond the training data. In this work, we propose RetCLIP, a retrieval-augmented panoptic segmentation method that improves performance on unseen classes. In particular, we construct a masked segment feature database using paired image-text data. At inference time, we use masked segment features from the input image as query keys to retrieve similar features and associated class labels from the database. Classification scores for the masked segment are assigned based on the similarity between query features and retrieved features. The retrieval-based classification scores are combined with CLIP-based scores to produce the final output. We incorporate our solution with a previous SOTA method (FC-CLIP). When trained on COCO, the proposed method demonstrates 30.9 PQ, 19.3 mAP, 44.0 mIoU on the ADE20k dataset, achieving +4.5 PQ, +2.5 mAP, +10.0 mIoU absolute improvement over the baseline.
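The score-fusion step described above can be sketched in a few lines of NumPy: cosine similarity retrieves the nearest stored segment features, their labels cast similarity-weighted votes, and the result is blended with a CLIP-based score. The random features and the blending weight lambda below are illustrative stand-ins, not the paper's exact configuration.

```python
# Retrieval-augmented mask classification sketch with stand-in embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_db, dim, n_classes, k = 500, 64, 10, 5

# Stand-in database of masked segment features with class labels.
db_feats = rng.normal(size=(n_db, dim))
db_feats /= np.linalg.norm(db_feats, axis=1, keepdims=True)
db_labels = rng.integers(0, n_classes, size=n_db)

query = rng.normal(size=dim)
query /= np.linalg.norm(query)

# Retrieve the k most similar segment features from the database.
sims = db_feats @ query
top = np.argsort(sims)[-k:]

# Similarity-weighted vote over the retrieved labels.
retrieval_scores = np.zeros(n_classes)
for i in top:
    retrieval_scores[db_labels[i]] += sims[i]
retrieval_scores /= retrieval_scores.sum()

# Blend with a (stand-in) CLIP score; lambda balances the two sources.
clip_scores = rng.dirichlet(np.ones(n_classes))
lam = 0.5
final = lam * retrieval_scores + (1 - lam) * clip_scores
print("predicted class:", final.argmax())
```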
|
https://arxiv.org/abs/2601.12779
|
Academic Papers
|
svg
|
ebdd86c61667ab972c40b4e68845c1e5dfd9892aaca7cdb6c7517305841694df
|
2026-01-21T00:00:00-05:00
|
Extended Gabidulin-Kronecker Product Codes and Their Application to Cryptosystems
|
arXiv:2601.12780v1 Announce Type: new Abstract: In this paper, we initiate the study of Extended Gabidulin codes with a Kronecker product structure and propose three enhanced variants of the Rank Quasi-Cyclic (RQC) (Melchor et al., IEEE IT, 2018) cryptosystem. First, we establish precise bounds on the minimum rank distance of Gabidulin-Kronecker product codes under two distinct parameter regimes. Specifically, when $n_{1}=k_{1}$ and $n_{2}=m<n_{1}n_{2}$, the minimum rank distance is exactly $n_{2}-k_{2}+1$. This yields a new family of Maximum Rank Distance (MRD) codes, which are distinct from classical Gabidulin codes. For the case of $k_{1}\leq n_{1},k_{2}\leq n_{2},n_{1}n_{2}\leq m$, the minimum rank distance $d$ of Gabidulin-Kronecker product codes satisfies a tight upper and lower bound, i.e., $n_{2}-k_{2}+1 \leq d \leq (n_{1}-k_{1}+1)(n_{2}-k_{2}+1)$. Second, we introduce a new class of decodable rank-metric codes, namely Extended Gabidulin-Kronecker product (EGK) codes, which generalize the structure of Gabidulin-Kronecker product (GK) codes. We also propose a decoding algorithm that directly retrieves the codeword without recovering the error vector, thus improving efficiency. This algorithm achieves zero decoding failure probability when the error weight is within its correction capability. Third, we propose three enhanced variants of the RQC cryptosystem based on EGK codes, each offering a distinct trade-off between security and efficiency. For 128-bit security, all variants achieve significant reductions in public key size compared to the Multi-UR-AG (Bidoux et al., IEEE IT, 2024) while ensuring zero decryption failure probability--a key security advantage over many existing rank-based schemes.
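The Kronecker product construction at the heart of GK codes can be sketched directly with np.kron: the product code's generator matrix is the Kronecker product of the component generators. The example below uses a small binary toy for shape intuition only; the paper works in the rank metric over extension fields $\mathbb{F}_{q^m}$, not the Hamming metric over $\mathbb{F}_2$.

```python
# Kronecker-product generator matrix sketch over a small prime field,
# for shape intuition only (the paper's codes live in the rank metric).
import numpy as np

q = 2
G1 = np.array([[1, 0, 1],
               [0, 1, 1]]) % q        # [3,2] component code
G2 = np.array([[1, 1, 1, 1]]) % q     # [4,1] repetition code

# Generator of the product code: a [n1*n2, k1*k2] = [12, 2] code.
G = np.kron(G1, G2) % q
print("product generator shape:", G.shape)

# Encode a message: the codeword inherits the k1*k2-dimensional structure.
msg = np.array([1, 1])
codeword = (msg @ G) % q
print("codeword:", codeword)
```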
|
https://arxiv.org/abs/2601.12780
|
Academic Papers
|
svg
|
35b82bd70aefb2f87ec666d96d2639de697d885babe681b7f6b83352d150c10e
|
2026-01-21T00:00:00-05:00
|
VIRO: Robust and Efficient Neuro-Symbolic Reasoning with Verification for Referring Expression Comprehension
|
arXiv:2601.12781v1 Announce Type: new Abstract: Referring Expression Comprehension (REC) aims to localize the image region corresponding to a natural-language query. Recent neuro-symbolic REC approaches leverage large language models (LLMs) and vision-language models (VLMs) to perform compositional reasoning, decomposing queries into structured programs and executing them step-by-step. While such approaches achieve interpretable reasoning and strong zero-shot generalization, they assume that intermediate reasoning steps are accurate. However, this assumption causes cascading errors: false detections and invalid relations propagate through the reasoning chain, yielding high-confidence false positives even when no target is present in the image. To address this limitation, we introduce Verification-Integrated Reasoning Operators (VIRO), a neuro-symbolic framework that embeds lightweight operator-level verifiers within reasoning steps. Each operator executes and validates its output, such as object existence or spatial relationship, thereby allowing the system to robustly handle no-target cases when verification conditions are not met. Our framework achieves state-of-the-art performance, reaching 61.1% balanced accuracy across target-present and no-target settings, and demonstrates generalization to real-world egocentric data. Furthermore, VIRO shows superior computational efficiency in terms of throughput, high reliability with a program failure rate of less than 0.3%, and scalability through decoupled program generation from execution.
|
https://arxiv.org/abs/2601.12781
|
Academic Papers
|
svg
|
472e0655b116a110a09ced5e34e8bf039dea2cf1f8f6a490f85e39e5bea4bf85
|
2026-01-21T00:00:00-05:00
|
Sensing-Limited Control of Noiseless Linear Systems Under Nonlinear Observations
|
arXiv:2601.12782v1 Announce Type: new Abstract: This paper investigates the fundamental information-theoretic limits for the control and sensing of noiseless linear dynamical systems subject to a broad class of nonlinear observations. We analyze the interactions between the control and sensing components by characterizing the minimum information flow required for stability. Specifically, we derive necessary conditions for mean-square observability and stabilizability, demonstrating that the average directed information rate from the state to the observations must exceed the intrinsic expansion rate of the unstable dynamics. Furthermore, to address the challenges posed by non-Gaussian distributions inherent to nonlinear observation channels, we establish sufficient conditions by imposing regularity assumptions, specifically log-concavity, on the system's probabilistic components. We show that under these conditions, the divergence of differential entropy implies the convergence of the estimation error, thereby closing the gap between information-theoretic bounds and estimation performance. By establishing these results, we unveil the fundamental performance limits imposed by the sensing layer, extending classical data-rate constraints to the more challenging regime of nonlinear observation models.
|
https://arxiv.org/abs/2601.12782
|
Academic Papers
|
svg
|
0c2776707da299f541dfff460b3745ffa927a945909c7445c4fc266ead8888f1
|
2026-01-21T00:00:00-05:00
|
Unleashing Efficient Asynchronous RL Post-Training via Staleness-Constrained Rollout Coordination
|
arXiv:2601.12784v1 Announce Type: new Abstract: Reinforcement learning (RL) post-training has become pivotal for enhancing the capabilities of modern large models. A recent trend is to develop RL systems with a fully disaggregated architecture, which decouples the three RL phases (rollout, reward, and training) onto separate resources and executes them asynchronously. However, two critical data-level concerns arise: (1) asynchronous execution leads to data staleness in trajectories (the data generated by rollout) as the model parameters used in rollout may not be up to date, which impairs RL convergence; and (2) the length variation of trajectories introduces severe data skewness, leading to workload imbalance and degraded system performance. Existing systems fail to address these two concerns in a unified manner. Techniques that tightly control data staleness often constrain effective data skewness mitigation, while aggressive data skewness mitigation tends to exacerbate data staleness. As a result, systems are forced to trade off convergence for performance, or vice versa. To address this, we propose StaleFlow, an RL post-training system that jointly tackles data staleness and skewness. First, to control staleness, StaleFlow introduces a global consistency protocol that tracks the full lifecycle of each trajectory and constrains staleness. Second, to mitigate skewness, StaleFlow re-designs the RL system architecture by constructing data servers for trajectories and parameters to achieve flexible rollout coordination. Subsequently, we develop a suite of staleness-aware, throughput-oriented strategies to enhance system performance. Evaluations show that StaleFlow achieves up to 1.42-2.68$\times$ (1.17-2.01$\times$ on average) higher throughput than state-of-the-art systems, without compromising convergence.
|
https://arxiv.org/abs/2601.12784
|
Academic Papers
|
svg
|
d9501d7f5c896c3573f1921bd4dfb7d8f890bf706c99b0e8e20e2322f2fc8d03
|
2026-01-21T00:00:00-05:00
|
Distilling Time Series Foundation Models for Efficient Forecasting
|
arXiv:2601.12785v1 Announce Type: new Abstract: Time Series foundation models (TSFMs) deliver strong forecasting performance through large-scale pretraining, but their large parameter sizes make deployment costly. While knowledge distillation offers a natural and effective approach for model compression, techniques developed for general machine learning tasks are not directly applicable to time series forecasting due to its unique characteristics. To address this, we present DistilTS, the first distillation framework specifically designed for TSFMs. DistilTS addresses two key challenges: (1) task difficulty discrepancy, specific to forecasting, where uniform weighting makes optimization dominated by easier short-term horizons, while long-term horizons receive weaker supervision; and (2) architecture discrepancy, a general challenge in distillation, for which we design an alignment mechanism for time series forecasting. To overcome these issues, DistilTS introduces horizon-weighted objectives to balance learning across horizons, and a temporal alignment strategy that reduces architectural mismatch, enabling compact models. Experiments on multiple benchmarks demonstrate that DistilTS achieves forecasting performance comparable to full-sized TSFMs, while reducing parameters to as little as 1/150 of the original and accelerating inference by up to 6000x. Code is available at: https://github.com/itsnotacie/DistilTS-ICASSP2026.
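A horizon-weighted distillation objective in the spirit described above can be sketched as a per-step squared error between student and teacher forecasts, reweighted so that long-horizon steps contribute more. The linear weighting below is an illustrative assumption, not the paper's exact scheme.

```python
# Horizon-weighted distillation loss sketch; the linear weight schedule
# is a hypothetical illustration of rebalancing supervision across horizons.
import torch

def horizon_weighted_loss(student_pred, teacher_pred):
    # student_pred, teacher_pred: (batch, horizon) forecast tensors
    horizon = student_pred.shape[1]
    # Weights grow linearly with the forecast step and are normalized,
    # so distant (harder) steps receive stronger supervision.
    w = torch.arange(1, horizon + 1, dtype=student_pred.dtype)
    w = w / w.sum()
    per_step = ((student_pred - teacher_pred) ** 2).mean(dim=0)  # (horizon,)
    return (w * per_step).sum()

student = torch.randn(8, 96)   # compact student forecasts
teacher = torch.randn(8, 96)   # frozen TSFM teacher forecasts
print("loss:", horizon_weighted_loss(student, teacher).item())
```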
|
https://arxiv.org/abs/2601.12785
|
Academic Papers
|
svg
|
1b9f5b18a53613dbd094ff18c0b79d0ed4a382be46888424ed6b6604e2e7fad8
|
2026-01-21T00:00:00-05:00
|
DUAP: Dual-task Universal Adversarial Perturbations Against Voice Control Systems
|
arXiv:2601.12786v1 Announce Type: new Abstract: Modern Voice Control Systems (VCS) rely on the collaboration of Automatic Speech Recognition (ASR) and Speaker Recognition (SR) for secure interaction. However, prior adversarial attacks typically target these tasks in isolation, overlooking the coupled decision pipeline in real-world scenarios. Consequently, single-task attacks often fail to pose a practical threat. To fill this gap, we first utilize gradient analysis to reveal that ASR and SR exhibit no inherent conflicts. Building on this, we propose Dual-task Universal Adversarial Perturbation (DUAP). Specifically, DUAP employs a targeted surrogate objective to effectively disrupt ASR transcription and introduces a Dynamic Normalized Ensemble (DNE) strategy to enhance transferability across diverse SR models. Furthermore, we incorporate psychoacoustic masking to ensure perturbation imperceptibility. Extensive evaluations across five ASR and six SR models demonstrate that DUAP achieves high simultaneous attack success rates and superior imperceptibility, significantly outperforming existing single-task baselines.
|
https://arxiv.org/abs/2601.12786
|
Academic Papers
|
svg
|
fea2717f42333bcbb97b4947f9987dbc771d5cc5f45c10909a1b70a2f669e38a
|
2026-01-21T00:00:00-05:00
|
FocusNav: Spatial Selective Attention with Waypoint Guidance for Humanoid Local Navigation
|
arXiv:2601.12790v1 Announce Type: new Abstract: Robust local navigation in unstructured and dynamic environments remains a significant challenge for humanoid robots, requiring a delicate balance between long-range navigation targets and immediate motion stability. In this paper, we propose FocusNav, a spatial selective attention framework that adaptively modulates the robot's perceptual field based on navigational intent and real-time stability. FocusNav features a Waypoint-Guided Spatial Cross-Attention (WGSCA) mechanism that anchors environmental feature aggregation to a sequence of predicted collision-free waypoints, ensuring task-relevant perception along the planned trajectory. To enhance robustness in complex terrains, the Stability-Aware Selective Gating (SASG) module autonomously truncates distal information when detecting instability, compelling the policy to prioritize immediate foothold safety. Extensive experiments on the Unitree G1 humanoid robot demonstrate that FocusNav significantly improves navigation success rates in challenging scenarios, outperforming baselines in both collision avoidance and motion stability, achieving robust navigation in dynamic and complex environments.
|
https://arxiv.org/abs/2601.12790
|
Academic Papers
|
svg
|
430040a0c7de5e70b6876104943aff38c801070f21741d96505aa9ca412d70e6
|
2026-01-21T00:00:00-05:00
|
SKANet: A Cognitive Dual-Stream Framework with Adaptive Modality Fusion for Robust Compound GNSS Interference Classification
|
arXiv:2601.12791v1 Announce Type: new Abstract: As the electromagnetic environment becomes increasingly complex, Global Navigation Satellite Systems (GNSS) face growing threats from sophisticated jamming interference. Although Deep Learning (DL) effectively identifies basic interference, classifying compound interference remains difficult due to the superposition of diverse jamming sources. Existing single-domain approaches often suffer from performance degradation because transient burst signals and continuous global signals require conflicting feature extraction scales. We propose the Selective Kernel and Asymmetric convolution Network (SKANet), a cognitive deep learning framework built upon a dual-stream architecture that integrates Time-Frequency Images (TFIs) and Power Spectral Density (PSD). Distinct from conventional fusion methods that rely on static receptive fields, the proposed architecture incorporates a Multi-Branch Selective Kernel (SK) module combined with Asymmetric Convolution Blocks (ACBs). This mechanism enables the network to dynamically adjust its receptive fields, acting as an adaptive filter that simultaneously captures micro-scale transient features and macro-scale spectral trends within entangled compound signals. To complement this spatial-temporal adaptation, a Squeeze-and-Excitation (SE) mechanism is integrated at the fusion stage to adaptively recalibrate the contribution of heterogeneous features from each modality. Evaluations on a dataset of 405,000 samples demonstrate that SKANet achieves an overall accuracy of 96.99\%, exhibiting superior robustness for compound jamming classification, particularly under low Jamming-to-Noise Ratio (JNR) regimes.
|
https://arxiv.org/abs/2601.12791
|
Academic Papers
|
svg
|
4be51f430ecc3db05a9d69612a1cbcb162acd2397d3ab2ddfe779cf2ace08ee7
|
2026-01-21T00:00:00-05:00
|
Graph Laplacian assisted regularization method under noise level free heuristic and statistical stopping rule
|
arXiv:2601.12792v1 Announce Type: new Abstract: In this work, we address the solution of both linear and nonlinear ill-posed inverse problems by developing a novel graph-based regularization framework, where the regularization term is formulated through an iteratively updated graph Laplacian. The proposed approach operates without prior knowledge of the noise level and employs two distinct stopping criteria, namely the heuristic rule and the statistical discrepancy principle. To facilitate the latter, we utilize averaged measurements derived from multiple repeated observations. We provide a detailed convergence analysis of the method from a statistical perspective, establishing its stability and regularization properties under both stopping strategies. The algorithm begins with the computation of an initial reconstruction using any suitable technique, such as Tikhonov regularization (Tik), filtered back projection (FBP), or total variation (TV), which serves as the foundation for generating the initial graph Laplacian. The reconstruction is then refined iteratively, with the graph Laplacian dynamically re-calibrated to reflect the evolving structure of the solution. Finally, we present numerical experiments on X-ray Computed Tomography (CT) and phase retrieval CT, demonstrating the effectiveness and robustness of the proposed method and comparing its reconstruction performance under both stopping rules.
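One iteration of the loop described above can be sketched as: build a similarity graph from the current reconstruction, form its Laplacian L = D - W, and solve the Tikhonov-like normal equations (A^T A + alpha L) x = A^T b. Problem sizes, the Gaussian weight scale, and the fixed alpha below are illustrative assumptions, not the paper's setup.

```python
# Graph-Laplacian-regularized reconstruction sketch with an iteratively
# re-calibrated Laplacian; all sizes and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 40
A = rng.normal(size=(m, n))                 # forward operator
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.01 * rng.normal(size=m)  # noisy measurements

def graph_laplacian(x, sigma=0.1):
    # Dense Gaussian similarity graph on the current reconstruction's values.
    W = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W       # L = D - W

x = np.linalg.lstsq(A, b, rcond=None)[0]    # initial reconstruction
alpha = 0.5
for _ in range(5):                          # iteratively re-calibrate L
    L = graph_laplacian(x)
    x = np.linalg.solve(A.T @ A + alpha * L, A.T @ b)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```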
|
https://arxiv.org/abs/2601.12792
|
Academic Papers
|
svg
|
cf1fd8314dc0138f4d0e90bb45d99c480803b1b47f863f21a7eee28049061b45
|
2026-01-21T00:00:00-05:00
|
Two Frameworks and their Fourth Order Implicit Schemes for Time Discretization of Maxwell's Equations
|
arXiv:2601.12793v1 Announce Type: new Abstract: Our work concerns energy-conserving fourth-order time discretizations of a three-field formulation of Maxwell's equations in conjunction with a spatial discretization using higher-order and compatible de Rham finite element spaces. Toward this end, we delineate two broad classes of strategies for general higher-order time discretizations which we term spatial and temporal strategies. We provide a description of these two strategies and develop fourth-order time accurate schemes in the context of our Maxwell's system. However, our description can be used to prescribe similar fourth- or even higher-order time-integration methods for any linear (or quasi-linear) system of time-dependent partial differential equations. Our organizing principle in our proposed two strategies is to Taylor expand the unknown solution in time by assuming sufficient regularity. Then, in the spatial strategy, we use Maxwell's equations themselves to replace the fourth-order time derivatives in an appropriately truncated Taylor expansion with corresponding higher-order spatial derivatives. On the other hand, in the temporal strategy, we simply use higher-order finite difference schemes for the various higher-order time derivative terms in the truncated Taylor approximation. In both cases, we then defer to a standard finite element exterior calculus manner of compatible discretization for the spatial component of the Maxwell's solution. For our proposed schemes corresponding to the two strategies, we show that they are both stable and convergent and provide some validating numerical examples in $\mathbb{R}^2$. Our main contributions are in the development of the fourth-order time discretization methods that are energy conserving using our two outlined strategies and proofs of their convergence for semi- and full-discretizations of our three-field system of Maxwell's equations.
|
https://arxiv.org/abs/2601.12793
|
Academic Papers
|
svg
|
65c183433855ad03935a5f892892ca0692c9a20de29c98d9bccbd8506cbb9d3b
|
2026-01-21T00:00:00-05:00
|
Combating Noisy Labels through Fostering Self- and Neighbor-Consistency
|
arXiv:2601.12795v1 Announce Type: new Abstract: Label noise is pervasive in various real-world scenarios, posing challenges in supervised deep learning. Deep networks are vulnerable to such label-corrupted samples due to the memorization effect. One major stream of previous methods concentrates on identifying clean data for training. However, these methods often neglect imbalances in label noise across different mini-batches and devote insufficient attention to out-of-distribution noisy data. To this end, we propose a noise-robust method named Jo-SNC (\textbf{Jo}int sample selection and model regularization based on \textbf{S}elf- and \textbf{N}eighbor-\textbf{C}onsistency). Specifically, we propose to employ the Jensen-Shannon divergence to measure the ``likelihood'' of a sample being clean or out-of-distribution. This process factors in the nearest neighbors of each sample to reinforce the reliability of clean sample identification. We design a self-adaptive, data-driven thresholding scheme to adjust per-class selection thresholds. While clean samples undergo conventional training, detected in-distribution and out-of-distribution noisy samples are trained following partial label learning and negative learning, respectively. Finally, we advance the model performance further by proposing a triplet consistency regularization that promotes self-prediction consistency, neighbor-prediction consistency, and feature consistency. Extensive experiments on various benchmark datasets and comprehensive ablation studies demonstrate the effectiveness and superiority of our approach over existing state-of-the-art methods.
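The Jensen-Shannon-based "likelihood" score described above can be sketched as the JS divergence between a sample's neighbor-reinforced predicted class distribution and its one-hot given label, with low divergence suggesting a clean label. The simple averaging of neighbor predictions below is an illustrative simplification of the paper's mechanism.

```python
# JS-divergence cleanliness score sketch; the 50/50 fusion of own and
# neighbor predictions is a hypothetical simplification.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cleanliness(pred, neighbor_preds, label, n_classes):
    one_hot = np.eye(n_classes)[label]
    # Reinforce the sample's own prediction with its neighbors' predictions.
    fused = 0.5 * pred + 0.5 * neighbor_preds.mean(axis=0)
    return js_divergence(fused, one_hot)

pred = np.array([0.7, 0.2, 0.1])                 # model prediction
neighbors = np.array([[0.6, 0.3, 0.1],
                      [0.8, 0.1, 0.1]])          # nearest-neighbor predictions
print("JS score (label 0, likely clean):", cleanliness(pred, neighbors, 0, 3))
print("JS score (label 2, likely noisy):", cleanliness(pred, neighbors, 2, 3))
```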
|
https://arxiv.org/abs/2601.12795
|
Academic Papers
|
svg
|
5ec9c1d4633cf6e513741306743410449c8635912db25301cecd6d0fb136db29
|
2026-01-21T00:00:00-05:00
|
Contact-Aware Neural Dynamics
|
arXiv:2601.12796v1 Announce Type: new Abstract: High-fidelity physics simulation is essential for scalable robotic learning, but the sim-to-real gap persists, especially for tasks involving complex, dynamic, and discontinuous interactions like physical contacts. Explicit system identification, which tunes explicit simulator parameters, is often insufficient to align the intricate, high-dimensional, and state-dependent dynamics of the real world. To overcome this, we propose an implicit sim-to-real alignment framework that learns to directly align the simulator's dynamics with contact information. Our method treats the off-the-shelf simulator as a base prior and learns a contact-aware neural dynamics model to refine simulated states using real-world observations. We show that using tactile contact information from robotic hands can effectively model the non-smooth discontinuities inherent in contact-rich tasks, resulting in a neural dynamics model grounded by real-world data. We demonstrate that this learned forward dynamics model improves state prediction accuracy and can be effectively used to predict policy performance and refine policies trained purely in standard simulators, offering a scalable, data-driven approach to sim-to-real alignment.
|
https://arxiv.org/abs/2601.12796
|
Academic Papers
|
svg
|
c5624cc787b59dcfa02550139ebd679198567fa9cd23a7b11ef8a812e4811343
|
2026-01-21T00:00:00-05:00
|
PhyG-MoE: A Physics-Guided Mixture-of-Experts Framework for Energy-Efficient GNSS Interference Recognition
|
arXiv:2601.12798v1 Announce Type: new Abstract: Complex electromagnetic interference increasingly compromises Global Navigation Satellite Systems (GNSS), threatening the reliability of Space-Air-Ground Integrated Networks (SAGIN). Although deep learning has advanced interference recognition, current static models suffer from a \textbf{fundamental limitation}: they impose a fixed computational topology regardless of the input's physical entropy. This rigidity leads to severe resource mismatch, where simple primitives consume the same processing cost as chaotic, saturated mixtures. To resolve this, this paper introduces PhyG-MoE (Physics-Guided Mixture-of-Experts), a framework designed to \textbf{dynamically align model capacity with signal complexity}. Unlike static architectures, the proposed system employs a spectrum-based gating mechanism that routes signals based on their spectral feature entanglement. A high-capacity TransNeXt expert is activated on-demand to disentangle complex features in saturated scenarios, while lightweight experts handle fundamental signals to minimize latency. Evaluations on 21 jamming categories demonstrate that PhyG-MoE achieves an overall accuracy of 97.58\%. By resolving the intrinsic conflict between static computing and dynamic electromagnetic environments, the proposed framework significantly reduces computational overhead without performance degradation, offering a viable solution for resource-constrained cognitive receivers.
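Entropy-driven routing of the kind this gating performs can be sketched by computing the normalized spectral entropy of an input's power spectrum, then sending high-entropy (saturated, chaotic) signals to the heavy expert and low-entropy primitives to a lightweight one. The fixed threshold below is an illustrative assumption, not the paper's learned gate.

```python
# Spectral-entropy routing sketch; the 0.5 threshold and the two "experts"
# are hypothetical stand-ins for a learned gating mechanism.
import numpy as np

def spectral_entropy(signal):
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p))  # normalized

def route(signal, threshold=0.5):
    h = spectral_entropy(signal)
    expert = "heavy (TransNeXt-style)" if h > threshold else "lightweight"
    return h, expert

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
tone = np.sin(2 * np.pi * 50 * t)                  # simple primitive
mixture = tone + rng.normal(size=t.size)           # chaotic saturated mixture

for name, sig in [("tone", tone), ("mixture", mixture)]:
    h, expert = route(sig)
    print(f"{name}: entropy = {h:.2f} -> {expert} expert")
```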
|
https://arxiv.org/abs/2601.12798
|
Academic Papers
|
svg
|
d4ac2fabd9860191281771ef6b30dc665fb5e02dc2c94704cacef7c518e100a8
|
2026-01-21T00:00:00-05:00
|
FRoM-W1: Towards General Humanoid Whole-Body Control with Language Instructions
|
arXiv:2601.12799v1 Announce Type: new Abstract: Humanoid robots are capable of performing various actions such as greeting, dancing and even backflipping. However, these motions are often hard-coded or specifically trained, which limits their versatility. In this work, we present FRoM-W1, an open-source framework designed to achieve general humanoid whole-body motion control using natural language. To universally understand natural language and generate corresponding motions, as well as enable various humanoid robots to stably execute these motions in the physical world under gravity, FRoM-W1 operates in two stages: (a) H-GPT: utilizing massive human data, a large-scale language-driven human whole-body motion generation model is trained to generate diverse natural behaviors. We further leverage the Chain-of-Thought technique to improve the model's generalization in instruction understanding. (b) H-ACT: After retargeting generated human whole-body motions into robot-specific actions, a motion controller that is pretrained and further fine-tuned through reinforcement learning in physical simulation enables humanoid robots to accurately and stably perform corresponding actions. It is then deployed on real robots via a modular simulation-to-reality module. We extensively evaluate FRoM-W1 on Unitree H1 and G1 robots. Results demonstrate superior performance on the HumanML3D-X benchmark for human whole-body motion generation, and our introduced reinforcement learning fine-tuning consistently improves both motion tracking accuracy and task success rates of these humanoid robots. We open-source the entire FRoM-W1 framework and hope it will advance the development of humanoid intelligence.
|
https://arxiv.org/abs/2601.12799
|
Academic Papers
|
svg
|
65e5dcc3c4cb0374dcd46f59e63f005d4e332c68a9b1b5655adbd50b8020cafa
|
2026-01-21T00:00:00-05:00
|
UNMIXX: Untangling Highly Correlated Singing Voices Mixtures
|
arXiv:2601.12802v1 Announce Type: new Abstract: We introduce UNMIXX, a novel framework for multiple singing voices separation (MSVS). While related to speech separation, MSVS faces unique challenges: data scarcity and the highly correlated nature of singing-voice mixtures. To address these issues, we propose UNMIXX with three key components: (1) a musically informed mixing strategy to construct highly correlated, music-like mixtures, (2) cross-source attention that drives the representations of two singers apart via reverse attention, and (3) a magnitude penalty loss penalizing erroneously assigned interfering energy. UNMIXX not only addresses data scarcity by simulating realistic training data, but also excels at separating highly correlated mixtures through cross-source interactions at both the architectural and loss levels. Our extensive experiments demonstrate that UNMIXX greatly enhances performance, with SDRi gains exceeding 2.2 dB over prior work.
|
https://arxiv.org/abs/2601.12802
|
Academic Papers
|
svg
|
fd94dc28341b467cd8dd3717ed57cb4a290d7c5b24ec2326620cd4d02ce4444e
|
2026-01-21T00:00:00-05:00
|
SL-CBM: Enhancing Concept Bottleneck Models with Semantic Locality for Better Interpretability
|
arXiv:2601.12804v1 Announce Type: new Abstract: Explainable AI (XAI) is crucial for building transparent and trustworthy machine learning systems, especially in high-stakes domains. Concept Bottleneck Models (CBMs) have emerged as a promising ante-hoc approach that provides interpretable, concept-level explanations by explicitly modeling human-understandable concepts. However, existing CBMs often suffer from poor locality faithfulness, failing to spatially align concepts with meaningful image regions, which limits their interpretability and reliability. In this work, we propose SL-CBM (CBM with Semantic Locality), a novel extension that enforces locality faithfulness by generating spatially coherent saliency maps at both concept and class levels. SL-CBM integrates a 1x1 convolutional layer with a cross-attention mechanism to enhance alignment between concepts, image regions, and final predictions. Unlike prior methods, SL-CBM produces faithful saliency maps inherently tied to the model's internal reasoning, facilitating more effective debugging and intervention. Extensive experiments on image datasets demonstrate that SL-CBM substantially improves locality faithfulness, explanation quality, and intervention efficacy while maintaining competitive classification accuracy. Our ablation studies highlight the importance of contrastive and entropy-based regularization for balancing accuracy, sparsity, and faithfulness. Overall, SL-CBM bridges the gap between concept-based reasoning and spatial explainability, setting a new standard for interpretable and trustworthy concept-based models.
|
https://arxiv.org/abs/2601.12804
|
Academic Papers
|
svg
|
7fb07a73f3303ad287381262f3314db32082681cd21f9f55ef1df9b4cad4d222
|
2026-01-21T00:00:00-05:00
|
Semi-supervised Instruction Tuning for Large Language Models on Text-Attributed Graphs
|
arXiv:2601.12807v1 Announce Type: new Abstract: The emergent reasoning capabilities of Large Language Models (LLMs) offer a transformative paradigm for analyzing text-attributed graphs. While instruction tuning is the prevailing method for adapting pre-trained LLMs to graph learning tasks like node classification, it requires a substantial volume of annotated (INSTRUCTION, OUTPUT) pairs deriving from labeled nodes. This requirement is particularly prohibitive in the social domain, where obtaining expert labels for sensitive or evolving content is costly and slow. Furthermore, standard graph instruction tuning fails to exploit the vast amount of unlabeled nodes, which contain latent correlations due to edge connections that are beneficial for downstream predictions. To bridge this gap, we propose a novel Semi-supervised Instruction Tuning pipeline for Graph Learning, named SIT-Graph. Notably, SIT-Graph is model-agnostic and can be seamlessly integrated into any graph instruction tuning method that utilizes LLMs as the predictor. SIT-Graph operates via an iterative self-training process. Initially, the model is fine-tuned using instruction pairs constructed solely from the labeled nodes. Then it generates confidence-filtered pseudo-responses for unlabeled nodes to strategically augment the dataset for the next round of fine-tuning. Finally, this iterative refinement progressively aligns the LLM with the underlying node correlations. Extensive experiments demonstrate that when incorporated into state-of-the-art graph instruction tuning methods, SIT-Graph significantly enhances their performance on text-attributed graph benchmarks, achieving over 20% improvement under the low label ratio settings.
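The iterative self-training loop can be sketched as follows; `finetune` and `predict_with_confidence` are hypothetical stand-ins for whatever graph instruction-tuning backend and confidence-scored decoding SIT-Graph wraps.

```python
# A minimal sketch of confidence-filtered self-training over graph nodes.
def self_train(model, labeled, unlabeled, finetune, predict_with_confidence,
               rounds=3, tau=0.9):
    data = list(labeled)                      # (INSTRUCTION, OUTPUT) pairs
    for _ in range(rounds):
        model = finetune(model, data)         # fine-tune on current pairs
        pseudo = []
        for node in unlabeled:
            output, conf = predict_with_confidence(model, node)
            if conf >= tau:                   # keep only confident responses
                pseudo.append((node, output))
        data = list(labeled) + pseudo         # augment for the next round
    return model
```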
|
https://arxiv.org/abs/2601.12807
|
Academic Papers
|
svg
|
b32a8ac5a7aa4b75e605129e1906b6317929790a370360cbeedb059bcf91d222
|
2026-01-21T00:00:00-05:00
|
Joint Source-Channel-Generation Coding: From Distortion-oriented Reconstruction to Semantic-consistent Generation
|
arXiv:2601.12808v1 Announce Type: new Abstract: Conventional communication systems, including both separation-based coding and AI-driven joint source-channel coding (JSCC), are largely guided by Shannon's rate-distortion theory. However, relying on generic distortion metrics fails to capture complex human visual perception, often resulting in blurred or unrealistic reconstructions. In this paper, we propose Joint Source-Channel-Generation Coding (JSCGC), a novel paradigm that shifts the focus from deterministic reconstruction to probabilistic generation. JSCGC leverages a generative model at the receiver as a generator rather than a conventional decoder to parameterize the data distribution, enabling direct maximization of mutual information under channel constraints while controlling stochastic sampling to produce outputs residing on the authentic data manifold with high fidelity. We further derive a theoretical lower bound on the maximum semantic inconsistency with given transmitted mutual information, elucidating the fundamental limits of communication in controlling the generative process. Extensive experiments on image transmission demonstrate that JSCGC substantially improves perceptual quality and semantic fidelity, significantly outperforming conventional distortion-oriented JSCC methods.
|
https://arxiv.org/abs/2601.12808
|
Academic Papers
|
svg
|
b6ebadf380b4cf99dcf28034a7b032db62b41d320251d820a944792c3f8aa00e
|
2026-01-21T00:00:00-05:00
|
Left-Right Symmetry Breaking in CLIP-style Vision-Language Models Trained on Synthetic Spatial-Relation Data
|
arXiv:2601.12809v1 Announce Type: new Abstract: Spatial understanding remains a key challenge in vision-language models. Yet it is still unclear whether such understanding is truly acquired, and if so, through what mechanisms. We present a controllable 1D image-text testbed to probe how left-right relational understanding emerges in Transformer-based vision and text encoders trained with a CLIP-style contrastive objective. We train lightweight Transformer-based vision and text encoders end-to-end on paired descriptions of one- and two-object scenes and evaluate generalization to unseen object pairs while systematically varying label and layout diversity. We find that contrastive training learns left-right relations and that label diversity, more than layout diversity, is the primary driver of generalization in this setting. To gain a mechanistic understanding, we perform an attention decomposition and show that interactions between positional and token embeddings induce a horizontal attention gradient that breaks left-right symmetry in the encoders; ablating this contribution substantially reduces left-right discrimination. Our results provide mechanistic insight into when and how CLIP-style models acquire relational competence.
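For context, a CLIP-style symmetric contrastive objective over paired embeddings might look like the PyTorch sketch below; the paper's lightweight Transformer encoders are not reproduced here, only the training objective class.

```python
import torch
import torch.nn.functional as F

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched image/text pairs."""
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    logits = img @ txt.t() / temperature      # cosine similarities, scaled
    labels = torch.arange(len(img))           # matched pairs on the diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

# Toy usage with random 32-dimensional embeddings for 16 pairs.
print(clip_loss(torch.randn(16, 32), torch.randn(16, 32)).item())
```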
|
https://arxiv.org/abs/2601.12809
|
Academic Papers
|
svg
|
0e856bc06f48eb87d443589a1e2f10374a01193fc879a09c3a33b15d55ea62ae
|
2026-01-21T00:00:00-05:00
|
Docker Does Not Guarantee Reproducibility
|
arXiv:2601.12811v1 Announce Type: new Abstract: The reproducibility of software environments is a critical concern in modern software engineering, with ramifications ranging from the effectiveness of collaboration workflows to software supply chain security and scientific reproducibility. Containerization technologies like Docker address this problem by encapsulating software environments into shareable filesystem snapshots known as images. While Docker is frequently cited in the literature as a tool that enables reproducibility in theory, the extent of its guarantees and limitations in practice remains under-explored. In this work, we address this gap through two complementary approaches. First, we conduct a systematic literature review to examine how Docker is framed in scientific discourse on reproducibility and to identify documented best practices for writing Dockerfiles enabling reproducible image building. Then, we perform a large-scale empirical study of 5298 Docker builds collected from GitHub workflows. By rebuilding these images and comparing the results with their historical counterparts, we assess the real reproducibility of Docker images and evaluate the effectiveness of the best practices identified in the literature.
|
https://arxiv.org/abs/2601.12811
|
Academic Papers
|
svg
|
77ea147b0771c527dba01cd6817dcf0eca6c42f2b1bf8458f195a81f7b38abce
|
2026-01-21T00:00:00-05:00
|
Do Clinical Question Answering Systems Really Need Specialised Medical Fine Tuning?
|
arXiv:2601.12812v1 Announce Type: new Abstract: Clinical Question-Answering (CQA) systems in industry increasingly rely on Large Language Models (LLMs), yet their deployment is often guided by the assumption that domain-specific fine-tuning is essential. Although specialised medical LLMs such as BioBERT, BioGPT, and PubMedBERT remain popular, they face practical limitations including narrow coverage, high retraining costs, and limited adaptability. Efforts based on Supervised Fine-Tuning (SFT) have attempted to address these assumptions but continue to reinforce what we term the SPECIALISATION FALLACY: the belief that specialised medical LLMs are inherently superior for CQA. To address this assumption, we introduce MEDASSESS-X, a deployment-oriented CQA framework that applies alignment at inference time rather than through SFT. MEDASSESS-X uses lightweight steering vectors to guide model activations toward medically consistent reasoning without updating model weights or requiring domain-specific retraining. This inference-time alignment layer stabilises CQA performance across both general-purpose and specialised medical LLMs, thereby resolving the SPECIALISATION FALLACY. Empirically, MEDASSESS-X delivers consistent gains across all LLM families, improving Accuracy by up to +6%, Factual Consistency by +7%, and reducing Safety Error Rate by as much as 50%.
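Inference-time steering of the kind described can be sketched with a PyTorch forward hook; the steering vector, scale, and layer choice below are placeholders for illustration, not MEDASSESS-X's actual alignment vectors.

```python
import torch

def add_steering_hook(layer, steering_vec, alpha=1.0):
    """Shift the layer's output activations along a steering direction
    without touching any model weights."""
    def hook(module, inputs, output):
        return output + alpha * steering_vec
    return layer.register_forward_hook(hook)

# Toy usage on a linear "layer"; removing the handle restores the base model.
layer = torch.nn.Linear(8, 8)
vec = torch.randn(8)
handle = add_steering_hook(layer, vec, alpha=0.5)
out = layer(torch.randn(2, 8))      # steered activations
handle.remove()                      # back to the unmodified model
```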
|
https://arxiv.org/abs/2601.12812
|
Academic Papers
|
svg
|
a06808b5d522539f1c26fa428a61c08259d064d65b9331d6e4e6d07fed407f98
|
2026-01-21T00:00:00-05:00
|
A Formally Verified Procedure for Width Inference in FIRRTL
|
arXiv:2601.12813v1 Announce Type: new Abstract: FIRRTL is an intermediate representation language for Register Transfer Level (RTL) hardware designs. In FIRRTL programs, the bit widths of many components are not specified explicitly and must be inferred during compilation. In mainstream FIRRTL compilers, such as the official compiler firtool, width inference is conducted by a compilation pass referred to as InferWidths, which may fail even for simple FIRRTL programs. In this paper, we thoroughly investigate the width inference problem for FIRRTL programs. We show that, if the constraints obtained from a FIRRTL program are satisfiable, there exists a unique least solution. Based on this result, we propose a complete procedure for solving the width inference problem. We implement it in the interactive theorem prover Rocq and prove its functional correctness. From the Rocq implementation, we extract an OCaml implementation, which is the first formally verified implementation of the InferWidths pass. Extensive experiments demonstrate that our approach can solve more instances than the official InferWidths pass in firtool, normally with high efficiency.
|
https://arxiv.org/abs/2601.12813
|
Academic Papers
|
svg
|
58a35a8f373bfba79fb4b68210121addb3f25650297176538c6f125fe5766819
|
2026-01-21T00:00:00-05:00
|
CSGaussian: Progressive Rate-Distortion Compression and Segmentation for 3D Gaussian Splatting
|
arXiv:2601.12814v1 Announce Type: new Abstract: We present the first unified framework for rate-distortion-optimized compression and segmentation of 3D Gaussian Splatting (3DGS). While 3DGS has proven effective for both real-time rendering and semantic scene understanding, prior works have largely treated these tasks independently, leaving their joint consideration unexplored. Inspired by recent advances in rate-distortion-optimized 3DGS compression, this work integrates semantic learning into the compression pipeline to support decoder-side applications, such as scene editing and manipulation, that extend beyond traditional scene reconstruction and view synthesis. Our scheme features a lightweight implicit neural representation-based hyperprior, enabling efficient entropy coding of both color and semantic attributes while avoiding the costly grid-based hyperpriors seen in many prior works. To facilitate compression and segmentation, we further develop compression-guided segmentation learning, consisting of quantization-aware training to enhance feature separability and a quality-aware weighting mechanism to suppress unreliable Gaussian primitives. Extensive experiments on the LERF and 3D-OVS datasets demonstrate that our approach significantly reduces transmission cost while preserving high rendering quality and strong segmentation performance.
|
https://arxiv.org/abs/2601.12814
|
Academic Papers
|
svg
|
24b3e13cb1808a0a680c9c585442bab770698378b400aac0e5dc290b295e5b2d
|
2026-01-21T00:00:00-05:00
|
Multimodal Multi-Agent Empowered Legal Judgment Prediction
|
arXiv:2601.12815v1 Announce Type: new Abstract: Legal Judgment Prediction (LJP) aims to predict the outcomes of legal cases based on factual descriptions, serving as a fundamental task to advance the development of legal systems. Traditional methods often rely on statistical analyses or role-based simulations, but they struggle with multiple allegations and diverse evidence and lack adaptability. In this paper, we introduce JurisMMA, a novel framework for LJP that effectively decomposes trial tasks, standardizes processes, and organizes them into distinct stages. Furthermore, we build JurisMM, a large dataset with over 100,000 recent Chinese judicial records, including both text and multimodal video-text data, enabling comprehensive evaluation. Experiments on JurisMM and the benchmark LawBench validate our framework's effectiveness. These results indicate that our framework is effective not only for LJP but also for a broader range of legal applications, offering new perspectives for the development of future legal methods and datasets.
|
https://arxiv.org/abs/2601.12815
|
Academic Papers
|
svg
|
6780ba1af4d29ca7847343d0a549dc651df8e103b3d597a016777b9e716ca17a
|
2026-01-21T00:00:00-05:00
|
Fisher-Orthogonal Projected Natural Gradient Descent for Continual Learning
|
arXiv:2601.12816v1 Announce Type: new Abstract: Continual learning aims to enable neural networks to acquire new knowledge on sequential tasks. However, the key challenge in such settings is to learn new tasks without catastrophically forgetting previously learned tasks. We propose the Fisher-Orthogonal Projected Natural Gradient Descent (FOPNG) optimizer, which enforces Fisher-orthogonal constraints on parameter updates to preserve old task performance while learning new tasks. Unlike existing methods that operate in Euclidean parameter space, FOPNG projects gradients onto the Fisher-orthogonal complement of previous task gradients. This approach unifies natural gradient descent with orthogonal gradient methods within an information-geometric framework. The resulting update direction is invariant under reparameterization, guarantees descent in the Fisher metric, and helps preserve prior task outputs. We provide theoretical analysis establishing the properties of the projected update, describe efficient and practical implementations using the diagonal Fisher, and demonstrate strong results on standard continual learning benchmarks such as Permuted-MNIST, Split-MNIST, Rotated-MNIST, Split-CIFAR10, and Split-CIFAR100.
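One plausible reading of the projected update, using the diagonal Fisher the abstract mentions, is sketched below; this illustrates the stated idea, not the authors' reference implementation.

```python
import numpy as np

def fopng_step(grad, fisher_diag, prev_grads, lr=0.1, eps=1e-8):
    """Natural gradient F^{-1} g, projected onto the Fisher-orthogonal
    complement of previous-task gradients (diagonal Fisher F)."""
    F = fisher_diag + eps
    nat = grad / F                        # natural gradient direction
    # Gram-Schmidt on prev_grads under the Fisher inner product
    # <u, v>_F = u^T diag(F) v, then project those directions out of nat.
    basis = []
    for g in prev_grads:
        v = g.astype(float).copy()
        for b in basis:
            v -= (b * F * v).sum() / (b * F * b).sum() * b
        if (v * F * v).sum() > eps:
            basis.append(v)
    for b in basis:
        nat -= (b * F * nat).sum() / (b * F * b).sum() * b
    return -lr * nat                      # parameter update

# After the step, u^T F g_prev = 0 for each stored previous-task gradient,
# so the first-order change on old tasks vanishes in the Fisher metric.
rng = np.random.default_rng(0)
g, F_diag = rng.standard_normal(10), rng.random(10) + 0.5
prev = [rng.standard_normal(10) for _ in range(2)]
u = fopng_step(g, F_diag, prev)
print([float(abs((u * (F_diag + 1e-8) * p).sum())) for p in prev])  # ~0
```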
|
https://arxiv.org/abs/2601.12816
|
Academic Papers
|
svg
|
2229c253b2670eb7176730c48eed83b22e0e2e6b83955dab09199214002a3b60
|
2026-01-21T00:00:00-05:00
|
A Generalist Foundation Model for Total-body PET/CT Enables Diagnostic Reporting and System-wide Metabolic Profiling
|
arXiv:2601.12820v1 Announce Type: new Abstract: Total-body PET/CT enables system-wide molecular imaging, but heterogeneous anatomical and metabolic signals, approximately 2 m axial coverage, and structured radiology semantics challenge existing medical AI models that assume single-modality inputs, localized fields of view, and coarse image-text alignment. We introduce SDF-HOLO (Systemic Dual-stream Fusion Holo Model), a multimodal foundation model for holistic total-body PET/CT, pre-trained on more than 10,000 patients. SDF-HOLO decouples CT and PET representation learning with dual-stream encoders and couples them through a cross-modal interaction module, allowing anatomical context to refine PET aggregation while metabolic saliency guides subtle morphological reasoning. To model long-range dependencies across the body, hierarchical context modeling combines efficient local windows with global attention. To bridge voxels and clinical language, we use anatomical segmentation masks as explicit semantic anchors and perform voxel-mask-text alignment during pre-training. Across tumor segmentation, low-dose lesion detection, and multilingual diagnostic report generation, SDF-HOLO outperforms strong task-specific and clinical-reference baselines while reducing localization errors and hallucinated findings. Beyond focal interpretation, the model enables system-wide metabolic profiling and reveals tumor-associated fingerprints of inter-organ metabolic network interactions, providing a scalable computational foundation for total-body PET/CT diagnostics and system-level precision oncology.
|
https://arxiv.org/abs/2601.12820
|
Academic Papers
|
svg
|
0d9cac2b51a978a55abe7ef8ee6b3aba565cad2b27a53961fe8c35555bf81f78
|
2026-01-21T00:00:00-05:00
|
MirrorGuard: Toward Secure Computer-Use Agents via Simulation-to-Real Reasoning Correction
|
arXiv:2601.12822v1 Announce Type: new Abstract: Large foundation models are integrated into Computer Use Agents (CUAs), enabling autonomous interaction with operating systems through graphical user interfaces (GUIs) to perform complex tasks. This autonomy introduces serious security risks: malicious instructions or visual prompt injections can trigger unsafe reasoning and cause harmful system-level actions. Existing defenses, such as detection-based blocking, prevent damage but often abort tasks prematurely, reducing agent utility. In this paper, we present MirrorGuard, a plug-and-play defense framework that uses simulation-based training to improve CUA security in the real world. To reduce the cost of large-scale training in operating systems, we propose a novel neural-symbolic simulation pipeline that generates realistic, high-risk GUI interaction trajectories entirely in a text-based simulated environment, capturing unsafe reasoning patterns and potential system hazards without executing real operations. Within this simulated environment, MirrorGuard learns to intercept and rectify insecure reasoning chains of CUAs before they produce and execute unsafe actions. In real-world testing, extensive evaluations across diverse benchmarks and CUA architectures show that MirrorGuard significantly mitigates security risks. For instance, on the ByteDance UI-TARS system, it reduces the unsafe rate from 66.5% to 13.0% while maintaining a marginal false refusal rate (FRR). In contrast, the state-of-the-art GuardAgent only achieves a reduction to 53.9% and suffers from a 15.4% higher FRR. Our work shows that simulation-derived defenses can provide robust, real-world protection while maintaining the fundamental utility of the agent. Our code and model are publicly available at https://bmz-q-q.github.io/MirrorGuard/.
|
https://arxiv.org/abs/2601.12822
|
Academic Papers
|
svg
|
4fb28dcdb3f4642b8f6a0a117f2ce4ff2a8247aacd6ba484005900b34c8f41dc
|
2026-01-21T00:00:00-05:00
|
TreeDGS: Aerial Gaussian Splatting for Distant DBH Measurement
|
arXiv:2601.12823v1 Announce Type: new Abstract: Aerial remote sensing enables efficient large-area surveying, but accurate direct object-level measurement remains difficult in complex natural scenes. Recent advancements in 3D vision, particularly learned radiance-field representations such as NeRF and 3D Gaussian Splatting, have begun to raise the ceiling on reconstruction fidelity and densifiable geometry from posed imagery. Nevertheless, direct aerial measurement of important natural attributes such as tree diameter at breast height (DBH) remains challenging. Trunks in aerial forest scans are distant and sparsely observed in image views: at typical operating altitudes, stems may span only a few pixels. With these constraints, conventional reconstruction methods leave breast-height trunk geometry weakly constrained. We present TreeDGS, an aerial image reconstruction method that leverages 3D Gaussian Splatting as a continuous, densifiable scene representation for trunk measurement. After SfM-MVS initialization and Gaussian optimization, we extract a dense point set from the Gaussian field using RaDe-GS's depth-aware cumulative-opacity integration and associate each sample with a multi-view opacity reliability score. We then estimate DBH from trunk-isolated points using opacity-weighted solid-circle fitting. Evaluated on 10 plots with field-measured DBH, TreeDGS reaches 4.79 cm RMSE (about 2.6 pixels at this GSD) and outperforms a state-of-the-art LiDAR baseline (7.91 cm RMSE), demonstrating that densified splat-based geometry can enable accurate, low-cost aerial DBH measurement.
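The final estimation step might resemble the weighted algebraic (Kasa-style) circle fit sketched below; the exact opacity-weighted solid-circle estimator in TreeDGS may differ.

```python
import numpy as np

def weighted_circle_fit(xy, w):
    """Fit a circle to 2D points xy (N, 2) with per-point weights w (N,).
    Returns (center, radius); DBH would be 2 * radius at breast height."""
    x, y = xy[:, 0], xy[:, 1]
    # Linear model: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    sw = np.sqrt(w)
    sol, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    cx, cy, d = sol
    return (cx, cy), np.sqrt(d + cx ** 2 + cy ** 2)

# Toy usage: noisy points on a 0.15 m radius trunk cross-section
# (weights would be the multi-view opacity reliability scores).
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta)])
pts += 0.005 * rng.standard_normal(pts.shape)
center, r = weighted_circle_fit(pts, w=np.ones(len(pts)))
print(round(2 * r, 3), "m estimated diameter")
```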
|
https://arxiv.org/abs/2601.12823
|
Academic Papers
|
svg
|
da015ed51e1a8c40106bbadba6ccb3fb9838628b5aaf70022fc10346beed0945
|
2026-01-21T00:00:00-05:00
|
Seeing Isn't Always Believing: Analysis of Grad-CAM Faithfulness and Localization Reliability in Lung Cancer CT Classification
|
arXiv:2601.12826v1 Announce Type: new Abstract: Explainable Artificial Intelligence (XAI) techniques, such as Gradient-weighted Class Activation Mapping (Grad-CAM), have become indispensable for visualizing the reasoning process of deep neural networks in medical image analysis. Despite their popularity, the faithfulness and reliability of these heatmap-based explanations remain under scrutiny. This study critically investigates whether Grad-CAM truly represents the internal decision-making of deep models trained for lung cancer image classification. Using the publicly available IQ-OTH/NCCD dataset, we evaluate five representative architectures: ResNet-50, ResNet-101, DenseNet-161, EfficientNet-B0, and ViT-Base-Patch16-224, to explore model-dependent variations in Grad-CAM interpretability. We introduce a quantitative evaluation framework that combines localization accuracy, perturbation-based faithfulness, and explanation consistency to assess Grad-CAM reliability across architectures. Experimental findings reveal that while Grad-CAM effectively highlights salient tumor regions in most convolutional networks, its interpretive fidelity significantly degrades for Vision Transformer models due to non-local attention behavior. Furthermore, cross-model comparisons indicate substantial variability in saliency localization, implying that Grad-CAM explanations may not always correspond to the true diagnostic evidence used by the networks. This work exposes critical limitations of current saliency-based XAI approaches in medical imaging and emphasizes the need for model-aware interpretability methods that are both computationally sound and clinically meaningful. Our findings aim to inspire a more cautious and rigorous adoption of visual explanation tools in medical AI, urging the community to rethink what it truly means to "trust" a model's explanation.
|
https://arxiv.org/abs/2601.12826
|
Academic Papers
|
svg
|
0f3ecd77cd1bc293061349369b0525650b153c21b66f8e527ddeca6a6651a3d4
|
2026-01-21T00:00:00-05:00
|
The Unfairness of Multifactorial Bias in Recommendation
|
arXiv:2601.12828v1 Announce Type: new Abstract: Popularity bias and positivity bias are two prominent sources of bias in recommender systems. Both arise from input data, propagate through recommendation models, and lead to unfair or suboptimal outcomes. Popularity bias occurs when a small subset of items receives most interactions, while positivity bias stems from the over-representation of high rating values. Although each bias has been studied independently, their combined effect, which we refer to as multifactorial bias, remains underexplored. In this work, we examine how multifactorial bias influences item-side fairness, focusing on exposure bias, which reflects the unequal visibility of items in recommendation outputs. Through simulation studies, we find that positivity bias is disproportionately concentrated on popular items, further amplifying their over-exposure. Motivated by this insight, we adapt a percentile-based rating transformation as a pre-processing strategy to mitigate multifactorial bias. Experiments using six recommendation algorithms across four public datasets show that this approach improves exposure fairness with negligible accuracy loss. We also demonstrate that integrating this pre-processing step into post-processing fairness pipelines enhances their effectiveness and efficiency, enabling comparable or better fairness with reduced computational cost. These findings highlight the importance of addressing multifactorial bias and demonstrate the practical value of simple, data-driven pre-processing methods for improving fairness in recommender systems.
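A percentile-based rating transformation of the kind described could be as simple as the sketch below; whether the paper applies it per item, per user, or globally is an assumption here.

```python
import numpy as np

def percentile_transform(ratings: np.ndarray) -> np.ndarray:
    """Map each raw rating to its empirical percentile in (0, 1], so that
    over-represented high rating values no longer dominate the scale."""
    sorted_r = np.sort(ratings)
    ranks = np.searchsorted(sorted_r, ratings, side="right")
    return ranks / len(ratings)

skewed = np.array([5, 5, 5, 4, 5, 3, 5, 4])   # positivity-biased ratings
print(percentile_transform(skewed))           # ties share one percentile
```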
|
https://arxiv.org/abs/2601.12828
|
Academic Papers
|
svg
|
e045135365ef7e6bb45fd7b9de773b544bb5f992927143c2d9d3be4b6f1624e3
|
2026-01-21T00:00:00-05:00
|
From Design to Deorbit: A Solar-Electric Autonomous Module for Multi-Debris Remediation
|
arXiv:2601.12830v1 Announce Type: new Abstract: The escalating accumulation of orbital debris threatens the sustainability of space operations, necessitating active removal solutions that overcome the limitations of current fuel-dependent methods. To address this, the study introduces a novel remediation architecture that integrates a mechanical clamping system for secure capture with a high-efficiency, solar-powered NASA Evolutionary Xenon Thruster (NEXT) and autonomous navigation protocols. High-fidelity simulations validate the architecture's capabilities, demonstrating a successful retrograde deorbit from 800 km to 100 km, position Root Mean Square Errors (RMSE) below 10 m via radar-based Extended Kalman Filter (EKF) navigation, and a 93% data delivery efficiency within 1 second using Delay/Disruption Tolerant Network (DTN) protocols. This approach significantly advances orbital management by establishing a benchmark for renewable solar propulsion that minimizes reliance on conventional fuels and extends mission longevity for multi-target removal.
|
https://arxiv.org/abs/2601.12830
|
Academic Papers
|
svg
|
2c53c868c9ccee7bfd58a0257d4ce3df29e14d0194f276afcd8d670b4f133691
|
2026-01-21T00:00:00-05:00
|
Data-Consistent Learning of Inverse Problems
|
arXiv:2601.12831v1 Announce Type: new Abstract: Inverse problems are inherently ill-posed, suffering from non-uniqueness and instability. Classical regularization methods provide mathematically well-founded solutions, ensuring stability and convergence, but often at the cost of reduced flexibility or visual quality. Learned reconstruction methods, such as convolutional neural networks, can produce visually compelling results, yet they typically lack rigorous theoretical guarantees. Data-consistent (DC) networks address this gap by enforcing the measurement model within the network architecture. In particular, null-space networks combined with a classical regularization method as an initial reconstruction define a convergent regularization method. This approach preserves the theoretical reliability of classical schemes while leveraging the expressive power of data-driven learning, yielding reconstructions that are both accurate and visually appealing.
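The null-space construction has a compact linear-algebra core, sketched below for a generic linear forward operator A; here `correction` is a stand-in for a trained network's output.

```python
import numpy as np

def null_space_correction(A, x_init, correction):
    """Return x_init + (I - A^+ A) correction: the learned correction is
    projected onto null(A), so A x is left unchanged."""
    A_pinv = np.linalg.pinv(A)
    return x_init + correction - A_pinv @ (A @ correction)

# Toy underdetermined problem: the output matches the measurements of the
# initial (classical) reconstruction exactly, whatever the network outputs.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
x_init, correction = rng.standard_normal(6), rng.standard_normal(6)
x = null_space_correction(A, x_init, correction)
print(np.allclose(A @ x, A @ x_init))  # True: data consistency preserved
```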
|
https://arxiv.org/abs/2601.12831
|
Academic Papers
|
svg
|
a93e43bb2589aed6735c7acad31ad4aef248a003189ef4a79b565a6a1e52ddd0
|
2026-01-21T00:00:00-05:00
|
Temporal Fair Division of Indivisible Goods with Scheduling
|
arXiv:2601.12835v1 Announce Type: new Abstract: We study temporal fair division, where agents receive goods over multiple rounds and cumulative fairness is required. We investigate Temporal Envy-Freeness Up to One Good (TEF1) and Up to Any Good (TEFX), its approximation $\alpha$-TEFX, and Temporal Maximin Share (TMMS). Motivated by known impossibilities in standard settings, we consider the model in various restricted settings and extend it by introducing scheduling. Our main contributions draw the boundary between possibility and impossibility. First, regarding temporal fair division without scheduling, we prove that while constant-factor $\alpha$-TEFX is impossible in general, a $1/2$-approximation is achievable for generalized binary valuations and identical days with two agents. Second, regarding temporal fair division with scheduling, we demonstrate that a scheduling buffer of size at least $n/2$ enables TEF1 for identical days. However, we establish that TEFX and TMMS remain largely impossible even with scheduling or restricted domains. These results highlight the inherent difficulty of strict temporal fairness and quantify the trade-offs required to achieve approximation guarantees.
|
https://arxiv.org/abs/2601.12835
|
Academic Papers
|
svg
|
a5cdc042a4b9cd21ce50a6e0db4bc87c59954e065660857fbbe82afc9f2cea06
|
2026-01-21T00:00:00-05:00
|
Knowledge-Integrated Representation Learning for Crypto Anomaly Detection under Extreme Label Scarcity: Relational Domain-Logic Integration with Retrieval-Grounded Context and Path-Level Explanations
|
arXiv:2601.12839v1 Announce Type: new Abstract: Detecting anomalous trajectories in decentralized crypto networks is fundamentally challenged by extreme label scarcity and the adaptive evasion strategies of illicit actors. While Graph Neural Networks (GNNs) effectively capture local structural patterns, they struggle to internalize multi-hop, logic-driven motifs such as fund dispersal and layering that characterize sophisticated money laundering, limiting their forensic accountability under regulations like the FATF Travel Rule. To address this limitation, we propose Relational Domain-Logic Integration (RDLI), a framework that embeds expert-derived heuristics as differentiable, logic-aware latent signals within representation learning. Unlike static rule-based approaches, RDLI enables the detection of complex transactional flows that evade standard message passing. To further account for market volatility, we incorporate a Retrieval-Grounded Context (RGC) module that conditions anomaly scoring on regulatory and macroeconomic context, mitigating false positives caused by benign regime shifts. Under extreme label scarcity (0.01%), RDLI outperforms state-of-the-art GNN baselines by 28.9% in F1 score. A micro expert user study further confirms that RDLI's path-level explanations significantly improve trustworthiness, perceived usefulness, and clarity compared to existing methods, highlighting the importance of integrating domain logic with contextual grounding for both accuracy and explainability.
|
https://arxiv.org/abs/2601.12839
|
Academic Papers
|
svg
|
f9c037702c4e647f6419a07cd64e40d13d0a4a306becd23b9cf0bfc18d041ddb
|
2026-01-21T00:00:00-05:00
|
Lessons Learned from Structural Design and Vibration Testing of 50-kg Microsatellites Deployed from the International Space Station
|
arXiv:2601.12840v1 Announce Type: new Abstract: Hokkaido University and Tohoku University have been developing and operating a constellation of 50-cm-class microsatellites for Earth observation. DIWATA-1, launched in 2016, was deployed into a circular orbit at an altitude of approximately 400 km from the International Space Station (ISS). For the subsequent satellite developed in 2021, the structural design and vibration test campaign were optimized to meet a strict one-year development schedule. This paper summarizes how the structural design of the previous satellite was reviewed and updated, and how the vibration test was successfully completed in a single trial to minimize schedule and technical risks. These lessons learned provide valuable insights, as there are only a limited number of reported cases of 50-kg-class microsatellites deployed from the ISS.
|
https://arxiv.org/abs/2601.12840
|
Academic Papers
|
svg
|
4e80dcb43bed6a0029c54e1f34d51843c8b7e6a6922530d64ac8a6fdf440beda
|
2026-01-21T00:00:00-05:00
|
SCULPT: Constraint-Guided Pruned MCTS that Carves Efficient Paths for Mathematical Reasoning
|
arXiv:2601.12842v1 Announce Type: new Abstract: Automated agent workflows can enhance the problem-solving ability of large language models (LLMs), but common search strategies rely on stochastic exploration and often traverse implausible branches. This occurs because current pipelines sample candidate steps from generic prompts or learned policies with weak domain priors, yielding near-random walks over operators, units, and formats. To promote ordered exploration, this paper introduces SCULPT, a constraint-guided approach for Monte Carlo Tree Search (MCTS) that integrates domain-aware scoring into selection, expansion, simulation, and backpropagation. SCULPT scores and prunes actions using a combination of symbolic checks (dimensional consistency, type compatibility, magnitude sanity, depth control, and diversity) and structural pattern guidance, thereby steering the search toward plausible reasoning paths. Under matched LLM configurations, SCULPT yields stable improvements on multiple datasets; additional results with GPT-5.2 assess executor transferability and performance on frontier reasoning models. Overall, domain-aware constraints can improve accuracy while maintaining efficiency and reasoning stability.
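A toy version of constraint-guided action scoring might combine symbolic checks into a pruning score as below; the check names, weights, and threshold are illustrative, not SCULPT's actual constraint set.

```python
# Score a candidate reasoning step by the weighted fraction of symbolic
# checks it passes; steps below a threshold are pruned before expansion.
def score_action(step, checks, weights):
    total = sum(weights.values())
    passed = sum(weights[name] for name, check in checks.items() if check(step))
    return passed / total

checks = {
    "units_consistent": lambda s: s.get("unit_ok", False),
    "magnitude_sane":   lambda s: abs(s.get("value", 0)) < 1e6,
    "depth_ok":         lambda s: s.get("depth", 0) <= 12,
}
weights = {"units_consistent": 2.0, "magnitude_sane": 1.0, "depth_ok": 1.0}

step = {"unit_ok": True, "value": 42, "depth": 3}
s = score_action(step, checks, weights)
candidates = [step] if s >= 0.5 else []   # prune implausible branches
print(s, len(candidates))
```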
|
https://arxiv.org/abs/2601.12842
|
Academic Papers
|
svg
|
b2436c844be7519efd440e50bfbd61d918fdf6089919cbdd997e1a44a5b053ae
|
2026-01-21T00:00:00-05:00
|
Report on the TRAIMA Research Project
|
arXiv:2601.12844v1 Announce Type: new Abstract: The TRAIMA project (TRaitement Automatique des Interactions Multimodales en Apprentissage), conducted between March 2019 and June 2020, investigates the potential of automatic processing of multimodal interactions in educational settings. The project addresses a central methodological challenge in educational and interactional research: the analysis of verbal, paraverbal, and non-verbal data is currently carried out manually, making it extremely time-consuming and difficult to scale. TRAIMA explores how machine learning approaches could contribute to the categorisation and classification of such interactions. The project focuses specifically on explanatory and collaborative sequences occurring in classroom interactions, particularly in French as a Foreign Language (FLE) and French as a First Language (FLM) contexts. These sequences are analysed as inherently multimodal phenomena, combining spoken language with prosody, gestures, posture, gaze, and spatial positioning. A key theoretical contribution of the project is the precise linguistic and interactional definition of explanatory discourse as a tripartite sequence (opening, explanatory core, closure), drawing on discourse analysis and interactional linguistics. A substantial part of the research is devoted to the methodological foundations of transcription, which constitute a critical bottleneck for any form of automation. The report provides a detailed state of the art of existing transcription conventions (ICOR, Mondada, GARS, VALIBEL, Ferré), highlighting their respective strengths and limitations when applied to multimodal classroom data. Through comparative analyses of manually transcribed sequences, the project demonstrates the inevitable variability and interpretative dimension of transcription practices, depending on theoretical positioning and analytical goals. Empirical work is based on several corpora, notably the INTER-EXPLIC corpus (approximately 30 hours of classroom interaction) and the EXPLIC-LEXIC corpus, which serve both as testing grounds for manual annotation and as reference datasets for future automation. Particular attention is paid to teacher gestures (kinesic and proxemic resources), prosodic features, and their functional role in meaning construction and learner comprehension. The project also highlights the strategic role of the TechnéLAB platform, which provides advanced multimodal data capture (multi-camera video, synchronized audio, eye-tracking, digital interaction traces) and constitutes both a research infrastructure and a test environment for the development of automated tools. In conclusion, TRAIMA does not aim to deliver a fully operational automated system, but rather to establish a rigorous methodological framework for the automatic processing of multimodal pedagogical interactions. The project identifies transcription conventions, annotation categories, and analytical units that are compatible with machine learning approaches, while emphasizing the need for theoretical explicitness and researcher reflexivity. TRAIMA thus lays the groundwork for future interdisciplinary research at the intersection of didactics, discourse analysis, multimodality, and artificial intelligence in education.
|
https://arxiv.org/abs/2601.12844
|
Academic Papers
|
svg
|
518c7b17b0e2f17a3b04044a463397b912524cc337da593713576f59d35d3940
|
2026-01-21T00:00:00-05:00
|
Automatic Generation of Formal Specification and Verification Annotations Using LLMs and Test Oracles
|
arXiv:2601.12845v1 Announce Type: new Abstract: Recent verification tools aim to make formal verification more accessible to software engineers by automating most of the verification process. However, annotating conventional programs with the formal specification and verification constructs (preconditions, postconditions, loop invariants, auxiliary predicates and functions, and proof helpers) required to prove their correctness still demands significant manual effort and expertise. This paper investigates how LLMs can automatically generate such annotations for programs written in Dafny, a verification-aware programming language, starting from conventional code accompanied by natural language specifications (in comments) and test code. In experiments on 110 Dafny programs, a multi-model approach combining Claude Opus 4.5 and GPT-5.2 generated correct annotations for 98.2% of the programs within at most 8 repair iterations, using verifier feedback. A logistic regression analysis shows that proof-helper annotations contribute disproportionately to problem difficulty for current LLMs. Assertions in the test cases served as static oracles to automatically validate the generated pre/postconditions. We also compare generated and manual solutions and present an extension for Visual Studio Code that incorporates automatic generation into the IDE, with encouraging usability feedback.
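The verifier-in-the-loop repair cycle can be sketched generically; `llm_annotate` and `verify` below are hypothetical helpers standing in for the model call and the Dafny verifier, not the paper's actual tooling.

```python
# Generate annotations, verify, and feed errors back for up to max_iters
# repair rounds, mirroring the iterative loop described in the abstract.
def annotate_with_repair(program, llm_annotate, verify, max_iters=8):
    feedback = ""
    for _ in range(max_iters):
        annotated = llm_annotate(program, feedback)
        errors = verify(annotated)        # verifier output, empty if proven
        if not errors:
            return annotated              # annotations accepted
        feedback = "\n".join(errors)      # verifier feedback drives repair
    return None                           # give up after max_iters rounds
```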
|
https://arxiv.org/abs/2601.12845
|
Academic Papers
|
svg
|
14e545bda43358e25a31849c81d65509b835125da5b6b3e243c52faab0fcd43d
|
2026-01-21T00:00:00-05:00
|
The Cost of EFX: Generalized-Mean Welfare and Complexity Dichotomies with Few Surplus Items
|
arXiv:2601.12849v1 Announce Type: new Abstract: Envy-freeness up to any good (EFX) is a central fairness notion for allocating indivisible goods, yet its existence is unresolved in general. In the setting with few surplus items, where the number of goods exceeds the number of agents by a small constant (at most three), EFX allocations are guaranteed to exist, shifting the focus from existence to efficiency and computation. We study how EFX interacts with generalized-mean ($p$-mean) welfare, which subsumes commonly-studied utilitarian ($p=1$), Nash ($p=0$), and egalitarian ($p \rightarrow -\infty$) objectives. We establish sharp complexity dichotomies at $p=0$: for any fixed $p \in (0,1]$, both deciding whether EFX can attain the global $p$-mean optimum and computing an EFX allocation maximizing $p$-mean welfare are NP-hard, even with at most three surplus goods; in contrast, for any fixed $p \leq 0$, we give polynomial-time algorithms that optimize $p$-mean welfare within the space of EFX allocations and efficiently certify when EFX attains the global optimum. We further quantify the welfare loss of enforcing EFX via the price of fairness framework, showing that for $p > 0$, the loss can grow linearly with the number of agents, whereas for $p \leq 0$, it is bounded by a constant depending on the surplus (and for Nash welfare it vanishes asymptotically). Finally we show that requiring Pareto-optimality alongside EFX is NP-hard (and becomes $\Sigma_2^P$-complete for a stronger variant of EFX). Overall, our results delineate when EFX is computationally costly versus structurally aligned with welfare maximization in the setting with few surplus items.
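For concreteness, the generalized-mean welfare family and its three named special cases can be computed as follows (utilities are assumed positive, as required for the Nash and negative-$p$ cases).

```python
import math

def p_mean_welfare(utils, p):
    """Generalized p-mean of a utility profile: utilitarian at p=1,
    Nash (geometric mean) at p=0, egalitarian as p -> -infinity."""
    n = len(utils)
    if p == 0:                                   # Nash: geometric mean
        return math.exp(sum(math.log(u) for u in utils) / n)
    return (sum(u ** p for u in utils) / n) ** (1 / p)

utils = [4.0, 1.0, 9.0]
print(p_mean_welfare(utils, 1))      # utilitarian mean: 4.666...
print(p_mean_welfare(utils, 0))      # Nash welfare: (4*1*9)^(1/3) ~ 3.30
print(p_mean_welfare(utils, -50))    # approaches min(utils) = 1.0
```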
|
https://arxiv.org/abs/2601.12849
|
Academic Papers
|
svg
|
a4a3a1433d5587d7cf66e3bf2ee501f7ed1535ba390d39827dc07fd4c19d650d
|
2026-01-21T00:00:00-05:00
|
System Analysis and Pre-Flight Evaluation of Deployable Solar Panels for 3U CubeSat HOKUSHIN-1
|
arXiv:2601.12851v1 Announce Type: new Abstract: This paper describes the system design methodology derived from the development and evaluation tests of deployable solar panels to be mounted on a 3U CubeSat. The study mainly includes structural analysis, thermal analysis, and a review of vibration test results. Hokkaido University is developing the 3U CubeSat HOKUSHIN-1 in collaboration with Tohoku University and Muroran Institute of Technology. Deployable solar panels are a key technology for future planned lunar exploration missions, as they enable power-intensive communication and propulsion required for orbit control. The satellite also demonstrates a newly developed compact and efficient propulsion system. The satellite has dimensions of approximately 10x10x34 cm, a mass of 3.99 kg, and will be deployed into a circular orbit at an altitude of about 400 km with an orbital inclination of 51.6 degrees from the International Space Station.
|
https://arxiv.org/abs/2601.12851
|
Academic Papers
|
svg
|
e4fdebbda71b137987914258d16035dd434dae5d977d47b750b95e016a040e44
|
2026-01-21T00:00:00-05:00
|
On Resilient and Efficient Linear Secure Aggregation in Hierarchical Federated Learning
|
arXiv:2601.12853v1 Announce Type: new Abstract: In this paper, we study the fundamental limits of hierarchical secure aggregation under unreliable communication. We consider a hierarchical network where each client connects to multiple relays, and both client-to-relay and relay-to-server links are intermittent. Under this setting, we characterize the minimum communication and randomness costs required to achieve robust secure aggregation. We then propose an optimal protocol that attains these minimum costs, and establish its optimality through a matching converse proof. In addition, we introduce an improved problem formulation that bridges the gap between existing information-theoretic secure aggregation protocols and practical real-world federated learning problems.
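The basic primitive behind secure aggregation, additive pairwise masking whose masks cancel in the sum, is sketched below; the paper's hierarchical, dropout-resilient protocol adds substantially more machinery on top of this idea.

```python
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.standard_normal(4) for _ in range(3)]   # per-client updates

# Each ordered pair (i, j), i < j, agrees on a shared random mask;
# client i adds it and client j subtracts it before uploading.
n, dim = len(updates), 4
masks = {(i, j): rng.standard_normal(dim)
         for i in range(n) for j in range(i + 1, n)}
masked = []
for i in range(n):
    m = updates[i].copy()
    for (a, b), mask in masks.items():
        if a == i:
            m += mask
        if b == i:
            m -= mask
    masked.append(m)

# The server sees only masked updates, yet their sum equals the true sum.
print(np.allclose(sum(masked), sum(updates)))  # True: masks cancel
```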
|
https://arxiv.org/abs/2601.12853
|
Academic Papers
|
svg
|
d889067fee3bbbea0ba6779b76162b9edded13415ba02260a9bd72d1252a21cb
|
2026-01-21T00:00:00-05:00
|
Mining Citywide Dengue Spread Patterns in Singapore Through Hotspot Dynamics from Open Web Data
|
arXiv:2601.12856v1 Announce Type: new Abstract: Dengue, a mosquito-borne disease, continues to pose a persistent public health challenge in urban areas, particularly in tropical regions such as Singapore. Effective and affordable control requires anticipating where transmission risks are likely to emerge so that interventions can be deployed proactively rather than reactively. This study introduces a novel framework that uncovers and exploits latent transmission links between urban regions, mined directly from publicly available dengue case data. Instead of treating cases as isolated reports, we model how hotspot formation in one area is influenced by epidemic dynamics in neighboring regions. While mosquito movement is highly localized, long-distance transmission is often driven by human mobility, and in our case study, the learned network aligns closely with commuting flows, providing an interpretable explanation for citywide spread. These hidden links are optimized through gradient descent and used not only to forecast hotspot status but also to verify the consistency of spreading patterns, by examining the stability of the inferred network across consecutive weeks. Case studies on Singapore during 2013-2018 and 2020 show that four weeks of hotspot history are sufficient to achieve an average F-score of 0.79. Importantly, the learned transmission links align with commuting flows, highlighting the interpretable interplay between hidden epidemic spread and human mobility. By shifting from simply reporting dengue cases to mining and validating hidden spreading dynamics, this work transforms open web-based case data into a predictive and explanatory resource. The proposed framework advances epidemic modeling while providing a scalable, low-cost tool for public health planning, early intervention, and urban resilience.
|
https://arxiv.org/abs/2601.12856
|
Academic Papers
|
svg
|
c9ad4128dece44ac66fb55465100eefdc5c7cc956cb1a94b482d41bbcdcfc6be
|
2026-01-21T00:00:00-05:00
|
Report on Earth Observation Missions and Ground Station Management using On-Demand Satellite Operation System
|
arXiv:2601.12857v1 Announce Type: new Abstract: Since the launch of its first satellite in 2009, Tohoku University has continuously developed and operated Earth observation satellites and engineering demonstration satellites in the 50-cm class and CubeSat class (up to 3U). The 50-cm-class satellite launched in 2021 enabled efficient operations through cloud-based management functions for both the satellite and ground stations, including automatic command generation. By 2022, up to eight operational satellites were simultaneously managed on a daily basis using three ground stations (Sendai, Hakodate, and Sweden). This paper presents the operational achievements to date and introduces the system that supports efficient satellite operations.
|
https://arxiv.org/abs/2601.12857
|
Academic Papers
|
svg
|
cf2903b0125e9bfe4667d561c7e692e144e16ca9294e752055714fc4322a5ad8
|
2026-01-21T00:00:00-05:00
|
Generating Cyclic Conformers with Flow Matching in Cremer-Pople Coordinates
|
arXiv:2601.12859v1 Announce Type: new Abstract: Cyclic molecules are ubiquitous across applications in chemistry and biology. Their restricted conformational flexibility provides structural pre-organization that is key to their function in drug discovery and catalysis. However, reliably sampling the conformer ensembles of ring systems remains challenging. Here, we introduce PuckerFlow, a generative machine learning model that performs flow matching on the Cremer-Pople space, a low-dimensional internal coordinate system capturing the relevant degrees of freedom of rings. Our approach enables generation of valid closed rings by design and demonstrates strong performance in generating conformers that are both diverse and precise. We show that PuckerFlow outperforms other conformer generation methods on nearly all quantitative metrics and illustrate the potential of PuckerFlow for ring systems relevant to chemical applications, particularly in catalysis and drug discovery. This work enables efficient and reliable conformer generation of cyclic structures, paving the way towards modeling structure-property relationships and the property-guided generation of rings across a wide range of applications in chemistry and biology.
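A generic conditional flow-matching training step over a low-dimensional coordinate space is sketched below in PyTorch; it illustrates the objective class PuckerFlow builds on, with the Cremer-Pople parameterization abstracted away as a 2D toy space and the network architecture chosen arbitrarily.

```python
import torch

def flow_matching_loss(vector_field, x1):
    """Linear-interpolant flow matching: regress the model's velocity at
    x_t = (1-t)*x0 + t*x1 onto the target velocity x1 - x0."""
    x0 = torch.randn_like(x1)                      # noise sample
    t = torch.rand(x1.shape[0], 1)                 # uniform time in [0, 1)
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0
    pred = vector_field(torch.cat([xt, t], dim=1)) # velocity at (x_t, t)
    return ((pred - target) ** 2).mean()

# Toy usage: a tiny MLP vector field over a 2D "puckering" coordinate space.
field = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.SiLU(),
                            torch.nn.Linear(64, 2))
x1 = torch.randn(128, 2)                           # training conformer coords
loss = flow_matching_loss(field, x1)
loss.backward()
```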
|
https://arxiv.org/abs/2601.12859
|
Academic Papers
|
svg
|
f3c0029c3b621b838db6edd35962b4cd9ac118bd045be484c2b67782098c0434
|
2026-01-21T00:00:00-05:00
|
FGTBT: Frequency-Guided Task-Balancing Transformer for Unified Facial Landmark Detection
|
arXiv:2601.12863v1 Announce Type: new Abstract: Recently, deep learning-based facial landmark detection (FLD) methods have achieved considerable success. However, in challenging scenarios such as large pose variations, illumination changes, and facial expression variations, they still struggle to accurately capture the geometric structure of the face, resulting in performance degradation. Moreover, the limited size and diversity of existing FLD datasets hinder robust model training, leading to reduced detection accuracy. To address these challenges, we propose a Frequency-Guided Task-Balancing Transformer (FGTBT), which enhances facial structure perception through frequency-domain modeling and multi-dataset unified training. Specifically, we propose a novel Fine-Grained Multi-Task Balancing loss (FMB-loss), which moves beyond coarse task-level balancing by assigning weights to individual landmarks based on their occurrence across datasets. This enables more effective unified training and mitigates the issue of inconsistent gradient magnitudes. Additionally, a Frequency-Guided Structure-Aware (FGSA) model is designed to utilize frequency-guided structure injection and regularization to help the network learn facial structure constraints. Extensive experimental results on popular benchmark datasets demonstrate that the integration of the proposed FMB-loss and FGSA model into our FGTBT framework achieves performance comparable to state-of-the-art methods. The code is available at https://github.com/Xi0ngxinyu/FGTBT.
|
https://arxiv.org/abs/2601.12863
|
Academic Papers
|
svg
|
bc30de5d72f34ad2073b8bcdffc85b72376e0b654867ee6867e8991b85420f64
|
2026-01-21T00:00:00-05:00
|
Proxy Robustness in Vision Language Models is Effortlessly Transferable
|
arXiv:2601.12865v1 Announce Type: new Abstract: As a pivotal technique for improving the defense of deep models, adversarial robustness transfer via distillation has demonstrated remarkable success in conventional image classification tasks. However, this paradigm encounters critical challenges when applied to vision-language models (VLMs) such as CLIP: constructing an adversarially robust teacher for large-scale multi-modal models demands prohibitively high computational resources. We bridge this gap by revealing an interesting phenomenon: vanilla CLIP (without adversarial training) exhibits intrinsic defensive capabilities against adversarial examples generated by another CLIP with a different architecture. We formally define this as proxy adversarial robustness, and naturally propose a Heterogeneous Proxy Transfer (HPT) framework that establishes cross-architectural robustness distillation channels between CLIP variants, effortlessly enabling VLM robustness transfer from proxy to target models. Yet, such a proxy transfer paradigm easily induces severe overfitting, leading to a sharp degradation in zero-shot natural generalization. To resolve that, we design Generalization-Pivot Decoupling (GPD) by leveraging the difference in learning rate scheduling. This decouples the proxy transfer process into a generalization-anchored warm-up that maintains generalization and a generalization-pulled HPT that promotes adversarial robustness, achieving an equilibrium between natural generalization and adversarial robustness. Extensive experiments on 15 zero-shot datasets demonstrate the effectiveness of our HPT-GPD method. The code is available at github.com/fxw13/HPT-GPD.
|
https://arxiv.org/abs/2601.12865
|
Academic Papers
|
svg
|