diff --git "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml" "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
--- "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
+++ "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
@@ -7,12 +7,26364 @@
http://www.rssboard.org/rss-specification en-us
- Sun, 18 Jan 2026 05:00:00 +0000
+ Wed, 21 Jan 2026 05:00:20 +0000 rss-help@arxiv.org
- Sun, 18 Jan 2026 00:00:00 -0500
+ Wed, 21 Jan 2026 00:00:00 -0500 Sunday Saturday
+
+ SynQP: A Framework and Metrics for Evaluating the Quality and Privacy Risk of Synthetic Data
+ https://arxiv.org/abs/2601.12124
+ arXiv:2601.12124v1 Announce Type: new
+Abstract: The use of synthetic data in health applications raises privacy concerns, yet the lack of open frameworks for privacy evaluations has slowed its adoption. A major challenge is the absence of accessible benchmark datasets for evaluating privacy risks, due to difficulties in acquiring sensitive data. To address this, we introduce SynQP, an open framework for benchmarking privacy in synthetic data generation (SDG) using simulated sensitive data, ensuring that original data remains confidential. We also highlight the need for privacy metrics that fairly account for the probabilistic nature of machine learning models. As a demonstration, we use SynQP to benchmark CTGAN and propose a new identity disclosure risk metric that offers a more accurate estimation of privacy risks compared to existing approaches. Our work provides a critical tool for improving the transparency and reliability of privacy evaluations, enabling safer use of synthetic data in health-related applications. In our quality evaluations, non-private models achieved near-perfect machine-learning efficacy ($\ge 0.97$). Our privacy assessments (Table II) reveal that DP consistently lowers both identity disclosure risk (SD-IDR) and membership-inference attack risk (SD-MIA), with all DP-augmented models staying below the 0.09 regulatory threshold. Code available at https://github.com/CAN-SYNH/SynQP
+ oai:arXiv.org:2601.12124v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/PST65910.2025.11268831
+ 2025 22nd Annual International Conference on Privacy, Security, and Trust (PST)
+ Bing Hu, Yixin Li, Asma Bahamyirou, Helen Chen
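The SynQP abstract proposes an identity disclosure risk metric but does not define it, so the paper's SD-IDR cannot be reproduced from the text alone. As a heavily simplified, purely illustrative sketch of the general idea (how close synthetic records come to real ones), one might compute a nearest-neighbor match rate; the distance, threshold, and function names below are all hypothetical:

```python
# Toy illustration of an identity-disclosure-style risk score: the fraction
# of synthetic records whose nearest real record is "too close". This is NOT
# the paper's SD-IDR metric (the abstract does not specify it); the L1
# distance and the 0.1 threshold are invented for this sketch.

def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def disclosure_risk(real, synthetic, threshold=0.1):
    """Fraction of synthetic rows lying within `threshold` of some real row."""
    hits = 0
    for s in synthetic:
        nearest = min(l1_distance(s, r) for r in real)
        if nearest <= threshold:
            hits += 1
    return hits / len(synthetic)

real = [(0.1, 0.2), (0.8, 0.9), (0.5, 0.5)]
synthetic = [(0.11, 0.21), (0.4, 0.1), (0.6, 0.6)]
risk = disclosure_risk(real, synthetic)  # one of three synthetic rows is a near-copy
```

A generator that memorizes training rows would push this score toward 1.0, which is the failure mode privacy metrics of this family try to flag.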
+
+
+ UniMo: Unified Motion Generation and Understanding with Chain of Thought
+ https://arxiv.org/abs/2601.12126
+ arXiv:2601.12126v1 Announce Type: new
+Abstract: Existing 3D human motion generation and understanding methods often exhibit limited interpretability, restricting effective mutual enhancement between these inherently related tasks. While current unified frameworks based on large language models (LLMs) leverage linguistic priors, they frequently encounter challenges in semantic alignment and task coherence. Moreover, the next-token prediction paradigm in LLMs is ill-suited for motion sequences, causing cumulative prediction errors. To address these limitations, we propose UniMo, a novel framework that integrates motion-language information and interpretable chain of thought (CoT) reasoning into the LLM via supervised fine-tuning (SFT). We further introduce reinforcement learning with Group Relative Policy Optimization (GRPO) as a post-training strategy that optimizes over groups of tokens to enforce structural correctness and semantic alignment, mitigating cumulative errors in motion token prediction. Extensive experiments demonstrate that UniMo significantly outperforms existing unified and task-specific models, achieving state-of-the-art performance in both motion generation and understanding.
+ oai:arXiv.org:2601.12126v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Guocun Wang, Kenkun Liu, Jing Lin, Guorui Song, Jian Li, Xiaoguang Han
+
+
+ SolarGPT-QA: A Domain-Adaptive Large Language Model for Educational Question Answering in Space Weather and Heliophysics
+ https://arxiv.org/abs/2601.12131
+ arXiv:2601.12131v1 Announce Type: new
+Abstract: Solar activity, including solar flares, coronal mass ejections (CMEs), and geomagnetic storms, can significantly impact satellites, aviation, power grids, data centers, and space missions. Extreme solar events can cause substantial economic damage if not predicted in advance, highlighting the importance of accurate forecasting and effective education in space science. Although large language models (LLMs) perform well on general tasks, they often lack domain-specific knowledge and pedagogical capability to clearly explain complex space science concepts.
+ We introduce SolarGPT-QA, a question answering system based on a domain-adapted large language model built on the LLaMA-3 base model. The model is trained using scientific literature and large-scale question-answer data generated with GPT-4 and refined using Grok-3 in a student-friendly storytelling style. Human pairwise evaluations show that SolarGPT-QA outperforms general-purpose models in zero-shot settings and achieves competitive performance compared to instruction-tuned models for educational explanations in space weather and heliophysics. A small pilot student comprehension study further suggests improved clarity and accessibility of the generated explanations. Ablation experiments indicate that combining domain-adaptive pretraining with pedagogical fine-tuning is important for balancing scientific accuracy and educational effectiveness. This work represents an initial step toward a broader SolarGPT framework for space science education and forecasting.
+ oai:arXiv.org:2601.12131v1
+ cs.LG
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Santosh Chapagain, MohammadReza EskandariNasab, Onur Vural, Shah Muhammad Hamdi, Soukaina Filali Boubrahimi
+
+
+ Bengali Text Classification: An Evaluation of Large Language Model Approaches
+ https://arxiv.org/abs/2601.12132
+ arXiv:2601.12132v1 Announce Type: new
+Abstract: Bengali text classification is a significant task in natural language processing (NLP), where text is categorized into predefined labels. Unlike English, Bengali faces challenges due to the lack of extensive annotated datasets and pre-trained language models. This study explores the effectiveness of large language models (LLMs) in classifying Bengali newspaper articles. The dataset used, obtained from Kaggle, consists of articles from Prothom Alo, a major Bangladeshi newspaper. Three instruction-tuned LLMs, LLaMA 3.1 8B Instruct, LLaMA 3.2 3B Instruct, and Qwen 2.5 7B Instruct, were evaluated for this task under the same classification framework. Among the evaluated models, Qwen 2.5 achieved the highest classification accuracy of 72%, showing particular strength in the "Sports" category. In comparison, LLaMA 3.1 and LLaMA 3.2 attained accuracies of 53% and 56%, respectively. The findings highlight the effectiveness of LLMs in Bengali text classification, despite the scarcity of resources for Bengali NLP. Future research will focus on exploring additional models, addressing class imbalance issues, and refining fine-tuning approaches to improve classification performance.
+ oai:arXiv.org:2601.12132v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Md Mahmudul Hoque, Md Mehedi Hassain, Md Hojaifa Tanvir, Rahul Nandy
+
+
+ Human-Human-AI Triadic Programming: Uncovering the Role of AI Agent and the Value of Human Partner in Collaborative Learning
+ https://arxiv.org/abs/2601.12134
+ arXiv:2601.12134v1 Announce Type: new
+Abstract: As AI assistance becomes embedded in programming practice, researchers have increasingly examined how these systems help learners generate code and work more efficiently. However, these studies often position AI as a replacement for human collaboration and overlook the social and learning-oriented aspects that emerge in collaborative programming. Our work introduces human-human-AI (HHAI) triadic programming, where an AI agent serves as an additional collaborator rather than a substitute for a human partner. Through a within-subjects study with 20 participants, we show that triadic collaboration enhances collaborative learning and social presence compared to the dyadic human-AI (HAI) baseline. In the triadic HHAI conditions, participants relied significantly less on AI-generated code in their work. This effect was strongest in the HHAI-shared condition, where participants had an increased sense of responsibility to understand AI suggestions before applying them. These findings demonstrate how triadic settings activate socially shared regulation of learning by making AI use visible and accountable to a human peer, suggesting that AI systems that augment rather than automate peer collaboration can better preserve the learning processes that collaborative programming relies on.
+ oai:arXiv.org:2601.12134v1
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Taufiq Daryanto, Xiaohan Ding, Kaike Ping, Lance T. Wilhelm, Yan Chen, Chris Brown, Eugenia H. Rho
+
+
+ CoSMeTIC: Zero-Knowledge Computational Sparse Merkle Trees with Inclusion-Exclusion Proofs for Clinical Research
+ https://arxiv.org/abs/2601.12136
+ arXiv:2601.12136v1 Announce Type: new
+Abstract: Analysis of clinical data is a cornerstone of biomedical research with applications in areas such as genomic testing and response characterization of therapeutic drugs. Maintaining strict privacy controls is essential because such data typically contains personally identifiable health information of patients. At the same time, regulatory compliance often requires study managers to demonstrate the integrity and authenticity of participant data used in analyses. Balancing these competing requirements, privacy preservation and verifiable accountability, remains a critical challenge. In this paper, we present CoSMeTIC, a zero-knowledge computational framework that proposes computational Sparse Merkle Trees (SMTs) as a means to generate verifiable inclusion and exclusion proofs for individual participants' data in clinical studies. We formally analyze the zero-knowledge properties of CoSMeTIC and evaluate its computational efficiency through extensive experiments. Using the Kolmogorov-Smirnov and likelihood-ratio hypothesis tests, along with logistic-regression-based genomic analyses on real-world Huntington's disease datasets, we demonstrate that CoSMeTIC achieves strong privacy guarantees while maintaining statistical fidelity. Our results suggest that CoSMeTIC provides a scalable and practical alternative for achieving regulatory compliance with rigorous privacy protection in large-scale clinical research.
+ oai:arXiv.org:2601.12136v1
+ cs.CR
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohammad Shahid, Paritosh Ramanan, Mohammad Fili, Guiping Hu, Hillel Haim
+
+
+ EMoE: Eigenbasis-Guided Routing for Mixture-of-Experts
+ https://arxiv.org/abs/2601.12137
+ arXiv:2601.12137v1 Announce Type: new
+Abstract: The relentless scaling of deep learning models has led to unsustainable computational demands, positioning Mixture-of-Experts (MoE) architectures as a promising path towards greater efficiency. However, MoE models are plagued by two fundamental challenges: 1) a load imbalance problem known as the "rich get richer" phenomenon, where a few experts are over-utilized, and 2) an expert homogeneity problem, where experts learn redundant representations, negating their purpose. Current solutions typically employ an auxiliary load-balancing loss that, while mitigating imbalance, often exacerbates homogeneity by enforcing uniform routing at the expense of specialization. To resolve this, we introduce the Eigen-Mixture-of-Experts (EMoE), a novel architecture that leverages a routing mechanism based on a learned orthonormal eigenbasis. EMoE projects input tokens onto this shared eigenbasis and routes them based on their alignment with the principal components of the feature space. This principled, geometric partitioning of data intrinsically promotes both balanced expert utilization and the development of diverse, specialized experts, all without the need for a conflicting auxiliary loss function. Our code is publicly available at https://github.com/Belis0811/EMoE.
+ oai:arXiv.org:2601.12137v1
+ cs.LG
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Anzhe Cheng, Shukai Duan, Shixuan Li, Chenzhong Yin, Mingxi Cheng, Shahin Nazarian, Paul Thompson, Paul Bogdan
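The EMoE abstract describes routing by projecting tokens onto a shared orthonormal basis and scoring alignment with groups of principal directions. A minimal sketch of that idea, assuming a random (rather than learned) orthonormal basis and an even split of directions across experts, neither of which is stated in the abstract:

```python
import numpy as np

# Sketch of eigenbasis-guided routing: project each token into an orthonormal
# basis and route it to the expert whose assigned direction group captures the
# most projection energy. The random QR basis and even per-expert slicing are
# our assumptions; EMoE learns its basis.

rng = np.random.default_rng(0)
d, n_experts = 8, 4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthonormal columns

def route(tokens):
    """Return an expert index per token based on projection energy."""
    proj = tokens @ Q                                     # coordinates in the basis
    per_expert = proj.reshape(len(tokens), n_experts, d // n_experts)
    energy = (per_expert ** 2).sum(axis=-1)               # energy per direction group
    return energy.argmax(axis=-1)

tokens = rng.standard_normal((16, d))
experts = route(tokens)   # shape (16,), values in {0, 1, 2, 3}
```

Because the basis is orthonormal, the per-expert energies partition each token's squared norm, which is what makes this a geometric partitioning rather than a learned gating score.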
+
+
+ DriveSafe: A Hierarchical Risk Taxonomy for Safety-Critical LLM-Based Driving Assistants
+ https://arxiv.org/abs/2601.12138
+ arXiv:2601.12138v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are increasingly integrated into vehicle-based digital assistants, where unsafe, ambiguous, or legally incorrect responses can lead to serious safety, ethical, and regulatory consequences. Despite growing interest in LLM safety, existing taxonomies and evaluation frameworks remain largely general-purpose and fail to capture the domain-specific risks inherent to real-world driving scenarios. In this paper, we introduce DriveSafe, a hierarchical, four-level risk taxonomy designed to systematically characterize safety-critical failure modes of LLM-based driving assistants. The taxonomy comprises 129 fine-grained atomic risk categories spanning technical, legal, societal, and ethical dimensions, grounded in real-world driving regulations and safety principles and reviewed by domain experts. To validate the safety relevance and realism of the constructed prompts, we evaluate their refusal behavior across six widely deployed LLMs. Our analysis shows that the evaluated models often fail to appropriately refuse unsafe or non-compliant driving-related queries, underscoring the limitations of general-purpose safety alignment in driving contexts.
+ oai:arXiv.org:2601.12138v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Abhishek Kumar, Riya Tapwal, Carsten Maple
+
+
+ TIDE: A Trace-Informed Depth-First Exploration for Planning with Temporally Extended Goals
+ https://arxiv.org/abs/2601.12141
+ arXiv:2601.12141v1 Announce Type: new
+Abstract: Task planning with temporally extended goals (TEGs) is a critical challenge in AI and robotics, enabling agents to achieve complex sequences of objectives over time rather than addressing isolated, immediate tasks. Linear Temporal Logic on finite traces (LTLf) provides a robust formalism for encoding these temporal goals. Traditional LTLf task planning approaches often transform the temporal planning problem into a classical planning problem with reachability goals, which are then solved using off-the-shelf planners. However, these methods often lack informed heuristics to provide a guided search for temporal goals. We introduce TIDE (Trace-Informed Depth-first Exploration), a novel approach that addresses this limitation by decomposing a temporal problem into a sequence of smaller, manageable reach-avoid sub-problems, each solvable using an off-the-shelf planner. TIDE identifies and prioritizes promising automaton traces within the domain graph, using cost-driven heuristics to guide exploration. Its adaptive backtracking mechanism systematically recovers from failed plans by recalculating costs and penalizing infeasible transitions, ensuring completeness and efficiency. Experimental results demonstrate that TIDE achieves promising performance and is a valuable addition to the portfolio of planning methods for temporally extended goals.
+ oai:arXiv.org:2601.12141v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yuliia Suprun, Khen Elimelech, Lydia E. Kavraki, Moshe Y. Vardi
+
+
+ Neural Process-Based Reactive Controller for Autonomous Racing
+ https://arxiv.org/abs/2601.12143
+ arXiv:2601.12143v1 Announce Type: new
+Abstract: Attention-based neural architectures have become central to state-of-the-art methods in real-time nonlinear control. As these data-driven models continue to be integrated into increasingly safety-critical domains, ensuring statistically grounded and provably safe decision-making becomes essential. This paper introduces a novel reactive control framework for gap-based navigation using the Attentive Neural Process (AttNP) and a physics-informed extension, the PI-AttNP. Both models are evaluated in a simulated F1TENTH-style Ackermann steering racecar environment, chosen as a fast-paced proxy for safety-critical autonomous driving scenarios. The PI-AttNP augments the AttNP architecture with approximate model-based priors to inject physical inductive bias, enabling faster convergence and improved prediction accuracy suited for real-time control. To further ensure safety, we derive and implement a control barrier function (CBF)-based filtering mechanism that analytically enforces collision avoidance constraints. This CBF formulation is fully compatible with the learned AttNP controller and generalizes across a wide range of racing scenarios, providing a lightweight and certifiable safety layer. Our results demonstrate competitive closed-loop performance while ensuring real-time constraint satisfaction.
+ oai:arXiv.org:2601.12143v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Devin Hunter, Chinwendu Enyioha
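The racing-controller abstract describes a control barrier function (CBF) filter that analytically enforces collision avoidance on top of a learned controller. A minimal one-dimensional sketch of that filtering pattern, assuming single-integrator dynamics and illustrative constants (the paper's Ackermann racecar model is not reproduced here):

```python
# Minimal 1-D sketch of CBF-style safety filtering: a barrier h(x) >= 0
# encodes the remaining gap to a wall, and the nominal command is clipped so
# that h_dot >= -alpha * h holds. Dynamics (x_dot = u) and all constants are
# illustrative, not the paper's vehicle model.

def cbf_filter(u_nom, x, x_wall, d_safe=0.5, alpha=2.0):
    """Clip forward velocity u_nom so the vehicle never enters the margin."""
    h = (x_wall - x) - d_safe          # barrier: distance margin to the wall
    # with x_dot = u we get h_dot = -u; safety requires -u + alpha*h >= 0
    return min(u_nom, alpha * h)

x, x_wall, dt = 0.0, 5.0, 0.05
for _ in range(400):
    u = cbf_filter(u_nom=3.0, x=x, x_wall=x_wall)
    x += u * dt
# x approaches x_wall - d_safe = 4.5 from below without crossing it
```

The same structure generalizes: in higher dimensions the `min` becomes a small quadratic program that finds the closest safe control to the learned controller's output.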
+
+
+ Threshold Differential Attention for Sink-Free, Ultra-Sparse, and Non-Dispersive Language Modeling
+ https://arxiv.org/abs/2601.12145
+ arXiv:2601.12145v1 Announce Type: new
+Abstract: Softmax attention struggles with long contexts due to structural limitations: the strict sum-to-one constraint forces attention sinks on irrelevant tokens, and probability mass disperses as sequence lengths increase. We tackle these problems with Threshold Differential Attention (TDA), a sink-free attention mechanism that achieves ultra-sparsity and improved robustness at longer sequence lengths without the computational overhead of projection methods or the performance degradation caused by noise accumulation of standard rectified attention. TDA applies row-wise extreme-value thresholding with a length-dependent gate, retaining only exceedances. Inspired by the differential transformer, TDA also subtracts an inhibitory view to enhance expressivity. Theoretically, we prove that TDA controls the expected number of spurious survivors per row to $O(1)$ and that consensus spurious matches across independent views vanish as context grows. Empirically, TDA produces $>99\%$ exact zeros and eliminates attention sinks while maintaining competitive performance on standard and long-context benchmarks.
+ oai:arXiv.org:2601.12145v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xingyue Huang, Xueying Ding, Mingxuan Ju, Yozen Liu, Neil Shah, Tong Zhao
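The TDA abstract combines a differential (inhibitory-view) score with row-wise thresholding that keeps only exceedances, yielding exact zeros. A sketch of that mechanism, assuming an illustrative mean-plus-two-sigma gate and subtraction weight (the paper's extreme-value gate and learned lambda are not specified in the abstract):

```python
import numpy as np

# Sketch of threshold differential attention: subtract an inhibitory score
# map (differential-transformer style), zero out entries below a per-row
# gate, and renormalize over the survivors. The gate (mean + 2*std) and
# lam=0.5 are illustrative stand-ins for the paper's choices.

def tda_weights(q1, k1, q2, k2, lam=0.5):
    n, d = q1.shape
    s = q1 @ k1.T / np.sqrt(d) - lam * (q2 @ k2.T / np.sqrt(d))
    tau = s.mean(axis=-1, keepdims=True) + 2.0 * s.std(axis=-1, keepdims=True)
    w = np.where(s > tau, np.exp(s), 0.0)          # exact zeros below the gate
    z = w.sum(axis=-1, keepdims=True)
    return np.where(z > 0, w / np.maximum(z, 1e-12), 0.0)

rng = np.random.default_rng(1)
n, d = 64, 16
w = tda_weights(*(rng.standard_normal((n, d)) for _ in range(4)))
sparsity = (w == 0).mean()   # the vast majority of entries are exactly zero
```

Note how rows with no exceedances simply emit zero mass instead of being forced to dump probability on a sink token, which is the "sink-free" property the abstract targets.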
+
+
+ From LLMs to Agents in Programming: The Impact of Providing an LLM with a Compiler
+ https://arxiv.org/abs/2601.12146
+ arXiv:2601.12146v1 Announce Type: new
+Abstract: Large Language Models have demonstrated remarkable capabilities in natural language and program generation, and in software development. However, the source code generated by the LLMs does not always meet quality requirements and may fail to compile. Therefore, many studies evolve into agents that can reason about the problem before generating the source code for the solution. The goal of this paper is to study the degree to which such agents benefit from access to software development tools, in our case, a \texttt{gcc} compiler. We conduct a computational experiment on the RosettaCode dataset, on 699 programming tasks in C. We evaluate how the integration with a compiler shifts the role of the language model from a passive generator to an active agent capable of iteratively developing runnable programs based on feedback from the compiler. We evaluated 16 language models with sizes ranging from small (135 million) to medium (3 billion) and large (70 billion). Our results show that access to a compiler improved compilation success by 5.3 to 79.4 percentage points without affecting the semantics of the generated program. Syntax errors dropped by 75\%, and errors related to undefined references dropped by 87\% for the tasks where the agents outperformed the baselines. We also observed that in some cases, smaller models with a compiler outperform larger models with a compiler. We conclude that it is essential for LLMs to have access to software engineering tools to enhance their performance and reduce the need for large models in software engineering, which in turn reduces the energy footprint.
+ oai:arXiv.org:2601.12146v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Viktor Kjellberg, Miroslaw Staron, Farnaz Fotrousi
+
+
+ Segment and Matte Anything in a Unified Model
+ https://arxiv.org/abs/2601.12147
+ arXiv:2601.12147v1 Announce Type: new
+Abstract: Segment Anything (SAM) has recently pushed the boundaries of segmentation by demonstrating zero-shot generalization and flexible prompting after training on over one billion masks. Despite this, its mask prediction accuracy often falls short of the precision required in real-world applications. While several refinement modules have been proposed to boost SAM's segmentation quality, achieving highly accurate object delineation within a single, unified framework remains an open challenge. Furthermore, interactive image matting, which aims to generate fine-grained alpha mattes guided by diverse user hints, has not yet been explored in the context of SAM. Insights from recent studies highlight strong correlations between segmentation and matting, suggesting the feasibility of a unified model capable of both tasks. In this paper, we introduce Segment And Matte Anything (SAMA), a lightweight extension of SAM that delivers high-quality interactive image segmentation and matting with minimal extra parameters. Our Multi-View Localization Encoder (MVLE) captures detailed features from local views, while the Localization Adapter (Local-Adapter) refines mask outputs by recovering subtle boundary details. We also incorporate two prediction heads for each task into the architecture to generate segmentation and matting masks simultaneously. Trained on a diverse dataset aggregated from publicly available sources, SAMA achieves state-of-the-art performance across multiple segmentation and matting benchmarks, showcasing its adaptability and effectiveness in a wide range of downstream tasks.
+ oai:arXiv.org:2601.12147v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zezhong Fan, Xiaohan Li, Topojoy Biswas, Kaushiki Nag, Kannan Achan
+
+
+ Many Hands Make Light Work: An LLM-based Multi-Agent System for Detecting Malicious PyPI Packages
+ https://arxiv.org/abs/2601.12148
+ arXiv:2601.12148v1 Announce Type: new
+Abstract: Malicious code in open-source repositories such as PyPI poses a growing threat to software supply chains. Traditional rule-based tools often overlook the semantic patterns in source code that are crucial for identifying adversarial components. Large language models (LLMs) show promise for software analysis, yet their use in interpretable and modular security pipelines remains limited. This paper presents LAMPS, a multi-agent system that employs collaborative LLMs to detect malicious PyPI packages. The system consists of four role-specific agents for package retrieval, file extraction, classification, and verdict aggregation, coordinated through the CrewAI framework. A prototype combines a fine-tuned CodeBERT model for classification with LLaMA-3 agents for contextual reasoning. LAMPS has been evaluated on two complementary datasets: D1, a balanced collection of 6,000 setup.py files, and D2, a realistic multi-file dataset with 1,296 files and natural class imbalance. On D1, LAMPS achieves 97.7% accuracy, surpassing MPHunter--one of the state-of-the-art approaches. On D2, it reaches 99.5% accuracy and 99.5% balanced accuracy, outperforming RAG-based approaches and fine-tuned single-agent baselines. McNemar's test confirmed these improvements as highly significant. The results demonstrate the feasibility of distributed LLM reasoning for malicious code detection and highlight the benefits of modular multi-agent designs in software supply chain security.
+ oai:arXiv.org:2601.12148v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Muhammad Umar Zeshan, Motunrayo Ibiyo, Claudio Di Sipio, Phuong T. Nguyen, Davide Di Ruscio
+
+
+ Principal Component Analysis-Based Terahertz Self-Supervised Denoising and Deblurring Deep Neural Networks
+ https://arxiv.org/abs/2601.12149
+ arXiv:2601.12149v1 Announce Type: new
+Abstract: Terahertz (THz) systems inherently introduce frequency-dependent degradation effects, resulting in low-frequency blurring and high-frequency noise in amplitude images. Conventional image processing techniques cannot simultaneously address both issues, and manual intervention is often required due to the unknown boundary between denoising and deblurring. To tackle this challenge, we propose a principal component analysis (PCA)-based THz self-supervised denoising and deblurring network (THz-SSDD). The network employs a Recorrupted-to-Recorrupted self-supervised learning strategy to capture the intrinsic features of noise by exploiting invariance under repeated corruption. PCA decomposition and reconstruction are then applied to restore images across both low and high frequencies. The performance of the THz-SSDD network was evaluated on four types of samples. Training requires only a small set of unlabeled noisy images, and testing across samples with different material properties and measurement modes demonstrates effective denoising and deblurring. Quantitative analysis further validates the network's feasibility, showing improvements in image quality while preserving the physical characteristics of the original signals.
+ oai:arXiv.org:2601.12149v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pengfei Zhu, Xavier Maldague
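The THz-SSDD abstract leans on PCA decomposition and reconstruction to separate signal from broadband noise. A toy analogue of that step, assuming a rank-1 synthetic "image" and an SVD-based reconstruction (the network's learned denoiser and the actual THz data are not modeled):

```python
import numpy as np

# Toy analogue of the PCA step: decompose a noisy image with SVD and
# reconstruct from the leading components, discarding the noise that lives
# in the trailing components. The synthetic rank-1 image, noise level, and
# rank choice are illustrative.

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 32)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

def pca_reconstruct(X, rank):
    mu = X.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu + (U[:, :rank] * S[:rank]) @ Vt[:rank]

denoised = pca_reconstruct(noisy, rank=1)        # the clean signal is rank-1
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)  # substantially smaller
```

The rank acts like the "boundary between denoising and deblurring" the abstract mentions: too low a rank blurs structure away, too high a rank readmits noise.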
+
+
+ Enhanced Diagnostic Performance via Large-Resolution Inference Optimization for Pathology Foundation Models
+ https://arxiv.org/abs/2601.12150
+ arXiv:2601.12150v1 Announce Type: new
+Abstract: Despite their prominent performance on tasks such as ROI classification and segmentation, many pathology foundation models remain constrained by a specific input size, e.g., 224 x 224, creating substantial inefficiencies when applied to whole-slide images (WSIs), which span resolutions of thousands of pixels. A naive strategy is to either enlarge inputs or downsample the WSIs. However, enlarging inputs results in prohibitive GPU memory consumption, while downsampling alters the microns-per-pixel resolution and obscures critical morphological details. To overcome these limitations, we propose a space- and time-efficient inference strategy that sparsifies attention using spatially aware neighboring blocks and filters out non-informative tokens through global attention scores. This design substantially reduces GPU memory and runtime during high-resolution WSI inference while preserving and even improving the downstream performance, enabling inference at higher resolutions under the same GPU budget. The experimental results show that our method achieves up to a 7.67% improvement in ROI classification and comparable results in segmentation.
+ oai:arXiv.org:2601.12150v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Mengxuan Hu, Zihan Guan, John Kang, Sheng Li, Zhongliang Zhou
+
+
+ Who Owns Creativity and Who Does the Work? Trade-offs in LLM-Supported Research Ideation
+ https://arxiv.org/abs/2601.12152
+ arXiv:2601.12152v1 Announce Type: new
+Abstract: LLM-based agents offer new potential to accelerate science and reshape research work. However, the quality of researcher contributions can vary significantly depending on human ability to steer agent behaviors. How can we best use these tools to augment scientific creativity without undermining aspects of contribution and ownership that drive research? To investigate this, we developed an agentic research ideation system integrating three roles -- Ideator, Writer, and Evaluator -- across three control levels -- Low, Medium, and Intensive. Our mixed-methods study with 54 researchers suggests three key findings in how LLM-based agents reshape scientific creativity: 1) perceived creativity support does not simply increase linearly with greater control; 2) human effort shifts from ideating to verifying ideas; and 3) ownership becomes a negotiated outcome between human and AI. Our findings suggest that LLM agent design should emphasize researcher empowerment, fostering a sense of ownership over strong ideas rather than reducing researchers to operating an automated AI-driven process.
+ oai:arXiv.org:2601.12152v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Houjiang Liu, Yujin Choi, Sanjana Gautam, Gabriel Jaffe, Soo Young Rieh, Matthew Lease
+
+
+ Analyzing Cancer Patients' Experiences with Embedding-based Topic Modeling and LLMs
+ https://arxiv.org/abs/2601.12154
+ arXiv:2601.12154v1 Announce Type: new
+Abstract: This study investigates the use of neural topic modeling and LLMs to uncover meaningful themes from patient storytelling data, to offer insights that could contribute to more patient-oriented healthcare practices. We analyze a collection of transcribed interviews with cancer patients (132,722 words in 13 interviews). We first evaluate BERTopic and Top2Vec for individual interview summarization by using similar preprocessing, chunking, and clustering configurations to ensure a fair comparison on Keyword Extraction. LLMs (GPT4) are then used for the next step topic labeling. Their outputs for a single interview (I0) are rated through a small-scale human evaluation, focusing on coherence, clarity, and relevance. Based on the preliminary results and evaluation, BERTopic shows stronger performance and is selected for further experimentation using three clinically oriented embedding models. We then analyzed the full interview collection with the best model setting. Results show that domain-specific embeddings improved topic precision and interpretability, with BioClinicalBERT producing the most consistent results across transcripts. The global analysis of the full dataset of 13 interviews, using the BioClinicalBERT embedding model, reveals the most dominant topics throughout all 13 interviews, namely "Coordination and Communication in Cancer Care Management" and "Patient Decision-Making in Cancer Treatment Journey". Although the interviews are machine translations from Dutch to English, and clinical professionals are not involved in this evaluation, the findings suggest that neural topic modeling, particularly BERTopic, can help provide useful feedback to clinicians from patient interviews. This pipeline could support more efficient document navigation and strengthen the role of patients' voices in healthcare workflows.
+ oai:arXiv.org:2601.12154v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Teodor-Călin Ionescu, Lifeng Han, Jan Heijdra Suasnabar, Anne Stiggelbout, Suzan Verberne
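The topic-modeling pipeline above relies on BERTopic, whose keyword step scores terms per cluster with a class-based TF-IDF (c-TF-IDF). A self-contained toy version of that scoring, assuming hand-made clusters and an invented two-topic corpus (BERTopic itself derives clusters from embeddings):

```python
from collections import Counter
from math import log

# Toy class-based TF-IDF in the spirit of BERTopic's c-TF-IDF keyword step:
# weight = tf(word, cluster) * log(1 + avg_words_per_cluster / total_freq(word)).
# The clusters and documents are invented for illustration.

clusters = {
    "care": ["nurse explained the treatment plan",
             "hospital care team communication"],
    "decision": ["choosing between chemo options",
                 "second opinion before deciding"],
}

def top_terms(clusters, k=3):
    cluster_tf = {c: Counter(w for doc in docs for w in doc.split())
                  for c, docs in clusters.items()}
    avg_words = sum(sum(tf.values()) for tf in cluster_tf.values()) / len(cluster_tf)
    freq = Counter()                       # total frequency of each word overall
    for tf in cluster_tf.values():
        freq.update(tf)
    out = {}
    for c, tf in cluster_tf.items():
        score = {w: n * log(1 + avg_words / freq[w]) for w, n in tf.items()}
        out[c] = [w for w, _ in sorted(score.items(), key=lambda x: -x[1])[:k]]
    return out

keywords = top_terms(clusters)   # top-3 characteristic words per cluster
```

Words concentrated in one cluster get high weight there, while words spread across clusters are discounted, which is what makes the extracted keywords topic-specific.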
+
+
+ Inverse Rendering for High-Genus 3D Surface Meshes from Multi-view Images with Persistent Homology Priors
+ https://arxiv.org/abs/2601.12155
+ arXiv:2601.12155v1 Announce Type: new
+Abstract: Reconstructing 3D objects from images is inherently an ill-posed problem due to ambiguities in geometry, appearance, and topology. This paper introduces collaborative inverse rendering with persistent homology priors, a novel strategy that leverages topological constraints to resolve these ambiguities. By incorporating priors that capture critical features such as tunnel loops and handle loops, our approach directly addresses the difficulty of reconstructing high-genus surfaces. The collaboration between photometric consistency from multi-view images and homology-based guidance enables recovery of complex high-genus geometry while circumventing catastrophic failures such as collapsing tunnels or losing high-genus structure. Instead of neural networks, our method relies on gradient-based optimization within a mesh-based inverse rendering framework to highlight the role of topological priors. Experimental results show that incorporating persistent homology priors leads to lower Chamfer Distance (CD) and higher Volume IoU compared to state-of-the-art mesh-based methods, demonstrating improved geometric accuracy and robustness against topological failure.
+ oai:arXiv.org:2601.12155v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xiang Gao, Xinmu Wang, Yuanpeng Liu, Yue Wang, Junqi Huang, Wei Chen, Xianfeng Gu
+
+
+ Biological Intuition on Digital Hardware: An RTL Implementation of Poisson-Encoded SNNs for Static Image Classification
+ https://arxiv.org/abs/2601.12156
+ arXiv:2601.12156v1 Announce Type: new
+Abstract: The deployment of Artificial Intelligence on edge devices (TinyML) is often constrained by the high power consumption and latency associated with traditional Artificial Neural Networks (ANNs) and their reliance on intensive Multiply-Accumulate (MAC) operations. Neuromorphic computing offers a compelling alternative by mimicking biological efficiency through event-driven processing. This paper presents the design and implementation of a cycle-accurate, hardware-oriented Spiking Neural Network (SNN) core in SystemVerilog. Unlike conventional accelerators, this design utilizes a Leaky Integrate-and-Fire (LIF) neuron model powered by fixed-point arithmetic and bit-wise primitives (shifts and additions) to eliminate the need for complex floating-point hardware. The architecture features an on-chip Poisson encoder for stochastic spike generation and a novel active pruning mechanism that dynamically disables neurons post-classification to minimize dynamic power consumption. We demonstrate the hardware's efficacy through a fully connected layer implementation targeting digit classification. Simulation results indicate that the design achieves rapid convergence (89% accuracy) within limited timesteps while maintaining a significantly reduced computational footprint compared to traditional dense architectures. This work serves as a foundational building block for scalable, energy-efficient neuromorphic hardware on FPGA and ASIC platforms.
+ oai:arXiv.org:2601.12156v1
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Debabrata Das, Yogeeth G. K., Arnav Gupta
+
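The fixed-point LIF update described above (leak via shifts, integration via adds, Poisson rate coding) can be sketched in Python. The shift amount, threshold, and weights below are illustrative stand-ins, not the paper's RTL parameters:

```python
import random

def poisson_encode(pixel, timesteps, rng):
    """Stochastic rate coding: a pixel intensity in [0, 1] becomes a
    spike train whose firing rate is proportional to the intensity."""
    return [1 if rng.random() < pixel else 0 for _ in range(timesteps)]

def lif_step(v, spikes_in, weights, leak_shift=4, threshold=64):
    """One fixed-point LIF update: leak via a right shift (no multiplier),
    integrate incoming spikes via adds. Returns (new_potential, spike)."""
    v -= v >> leak_shift                                  # leaky decay
    v += sum(w for w, s in zip(weights, spikes_in) if s)  # integrate
    if v >= threshold:
        return 0, 1                                       # fire and reset
    return v, 0
```

The shift-based leak is what lets the hardware avoid floating-point entirely: decay by a power-of-two fraction costs one shift and one subtract per timestep.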
+
+ Streaming Operator Inference for Model Reduction of Large-Scale Dynamical Systems
+ https://arxiv.org/abs/2601.12161
+ arXiv:2601.12161v1 Announce Type: new
+Abstract: Projection-based model reduction enables efficient simulation of complex dynamical systems by constructing low-dimensional surrogate models from high-dimensional data. The Operator Inference (OpInf) approach learns such reduced surrogate models through a two-step process: constructing a low-dimensional basis via Singular Value Decomposition (SVD) to compress the data, then solving a linear least-squares (LS) problem to infer reduced operators that govern the dynamics in this compressed space, all without access to the underlying code or full model operators, i.e., non-intrusively. Traditional OpInf operates as a batch learning method, where both the SVD and LS steps process all data simultaneously. This poses a barrier to deployment of the approach on large-scale applications where dataset sizes prevent the loading of all data into memory at once. Additionally, the traditional batch approach does not naturally allow model updates using new data acquired during online computation. To address these limitations, we propose Streaming OpInf, which learns reduced models from sequentially arriving data streams. Our approach employs incremental SVD for adaptive basis construction and recursive LS for streaming operator updates, eliminating the need to store complete data sets while enabling online model adaptation. The approach can flexibly combine different choices of streaming algorithms for numerical linear algebra: we systematically explore the impact of these choices both analytically and numerically to identify effective combinations for accurate reduced model learning. Numerical experiments on benchmark problems and a large-scale turbulent channel flow demonstrate that Streaming OpInf achieves accuracy comparable to batch OpInf while reducing memory requirements by over 99% and enabling dimension reductions exceeding 31,000x, resulting in orders-of-magnitude faster predictions.
+ oai:arXiv.org:2601.12161v1
+ math.NA
+ cs.LG
+ cs.NA
+ math.DS
+ physics.comp-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tomoki Koike, Prakash Mohan, Marc T. Henry de Frahan, Julie Bessac, Elizabeth Qian
+
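The recursive least-squares step at the heart of the streaming operator updates can be sketched as below. This is the textbook RLS recursion, shown for a scalar output for simplicity; it is not the paper's exact implementation:

```python
import numpy as np

def rls_update(theta, P, x, y):
    """One recursive least-squares step: refine the operator estimate
    `theta` and inverse-Gram matrix `P` with a new regressor/output
    pair (x, y), without revisiting any past data."""
    x = x.reshape(-1, 1)
    Px = P @ x
    k = Px / (1.0 + x.T @ Px)              # gain vector
    theta = theta + k @ (y - x.T @ theta)  # correct by prediction error
    P = P - k @ Px.T                       # rank-1 downdate of inverse Gram
    return theta, P
```

Because each update costs O(d^2) and stores only `theta` and `P`, the full data stream never needs to reside in memory, which is the point of the streaming formulation.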
+
+ The Language You Ask In: Language-Conditioned Ideological Divergence in LLM Analysis of Contested Political Documents
+ https://arxiv.org/abs/2601.12164
+ arXiv:2601.12164v1 Announce Type: new
+Abstract: Large language models (LLMs) are increasingly deployed as analytical tools across multilingual contexts, yet their outputs may carry systematic biases conditioned by the language of the prompt. This study presents an experimental comparison of LLM-generated political analyses of a Ukrainian civil society document, using semantically equivalent prompts in Russian and Ukrainian. Despite identical source material and parallel query structures, the resulting analyses varied substantially in rhetorical positioning, ideological orientation, and interpretive conclusions. The Russian-language output echoed narratives common in Russian state discourse, characterizing civil society actors as illegitimate elites undermining democratic mandates. The Ukrainian-language output adopted vocabulary characteristic of Western liberal-democratic political science, treating the same actors as legitimate stakeholders within democratic contestation. These findings demonstrate that prompt language alone can produce systematically different ideological orientations from identical models analyzing identical content, with significant implications for AI deployment in polarized information environments, cross-lingual research applications, and the governance of AI systems in multilingual societies.
+ oai:arXiv.org:2601.12164v1
+ cs.CY
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Oleg Smirnov
+
+
+ Learning Legged MPC with Smooth Neural Surrogates
+ https://arxiv.org/abs/2601.12169
+ arXiv:2601.12169v1 Announce Type: new
+Abstract: Deep learning and model predictive control (MPC) can play complementary roles in legged robotics. However, integrating learned models with online planning remains challenging. When dynamics are learned with neural networks, three key difficulties arise: (1) stiff transitions from contact events may be inherited from the data; (2) additional non-physical local nonsmoothness can occur; and (3) training datasets can induce non-Gaussian model errors due to rapid state changes. We address (1) and (2) by introducing the smooth neural surrogate, a neural network with tunable smoothness designed to provide informative predictions and derivatives for trajectory optimization through contact. To address (3), we train these models using a heavy-tailed likelihood that better matches the empirical error distributions observed in legged-robot dynamics. Together, these design choices substantially improve the reliability, scalability, and generalizability of learned legged MPC. Across zero-shot locomotion tasks of increasing difficulty, smooth neural surrogates with robust learning yield consistent reductions in cumulative cost on simple, well-conditioned behaviors (typically 10-50%), while providing substantially larger gains in regimes where standard neural dynamics often fail outright. In these regimes, smoothing enables reliable execution (from 0/5 to 5/5 success) and produces about 2-50x lower cumulative cost, reflecting orders-of-magnitude absolute improvements in robustness rather than incremental performance gains.
+ oai:arXiv.org:2601.12169v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Samuel A. Moore, Easop Lee, Boyuan Chen
+
+
+ Federated Learning for the Design of Parametric Insurance Indices under Heterogeneous Renewable Production Losses
+ https://arxiv.org/abs/2601.12178
+ arXiv:2601.12178v1 Announce Type: new
+Abstract: We propose a federated learning framework for the calibration of parametric insurance indices under heterogeneous renewable energy production losses. Producers locally model their losses using Tweedie generalized linear models and private data, while a common index is learned through federated optimization without sharing raw observations. The approach accommodates heterogeneity in variance and link functions and directly minimizes a global deviance objective in a distributed setting. We implement and compare FedAvg, FedProx and FedOpt, and benchmark them against an existing approximation-based aggregation method. An empirical application to solar power production in Germany shows that federated learning recovers comparable index coefficients under moderate heterogeneity, while providing a more general and scalable framework.
+ oai:arXiv.org:2601.12178v1
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fallou Niakh
+
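The FedAvg aggregation step used as a baseline above can be sketched minimally, assuming each producer shares only its locally fitted GLM coefficient vector and sample count (function and argument names are illustrative):

```python
import numpy as np

def fedavg(client_coefs, client_sizes):
    """FedAvg aggregation: weighted average of locally fitted coefficient
    vectors, with weights proportional to each client's sample count.
    Raw observations never leave the clients."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * np.asarray(c, dtype=float)
               for wi, c in zip(w, client_coefs))
```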
+
+ Tolerance Principle and Small Language Model Learning
+ https://arxiv.org/abs/2601.12179
+ arXiv:2601.12179v1 Announce Type: new
+Abstract: Modern language models like GPT-3, BERT, and LLaMA require massive training data, yet with sufficient training they reliably learn to distinguish grammatical from ungrammatical sentences. Children as young as 14 months already have the capacity to learn abstract grammar rules from very few exemplars, even in the presence of non-rule-following exceptions. Yang's (2016) Tolerance Principle defines a precise threshold for how many exceptions a rule can tolerate and still be learnable. The present study explored the minimal amount and quality of training data necessary for rules to be generalized by a transformer-based language model, to test the predictions of the Tolerance Principle. We trained BabyBERTa (Huebner et al. 2021), a transformer model optimized for small datasets, on artificial grammars. The training sets varied in size, number of unique sentence types, and proportion of rule-following versus exception exemplars. We found that, unlike human infants, BabyBERTa's learning dynamics do not align with the Tolerance Principle.
+ oai:arXiv.org:2601.12179v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Adam E. Friedman, Stevan Harnad, Rushen Shi
+
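The Tolerance Principle threshold being tested has a simple closed form: a rule over N items remains productive if it has at most N / ln N exceptions. A minimal sketch:

```python
import math

def tolerance_threshold(n):
    """Yang's Tolerance Principle: a rule over n items tolerates at most
    n / ln(n) exceptions and still remains productive (learnable)."""
    return n / math.log(n)

def rule_is_productive(n_items, n_exceptions):
    """True iff the exception count is within the tolerated threshold."""
    return n_exceptions <= tolerance_threshold(n_items)
```

For example, a rule over 100 items tolerates about 21.7 exceptions, so 21 exceptions keep it productive while 22 do not.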
+
+ VidTune: Creating Video Soundtracks with Generative Music and Contextual Thumbnails
+ https://arxiv.org/abs/2601.12180
+ arXiv:2601.12180v1 Announce Type: new
+Abstract: Music shapes the tone of videos, yet creators often struggle to find soundtracks that match their video's mood and narrative. Recent text-to-music models let creators generate music from text prompts, but our formative study (N=8) shows creators struggle to construct diverse prompts, quickly review and compare tracks, and understand their impact on the video. We present VidTune, a system that supports soundtrack creation by generating diverse music options from a creator's prompt and producing contextual thumbnails for rapid review. VidTune extracts representative video subjects to ground thumbnails in context, maps each track's valence and energy onto visual cues like color and brightness, and depicts prominent genres and instruments. Creators can refine tracks through natural language edits, which VidTune expands into new generations. In a controlled user study (N=12) and an exploratory case study (N=6), participants found VidTune helpful for efficiently reviewing and comparing music options and described the process as playful and enriching.
+ oai:arXiv.org:2601.12180v1
+ cs.HC
+ cs.MM
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mina Huh, Ailie C. Fraser, Dingzeyu Li, Mira Dontcheva, Bryan Wang
+
+
+ Negotiating Digital Identities with AI Companions: Motivations, Strategies, and Emotional Outcomes
+ https://arxiv.org/abs/2601.12181
+ arXiv:2601.12181v1 Announce Type: new
+Abstract: AI companions enable deep emotional relationships by engaging a user's sense of identity, but they also pose risks like unhealthy emotional dependence. Mitigating these risks requires first understanding the underlying process of identity construction and negotiation with AI companions. Focusing on Character.AI (C.AI), a popular AI companion, we conducted an LLM-assisted thematic analysis of 22,374 online discussions on its subreddit. Using Identity Negotiation Theory as an analytical lens, we identified a three-stage process: 1) five user motivations; 2) an identity negotiation process involving three communication expectations and four identity co-construction strategies; and 3) three emotional outcomes. Our findings surface the identity work users perform as both performers and directors to co-construct identities in negotiation with C.AI. This process takes place within a socio-emotional sandbox where users can experiment with social roles and express emotions with non-human partners. Finally, we offer design implications for emotionally supporting users while mitigating the risks.
+ oai:arXiv.org:2601.12181v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renkai Ma, Shuo Niu, Lingyao Li, Alex Hirth, Ava Brehm, Rowajana Behterin Barbie
+
+
+ Aletheia: What Makes RLVR For Code Verifiers Tick?
+ https://arxiv.org/abs/2601.12186
+ arXiv:2601.12186v1 Announce Type: new
+Abstract: Multi-domain thinking verifiers trained via Reinforcement Learning from Verifiable Rewards (RLVR) are a prominent fixture of the Large Language Model (LLM) post-training pipeline, owing to their ability to robustly rate and rerank model outputs. However, the adoption of such verifiers towards code generation has been comparatively sparse, with execution feedback constituting the dominant signal. Nonetheless, code verifiers remain valuable toward judging model outputs in scenarios where execution feedback is hard to obtain and are a potentially powerful addition to the code generation post-training toolbox. To this end, we create and open-source Aletheia, a controlled testbed that enables execution-grounded evaluation of code verifiers' robustness across disparate policy models and covariate shifts. We examine components of the RLVR-based verifier training recipe widely credited for its success: (1) intermediate thinking traces, (2) learning from negative samples, and (3) on-policy training. While experiments show the optimality of RLVR, we uncover important opportunities to simplify the recipe. Particularly, despite code verification exhibiting positive training- and inference-time scaling, on-policy learning stands out as the key component at small verifier sizes, and thinking-based training emerges as the most important component at larger scales.
+ oai:arXiv.org:2601.12186v1
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Vatsal Venkatkrishna, Indraneil Paul, Iryna Gurevych
+
+
+ VIRTUE: Versatile Video Retrieval Through Unified Embeddings
+ https://arxiv.org/abs/2601.12193
+ arXiv:2601.12193v1 Announce Type: new
+Abstract: Modern video retrieval systems are expected to handle diverse tasks ranging from corpus-level retrieval and fine-grained moment localization to flexible multimodal querying. Specialized architectures achieve strong retrieval performance by training modality-specific encoders on massive datasets, but they lack the ability to process composed multimodal queries. In contrast, multimodal LLM (MLLM)-based methods support rich multimodal search but their retrieval performance remains well below that of specialized systems. We present VIRTUE, an MLLM-based versatile video retrieval framework that integrates corpus and moment-level retrieval capabilities while accommodating composed multimodal queries within a single architecture. We use contrastive alignment of visual and textual embeddings generated using a shared MLLM backbone to facilitate efficient embedding-based candidate search. Our embedding model, trained efficiently using low-rank adaptation (LoRA) on 700K paired visual-text data samples, surpasses other MLLM-based methods on zero-shot video retrieval tasks. Additionally, we demonstrate that the same model can be adapted without further training to achieve competitive results on zero-shot moment retrieval, and state-of-the-art results for zero-shot composed video retrieval. With additional training for reranking candidates identified in the embedding-based search, our model substantially outperforms existing MLLM-based retrieval systems and achieves retrieval performance comparable to state-of-the-art specialized models which are trained on orders of magnitude larger data.
+ oai:arXiv.org:2601.12193v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shaunak Halbe, Bhagyashree Puranik, Jayakrishnan Unnikrishnan, Kushan Thakkar, Vimal Bhat, Toufiq Parag
+
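Contrastive alignment of paired embeddings is typically an InfoNCE-style objective; a NumPy sketch under the assumption of in-batch negatives and a symmetric loss (not necessarily VIRTUE's exact formulation):

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings: the i-th
    video matches the i-th text; all other in-batch pairs are negatives."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature
    idx = np.arange(len(v))

    def xent(lg):
        # cross-entropy of each row's softmax against the diagonal target
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this pulls matching video/text pairs together and pushes mismatched in-batch pairs apart, which is what makes a single embedding space usable for nearest-neighbor candidate search.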
+
+ Coherent Comparison as Information Cost: A Cost-First Ledger Framework for Discrete Dynamics
+ https://arxiv.org/abs/2601.12194
+ arXiv:2601.12194v1 Announce Type: new
+Abstract: We develop an information-theoretic framework for discrete dynamics grounded in a comparison-cost functional on ratios. Given two quantities compared via their ratio \(x=a/b\), we assign a cost \(F(x)\) measuring deviation from equilibrium (\(x=1\)). Requiring coherent composition under multiplicative chaining imposes a d'Alembert functional equation; together with normalization (\(F(1)=0\)) and quadratic calibration at unity, this yields a unique reciprocal cost functional (proved in a companion paper): \[ J(x) = \tfrac{1}{2}\bigl(x + x^{-1}\bigr) - 1. \] This cost exhibits reciprocity \(J(x)=J(x^{-1})\), vanishes only at \(x=1\), and diverges at boundary regimes \(x\to 0^+\) and \(x\to\infty\), excluding "nothingness" configurations. Using \(J\) as input, we introduce a discrete ledger as a minimal lossless encoding of recognition events on directed graphs. Under deterministic update semantics and minimality (no intra-tick ordering metadata), we derive atomic ticks (at most one event per tick). Explicit structural assumptions (conservation, no sources/sinks, pairwise locality, quantization in \(\delta\mathbb{Z}\)) force balanced double-entry postings and discrete ledger units. To obtain scalar potentials on graphs with cycles while retaining single-edge impulses per tick, we impose time-aggregated cycle closure (no-arbitrage/clearing over finite windows). Under this hypothesis, cycle closure is equivalent to path-independence, and the cleared cumulative flow admits a unique scalar potential on each connected component (up to additive constant), via a discrete Poincaré lemma. On hypercube graphs \(Q_d\), atomicity imposes a \(2^d\)-tick minimal period, with explicit Gray-code realization at \(d=3\). The framework connects ratio-based divergences, conservative graph flows, and discrete potential theory through a coherence-forced cost structure.
+ oai:arXiv.org:2601.12194v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sebastian Pardo-Guerra, Megan Simons, Anil Thapa, Jonathan Washburn
+
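The stated properties of the cost functional J can be checked numerically; a minimal sketch using the formula from the abstract:

```python
def J(x):
    """Reciprocal comparison cost J(x) = (x + 1/x)/2 - 1: zero only at
    equilibrium x = 1, symmetric under x -> 1/x, and divergent as
    x -> 0+ or x -> infinity."""
    return 0.5 * (x + 1.0 / x) - 1.0
```

Near equilibrium, the quadratic calibration is visible directly: expanding gives J(1 + eps) ≈ eps²/2.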
+
+ Understanding Partial Reachability in the Internet Core
+ https://arxiv.org/abs/2601.12196
+ arXiv:2601.12196v1 Announce Type: new
+Abstract: Routing strives to connect all of the Internet, but competing forces pull it apart: political pressure threatens routing fragmentation; architectural changes such as private clouds, carrier-grade NAT, and firewalls make connectivity conditional; and commercial disputes create partial reachability for days or years. This paper suggests *persistent, partial reachability is fundamental to the Internet* and an underexplored problem. We first *derive a conceptual definition of the Internet core* based on connectivity, not authority. We identify *peninsulas*: persistent, partial connectivity; and *islands*: when computers are partitioned from the Internet core. Second, we develop algorithms to observe each across the Internet, and apply them to two existing measurement systems: Trinocular, where 6 locations observe 5M networks frequently, and RIPE Atlas, where 13k locations scan the DNS roots frequently. Cross-validation shows our findings are stable over *three years of data*, and consistent with as few as 3 geographically-distributed observers. We validate peninsulas and islands against CAIDA Ark, showing good recall (0.94) and bounding precision between 0.42 and 0.82. Finally, our work has broad practical impact: we show that *peninsulas are more common than Internet outages*. Factoring out peninsulas and islands as noise can *improve existing measurement systems*; their "noise" is $5\times$ to $9.7\times$ larger than the operational events in RIPE's DNSmon. We show that most peninsula events are routing transients (45%), but most peninsula-time (90%) is due to a few (7%) long-lived events. Our work helps inform Internet policy and governance, with our neutral definition showing no single country or organization can unilaterally control the Internet core.
+ oai:arXiv.org:2601.12196v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Guillermo Baltra, Tarang Saluja, Yuri Pradkin, John Heidemann
+
+
+ CTC-DID: CTC-Based Arabic dialect identification for streaming applications
+ https://arxiv.org/abs/2601.12199
+ arXiv:2601.12199v1 Announce Type: new
+Abstract: This paper proposes a Dialect Identification (DID) approach inspired by the Connectionist Temporal Classification (CTC) loss function as used in Automatic Speech Recognition (ASR). CTC-DID frames the dialect identification task as a limited-vocabulary ASR system, where dialect tags are treated as a sequence of labels for a given utterance. For training, the repetition of dialect tags in transcriptions is estimated either using a proposed Language-Agnostic Heuristic (LAH) approach or a pre-trained ASR model. The method is evaluated on the low-resource Arabic Dialect Identification (ADI) task, with experimental results demonstrating that an SSL-based CTC-DID model, trained on a limited dataset, outperforms both fine-tuned Whisper and ECAPA-TDNN models. Notably, CTC-DID also surpasses these models in zero-shot evaluation on the Casablanca dataset. The proposed approach is found to be more robust to shorter utterances and is shown to be easily adaptable for streaming, real-time applications, with minimal performance degradation.
+ oai:arXiv.org:2601.12199v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Muhammad Umar Farooq, Oscar Saz
+
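Framing DID as CTC-based ASR means decoding reduces to the standard CTC collapse: merge consecutive repeats, then drop blanks, leaving the per-utterance dialect tags. A minimal sketch (the tag names are hypothetical, not the paper's label set):

```python
def ctc_collapse(frame_labels, blank="<b>"):
    """Greedy CTC decoding: merge consecutive repeated labels, then drop
    blank symbols. For CTC-DID, the surviving symbols are the dialect
    tags predicted for the utterance."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out
```

Because the collapse runs left to right over frames, it applies unchanged to a streaming setting: tags can be emitted as soon as the corresponding frames arrive.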
+
+ Computing Maximal Repeating Subsequences in a String
+ https://arxiv.org/abs/2601.12200
+ arXiv:2601.12200v1 Announce Type: new
+Abstract: In this paper we initiate the study of computing a maximal (not necessarily maximum) repeating pattern in a single input string, where the corresponding problems have been studied (e.g., a maximal common subsequence) only in two or more input strings by Hirota and Sakai starting in 2019. Given an input string $S$ of length $n$, we can compute a maximal square subsequence of $S$ in $O(n\log n)$ time, greatly improving the $O(n^2)$ bound for computing the longest square subsequence of $S$. For a maximal $k$-repeating subsequence, our bound is $O(f(k)n\log n)$, where $f(k)$ is a computable function such that $f(k) < k\cdot 4^k$. This greatly improves the $O(n^{2k-1})$ bound for computing a longest $k$-repeating subsequence of $S$, for $k\geq 3$. Both results hold for the constrained case, i.e., when the solution must contain a subsequence $X$ of $S$, though with higher running times.
+ oai:arXiv.org:2601.12200v1
+ cs.DS
+ cs.FL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mingyang Gong, Adiesha Liyanage, Braeden Sopp, Binhai Zhu
+
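A square subsequence is a string of the form $ww$ occurring as a (not necessarily contiguous) subsequence of $S$. A minimal checker for that definition, useful for testing candidates; this brute-force check is not the paper's $O(n\log n)$ algorithm:

```python
def is_subsequence(pattern, s):
    """True iff `pattern` occurs in `s` as a subsequence (greedy scan)."""
    it = iter(s)
    return all(ch in it for ch in pattern)  # `in` advances the iterator

def is_square_subsequence(w, s):
    """True iff the square w + w occurs in s as a subsequence, i.e. w is
    one half of a square subsequence of s."""
    return is_subsequence(w + w, s)
```

A subsequence `w` is then *maximal* when no single-character extension of `w` still satisfies this predicate, which is a strictly weaker requirement than being the longest such `w`.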
+
+ Embryonic Exposure to VPA Influences Chick Vocalisations: A Computational Study
+ https://arxiv.org/abs/2601.12203
+ arXiv:2601.12203v1 Announce Type: new
+Abstract: In young animals like poultry chicks (Gallus gallus), vocalisations convey information about affective and behavioural states. Traditional approaches to vocalisation analysis, relying on manual annotation and predefined categories, introduce biases, limit scalability, and fail to capture the full complexity of vocal repertoires. We introduce a computational framework for the automated detection, acoustic feature extraction, and unsupervised learning of chick vocalisations. Applying this framework to a dataset of newly hatched chicks, we identified two primary vocal clusters. We then tested our computational framework on an independent dataset of chicks exposed during embryonic development to vehicle or Valproic Acid (VPA), a compound that disrupts neural development and is linked to autistic-like symptoms. Clustering analysis on the experimental dataset confirmed two primary vocal clusters and revealed systematic differences between groups. VPA-exposed chicks showed an altered repertoire, with a relative increase in softer calls. VPA differentially affected call clusters, modulating temporal, frequency, and energy domain features. Overall, VPA-exposed chicks produced vocalisations with shorter duration, reduced pitch variability, and modified energy profiles, with the strongest alterations observed in louder calls. This study provides a computational framework for analysing animal vocalisations, advancing knowledge of early-life communication in typical and atypical vocal development.
+ oai:arXiv.org:2601.12203v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Antonella M. C. Torrisi, In\^es Nolasco, Paola Sgad\`o, Elisabetta Versace, Emmanouil Benetos
+
+
+ Do Neural Codecs Generalize? A Controlled Study Across Unseen Languages and Non-Speech Tasks
+ https://arxiv.org/abs/2601.12205
+ arXiv:2601.12205v1 Announce Type: new
+Abstract: This paper investigates three crucial yet underexplored aspects of the generalization capabilities of neural audio codecs (NACs): (i) whether NACs can generalize to unseen languages during pre-training, (ii) whether speech-only pre-trained NACs can effectively generalize to non-speech applications such as environmental sounds, music, and animal vocalizations, and (iii) whether incorporating non-speech data during pre-training can improve performance on both speech and non-speech tasks. Existing studies typically rely on off-the-shelf NACs for comparison, which limits insight due to variations in implementation. In this work, we train NACs from scratch using strictly controlled configurations and carefully curated pre-training data to enable fair comparisons. We conduct a comprehensive evaluation of NAC performance on both signal reconstruction quality and downstream applications using 11 metrics. Our results show that NACs can generalize to unseen languages during pre-training, speech-only pre-trained NACs exhibit degraded performance on non-speech tasks, and incorporating non-speech data during pre-training improves performance on non-speech tasks while maintaining comparable performance on speech tasks.
+ oai:arXiv.org:2601.12205v1
+ cs.SD
+ cs.AI
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shih-Heng Wang, Jiatong Shi, Jinchuan Tian, Haibin Wu, Shinji Watanabe
+
+
+ CoReflect: Conversational Evaluation via Co-Evolutionary Simulation and Reflective Rubric Refinement
+ https://arxiv.org/abs/2601.12208
+ arXiv:2601.12208v1 Announce Type: new
+Abstract: Evaluating conversational systems in multi-turn settings remains a fundamental challenge. Conventional pipelines typically rely on manually defined rubrics and fixed conversational context, a static approach that limits coverage and fails to capture the diverse, emergent behaviors of dialogue models. To address this, we introduce CoReflect (Conversational Evaluation via Co-Evolutionary Simulation and Reflective Rubric Refinement), which unifies dialogue simulation and evaluation into an adaptive, iterative process. CoReflect employs a conversation planner that generates structured templates to guide a user simulator through diverse, goal-directed dialogues. Subsequently, a reflective analyzer processes these dialogues to identify systematic behavioral patterns and automatically refine the evaluation rubrics. Crucially, the insights from the conversation analysis are fed back into the planner to update conversation templates for subsequent iterations. This co-evolution loop ensures that the complexity of test cases and the diagnostic precision of rubrics improve in tandem. By minimizing human intervention, CoReflect provides a scalable and self-refining methodology that allows evaluation protocols to adapt alongside the rapidly advancing capabilities of dialogue models.
+ oai:arXiv.org:2601.12208v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yunzhe Li, Richie Yueqi Feng, Tianxin Wei, Chin-Chia Hsu
+
+
+ DaggerFFT: A Distributed FFT Framework Using Task Scheduling in Julia
+ https://arxiv.org/abs/2601.12209
+ arXiv:2601.12209v1 Announce Type: new
+Abstract: The Fast Fourier Transform (FFT) is a fundamental numerical technique with widespread application in a range of scientific problems. As scientific simulations attempt to exploit exascale systems, there has been a growing demand for distributed FFT algorithms that can effectively utilize modern heterogeneous high-performance computing (HPC) systems. Conventional FFT algorithms commonly encounter performance bottlenecks, especially when run on heterogeneous platforms. Most distributed FFT approaches rely on static task distribution and require synchronization barriers, limiting scalability and impacting overall resource utilization. In this paper we present DaggerFFT, a distributed FFT framework, developed in Julia, that treats highly parallel FFT computations as a dynamically scheduled task graph. FFT operations are expressed as DTasks operating on pencil- or slab-partitioned DArrays; each FFT stage owns its own DArray, and the runtime assigns DTasks across devices using Dagger's dynamic, work-stealing scheduler. We demonstrate how DaggerFFT's dynamic scheduler can outperform state-of-the-art distributed FFT libraries on both CPU and GPU backends, achieving up to a 2.6x speedup on CPU clusters and up to a 1.35x speedup on GPU clusters. We have integrated DaggerFFT into Oceananigans.jl, a geophysical fluid dynamics framework, demonstrating that high-level, task-based runtimes can deliver both superior performance and modularity in large-scale, real-world simulations.
+ oai:arXiv.org:2601.12209v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sana Taghipour Anvari, Julian Samaroo, Matin Raayai Ardakani, David Kaeli
+
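The row-column decomposition underlying slab- and pencil-partitioned distributed FFTs can be illustrated in serial NumPy, with transposes standing in for the all-to-all data exchanges (this is the general pattern, not DaggerFFT's Julia implementation):

```python
import numpy as np

def fft2_slab(a):
    """2-D FFT via two 1-D stages, the decomposition used by slab/pencil
    distributed FFTs: transform along local rows, transpose (the
    all-to-all redistribution step), transform along rows again, and
    transpose back."""
    stage1 = np.fft.fft(a, axis=1)   # each rank transforms its local rows
    redistributed = stage1.T         # stand-in for the all-to-all exchange
    stage2 = np.fft.fft(redistributed, axis=1)
    return stage2.T
```

In a distributed setting the two 1-D stages become independent per-slab tasks, which is exactly the structure a dynamic task scheduler can exploit.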
+
+ Solvability of The Output Corridor Control Problem by Pulse-Modulated Feedback
+ https://arxiv.org/abs/2601.12210
+ arXiv:2601.12210v1 Announce Type: new
+Abstract: The problem of maintaining the output of a positive time-invariant single-input single-output system within a predefined corridor of values is treated. For third-order plants possessing a certain structure, it is proven that the problem is always solvable under stationary conditions by means of pulse-modulated feedback. The obtained result is utilized to assess the feasibility of patient-specific pharmacokinetic-pharmacodynamic models with respect to patient safety. A population of Wiener models capturing the dynamics of a neuromuscular blockade agent is studied to investigate whether or not they can be driven into the desired output corridor by clinically acceptable sequential drug doses (boluses). It is demonstrated that low values of a parameter in the nonlinear pharmacodynamic part lie behind the detected model infeasibility.
+ oai:arXiv.org:2601.12210v1
+ eess.SY
+ cs.SY
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Alexander Medvedev, Anton V. Proskurnikov
+
+
+ Speculative Sampling with Reinforcement Learning
+ https://arxiv.org/abs/2601.12212
+ arXiv:2601.12212v1 Announce Type: new
+Abstract: Inference-time latency has remained an open challenge for real-world applications of large language models (LLMs). State-of-the-art (SOTA) speculative sampling (SpS) methods for LLMs, like EAGLE-3, use tree-based drafting to explore multiple candidate continuations in parallel. However, the hyperparameters controlling the tree structure are static, which limits flexibility and efficiency across diverse contexts and domains. We introduce Reinforcement learning for Speculative Sampling (Re-SpS), the first reinforcement learning (RL)-based framework for draft tree hyperparameter optimization. Re-SpS dynamically adjusts draft tree hyperparameters in real time, learning context-aware policies that maximize generation speed by balancing speculative aggression with computational overhead. It leverages efficient state representations from target model hidden states and introduces multi-step action persistence for better context modeling. Evaluation results across five diverse benchmarks demonstrate consistent improvements over the SOTA method EAGLE-3, achieving up to 5.45$\times$ speedup over the backbone LLM and up to 1.12$\times$ speedup compared to EAGLE-3, with no loss in output fidelity.
+ oai:arXiv.org:2601.12212v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chenan Wang, Daniel H. Shi, Haipeng Chen
+
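As background for the tree-drafting methods the Re-SpS abstract builds on, the vanilla token-level acceptance rule of speculative sampling can be sketched as follows. This is a minimal illustration of the standard primitive, not the Re-SpS algorithm itself; all names are ours.

```python
import numpy as np

def speculative_accept(p_target, p_draft, draft_token, rng):
    """One accept/reject step of standard token-level speculative sampling.

    p_target, p_draft: next-token distributions of the target and draft
    models (1-D arrays summing to 1). Accepting the draft token with
    probability min(1, p/q) and, on rejection, resampling from the
    normalized residual max(p_target - p_draft, 0) makes the emitted
    token exactly distributed according to p_target.
    """
    p, q = p_target[draft_token], p_draft[draft_token]
    if rng.random() < min(1.0, p / q):
        return draft_token
    residual = np.maximum(p_target - p_draft, 0.0)
    return rng.choice(len(p_target), p=residual / residual.sum())
```

Because the accept/reject correction is exact, tree- or chain-based drafting schemes built on it preserve output fidelity, which is why the abstract can report speedups "with no loss in output fidelity."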
+
+ One-Sided Matrix Completion from Ultra-Sparse Samples
+ https://arxiv.org/abs/2601.12213
+ arXiv:2601.12213v1 Announce Type: new
+Abstract: Matrix completion is a classical problem that has received recurring interest across a wide range of fields. In this paper, we revisit this problem in an ultra-sparse sampling regime, where each entry of an unknown $n\times d$ matrix $M$ (with $n \ge d$) is observed independently with probability $p = C / d$, for a fixed integer $C \ge 2$. This setting is motivated by applications involving large, sparse panel datasets, where the number of rows far exceeds the number of columns. When each row contains only $C$ entries -- fewer than the rank of $M$ -- accurate imputation of $M$ is impossible. Instead, we estimate the row span of $M$ or the averaged second-moment matrix $T = M^{\top} M / n$.
+ The empirical second-moment matrix computed from observed entries exhibits non-random and sparse missingness. We propose an unbiased estimator that normalizes each nonzero entry of the second moment by its observed frequency, followed by gradient descent to impute the missing entries of $T$. The normalization divides a weighted sum of $n$ binomial random variables by the total number of ones. We show that the estimator is unbiased for any $p$ and enjoys low variance. When the row vectors of $M$ are drawn uniformly from a rank-$r$ factor model satisfying an incoherence condition, we prove that if $n \ge O({d r^5 \epsilon^{-2} C^{-2} \log d})$, any local minimum of the gradient-descent objective is approximately global and recovers $T$ with error at most $\epsilon^2$.
+ Experiments on both synthetic and real-world data validate our approach. On three MovieLens datasets, our algorithm reduces bias by $88\%$ relative to baseline estimators. We also empirically validate the linear sampling complexity of $n$ relative to $d$ on synthetic data. On an Amazon reviews dataset with sparsity $10^{-7}$, our method reduces the recovery error of $T$ by $59\%$ and $M$ by $38\%$ compared to baseline methods.
+ oai:arXiv.org:2601.12213v1
+ cs.LG
+ math.OC
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Trans. Mach. Learn. Res. 2026
+ Hongyang R. Zhang, Zhenshuo Zhang, Huy L. Nguyen, Guanghui Lan
+
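The frequency-normalized second-moment estimator described in the abstract above can be sketched in a few lines of NumPy. This is a toy illustration under a simple factor-model setup; the variable names and demo parameters are ours, not the paper's.

```python
import numpy as np

def second_moment_estimate(X, B):
    """Frequency-normalized estimate of T = M^T M / n from masked rows.

    X : (n, d) data with unobserved entries set to 0
    B : (n, d) 0/1 mask of observed entries
    Dividing each entry of the raw co-occurrence sum X^T X by the number
    of rows in which *both* columns are observed removes the p^2
    (off-diagonal) / p (diagonal) downward bias of X^T X / n.
    """
    co_counts = B.T @ B                        # co-observation counts per column pair
    sums = X.T @ X                             # raw second-moment sums
    return np.divide(sums, co_counts,
                     out=np.zeros_like(sums, dtype=float),
                     where=co_counts > 0)      # mean over co-observed rows

# toy demo: rank-3 factor model, each entry kept with probability p
rng = np.random.default_rng(0)
n, d, r, p = 20000, 10, 3, 0.5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))
B = (rng.random((n, d)) < p).astype(float)
T_hat = second_moment_estimate(M * B, B)
T = M.T @ M / n                                # quantity being estimated
```

When every entry is observed, the estimator reduces exactly to $M^{\top} M / n$; under sparse sampling, each entry is an average over its co-observed rows, matching the unbiasedness claim in the abstract.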
+
+ Wavelet-Driven Masked Multiscale Reconstruction for PPG Foundation Models
+ https://arxiv.org/abs/2601.12215
+ arXiv:2601.12215v1 Announce Type: new
+Abstract: Wearable foundation models have the potential to transform digital health by learning transferable representations from large-scale biosignals collected in everyday settings. While recent progress has been made in large-scale pretraining, most approaches overlook the spectral structure of photoplethysmography (PPG) signals, wherein physiological rhythms unfold across multiple frequency bands. Motivated by the insight that many downstream health-related tasks depend on multi-resolution features spanning fine-grained waveform morphology to global rhythmic dynamics, we introduce Masked Multiscale Reconstruction (MMR) for PPG representation learning - a self-supervised pretraining framework that explicitly learns from hierarchical time-frequency scales of PPG data. The pretraining task is designed to reconstruct randomly masked-out coefficients obtained from a wavelet-based multiresolution decomposition of PPG signals, forcing the transformer encoder to integrate information across temporal and spectral scales. We pretrain our model with MMR using ~17 million unlabeled 10-second PPG segments from ~32,000 smartwatch users. On 17 of 19 diverse health-related tasks, MMR trained on large-scale wearable PPG data improves over or matches state-of-the-art open-source PPG foundation models, time-series foundation models, and other self-supervised baselines. Extensive analysis of our learned embeddings and systematic ablations underscore the value of wavelet-based representations, showing that they capture robust and physiologically-grounded features. Together, these results highlight the potential of MMR as a step toward generalizable PPG foundation models.
+ oai:arXiv.org:2601.12215v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Megha Thukral, Cyrus Tanade, Simon A. Lee, Juhyeon Lee, Hao Zhou, Keum San Chun, Migyeong Gwak, Viswam Nathan, Md Mahbubur Rahman, Li Zhu, Mehrab Bin Morshed, Subramaniam Venkatraman, Sharanya Arcot Desai
+
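The pretext-task construction in the MMR abstract (decompose a signal into multiresolution wavelet coefficients, then mask a random subset for the encoder to reconstruct) can be sketched with a hand-rolled orthonormal Haar transform. This is an assumed minimal setup, not the paper's pipeline; the actual model uses a transformer encoder and real PPG data.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    pairs = x.reshape(-1, 2)
    return ((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2),
            (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))

def multires_coeffs(x, levels):
    """Multiresolution decomposition: detail bands (fine to coarse) plus
    the final approximation band."""
    bands, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        bands.append(d)
    bands.append(a)
    return bands

def mask_coeffs(bands, mask_frac, rng):
    """Zero out a random subset of coefficients in every band.
    An MMR-style encoder would be trained to reconstruct the hidden ones."""
    masks = [rng.random(b.shape) < mask_frac for b in bands]
    masked = [np.where(m, 0.0, b) for b, m in zip(bands, masks)]
    return masked, masks

# toy 10-second "PPG" segment at 25.6 Hz (256 samples)
rng = np.random.default_rng(1)
ppg = np.sin(np.linspace(0.0, 40 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
bands = multires_coeffs(ppg, levels=3)
masked, masks = mask_coeffs(bands, mask_frac=0.3, rng=rng)
```

Because the Haar transform here is orthonormal, total signal energy is preserved across bands, so masking coefficients at different levels hides information at genuinely different temporal/spectral scales.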
+
+ Canonicalization of Batched Einstein Summations for Tuning Retrieval
+ https://arxiv.org/abs/2601.12220
+ arXiv:2601.12220v1 Announce Type: new
+Abstract: We present an algorithm for normalizing \emph{Batched Einstein Summation}
+ expressions by mapping mathematically equivalent formulations to a unique
+ normal form. Batches of einsums with the same Einstein notation that exhibit
+ substantial data reuse appear frequently in finite element methods (FEM),
+ numerical linear algebra, and computational chemistry. To effectively exploit
+ this temporal locality for high performance, we consider groups of einsums in
+ batched form.
+ Representations of equivalent batched einsums may differ due to index
+ renaming, permutations within the batch, and the commutativity and
+ associativity of multiplication. The lack of a canonical
+ representation hinders the reuse of optimization and tuning knowledge in
+ software systems. To this end, we develop a novel encoding of batched einsums
+ as colored graphs and apply graph canonicalization to derive a normal form.
+ In addition to the canonicalization algorithm, we propose a representation of
+ einsums using functional array operands and provide a strategy to transfer
+ transformations operating on the normal form to \emph{functional batched
+ einsums} that exhibit the same normal form, which is crucial for fusing
+ surrounding computations for memory-bound einsums. We evaluate our approach
+ against JAX,
+ and observe a geomean speedup of $4.7\times$ for einsums from the TCCG
+ benchmark suite and an FEM solver.
+ oai:arXiv.org:2601.12220v1
+ cs.MS
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kaushik Kulkarni, Andreas Kl\"ockner
+
+
+ Song Aesthetics Evaluation with Multi-Stem Attention and Hierarchical Uncertainty Modeling
+ https://arxiv.org/abs/2601.12222
+ arXiv:2601.12222v1 Announce Type: new
+Abstract: Music generative artificial intelligence (AI) is rapidly expanding music content, necessitating automated song aesthetics evaluation. However, existing studies largely focus on speech, audio, or singing quality, leaving song aesthetics underexplored. Moreover, conventional approaches often predict a precise Mean Opinion Score (MOS) value directly, which struggles to capture the nuances of human perception in song aesthetics evaluation. This paper proposes a song-oriented aesthetics evaluation framework, featuring two novel modules: 1) Multi-Stem Attention Fusion (MSAF) builds bidirectional cross-attention between mixture-vocal and mixture-accompaniment pairs, fusing them to capture complex musical features; 2) Hierarchical Granularity-Aware Interval Aggregation (HiGIA) learns multi-granularity score probability distributions, aggregates them into a score interval, and applies a regression within the interval to produce the final score. We evaluate our framework on two datasets of full-length songs, the SongEval dataset (AI-generated) and an internal aesthetics dataset (human-created), and compare it with two state-of-the-art (SOTA) models. Results show that the proposed method achieves stronger performance for multi-dimensional song aesthetics evaluation.
+ oai:arXiv.org:2601.12222v1
+ cs.SD
+ cs.MM
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yishan Lv, Jing Luo, Boyuan Ju, Yang Zhang, Xinda Wu, Bo Yuan, Xinyu Yang
+
+
+ Where It Moves, It Matters: Referring Surgical Instrument Segmentation via Motion
+ https://arxiv.org/abs/2601.12224
+ arXiv:2601.12224v1 Announce Type: new
+Abstract: Enabling intuitive, language-driven interaction with surgical scenes is a critical step toward intelligent operating rooms and autonomous surgical robotic assistance. However, the task of referring segmentation, localizing surgical instruments based on natural language descriptions, remains underexplored in surgical videos, with existing approaches struggling to generalize due to reliance on static visual cues and predefined instrument names. In this work, we introduce SurgRef, a novel motion-guided framework that grounds free-form language expressions in instrument motion, capturing how tools move and interact across time, rather than what they look like. This allows models to understand and segment instruments even under occlusion, ambiguity, or unfamiliar terminology. To train and evaluate SurgRef, we present Ref-IMotion, a diverse, multi-institutional video dataset with dense spatiotemporal masks and rich motion-centric expressions. SurgRef achieves state-of-the-art accuracy and generalization across surgical procedures, setting a new benchmark for robust, language-driven surgical video segmentation.
+ oai:arXiv.org:2601.12224v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ AAAI 2026
+ Meng Wei, Kun Yuan, Shi Li, Yue Zhou, Long Bai, Nassir Navab, Hongliang Ren, Hong Joo Lee, Tom Vercauteren, Nicolas Padoy
+
+
+ Learning Longitudinal Health Representations from EHR and Wearable Data
+ https://arxiv.org/abs/2601.12227
+ arXiv:2601.12227v1 Announce Type: new
+Abstract: Foundation models trained on electronic health records show strong performance on many clinical prediction tasks but are limited by sparse and irregular documentation. Wearable devices provide dense, continuous physiological signals but lack semantic grounding. Existing methods usually model these data sources separately or combine them through late fusion. We propose a multimodal foundation model that jointly represents electronic health records and wearable data as a continuous-time latent process. The model uses modality-specific encoders and a shared temporal backbone pretrained with self-supervised and cross-modal objectives. This design produces representations that are temporally coherent and clinically grounded. Across forecasting, physiological, and risk-modeling tasks, the model outperforms strong electronic-health-record-only and wearable-only baselines, especially at long horizons and under missing data. These results show that joint electronic health record and wearable pretraining yields more faithful representations of longitudinal health.
+ oai:arXiv.org:2601.12227v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuanyun Zhang, Han Zhou, Li Feng, Yilin Hong, Shi Li
+
+
+ Classical-Quantum Channel Resolvability Using Matrix Multiplicative Weight Update Algorithm
+ https://arxiv.org/abs/2601.12230
+ arXiv:2601.12230v1 Announce Type: new
+Abstract: We study classical-quantum (C-Q) channel resolvability. In the literature, C-Q channel resolvability has been proved only by random coding. In our previous study, we proved classical channel resolvability by deterministic coding, using the multiplicative weight update algorithm. We extend this approach to C-Q channels and prove C-Q channel resolvability by deterministic coding, using the matrix multiplicative weight update algorithm. This is the first approach to C-Q channel resolvability using deterministic coding.
+ oai:arXiv.org:2601.12230v1
+ cs.IT
+ math.IT
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Koki Takahashi, Shun Watanabe
+
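The generic matrix multiplicative weight update iteration named in the abstract above maintains a density matrix proportional to the exponential of the accumulated (negated, scaled) losses. The sketch below shows that bare primitive only; the paper's actual use of it in resolvability proofs is more involved, and all names here are ours.

```python
import numpy as np

def expm_sym(A):
    """Matrix exponential of a symmetric/Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.conj().T

def mmwu(losses, eta):
    """Matrix multiplicative weight update over density matrices.

    losses : sequence of Hermitian loss matrices (eigenvalues in [0, 1])
    Yields rho_t proportional to exp(-eta * sum of losses seen so far);
    the first state rho_0 is the maximally mixed state I/d.
    """
    cum = np.zeros_like(losses[0], dtype=float)
    for L in losses:
        E = expm_sym(-eta * cum)
        yield E / np.trace(E)   # normalize to unit trace
        cum = cum + L

# toy demo: repeatedly penalize the first basis state, so weight
# shifts toward the second one
states = list(mmwu([np.diag([1.0, 0.0])] * 3, eta=0.5))
```

The update is the matrix analogue of exponential weights: directions (eigenspaces) that accumulate loss are exponentially down-weighted while the trace stays normalized to one.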
+
+ Wavelet-Aware Anomaly Detection in Multi-Channel User Logs via Deviation Modulation and Resolution-Adaptive Attention
+ https://arxiv.org/abs/2601.12231
+ arXiv:2601.12231v1 Announce Type: new
+Abstract: Insider threat detection is a key challenge in enterprise security, relying on user activity logs that capture rich and complex behavioral patterns. These logs are often multi-channel and non-stationary, and anomalies within them are rare, making anomaly detection challenging. To address these issues, we propose a novel framework that integrates wavelet-aware modulation, multi-resolution wavelet decomposition, and resolution-adaptive attention for robust anomaly detection. Our approach first applies a deviation-aware modulation scheme to suppress routine behaviors while amplifying anomalous deviations. Next, a discrete wavelet transform (DWT) decomposes the log signals into multi-resolution representations, capturing both long-term trends and short-term anomalies. Finally, a learnable attention mechanism dynamically reweights the most discriminative frequency bands for detection. On the CERT r4.2 benchmark, our approach consistently outperforms existing baselines in precision, recall, and F1 score across various time granularities and scenarios.
+ oai:arXiv.org:2601.12231v1
+ cs.LG
+ cs.CR
+ stat.CO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kaichuan Kong, Dongjie Liu, Xiaobo Jin, Shijie Xu, Guanggang Geng
+
+
+ DiffusionQC: Artifact Detection in Histopathology via Diffusion Model
+ https://arxiv.org/abs/2601.12233
+ arXiv:2601.12233v1 Announce Type: new
+Abstract: Digital pathology plays a vital role across modern medicine, offering critical insights for disease diagnosis, prognosis, and treatment. However, histopathology images often contain artifacts introduced during slide preparation and digitization. Detecting and excluding them is essential to ensure reliable downstream analysis. Traditional supervised models typically require large annotated datasets, which is resource-intensive and does not generalize to novel artifact types. To address this, we propose DiffusionQC, which detects artifacts as outliers among clean images using a diffusion model. It requires only a set of clean images for training rather than pixel-level artifact annotations and predefined artifact types. Furthermore, we introduce a contrastive learning module to explicitly enlarge the distribution separation between artifact and clean images, yielding an enhanced version of our method. Empirical results demonstrate performance superior to the state of the art and cross-stain generalization capacity, with significantly less data and annotation.
+ oai:arXiv.org:2601.12233v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Zhenzhen Wang, Zhongliang Zhou, Zhuoyu Wen, Jeong Hwan Kook, John B Wojcik, John Kang
+
+
+ Proc3D: Procedural 3D Generation and Parametric Editing of 3D Shapes with Large Language Models
+ https://arxiv.org/abs/2601.12234
+ arXiv:2601.12234v1 Announce Type: new
+Abstract: Generating 3D models has traditionally been a complex task requiring specialized expertise. While recent advances in generative AI have sought to automate this process, existing methods produce non-editable representations, such as meshes or point clouds, limiting their adaptability for iterative design. In this paper, we introduce Proc3D, a system designed to generate editable 3D models while enabling real-time modifications. At its core, Proc3D introduces the procedural compact graph (PCG), a graph representation of 3D models that encodes the algorithmic rules and structures necessary for generating the model. This representation exposes key parameters, allowing intuitive manual adjustments via sliders and checkboxes, as well as real-time, automated modifications through natural language prompts using Large Language Models (LLMs). We demonstrate Proc3D's capabilities using two generative approaches: GPT-4o with in-context learning (ICL) and a fine-tuned LLAMA-3 model. Experimental results show that Proc3D outperforms existing methods in editing efficiency, achieving more than 400x speedup over conventional approaches that require full regeneration for each modification. Additionally, Proc3D improves ULIP scores by 28%, a metric that evaluates the alignment between generated 3D models and text prompts. By enabling text-aligned 3D model generation along with precise, real-time parametric edits, Proc3D facilitates highly accurate text-based image editing applications.
+ oai:arXiv.org:2601.12234v1
+ cs.GR
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Fadlullah Raji, Stefano Petrangeli, Matheus Gadelha, Yu Shen, Uttaran Bhattacharya, Gang Wu
+
+
+ Analyzing the Impact of EV Battery Charging on the Distribution Network
+ https://arxiv.org/abs/2601.12236
+ arXiv:2601.12236v1 Announce Type: new
+Abstract: Many countries are rapidly adopting electric vehicles (EVs) due to their low running cost and environment-friendly nature. EVs are likely to displace internal combustion (IC) engine cars entirely over the next few years. With the rise in popularity of EVs, adverse effects of EV charging loads on the grid system have been observed. Since the distribution system (DS) is not designed to cope with heavy overloading, the negative impact of EV charging load on the distribution network (DN) cannot be neglected. A high level of EV penetration with uncoordinated charging is the primary cause of voltage instability, increased peak load demand, and reliability issues in the DN. In this paper, a detailed overview of the notable impacts of EV charging on voltage profile, power quality, and DS performance is given. This work also reviews the different topologies of EV chargers and the issues introduced by power converters on the utility grid. Finally, the strategies proposed in the literature for improving EV charging, which account for the random nature of EV charging, the management of peak loads, and bidirectional power flow, are summarized.
+ oai:arXiv.org:2601.12236v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Sahil Aziz, Wajid Ali, Khaliqur Rahman
+
+
+ Power Aware Dynamic Reallocation For Inference
+ https://arxiv.org/abs/2601.12241
+ arXiv:2601.12241v1 Announce Type: new
+Abstract: Disaggregation has emerged as a powerful strategy for optimizing large language model (LLM) inference by separating compute-intensive prefill and memory-bound decode phases across specialized GPUs. This separation improves utilization and throughput under fixed hardware capacity. However, as model and cluster scales grow, power, rather than compute, has become the dominant limiter of overall performance and cost efficiency. In this paper, we propose RAPID, a power-aware disaggregated inference framework that jointly manages GPU roles and power budgets to sustain goodput within strict power caps. RAPID utilizes static and dynamic power reallocation in addition to GPU reallocation to improve performance under fixed power bounds. RAPID improves overall performance and application consistency beyond what is achievable in current disaggregation solutions, resulting in up to a 2x improvement in SLO attainment at peak load compared to a static assignment, without an increase in complexity or cost.
+ oai:arXiv.org:2601.12241v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yiwei Jiang, Sangeeta Chowdhary, Nathaniel Morris, Rutwik Jain, Srilatha Manne, Sam Bayliss
+
+
+ Optimal Power Allocation and Sub-Optimal Channel Assignment for Downlink NOMA Systems Using Deep Reinforcement Learning
+ https://arxiv.org/abs/2601.12242
+ arXiv:2601.12242v1 Announce Type: new
+Abstract: In recent years, Non-Orthogonal Multiple Access (NOMA) has emerged as a promising candidate among multiple access frameworks, and the evolution of deep machine learning has motivated active efforts to incorporate it into NOMA systems. The main motivation for such active studies is the growing need to optimize the utilization of network resources, as the expansion of the Internet of Things (IoT) has caused a scarcity of network resources. NOMA addresses this need by power multiplexing, allowing multiple users to access the network simultaneously. Nevertheless, the NOMA system has a few limitations. Several works have been proposed to mitigate them, including the optimization of power allocation known as the joint resource allocation (JRA) method, and the integration of the JRA method with deep reinforcement learning (JRA-DRL). Despite this, the channel assignment problem remains unclear and requires further investigation. In this paper, we propose a deep reinforcement learning framework that incorporates replay memory with an on-policy algorithm, allocating network resources in a NOMA system so as to generalize the learning. We also provide extensive simulations to evaluate the effects of varying the learning rate, batch size, type of model, and the number of features in the state.
+ oai:arXiv.org:2601.12242v1
+ cs.AI
+ cs.LG
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.7840/kics.2025.50.3.406
+ J. Korean Inst. Commun. Inf. Sci. (J-KICS), vol. 50, no. 3, pp. 406-419, 2025
+ WooSeok Kim, Jeonghoon Lee, Sangho Kim, Taesun An, WonMin Lee, Dowon Kim, Kyungseop Shin
+
+
+ Less is More: Label-Guided Summarization of Procedural and Instructional Videos
+ https://arxiv.org/abs/2601.12243
+ arXiv:2601.12243v1 Announce Type: new
+Abstract: Video summarization helps turn long videos into clear, concise representations that are easier to review, document, and analyze, especially in high-stakes domains like surgical training. Prior work has progressed from using basic visual features like color, motion, and structural changes to using pre-trained vision-language models that better capture video semantics and temporal flow, resulting in more context-aware video summarization. We propose a three-stage framework, PRISM: Procedural Representation via Integrated Semantic and Multimodal analysis, that produces semantically grounded video summaries. PRISM combines adaptive visual sampling, label-driven keyframe anchoring, and contextual validation using a large language model (LLM). Our method ensures that selected frames reflect meaningful and procedural transitions while filtering out generic or hallucinated content, resulting in contextually coherent summaries across both domain-specific and instructional videos. We evaluate our method on instructional and activity datasets, using reference summaries for instructional videos. Despite sampling fewer than 5% of the original frames, our summaries retain 84% semantic content while improving over baselines by as much as 33%. Our approach generalizes across procedural and domain-specific video tasks, achieving strong performance with both semantic alignment and precision.
+ oai:arXiv.org:2601.12243v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shreya Rajpal, Michal Golovanesky, Carsten Eickhoff
+
+
+ A Comprehensive Review of Bio-Inspired Approaches to Coordination, Communication, and System Architecture in Underwater Swarm Robotics
+ https://arxiv.org/abs/2601.12244
+ arXiv:2601.12244v1 Announce Type: new
+Abstract: The increasing complexity of marine operations has intensified the need for intelligent robotic systems to support ocean observation, exploration, and resource management. Underwater swarm robotics offers a promising framework that extends the capabilities of individual autonomous platforms through collective coordination. Inspired by natural systems, such as fish schools and insect colonies, bio-inspired swarm approaches enable distributed decision-making, adaptability, and resilience under challenging marine conditions. Yet research in this field remains fragmented, with limited integration across algorithmic, communication, and hardware design perspectives. This review synthesises bio-inspired coordination mechanisms, communication strategies, and system design considerations for underwater swarm robotics. It examines key marine-specific algorithms, including the Artificial Fish Swarm Algorithm, Whale Optimisation Algorithm, Coral Reef Optimisation, and Marine Predators Algorithm, highlighting their applications in formation control, task allocation, and environmental interaction. The review also analyses communication constraints unique to the underwater domain and emerging acoustic, optical, and hybrid solutions that support cooperative operation. Additionally, it examines hardware and system design advances that enhance system efficiency and scalability. A multi-dimensional classification framework evaluates existing approaches across communication dependency, environmental adaptability, energy efficiency, and swarm scalability. Through this integrated analysis, the review unifies bio-inspired coordination algorithms, communication modalities, and system design approaches. It also identifies converging trends, key challenges, and future research directions for real-world deployment of underwater swarm systems.
+ oai:arXiv.org:2601.12244v1
+ cs.RO
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.3390/jmse14010059
+ Journal of Marine Science and Engineering, 14(1), 59 (2026)
+ Shyalan Ramesh, Scott Mann, Alex Stumpf
+
+
+ Sound2Hap: Learning Audio-to-Vibrotactile Haptic Generation from Human Ratings
+ https://arxiv.org/abs/2601.12245
+ arXiv:2601.12245v1 Announce Type: new
+Abstract: Environmental sounds like footsteps, keyboard typing, or dog barking carry rich information and emotional context, making them valuable for designing haptics in user applications. Existing audio-to-vibration methods, however, rely on signal-processing rules tuned for music or games and often fail to generalize across diverse sounds. To address this, we first investigated user perception of four existing audio-to-haptic algorithms, then created a data-driven model for environmental sounds. In Study 1, 34 participants rated vibrations generated by the four algorithms for 1,000 sounds, revealing no consistent algorithm preferences. Using this dataset, we trained Sound2Hap, a CNN-based autoencoder, to generate perceptually meaningful vibrations from diverse sounds with low latency. In Study 2, 15 participants rated its output higher than signal-processing baselines on both audio-vibration match and Haptic Experience Index (HXI), finding it more harmonious with diverse sounds. This work demonstrates a perceptually validated approach to audio-haptic translation, broadening the reach of sound-driven haptics.
+ oai:arXiv.org:2601.12245v1
+ cs.HC
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yinan Li, Hasti Seifi
+
+
+ Explicit symmetric low-regularity integrators for the semilinear Klein-Gordon equation
+ https://arxiv.org/abs/2601.12246
+ arXiv:2601.12246v1 Announce Type: new
+Abstract: This paper is concerned with the design and analysis of symmetric low-regularity integrators for the semilinear Klein-Gordon equation. We first propose a general symmetrization procedure that allows for the systematic construction of symmetric schemes from existing explicit (non-symmetric) integrators. Applying this procedure, we derive two novel schemes. Error analyses show that both integrators achieve their optimal convergence orders in the energy space under significantly relaxed regularity assumptions. Furthermore, the symmetry property ensures that the convergence order of a first-order symmetric scheme improves as the regularity of the exact solution increases. A numerical experiment demonstrates that the proposed second-order symmetric scheme nearly preserves the system energy over extended periods.
+ oai:arXiv.org:2601.12246v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhirui Shen, Bin Wang
+
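As background for the symmetrization procedure the abstract above refers to, one textbook route (which may differ from the paper's actual construction) builds a symmetric scheme by composing a one-step map \(\Phi_h\) with its adjoint:

```latex
% adjoint of a one-step method, and its symmetric composition
\Phi_h^{*} := \Phi_{-h}^{-1},
\qquad
\Psi_h := \Phi_{h/2}^{*} \circ \Phi_{h/2}
\quad\Longrightarrow\quad
\Psi_h^{*} = \Psi_h .
```

Symmetric one-step methods admit local error expansions in even powers of \(h\) for sufficiently smooth solutions, which is consistent with the abstract's observation that the convergence order of a first-order symmetric scheme improves as the regularity of the exact solution increases.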
+
+ Plan, Verify and Fill: A Structured Parallel Decoding Approach for Diffusion Language Models
+ https://arxiv.org/abs/2601.12247
+ arXiv:2601.12247v1 Announce Type: new
+Abstract: Diffusion Language Models (DLMs) present a promising non-sequential paradigm for text generation, distinct from standard autoregressive (AR) approaches. However, current decoding strategies often adopt a reactive stance, underutilizing the bidirectional context to dictate global generation trajectories. To address this, we propose Plan-Verify-Fill (PVF), a training-free paradigm that grounds planning via quantitative validation. PVF actively constructs a hierarchical skeleton by prioritizing high-leverage semantic anchors and employs a verification protocol to operationalize pragmatic structural stopping where further deliberation yields diminishing returns. Extensive evaluations on LLaDA-8B-Instruct and Dream-7B-Instruct demonstrate that PVF reduces the Number of Function Evaluations (NFE) by up to 65% compared to confidence-based parallel decoding across benchmark datasets, unlocking superior efficiency without compromising accuracy.
+ oai:arXiv.org:2601.12247v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Miao Li, Hanyang Jiang, Sikai Chen, Hengyu Fu, Yuhang Cai, Baihe Huang, Tinghan Ye, Xuanzhou Chen, Pascal Van Hentenryck
+
+
+ An Innovative Framework for Breast Cancer Detection Using Pyramid Adaptive Atrous Convolution, Transformer Integration, and Multi-Scale Feature Fusion
+ https://arxiv.org/abs/2601.12249
+ arXiv:2601.12249v1 Announce Type: new
+Abstract: Breast cancer is one of the most common cancers among women worldwide, and its accurate and timely diagnosis plays a critical role in improving treatment outcomes. This thesis presents an innovative framework for detecting malignant masses in mammographic images by integrating the Pyramid Adaptive Atrous Convolution (PAAC) and Transformer architectures. The proposed approach utilizes Multi-Scale Feature Fusion to enhance the extraction of features from benign and malignant tissues and combines Dice Loss and Focal Loss functions to improve the model's learning process, effectively reducing errors in binary breast cancer classification and achieving high accuracy and efficiency. In this study, a comprehensive dataset of breast cancer images from INbreast, MIAS, and DDSM was preprocessed through data augmentation and contrast enhancement and resized to 227x227 pixels for model training. Leveraging the Transformer's ability to manage long-range dependencies with Self-Attention mechanisms, the proposed model achieved high accuracy in detecting cancerous masses, outperforming foundational models such as BreastNet, DeepMammo, Multi-Scale CNN, Swin-Unet, and SegFormer. The final evaluation results for the proposed model include an accuracy of 98.5\%, sensitivity of 97.8\%, specificity of 96.3\%, F1-score of 98.2\%, and overall precision of 97.9\%. These metrics demonstrate a significant improvement over traditional methods and confirm the model's effectiveness in identifying cancerous masses in complex scenarios and large datasets. This model shows potential as a reliable and efficient tool for breast cancer diagnosis and can be effectively integrated into medical diagnostic systems.
+ oai:arXiv.org:2601.12249v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ehsan Sadeghi Pour, Mahdi Esmaeili, Morteza Romoozi
+
+
+ Breaking Coordinate Overfitting: Geometry-Aware WiFi Sensing for Cross-Layout 3D Pose Estimation
+ https://arxiv.org/abs/2601.12252
+ arXiv:2601.12252v1 Announce Type: new
+Abstract: WiFi-based 3D human pose estimation offers a low-cost and privacy-preserving alternative to vision-based systems for smart interaction. However, existing approaches rely on visual 3D poses as supervision and directly regress CSI to a camera-based coordinate system. We find that this practice leads to coordinate overfitting: models memorize deployment-specific WiFi transceiver layouts rather than only learning activity-relevant representations, resulting in severe generalization failures. To address this challenge, we present PerceptAlign, the first geometry-conditioned framework for WiFi-based cross-layout pose estimation. PerceptAlign introduces a lightweight coordinate unification procedure that aligns WiFi and vision measurements in a shared 3D space using only two checkerboards and a few photos. Within this unified space, it encodes calibrated transceiver positions into high-dimensional embeddings and fuses them with CSI features, making the model explicitly aware of device geometry as a conditional variable. This design forces the network to disentangle human motion from deployment layouts, enabling robust and, for the first time, layout-invariant WiFi pose estimation. To support systematic evaluation, we construct the largest cross-domain 3D WiFi pose estimation dataset to date, comprising 21 subjects, 5 scenes, 18 actions, and 7 device layouts. Experiments show that PerceptAlign reduces in-domain error by 12.3% and cross-domain error by more than 60% compared to state-of-the-art baselines. These results establish geometry-conditioned learning as a viable path toward scalable and practical WiFi sensing.
+ oai:arXiv.org:2601.12252v1
+ cs.HC
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Songming Jia, Yan Lu, Bin Liu, Xiang Zhang, Peng Zhao, Xinmeng Tang, Yelin Wei, Jinyang Huang, Huan Yan, Zhi Liu
+
+
+ Federated Joint Learning for Domain and Class Generalization
+ https://arxiv.org/abs/2601.12253
+ arXiv:2601.12253v1 Announce Type: new
+Abstract: Efficient fine-tuning of visual-language models like CLIP has become crucial due to their large-scale parameter size and extensive pretraining requirements. Existing methods typically address either the issue of unseen classes or unseen domains in isolation, without considering a joint framework for both. In this paper, we propose \textbf{Fed}erated Joint Learning for \textbf{D}omain and \textbf{C}lass \textbf{G}eneralization, termed \textbf{FedDCG}, a novel approach that addresses both class and domain generalization in federated learning settings. Our method introduces a domain grouping strategy where class-generalized networks are trained within each group to prevent decision boundary confusion. During inference, we aggregate class-generalized results based on domain similarity, effectively integrating knowledge from both class and domain generalization. Specifically, a learnable network is employed to enhance class generalization capabilities, and a decoupling mechanism separates general and domain-specific knowledge, improving generalization to unseen domains. Extensive experiments across various datasets show that \textbf{FedDCG} outperforms state-of-the-art baselines in terms of accuracy and robustness.
+ oai:arXiv.org:2601.12253v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Haoran Xu, Jiaze Li, Jianzhong Ju, Zhenbo Luo
+
+
+ Confidence-based Filtering for Speech Dataset Curation with Generative Speech Enhancement Using Discrete Tokens
+ https://arxiv.org/abs/2601.12254
+ arXiv:2601.12254v1 Announce Type: new
+Abstract: Generative speech enhancement (GSE) models show great promise in producing high-quality clean speech from noisy inputs, enabling applications such as curating noisy text-to-speech (TTS) datasets into high-quality ones. However, GSE models are prone to hallucination errors, such as phoneme omissions and speaker inconsistency, which conventional error filtering based on non-intrusive speech quality metrics often fails to detect. To address this issue, we propose a non-intrusive method for filtering hallucination errors from discrete token-based GSE models. Our method leverages the log-probabilities of generated tokens as confidence scores to detect potential errors. Experimental results show that the confidence scores strongly correlate with a suite of intrusive SE metrics, and that our method effectively identifies hallucination errors missed by conventional filtering methods. Furthermore, we demonstrate the practical utility of our method: curating an in-the-wild TTS dataset with our confidence-based filtering improves the performance of subsequently trained TTS models.
+ oai:arXiv.org:2601.12254v1
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Kazuki Yamauchi, Masato Murata, Shogo Seki
+
+
+ Improving Large Molecular Language Model via Relation-aware Multimodal Collaboration
+ https://arxiv.org/abs/2601.12256
+ arXiv:2601.12256v1 Announce Type: new
+Abstract: Large language models (LLMs) have demonstrated their instruction-following capabilities and achieved powerful performance on various tasks. Inspired by their success, recent works in the molecular domain have led to the development of large molecular language models (LMLMs) that integrate 1D molecular strings or 2D molecular graphs into the language models. However, existing LMLMs often suffer from hallucination and limited robustness, largely due to inadequate integration of diverse molecular modalities such as 1D sequences, 2D molecular graphs, and 3D conformations. To address these limitations, we propose CoLLaMo, a large language model-based molecular assistant equipped with a multi-level molecular modality-collaborative projector. The relation-aware modality-collaborative attention mechanism in the projector facilitates fine-grained and relation-guided information exchange between atoms by incorporating 2D structural and 3D spatial relations. Furthermore, we present a new molecule-centric automatic evaluation, including a hallucination assessment metric and a GPT-based caption quality evaluation, to address the limitations of generic token-based evaluation metrics (e.g., BLEU) widely used in assessing the molecular comprehension of LMLMs. Our extensive experiments demonstrate that CoLLaMo enhances the molecular modality generalization capabilities of LMLMs, achieving the best performance on multiple tasks, including molecule captioning, computed property QA, descriptive property QA, motif counting, and IUPAC name prediction.
+ oai:arXiv.org:2601.12256v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinyoung Park, Minseong Bae, Jeehye Na, Hyunwoo J. Kim
+
+
+ Soft Shadow Diffusion (SSD): Physics-inspired Learning for 3D Computational Periscopy
+ https://arxiv.org/abs/2601.12257
+ arXiv:2601.12257v1 Announce Type: new
+Abstract: Conventional imaging requires a line of sight to create accurate visual representations of a scene. In certain circumstances, however, obtaining a suitable line of sight may be impractical, dangerous, or even impossible. Non-line-of-sight (NLOS) imaging addresses this challenge by reconstructing the scene from indirect measurements. Recently, passive NLOS methods that use an ordinary photograph of the subtle shadow cast onto a visible wall by the hidden scene have gained interest. These methods are currently limited to 1D or low-resolution 2D color imaging or to localizing a hidden object whose shape is approximately known. Here, we generalize this class of methods and demonstrate a 3D reconstruction of a hidden scene from an ordinary NLOS photograph. To achieve this, we propose a novel reformulation of the light transport model that conveniently decomposes the hidden scene into \textit{light-occluding} and \textit{non-light-occluding} components to yield a separable non-linear least squares (SNLLS) inverse problem. We develop two solutions: a gradient-based optimization method and a physics-inspired neural network approach, which we call Soft Shadow Diffusion (SSD). Despite the challenging ill-conditioned inverse problem encountered here, our approaches are effective on numerous 3D scenes in real experimental scenarios. Moreover, SSD is trained in simulation but generalizes well to unseen classes in simulation and real-world NLOS scenes. SSD also shows surprising robustness to noise and ambient illumination.
+ oai:arXiv.org:2601.12257v1
+ cs.CV
+ cs.AI
+ cs.CG
+ cs.GR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ European Conference on Computer Vision (ECCV 2024)
+ Fadlullah Raji, John Murray-Bruce
+
+
+ FutureX-Pro: Extending Future Prediction to High-Value Vertical Domains
+ https://arxiv.org/abs/2601.12259
+ arXiv:2601.12259v1 Announce Type: new
+Abstract: Building upon FutureX, which established a live benchmark for general-purpose future prediction, this report introduces FutureX-Pro, including FutureX-Finance, FutureX-Retail, FutureX-PublicHealth, FutureX-NaturalDisaster, and FutureX-Search. These together form a specialized framework extending agentic future prediction to high-value vertical domains. While generalist agents demonstrate proficiency in open-domain search, their reliability in capital-intensive and safety-critical sectors remains under-explored. FutureX-Pro targets four economically and socially pivotal verticals: Finance, Retail, Public Health, and Natural Disaster. We benchmark agentic Large Language Models (LLMs) on entry-level yet foundational prediction tasks -- ranging from forecasting market indicators and supply chain demands to tracking epidemic trends and natural disasters. By adapting the contamination-free, live-evaluation pipeline of FutureX, we assess whether current State-of-the-Art (SOTA) agentic LLMs possess the domain grounding necessary for industrial deployment. Our findings reveal the performance gap between generalist reasoning and the precision required for high-value vertical applications.
+ oai:arXiv.org:2601.12259v1
+ cs.AI
+ cs.CE
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Jiashuo Liu, Siyuan Chen, Zaiyuan Wang, Zhiyuan Zeng, Jiacheng Guo, Liang Hu, Lingyue Yin, Suozhi Huang, Wenxin Hao, Yang Yang, Zerui Cheng, Zixin Yao, Lingyue Yin, Haoxin Liu, Jiayi Cheng, Yuzhen Li, Zezhong Ma, Bingjie Wang, Bingsen Qiu, Xiao Liu, Zeyang Zhang, Zijian Liu, Jinpeng Wang, Mingren Yin, Tianci He, Yali Liao, Yixiao Tian, Zhenwei Zhu, Anqi Dai, Ge Zhang, Jingkai Liu, Kaiyuan Zhang, Wenlong Wu, Xiang Gao, Xinjie Chen, Zhixin Yao, Zhoufutu Wen, B. Aditya Prakash, Jose Blanchet, Mengdi Wang, Nian Si, Wenhao Huang
+
+
+ Docs2Synth: A Synthetic Data Trained Retriever Framework for Scanned Visually Rich Documents Understanding
+ https://arxiv.org/abs/2601.12260
+ arXiv:2601.12260v1 Announce Type: new
+Abstract: Visually rich document understanding (VRDU) in regulated domains is particularly challenging, since scanned documents often contain sensitive, evolving, and domain-specific knowledge. This leads to two major challenges: the lack of manual annotations for model adaptation and the difficulty for pretrained models to stay up-to-date with domain-specific facts. While Multimodal Large Language Models (MLLMs) show strong zero-shot abilities, they still suffer from hallucination and limited domain grounding. In contrast, discriminative Vision-Language Pre-trained Models (VLPMs) provide reliable grounding but require costly annotations to cover new domains. We introduce Docs2Synth, a synthetic-supervision framework that enables retrieval-guided inference for private and low-resource domains. Docs2Synth automatically processes raw document collections, generates and verifies diverse QA pairs via an agent-based system, and trains a lightweight visual retriever to extract domain-relevant evidence. During inference, the retriever collaborates with an MLLM through an iterative retrieval--generation loop, reducing hallucination and improving response consistency. We further deliver Docs2Synth as an easy-to-use Python package, enabling plug-and-play deployment across diverse real-world scenarios. Experiments on multiple VRDU benchmarks show that Docs2Synth substantially enhances grounding and domain generalization without requiring human annotations.
+ oai:arXiv.org:2601.12260v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yihao Ding, Qiang Sun, Puzhen Wu, Sirui Li, Siwen Luo, Wei Liu
+
+
+ Environment-Aware Code Generation: How far are We?
+ https://arxiv.org/abs/2601.12262
+ arXiv:2601.12262v1 Announce Type: new
+Abstract: Recent progress in large language models (LLMs) has improved code generation, but most evaluations still test isolated, small-scale code (e.g., a single function) under default or unspecified software environments. As a result, it is unclear whether LLMs can reliably generate executable code tailored to a user's specific environment. We present the first systematic study of Environment-Aware Code Generation (EACG), where generated code must be functionally correct and directly executable under arbitrary software configurations. To enable realistic evaluation, we introduce VersiBCB, a benchmark that is multi-package, execution-verified, and deprecation-aware, capturing complex and evolving environments that prior datasets often overlook. Using VersiBCB, we investigate three complementary adaptation axes: data, parameters, and cache, and develop representative strategies for each. Our results show that current LLMs struggle with environment-specific code generation, while our adaptations improve environment compatibility and executability. These findings highlight key challenges and opportunities for deploying LLMs in practical software engineering workflows.
+ oai:arXiv.org:2601.12262v1
+ cs.SE
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tongtong Wu, Rongyi Chen, Wenjie Du, Suyu Ma, Guilin Qi, Zhenchang Xing, Shahram Khadivi, Ramesh Periyathambi, Gholamreza Haffari
+
+
+ Multimodal Generative Engine Optimization: Rank Manipulation for Vision-Language Model Rankers
+ https://arxiv.org/abs/2601.12263
+ arXiv:2601.12263v1 Announce Type: new
+Abstract: Vision-Language Models (VLMs) are rapidly replacing unimodal encoders in modern retrieval and recommendation systems. While their capabilities are well-documented, their robustness against adversarial manipulation in competitive ranking scenarios remains largely unexplored. In this paper, we uncover a critical vulnerability in VLM-based product search: multimodal ranking attacks. We present Multimodal Generative Engine Optimization (MGEO), a novel adversarial framework that enables a malicious actor to unfairly promote a target product by jointly optimizing imperceptible image perturbations and fluent textual suffixes. Unlike existing attacks that treat modalities in isolation, MGEO employs an alternating gradient-based optimization strategy to exploit the deep cross-modal coupling within the VLM. Extensive experiments on real-world datasets using state-of-the-art models demonstrate that our coordinated attack significantly outperforms text-only and image-only baselines. These findings reveal that multimodal synergy, typically a strength of VLMs, can be weaponized to compromise the integrity of search rankings without triggering conventional content filters.
+ oai:arXiv.org:2601.12263v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yixuan Du, Chenxiao Yu, Haoyan Xu, Ziyi Wang, Yue Zhao, Xiyang Hu
+
+
+ Statistical Firefly Algorithm for Truss Topology Optimization
+ https://arxiv.org/abs/2601.12265
+ arXiv:2601.12265v1 Announce Type: new
+Abstract: This study proposes an algorithm, termed the statistical firefly algorithm (SFA), for truss topology optimization. In the proposed algorithm, historical results of fireflies' motions are used in hypothesis testing to restrict the motions suggested by current information exchanges between fireflies to only those that are potentially useful. Hypothesis testing is applied to the mechanism of an ordinary firefly algorithm (FA) without changing its structure; as a result, the implementation of the proposed algorithm is simple and straightforward. Limiting the motions of fireflies to those that are potentially useful reduces the number of firefly evaluations and, subsequently, the computational effort. To test the validity and efficiency of the proposed algorithm, it is used to solve several truss topology optimization problems, including some benchmark problems. It is found that the added statistical strategy in the SFA significantly enhances the performance of the original FA in terms of computational effort while still maintaining the quality of the obtained results.
+ oai:arXiv.org:2601.12265v1
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nghi Huu Duong, Duy Vo, Pruettha Nanakorn
+
+
+ Opportunistic Scheduling for Optimal Spot Instance Savings in the Cloud
+ https://arxiv.org/abs/2601.12266
+ arXiv:2601.12266v1 Announce Type: new
+Abstract: We study the problem of scheduling delay-sensitive jobs over spot and on-demand cloud instances to minimize average cost while meeting an average delay constraint. Jobs arrive as a general stochastic process, and incur different costs based on the instance type. This work provides the first analytical treatment of this problem using tools from queuing theory, stochastic processes, and optimization. We derive cost expressions for general policies, prove queue length one is optimal for low target delays, and characterize the optimal wait-time distribution. For high target delays, we identify a knapsack structure and design a scheduling policy that exploits it. An adaptive algorithm is proposed to fully utilize the allowed delay, and empirical results confirm its near-optimality.
+ oai:arXiv.org:2601.12266v1
+ cs.DC
+ cs.NI
+ cs.PF
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Neelkamal Bhuyan, Randeep Bhatia, Murali Kodialam, TV Lakshman
+
+
+ Simulated Annealing Enhances Theory-of-Mind Reasoning in Autoregressive Language Models
+ https://arxiv.org/abs/2601.12269
+ arXiv:2601.12269v1 Announce Type: new
+Abstract: Autoregressive language models are next-token predictors and have been criticized for only optimizing surface plausibility (i.e., local coherence) rather than maintaining correct latent-state representations (i.e., global coherence). Because Theory of Mind (ToM) tasks crucially depend on reasoning about latent mental states of oneself and others, such models are therefore often thought to fail at ToM. While post-training methods can improve ToM performance, we show that strong ToM capability can be recovered directly from the base model without any additional weight updates or verifications. Our approach builds on recent power-sampling methods (Karan & Du, 2025) that use Markov chain Monte Carlo (MCMC) to sample from sharpened sequence-level (rather than token-level) probability distributions of autoregressive language models. We further find that incorporating annealing, where the tempered distribution is gradually shifted from high to low temperature, substantially improves ToM performance over fixed-temperature power sampling. Together, these results suggest that sampling-based optimization provides a powerful way to extract latent capabilities from language models without retraining.
+ oai:arXiv.org:2601.12269v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xucong Hu, Jian-Qiao Zhu
+
+
+ SplittingSecrets: A Compiler-Based Defense for Preventing Data Memory-Dependent Prefetcher Side-Channels
+ https://arxiv.org/abs/2601.12270
+ arXiv:2601.12270v1 Announce Type: new
+Abstract: Traditional side-channels take advantage of secrets being used as inputs to unsafe instructions, used for memory accesses, or used in control flow decisions. Constant-time programming, which restricts such code patterns, has been widely adopted as a defense against these vulnerabilities. However, new hardware optimizations in the form of Data Memory-dependent Prefetchers (DMP) present in Apple, Intel, and ARM CPUs have shown such defenses are not sufficient. These prefetchers, unlike classical prefetchers, use the content of memory as well as the trace of prior accesses to determine prefetch targets. An adversary abusing such a prefetcher has been shown to be able to mount attacks leaking data-at-rest; data that is never used by the program, even speculatively, in an unsafe manner.
+ In response, this paper introduces SplittingSecrets, a compiler-based tool that can harden software libraries against side-channels arising from DMPs. SplittingSecrets's approach avoids reasoning about the complex internals of different DMPs and instead relies on one key aspect of all DMPs: activation requires data to resemble addresses. To prevent secret data from leaking, SplittingSecrets transforms memory operations to ensure that secrets are never stored in memory in a manner resembling an address, thereby avoiding DMP activation on those secrets. Rather than disable a DMP entirely, SplittingSecrets can provide targeted hardening for only specific secrets entirely in software.
+ We have implemented SplittingSecrets using LLVM, supporting both source-level memory operations and those generated by the compiler backend for the AArch64 architecture. We have analyzed the performance overhead involved in safeguarding secrets from DMP-induced attacks using common primitives in libsodium, a popular cryptographic library, when built for Apple M-series CPUs.
+ oai:arXiv.org:2601.12270v1
+ cs.CR
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Reshabh K Sharma, Dan Grossman, David Kohlbrenner
+
+
+ AgenticPruner: MAC-Constrained Neural Network Compression via LLM-Driven Strategy Search
+ https://arxiv.org/abs/2601.12272
+ arXiv:2601.12272v1 Announce Type: new
+Abstract: Neural network pruning remains essential for deploying deep learning models on resource-constrained devices, yet existing approaches primarily target parameter reduction without directly controlling computational cost. This yields unpredictable inference latency in deployment scenarios where strict Multiply-Accumulate (MAC) operation budgets must be met. We propose AgenticPruner, a framework utilizing large language models to achieve MAC-constrained optimization through iterative strategy learning. Our approach coordinates three specialized agents: a Profiling Agent that analyzes model architecture and MAC distributions, a Master Agent that orchestrates the workflow with divergence monitoring, and an Analysis Agent powered by Claude 3.5 Sonnet that learns optimal strategies from historical attempts. Through in-context learning, the Analysis Agent improves convergence success rate from 48% to 71% compared to grid search. Building upon isomorphic pruning's graph-based structural grouping, our method adds context-aware adaptation by analyzing patterns across pruning iterations, enabling automatic convergence to target MAC budgets within user-defined tolerance bands.
+ We validate our framework on ImageNet-1K across ResNet, ConvNeXt, and DeiT architectures. On CNNs, our approach achieves MAC targeting while maintaining or improving accuracy: ResNet-50 reaches 1.77G MACs with 77.04% accuracy (+0.91% vs baseline); ResNet-101 achieves 4.22G MACs with 78.94% accuracy (+1.56% vs baseline). For ConvNeXt-Small, pruning to 8.17G MACs yields 1.41x GPU and 1.07x CPU speedup with 45% parameter reduction. On Vision Transformers, we demonstrate MAC-budget compliance within user-defined tolerance bands (typically +1% to +5% overshoot, -5% to -15% undershoot), establishing feasibility for deployment scenarios requiring strict computational guarantees.
+ oai:arXiv.org:2601.12272v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shahrzad Esmat, Mahdi Banisharif, Ali Jannesari
+
+
+ Leveraging Mutation Analysis for LLM-based Repair of Quantum Programs
+ https://arxiv.org/abs/2601.12273
+ arXiv:2601.12273v1 Announce Type: new
+Abstract: In recent years, Automated Program Repair (APR) techniques specifically designed for quantum programs have been proposed. However, existing approaches often suffer from low repair success rates or poor understandability of the generated patches. In this study, we construct a framework in which a large language model (LLM) generates code repairs along with a natural language explanation of the applied repairs. To investigate how the contextual information included in prompts influences APR performance for quantum programs, we design four prompt configurations with different combinations of static information, dynamic information, and mutation analysis results. Mutation analysis evaluates how small changes to specific parts of a program affect its execution results and provides more detailed dynamic information than simple execution outputs such as stack traces. Our experimental results show that mutation analysis can provide valuable contextual information for LLM-based APR of quantum programs, improving repair success rates (achieving 94.4% in our experiment) and in some cases also improving the quality of generated explanations. Our findings point toward new directions for developing APR techniques for quantum programs that enhance both reliability and explainability.
+ oai:arXiv.org:2601.12273v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chihiro Yoshida, Yuta Ishimoto, Olivier Nourry, Masanari Kondo, Makoto Matsushita, Yasutaka Kamei, Yoshiki Higo
+
+
+ Hybrid Concolic Testing with Large Language Models for Guided Path Exploration
+ https://arxiv.org/abs/2601.12274
+ arXiv:2601.12274v1 Announce Type: new
+Abstract: Concolic testing, a powerful hybrid software testing technique, has historically been plagued by fundamental limitations such as path explosion and the high cost of constraint solving, which hinder its practical application in large-scale, real-world software systems. This paper introduces a novel algorithmic framework that synergistically integrates concolic execution with Large Language Models (LLMs) to overcome these challenges. Our hybrid approach leverages the semantic reasoning capabilities of LLMs to guide path exploration, prioritize interesting execution paths, and assist in constraint solving. We formally define the system architecture and algorithms that constitute this new paradigm. Through a series of experiments on both synthetic and real-world Fintech applications, we demonstrate that our approach significantly outperforms traditional concolic testing, random testing, and genetic algorithm-based methods in terms of branch coverage, path coverage, and time-to-coverage. The results indicate that by combining the strengths of both concolic execution and LLMs, our method achieves a more efficient and effective exploration of the program state space, leading to improved bug detection capabilities.
+ oai:arXiv.org:2601.12274v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mahdi Eslamimehr
+
+
+ Predictive Prototyping: Evaluating Design Concepts with ChatGPT
+ https://arxiv.org/abs/2601.12276
+ arXiv:2601.12276v1 Announce Type: new
+Abstract: The design-build-test cycle is essential for innovation, but physical prototyping is often slow and expensive. Although physics-based simulation and strategic prototyping can reduce cost, meaningful evaluation is frequently constrained until an integrated prototype is built. This paper investigates whether a generative pretrained transformer (GPT) can predict information typically obtained through prototyping, including cost, performance, and perceived usability. We introduce a retrieval-augmented generation (RAG) method to emulate design feedback using OpenAI GPT-4o, grounded in prototyping data scraped from Instructables.com to increase access to relevant precedent. Two studies are reported. First, a controlled experiment compares GPT-RAG and human designers, who receive design sketches and predict cost, performance, and usability; predictions are evaluated against ground-truth results from physical prototypes. Second, we report an applied demonstration in which a physical prototype is produced from GPT-RAG recommendations and compared with a commercial baseline and a topology-optimized design. Results show that GPT-RAG provides more accurate cost and performance estimates than individual or crowd human estimates, while yielding comparable usability insights; the GPT-RAG-informed prototype also outperforms both comparison prototypes. Repeated querying with response averaging significantly improves accuracy, suggesting that LLMs can emulate crowd aggregation effects consistent with the law of large numbers.
+ oai:arXiv.org:2601.12276v1
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hilsann Yong, Bradley A. Camburn
+
+
+ An Efficient and Multi-Modal Navigation System with One-Step World Model
+ https://arxiv.org/abs/2601.12277
+ arXiv:2601.12277v1 Announce Type: new
+Abstract: Navigation is a fundamental capability for mobile robots. While the current trend is to use learning-based approaches to replace traditional geometry-based methods, existing end-to-end learning-based policies often struggle with 3D spatial reasoning and lack a comprehensive understanding of physical world dynamics. Integrating world models, which predict future observations conditioned on given actions, with iterative optimization planning offers a promising solution due to their capacity for imagination and flexibility. However, current navigation world models, typically built on pure transformer architectures, often rely on multi-step diffusion processes and autoregressive frame-by-frame generation. These mechanisms result in prohibitive computational latency, rendering real-time deployment impossible. To address this bottleneck, we propose a lightweight navigation world model that adopts a one-step generation paradigm and a 3D U-Net backbone equipped with efficient spatial-temporal attention. This design drastically reduces inference latency, enabling high-frequency control while achieving superior predictive performance. We also integrate this model into an optimization-based planning framework utilizing anchor-based initialization to handle multi-modal goal navigation tasks. Extensive closed-loop experiments in both simulation and real-world environments demonstrate our system's superior efficiency and robustness compared to state-of-the-art baselines.
+ oai:arXiv.org:2601.12277v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wangtian Shen, Ziyang Meng, Jinming Ma, Mingliang Zhou, Diyun Xiang
+
+
+ HCFT: Hierarchical Convolutional Fusion Transformer for EEG Decoding
+ https://arxiv.org/abs/2601.12279
+ arXiv:2601.12279v1 Announce Type: new
+Abstract: Electroencephalography (EEG) decoding requires models that can effectively extract and integrate complex temporal, spectral, and spatial features from multichannel signals. To address this challenge, we propose a lightweight and generalizable decoding framework named Hierarchical Convolutional Fusion Transformer (HCFT), which combines dual-branch convolutional encoders and hierarchical Transformer blocks for multi-scale EEG representation learning. Specifically, the model first captures local temporal and spatiotemporal dynamics through time-domain and time-space convolutional branches, and then aligns these features via a cross-attention mechanism that enables interaction between branches at each stage. Subsequently, a hierarchical Transformer fusion structure is employed to encode global dependencies across all feature stages, while a customized Dynamic Tanh normalization module is introduced to replace traditional Layer Normalization in order to enhance training stability and reduce redundancy. Extensive experiments are conducted on two representative benchmark datasets, BCI Competition IV-2b and CHB-MIT, covering both event-related cross-subject classification and continuous seizure prediction tasks. Results show that HCFT achieves 80.83% average accuracy and a Cohen's kappa of 0.6165 on BCI IV-2b, as well as 99.10% sensitivity, 0.0236 false positives per hour, and 98.82% specificity on CHB-MIT, consistently outperforming over ten state-of-the-art baseline methods. Ablation studies confirm that each core component of the proposed framework contributes significantly to the overall decoding performance, demonstrating HCFT's effectiveness in capturing EEG dynamics and its potential for real-world BCI applications.
+ oai:arXiv.org:2601.12279v1
+ cs.HC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haodong Zhang, Jiapeng Zhu, Yitong Chen, Hongqi Li
+
+
+ Democratizing Music Therapy: LLM-Based Automated EEG Analysis and Progress Tracking for Low-Cost Home Devices
+ https://arxiv.org/abs/2601.12280
+ arXiv:2601.12280v1 Announce Type: new
+Abstract: Home-based music therapy devices require accessible and cost-effective solutions for users to understand and track their therapeutic progress. Traditional physiological signal analysis, particularly EEG interpretation, relies heavily on domain experts, creating barriers to scalability and home adoption. Meanwhile, few experts are capable of interpreting physiological signal data while also making targeted music recommendations. While large language models (LLMs) have shown promise in various domains, their application to automated physiological report generation for music therapy represents an unexplored task. We present a prototype system that leverages LLMs to bridge this gap -- transforming raw EEG and cardiovascular data into human-readable therapeutic reports and personalized music recommendations. Unlike prior work focusing on real-time physiological adaptation during listening, our approach emphasizes post-session analysis and interpretable reporting, enabling non-expert users to comprehend their psychophysiological states and track therapeutic outcomes over time. By integrating signal processing modules with LLM-based reasoning agents, the system provides a practical and low-cost solution for short-term progress monitoring in home music therapy contexts. This work demonstrates the feasibility of applying LLMs to a novel task -- democratizing access to physiology-driven music therapy through automated, interpretable reporting.
+ oai:arXiv.org:2601.12280v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Huixin Xue, Guangjun Xu, Shihong Ren, Xian Gao, Ruian Tie, Zhen Zhou, Hao Liu, Yue Gao
+
+
+ CytoCLIP: Learning Cytoarchitectural Characteristics in Developing Human Brain Using Contrastive Language Image Pre-Training
+ https://arxiv.org/abs/2601.12282
+ arXiv:2601.12282v1 Announce Type: new
+Abstract: The functions of different regions of the human brain are closely linked to their distinct cytoarchitecture, which is defined by the spatial arrangement and morphology of the cells. Identifying brain regions by their cytoarchitecture enables various scientific analyses of the brain. However, delineating these areas manually in brain histological sections is time-consuming and requires specialized knowledge. An automated approach is necessary to minimize the effort needed from human experts. To address this, we propose CytoCLIP, a suite of vision-language models derived from pre-trained Contrastive Language-Image Pre-Training (CLIP) frameworks to learn joint visual-text representations of brain cytoarchitecture. CytoCLIP comprises two model variants: one is trained using low-resolution whole-region images to understand the overall cytoarchitectural pattern of an area, and the other is trained on high-resolution image tiles for detailed cellular-level representation. The training dataset is created from Nissl-stained histological sections of developing fetal brains of different gestational weeks. It includes 86 distinct regions for low-resolution images and 384 brain regions for high-resolution tiles. We evaluate the model's understanding of the cytoarchitecture and generalization ability using region classification and cross-modal retrieval tasks. Multiple experiments are performed under various data setups, including data from samples of different ages and sectioning planes. Experimental results demonstrate that CytoCLIP outperforms existing methods. It achieves an F1 score of 0.87 for whole-region classification and 0.91 for high-resolution image tile classification.
+ oai:arXiv.org:2601.12282v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Pralaypati Ta, Sriram Venkatesaperumal, Keerthi Ram, Mohanasankar Sivaprakasam
+
+
+ SDiT: Semantic Region-Adaptive for Diffusion Transformers
+ https://arxiv.org/abs/2601.12283
+ arXiv:2601.12283v1 Announce Type: new
+Abstract: Diffusion Transformers (DiTs) achieve state-of-the-art performance in text-to-image synthesis but remain computationally expensive due to the iterative nature of denoising and the quadratic cost of global attention. In this work, we observe that denoising dynamics are spatially non-uniform: background regions converge rapidly, while edges and textured areas evolve much more actively. Building on this insight, we propose SDiT, a Semantic Region-Adaptive Diffusion Transformer that allocates computation according to regional complexity. SDiT introduces a training-free framework combining (1) semantic-aware clustering via fast Quickshift-based segmentation, (2) complexity-driven regional scheduling to selectively update informative areas, and (3) boundary-aware refinement to maintain spatial coherence. Without any model retraining or architectural modification, SDiT achieves up to 3.0x acceleration while preserving nearly identical perceptual and semantic quality to full-attention inference.
+ oai:arXiv.org:2601.12283v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bowen Lin, Fanjiang Ye, Yihua Liu, Zhenghui Guo, Boyuan Zhang, Weijian Zheng, Yufan Xu, Tiancheng Xing, Yuke Wang, Chengming Zhang
+
+
+ How Safe Is Your Data in Connected and Autonomous Cars: A Consumer Advantage or a Privacy Nightmare?
+ https://arxiv.org/abs/2601.12284
+ arXiv:2601.12284v1 Announce Type: new
+Abstract: The rapid evolution of the automobile sector, driven by advancements in connected and autonomous vehicles (CAVs), has transformed how vehicles communicate, operate, and interact with their surroundings. Technologies such as Vehicle-to-Everything (V2X) communication enable autonomous cars to generate and exchange substantial amounts of data with real-world entities, enhancing safety, improving performance, and delivering personalized user experiences. However, this data-driven ecosystem introduces significant challenges, particularly concerning data privacy, security, and governance. The absence of transparency and comprehensive regulatory frameworks exacerbates issues of unauthorized data access, prolonged retention, and potential misuse, creating tension between consumer benefits and privacy risks. This review paper explores the multifaceted nature of data sharing in CAVs, analyzing its contributions to innovation and its associated vulnerabilities. It evaluates data-sharing mechanisms and communication technologies, highlights the benefits of data exchange across various use cases, examines privacy concerns and risks of data misuse, and critically reviews regulatory frameworks and their inadequacies in safeguarding user privacy. By providing a thorough analysis of the current state of data sharing in the automotive sector, the paper emphasizes the urgent need for robust policies and ethical data management practices. It calls for striking a balance between fostering technological advancements and ensuring secure, consumer-friendly solutions, paving the way for a trustworthy and innovative automotive future.
+ oai:arXiv.org:2601.12284v1
+ cs.CY
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Amit Chougule, Vinay Chamola, Norbert Herencsar, Fei Richard Yu
+
+
+ LegacyAvatars: Volumetric Face Avatars For Traditional Graphics Pipelines
+ https://arxiv.org/abs/2601.12285
+ arXiv:2601.12285v1 Announce Type: new
+Abstract: We introduce a novel representation for efficient classical rendering of photorealistic 3D face avatars. Leveraging recent advances in radiance fields anchored to parametric face models, our approach achieves controllable volumetric rendering of complex facial features, including hair, skin, and eyes. At enrollment time, we learn a set of radiance manifolds in 3D space to extract an explicit layered mesh, along with appearance and warp textures. During deployment, this allows us to control and animate the face through simple linear blending and alpha compositing of textures over a static mesh. This explicit representation also enables the generated avatar to be efficiently streamed online and then rendered using classical mesh and shader-based rendering on legacy graphics platforms, eliminating the need for any custom engineering or integration.
+ oai:arXiv.org:2601.12285v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Safa C. Medin, Gengyan Li, Ziqian Bai, Ruofei Du, Leonhard Helminger, Yinda Zhang, Stephan J. Garbin, Philip L. Davidson, Gregory W. Wornell, Thabo Beeler, Abhimitra Meka
+
+
+ Conversational Context Classification: A Representation Engineering Approach
+ https://arxiv.org/abs/2601.12286
+ arXiv:2601.12286v1 Announce Type: new
+Abstract: The increasing prevalence of Large Language Models (LLMs) demands effective safeguards for their operation, particularly concerning their tendency to generate out-of-context responses. A key challenge is accurately detecting when LLMs stray from expected conversational norms, manifesting as topic shifts, factual inaccuracies, or outright hallucinations. Traditional anomaly detection struggles to directly apply within contextual semantics. This paper outlines our experiment in exploring the use of Representation Engineering (RepE) and One-Class Support Vector Machine (OCSVM) to identify subspaces within the internal states of LLMs that represent a specific context. By training OCSVM on in-context examples, we establish a robust boundary within the LLM's hidden state latent space. We evaluate our study with two open-source LLMs, Llama and Qwen, in a specific contextual domain. Our approach entails identifying the optimal layers within the LLM's internal state subspaces that most strongly associate with the context of interest. Our evaluation showed promising results in identifying the subspace for a specific context. Beyond detecting in-context and out-of-context conversation threads, this work also contributes to better interpretation of LLMs.
+ oai:arXiv.org:2601.12286v1
+ cs.CL
+ cs.AI
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jonathan Pan
+
+
+ TimeGMM: Single-Pass Probabilistic Forecasting via Adaptive Gaussian Mixture Models with Reversible Normalization
+ https://arxiv.org/abs/2601.12288
+ arXiv:2601.12288v1 Announce Type: new
+Abstract: Probabilistic time series forecasting is crucial for quantifying future uncertainty, with significant applications in fields such as energy and finance. However, existing methods often rely on computationally expensive sampling or restrictive parametric assumptions to characterize future distributions, which limits predictive performance and introduces distributional mismatch. To address these challenges, this paper presents TimeGMM, a novel probabilistic forecasting framework based on Gaussian Mixture Models (GMM) that captures complex future distributions in a single forward pass. A key component is GMM-adapted Reversible Instance Normalization (GRIN), a novel module designed to dynamically adapt to temporal-probabilistic distribution shifts. The framework integrates a dedicated Temporal Encoder (TE-Module) with a Conditional Temporal-Probabilistic Decoder (CTPD-Module) to jointly capture temporal dependencies and mixture distribution parameters. Extensive experiments demonstrate that TimeGMM consistently outperforms state-of-the-art methods, achieving maximum improvements of 22.48\% in CRPS and 21.23\% in NMAE.
+ oai:arXiv.org:2601.12288v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lei Liu, Tengyuan Liu, Hongwei Zhao, Jiahui Huang, Ruibo Guo, Bin Li
+
+
+ ParaMETA: Towards Learning Disentangled Paralinguistic Speaking Styles Representations from Speech
+ https://arxiv.org/abs/2601.12289
+ arXiv:2601.12289v1 Announce Type: new
+Abstract: Learning representative embeddings for different types of speaking styles, such as emotion, age, and gender, is critical for both recognition tasks (e.g., cognitive computing and human-computer interaction) and generative tasks (e.g., style-controllable speech generation). In this work, we introduce ParaMETA, a unified and flexible framework for learning and controlling speaking styles directly from speech. Unlike existing methods that rely on single-task models or cross-modal alignment, ParaMETA learns disentangled, task-specific embeddings by projecting speech into dedicated subspaces for each type of style. This design reduces inter-task interference, mitigates negative transfer, and allows a single model to handle multiple paralinguistic tasks such as emotion, gender, age, and language classification. Beyond recognition, ParaMETA enables fine-grained style control in Text-To-Speech (TTS) generative models. It supports both speech- and text-based prompting and allows users to modify one speaking style while preserving others. Extensive experiments demonstrate that ParaMETA outperforms strong baselines in classification accuracy and generates more natural and expressive speech, while maintaining a lightweight and efficient model suitable for real-world applications.
+ oai:arXiv.org:2601.12289v1
+ cs.SD
+ cs.LG
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Haowei Lou, Hye-young Paik, Wen Hu, Lina Yao
+
+
+ Re-educating Educated Ones: A Case Study on Chakma Language Revitalization in Chittagong Hill Tracts
+ https://arxiv.org/abs/2601.12290
+ arXiv:2601.12290v1 Announce Type: new
+Abstract: Indigenous languages face significant cultural oppression from official state languages, particularly in the Global South. We investigate the Bangladeshi Chakma language revitalization movement, a community grappling with language liquidity and amalgamation into the dominant Bengali language. Our six-month-long qualitative study involving interviews and focus group discussions with Chakma language learning stakeholders uncovered existing community socio-economic challenges and resilience strategies. We noted the need for culturally grounded digital tools and resources. We propose an ICT-mediated community-centric framework for Indigenous language revitalization in the Global South, emphasizing the integration of historical identity elements, stakeholder-defined requirements, and effective digital engagement strategies to empower communities in preserving their linguistic and cultural heritage.
+ oai:arXiv.org:2601.12290v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Avijoy Chakma, Adity Khisa, Soham Khisa, Jannatun Noor, Sharifa Sultana
+
+
+ OpenNavMap: Structure-Free Topometric Mapping via Large-Scale Collaborative Localization
+ https://arxiv.org/abs/2601.12291
+ arXiv:2601.12291v1 Announce Type: new
+Abstract: Scalable and maintainable map representations are fundamental to enabling large-scale visual navigation and facilitating the deployment of robots in real-world environments. While collaborative localization across multi-session mapping enhances efficiency, traditional structure-based methods struggle with high maintenance costs and fail in feature-less environments or under significant viewpoint changes typical of crowd-sourced data. To address this, we propose OPENNAVMAP, a lightweight, structure-free topometric system leveraging 3D geometric foundation models for on-demand reconstruction. Our method unifies dynamic programming-based sequence matching, geometric verification, and confidence-calibrated optimization to achieve robust, coarse-to-fine submap alignment without requiring pre-built 3D models. Evaluations on the Map-Free benchmark demonstrate superior accuracy over structure-from-motion and regression baselines, achieving an average translation error of 0.62m. Furthermore, the system maintains global consistency across 15km of multi-session data with an absolute trajectory error below 3m for map merging. Finally, we validate practical utility through 12 successful autonomous image-goal navigation tasks on simulated and physical robots. Code and datasets will be publicly available at https://rpl-cs-ucl.github.io/OpenNavMap_page.
+ oai:arXiv.org:2601.12291v1
+ cs.RO
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jianhao Jiao, Changkun Liu, Jingwen Yu, Boyi Liu, Qianyi Zhang, Yue Wang, Dimitrios Kanoulas
+
+
+ ToolPRMBench: Evaluating and Advancing Process Reward Models for Tool-using Agents
+ https://arxiv.org/abs/2601.12294
+ arXiv:2601.12294v1 Announce Type: new
+Abstract: Reward-guided search methods have demonstrated strong potential in enhancing tool-using agents by effectively guiding sampling and exploration over complex action spaces. As a core design, those search methods utilize process reward models (PRMs) to provide step-level rewards, enabling more fine-grained monitoring. However, there is a lack of systematic and reliable evaluation benchmarks for PRMs in tool-using settings. In this paper, we introduce ToolPRMBench, a large-scale benchmark specifically designed to evaluate PRMs for tool-using agents. ToolPRMBench is built on top of several representative tool-using benchmarks and converts agent trajectories into step-level test cases. Each case contains the interaction history, a correct action, a plausible but incorrect alternative, and relevant tool metadata. We utilize offline sampling to isolate local single-step errors and online sampling to capture realistic multi-step failures from full agent rollouts. A multi-LLM verification pipeline is proposed to reduce label noise and ensure data quality. We conduct extensive experiments across large language models, general PRMs, and tool-specialized PRMs on ToolPRMBench. The results reveal clear differences in PRM effectiveness and highlight the potential of specialized PRMs for tool-using. Code and data will be released at https://github.com/David-Li0406/ToolPRMBench.
+ oai:arXiv.org:2601.12294v1
+ cs.AI
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Dawei Li, Yuguang Yao, Zhen Tan, Huan Liu, Ruocheng Guo
+
+
+ Distribution Shift Is Key to Learning Invariant Prediction
+ https://arxiv.org/abs/2601.12296
+ arXiv:2601.12296v1 Announce Type: new
+Abstract: An interesting phenomenon arises: Empirical Risk Minimization (ERM) sometimes outperforms methods specifically designed for out-of-distribution tasks. This motivates an investigation into the reasons behind such behavior beyond algorithmic design. In this study, we find that one such reason lies in the distribution shift across training domains. A large degree of distribution shift can lead to better performance even under ERM. Specifically, we derive several theoretical and empirical findings demonstrating that distribution shift plays a crucial role in model learning and benefits learning invariant prediction. Firstly, the proposed upper bounds indicate that the degree of distribution shift directly affects the prediction ability of the learned models. If it is large, the models' ability can increase, approximating invariant prediction models that make stable predictions under arbitrary known or unseen domains; and vice versa. We also prove that, under certain data conditions, ERM solutions can achieve performance comparable to that of invariant prediction models. Secondly, the empirical validation results demonstrated that the predictions of learned models approximate those of Oracle or Optimal models, provided that the degree of distribution shift in the training data increases.
+ oai:arXiv.org:2601.12296v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Hong Zheng, Fei Teng
+
+
+ CD-PIM: A High-Bandwidth and Compute-Efficient LPDDR5-Based PIM for Low-Batch LLM Acceleration on Edge-Device
+ https://arxiv.org/abs/2601.12298
+ arXiv:2601.12298v1 Announce Type: new
+Abstract: Edge deployment of low-batch large language models (LLMs) faces critical memory bandwidth bottlenecks when executing memory-intensive general matrix-vector multiplications (GEMV) operations. While digital processing-in-memory (PIM) architectures promise to accelerate GEMV operations, existing PIM-equipped edge devices still suffer from three key limitations: limited bandwidth improvement, component under-utilization in mixed workloads, and low compute capacity of computing units (CUs). In this paper, we propose CD-PIM to address these challenges through four key innovations. First, we introduce a high-bandwidth compute-efficient mode (HBCEM) that enhances bandwidth by dividing each bank into four pseudo-banks through segmented global bitlines. Second, we propose a low-batch interleaving mode (LBIM) to improve component utilization by overlapping GEMV operations with GEMM operations. Third, we design a compute-efficient CU that performs enhanced GEMV operations in a pipelined manner by serially feeding weight data into the computing core. Fourth, we adopt a column-wise mapping for the key-cache matrix and row-wise mapping for the value-cache matrix, which fully utilizes CU resources. Our evaluation shows that compared to a GPU-only baseline and state-of-the-art PIM designs, our CD-PIM achieves 11.42x and 4.25x speedup on average within a single batch in HBCEM mode, respectively. Moreover, for low-batch sizes, the CD-PIM achieves an average speedup of 1.12x in LBIM compared to HBCEM.
+ oai:arXiv.org:2601.12298v1
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ye Lin, Chao Fang, Xiaoyong Song, Qi Wu, Anying Jiang, Yichuan Bai, Li Du
+
+
+ "What If My Face Gets Scanned Without Consent": Understanding Older Adults' Experiences with Biometric Payment
+ https://arxiv.org/abs/2601.12300
+ arXiv:2601.12300v1 Announce Type: new
+Abstract: Biometric payment, i.e., biometric authentication implemented in digital payment systems, can reduce memory demands and streamline payment for older adults. However, older adults' perceptions and practices regarding biometric payment remain underexplored. We conducted semi-structured interviews with 22 Chinese older adults, including both users and non-users. Participants were motivated to use biometric payment due to convenience and perceived security. However, they also worried about loss of control due to its password-free nature and expressed concerns about biometric data security. Participants also identified desired features for biometric payment, such as lightweight and context-aware cognitive confirmation mechanisms to enhance user control. Based on these findings, we outline recommendations for more controllable and informative digital financial services that better support older adults.
+ oai:arXiv.org:2601.12300v1
+ cs.HC
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yue Deng, Changyang He, Bo Li, Yixin Zou
+
+
+ Facet-Aware Multi-Head Mixture-of-Experts Model with Text-Enhanced Pre-training for Sequential Recommendation
+ https://arxiv.org/abs/2601.12301
+ arXiv:2601.12301v1 Announce Type: new
+Abstract: Sequential recommendation (SR) systems excel at capturing users' dynamic preferences by leveraging their interaction histories. Most existing SR systems assign a single embedding vector to each item to represent its features, adopting various models to combine these embeddings into a sequence representation that captures user intent. However, we argue that this representation alone is insufficient to capture an item's multi-faceted nature (e.g., movie genres, starring actors). Furthermore, users often exhibit complex and varied preferences within these facets (e.g., liking both action and musical films within the genre facet), which are challenging to fully represent with static identifiers. To address these issues, we propose a novel architecture titled Facet-Aware Multi-Head Mixture-of-Experts Model for Sequential Recommendation (FAME). We leverage sub-embeddings from each head in the final multi-head attention layer to predict the next item separately, effectively capturing distinct item facets. A gating mechanism then integrates these predictions by dynamically determining their importance. Additionally, we introduce a Mixture-of-Experts (MoE) network within each attention head to disentangle varied user preferences within each facet, utilizing a learnable router network to aggregate expert outputs based on context. Complementing this architecture, we design a Text-Enhanced Facet-Aware Pre-training module to overcome the limitations of randomly initialized embeddings. By utilizing a pre-trained text encoder and employing an alternating supervised contrastive learning objective, we explicitly disentangle facet-specific features from textual metadata (e.g., descriptions) before sequential training begins. This ensures that the item embeddings are semantically robust and aligned with the downstream multi-facet framework.
+ oai:arXiv.org:2601.12301v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mingrui Liu, Sixiao Zhang, Cheng Long
+
+
+ On the Minimum Length of Functional Batch Codes with Small Recovery Sets
+ https://arxiv.org/abs/2601.12302
+ arXiv:2601.12302v1 Announce Type: new
+Abstract: Batch codes are of potential use for load balancing and private information retrieval in distributed data storage systems. Recently, a special case of batch codes, termed functional batch codes, was proposed in the literature. In functional batch codes, users can query linear combinations of the information symbols, and not only the information symbols themselves, as is the case for standard batch codes. In this work, we consider linear functional batch codes with the additional property that every query is answered by using only a small number of coded symbols. We derive bounds on the minimum length of such codes, and evaluate the results by numerical computations.
+ oai:arXiv.org:2601.12302v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kristiina Oksner, Henk D. L. Hollmann, Ago-Erik Riet, Vitaly Skachek
+
+
+ Concepts from Representations: Post-hoc Concept Bottleneck Models via Sparse Decomposition of Visual Representations
+ https://arxiv.org/abs/2601.12303
+ arXiv:2601.12303v1 Announce Type: new
+Abstract: Deep learning models have achieved remarkable success in image recognition, yet their inherent opacity poses challenges for deployment in critical domains. Concept-based interpretations aim to address this by explaining model reasoning through human-understandable concepts. However, existing post-hoc methods and ante-hoc concept bottleneck models (CBMs) suffer from limitations such as unreliable concept relevance, non-visual or labor-intensive concept definitions, and model- or data-agnostic assumptions. This paper introduces Post-hoc Concept Bottleneck Model via Representation Decomposition (PCBM-ReD), a novel pipeline that retrofits interpretability onto pretrained opaque models. PCBM-ReD automatically extracts visual concepts from a pre-trained encoder, employs multimodal large language models (MLLMs) to label and filter concepts based on visual identifiability and task relevance, and selects an independent subset via reconstruction-guided optimization. Leveraging CLIP's visual-text alignment, it decomposes image representations into a linear combination of concept embeddings to fit into the CBM abstraction. Extensive experiments across 11 image classification tasks show PCBM-ReD achieves state-of-the-art accuracy, narrows the performance gap with end-to-end models, and exhibits better interpretability.
+ oai:arXiv.org:2601.12303v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shizhan Gong, Xiaofan Zhang, Qi Dou
+
+
+ A Two-Stage Globally-Diverse Adversarial Attack for Vision-Language Pre-training Models
+ https://arxiv.org/abs/2601.12304
+ arXiv:2601.12304v1 Announce Type: new
+Abstract: Vision-language pre-training (VLP) models are vulnerable to adversarial examples, particularly in black-box scenarios. Existing multimodal attacks often suffer from limited perturbation diversity and unstable multi-stage pipelines. To address these challenges, we propose 2S-GDA, a two-stage globally-diverse attack framework. The proposed method first introduces textual perturbations through a globally-diverse strategy by combining candidate text expansion with globally-aware replacement. To enhance visual diversity, image-level perturbations are generated using multi-scale resizing and block-shuffle rotation. Extensive experiments on VLP models demonstrate that 2S-GDA consistently improves attack success rates over state-of-the-art methods, with gains of up to 11.17\% in black-box settings. Our framework is modular and can be easily combined with existing methods to further enhance adversarial transferability.
+ oai:arXiv.org:2601.12304v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wutao Chen, Huaqin Zou, Chen Wan, Lifeng Huang
+
+
+ Machine Learning as a Service (MLaaS) Dataset Generator Framework for IoT Environments
+ https://arxiv.org/abs/2601.12305
+ arXiv:2601.12305v1 Announce Type: new
+Abstract: We propose a novel MLaaS Dataset Generator (MDG) framework that creates configurable and reproducible datasets for evaluating Machine Learning as a Service (MLaaS) selection and composition. MDG simulates realistic MLaaS behaviour by training and evaluating diverse model families across multiple real-world datasets and data distribution settings. It records detailed functional attributes, quality of service metrics, and composition-specific indicators, enabling systematic analysis of service performance and cross-service behaviour. Using MDG, we generate more than ten thousand MLaaS service instances and construct a large-scale benchmark dataset suitable for downstream evaluation. We also implement a built-in composition mechanism that models how services interact under varied Internet of Things conditions. Experiments demonstrate that datasets generated by MDG enhance selection accuracy and composition quality compared to existing baselines. MDG provides a practical and extensible foundation for advancing data-driven research on MLaaS selection and composition.
+ oai:arXiv.org:2601.12305v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Deepak Kanneganti, Sajib Mistry, Sheik Fattah, Joshua Boland, Aneesh Krishna
+
+
+ Rethinking the Value of Multi-Agent Workflow: A Strong Single Agent Baseline
+ https://arxiv.org/abs/2601.12307
+ arXiv:2601.12307v1 Announce Type: new
+Abstract: Recent advances in LLM-based multi-agent systems (MAS) show that workflows composed of multiple LLM agents with distinct roles, tools, and communication patterns can outperform single-LLM baselines on complex tasks. However, most frameworks are homogeneous, where all agents share the same base LLM and differ only in prompts, tools, and positions in the workflow. This raises the question of whether such workflows can be simulated by a single agent through multi-turn conversations. We investigate this across seven benchmarks spanning coding, mathematics, general question answering, domain-specific reasoning, and real-world planning and tool use. Our results show that a single agent can reach the performance of homogeneous workflows with an efficiency advantage from KV cache reuse, and can even match the performance of an automatically optimized heterogeneous workflow. Building on this finding, we propose \textbf{OneFlow}, an algorithm that automatically tailors workflows for single-agent execution, reducing inference costs compared to existing automatic multi-agent design frameworks without trading off accuracy. These results position the single-LLM implementation of multi-agent workflows as a strong baseline for MAS research. We also note that single-LLM methods cannot capture heterogeneous workflows due to the lack of KV cache sharing across different LLMs, highlighting future opportunities in developing \textit{truly} heterogeneous multi-agent systems.
+ oai:arXiv.org:2601.12307v1
+ cs.MA
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiawei Xu, Arief Koesdwiady, Sisong Bei, Yan Han, Baixiang Huang, Dakuo Wang, Yutong Chen, Zheshen Wang, Peihao Wang, Pan Li, Ying Ding
+
+
+ Adaptive Multi-Scale Correlation Meta-Network for Few-Shot Remote Sensing Image Classification
+ https://arxiv.org/abs/2601.12308
+ arXiv:2601.12308v1 Announce Type: new
+Abstract: Few-shot learning in remote sensing remains challenging due to three factors: the scarcity of labeled data, substantial domain shifts, and the multi-scale nature of geospatial objects. To address these issues, we introduce Adaptive Multi-Scale Correlation Meta-Network (AMC-MetaNet), a lightweight yet powerful framework with three key innovations: (i) correlation-guided feature pyramids for capturing scale-invariant patterns, (ii) an adaptive channel correlation module (ACCM) for learning dynamic cross-scale relationships, and (iii) correlation-guided meta-learning that leverages correlation patterns instead of conventional prototype averaging. Unlike prior approaches that rely on heavy pre-trained models or transformers, AMC-MetaNet is trained from scratch with only $\sim600K$ parameters, offering $20\times$ fewer parameters than ResNet-18 while maintaining high efficiency ($<50$ms per image inference). AMC-MetaNet achieves up to 86.65\% accuracy in 5-way 5-shot classification on various remote sensing datasets, including EuroSAT, NWPU-RESISC45, UC Merced Land Use, and AID. Our results establish AMC-MetaNet as a computationally efficient, scale-aware framework for real-world few-shot remote sensing.
+ oai:arXiv.org:2601.12308v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Anurag Kaushish, Ayan Sar, Sampurna Roy, Sudeshna Chakraborty, Prashant Trivedi, Tanupriya Choudhury, Kanav Gupta
+
+
+ Survival is the Only Reward: Sustainable Self-Training Through Environment-Mediated Selection
+ https://arxiv.org/abs/2601.12310
+ arXiv:2601.12310v1 Announce Type: new
+Abstract: Self-training systems often degenerate due to the lack of an external criterion for judging data quality, leading to reward hacking and semantic drift. This paper provides a proof-of-concept system architecture for stable self-training under sparse external feedback and bounded memory, and empirically characterises its learning dynamics and failure modes.
+ We introduce a self-training architecture in which learning is mediated exclusively by environmental viability, rather than by reward, objective functions, or externally defined fitness criteria. Candidate behaviours are executed under real resource constraints, and only those whose environmental effects both persist and preserve the possibility of future interaction are propagated. The environment does not provide semantic feedback, dense rewards, or task-specific supervision; selection operates solely through differential survival of behaviours as world-altering events, making proxy optimisation impossible and rendering reward-hacking evolutionarily unstable.
+ Analysis of semantic dynamics shows that improvement arises primarily through the persistence of effective and repeatable strategies under a regime of consolidation and pruning, a paradigm we refer to as negative-space learning (NSL), and that models develop meta-learning strategies (such as deliberate experimental failure in order to elicit informative error messages) without explicit instruction. This work establishes that environment-grounded selection enables sustainable open-ended self-improvement, offering a viable path toward more robust and generalisable autonomous systems without reliance on human-curated data or complex reward shaping.
+ oai:arXiv.org:2601.12310v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jennifer Dodgson, Alfath Daryl Alhajir, Michael Joedhitya, Akira Rafhael Janson Pattirane, Surender Suresh Kumar, Joseph Lim, C. H. Peh, Adith Ramdas, Steven Zhang Zhexu
+
+
+ Cross-reality Location Privacy Protection in 6G-enabled Vehicular Metaverses: An LLM-enhanced Hybrid Generative Diffusion Model-based Approach
+ https://arxiv.org/abs/2601.12311
+ arXiv:2601.12311v1 Announce Type: new
+Abstract: The emergence of 6G-enabled vehicular metaverses enables Autonomous Vehicles (AVs) to operate across physical and virtual spaces through space-air-ground-sea integrated networks. The AVs can deploy AI agents powered by large AI models as personalized assistants, on edge servers to support intelligent driving decision making and enhanced on-board experiences. However, such cross-reality interactions may cause serious location privacy risks, as adversaries can infer AV trajectories by correlating the location reported when AVs request LBS in reality with the location of the edge servers on which their corresponding AI agents are deployed in virtuality. To address this challenge, we design a cross-reality location privacy protection framework based on hybrid actions, including continuous location perturbation in reality and discrete privacy-aware AI agent migration in virtuality. In this framework, a new privacy metric, termed cross-reality location entropy, is proposed to effectively quantify the privacy levels of AVs. Based on this metric, we formulate an optimization problem to optimize the hybrid action, focusing on achieving a balance between location protection, service latency reduction, and quality of service maintenance. To solve the complex mixed-integer problem, we develop a novel LLM-enhanced Hybrid Diffusion Proximal Policy Optimization (LHDPPO) algorithm, which integrates LLM-driven informative reward design to enhance environment understanding with double Generative Diffusion Models-based policy exploration to handle high-dimensional action spaces, thereby enabling reliable determination of optimal hybrid actions. Extensive experiments on real-world datasets demonstrate that the proposed framework effectively mitigates cross-reality location privacy leakage for AVs while maintaining strong user immersion within 6G-enabled vehicular metaverse scenarios.
+ oai:arXiv.org:2601.12311v1
+ cs.NI
+ cs.CR
+ cs.HC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaofeng Luo, Jiayi He, Jiawen Kang, Ruichen Zhang, Zhaoshui He, Ekram Hossain, Dong In Kim
+
+
+ CurConMix+: A Unified Spatio-Temporal Framework for Hierarchical Surgical Workflow Understanding
+ https://arxiv.org/abs/2601.12312
+ arXiv:2601.12312v1 Announce Type: new
+Abstract: Surgical action triplet recognition aims to understand fine-grained surgical behaviors by modeling the interactions among instruments, actions, and anatomical targets. Despite its clinical importance for workflow analysis and skill assessment, progress has been hindered by severe class imbalance, subtle visual variations, and the semantic interdependence among triplet components. Existing approaches often address only a subset of these challenges rather than tackling them jointly, which limits their ability to form a holistic understanding. This study builds upon CurConMix, a spatial representation framework. At its core, a curriculum-guided contrastive learning strategy learns discriminative and progressively correlated features, further enhanced by structured hard-pair sampling and feature-level mixup. Its temporal extension, CurConMix+, integrates a Multi-Resolution Temporal Transformer (MRTT) that achieves robust, context-aware understanding by adaptively fusing multi-scale temporal features and dynamically balancing spatio-temporal cues. Furthermore, we introduce LLS48, a new, hierarchically annotated benchmark for complex laparoscopic left lateral sectionectomy, providing step-, task-, and action-level annotations. Extensive experiments on CholecT45 and LLS48 demonstrate that CurConMix+ not only outperforms state-of-the-art approaches in triplet recognition, but also exhibits strong cross-level generalization, as its fine-grained features effectively transfer to higher-level phase and step recognition tasks. Together, the framework and dataset provide a unified foundation for hierarchy-aware, reproducible, and interpretable surgical workflow understanding. The code and dataset will be publicly released on GitHub to facilitate reproducibility and further research.
+ oai:arXiv.org:2601.12312v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yongjun Jeon, Jongmin Shin, Kanggil Park, Seonmin Park, Soyoung Lim, Jung Yong Kim, Jinsoo Rhu, Jongman Kim, Gyu-Seong Choi, Namkee Oh, Kyu-Hwan Jung
+
+
+ S^2F-Net: A Robust Spatial-Spectral Fusion Framework for Cross-Model AIGC Detection
+ https://arxiv.org/abs/2601.12313
+ arXiv:2601.12313v1 Announce Type: new
+Abstract: The rapid development of generative models has imposed an urgent demand for detection schemes with strong generalization capabilities. However, existing detection methods generally suffer from overfitting to specific source models, leading to significant performance degradation when confronted with unseen generative architectures. To address these challenges, this paper proposes a cross-model detection framework called S^2F-Net, whose core lies in exploring and leveraging the inherent spectral discrepancies between real and synthetic textures. Considering that upsampling operations leave unique and distinguishable frequency fingerprints in both texture-poor and texture-rich regions, we focus our research on the detection of frequency-domain artifacts, aiming to fundamentally improve the generalization performance of the model. Specifically, we introduce a learnable frequency attention module that adaptively weights and enhances discriminative frequency bands by synergizing spatial texture analysis and spectral dependencies. On the AIGCDetectBenchmark, which includes 17 categories of generative models, S^2F-Net achieves a detection accuracy of 90.49%, significantly outperforming various existing baseline methods in cross-domain detection scenarios.
+ oai:arXiv.org:2601.12313v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiangyu Hu, Yicheng Hong, Hongchuang Zheng, Wenjun Zeng, Bingyao Liu
+
+
+ A Similarity Network for Correlating Musical Structure to Military Strategy
+ https://arxiv.org/abs/2601.12314
+ arXiv:2601.12314v1 Announce Type: new
+Abstract: Music perception, a multi-sensory process based on the synesthesia effect, is an essential component of music aesthetic education. Understanding music structure helps both perception and aesthetic education. Music structure incorporates a range of information, the coordination of which forms the melody, just as different military actions cooperate to produce a military strategy. However, there are few methods for assessing music perception from the perspectives of system operation and information management. In this paper, we explore the similarities between music structure and military strategy while creating the Music Clips Correlation Network (MCCN) based on Mel-frequency Cepstral Coefficients (MFCCs). The inspiration comes from the comparison between a concert conductor's musical score and a military war commander's sand table exercise. Specifically, we create MCCNs for various kinds of war movie soundtracks, then relate military tactics (Sun Tzu's Art of War, etc.) and political institutions to military operations networks. Our primary findings suggest a few similarities, implying that music perception and aesthetic education can be approached from a military strategy and management perspective through this interdisciplinary research. Similarly, we can discover similarities between the art of military scheming and the art of musical structure based on network analysis in order to facilitate the understanding of the relationship between technology and art.
+ oai:arXiv.org:2601.12314v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yiwen Zhang, Hui Zhang, Fanqin Meng
+
+
+ GazeFormer-MoE: Context-Aware Gaze Estimation via CLIP and MoE Transformer
+ https://arxiv.org/abs/2601.12316
+ arXiv:2601.12316v1 Announce Type: new
+Abstract: We present a semantics-modulated, multi-scale Transformer for 3D gaze estimation. Our model conditions CLIP global features with learnable prototype banks (illumination, head pose, background, direction), fuses these prototype-enriched global vectors with CLIP patch tokens and high-resolution CNN tokens in a unified attention space, and replaces several FFN blocks with routed/shared Mixture of Experts to increase conditional capacity. Evaluated on MPIIFaceGaze, EYEDIAP, Gaze360 and ETH-XGaze, our model achieves new state-of-the-art angular errors of 2.49{\deg}, 3.22{\deg}, 10.16{\deg}, and 1.44{\deg}, demonstrating up to a 64% relative improvement over previously reported results. Ablations attribute the gains to prototype conditioning, cross-scale fusion, MoE, and hyperparameter choices. Our code is publicly available at https://github.com/AIPMLab/Gazeformer.
+ oai:arXiv.org:2601.12316v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Xinyuan Zhao, Xianrui Chen, Ahmad Chaddad
+
+
+ Explanova: Automatically Discover Data Insights in N \times M Table via XAI Combined LLM Workflow
+ https://arxiv.org/abs/2601.12317
+ arXiv:2601.12317v1 Announce Type: new
+Abstract: Automation in data analysis has been a long-standing pursuit, and agentic LLMs offer a promising path toward it. Systems such as DeepAnalyze, DataSage, and Datawise are powerful agentic frameworks for automatic fine-grained analysis, powered by LLM-based agentic tool calling. But what if the analysis were instead driven by a preset AutoML-like workflow that exhaustively traverses all possible explorations: statistics of each column Xn, pairwise Xn1-Xn2 relationships, each Xn against all others, followed by an explanation step? Our Explanova is such an attempt, and it is cheaper because it runs on a local small LLM.
+ oai:arXiv.org:2601.12317v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yiming Huang
+
+
+ Beyond Human Annotation: Recent Advances in Data Generation Methods for Document Intelligence
+ https://arxiv.org/abs/2601.12318
+ arXiv:2601.12318v1 Announce Type: new
+Abstract: The advancement of Document Intelligence (DI) demands large-scale, high-quality training data, yet manual annotation remains a critical bottleneck. While data generation methods are evolving rapidly, existing surveys are constrained by fragmented focuses on single modalities or specific tasks, lacking a unified perspective aligned with real-world workflows. To fill this gap, this survey establishes the first comprehensive technical map for data generation in DI. Data generation is redefined as supervisory signal production, and a novel taxonomy is introduced based on the "availability of data and labels." This framework organizes methodologies into four resource-centric paradigms: Data Augmentation, Data Generation from Scratch, Automated Data Annotation, and Self-Supervised Signal Construction. Furthermore, a multi-level evaluation framework is established to integrate intrinsic quality and extrinsic utility, compiling performance gains across diverse DI benchmarks. Guided by this unified structure, the methodological landscape is dissected to reveal critical challenges such as fidelity gaps and frontiers including co-evolutionary ecosystems. Ultimately, by systematizing this fragmented field, data generation is positioned as the central engine for next-generation DI.
+ oai:arXiv.org:2601.12318v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dehao Ying, Fengchang Yu, Haihua Chen, Changjiang Jiang, Yurong Li, Wei Lu
+
+
+ Ordered Local Momentum for Asynchronous Distributed Learning under Arbitrary Delays
+ https://arxiv.org/abs/2601.12322
+ arXiv:2601.12322v1 Announce Type: new
+Abstract: Momentum SGD (MSGD) serves as a foundational optimizer in training deep models due to momentum's key role in accelerating convergence and enhancing generalization. Meanwhile, asynchronous distributed learning is crucial for training large-scale deep models, especially when the computing capabilities of the workers in the cluster are heterogeneous. To reduce communication frequency, local updates are widely adopted in distributed learning. However, how to implement asynchronous distributed MSGD with local updates remains unexplored. To solve this problem, we propose a novel method, called \underline{or}dered \underline{lo}cal \underline{mo}mentum (OrLoMo), for asynchronous distributed learning. In OrLoMo, each worker runs MSGD locally. Then the local momentum from each worker will be aggregated by the server in order based on its global iteration index. To the best of our knowledge, OrLoMo is the first method to implement asynchronous distributed MSGD with local updates. We prove the convergence of OrLoMo for non-convex problems under arbitrary delays. Experiments validate that OrLoMo can outperform its synchronous counterpart and other asynchronous methods.
+ oai:arXiv.org:2601.12322v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chang-Wei Shi, Shi-Shang Wang, Wu-Jun Li
+
+
+ MARO: Learning Stronger Reasoning from Social Interaction
+ https://arxiv.org/abs/2601.12323
+ arXiv:2601.12323v1 Announce Type: new
+Abstract: Humans face countless scenarios that require reasoning and judgment in daily life. However, existing large language model training methods primarily allow models to learn from existing textual content or solve predetermined problems, lacking experience in real scenarios involving interaction, negotiation, and competition with others. To address this, this paper proposes Multi-Agent Reward Optimization (MARO), a method that enables large language models (LLMs) to acquire stronger reasoning abilities by learning and practicing in multi-agent social environments. Specifically, MARO first addresses the sparse learning signal problem by decomposing final success or failure outcomes into each specific behavior during the interaction process; second, it handles the uneven role distribution problem by balancing the training sample weights of different roles; finally, it addresses environmental instability issues by directly evaluating the utility of each behavior. Experimental results demonstrate that MARO not only achieves significant improvements in social reasoning capabilities, but also that the abilities acquired through social simulation learning can effectively transfer to other tasks such as mathematical reasoning and instruction following. This reveals the tremendous potential of multi-agent social learning in enhancing the general reasoning capabilities of LLMs.
+ oai:arXiv.org:2601.12323v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yin Cai, Zhouhong Gu, Juntao Zhang, Ping Chen
+
+
+ Experiencer, Helper, or Observer: Online Fraud Intervention for Older Adults Through Role-based Simulation
+ https://arxiv.org/abs/2601.12324
+ arXiv:2601.12324v1 Announce Type: new
+Abstract: Online fraud is a critical global threat that disproportionately targets older adults. Prior anti-fraud education for older adults has largely relied on static, traditional instruction that limits engagement and real-world transfer, whereas role-based simulation offers realistic yet low-risk opportunities for practice. Moreover, most interventions situate learners as victims, overlooking that fraud encounters often involve multiple roles, such as bystanders who witness scams and helpers who support victims. To address this gap, we developed ROLESafe, an anti-fraud educational intervention in which older adults learn through different learning roles, including Experiencer (experiencing fraud), Helper (assisting a victim), and Observer (witnessing fraud). In a between-subjects study with 144 older adults in China, we found that the Experiencer and Helper roles significantly improved participants' ability to identify online fraud. These findings highlight the promise of role-based, multi-perspective simulations for enhancing fraud awareness among older adults and provide design implications for future anti-fraud education.
+ oai:arXiv.org:2601.12324v1
+ cs.HC
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yue Deng, Xiaowei Chen, Junxiang Liao, Bo Li, Yixin Zou
+
+
+ Multi-Sensor Matching with HyperNetworks
+ https://arxiv.org/abs/2601.12325
+ arXiv:2601.12325v1 Announce Type: new
+Abstract: Hypernetworks are models that generate or modulate the weights of another network. They provide a flexible mechanism for injecting context and task conditioning and have proven broadly useful across diverse applications without significant increases in model size. We leverage hypernetworks to improve multimodal patch matching by introducing a lightweight descriptor-learning architecture that augments a Siamese CNN with (i) hypernetwork modules that compute adaptive, per-channel scaling and shifting and (ii) conditional instance normalization that provides modality-specific adaptation (e.g., visible vs. infrared, VIS-IR) in shallow layers. This combination preserves the efficiency of descriptor-based methods during inference while increasing robustness to appearance shifts. Trained with a triplet loss and hard-negative mining, our approach achieves state-of-the-art results on VIS-NIR and other VIS-IR benchmarks and matches or surpasses prior methods on additional datasets, despite their higher inference cost. To spur progress on domain shift, we also release GAP-VIR, a cross-platform (ground/aerial) VIS-IR patch dataset with 500K pairs, enabling rigorous evaluation of cross-domain generalization and adaptation.
+ oai:arXiv.org:2601.12325v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Eli Passov, Nathan S. Netanyahu, Yosi Keller
+
+
+ EmoKGEdit: Training-free Affective Injection via Visual Cue Transformation
+ https://arxiv.org/abs/2601.12326
+ arXiv:2601.12326v1 Announce Type: new
+Abstract: Existing image emotion editing methods struggle to disentangle emotional cues from latent content representations, often yielding weak emotional expression and distorted visual structures. To bridge this gap, we propose EmoKGEdit, a novel training-free framework for precise and structure-preserving image emotion editing. Specifically, we construct a Multimodal Sentiment Association Knowledge Graph (MSA-KG) to disentangle the intricate relationships among objects, scenes, attributes, visual cues and emotion. MSA-KG explicitly encodes the causal chain among object, attribute, and emotion, and serves as external knowledge to support chain-of-thought reasoning, guiding the multimodal large model to infer plausible emotion-related visual cues and generate coherent instructions. In addition, based on MSA-KG, we design a disentangled structure-emotion editing module that explicitly separates emotional attributes from layout features within the latent space, which ensures that the target emotion is effectively injected while strictly maintaining visual spatial coherence. Extensive experiments demonstrate that EmoKGEdit achieves excellent performance in both emotion fidelity and content preservation, and outperforms the state-of-the-art methods.
+ oai:arXiv.org:2601.12326v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jing Zhang, Bingjie Fan
+
+
+ The Expert Validation Framework (EVF): Enabling Domain Expert Control in AI Engineering
+ https://arxiv.org/abs/2601.12327
+ arXiv:2601.12327v1 Announce Type: new
+Abstract: Generative AI (GenAI) systems promise to transform knowledge work by automating a range of tasks, yet their deployment in enterprise settings remains hindered by the lack of systematic quality assurance mechanisms. We present an Expert Validation Framework that places domain experts at the center of building software with GenAI components, enabling them to maintain authoritative control over system behavior through structured specification, testing, validation, and continuous monitoring processes. Our framework addresses the critical gap between AI capabilities and organizational trust by establishing a rigorous, expert-driven methodology for ensuring quality across diverse GenAI applications. Through a four-stage implementation process encompassing specification, system creation, validation, and production monitoring, the framework enables organizations to leverage GenAI capabilities while maintaining expert oversight and quality standards.
+ oai:arXiv.org:2601.12327v1
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ CAIN2026: 5th International Conference on AI Engineering - Software Engineering for AI
+ Lucas Gren, Felix Dobslaw
+
+
+ FlowIID: Single-Step Intrinsic Image Decomposition via Latent Flow Matching
+ https://arxiv.org/abs/2601.12329
+ arXiv:2601.12329v1 Announce Type: new
+Abstract: Intrinsic Image Decomposition (IID) separates an image into albedo and shading components. It is a core step in many real-world applications, such as relighting and material editing. Existing IID models achieve good results, but often use a large number of parameters. This makes them costly to combine with other models in real-world settings. To address this problem, we propose a flow matching-based solution. For this, we design a novel architecture, FlowIID, based on latent flow matching. FlowIID combines a VAE-guided latent space with a flow matching module, enabling a stable decomposition of albedo and shading. FlowIID is not only parameter-efficient, but also produces results in a single inference step. Despite its compact design, FlowIID delivers competitive and superior results compared to existing models across various benchmarks. This makes it well-suited for deployment in resource-constrained and real-time vision applications.
+ oai:arXiv.org:2601.12329v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mithlesh Singla, Seema Kumari, Shanmuganathan Raman
+
+
+ IceWatch: Forecasting Glacial Lake Outburst Floods (GLOFs) using Multimodal Deep Learning
+ https://arxiv.org/abs/2601.12330
+ arXiv:2601.12330v1 Announce Type: new
+Abstract: Glacial Lake Outburst Floods (GLOFs) pose a serious threat in high mountain regions. They are hazardous to communities, infrastructure, and ecosystems further downstream. The classical methods of GLOF detection and prediction have so far mainly relied on hydrological modeling, threshold-based lake monitoring, and manual satellite image analysis. These approaches suffer from several drawbacks: slow updates, reliance on manual labor, and losses in accuracy when clouds interfere or on-site data are lacking. To tackle these challenges, we present IceWatch: a novel deep learning framework for GLOF prediction that incorporates both spatial and temporal perspectives. The vision component, RiskFlow, of IceWatch deals with Sentinel-2 multispectral satellite imagery using a CNN-based classifier and predicts GLOF events based on the spatial patterns of snow, ice, and meltwater. Its tabular counterpart confirms this prediction by considering physical dynamics. TerraFlow models glacier velocity from NASA ITS_LIVE time series while TempFlow forecasts near-surface temperature from MODIS LST records; both are trained on long-term observational archives and integrated via harmonized preprocessing and synchronization to enable multimodal, physics-informed GLOF prediction. Together they provide cross-validation, which improves the reliability and interpretability of GLOF detection. This system ensures strong predictive performance, rapid data processing for real-time use, and robustness to noise and missing information. IceWatch paves the way for automatic, scalable GLOF warning systems. It also holds potential for integration with diverse sensor inputs and global glacier monitoring activities.
+ oai:arXiv.org:2601.12330v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Zuha Fatima, Muhammad Anser Sohaib, Muhammad Talha, Ayesha Kanwal, Sidra Sultana, Nazia Perwaiz
+
+
+ Efficient Privacy-Preserving Retrieval Augmented Generation with Distance-Preserving Encryption
+ https://arxiv.org/abs/2601.12331
+ arXiv:2601.12331v1 Announce Type: new
+Abstract: RAG has emerged as a key technique for enhancing response quality of LLMs without high computational cost. In traditional architectures, RAG services are provided by a single entity that hosts the dataset within a trusted local environment. However, individuals or small organizations often lack the resources to maintain data storage servers, leading them to rely on outsourced cloud storage. This dependence on untrusted third-party services introduces privacy risks. Embedding-based retrieval mechanisms, commonly used in RAG systems, are vulnerable to privacy leakage such as vector-to-text reconstruction attacks and structural leakage via vector analysis. Several privacy-preserving RAG techniques have been proposed but most existing approaches rely on partially homomorphic encryption, which incurs substantial computational overhead. To address these challenges, we propose an efficient privacy-preserving RAG framework (ppRAG) tailored for untrusted cloud environments that defends against vector-to-text attacks, vector analysis, and query analysis. We propose Conditional Approximate Distance-Comparison-Preserving Symmetric Encryption (CAPRISE) that encrypts embeddings while still allowing the cloud to compute similarity between an encrypted query and the encrypted database embeddings. CAPRISE preserves only the relative distance ordering between the encrypted query and each encrypted database embedding, without exposing inter-database distances, thereby enhancing both privacy and efficiency. To mitigate query analysis, we introduce differential privacy (DP) by perturbing the query embedding prior to encryption, preventing the cloud from inferring sensitive patterns. Experimental results show that ppRAG achieves efficient processing throughput, high retrieval accuracy, and strong privacy guarantees, making it a practical solution for resource-constrained users seeking secure cloud-augmented LLMs.
+ oai:arXiv.org:2601.12331v1
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Huanyi Ye, Jiale Guo, Ziyao Liu, Kwok-Yan Lam
+
+
+ Worst-case Nonlinear Regression with Error Bounds
+ https://arxiv.org/abs/2601.12334
+ arXiv:2601.12334v1 Announce Type: new
+Abstract: This paper proposes an active-learning approach to worst-case nonlinear regression with deterministic error guarantees. Given a known nonlinear function defined over a compact set, we compute a surrogate model, such as a feedforward neural network, by minimizing the maximum absolute approximation error. To address the nonsmooth nature of the resulting minimax problem, we introduce a smooth approximation of the $L_\infty$-type loss that enables efficient gradient-based training. We iteratively enrich the training set by actively learning points of largest approximation error through global optimization. The resulting models admit certified worst-case error bounds, either constant or input-dependent, over the entire input domain. The approach is demonstrated through approximations of nonlinear functions and nonconvex sets, as well as through the derivation of uncertain models of more complex nonlinear dynamics within a given model class, and the approximation of explicit model predictive control laws.
+ oai:arXiv.org:2601.12334v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Alberto Bemporad
+
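The abstract above relies on a smooth surrogate for the nonsmooth $L_\infty$-type loss. One standard choice (the paper's exact smoothing may differ) is the log-sum-exp softmax of absolute errors, which upper-bounds the true maximum within $\log(n)/\beta$:

```python
import math

def smooth_max_abs_error(errors, beta=50.0):
    """Log-sum-exp smoothing of max_i |e_i|.

    A common smooth surrogate for an L_inf-type loss, shown here as an
    illustrative assumption; larger beta tightens the bound:
    max <= smooth <= max + log(n) / beta.
    """
    m = max(abs(e) for e in errors)  # shift for numerical stability
    s = sum(math.exp(beta * (abs(e) - m)) for e in errors)
    return m + math.log(s) / beta
```

Because this surrogate is differentiable in each error term, it enables the gradient-based minimax training the abstract describes.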
+
+ Turbo-GoDec: Exploiting the Cluster Sparsity Prior for Hyperspectral Anomaly Detection
+ https://arxiv.org/abs/2601.12337
+ arXiv:2601.12337v1 Announce Type: new
+Abstract: As a key task in hyperspectral image processing, hyperspectral anomaly detection has garnered significant attention and undergone extensive research. Existing methods primarily rely on two prior assumptions: a low-rank background and sparse anomalies, along with additional spatial assumptions about the background. However, most methods only use the sparsity prior for anomalies and rarely expand on this hypothesis. From observations of hyperspectral images, we find that anomalous pixels exhibit characteristic spatial distributions: they often manifest as small, clustered groups in space, which we refer to as the cluster sparsity of anomalies. We combine this cluster sparsity prior with the classical GoDec algorithm, incorporating it into the S-step of GoDec, yielding a new hyperspectral anomaly detection method that we call Turbo-GoDec. In this approach, we model the cluster sparsity prior of anomalies using a Markov random field and compute the marginal probabilities of anomalies through message passing on a factor graph. Locations with high anomaly probabilities are treated as the sparse component in Turbo-GoDec. Experiments conducted on three real hyperspectral image (HSI) datasets demonstrate the superior performance of the proposed Turbo-GoDec method in detecting small-size anomalies compared with the vanilla GoDec (LSMAD) and state-of-the-art anomaly detection methods. The code is available at https://github.com/jiahuisheng/Turbo-GoDec.
+ oai:arXiv.org:2601.12337v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiahui Sheng, Xiaorun Li, Shuhan Chen
+
+
+ Actionable Advice from Reviews via Mixture of LoRA Experts: A Two-LLM Pipeline for Issue Extraction and Business Recommendations
+ https://arxiv.org/abs/2601.12338
+ arXiv:2601.12338v1 Announce Type: new
+Abstract: Customer reviews contain detailed, domain specific signals about service failures and user expectations, but converting this unstructured feedback into actionable business decisions remains difficult. We study review-to-action generation: producing concrete, implementable recommendations grounded in review text. We propose a modular two-LLM framework in which an Issue model extracts salient issues and assigns coarse themes, and an Advice model generates targeted operational fixes conditioned on the extracted issue representation. To enable specialization without expensive full fine-tuning, we adapt the Advice model using a mixture of LoRA experts strategy: multiple low-rank adapters are trained and a lightweight gating mechanism performs token-level expert mixing at inference, combining complementary expertise across issue types. We construct synthetic review-issue-advice triples from Yelp reviews (airlines and restaurants) to supervise training, and evaluate recommendations using an eight dimension operational rubric spanning actionability, specificity, feasibility, expected impact, novelty, non-redundancy, bias, and clarity. Across both domains, our approach consistently outperforms prompting-only and single-adapter baselines, yielding higher actionability and specificity while retaining favorable efficiency-quality trade-offs.
+ oai:arXiv.org:2601.12338v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kartikey Singh Bhandari, Manav Ganesh, Yashwant Viswanathan, Archit Agrawal, Dhruv Kumar, Pratik Narang
+
+
+ Time-Continuous Modeling for Temporal Affective Pattern Recognition in LLMs
+ https://arxiv.org/abs/2601.12341
+ arXiv:2601.12341v1 Announce Type: new
+Abstract: This paper introduces a dataset and conceptual framework for LLMs to mimic real-world emotional dynamics through time and in-context learning, leveraging a physics-informed neural network and opening a possibility for interpretable dialogue modeling.
+ oai:arXiv.org:2601.12341v1
+ cs.LG
+ cs.AI
+ cs.ET
+ cs.HC
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Rezky Kam, Coddy N. Siswanto
+
+
+ MMDeepResearch-Bench: A Benchmark for Multimodal Deep Research Agents
+ https://arxiv.org/abs/2601.12346
+ arXiv:2601.12346v1 Announce Type: new
+Abstract: Deep Research Agents (DRAs) generate citation-rich reports via multi-step search and synthesis, yet existing benchmarks mainly target text-only settings or short-form multimodal QA, missing end-to-end multimodal evidence use. We introduce MMDeepResearch-Bench (MMDR-Bench), a benchmark of 140 expert-crafted tasks across 21 domains, where each task provides an image-text bundle to evaluate multimodal understanding and citation-grounded report generation. Compared to prior setups, MMDR-Bench emphasizes report-style synthesis with explicit evidence use, where models must connect visual artifacts to sourced claims and maintain consistency across narrative, citations, and visual references. We further propose a unified, interpretable evaluation pipeline: Formula-LLM Adaptive Evaluation (FLAE) for report quality, Trustworthy Retrieval-Aligned Citation Evaluation (TRACE) for citation-grounded evidence alignment, and Multimodal Support-Aligned Integrity Check (MOSAIC) for text-visual integrity, each producing fine-grained signals that support error diagnosis beyond a single overall score. Experiments across 25 state-of-the-art models reveal systematic trade-offs between generation quality, citation discipline, and multimodal grounding, highlighting that strong prose alone does not guarantee faithful evidence use and that multimodal integrity remains a key bottleneck for deep research agents.
+ oai:arXiv.org:2601.12346v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Peizhou Huang, Zixuan Zhong, Zhongwei Wan, Donghao Zhou, Samiul Alam, Xin Wang, Zexin Li, Zhihao Dou, Li Zhu, Jing Xiong, Chaofan Tao, Yan Xu, Dimitrios Dimitriadis, Tuo Zhang, Mi Zhang
+
+
+ RIPPLE++: An Incremental Framework for Efficient GNN Inference on Evolving Graphs
+ https://arxiv.org/abs/2601.12347
+ arXiv:2601.12347v1 Announce Type: new
+Abstract: Real-world graphs are dynamic, with frequent updates to their structure and features due to evolving vertex and edge properties. These continual changes pose significant challenges for efficient inference in graph neural networks (GNNs). Existing vertex-wise and layer-wise inference approaches are ill-suited for dynamic graphs, as they incur redundant computations, large neighborhood traversals, and high communication costs, especially in distributed settings. Additionally, while sampling-based approaches can be adopted to approximate final layer embeddings, these are often not preferred in critical applications due to their non-determinism. These limitations hinder low-latency inference required in real-time applications. To address this, we propose RIPPLE++, a framework for streaming GNN inference that efficiently and accurately updates embeddings in response to changes in the graph structure or features. RIPPLE++ introduces a generalized incremental programming model that captures the semantics of GNN aggregation functions and incrementally propagates updates to affected neighborhoods. RIPPLE++ accommodates all common graph updates, including vertex/edge addition/deletions and vertex feature updates. RIPPLE++ supports both single-machine and distributed deployments. On a single machine, it achieves up to $56$K updates/sec on sparse graphs like Arxiv ($169$K vertices, $1.2$M edges), and about $7.6$K updates/sec on denser graphs like Products ($2.5$M vertices, $123.7$M edges), with latencies of $0.06$--$960$ms, and outperforming state-of-the-art baselines by $2.2$--$24\times$ on throughput. In distributed settings, RIPPLE++ offers up to $\approx25\times$ higher throughput and $20\times$ lower communication costs compared to recomputing baselines.
+ oai:arXiv.org:2601.12347v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pranjal Naman, Parv Agarwal, Hrishikesh Haritas, Yogesh Simmhan
+
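The RIPPLE++ abstract above centers on incrementally propagating graph updates instead of recomputing aggregations. A toy sketch of that idea for a mean aggregator (not the RIPPLE++ API; names are illustrative): keep per-vertex running sums and counts so an edge insertion or deletion updates the aggregate in O(1) rather than re-scanning the neighborhood.

```python
class IncrementalMeanAgg:
    """Per-vertex running mean of (scalar) neighbor features.

    Illustrative sketch of incremental GNN-style aggregation: each
    edge change touches only O(1) state instead of the full
    neighborhood, mirroring the update-propagation idea in RIPPLE++.
    """

    def __init__(self):
        self.sum = {}    # vertex -> running sum of neighbor features
        self.count = {}  # vertex -> current neighbor count

    def add_edge(self, v, neighbor_feat):
        # New neighbor of v: fold its feature into the running sum.
        self.sum[v] = self.sum.get(v, 0.0) + neighbor_feat
        self.count[v] = self.count.get(v, 0) + 1

    def remove_edge(self, v, neighbor_feat):
        # Deleted neighbor: subtract its contribution.
        self.sum[v] -= neighbor_feat
        self.count[v] -= 1

    def mean(self, v):
        return self.sum[v] / self.count[v]
```

In a real GNN the same bookkeeping would hold feature vectors per layer and trigger re-propagation only for affected downstream vertices.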
+
+ Generative AI Agents for Controllable and Protected Content Creation
+ https://arxiv.org/abs/2601.12348
+ arXiv:2601.12348v1 Announce Type: new
+Abstract: The proliferation of generative AI has transformed creative workflows, yet current systems face critical challenges in controllability and content protection. We propose a novel multi-agent framework that addresses both limitations through specialized agent roles and integrated watermarking mechanisms. Unlike existing multi-agent systems focused solely on generation quality, our approach uniquely combines controllable content synthesis with provenance protection during the generation process itself. The framework orchestrates Director/Planner, Generator, Reviewer, Integration, and Protection agents with human-in-the-loop feedback to ensure alignment with user intent while embedding imperceptible digital watermarks. We formalize the pipeline as a joint optimization objective unifying controllability, semantic alignment, and protection robustness. This work contributes to responsible generative AI by positioning multi-agent architectures as a solution for trustworthy creative workflows with built-in ownership tracking and content traceability.
+ oai:arXiv.org:2601.12348v1
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ GenProCC NeurIPS 2025, Paper # 33
+ Haris Khan, Sadia Asif
+
+
+ Zero-Permission Manipulation: Can We Trust Large Multimodal Model Powered GUI Agents?
+ https://arxiv.org/abs/2601.12349
+ arXiv:2601.12349v1 Announce Type: new
+Abstract: Large multimodal model powered GUI agents are emerging as high-privilege operators on mobile platforms, entrusted with perceiving screen content and injecting inputs. However, their design operates under the implicit assumption of Visual Atomicity: that the UI state remains invariant between observation and action. We demonstrate that this assumption is fundamentally invalid in Android, creating a critical attack surface.
+ We present Action Rebinding, a novel attack that allows a seemingly-benign app with zero dangerous permissions to rebind an agent's execution. By exploiting the inevitable observation-to-action gap inherent in the agent's reasoning pipeline, the attacker triggers foreground transitions to rebind the agent's planned action toward the target app. We weaponize the agent's task-recovery logic and Android's UI state preservation to orchestrate programmable, multi-step attack chains. Furthermore, we introduce an Intent Alignment Strategy (IAS) that manipulates the agent's reasoning process to rationalize UI states, enabling it to bypass verification gates (e.g., confirmation dialogs) that would otherwise be rejected.
+ We evaluate Action Rebinding Attacks on six widely-used Android GUI agents across 15 tasks. Our results demonstrate a 100% success rate for atomic action rebinding and the ability to reliably orchestrate multi-step attack chains. With IAS, the success rate in bypassing verification gates increases (from 0% to up to 100%). Notably, the attacker application requires no sensitive permissions and contains no privileged API calls, achieving a 0% detection rate across malware scanners (e.g., VirusTotal). Our findings reveal a fundamental architectural flaw in current agent-OS integration and provide critical insights for the secure design of future agent systems. To access experimental logs and demonstration videos, please contact yi_qian@smail.nju.edu.cn.
+ oai:arXiv.org:2601.12349v1
+ cs.CR
+ cs.AI
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yi Qian, Kunwei Qian, Xingbang He, Ligeng Chen, Jikang Zhang, Tiantai Zhang, Haiyang Wei, Linzhang Wang, Hao Wu, Bing Mao
+
+
+ Analyzing Collection Strategies: A Computational Perspective on the Coupon Collector Problem
+ https://arxiv.org/abs/2601.12351
+ arXiv:2601.12351v1 Announce Type: new
+Abstract: The Coupon Collector Problem (CCP) is a well-known combinatorial problem that seeks to estimate the number of random draws required to complete a collection of $n$ distinct coupon types. Various generalizations of this problem have been applied in numerous engineering domains. However, practical applications are often hindered by the computational challenges associated with deriving numerical results for moments and distributions. In this work, we present three algorithms for solving the most general form of the CCP, where coupons are collected under any arbitrary drawing probability, with the objective of obtaining $t$ copies of a subset of $k$ coupons from a total of $n$. The first algorithm provides the base model to compute the expectation, variance, and the second moment of the collection process. The second algorithm utilizes the construction of the base model and computes the same values in polynomial time with respect to $n$ under the uniform drawing distribution, and the third algorithm extends to any general drawing distribution. All algorithms leverage Markov models specifically designed to address computational challenges, ensuring exact computation of the expectation and variance of the collection process. Their implementation uses a dynamic programming approach that follows from the Markov models framework, and their time complexity is analyzed accordingly.
+ oai:arXiv.org:2601.12351v1
+ cs.DS
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hadas Abraham, Ido Feldman, Eitan Yaakobi
+
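For context on the CCP item above: the classic special case ($t=1$, $k=n$, uniform draws) has closed-form expectation $E[T] = n \sum_{i=1}^{n} 1/i = n H_n$, which follows from summing the expected waiting times $n/(n-i)$ for the $(i{+}1)$-th new coupon. A small exact computation of that special case (the paper's algorithms generalize to arbitrary $t$, $k$, and non-uniform probabilities):

```python
from fractions import Fraction

def expected_draws_uniform(n):
    """Expected draws to collect all n coupon types (t=1, k=n, uniform).

    With i types already collected, each draw finds a new type with
    probability (n - i)/n, so the wait is geometric with mean n/(n - i).
    Summing over i = 0..n-1 gives E[T] = n * H_n, computed here exactly.
    """
    return sum(Fraction(n, n - i) for i in range(n))
```

For example, `expected_draws_uniform(2)` is 3: one draw for the first coupon, then on average two more for the second.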
+
+ From Shallow Waters to Mariana Trench: A Survey of Bio-inspired Underwater Soft Robots
+ https://arxiv.org/abs/2601.12353
+ arXiv:2601.12353v1 Announce Type: new
+Abstract: Exploring the ocean environment holds profound significance in areas such as resource exploration and ecological protection. Underwater robots struggle with extreme water pressure and often cause noise and damage to the underwater ecosystem, while bio-inspired soft robots draw inspiration from aquatic creatures to address these challenges. These bio-inspired approaches enable robots to withstand high water pressure, minimize drag, operate with efficient manipulation and sensing systems, and interact with the environment in an eco-friendly manner. Consequently, bio-inspired soft robots have emerged as a promising field for ocean exploration. This paper reviews recent advancements in underwater bio-inspired soft robots; analyses their design considerations with respect to desired functions, bio-inspirations, ambient pressure, temperature, light, and biodiversity; and finally explores the progression from bio-inspired principles to practical applications in the field and suggests potential directions for developing the next generation of underwater soft robots.
+ oai:arXiv.org:2601.12353v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jie Wang, Peng Du, Yiyuan Zhang, Zhexin Xie, Cecilia Laschi
+
+
+ LB-MCTS: Synergizing Large Language Models and Bayesian Optimization for Efficient CASH
+ https://arxiv.org/abs/2601.12355
+ arXiv:2601.12355v1 Announce Type: new
+Abstract: To lower the expertise barrier in machine learning, the AutoML community has focused on the CASH problem, a fundamental challenge that automates the process of algorithm selection and hyperparameter tuning. While traditional methods like Bayesian Optimization (BO) struggle with cold-start issues, Large Language Models (LLMs) can mitigate these via semantic priors. However, existing LLM-based optimizers generalize poorly to the high-dimensional, structured CASH space. We propose LB-MCTS, a framework synergizing LLMs and BO within a Monte Carlo Tree Search structure. It maximizes LLM reasoning with a Selective Tuning Memory (STM) and an explicit exploration-exploitation trade-off. It combines the strengths of both paradigms by dynamically shifting from LLM-driven to BO-driven proposals as data accumulates. Experiments on 104 AMLB datasets demonstrate the superiority of LB-MCTS over competitive baselines.
+ oai:arXiv.org:2601.12355v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Beicheng Xu, Weitong Qian, Lingching Tung, Yupeng Lu, Bin Cui
+
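The LB-MCTS abstract above names an explicit exploration-exploitation trade-off inside a Monte Carlo Tree Search. The standard rule for that trade-off is UCT/UCB1 child selection, sketched below as an illustration only (the abstract does not give the paper's actual node-scoring rule, and the constant `c` is a conventional default):

```python
import math

def ucb1_select(values, visits, total_visits, c=1.4):
    """Pick the child index maximizing mean value + c*sqrt(ln(N)/n_i).

    Standard UCT rule: the first term exploits high observed reward,
    the second explores rarely visited children. Illustrative sketch,
    not the LB-MCTS implementation.
    """
    best, best_score = None, -float("inf")
    for i, (v, n) in enumerate(zip(values, visits)):
        if n == 0:
            return i  # always try unvisited children first
        score = v / n + c * math.sqrt(math.log(total_visits) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```

With two children of equal mean reward, the rule prefers the one visited less, which is the behavior "explicit exploration-exploitation trade-off" refers to.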
+
+ SimpleMatch: A Simple and Strong Baseline for Semantic Correspondence
+ https://arxiv.org/abs/2601.12357
+ arXiv:2601.12357v1 Announce Type: new
+Abstract: Recent advances in semantic correspondence have been largely driven by the use of pre-trained large-scale models. However, a limitation of these approaches is their dependence on high-resolution input images to achieve optimal performance, which results in considerable computational overhead. In this work, we address a fundamental limitation in current methods: the irreversible fusion of adjacent keypoint features caused by deep downsampling operations. This issue is triggered when semantically distinct keypoints fall within the same downsampled receptive field (e.g., 16x16 patches). To address this issue, we present SimpleMatch, a simple yet effective framework for semantic correspondence that delivers strong performance even at low resolutions. We propose a lightweight upsample decoder that progressively recovers spatial detail by upsampling deep features to 1/4 resolution, and a multi-scale supervised loss that ensures the upsampled features retain discriminative features across different spatial scales. In addition, we introduce sparse matching and window-based localization to optimize training memory usage and reduce it by 51%. At a resolution of 252x252 (3.3x smaller than current SOTA methods), SimpleMatch achieves superior performance with 84.1% PCK@0.1 on the SPair-71k benchmark. We believe this framework provides a practical and efficient baseline for future research in semantic correspondence. Code is available at: https://github.com/hailong23-jin/SimpleMatch.
+ oai:arXiv.org:2601.12357v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hailing Jin, Huiying Li
+
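The SimpleMatch result above is stated as PCK@0.1 on SPair-71k. As a reference point, the metric itself is simple: a transferred keypoint counts as correct when its error is within $\alpha$ times a normalizing object size (for SPair-71k, typically the larger side of the target bounding box). A minimal sketch, assuming scalar bounding-box normalization:

```python
import math

def pck(preds, gts, bbox_size, alpha=0.1):
    """Percentage of Correct Keypoints (PCK@alpha).

    A keypoint is correct when its Euclidean error is at most
    alpha * bbox_size, where bbox_size is assumed here to be the
    larger side of the target object's bounding box.
    """
    thresh = alpha * bbox_size
    correct = sum(
        1 for (px, py), (gx, gy) in zip(preds, gts)
        if math.hypot(px - gx, py - gy) <= thresh
    )
    return correct / len(preds)
```

So "84.1% PCK@0.1" means 84.1% of transferred keypoints land within 10% of the object size of their ground-truth locations.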
+
+ From Prompts to Pavement: LMMs-based Agentic Behavior-Tree Generation Framework for Autonomous Vehicles
+ https://arxiv.org/abs/2601.12358
+ arXiv:2601.12358v1 Announce Type: new
+Abstract: Autonomous vehicles (AVs) require adaptive behavior planners to navigate unpredictable, real-world environments safely. Traditional behavior trees (BTs) offer structured decision logic but are inherently static and demand labor-intensive manual tuning, limiting their applicability at SAE Level 5 autonomy. This paper presents an agentic framework that leverages large language models (LLMs) and multi-modal vision models (LVMs) to generate and adapt BTs on the fly. A specialized Descriptor agent applies chain-of-symbols prompting to assess scene criticality, a Planner agent constructs high-level sub-goals via in-context learning, and a Generator agent synthesizes executable BT sub-trees in XML format. Integrated into a CARLA+Nav2 simulation, our system triggers only upon baseline BT failure, demonstrating successful navigation around unexpected obstacles (e.g., street blockage) with no human intervention. Compared to a static BT baseline, this approach is a proof-of-concept that extends to diverse driving scenarios.
+ oai:arXiv.org:2601.12358v1
+ cs.CV
+ cs.AI
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Omar Y. Goba, Ahmed Y. Gado, Catherine M. Elias, Ahmed Hussein
+
+
+ Zero-Shot Embedding Drift Detection: A Lightweight Defense Against Prompt Injections in LLMs
+ https://arxiv.org/abs/2601.12359
+ arXiv:2601.12359v1 Announce Type: new
+Abstract: Prompt injection attacks have become an increasing vulnerability for LLM applications, where adversarial prompts exploit indirect input channels such as emails or user-generated content to circumvent alignment safeguards and induce harmful or unintended outputs. Despite advances in alignment, even state-of-the-art LLMs remain broadly vulnerable to adversarial prompts, underscoring the urgent need for robust, productive, and generalizable detection mechanisms beyond inefficient, model-specific patches. In this work, we propose Zero-Shot Embedding Drift Detection (ZEDD), a lightweight, low-engineering-overhead framework that identifies both direct and indirect prompt injection attempts by quantifying semantic shifts in embedding space between benign and suspect inputs. ZEDD operates without requiring access to model internals, prior knowledge of attack types, or task-specific retraining, enabling efficient zero-shot deployment across diverse LLM architectures. Our method uses adversarial-clean prompt pairs and measures embedding drift via cosine similarity to capture subtle adversarial manipulations inherent to real-world injection attacks. To ensure robust evaluation, we assemble and re-annotate the comprehensive LLMail-Inject dataset spanning five injection categories derived from publicly available sources. Extensive experiments demonstrate that embedding drift is a robust and transferable signal, outperforming traditional methods in detection accuracy and operational efficiency. With greater than 93% accuracy in classifying prompt injections across model architectures like Llama 3, Qwen 2, and Mistral and a false positive rate of <3%, our approach offers a lightweight, scalable defense layer that integrates into existing LLM pipelines, addressing a critical gap in securing LLM-powered systems to withstand adaptive adversarial threats.
+ oai:arXiv.org:2601.12359v1
+ cs.CR
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Anirudh Sekar, Mrinal Agarwal, Rachel Sharma, Akitsugu Tanaka, Jasmine Zhang, Arjun Damerla, Kevin Zhu
+
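The ZEDD abstract above quantifies prompt-injection risk as embedding drift measured by cosine similarity between benign and suspect inputs. A minimal sketch of that core signal, with a hypothetical decision threshold (the real system operates on LLM embedding vectors and calibrates the threshold empirically):

```python
import math

def embedding_drift(ref, suspect):
    """Drift score = 1 - cosine similarity.

    ref is a benign reference embedding, suspect the embedding of the
    input under test; higher drift suggests semantic manipulation.
    """
    dot = sum(a * b for a, b in zip(ref, suspect))
    nr = math.sqrt(sum(a * a for a in ref))
    ns = math.sqrt(sum(b * b for b in suspect))
    return 1.0 - dot / (nr * ns)

def flag_injection(ref, suspect, threshold=0.3):
    # threshold is a hypothetical calibration value, not from the paper
    return embedding_drift(ref, suspect) > threshold
```

An identical pair drifts by 0 and an orthogonal pair by 1, so the detector reduces to a one-dimensional threshold test that needs no model internals, matching the zero-shot framing in the abstract.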
+
+ Discovering 100+ Compiler Defects in 72 Hours via LLM-Driven Semantic Logic Recomposition
+ https://arxiv.org/abs/2601.12360
+ arXiv:2601.12360v1 Announce Type: new
+Abstract: Compilers constitute the foundational root-of-trust in software supply chains; however, their immense complexity inevitably conceals critical defects. Recent research has attempted to leverage historical bugs to design new mutation operators or fine-tune models to increase program diversity for compiler fuzzing. We observe, however, that bugs manifest primarily based on the semantics of input programs rather than their syntax. Unfortunately, current approaches, whether relying on syntactic mutation or general Large Language Model (LLM) fine-tuning, struggle to preserve the specific semantics found in the logic of bug-triggering programs. Consequently, these critical semantic triggers are often lost, limiting the diversity of generated programs.
+ To explicitly reuse such semantics, we propose FeatureFuzz, a compiler fuzzer that combines features to generate programs. We define a feature as a decoupled primitive that encapsulates a natural language description of a bug-prone invariant, such as an out-of-bounds array access, alongside a concrete code witness of its realization. FeatureFuzz operates via a three-stage workflow: it first extracts features from historical bug reports, synthesizes coherent groups of features, and finally instantiates these groups into valid programs for compiler fuzzing.
+ We evaluated FeatureFuzz on GCC and LLVM. Over 24-hour campaigns, FeatureFuzz uncovered 167 unique crashes, which is 2.78x more than the second-best fuzzer. Furthermore, through a 72-hour fuzzing campaign, FeatureFuzz identified 106 bugs in GCC and LLVM, 76 of which have already been confirmed by compiler developers, validating the approach's ability to stress-test modern compilers effectively.
+ oai:arXiv.org:2601.12360v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xinabang He, Yuanwei Chen, Hao Wu, Jikang Zhang, Zicheng Wang, Ligeng Chen, Junjie Peng, Haiyang Wei, Yi Qian, Tiantai Zhang, Linzhang Wang, Bing Mao
+
+
+ Complexity of Model Checking Second-Order Hyperproperties on Finite Structures
+ https://arxiv.org/abs/2601.12361
+ arXiv:2601.12361v1 Announce Type: new
+Abstract: We study the model checking problem of Hyper2LTL over finite structures. Hyper2LTL is a second-order hyperlogic that extends the well-studied logic HyperLTL by adding quantification over sets of traces, to express complex hyperproperties such as epistemic and asynchronous hyperproperties. While Hyper2LTL is very expressive, its expressiveness comes at a price, and its general model checking problem is undecidable. This motivates us to study the model checking problem for Hyper2LTL over finite structures -- tree-shaped or acyclic graphs, which are particularly useful for monitoring purposes. We show that Hyper2LTL model checking is decidable on finite structures. It is in PSPACE (in the size of the model) on tree-shaped models and in EXPSPACE on acyclic models. Additionally, we show that for an expressive fragment of Hyper2LTL, namely the Fixpoint Hyper2LTLfp fragment, the model checking problem is much simpler and is P-complete on tree-shaped models and EXP-complete on acyclic models. Last, we present some preliminary results that take into account not only the size of the model, but also the formula size.
+ oai:arXiv.org:2601.12361v1
+ cs.LO
+ cs.FL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bernd Finkbeiner, Hadar Frenkel, Tim Rohde
+
+
+ Machine Learning-Based Framework for Real Time Detection and Early Prediction of Control Valve Stiction in Industrial Control Systems
+ https://arxiv.org/abs/2601.12362
+ arXiv:2601.12362v1 Announce Type: new
+Abstract: Control valve stiction, friction that prevents smooth valve movement, is a common fault in industrial process systems that causes instability, equipment wear, and higher maintenance costs. Many plants still operate with conventional valves that lack real-time monitoring, making early prediction challenging. This study presents a machine learning (ML) framework for detecting and predicting stiction using only routinely collected process signals: the controller output (OP) from control systems and the process variable (PV), such as flow rate. Three deep learning models were developed and compared: a Convolutional Neural Network (CNN), a hybrid CNN with a Support Vector Machine (CNN-SVM), and a Long Short-Term Memory (LSTM) network. To train these models, a data-driven labeling method based on slope ratio analysis was applied to a real oil and gas refinery dataset. The LSTM model achieved the highest accuracy and was able to predict stiction up to four hours in advance. To the best of the authors' knowledge, this is the first study to demonstrate ML-based early prediction of control valve stiction from real industry data. The proposed framework can be integrated into existing control systems to support predictive maintenance, reduce downtime, and avoid unnecessary hardware replacement.
+ oai:arXiv.org:2601.12362v1
+ cs.LG
+ physics.ins-det
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Natthapong Promsricha, Chotirawee Chatpattanasiri, Nuttavut Kerdgongsup, Stavroula Balabani
+
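The stiction abstract above mentions labeling windows via "slope ratio analysis" of the OP and PV signals. One plausible reading, offered purely as a hypothetical sketch (the paper's actual normalization and thresholds are not given in the abstract): a window looks sticky when the PV barely moves while the OP ramps.

```python
def slope(xs):
    """Least-squares slope of a uniformly sampled signal."""
    n = len(xs)
    t_mean = (n - 1) / 2.0
    x_mean = sum(xs) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(xs))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def label_stiction(op_window, pv_window, ratio_thresh=0.2):
    """Label a window as stiction when PV barely responds to a ramping OP.

    Hypothetical interpretation of the slope-ratio labeling in the
    abstract; ratio_thresh is an illustrative value, not from the paper.
    """
    op_slope = slope(op_window)
    if abs(op_slope) < 1e-9:
        return False  # OP not moving: no basis for a stiction label
    return abs(slope(pv_window)) / abs(op_slope) < ratio_thresh
```

Under this reading, the labels then supervise the CNN/CNN-SVM/LSTM models on raw OP and PV windows.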
+
+ DepthCropSeg++: Scaling a Crop Segmentation Foundation Model With Depth-Labeled Data
+ https://arxiv.org/abs/2601.12366
+ arXiv:2601.12366v1 Announce Type: new
+Abstract: We present DepthCropSeg++, a foundation model for crop segmentation capable of segmenting different crop species in open in-field environments. Crop segmentation is a fundamental task for modern agriculture, closely related to many downstream tasks such as plant phenotyping, density estimation, and weed control. In the era of foundation models, a number of generic large language and vision models have been developed. These models have demonstrated remarkable real-world generalization due to significant model capacity and large-scale datasets. However, current crop segmentation models mostly learn from limited data due to expensive pixel-level labelling costs, often performing well only for specific crop types or controlled environments. In this work, we follow the vein of our previous work DepthCropSeg, an almost unsupervised approach to crop segmentation, to scale up a cross-species and cross-scene crop segmentation dataset with 28,406 images across 30+ species and 15 environmental conditions. We also build upon the state-of-the-art ViT-Adapter semantic segmentation architecture, enhance it with dynamic upsampling for improved detail awareness, and train the model with a two-stage self-training pipeline. To systematically validate model performance, we conduct comprehensive experiments to justify its effectiveness and generalization capabilities across multiple crop datasets. Results demonstrate that DepthCropSeg++ achieves 93.11% mIoU on a comprehensive testing set, outperforming both supervised baselines and general-purpose vision foundation models like the Segment Anything Model (SAM) by significant margins (+0.36% and +48.57%, respectively). The model particularly excels in challenging scenarios including night-time environments (86.90% mIoU), high-density canopies (90.09% mIoU), and unseen crop varieties (90.09% mIoU), indicating a new state of the art for crop segmentation.
+ oai:arXiv.org:2601.12366v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/JSTSP.2026.3654362
+ IEEE Journal of Selected Topics in Signal Processing, 2026
+ Jiafei Zhang, Songliang Cao, Binghui Xu, Yanan Li, Weiwei Jia, Tingting Wu, Hao Lu, Weijuan Hu, Zhiguo Han
+
+
+ User-to-Vehicle Interaction in Smart Mobility: The GO-DRiVeS Autonomous Ride-Sharing Application
+ https://arxiv.org/abs/2601.12367
+ arXiv:2601.12367v1 Announce Type: new
+Abstract: This paper introduces GO-DRiVeS, an on-demand ride-sharing and ride-requesting mobile application tailored to spare university students and staff the long walks that are time-consuming and tiring, especially on hot days or when carrying heavy items. GO-DRiVeS was developed following the Agile methodology for its flexibility, using a mobile-application system architecture and a client-server architecture. It was implemented with React Native (Expo) for the frontend, Node.js and Express for the backend, and MongoDB as the database, based on a detailed analysis of existing transportation applications that compared their frameworks and identified their essential functionalities. GO-DRiVeS supports core features such as user registration, ride requesting, and real-time tracking, and handles multiple simultaneous requests on a first-come, first-served basis. The application was developed around these features, and multiple experiments demonstrated stable behavior in handling requests, as presented in the Methodology and Results chapters.
+ oai:arXiv.org:2601.12367v1
+ cs.HC
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Hana E. Elmalah, Catherine M. Elias
+
+
+ Can Deep Research Agents Find and Organize? Evaluating the Synthesis Gap with Expert Taxonomies
+ https://arxiv.org/abs/2601.12369
+ arXiv:2601.12369v1 Announce Type: new
+Abstract: Deep Research Agents are increasingly used for automated survey generation. However, whether they can write surveys like human experts remains unclear. Existing benchmarks focus on fluency or citation accuracy, but none evaluates the core capabilities: retrieving essential papers and organizing them into coherent knowledge structures. We introduce TaxoBench, a diagnostic benchmark derived from 72 highly-cited computer science surveys. We manually extract expert-authored taxonomy trees containing 3,815 precisely categorized citations as ground truth. Our benchmark supports two evaluation modes: Deep Research mode tests end-to-end retrieval and organization given only a topic, while Bottom-Up mode isolates structuring capability by providing the exact papers human experts used. We evaluate 7 leading Deep Research agents and 12 frontier LLMs. Results reveal a dual bottleneck: the best agent recalls only 20.9% of expert-selected papers, and even with perfect input, the best model achieves only 0.31 ARI in organization. Current deep research agents remain far from expert-level survey writing. Our benchmark is publicly available at https://github.com/KongLongGeFDU/TaxoBench.
+ oai:arXiv.org:2601.12369v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ming Zhang, Jiabao Zhuang, Wenqing Jing, Ziyu Kong, Jingyi Deng, Yujiong Shen, Kexin Tan, Yuhang Zhao, Ning Luo, Renzhe Zheng, Jiahui Lin, Mingqi Wu, Long Ma, Yi Zou, Shihan Dou, Tao Gui, Qi Zhang, Xuanjing Huang
+
+
+ CD-TWINSAFE: A ROS-enabled Digital Twin for Scene Understanding and Safety Emerging V2I Technology
+ https://arxiv.org/abs/2601.12373
+ arXiv:2601.12373v1 Announce Type: new
+Abstract: In this paper, CD-TWINSAFE is introduced, a V2I-based digital twin for autonomous vehicles. The proposed architecture is composed of two stacks running simultaneously: an on-board driving stack that includes a stereo camera for scene understanding, and a digital twin stack that runs an Unreal Engine 5 replica of the scene viewed by the camera and returns safety alerts to the cockpit. The on-board stack is implemented on the vehicle side and includes two main autonomous modules: localization and perception. The position and orientation of the ego vehicle are obtained using on-board sensors. The perception module processes 20-fps images from the stereo camera and understands the scene through two complementary pipelines, which perform object detection and feature extraction, including object velocity, yaw, and the safety metrics time-to-collision and time-headway. The data collected from the driving stack are sent to the infrastructure side through the ROS-enabled architecture as custom ROS2 messages over UDP links that ride a 4G modem for V2I communication. The environment is monitored via the digital twin through the shared messages, which update the information of the spawned ego vehicle and detected objects based on the real-time localization and perception data. Several tests with different driving scenarios were conducted to confirm the validity and real-time response of the proposed architecture.
+ oai:arXiv.org:2601.12373v1
+ cs.CV
+ cs.HC
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Amro Khaled, Farah Khaled, Omar Riad, Catherine M. Elias
+
+
+ A Scalable Entity-Based Framework for Auditing Bias in LLMs
+ https://arxiv.org/abs/2601.12374
+ arXiv:2601.12374v1 Announce Type: new
+Abstract: Existing approaches to bias evaluation in large language models (LLMs) trade ecological validity for statistical control, relying on artificial prompts that poorly reflect real-world use, or on naturalistic tasks that lack scale and rigor. We introduce a scalable bias-auditing framework using named entities as probes to measure structural disparities in model behavior. We show that synthetic data reliably reproduces bias patterns observed in natural text, enabling large-scale analysis. Using this approach, we conduct the largest bias audit to date, comprising 1.9 billion data points across multiple entity types, tasks, languages, models, and prompting strategies. Our results reveal systematic biases: models penalize right-wing politicians, favor left-wing politicians, prefer Western and wealthy nations over the Global South, favor Western companies, and penalize firms in the defense and pharmaceutical sectors. While instruction tuning reduces bias, increasing model scale amplifies it, and prompting in Chinese or Russian does not attenuate Western-aligned preferences. These results indicate that LLMs should undergo rigorous auditing before deployment in high-stakes applications.
+ oai:arXiv.org:2601.12374v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Akram Elbouanani, Aboubacar Tuo, Adrian Popescu
+
+
+ LiQSS: Post-Transformer Linear Quantum-Inspired State-Space Tensor Networks for Real-Time 6G
+ https://arxiv.org/abs/2601.12375
+ arXiv:2601.12375v1 Announce Type: new
+Abstract: Proactive and agentic control in Sixth-Generation (6G) Open Radio Access Networks (O-RAN) requires control-grade prediction under stringent Near-Real-Time (Near-RT) latency and computational constraints. While Transformer-based models are effective for sequence modeling, their quadratic complexity limits scalability in Near-RT RAN Intelligent Controller (RIC) analytics. This paper investigates a post-Transformer design paradigm for efficient radio telemetry forecasting. We propose a quantum-inspired many-body state-space tensor network that replaces self-attention with stable structured state-space dynamics kernels, enabling linear-time sequence modeling. Tensor-network factorizations in the form of Tensor Train (TT) / Matrix Product State (MPS) representations are employed to reduce parameterization and data movement in both input projections and prediction heads, while lightweight channel gating and mixing layers capture non-stationary cross-Key Performance Indicator (KPI) dependencies. The proposed model is instantiated as an agentic perceive-predict xApp and evaluated on a bespoke O-RAN KPI time-series dataset comprising 59,441 sliding windows across 13 KPIs, using Reference Signal Received Power (RSRP) forecasting as a representative use case. Our proposed Linear Quantum-Inspired State-Space (LiQSS) model is 10.8x-15.8x smaller and approximately 1.4x faster than prior structured state-space baselines. Relative to Transformer-based models, LiQSS achieves up to a 155x reduction in parameter count and up to 2.74x faster inference, without sacrificing forecasting accuracy.
+ oai:arXiv.org:2601.12375v1
+ cs.NI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Farhad Rezazadeh, Hatim Chergui, Mehdi Bennis, Houbing Song, Lingjia Liu, Dusit Niyato, Merouane Debbah
+
+
+ LR-DWM: Efficient Watermarking for Diffusion Language Models
+ https://arxiv.org/abs/2601.12376
+ arXiv:2601.12376v1 Announce Type: new
+Abstract: Watermarking (WM) is a critical mechanism for detecting and attributing AI-generated content. Current WM methods for Large Language Models (LLMs) are predominantly tailored for autoregressive (AR) models: They rely on tokens being generated sequentially, and embed stable signals within the generated sequence based on the previously sampled text. Diffusion Language Models (DLMs) generate text via non-sequential iterative denoising, which requires significant modification to use WM methods designed for AR models. Recent work proposed to watermark DLMs by inverting the process when needed, but suffers significant computational or memory overhead. We introduce Left-Right Diffusion Watermarking (LR-DWM), a scheme that biases the generated token based on both left and right neighbors, when they are available. LR-DWM incurs minimal runtime and memory overhead, remaining close to the non-watermarked baseline DLM while enabling reliable statistical detection under standard evaluation settings. Our results demonstrate that DLMs can be watermarked efficiently, achieving high detectability with negligible computational and memory overhead.
+ oai:arXiv.org:2601.12376v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ofek Raban, Ethan Fetaya, Gal Chechik
+
+
+ R-VoxelMap: Accurate Voxel Mapping with Recursive Plane Fitting for Online LiDAR Odometry
+ https://arxiv.org/abs/2601.12377
+ arXiv:2601.12377v1 Announce Type: new
+Abstract: This paper proposes R-VoxelMap, a novel voxel mapping method that constructs accurate voxel maps using a geometry-driven recursive plane fitting strategy to enhance the localization accuracy of online LiDAR odometry. VoxelMap and its variants typically fit and check planes using all points in a voxel, which may lead to plane parameter deviation caused by outliers, over segmentation of large planes, and incorrect merging across different physical planes. To address these issues, R-VoxelMap utilizes a geometry-driven recursive construction strategy based on an outlier detect-and-reuse pipeline. Specifically, for each voxel, accurate planes are first fitted while separating outliers using random sample consensus (RANSAC). The remaining outliers are then propagated to deeper octree levels for recursive processing, ensuring a detailed representation of the environment. In addition, a point distribution-based validity check algorithm is devised to prevent erroneous plane merging. Extensive experiments on diverse open-source LiDAR(-inertial) simultaneous localization and mapping (SLAM) datasets validate that our method achieves higher accuracy than other state-of-the-art approaches, with comparable efficiency and memory usage. Code will be available on GitHub.
+ oai:arXiv.org:2601.12377v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Haobo Xi, Shiyong Zhang, Qianli Dong, Yunze Tong, Songyang Wu, Jing Yuan, Xuebo Zhang
+
+
+ Utilizing the Score of Data Distribution for Hyperspectral Anomaly Detection
+ https://arxiv.org/abs/2601.12379
+ arXiv:2601.12379v1 Announce Type: new
+Abstract: Hyperspectral images (HSIs) are a type of image that contains abundant spectral information. As a type of real-world data, the high-dimensional spectra in hyperspectral images are actually determined by only a few factors, such as chemical composition and illumination. Thus, spectra in hyperspectral images are highly likely to satisfy the manifold hypothesis. Based on the hyperspectral manifold hypothesis, we propose a novel hyperspectral anomaly detection method (named ScoreAD) that leverages the time-dependent gradient field of the data distribution (i.e., the score), as learned by a score-based generative model (SGM). Our method first trains the SGM on the entire set of spectra from the hyperspectral image. At test time, each spectrum is passed through a perturbation kernel, and the resulting perturbed spectrum is fed into the trained SGM to obtain the estimated score. The manifold hypothesis of HSIs posits that background spectra reside on one or more low-dimensional manifolds. Conversely, anomalous spectra, owing to their unique spectral signatures, are considered outliers that do not conform to the background manifold. Based on this fundamental discrepancy in their manifold distributions, we leverage a generative SGM to achieve hyperspectral anomaly detection. Experiments on four hyperspectral datasets demonstrate the effectiveness of the proposed method. The code is available at https://github.com/jiahuisheng/ScoreAD.
+ oai:arXiv.org:2601.12379v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiahui Sheng, Yidan Shi, Shu Xiang, Xiaorun Li, Shuhan Chen
+
+
+ Statistical-Neural Interaction Networks for Interpretable Mixed-Type Data Imputation
+ https://arxiv.org/abs/2601.12380
+ arXiv:2601.12380v1 Announce Type: new
+Abstract: Real-world tabular databases routinely combine continuous measurements and categorical records, yet missing entries are pervasive and can distort downstream analysis. We propose Statistical-Neural Interaction (SNI), an interpretable mixed-type imputation framework that couples correlation-derived statistical priors with neural feature attention through a Controllable-Prior Feature Attention (CPFA) module. CPFA learns head-wise prior-strength coefficients $\{\lambda_h\}$ that softly regularize attention toward the prior while allowing data-driven deviations when nonlinear patterns appear to be present in the data. Beyond imputation, SNI aggregates attention maps into a directed feature-dependency matrix that summarizes which variables the imputer relied on, without requiring post-hoc explainers. We evaluate SNI against six baselines (Mean/Mode, MICE, KNN, MissForest, GAIN, MIWAE) on six datasets spanning ICU monitoring, population surveys, socio-economic statistics, and engineering applications. Under MCAR/strict-MAR at 30\% missingness, SNI is generally competitive on continuous metrics but is often outperformed by accuracy-first baselines (MissForest, MIWAE) on categorical variables; in return, it provides intrinsic dependency diagnostics and explicit statistical-neural trade-off parameters. We additionally report MNAR stress tests (with a mask-aware variant) and discuss computational cost, limitations -- particularly for severely imbalanced categorical targets -- and deployment scenarios where interpretability may justify the trade-off.
+ oai:arXiv.org:2601.12380v1
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ou Deng, Shoji Nishimura, Atsushi Ogihara, Qun Jin
+
+
+ A Hierarchical Benchmark of Foundation Models for Dermatology
+ https://arxiv.org/abs/2601.12382
+ arXiv:2601.12382v1 Announce Type: new
+Abstract: Foundation models have transformed medical image analysis by providing robust feature representations that reduce the need for large-scale task-specific training. However, current benchmarks in dermatology often reduce the complex diagnostic taxonomy to flat, binary classification tasks, such as distinguishing melanoma from benign nevi. This oversimplification obscures a model's ability to perform fine-grained differential diagnoses, which is critical for clinical workflow integration. This study evaluates the utility of embeddings derived from ten foundation models, spanning general computer vision, general medical imaging, and dermatology-specific domains, for hierarchical skin lesion classification. Using the DERM12345 dataset, which comprises 40 lesion subclasses, we calculated frozen embeddings and trained lightweight adapter models using a five-fold cross-validation. We introduce a hierarchical evaluation framework that assesses performance across four levels of clinical granularity: 40 Subclasses, 15 Main Classes, 2 and 4 Superclasses, and Binary Malignancy. Our results reveal a "granularity gap" in model capabilities: MedImageInsights achieved the strongest overall performance (97.52% weighted F1-Score on Binary Malignancy detection) but declined to 65.50% on fine-grained 40-class subtype classification. Conversely, MedSigLip (69.79%) and dermatology-specific models (Derm Foundation and MONET) excelled at fine-grained 40-class subtype discrimination while achieving lower overall performance than MedImageInsights on broader classification tasks. Our findings suggest that while general medical foundation models are highly effective for high-level screening, specialized modeling strategies are necessary for the granular distinctions required in diagnostic support systems.
+ oai:arXiv.org:2601.12382v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Furkan Yuceyalcin, Abdurrahim Yilmaz, Burak Temelkuran
+
+
+ Context-Free Grammar Inference for Complex Programming Languages in Black Box Settings
+ https://arxiv.org/abs/2601.12385
+ arXiv:2601.12385v1 Announce Type: new
+Abstract: Grammar inference for complex programming languages remains a significant challenge, as existing approaches fail to scale to real world datasets within practical time constraints. In our experiments, none of the state-of-the-art tools, including Arvada, Treevada and Kedavra were able to infer grammars for complex languages such as C, C++, and Java within 48 hours. Arvada and Treevada perform grammar inference directly on full-length input examples, which proves inefficient for large files commonly found in such languages. While Kedavra introduces data decomposition to create shorter examples for grammar inference, its lexical analysis still relies on the original inputs. Additionally, its strict no-overgeneralization constraint limits the construction of complex grammars.
+ To overcome these limitations, we propose Crucio, which builds a decomposition forest to extract short examples for lexical and grammar inference via a distributional matrix. Experimental results show that Crucio is the only method capable of successfully inferring grammars for complex programming languages (where the number of nonterminals is up to 23x greater than in prior benchmarks) within reasonable time limits. On the prior simple benchmark, Crucio achieves an average recall improvement of 1.37x and 1.19x over Treevada and Kedavra, respectively, and improves F1 scores by 1.21x and 1.13x.
+ oai:arXiv.org:2601.12385v1
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Feifei Li, Xiao Chen, Xiaoyu Sun, Xi Xiao, Shaohua Wang, Yong Ding, Sheng Wen, Qing Li
+
+
+ NADIR: Differential Attention Flow for Non-Autoregressive Transliteration in Indic Languages
+ https://arxiv.org/abs/2601.12389
+ arXiv:2601.12389v1 Announce Type: new
+Abstract: In this work, we argue that not all sequence-to-sequence tasks require the strong inductive biases of autoregressive (AR) models. Tasks like multilingual transliteration, code refactoring, grammatical correction or text normalization often rely on local dependencies where the full modeling capacity of AR models can be overkill, creating a trade-off between their high accuracy and high inference latency. While non-autoregressive (NAR) models offer speed, they typically suffer from hallucinations and poor length control. To explore this trade-off, we focus on the multilingual transliteration task in Indic languages and introduce NADIR, a novel NAR architecture designed to strike a balance between speed and accuracy. NADIR integrates a Differential Transformer and a Mixture-of-Experts mechanism, enabling it to robustly model complex character mappings without sequential dependencies. NADIR achieves over a 13x speed-up compared to the state-of-the-art AR baseline. It maintains a competitive mean Character Error Rate of 15.78%, compared to 14.44% for the AR model and 21.88% for a standard NAR equivalent. Importantly, NADIR reduces Repetition errors by 49.53%, Substitution errors by 24.45%, Omission errors by 32.92%, and Insertion errors by 16.87%. This work provides a practical blueprint for building fast and reliable NAR systems, effectively bridging the gap between AR accuracy and the demands of real-time, large-scale deployment.
+ oai:arXiv.org:2601.12389v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lakshya Tomar, Vinayak Abrol, Puneet Agarwal
+
+
+ Auditing Meta and TikTok Research API Data Access under Article 40(12) of the Digital Services Act
+ https://arxiv.org/abs/2601.12390
+ arXiv:2601.12390v1 Announce Type: new
+Abstract: Article 40(12) of the Digital Services Act (DSA) requires Very Large Online Platforms (VLOPs) to provide vetted researchers with access to publicly accessible data. While prior work has identified shortcomings of platform-provided data access mechanisms, existing research has not quantitatively assessed data quality and completeness in Research APIs across platforms, nor systematically mapped how current access provisions fall short. This paper presents a systematic audit of research access modalities by comparing data obtained through platform Research APIs with data collected about the same platforms' user-visible public information environment (PIE). Focusing on two major platform APIs, the TikTok Research API and the Meta Content Library, we reconstruct full information feeds for two controlled sockpuppet accounts during two election periods and benchmark these against the data retrievable for the same posts through the corresponding Research APIs. Our findings show systematic data loss through three classes of platform-imposed mechanisms: scope narrowing, metadata stripping, and operational restrictions. Together, these mechanisms implement overlapping filters that exclude large portions of the platform PIE (up to approximately 50 percent), strip essential contextual metadata (up to approximately 83 percent), and impose severe technical constraints for researchers (down to approximately 1000 requests per day). Viewed through a data quality lens, these filters primarily undermine completeness, resulting in a structurally biased representation of platform activity. We conclude that, in their current form, the Meta and TikTok Research APIs fall short of supporting meaningful, independent auditing of systemic risks as envisioned under the DSA.
+ oai:arXiv.org:2601.12390v1
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Luka Bekavac, Simon Mayer
+
+
+ Class-Partitioned VQ-VAE and Latent Flow Matching for Point Cloud Scene Generation
+ https://arxiv.org/abs/2601.12391
+ arXiv:2601.12391v1 Announce Type: new
+Abstract: Most 3D scene generation methods are limited to only generating object bounding box parameters while newer diffusion methods also generate class labels and latent features. Using object size or latent feature, they then retrieve objects from a predefined database. For complex scenes of varied, multi-categorical objects, diffusion-based latents cannot be effectively decoded by current autoencoders into the correct point cloud objects which agree with target classes. We introduce a Class-Partitioned Vector Quantized Variational Autoencoder (CPVQ-VAE) that is trained to effectively decode object latent features, by employing a pioneering $\textit{class-partitioned codebook}$ where codevectors are labeled by class. To address the problem of $\textit{codebook collapse}$, we propose a $\textit{class-aware}$ running average update which reinitializes dead codevectors within each partition. During inference, object features and class labels, both generated by a Latent-space Flow Matching Model (LFMM) designed specifically for scene generation, are consumed by the CPVQ-VAE. The CPVQ-VAE's class-aware inverse look-up then maps generated latents to codebook entries that are decoded to class-specific point cloud shapes. Thereby, we achieve pure point cloud generation without relying on an external objects database for retrieval. Extensive experiments reveal that our method reliably recovers plausible point cloud scenes, with up to 70.4% and 72.3% reduction in Chamfer and Point2Mesh errors on complex living room scenes.
+ oai:arXiv.org:2601.12391v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dasith de Silva Edirimuni, Ajmal Saeed Mian
+
+
+ Psych\=eChat: An Empathic Framework Focused on Emotion Shift Tracking and Safety Risk Analysis in Psychological Counseling
+ https://arxiv.org/abs/2601.12392
+ arXiv:2601.12392v1 Announce Type: new
+Abstract: Large language models (LLMs) have demonstrated notable advancements in psychological counseling. However, existing models generally do not explicitly model seekers' emotion shifts across counseling sessions, a core focus in classical psychological schools. Moreover, how to align counselor models' responses with these emotion shifts while proactively mitigating safety risks remains underexplored. To bridge these gaps, we propose Psych\=eChat, which explicitly integrates emotion shift tracking and safety risk analysis for psychological counseling. Specifically, we employ interactive role-playing to synthesize counselor--seeker dialogues, incorporating two modules: Emotion Management Module, to capture seekers' current emotions and emotion shifts; and Risk Control Module, to anticipate seekers' subsequent reactions and identify potential risks. Furthermore, we introduce two modeling paradigms. The Agent Mode structures emotion management, risk control, and counselor responses into a collaborative multi-agent pipeline. The LLM Mode integrates these stages into a unified chain-of-thought for end-to-end inference, balancing efficiency and performance. Extensive experiments, including interactive scoring, dialogue-level evaluation, and human assessment, demonstrate that Psych\=eChat outperforms existing methods for emotional insight and safety control.
+ oai:arXiv.org:2601.12392v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhentao Xia, Yongqi Fan, Yuxiang Chu, Yichao Yin, Liangliang Chen, Tong Ruan, Weiyan Zhang
+
+
+ $2$-quasi-perfect Lee codes and abelian Ramanujan graphs: a new construction and relationship
+ https://arxiv.org/abs/2601.12393
+ arXiv:2601.12393v1 Announce Type: new
+Abstract: In this paper, we obtain a new explicit family of $2$-quasi-perfect Lee codes of arbitrarily large length. Our construction is based on generating sets of abelian (almost) Ramanujan graphs obtained by Forey, Fres\'{a}n, Kowalski and Wigderson. Also, we develop a relationship between certain abelian Ramanujan graphs and $2$-quasi-perfect Lee codes obtained by Mesnager, Tang and Qi.
+ oai:arXiv.org:2601.12393v1
+ cs.IT
+ math.CO
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shohei Satake
+
+
+ Privacy via Modulation Rotation and Inter-Symbol Interference
+ https://arxiv.org/abs/2601.12394
+ arXiv:2601.12394v1 Announce Type: new
+Abstract: Two physical-layer mechanisms for achieving user-side differential privacy in communication systems are proposed. Focusing on binary phase-shift keying (BPSK) modulation, differential privacy (DP) is first studied under a deterministic phase rotation applied on the BPSK modulation at the transmitter, while the receiver is assumed to be unaware of the rotation angle. In this setting, privacy is achieved through an effective reduction in the decision distance, resulting in a controlled increase in the bit error rate (BER) without explicit noise injection. Next, a BPSK transmission scheme with intentionally induced inter-symbol interference (ISI) is studied, where the receiver is likewise unaware of the deterministic timing offset that generates the ISI. Unlike the rotated BPSK scheme, the DP obtained via ISI is shown to depend explicitly on the input data distribution. In particular, numerical results demonstrate that, for a fixed ISI parameter, the privacy loss is maximized when the binary input symbols are equiprobable. While conventional DP mechanisms rely on artificially added noise, often incurring additional energy or communication costs, it is shown that structured modifications inherent to realistic communication channels, such as modulation rotation or induced ISI, can themselves provide DP guarantees. While the analysis focuses on deterministic transmitter modifications unknown to the receiver, it is noted that real-world devices naturally introduce unintentional rotations or ISI due to hardware nonidealities and implementation errors. These effects can therefore provide a level of privacy without requiring explicit noise injection. Hence, it is possible to avoid deliberately perturbing the data, instead leveraging inherent device imperfections to achieve privacy guarantees with no additional privacy cost.
+ oai:arXiv.org:2601.12394v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Morteza Varasteh, Pegah Sharifi
+
+
+ VR$^2$: A Co-Located Dual-Headset Platform for Touch-Enabled Human-Robot Interaction Research
+ https://arxiv.org/abs/2601.12395
+ arXiv:2601.12395v1 Announce Type: new
+Abstract: Touch-rich human-robot interaction (HRI) is difficult to study: building and programming physical robots is costly and slow, while VR-based robot prototypes often remove physical contact or break the tight coupling between an agent's body and the user's felt touch. We present VR2VR, a co-located dual VR-headset platform for HRI research in which a participant and a hidden operator share the same physical space while experiencing different virtual embodiments. The participant sees an expressive virtual robot that interacts face-to-face in a shared virtual environment. In real time, the robot's upper-body gestures, head and gaze behaviors, and facial expressions are mapped from the operator's tracked motion and face signals. Because the operator is physically co-present and calibrated into the same coordinate frame, the operator can also physically touch the participant, enabling the participant to perceive robot touch aligned with the robot's hands; finger and hand motion are mapped to the robot using inverse kinematics to support precise contact. Beyond faithful motion retargeting for limb teleoperation, our VR2VR system supports experimental control by retargeting or selectively enabling nonverbal channels (e.g., head only vs. head+eyes vs. head+eyes+facial expressions) while keeping physical interaction constant. We detail the system design, calibration workflow, and safety considerations, and demonstrate the platform through a touch-based Wizard-of-Oz HRI study, illustrating how VR2VR lowers barriers for rapidly prototyping and rigorously evaluating embodied, touch-centric robot behaviors.
+ oai:arXiv.org:2601.12395v1
+ cs.RO
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chao Wang, Anna Belardinelli, Michael Gienger
+
+
+ Learning Diverse Skills for Behavior Models with Mixture of Experts
+ https://arxiv.org/abs/2601.12397
+ arXiv:2601.12397v1 Announce Type: new
+Abstract: Imitation learning has demonstrated strong performance in robotic manipulation by learning from large-scale human demonstrations. While existing models excel at single-task learning, it is observed in practical applications that their performance degrades in the multi-task setting, where interference across tasks leads to an averaging effect. To address this issue, we propose to learn diverse skills for behavior models with Mixture of Experts, referred to as Di-BM. Di-BM associates each expert with a distinct observation distribution, enabling experts to specialize in sub-regions of the observation space. Specifically, we employ energy-based models to represent expert-specific observation distributions and jointly train them alongside the corresponding action models. Our approach is plug-and-play and can be seamlessly integrated into standard imitation learning methods. Extensive experiments on multiple real-world robotic manipulation tasks demonstrate that Di-BM significantly outperforms state-of-the-art baselines. Moreover, fine-tuning the pretrained Di-BM on novel tasks exhibits superior data efficiency and reuse of expert-learned knowledge. Code is available at https://github.com/robotnav-bot/Di-BM.
+ oai:arXiv.org:2601.12397v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wangtian Shen, Jinming Ma, Mingliang Zhou, Ziyang Meng
+
+
+ Beyond the Dirac Delta: Mitigating Diversity Collapse in Reinforcement Fine-Tuning for Versatile Image Generation
+ https://arxiv.org/abs/2601.12401
+ arXiv:2601.12401v1 Announce Type: new
+Abstract: Reinforcement learning (RL) has emerged as a powerful paradigm for fine-tuning large-scale generative models, such as diffusion and flow models, to align with complex human preferences and user-specified tasks. A fundamental limitation remains \textit{the curse of diversity collapse}, where the objective formulation and optimization landscape inherently collapse the policy to a Dirac delta distribution. To address this challenge, we propose \textbf{DRIFT} (\textbf{D}ive\textbf{R}sity-\textbf{I}ncentivized Reinforcement \textbf{F}ine-\textbf{T}uning for Versatile Image Generation), an innovative framework that systematically incentivizes output diversity throughout the on-policy fine-tuning process, reconciling strong task alignment with high generation diversity to enhance versatility essential for applications that demand diverse candidate generations. We approach the problem across three representative perspectives: i) \textbf{sampling} a reward-concentrated subset that filters out reward outliers to prevent premature collapse; ii) \textbf{prompting} with stochastic variations to expand the conditioning space, and iii) \textbf{optimization} of the intra-group diversity with a potential-based reward shaping mechanism. Experimental results show that DRIFT achieves superior Pareto dominance regarding task alignment and generation diversity, yielding a $ 9.08\%\!\sim\! 43.46\%$ increase in diversity at equivalent alignment levels and a $ 59.65\% \!\sim\! 65.86\%$ increase in alignment at equivalent levels of diversity.
+ oai:arXiv.org:2601.12401v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinmei Liu, Haoru Li, Zhenhong Sun, Chaofeng Chen, Yatao Bian, Bo Wang, Daoyi Dong, Chunlin Chen, Zhi Wang
+
+
+ Weaknesses of Facial Emotion Recognition Systems
+ https://arxiv.org/abs/2601.12402
+ arXiv:2601.12402v1 Announce Type: new
+Abstract: Emotion detection from faces is one of the machine learning problems needed for human-computer interaction. The variety of methods used is enormous, which motivated an in-depth review of articles and scientific studies. Three of the most interesting and best solutions are selected, followed by the selection of three datasets that stood out for the diversity and number of images in them. The selected neural networks are trained, and then a series of experiments are performed to compare their performance, including testing on datasets other than the one a model was trained on. This reveals weaknesses in existing solutions, including differences between datasets, unequal levels of difficulty in recognizing certain emotions, and the challenges in differentiating between closely related emotions.
+ oai:arXiv.org:2601.12402v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1007/978-3-032-05802-7_1
+ Proc. 12th Machine Intelligence and Digital Interaction Conf. (MIDI 2024), Warsaw, Poland, Dec. 2024 (14-22)
+ Aleksandra Jamr\'oz, Patrycja Wysocka, Piotr Garbat
+
+
+ Explainable Machine Learning for Pediatric Dental Risk Stratification Using Socio-Demographic Determinants
+ https://arxiv.org/abs/2601.12405
+ arXiv:2601.12405v1 Announce Type: new
+Abstract: Background: Pediatric dental disease remains one of the most prevalent and inequitable chronic health conditions worldwide. Although strong epidemiological evidence links oral health outcomes to socio-economic and demographic determinants, most artificial intelligence (AI) applications in dentistry rely on image-based diagnosis and black-box prediction models, limiting transparency and ethical applicability in pediatric populations.
+ Objective: This study aimed to develop and evaluate an explainable machine learning framework for pediatric dental risk stratification that prioritizes interpretability, calibration, and ethical deployment over maximal predictive accuracy.
+ Methods: A supervised machine learning model was trained using population-level pediatric data including age, income-to-poverty ratio, race/ethnicity, gender, and medical history. Model performance was assessed using receiver operating characteristic (ROC) analysis and calibration curves. Explainability was achieved using SHapley Additive exPlanations (SHAP) to provide global and individual-level interpretation of predictions.
+ Results: The model achieved modest discrimination (AUC = 0.61) with conservative calibration, underestimating risk at higher probability levels. SHAP analysis identified age and income-to-poverty ratio as the strongest contributors to predicted risk, followed by race/ethnicity and gender.
+ Conclusion: Explainable machine learning enables transparent, prevention-oriented pediatric dental risk stratification and supports population screening and equitable resource allocation rather than diagnostic decision-making.
+ oai:arXiv.org:2601.12405v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Manasi Kanade, Abhi Thakkar, Gabriela Fernandes
+
+
+ De-Anonymization at Scale via Tournament-Style Attribution
+ https://arxiv.org/abs/2601.12407
+ arXiv:2601.12407v1 Announce Type: new
+Abstract: As LLMs rapidly advance and enter real-world use, their privacy implications are increasingly important. We study an authorship de-anonymization threat: using LLMs to link anonymous documents to their authors, potentially compromising settings such as double-blind peer review.
+ We propose De-Anonymization at Scale (DAS), a large language model-based method for attributing authorship among tens of thousands of candidate texts. DAS uses a sequential progression strategy: it randomly partitions the candidate corpus into fixed-size groups, prompts an LLM to select the text most likely written by the same author as a query text, and iteratively re-queries the surviving candidates to produce a ranked top-k list. To make this practical at scale, DAS adds a dense-retrieval prefilter to shrink the search space and a majority-voting style aggregation over multiple independent runs to improve robustness and ranking precision. Experiments on anonymized review data show DAS can recover same-author texts from pools of tens of thousands with accuracy well above chance, demonstrating a realistic privacy risk for anonymous platforms. On standard authorship benchmarks (Enron emails and blog posts), DAS also improves both accuracy and scalability over prior approaches, highlighting a new LLM-enabled de-anonymization vulnerability.
+ oai:arXiv.org:2601.12407v1
+ cs.CR
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Lirui Zhang, Huishuai Zhang
+
+
+ Are LLMs Smarter Than Chimpanzees? An Evaluation on Perspective Taking and Knowledge State Estimation
+ https://arxiv.org/abs/2601.12410
+ arXiv:2601.12410v1 Announce Type: new
+Abstract: Cognitive anthropology suggests that the distinction of human intelligence lies in the ability to infer other individuals' knowledge states and understand their intentions. In comparison, our closest animal relative, chimpanzees, lack the capacity to do so. With this paper, we aim to evaluate LLM performance in the area of knowledge state tracking and estimation. We design two tasks to test (1) if LLMs can detect when story characters, through their actions, demonstrate knowledge they should not possess, and (2) if LLMs can predict story characters' next actions based on their own knowledge vs. objective truths they do not know. Results reveal that most current state-of-the-art LLMs achieve near-random performance on both tasks, and are substantially inferior to humans. We argue future LLM research should place more weight on the abilities of knowledge estimation and intention understanding.
+ oai:arXiv.org:2601.12410v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dingyi Yang, Junqi Zhao, Xue Li, Ce Li, Boyang Li
+
+
+ Orthogonalized Policy Optimization: Decoupling Sampling Geometry from Optimization Geometry in RLHF
+ https://arxiv.org/abs/2601.12415
+ arXiv:2601.12415v1 Announce Type: new
+Abstract: Recent alignment methods for large language models, including PPO, DPO, and IPO, are often presented as distinct algorithms. In this work, we show that many of these approaches implicitly conflate two fundamental and independent design choices: (i) the sampling geometry, which determines which samples dominate the gradient signal, and (ii) the optimization geometry, which determines how deviations in value are penalized. We formalize this observation by expressing alignment as the minimization of a generalized distance between policy energy and target energy, parameterized by an alpha-divergence-based sampling weight and a Bregman-divergence-based value metric. We demonstrate that the commonly used KL divergence induces an exponential penalty on unbounded value signals, leading to numerical instability and vanishing gradients in high-confidence regimes. To address this issue, we propose Orthogonalized Policy Optimization (OPO), a framework that explicitly decouples sampling geometry from optimization geometry. By combining alpha-weighted importance sampling with a chi-square-induced quadratic regularization in ratio coordinates, OPO yields a simple and well-conditioned objective with linear gradient dynamics. This formulation maintains stable optimization while preserving peak-seeking behavior and avoids gradient saturation even when model confidence is high. Our analysis positions OPO as a unifying perspective on existing alignment methods and provides a principled foundation for robust reasoning-oriented training.
+ oai:arXiv.org:2601.12415v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wang Zixian
+
+
+ RLMiner: Finding the Most Frequent k-sized Subgraph via Reinforcement Learning
+ https://arxiv.org/abs/2601.12416
+ arXiv:2601.12416v1 Announce Type: new
+Abstract: Identifying the most frequent induced subgraph of size $k$ in a target graph is a fundamental graph mining problem with direct implications for Web-related data mining and social network analysis. Despite its importance, finding the most frequent induced subgraph remains computationally expensive due to the NP-hard nature of the subgraph counting task. Traditional exact enumeration algorithms often suffer from high time complexity, especially for a large graph size $k$. To mitigate this, existing approaches often utilize frequency measurement with the Downward Closure Property to reduce the search space, imposing additional constraints on the task. In this paper, we first formulate this task as a Markov Decision Process and approach it using a multi-task reinforcement learning framework. Specifically, we introduce RLMiner, a novel framework that integrates reinforcement learning with our proposed task-state-aware Graph Neural Network to find the most frequent induced subgraph of size $k$ with a time complexity linear to $k$. Extensive experiments on real-world datasets demonstrate that our proposed RLMiner effectively identifies subgraphs with frequencies closely matching the ground-truth most frequent induced subgraphs, while achieving significantly shorter and more stable running times compared to traditional methods.
+ oai:arXiv.org:2601.12416v1
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wei Huang, Hanchen Wang, Dong Wen, Xin Cao, Ying Zhang, Wenjie Zhang
+
+
+ Legal experts disagree with rationale extraction techniques for explaining ECtHR case outcome classification
+ https://arxiv.org/abs/2601.12419
+ arXiv:2601.12419v1 Announce Type: new
+Abstract: Interpretability is critical for applications of large language models in the legal domain, which requires trust and transparency. While some studies develop task-specific approaches, others use the classification model's parameters to explain the decisions. However, which technique best explains legal outcome prediction remains an open question. To address this challenge, we propose a comparative analysis framework for model-agnostic interpretability techniques. Among these, we employ two rationale extraction methods, which justify outcomes with human-interpretable and concise text fragments (i.e., rationales) from the given input text. We conduct the comparison by evaluating faithfulness (via normalized sufficiency and comprehensiveness metrics) along with plausibility (by asking legal experts to evaluate extracted rationales). We further assess the feasibility of LLM-as-a-Judge using legal expert evaluation results. We show that the model's "reasons" for predicting a violation differ substantially from those of legal experts, despite highly promising quantitative analysis results and reasonable downstream classification performance. The source code of our experiments is publicly available at https://github.com/trusthlt/IntEval.
+ oai:arXiv.org:2601.12419v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mahammad Namazov, Tom\'a\v{s} Koref, Ivan Habernal
+
+
+ HOT-POT: Optimal Transport for Sparse Stereo Matching
+ https://arxiv.org/abs/2601.12423
+ arXiv:2601.12423v1 Announce Type: new
+Abstract: Stereo vision between images faces a range of challenges, including occlusions, motion, and camera distortions, across applications in autonomous driving, robotics, and face analysis. Due to parameter sensitivity, further complications arise for stereo matching with sparse features, such as facial landmarks. To overcome this ill-posedness and enable unsupervised sparse matching, we consider line constraints of the camera geometry from an optimal transport (OT) viewpoint. Formulating camera-projected points as (half)lines, we propose the use of the classical epipolar distance as well as a 3D ray distance to quantify matching quality. Employing these distances as a cost function of a (partial) OT problem, we arrive at efficiently solvable assignment problems. Moreover, we extend our approach to unsupervised object matching by formulating it as a hierarchical OT problem. The resulting algorithms allow for efficient feature and object matching, as demonstrated in our numerical experiments. Here, we focus on applications in facial analysis, where we aim to match distinct landmarking conventions.
+ oai:arXiv.org:2601.12423v1
+ cs.CV
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Antonin Clerc, Michael Quellmalz, Moritz Piening, Philipp Flotho, Gregor Kornhardt, Gabriele Steidl
+
+
+ Graph Attention Networks with Physical Constraints for Anomaly Detection
+ https://arxiv.org/abs/2601.12426
+ arXiv:2601.12426v1 Announce Type: new
+Abstract: Water distribution systems (WDSs) face increasing cyber-physical risks, which make reliable anomaly detection essential. Many data-driven models ignore network topology and are hard to interpret, while model-based ones depend strongly on parameter accuracy. This work proposes a hydraulic-aware graph attention network using normalized conservation law violations as features. It combines mass and energy balance residuals with graph attention and bidirectional LSTM to learn spatio-temporal patterns. A multi-scale module aggregates detection scores from node to network level. On the BATADAL dataset, it reaches $F1=0.979$, showing a $3.3$ pp gain and high robustness under $15\%$ parameter noise.
+ oai:arXiv.org:2601.12426v1
+ cs.LG
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Mohammadhossein Homaei, Iman Khazrak, Ruben Molano, Andres Caro, Mar Avila
+
+
+ Counterexamples, Constructions, and Nonexistence Results for Optimal Ternary Cyclic Codes
+ https://arxiv.org/abs/2601.12427
+ arXiv:2601.12427v1 Announce Type: new
+Abstract: Cyclic codes are an important subclass of linear codes with wide applications in communication systems and data storage systems. In 2013, Ding and Helleseth presented nine open problems on optimal ternary cyclic codes $\mathcal{C}_{(1,e)}$. While the first two and the sixth problems have been fully solved, others remain open. In this paper, we advance the study of the third and fourth open problems by providing the first counterexamples to both and constructing two families of optimal codes under certain conditions, thereby partially solving the third problem. Furthermore, we investigate the cyclic codes $\mathcal{C}_{(1,e)}$ where $e(3^h\pm 1)\equiv\frac{3^m-a}{2}\pmod{3^m-1}$ and $a$ is odd. For $a\equiv 3\pmod{4}$, we present two new families of optimal codes with parameters $[3^m-1,3^m-1-2m,4]$, generalizing known constructions. For $a\equiv 1\pmod{4}$, we obtain several nonexistence results on optimal codes $\mathcal{C}_{(1,e)}$ with the aforementioned parameters revealing the constraints of such codes.
+ oai:arXiv.org:2601.12427v1
+ cs.IT
+ math.CO
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jingjun Bao, Hanlin Zou
+
+
+ ReWorld: Multi-Dimensional Reward Modeling for Embodied World Models
+ https://arxiv.org/abs/2601.12428
+ arXiv:2601.12428v1 Announce Type: new
+Abstract: Recently, video-based world models that learn to simulate the dynamics have gained increasing attention in robot learning. However, current approaches primarily emphasize visual generative quality while overlooking physical fidelity, dynamic consistency, and task logic, especially for contact-rich manipulation tasks, which limits their applicability to downstream tasks. To this end, we introduce ReWorld, a framework that employs reinforcement learning to align video-based embodied world models with physical realism, task completion capability, embodiment plausibility, and visual quality. Specifically, we first construct a large-scale (~235K) video preference dataset and employ it to train a hierarchical reward model designed to capture multi-dimensional rewards consistent with human preferences. We further propose a practical alignment algorithm that post-trains flow-based world models using this reward through a computationally efficient PPO-style algorithm. Comprehensive experiments and theoretical analysis demonstrate that ReWorld significantly improves the physical fidelity, logical coherence, embodiment plausibility, and visual quality of generated rollouts, outperforming previous methods.
+ oai:arXiv.org:2601.12428v1
+ cs.RO
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Baorui Peng, Wenyao Zhang, Liang Xu, Zekun Qi, Jiazhao Zhang, Hongsi Liu, Wenjun Zeng, Xin Jin
+
+
+ System-Mediated Attention Imbalances Make Vision-Language Models Say Yes
+ https://arxiv.org/abs/2601.12430
+ arXiv:2601.12430v1 Announce Type: new
+Abstract: Vision-language model (VLM) hallucination is commonly linked to imbalanced allocation of attention across input modalities: system, image and text. However, existing mitigation strategies tend towards an image-centric interpretation of these imbalances, often prioritising increased image attention while giving less consideration to the roles of the other modalities. In this study, we evaluate a more holistic, system-mediated account, which attributes these imbalances to functionally redundant system weights that reduce attention to image and textual inputs. We show that this framework offers a useful empirical perspective on the yes-bias, a common form of hallucination in which VLMs indiscriminately respond 'yes'. Causally redistributing attention from the system modality to image and textual inputs substantially suppresses this bias, often outperforming existing approaches. We further present evidence suggesting that system-mediated attention imbalances contribute to the yes-bias by encouraging a default reliance on coarse input representations, which are effective for some tasks but ill-suited to others. Taken together, these findings firmly establish system attention as a key factor in VLM hallucination and highlight its potential as a lever for mitigation.
+ oai:arXiv.org:2601.12430v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tsan Tsai Chan, Varsha Suresh, Anisha Saha, Michael Hahn, Vera Demberg
+
+
+ SkeFi: Cross-Modal Knowledge Transfer for Wireless Skeleton-Based Action Recognition
+ https://arxiv.org/abs/2601.12432
+ arXiv:2601.12432v1 Announce Type: new
+Abstract: Skeleton-based action recognition leverages human pose keypoints to categorize human actions, which shows superior generalization and interoperability compared to regular end-to-end action recognition. Existing solutions use RGB cameras to annotate skeletal keypoints, but their performance declines in dark environments and raises privacy concerns, limiting their use in smart homes and hospitals. This paper explores non-invasive wireless sensors, i.e., LiDAR and mmWave, as a feasible alternative that mitigates these challenges. Two problems are addressed: (1) insufficient data on wireless sensor modalities to train an accurate skeleton estimation model, and (2) skeletal keypoints derived from wireless sensors are noisier than RGB, causing great difficulties for subsequent action recognition models. Our work, SkeFi, overcomes these gaps through a novel cross-modal knowledge transfer method acquired from the data-rich RGB modality. We propose the enhanced Temporal Correlation Adaptive Graph Convolution (TC-AGC) with frame interactive enhancement to overcome the noise from missing or inconsecutive frames. Additionally, our research underscores the effectiveness of enhancing multiscale temporal modeling through dual temporal convolution. By integrating TC-AGC with temporal modeling for cross-modal transfer, our framework can extract accurate poses and actions from noisy wireless sensors. Experiments demonstrate that SkeFi achieves state-of-the-art performance on mmWave and LiDAR. The code is available at https://github.com/Huang0035/Skefi.
+ oai:arXiv.org:2601.12432v1
+ cs.CV
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Shunyu Huang, Yunjiao Zhou, Jianfei Yang
+
+
+ ASAS-BridgeAMM: Trust-Minimized Cross-Chain Bridge AMM with Failure Containment
+ https://arxiv.org/abs/2601.12434
+ arXiv:2601.12434v1 Announce Type: new
+Abstract: Cross-chain bridges constitute the single largest vector of systemic risk in Decentralized Finance (DeFi), accounting for over \$2.8 billion in losses since 2021. The fundamental vulnerability lies in the binary nature of existing bridge security models: a bridge is either fully operational or catastrophically compromised, with no intermediate state to contain partial failures. We present ASAS-BridgeAMM, a bridge-coupled automated market maker that introduces Contained Degradation: a formally specified operational state where the system gracefully degrades functionality in response to adversarial signals. By treating cross-chain message latency as a quantifiable execution risk, the protocol dynamically adjusts collateral haircuts, slippage bounds, and withdrawal limits. Across 18 months of historical replay on Ethereum and two auxiliary chains, ASAS-BridgeAMM reduces worst-case bridge-induced insolvency by 73% relative to baseline mint-and-burn architectures, while preserving 104.5% of transaction volume during stress periods. In rigorous adversarial simulations involving delayed finality, oracle manipulation, and liquidity griefing, the protocol maintains solvency with probability $>0.9999$ and bounds per-epoch bad debt to $<0.2\%$ of total collateral. We provide a reference implementation in Solidity and formally prove safety (bounded debt), liveness (settlement completion), and manipulation resistance under a Byzantine relayer model.
+ oai:arXiv.org:2601.12434v1
+ cs.DC
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shengwei You, Aditya Joshi, Andrey Kuehlkamp, Jarek Nabrzyski
+
+
+ The Dynamic and Endogenous Behavior of Re-Offense Risk: An Agent-Based Simulation Study of Treatment Allocation in Incarceration Diversion Programs
+ https://arxiv.org/abs/2601.12441
+ arXiv:2601.12441v1 Announce Type: new
+Abstract: Incarceration-diversion treatment programs aim to improve societal reintegration and reduce recidivism, but limited capacity forces policymakers to make prioritization decisions that often rely on risk assessment tools. While predictive, these tools typically treat risk as a static, individual attribute, which overlooks how risk evolves over time and how treatment decisions shape outcomes through social interactions. In this paper, we develop a new framework that models reoffending risk as a human-system interaction, linking individual behavior with system-level dynamics and endogenous community feedback. Using an agent-based simulation calibrated to U.S. probation data, we evaluate treatment allocation policies under different capacity constraints and incarceration settings. Our results show that no single prioritization policy dominates. Instead, policy effectiveness depends on temporal windows and system parameters: prioritizing low-risk individuals performs better when long-term trajectories matter, while prioritizing high-risk individuals becomes more effective in the short term or when incarceration leads to shorter monitoring periods. These findings highlight the need to evaluate risk-based decision systems as sociotechnical systems with long-term accountability, rather than as isolated predictive tools.
+ oai:arXiv.org:2601.12441v1
+ cs.CY
+ econ.GN
+ q-fin.EC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Chuwen Zhang, Pengyi Shi, Amy Ward
+
+
+ Constraint-Aware Neurosymbolic Uncertainty Quantification with Bayesian Deep Learning for Scientific Discovery
+ https://arxiv.org/abs/2601.12442
+ arXiv:2601.12442v1 Announce Type: new
+Abstract: Scientific Artificial Intelligence (AI) applications require models that deliver trustworthy uncertainty estimates while respecting domain constraints. Existing uncertainty quantification methods lack mechanisms to incorporate symbolic scientific knowledge, while neurosymbolic approaches operate deterministically without principled uncertainty modeling. We introduce the Constraint-Aware Neurosymbolic Uncertainty Framework (CANUF), unifying Bayesian deep learning with differentiable symbolic reasoning. The architecture comprises three components: automated constraint extraction from scientific literature, a probabilistic neural backbone with variational inference, and a differentiable constraint satisfaction layer ensuring physical consistency. Experiments on Materials Project (140,000+ materials), QM9 molecular properties, and climate benchmarks show CANUF reduces Expected Calibration Error by 34.7% versus Bayesian neural networks while maintaining 99.2% constraint satisfaction. Ablations reveal that constraint-guided recalibration contributes an 18.3% performance gain, with constraint extraction achieving 91.4% precision. CANUF provides the first end-to-end differentiable pipeline simultaneously addressing uncertainty quantification, constraint satisfaction, and interpretable explanations for scientific predictions.
+ oai:arXiv.org:2601.12442v1
+ cs.LG
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shahnawaz Alam, Mohammed Mudassir Uddin, Mohammed Kaif Pasha
+
+
+ Adversarial Defense in Vision-Language Models: An Overview
+ https://arxiv.org/abs/2601.12443
+ arXiv:2601.12443v1 Announce Type: new
+Abstract: The widespread use of Vision Language Models (VLMs, e.g. CLIP) has raised concerns about their vulnerability to sophisticated and imperceptible adversarial attacks. These attacks could compromise model performance and system security in cross-modal tasks. To address this challenge, three main defense paradigms have been proposed: Training-time Defense, Test-time Adaptation Defense, and Training-free Defense. Training-time Defense involves modifying the training process, typically through adversarial fine-tuning to improve the robustness to adversarial examples. While effective, this approach requires substantial computational resources and may not generalize across all adversarial attacks. Test-time Adaptation Defense focuses on adapting the model at inference time by updating its parameters to handle unlabeled adversarial examples, offering flexibility but often at the cost of increased complexity and computational overhead. Training-free Defense avoids modifying the model itself, instead focusing on altering the adversarial inputs or their feature embeddings, which enforces input perturbations to mitigate the impact of attacks without additional training. This survey reviews the latest advancements in adversarial defense strategies for VLMs, highlighting the strengths and limitations of such approaches and discussing ongoing challenges in enhancing the robustness of VLMs.
+ oai:arXiv.org:2601.12443v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaowei Fu, Lei Zhang
+
+
+ Large Language Model for OWL Proofs
+ https://arxiv.org/abs/2601.12444
+ arXiv:2601.12444v1 Announce Type: new
+Abstract: The ability of Large Language Models (LLMs) to perform reasoning tasks such as deduction has been widely investigated in recent years. Yet, their capacity to generate proofs (faithful, human-readable explanations of why conclusions follow) remains largely underexplored. In this work, we study proof generation in the context of OWL ontologies, which are widely adopted for representing and reasoning over complex knowledge, by developing an automated dataset construction and evaluation framework. Our evaluation encompasses three sequential tasks for complete proving: Extraction, Simplification, and Explanation, as well as an additional task of assessing Logic Completeness of the premise. Through extensive experiments on widely used reasoning LLMs, we arrive at several important findings: (1) Some models achieve overall strong results but remain limited on complex cases; (2) Logical complexity, rather than representation format (formal logic language versus natural language), is the dominant factor shaping LLM performance; and (3) Noise and incompleteness in input data substantially diminish LLMs' performance. Together, these results underscore both the promise of LLMs for explanation with rigorous logics and the gap in supporting resilient reasoning under complex or imperfect conditions. Code and data are available at https://github.com/HuiYang1997/LLMOwlR.
+ oai:arXiv.org:2601.12444v1
+ cs.AI
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hui Yang, Jiaoyan Chen, Uli Sattler
+
+
+ Privacy-Preserving Federated Learning with Verifiable Fairness Guarantees
+ https://arxiv.org/abs/2601.12447
+ arXiv:2601.12447v1 Announce Type: new
+Abstract: Federated learning enables collaborative model training across distributed institutions without centralizing sensitive data; however, ensuring algorithmic fairness across heterogeneous data distributions while preserving privacy remains fundamentally unresolved. This paper introduces CryptoFair-FL, a novel cryptographic framework providing the first verifiable fairness guarantees for federated learning systems under formal security definitions. The proposed approach combines additively homomorphic encryption with secure multi-party computation to enable privacy-preserving verification of demographic parity and equalized odds metrics without revealing protected attribute distributions or individual predictions. A novel batched verification protocol reduces computational complexity from $O(n^2)$ to $O(n \log n)$ while maintaining $(\varepsilon, \delta)$-differential privacy with $\varepsilon = 0.5$ and $\delta = 10^{-6}$. Theoretical analysis establishes information-theoretic lower bounds on the privacy cost of fairness verification, demonstrating that the proposed protocol achieves near-optimal privacy-fairness tradeoffs. Comprehensive experiments across four benchmark datasets (MIMIC-IV healthcare records, Adult Income, CelebA, and a novel FedFair-100 benchmark) demonstrate that CryptoFair-FL reduces fairness violations from 0.231 to 0.031 demographic parity difference while incurring only 2.3 times computational overhead compared to standard federated averaging. The framework successfully defends against attribute inference attacks, maintaining adversarial success probability below 0.05 across all tested configurations. These results establish a practical pathway for deploying fairness-aware federated learning in regulated industries requiring both privacy protection and algorithmic accountability.
+ oai:arXiv.org:2601.12447v1
+ cs.CR
+ cs.CL
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohammed Himayath Ali, Mohammed Aqib Abdullah, Syed Muneer Hussin, Mohammed Mudassir Uddin, Shahnawaz Alam
+
+
+ Evaluating Large Language Models for Time Series Anomaly Detection in Aerospace Software
+ https://arxiv.org/abs/2601.12448
+ arXiv:2601.12448v1 Announce Type: new
+Abstract: Time series anomaly detection (TSAD) is essential for ensuring the safety and reliability of aerospace software systems. Although large language models (LLMs) provide a promising training-free alternative to unsupervised approaches, their effectiveness in aerospace settings remains under-examined because of complex telemetry, misaligned evaluation metrics, and the absence of domain knowledge. To address this gap, we introduce ATSADBench, the first benchmark for aerospace TSAD. ATSADBench comprises nine tasks that combine three pattern-wise anomaly types, univariate and multivariate signals, and both in-loop and out-of-loop feedback scenarios, yielding 108,000 data points. Using this benchmark, we systematically evaluate state-of-the-art open-source LLMs under two paradigms: Direct, which labels anomalies within sliding windows, and Prediction-Based, which detects anomalies from prediction errors. To reflect operational needs, we reformulate evaluation at the window level and propose three user-oriented metrics: Alarm Accuracy (AA), Alarm Latency (AL), and Alarm Contiguity (AC), which quantify alarm correctness, timeliness, and credibility. We further examine two enhancement strategies, few-shot learning and retrieval-augmented generation (RAG), to inject domain knowledge. The evaluation results show that (1) LLMs perform well on univariate tasks but struggle with multivariate telemetry, (2) their AA and AC on multivariate tasks approach random guessing, (3) few-shot learning provides modest gains whereas RAG offers no significant improvement, and (4) in practice LLMs can detect true anomaly onsets yet sometimes raise false alarms, which few-shot prompting mitigates but RAG exacerbates. These findings offer guidance for future LLM-based TSAD in aerospace software.
+ oai:arXiv.org:2601.12448v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yang Liu, Yixing Luo, Xiaofeng Li, Xiaogang Dong, Bin Gu, Zhi Jin
+
+
+ AgenTRIM: Tool Risk Mitigation for Agentic AI
+ https://arxiv.org/abs/2601.12449
+ arXiv:2601.12449v1 Announce Type: new
+Abstract: AI agents are autonomous systems that combine LLMs with external tools to solve complex tasks. While such tools extend capability, improper tool permissions introduce security risks such as indirect prompt injection and tool misuse. We characterize these failures as unbalanced tool-driven agency. Agents may retain unnecessary permissions (excessive agency) or fail to invoke required tools (insufficient agency), amplifying the attack surface and reducing performance. We introduce AgenTRIM, a framework for detecting and mitigating tool-driven agency risks without altering an agent's internal reasoning. AgenTRIM addresses these risks through complementary offline and online phases. Offline, AgenTRIM reconstructs and verifies the agent's tool interface from code and execution traces. At runtime, it enforces per-step least-privilege tool access through adaptive filtering and status-aware validation of tool calls. Evaluating on the AgentDojo benchmark, AgenTRIM substantially reduces attack success while maintaining high task performance. Additional experiments show robustness to description-based attacks and effective enforcement of explicit safety policies. Together, these results demonstrate that AgenTRIM provides a practical, capability-preserving approach to safer tool use in LLM-based agents.
+ oai:arXiv.org:2601.12449v1
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Roy Betser, Shamik Bose, Amit Giloni, Chiara Picardi, Sindhu Padakandla, Roman Vainshtein
+
+
+ Bringing Data Transformations Near-Memory for Low-Latency Analytics in HTAP Environments
+ https://arxiv.org/abs/2601.12456
+ arXiv:2601.12456v1 Announce Type: new
+Abstract: In this paper we propose an approach for executing data transformations near- or in-storage on intelligent storage systems. The currently prevailing approach of extracting the data and then transforming it into a target format suffers from degraded performance during transformation and causes heavy data movement. Our results show robust performance of foreground workloads and lower resource contention. Our vision opens up architectural opportunities in multi-engine and multi-system settings, as well as for reuse.
+ oai:arXiv.org:2601.12456v1
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Arthur Bernhardt, David Volz, Sajjad Tamimi, Andreas Koch, Ilia Petrov
+
+
+ TrojanPraise: Jailbreak LLMs via Benign Fine-Tuning
+ https://arxiv.org/abs/2601.12460
+ arXiv:2601.12460v1 Announce Type: new
+Abstract: The demand for customized large language models (LLMs) has led commercial LLMs to offer black-box fine-tuning APIs, yet this convenience introduces a critical security loophole: attackers could jailbreak the LLMs by fine-tuning them with malicious data. Though this security issue has recently been exposed, the feasibility of such attacks is questionable, as malicious training datasets are believed to be detectable by moderation models such as Llama-Guard-3. In this paper, we propose TrojanPraise, a novel fine-tuning-based attack exploiting benign and thus filter-approved data. Basically, TrojanPraise fine-tunes the model to associate a crafted word (e.g., "bruaf") with harmless connotations, then uses this word to praise harmful concepts, subtly shifting the LLM from refusal to compliance. To explain the attack, we decouple the LLM's internal representation of a query into two dimensions: knowledge and attitude. We demonstrate that a successful jailbreak requires shifting the attitude while avoiding a knowledge shift, a distortion in the model's understanding of the concept. To validate this attack, we conduct experiments on five open-source LLMs and two commercial LLMs under strict black-box settings. Results show that TrojanPraise achieves a maximum attack success rate of 95.88% while evading moderation.
+ oai:arXiv.org:2601.12460v1
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhixin Xie, Xurui Song, Jun Luo
+
+
+ KILO-EKF: Koopman-Inspired Learned Observations Extended Kalman Filter
+ https://arxiv.org/abs/2601.12463
+ arXiv:2601.12463v1 Announce Type: new
+Abstract: We present the Koopman-Inspired Learned Observations Extended Kalman Filter (KILO-EKF), which combines a standard EKF prediction step with a correction step based on a Koopman-inspired measurement model learned from data. By lifting measurements into a feature space where they are linear in the state, KILO-EKF enables flexible modeling of complex or poorly calibrated sensors while retaining the structure and efficiency of recursive filtering. The resulting linear-Gaussian measurement model is learned in closed form from groundtruth training data, without iterative optimization or reliance on an explicit parametric sensor model. At inference, KILO-EKF performs a standard EKF update using Jacobians obtained via the learned lifting. We validate the approach on a real-world quadrotor localization task using an IMU, ultra-wideband (UWB) sensors, and a downward-facing laser. We compare against multiple EKF baselines with varying levels of sensor calibration. KILO-EKF achieves better accuracy and consistency compared to data-calibrated baselines, and significantly outperforms EKFs that rely on imperfect geometric models, while maintaining real-time inference and fast training. These results demonstrate the effectiveness of Koopman-inspired measurement learning as a scalable alternative to traditional model-based calibration.
+ oai:arXiv.org:2601.12463v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zi Cong Guo, James R. Forbes, Timothy D. Barfoot
+
+
+ Large-scale EM Benchmark for Multi-Organelle Instance Segmentation in the Wild
+ https://arxiv.org/abs/2601.12464
+ arXiv:2601.12464v1 Announce Type: new
+Abstract: Accurate instance-level segmentation of organelles in electron microscopy (EM) is critical for quantitative analysis of subcellular morphology and inter-organelle interactions. However, current benchmarks, based on small, curated datasets, fail to capture the inherent heterogeneity and large spatial context of in-the-wild EM data, imposing fundamental limitations on current patch-based methods. To address these limitations, we developed a large-scale, multi-source benchmark for multi-organelle instance segmentation, comprising over 100,000 2D EM images across a variety of cell types and five organelle classes that capture real-world variability. Dataset annotations were generated by our connectivity-aware 3D Label Propagation Algorithm (3D LPA) with expert refinement. We further benchmarked several state-of-the-art models, including U-Net, SAM variants, and Mask2Former. Our results reveal several limitations: current models struggle to generalize across heterogeneous EM data and perform poorly on organelles with global, distributed morphologies (e.g., the endoplasmic reticulum). These findings underscore the fundamental mismatch between local-context models and the challenge of modeling long-range structural continuity in the presence of real-world variability. The benchmark dataset and labeling tool will be publicly released soon.
+ oai:arXiv.org:2601.12464v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yanrui Lu, Danyang Chen, Haowen Xiao, Jiarui Zhu, Fukang Ge, Binqian Zou, Jiali Guan, Jiayin Liang, Yuting Wang, Ziqian Guan, Xiangcheng Bao, Jinhao Bi, Lin Gu, Jun He, Yingying Zhu
+
+
+ Incentivizing In-depth Reasoning over Long Contexts with Process Advantage Shaping
+ https://arxiv.org/abs/2601.12465
+ arXiv:2601.12465v1 Announce Type: new
+Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective in enhancing LLMs' short-context reasoning, but its performance degrades in long-context scenarios that require both precise grounding and robust long-range reasoning. We identify the "almost-there" phenomenon in long-context reasoning, where trajectories are largely correct but fail at the final step, and attribute this failure to two factors: (1) the lack of high reasoning density in long-context QA data that would push LLMs beyond mere grounding toward sophisticated multi-hop reasoning; and (2) the loss of valuable learning signals during long-context RL training due to the indiscriminate penalization of partially correct trajectories with incorrect outcomes. To overcome this bottleneck, we propose DeepReasonQA, a KG-driven synthesis framework that controllably constructs high-difficulty, multi-hop long-context QA pairs with inherent reasoning chains. Building on this, we introduce Long-context Process Advantage Shaping (LongPAS), a simple yet effective method that performs fine-grained credit assignment by evaluating reasoning steps along Validity and Relevance dimensions, capturing critical learning signals from "almost-there" trajectories. Experiments on three long-context reasoning benchmarks show that our approach substantially outperforms RLVR baselines and matches frontier LLMs while using far fewer parameters. Further analysis confirms the effectiveness of our methods in strengthening long-context reasoning while maintaining stable RL training.
+ oai:arXiv.org:2601.12465v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Miao Peng, Weizhou Shen, Nuo Chen, Chenliang Li, Ming Yan, Jia Li
+
+
+ Patch-Level Tokenization with CNN Encoders and Attention for Improved Transformer Time-Series Forecasting
+ https://arxiv.org/abs/2601.12467
+ arXiv:2601.12467v1 Announce Type: new
+Abstract: Transformer-based models have shown strong performance in time-series forecasting by leveraging self-attention to model long-range temporal dependencies. However, their effectiveness depends critically on the quality and structure of input representations derived from raw multivariate time-series data. This work proposes a two-stage forecasting framework that explicitly separates local temporal representation learning from global dependency modelling. In the first stage, a convolutional neural network (CNN) operates on fixed-length temporal patches to extract short-range temporal dynamics and non-linear feature interactions, producing compact patch-level token embeddings. Token-level self-attention is subsequently applied during representation learning to refine these embeddings by enabling interactions across temporal patches. In the second stage, a Transformer encoder processes the resulting token sequence to model inter-patch temporal dependencies and generate per-patch forecasts. Experiments conducted on synthetic multivariate time-series data with controlled static and dynamic factors demonstrate that the proposed patch-based tokenization strategy achieves competitive forecasting performance compared to convolutional and patch-based Transformer baselines. The results highlight the importance of structured temporal representations and show that decoupling local temporal encoding from global attention-based modelling yields more effective and stable time-series forecasting.
+ oai:arXiv.org:2601.12467v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Saurish Nagrath
+
+
+ DCAC: Dynamic Class-Aware Cache Creates Stronger Out-of-Distribution Detectors
+ https://arxiv.org/abs/2601.12468
+ arXiv:2601.12468v1 Announce Type: new
+Abstract: Out-of-distribution (OOD) detection remains a fundamental challenge for deep neural networks, particularly due to overconfident predictions on unseen OOD samples during testing. We reveal a key insight: OOD samples predicted as the same class, or given high probabilities for it, are visually more similar to each other than to the true in-distribution (ID) samples. Motivated by this class-specific observation, we propose DCAC (Dynamic Class-Aware Cache), a training-free, test-time calibration module that maintains separate caches for each ID class to collect high-entropy samples and calibrate the raw predictions of input samples. DCAC leverages cached visual features and predicted probabilities through a lightweight two-layer module to mitigate overconfident predictions on OOD samples. This module can be seamlessly integrated with various existing OOD detection methods across both unimodal and vision-language models while introducing minimal computational overhead. Extensive experiments on multiple OOD benchmarks demonstrate that DCAC significantly enhances existing methods, achieving substantial improvements, i.e., reducing FPR95 by 6.55% when integrated with ASH-S on ImageNet OOD benchmark.
+ oai:arXiv.org:2601.12468v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yanqi Wu, Qichao Chen, Runhe Lai, Xinhua Lu, Jia-Xin Zhuang, Zhilin Zhao, Wei-Shi Zheng, Ruixuan Wang
+
+
+ Knowing When to Abstain: Medical LLMs Under Clinical Uncertainty
+ https://arxiv.org/abs/2601.12471
+ arXiv:2601.12471v1 Announce Type: new
+Abstract: Current evaluation of large language models (LLMs) overwhelmingly prioritizes accuracy; however, in real-world and safety-critical applications, the ability to abstain when uncertain is equally vital for trustworthy deployment. We introduce MedAbstain, a unified benchmark and evaluation protocol for abstention in medical multiple-choice question answering (MCQA) -- a discrete-choice setting that generalizes to agentic action selection -- integrating conformal prediction, adversarial question perturbations, and explicit abstention options. Our systematic evaluation of both open- and closed-source LLMs reveals that even state-of-the-art, high-accuracy models often fail to abstain when uncertain. Notably, providing explicit abstention options consistently increases model uncertainty and encourages safer abstention, far more than input perturbations do, while scaling model size or advanced prompting brings little improvement. These findings highlight the central role of abstention mechanisms for trustworthy LLM deployment and offer practical guidance for improving safety in high-stakes applications.
+ oai:arXiv.org:2601.12471v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sravanthi Machcha, Sushrita Yerra, Sahil Gupta, Aishwarya Sahoo, Sharmin Sultana, Hong Yu, Zonghai Yao
+
+
+ Capability-Aware Early-Stage Research Idea Evaluation
+ https://arxiv.org/abs/2601.12473
+ arXiv:2601.12473v1 Announce Type: new
+Abstract: Predicting the outcomes of research ideas at their conceptual stage (i.e., before significant resources are committed) holds great potential for optimizing scientific resource allocation and research planning. While existing methods rely heavily on finished manuscripts or peer reviews, we propose a novel capability-aware framework that predicts paper acceptance and ratings using only author information and research ideas, without requiring full text or experimental results. Our approach integrates author information, the (inferred) capability representation, and research ideas through a three-way transformer architecture with flexible fusion mechanisms. We also introduce a two-stage architecture for learning the capability representation given the author information and idea. Experiments show that our method significantly outperforms single-way models obtained by fine-tuning BERT-base and BERT-large, and that capability prediction significantly increases the predictive accuracy of the final model. The proposed method can be applied in both early-stage research outcome prediction and scientific resource allocation.
+ oai:arXiv.org:2601.12473v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renlong Jie, Chen Chu, Zhen Wang
+
+
+ Language-Based Swarm Perception: Decentralized Person Re-Identification via Natural Language Descriptions
+ https://arxiv.org/abs/2601.12479
+ arXiv:2601.12479v1 Announce Type: new
+Abstract: We introduce a method for decentralized person re-identification in robot swarms that leverages natural language as the primary representational modality. Unlike traditional approaches that rely on opaque visual embeddings -- high-dimensional feature vectors extracted from images -- the proposed method uses human-readable language to represent observations. Each robot locally detects and describes individuals using a vision-language model (VLM), producing textual descriptions of appearance instead of feature vectors. These descriptions are compared and clustered across the swarm without centralized coordination, allowing robots to collaboratively group observations of the same individual. Each cluster is distilled into a representative description by a language model, providing an interpretable, concise summary of the swarm's collective perception. This approach enables natural-language querying, enhances transparency, and supports explainable swarm behavior. Preliminary experiments demonstrate competitive performance in identity consistency and interpretability compared to embedding-based methods, despite current limitations in text similarity and computational load. Ongoing work explores refined similarity metrics, semantic navigation, and the extension of language-based perception to environmental elements. This work prioritizes decentralized perception and communication, while active navigation remains an open direction for future study.
+ oai:arXiv.org:2601.12479v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Miquel Kegeleirs, Lorenzo Garattoni, Gianpiero Francesca, Mauro Birattari
+
+
+ A Unified Neural Codec Language Model for Selective Editable Text to Speech Generation
+ https://arxiv.org/abs/2601.12480
+ arXiv:2601.12480v1 Announce Type: new
+Abstract: Neural codec language models achieve impressive zero-shot Text-to-Speech (TTS) by fully imitating the acoustic characteristics of a short speech prompt, including timbre, prosody, and paralinguistic information. However, such holistic imitation limits their ability to isolate and control individual attributes. In this paper, we present a unified codec language model SpeechEdit that extends zero-shot TTS with a selective control mechanism. By default, SpeechEdit reproduces the complete acoustic profile inferred from the speech prompt, but it selectively overrides only the attributes specified by explicit control instructions. To enable controllable modeling, SpeechEdit is trained on our newly constructed LibriEdit dataset, which provides delta (difference-aware) training pairs derived from LibriHeavy. Experimental results show that our approach maintains naturalness and robustness while offering flexible and localized control over desired attributes. Audio samples are available at https://speech-editing.github.io/speech-editing/.
+ oai:arXiv.org:2601.12480v1
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hanchen Pei, Shujie Liu, Yanqing Liu, Jianwei Yu, Yuanhang Qian, Gongping Huang, Sheng Zhao, Yan Lu
+
+
+ NeuralFur: Animal Fur Reconstruction From Multi-View Images
+ https://arxiv.org/abs/2601.12481
+ arXiv:2601.12481v1 Announce Type: new
+Abstract: Reconstructing realistic animal fur geometry from images is a challenging task due to the fine-scale details, self-occlusion, and view-dependent appearance of fur. In contrast to human hairstyle reconstruction, there are also no datasets that can be leveraged to learn a fur prior for different animals. In this work, we present the first multi-view-based method for high-fidelity 3D fur modeling of animals using a strand-based representation, leveraging the general knowledge of a vision-language model. Given multi-view RGB images, we first reconstruct a coarse surface geometry using traditional multi-view stereo techniques. We then use a vision-language model (VLM) system to retrieve information about the realistic length structure of the fur for each part of the body. We use this knowledge to construct the animal's furless geometry and grow strands atop it. The fur reconstruction is supervised with both geometric and photometric losses computed from multi-view images. To mitigate orientation ambiguities stemming from the Gabor filters that are applied to the input images, we additionally utilize the VLM to guide the strands' growth direction and their relation to the gravity vector, which we incorporate as a loss. With this new schema of using a VLM to guide 3D reconstruction from multi-view inputs, we show generalization across a variety of animals with different fur types. For additional results and code, please refer to https://neuralfur.is.tue.mpg.de.
+ oai:arXiv.org:2601.12481v1
+ cs.CV
+ cs.GR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Vanessa Sklyarova, Berna Kabadayi, Anastasios Yiannakidis, Giorgio Becherini, Michael J. Black, Justus Thies
+
+
+ A Multimodal Assistive System for Product Localization and Retrieval for People who are Blind or have Low Vision
+ https://arxiv.org/abs/2601.12486
+ arXiv:2601.12486v1 Announce Type: new
+Abstract: Shopping is a routine activity for sighted individuals, yet for people who are blind or have low vision (pBLV), locating and retrieving products in physical environments remains a challenge. This paper presents a multimodal wearable assistive system that integrates object detection with vision-language models to support independent product or item retrieval, with the goal of enhancing users' autonomy and sense of agency. The system operates through three phases: product search, which identifies target products using YOLO-World detection combined with embedding similarity and color histogram matching; product navigation, which provides spatialized sonification and VLM-generated verbal descriptions to guide users toward the target; and product correction, which verifies whether the user has reached the correct product and provides corrective feedback when necessary. Technical evaluation demonstrated promising performance across all modules, with product detection achieving near-perfect accuracy at close range and high accuracy when facing shelves within 1.5 m. VLM-based navigation achieved up to 94.4% accuracy, and correction accuracy exceeded 86% under optimal model configurations. These results demonstrate the system's potential to address the last-meter problem in assistive shopping. Future work will focus on user studies with pBLV participants and integration with multi-scale navigation ecosystems.
+ oai:arXiv.org:2601.12486v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ligao Ruan, Giles Hamilton-Fletcher, Mahya Beheshti, Todd E Hudson, Maurizio Porfiri, John-Ross Rizzo
+
+
+ VASTU: Value-Aligned Social Toolkit for Online Content Curation
+ https://arxiv.org/abs/2601.12491
+ arXiv:2601.12491v1 Announce Type: new
+Abstract: Detecting what content communities value is a foundational challenge for social computing systems -- from feed curation and content ranking to moderation tools and personalized recommendation systems. Yet existing approaches remain fragmented across methodological paradigms, and it remains unclear which methods best capture community-specific notions of value. We introduce VASTU (Value-Aligned Social Toolkit for Online Content Curation), a benchmark and evaluation framework for systematically comparing approaches to detecting community-valued content. VASTU includes a dataset of 75,000 comments from 15 diverse Reddit communities, annotated with community approval labels and rich linguistic features. Using VASTU, we evaluate feature-based models, transformers, prompted and fine-tuned language models under global versus community-specific training regimes. We find that community-specific models consistently outperform global approaches, with fine-tuned transformers achieving the strongest performance (0.72 AUROC). Notably, fine-tuned SLMs (0.65 AUROC) substantially outperform prompted LLMs (0.60 AUROC) despite being 100 times smaller. Counterintuitively, chain-of-thought prompting provides no benefit, and reasoning models perform the worst (0.53 AUROC), suggesting this task requires learning community norms rather than test-time reasoning. By releasing VASTU, we provide a standardized benchmark to advance research on value-aligned sociotechnical systems.
+ oai:arXiv.org:2601.12491v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Agam Goyal, Xianyang Zhan, Charlotte Lambert, Koustuv Saha, Eshwar Chandrasekharan
+
+
+ Histopath-C: Towards Realistic Domain Shifts for Histopathology Vision-Language Adaptation
+ https://arxiv.org/abs/2601.12493
+ arXiv:2601.12493v1 Announce Type: new
+Abstract: Medical vision-language models (VLMs) have shown remarkable performance in various medical imaging domains such as histopathology by leveraging pre-trained, contrastive models that exploit visual and textual information. However, histopathology images may exhibit severe domain shifts, such as staining, contamination, blurring, and noise, which may severely degrade the VLM's downstream performance. In this work, we introduce Histopath-C, a new benchmark with realistic synthetic corruptions designed to mimic real-world distribution shifts observed in digital histopathology. Our framework dynamically applies corruptions to any available dataset and evaluates Test-Time Adaptation (TTA) mechanisms on the fly. We then propose LATTE, a transductive, low-rank adaptation strategy that exploits multiple text templates, mitigating the sensitivity of histopathology VLMs to diverse text inputs. Our approach outperforms state-of-the-art TTA methods originally designed for natural images across a breadth of histopathology datasets, demonstrating the effectiveness of our proposed design for robust adaptation in histopathology images. Code and data are available at https://github.com/Mehrdad-Noori/Histopath-C.
+ oai:arXiv.org:2601.12493v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Mehrdad Noori, Gustavo Adolfo Vargas Hakim, David Osowiechi, Fereshteh Shakeri, Ali Bahri, Moslem Yazdanpanah, Sahar Dastani, Ismail Ben Ayed, Christian Desrosiers
+
+
+ Harmonizing the Arabic Audio Space with Data Scheduling
+ https://arxiv.org/abs/2601.12494
+ arXiv:2601.12494v1 Announce Type: new
+Abstract: Audio large language models (LLMs) enable unified speech understanding and generation, yet their adaptation to linguistically complex, dialect-rich settings remains underexplored. This paper presents the first systematic study of multi-task instruction tuning for an Arabic-centric audio LLM, covering a hierarchy of generative tasks (ASR, speech summarization) and discriminative tasks (dialect and emotion identification). To support this study, we introduce AraMega-SSum, a novel dataset for Arabic speech summarization. We fine-tune Qwen2.5-Omni (7B) and propose Task-Progressive Curriculum (TPC) along with Aligner-Based Diverse Sampling (ADS), a strategy that constructs information-dense batches by selecting task- and label-balanced examples. Our results reveal a critical efficiency-robustness trade-off: while ADS accelerates initial convergence and boosts paralinguistic F1-scores, its inherent gradient volatility can destabilize generative decoding under prolonged training. Furthermore, while TPC stabilizes core acoustic mapping, it often induces negative transfer in downstream tasks. We demonstrate that a hybrid TPC+ADS strategy provides an optimal training "recipe", first establishing a robust representational foundation before employing diversity-aware refinement to capture fine-grained nuances. These findings offer practical guidance for the efficient adaptation of Omni-models in complex, low-resource multimodal environments.
+ oai:arXiv.org:2601.12494v1
+ cs.SD
+ cs.AI
+ cs.CL
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Hunzalah Hassan Bhatti, Firoj Alam, Shammur Absar Chowdhury
+
+
+ Failure Modes in Multi-Hop QA: The Weakest Link Law and the Recognition Bottleneck
+ https://arxiv.org/abs/2601.12499
+ arXiv:2601.12499v1 Announce Type: new
+Abstract: Despite scaling to massive context windows, Large Language Models (LLMs) struggle with multi-hop reasoning due to inherent position bias, which causes them to overlook information at certain positions. Whether these failures stem from an inability to locate evidence (recognition failure) or integrate it (synthesis failure) is unclear. We introduce Multi-Focus Attention Instruction (MFAI), a semantic probe to disentangle these mechanisms by explicitly steering attention towards selected positions. Across 5 LLMs on two multi-hop QA tasks (MuSiQue and NeoQA), we establish the "Weakest Link Law": multi-hop reasoning performance collapses to the performance level of the least visible evidence. Crucially, this failure is governed by absolute position rather than the linear distance between facts (performance variance $<3\%$). We further identify a duality in attention steering: while matched MFAI resolves recognition bottlenecks, improving accuracy by up to 11.5% in low-visibility positions, misleading MFAI triggers confusion in real-world tasks but is successfully filtered in synthetic tasks. Finally, we demonstrate that "thinking" models that utilize System-2 reasoning effectively locate and integrate the required information, matching gold-only baselines even in noisy, long-context settings.
+ oai:arXiv.org:2601.12499v1
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Meiru Zhang, Zaiqiao Meng, Nigel Collier
+
+
+ Video Individual Counting and Tracking from Moving Drones: A Benchmark and Methods
+ https://arxiv.org/abs/2601.12500
+ arXiv:2601.12500v1 Announce Type: new
+Abstract: Counting and tracking dense crowds in large-scale scenes is highly challenging, yet existing methods mainly rely on datasets captured by fixed cameras, which provide limited spatial coverage and are inadequate for large-scale dense crowd analysis. To address this limitation, we propose a flexible solution using moving drones to capture videos and perform video-level crowd counting and tracking of unique pedestrians across entire scenes. We introduce MovingDroneCrowd++, the largest video-level dataset for dense crowd counting and tracking captured by moving drones, covering diverse and complex conditions with varying flight altitudes, camera angles, and illumination. Existing methods fail to achieve satisfactory performance on this dataset. To this end, we propose GD3A (Global Density Map Decomposition via Descriptor Association), a density map-based video individual counting method that avoids explicit localization. GD3A establishes pixel-level correspondences between pedestrian descriptors across consecutive frames via optimal transport with an adaptive dustbin score, enabling the decomposition of global density maps into shared, inflow, and outflow components. Building on this framework, we further introduce DVTrack, which converts descriptor-level matching into instance-level associations through a descriptor voting mechanism for pedestrian tracking. Experimental results show that our methods significantly outperform existing approaches under dense crowds and complex motion, reducing counting error by 47.4 percent and improving tracking performance by 39.2 percent.
+ oai:arXiv.org:2601.12500v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yaowu Fan, Jia Wan, Tao Han, Andy J. Ma, Antoni B. Chan
+
+
+ Semidefinite Programming for Quantum Channel Learning
+ https://arxiv.org/abs/2601.12502
+ arXiv:2601.12502v1 Announce Type: new
+Abstract: The problem of reconstructing a quantum channel from a sample of classical data is considered. When the total fidelity can be represented as a ratio of two quadratic forms (e.g., in the case of mapping a mixed state to a pure state, projective operators, unitary learning, and others), Semidefinite Programming (SDP) can be applied to solve the fidelity optimization problem with respect to the Choi matrix. A remarkable feature of SDP is that the optimization is convex, which allows the problem to be efficiently solved by a variety of numerical algorithms. We have tested several commercially available SDP solvers, all of which allowed for the reconstruction of quantum channels of different forms. A notable feature is that the Kraus rank of the obtained quantum channel typically comprises less than a few percent of its maximal possible value. This suggests that a relatively small Kraus rank quantum channel is typically sufficient to describe experimentally observed classical data. The theory was also applied to the problem of reconstructing projective operators from data. Finally, we discuss a classical computational model based on quantum channel transformation, performed and calculated on a classical computer, possibly hardware-optimized.
+ oai:arXiv.org:2601.12502v1
+ cs.LG
+ cs.NA
+ math.NA
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mikhail Gennadievich Belov, Victor Victorovich Dubov, Vadim Konstantinovich Ivanov, Alexander Yurievich Maslov, Olga Vladimirovna Proshina, Vladislav Gennadievich Malyshkin
+
+
+ Hard Clique Formulas for Resolution
+ https://arxiv.org/abs/2601.12503
+ arXiv:2601.12503v1 Announce Type: new
+Abstract: We show how to convert any unsatisfiable 3-CNF formula which is sparse and exponentially hard to refute in Resolution into a negative instance of the $k$-clique problem whose corresponding natural encoding as a CNF formula is $n^{\Omega(k)}$-hard to refute in Resolution. This applies to any function $k = k(n)$ of the number $n$ of vertices, provided $k_0 \leq k \leq n^{1/c_0}$, where $k_0$ and $c_0$ are small constants. We establish this by demonstrating that Resolution can simulate the correctness proof of a particular kind of reduction from 3-SAT to the parameterized clique problem. This also re-establishes the known conditional hardness result for $k$-clique which states that if the Exponential Time Hypothesis (ETH) holds, then the $k$-clique problem cannot be solved in time $n^{o(k)}$. Since it is known that the analogue of ETH holds for Resolution, unconditionally and with explicit hard instances, this gives a way to obtain explicit instances of $k$-clique that are unconditionally $n^{\Omega(k)}$-hard to refute in Resolution. This answers an open problem that has appeared in the literature at least twice.
+ oai:arXiv.org:2601.12503v1
+ cs.CC
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Albert Atserias
+
+
+ DoPE: Decoy-Oriented Perturbation Encapsulation: Human-Readable, AI-Hostile Documents for Academic Integrity
+ https://arxiv.org/abs/2601.12505
+ arXiv:2601.12505v1 Announce Type: new
+Abstract: Multimodal Large Language Models (MLLMs) can directly consume exam documents, threatening conventional assessments and academic integrity. We present DoPE (Decoy-Oriented Perturbation Encapsulation), a document-layer defense framework that embeds semantic decoys into PDF/HTML assessments to exploit render-parse discrepancies in MLLM pipelines. By instrumenting exams at authoring time, DoPE provides model-agnostic prevention (stop or confound automated solving) and detection (flag blind AI reliance) without relying on conventional one-shot classifiers. We formalize prevention and detection tasks, and introduce FewSoRT-Q, an LLM-guided pipeline that generates question-level semantic decoys and FewSoRT-D to encapsulate them into watermarked documents. We evaluate on Integrity-Bench, a novel benchmark of 1826 exams (PDF+HTML) derived from public QA datasets and OpenCourseWare. Against black-box MLLMs from OpenAI and Anthropic, DoPE yields strong empirical gains: a 91.4% detection rate at an 8.7% false-positive rate using an LLM-as-Judge verifier, and prevents successful completion or induces decoy-aligned failures in 96.3% of attempts. We release Integrity-Bench, our toolkit, and evaluation code to enable reproducible study of document-layer defenses for academic integrity.
+ oai:arXiv.org:2601.12505v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ashish Raj Shekhar, Shiven Agarwal, Priyanuj Bordoloi, Yash Shah, Tejas Anvekar, Vivek Gupta
+
+
+ SDCoNet: Saliency-Driven Multi-Task Collaborative Network for Remote Sensing Object Detection
+ https://arxiv.org/abs/2601.12507
+ arXiv:2601.12507v1 Announce Type: new
+Abstract: In remote sensing images, complex backgrounds, weak object signals, and small object scales make accurate detection particularly challenging, especially under low-quality imaging conditions. A common strategy is to integrate single-image super-resolution (SR) before detection; however, such serial pipelines often suffer from misaligned optimization objectives, feature redundancy, and a lack of effective interaction between SR and detection. To address these issues, we propose a Saliency-Driven multi-task Collaborative Network (SDCoNet) that couples SR and detection through implicit feature sharing while preserving task specificity. SDCoNet employs the swin transformer-based shared encoder, where hierarchical window-shifted self-attention supports cross-task feature collaboration and adaptively balances the trade-off between texture refinement and semantic representation. In addition, a multi-scale saliency prediction module produces importance scores to select key tokens, enabling focused attention on weak object regions, suppression of background clutter, and suppression of adverse features introduced by multi-task coupling. Furthermore, a gradient routing strategy is introduced to mitigate optimization conflicts. It first stabilizes detection semantics and subsequently routes SR gradients along a detection-oriented direction, enabling the framework to guide the SR branch to generate high-frequency details that are explicitly beneficial for detection. Experiments on public datasets, including NWPU VHR-10-Split, DOTAv1.5-Split, and HRSSD-Split, demonstrate that the proposed method, while maintaining competitive computational efficiency, significantly outperforms existing mainstream algorithms in small object detection on low-quality remote sensing images. Our code is available at https://github.com/qiruo-ya/SDCoNet.
+ oai:arXiv.org:2601.12507v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ruo Qi, Linhui Dai, Yusong Qin, Chaolei Yang, Yanshan Li
+
+
+ AlphaSyndrome: Tackling the Syndrome Measurement Circuit Scheduling Problem for QEC Codes
+ https://arxiv.org/abs/2601.12509
+ arXiv:2601.12509v1 Announce Type: new
+Abstract: Quantum error correction (QEC) is essential for scalable quantum computing, yet repeated syndrome-measurement cycles dominate its spacetime and hardware cost. Although stabilizers commute and admit many valid execution orders, different schedules induce distinct error-propagation paths under realistic noise, leading to large variations in logical error rate. Outside of surface codes, effective syndrome-measurement scheduling remains largely unexplored. We present AlphaSyndrome, an automated synthesis framework for scheduling syndrome-measurement circuits in general commuting-stabilizer codes under minimal assumptions: mutually commuting stabilizers and a heuristic decoder. AlphaSyndrome formulates scheduling as an optimization problem that shapes error propagation to (i) avoid patterns close to logical operators and (ii) remain within the decoder's correctable region. The framework uses Monte Carlo Tree Search (MCTS) to explore ordering and parallelism, guided by code structure and decoder feedback. Across diverse code families, sizes, and decoders, AlphaSyndrome reduces logical error rates by 80.6% on average (up to 96.2%) relative to depth-optimal baselines, matches Google's hand-crafted surface-code schedules, and outperforms IBM's schedule for the Bivariate Bicycle code.
+ oai:arXiv.org:2601.12509v1
+ cs.ET
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuhao Liu, Shuohao Ping, Junyu Zhou, Ethan Decker, Justin Kalloor, Mathias Weiden, Kean Chen, Yunong Shi, Ali Javadi-Abhari, Costin Iancu, Gushu Li
+
+
+ Fine-Tuning Cycle-GAN for Domain Adaptation of MRI Images
+ https://arxiv.org/abs/2601.12512
+ arXiv:2601.12512v1 Announce Type: new
+Abstract: Magnetic Resonance Imaging (MRI) scans acquired from different scanners or institutions often suffer from domain shifts owing to variations in hardware, protocols, and acquisition parameters. This discrepancy degrades the performance of deep learning models trained on source domain data when applied to target domain images. In this study, we propose a Cycle-GAN-based model for unsupervised medical-image domain adaptation. Leveraging Cycle-GANs, our model learns bidirectional mappings between the source and target domains without paired training data, preserving the anatomical content of the images. By leveraging Cycle-GAN capabilities with content and disparity loss for adaptation tasks, we ensured image-domain adaptation while maintaining image integrity. Several experiments on MRI datasets demonstrated the efficacy of our model in bidirectional domain adaptation without labelled data. Furthermore, this research offers promising avenues for improving diagnostic accuracy in healthcare. The statistical results confirm that our approach improves model performance and reduces domain-related variability, thus contributing to more precise and consistent medical image analysis.
+ oai:arXiv.org:2601.12512v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mohd Usama, Belal Ahmad, Faleh Menawer R Althiyabi
+
+
+ Cooperative Multi-agent RL with Communication Constraints
+ https://arxiv.org/abs/2601.12518
+ arXiv:2601.12518v1 Announce Type: new
+Abstract: Cooperative MARL often assumes frequent access to global information in a data buffer, such as team rewards or other agents' actions, which is typically unrealistic in decentralized MARL systems due to high communication costs. When communication is limited, agents must rely on outdated information to estimate gradients and update their policies. A common approach to handle missing data is called importance sampling, in which we reweight old data from a base policy to estimate gradients for the current policy. However, it quickly becomes unstable when the communication is limited (i.e., missing data probability is high), so that the base policy in importance sampling is outdated. To address this issue, we propose a technique called base policy prediction, which utilizes old gradients to predict the policy update and collect samples for a sequence of base policies, which reduces the gap between the base policy and the current policy. This approach enables effective learning with significantly fewer communication rounds, since the samples of predicted base policies could be collected within one communication round. Theoretically, we show that our algorithm converges to an $\varepsilon$-Nash equilibrium in potential games with only $O(\varepsilon^{-3/4})$ communication rounds and $O(poly(\max_i |A_i|)\varepsilon^{-11/4})$ samples, improving existing state-of-the-art results in communication cost, as well as sample complexity without the exponential dependence on the joint action space size. We also extend these results to general Markov Cooperative Games to find an agent-wise local maximum. Empirically, we test the base policy prediction algorithm in both simulated games and MAPPO for complex environments.
+ oai:arXiv.org:2601.12518v1
+ cs.LG
+ cs.AI
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nuoya Xiong, Aarti Singh
+
+
+ Learning Relativistic Geodesics and Chaotic Dynamics via Stabilized Lagrangian Neural Networks
+ https://arxiv.org/abs/2601.12519
+ arXiv:2601.12519v1 Announce Type: new
+Abstract: Lagrangian Neural Networks (LNNs) can learn arbitrary Lagrangians from trajectory data, but their unusual optimization objective leads to significant training instabilities that limit their application to complex systems. We propose several improvements that address these fundamental challenges, namely, a Hessian regularization scheme that penalizes unphysical signatures in the Lagrangian's second derivatives with respect to velocities, preventing the network from learning unstable dynamics, activation functions that are better suited to the problem of learning Lagrangians, and a physics-aware coordinate scaling that improves stability. We systematically evaluate these techniques alongside previously proposed methods for improving stability. Our improved architecture successfully trains on systems of unprecedented complexity, including triple pendulums, and achieves a 96.6\% lower validation loss and 90.68\% better stability than baseline LNNs in double pendulum systems. With the improved framework, we show that our LNNs can learn Lagrangians representing geodesic motion in both non-relativistic and general relativistic settings. To deal with the relativistic setting, we extended our regularization to penalize violations of Lorentzian signatures, which allowed us to predict a geodesic Lagrangian under the AdS\textsubscript{4} spacetime metric directly from trajectory data, which to our knowledge has not been done in the literature before. This opens new possibilities for automated discovery of geometric structures in physics, including extraction of spacetime metric tensor components from geodesic trajectories. While our approach inherits some limitations of the original LNN framework, particularly the requirement for invertible Hessians, it significantly expands the practical applicability of LNNs for scientific discovery tasks.
+ oai:arXiv.org:2601.12519v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Abdullah Umut Hamzaogullari, Arkadas Ozakin
+
+
+ Improved Bug Localization with AI Agents Leveraging Hypothesis and Dynamic Cognition
+ https://arxiv.org/abs/2601.12522
+ arXiv:2601.12522v1 Announce Type: new
+Abstract: Software bugs cost technology providers (e.g., AT&T) billions annually and cause developers to spend roughly 50% of their time on bug resolution. Traditional methods for bug localization often analyze the suspiciousness of code components (e.g., methods, documents) in isolation, overlooking their connections with other components in the codebase. Recent advances in Large Language Models (LLMs) and agentic AI techniques have shown strong potential for code understanding, but still lack causal reasoning during code exploration and struggle to manage growing context effectively, limiting their capability. In this paper, we present a novel agentic technique for bug localization -- CogniGent -- that overcomes the limitations above by leveraging multiple AI agents capable of causal reasoning, call-graph-based root cause analysis and context engineering. It emulates developers-inspired debugging practices (a.k.a., dynamic cognitive debugging) and conducts hypothesis testing to support bug localization. We evaluate CogniGent on a curated dataset of 591 bug reports using three widely adopted performance metrics and compare it against six established baselines from the literature. Experimental results show that our technique consistently outperformed existing traditional and LLM-based techniques, achieving MAP improvements of 23.33-38.57% at the document and method levels. Similar gains were observed in MRR, with increases of 25.14-53.74% at both granularity levels. Statistical significance tests also confirm the superiority of our technique. By addressing the reasoning, dependency, and context limitations, CogniGent advances the state of bug localization, bridging human-like cognition with agentic automation for improved performance.
+ oai:arXiv.org:2601.12522v1
+ cs.SE
+ cs.AI
+ cs.IR
+ cs.LG
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Asif Mohammed Samir, Mohammad Masudur Rahman
+
+
+ Enabling High-Curvature Navigation in Eversion Robots through Buckle-Inducing Constrictive Bands
+ https://arxiv.org/abs/2601.12523
+ arXiv:2601.12523v1 Announce Type: new
+Abstract: Tip-growing eversion robots are renowned for their ability to access remote spaces through narrow passages. However, achieving reliable navigation remains a significant challenge. Existing solutions often rely on artificial muscles integrated into the robot body or active tip-steering mechanisms. While effective, these additions introduce structural complexity and compromise the defining advantages of eversion robots: their inherent softness and compliance. In this paper, we propose a passive approach to reduce bending stiffness by purposefully introducing buckling points along the robot's outer wall. We achieve this by integrating inextensible diameter-reducing circumferential bands at regular intervals along the robot body, facilitating forward motion through tortuous, obstacle-cluttered paths. Rather than relying on active steering, our approach leverages the robot's natural interaction with the environment, allowing for smooth, compliant navigation. We present a Cosserat rod-based mathematical model to quantify this behavior, capturing the local stiffness reductions caused by the constricting bands and their impact on global bending mechanics. Experimental results demonstrate that these bands reduce the robot's stiffness when bent at the tip by up to 91 percent, enabling consistent traversal of 180 degree bends with a bending radius of as low as 25 mm, notably lower than the 35 mm achievable by standard eversion robots under identical conditions. The feasibility of the proposed method is further demonstrated through a case study in a colon phantom. By significantly improving maneuverability without sacrificing softness or increasing mechanical complexity, this approach expands the applicability of eversion robots in highly curved pathways, whether in relation to pipe inspection or medical procedures such as colonoscopy.
+ oai:arXiv.org:2601.12523v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Cem Suulker, Muhie Al Haimus, Thomas Mack, Mohammad Sheikhsofla, Neri Niccol\`o Dei, Reza Kashef, Hadi Sadati, Federica Barontini, Fanny Ficuciello, Alberto Arezzo, Bruno Siciliano, Sebastien Ourselin, Kaspar Althoefer
+
+
+ SGCP: A Self-Organized Game-Theoretic Framework For Collaborative Perception
+ https://arxiv.org/abs/2601.12524
+ arXiv:2601.12524v1 Announce Type: new
+Abstract: Collaborative perception holds great promise for improving safety in autonomous driving, particularly in dense traffic where vehicles can share sensory information to overcome individual blind spots and extend awareness. However, deploying such collaboration at scale remains difficult when communication bandwidth is limited and no roadside infrastructure is available. To overcome these limitations, we introduce a fully decentralized framework that enables vehicles to self-organize into cooperative groups using only vehicle-to-vehicle communication. The approach decomposes the problem into two sequential game-theoretic stages. In the first stage, vehicles form stable clusters by evaluating mutual sensing complementarity and motion coherence, and each cluster elects a coordinator. In the second stage, the coordinator guides its members to selectively transmit point cloud segments from perceptually salient regions through a non-cooperative potential game, enabling efficient local fusion. Global scene understanding is then achieved by exchanging compact detection messages across clusters rather than raw sensor data. We design distributed algorithms for both stages that guarantee monotonic improvement of the system-wide potential function. Comprehensive experiments on the CARLA-OpenCDA-NS3 co-simulation platform show that our method reduces communication overhead while delivering higher perception accuracy and wider effective coverage compared to existing baselines.
+ oai:arXiv.org:2601.12524v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zechuan Gong, Hui Zhang, Yuquan Yang, Wenyu Lu
+
+
+ Approximating splits for decision trees quickly in sparse data streams
+ https://arxiv.org/abs/2601.12525
+ arXiv:2601.12525v1 Announce Type: new
+Abstract: Decision trees are one of the most popular classifiers in the machine learning literature. While the most common decision tree learning algorithms treat data as a batch, numerous algorithms have been proposed to construct decision trees from a data stream. A standard training strategy involves augmenting the current tree by changing a leaf node into a split. Here we typically maintain counters in each leaf which allow us to determine the optimal split, and whether the split should be done. In this paper we focus on how to speed up the search for the optimal split when dealing with sparse binary features and a binary class. We focus on finding splits that have the approximately optimal information gain or Gini index. In both cases finding the optimal split can be done in $O(d)$ time, where $d$ is the number of features. We propose an algorithm that yields $(1 + \alpha)$ approximation when using conditional entropy in amortized $O(\alpha^{-1}(1 + m\log d) \log \log n)$ time, where $m$ is the number of 1s in a data point, and $n$ is the number of data points. Similarly, for Gini index, we achieve $(1 + \alpha)$ approximation in amortized $O(\alpha^{-1} + m \log d)$ time. Our approach is beneficial for sparse data where $m \ll d$. In our experiments we find almost-optimal splits efficiently, faster than the baseline, exceeding the theoretical approximation guarantees.
+ oai:arXiv.org:2601.12525v1
+ cs.LG
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1137/1.9781611978520.69
+ In Proceedings of the 2025 SIAM International Conference on Data Mining (SDM) (pp. 647-655) 2025
+ Nikolaj Tatti
+
+
+ Deep Feature Deformation Weights
+ https://arxiv.org/abs/2601.12527
+ arXiv:2601.12527v1 Announce Type: new
+Abstract: Handle-based mesh deformation has been a long-standing paradigm in computer graphics, enabling intuitive shape edits from sparse controls. Classic techniques offer precise and rapid deformation control. However, they solve an optimization problem with constraints defined by control handle placement, requiring a user to know a priori the ideal distribution of handles on the shape to accomplish the desired edit. The mapping from handle set to deformation behavior is often unintuitive and, importantly, non-semantic. Modern data-driven methods, on the other hand, leverage a data prior to obtain semantic edits, but are slow and imprecise. We propose a technique that fuses the semantic prior of data with the precise control and speed of traditional frameworks. Our approach is surprisingly simple yet effective: deep feature proximity makes for smooth and semantic deformation weights, with no need for additional regularization. The weights can be computed in real-time for any surface point, whereas prior methods require optimization for new handles. Moreover, the semantic prior from deep features enables co-deformation of semantic parts. We introduce an improved feature distillation pipeline, barycentric feature distillation, which efficiently uses the visual signal from shape renders to minimize distillation cost. This allows our weights to be computed for high resolution meshes in under a minute, in contrast to potentially hours for both classical and neural methods. We preserve and extend properties of classical methods through feature space constraints and locality weighting. Our field representation allows for automatic detection of semantic symmetries, which we use to produce symmetry-preserving deformations. We show a proof-of-concept application which can produce deformations for meshes up to 1 million faces in real-time on a consumer-grade machine.
+ oai:arXiv.org:2601.12527v1
+ cs.CV
+ cs.GR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Richard Liu, Itai Lang, Rana Hanocka
+
+
+ How to Get Close to the Median Shape
+ https://arxiv.org/abs/2601.12529
+ arXiv:2601.12529v1 Announce Type: new
+Abstract: $\renewcommand{\Re}{\mathbb{R}}\newcommand{\eps}{{\varepsilon}}\newcommand{\poly}{\mathrm{poly}} $In this paper, we study the problem of $L_1$-fitting a shape to a set of $n$ points in $\Re^d$ (where $d$ is a fixed constant), where the target is to minimize the sum of distances of the points to the shape, or the sum of squared distances. We present a general technique for computing a $(1 + \eps ) $-approximation for such a problem, with running time $O(n + \poly( \log n, 1/\eps))$, where $\poly(\log n, 1/\eps)$ is a polynomial of constant degree of $\log n$ and $1/\eps$ (the power of the polynomial is a function of $d$). The new algorithm runs in linear time for a fixed $\eps>0$, and is the first subquadratic algorithm for this problem.
+ Applications of the algorithm include best fitting either a circle, a sphere, or a cylinder to a set of points when minimizing the sum of distances (or squared distances) to the respective shape.
+ oai:arXiv.org:2601.12529v1
+ cs.CG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sariel Har-Peled
+
+
+ XRefine: Attention-Guided Keypoint Match Refinement
+ https://arxiv.org/abs/2601.12530
+ arXiv:2601.12530v1 Announce Type: new
+Abstract: Sparse keypoint matching is crucial for 3D vision tasks, yet current keypoint detectors often produce spatially inaccurate matches. Existing refinement methods mitigate this issue through alignment of matched keypoint locations, but they are typically detector-specific, requiring retraining for each keypoint detector. We introduce XRefine, a novel, detector-agnostic approach for sub-pixel keypoint refinement that operates solely on image patches centered at matched keypoints. Our cross-attention-based architecture learns to predict refined keypoint coordinates without relying on internal detector representations, enabling generalization across detectors. Furthermore, XRefine can be extended to handle multi-view feature tracks. Experiments on MegaDepth, KITTI, and ScanNet demonstrate that the approach consistently improves geometric estimation accuracy, achieving superior performance compared to existing refinement methods while maintaining runtime efficiency. Our code and trained models can be found at https://github.com/boschresearch/xrefine.
+ oai:arXiv.org:2601.12530v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jan Fabian Schmid, Annika Hagemann
+
+
+ BirdsEye-RU: A Dataset For Detecting Faces from Overhead Images
+ https://arxiv.org/abs/2601.12533
+ arXiv:2601.12533v1 Announce Type: new
+Abstract: Detecting faces in overhead images remains a significant challenge due to extreme scale variations and environmental clutter. To address this, we created the BirdsEye-RU dataset, a comprehensive collection of 2,978 images containing over eight thousand annotated faces. This dataset is specifically designed to capture small and distant faces across diverse environments, containing both drone images and smartphone-captured images from high altitude. We present a detailed description of the BirdsEye-RU dataset in this paper. We made our dataset freely available to the public, and it can be accessed at https://www.kaggle.com/datasets/mdahanafarifkhan/birdseye-ru.
+ oai:arXiv.org:2601.12533v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Md. Ahanaf Arif Khan, Ariful Islam, Sangeeta Biswas, Md. Iqbal Aziz Khan, Subrata Pramanik, Sanjoy Kumar Chakrabarty, Bimal Kumar Pramanik
+
+
+ Encoding Emotion Through Self-Supervised Eye Movement Reconstruction
+ https://arxiv.org/abs/2601.12534
+ arXiv:2601.12534v1 Announce Type: new
+Abstract: The relationship between emotional expression and eye movement is well-documented, with literature establishing that gaze patterns are reliable indicators of emotion. However, most studies utilize specialized, high-resolution eye-tracking equipment, limiting the potential reach of findings. We investigate how eye movement can be used to predict multimodal markers of emotional expression from naturalistic, low-resolution videos. We utilize a collection of video interviews from the USC Shoah Foundation's Visual History Archive with Holocaust survivors as they recount their experiences in the Auschwitz concentration camp. Inspired by pretraining methods on language models, we develop a novel gaze detection model that uses self-supervised eye movement reconstruction that can effectively leverage unlabeled video. We use this model's encoder embeddings to fine-tune models on two downstream tasks related to emotional expression. The first is aligning eye movement with directional emotion estimates from speech. The second task is using eye gaze as a predictor of three momentary manifestations of emotional behaviors: laughing, crying/sobbing, and sighing. We find our new model is predictive of emotion outcomes and observe a positive correlation between pretraining performance and emotion processing performance for both experiments. We conclude that self-supervised eye movement reconstruction is an effective method for encoding the affective signal that eye movements carry.
+ oai:arXiv.org:2601.12534v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Marcus Ma, Jordan Prescott, Emily Zhou, Tiantian Feng, Kleanthis Avramidis, Gabor Mihaly Toth, Shrikanth Narayanan
+
+
+ Improving Low-Resource Machine Translation via Round-Trip Reinforcement Learning
+ https://arxiv.org/abs/2601.12535
+ arXiv:2601.12535v1 Announce Type: new
+Abstract: Low-resource machine translation (MT) has gained increasing attention as parallel data from low-resource language communities is collected, but many potential methods for improving low-resource MT remain unexplored. We investigate a self-supervised reinforcement-learning-based fine-tuning for translation in low-resource settings using round-trip bootstrapping with the No Language Left Behind (NLLB) family of models. Our approach translates English into a target low-resource language and then back into English, using a combination of chrF++ and BLEU as the reward function on the reconstructed English sentences. Using the NLLB-MD dataset, we evaluate both the 600M and 1.3B parameter NLLB models and observe consistent improvements for the following languages: Central Aymara, Friulian, Wolof and Russian. Qualitative inspection of translation outputs indicates increased fluency and semantic fidelity. We argue that our method can further benefit from scale, enabling models to increasingly leverage their pretrained knowledge and continue self-improving.
+ oai:arXiv.org:2601.12535v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ahmed Attia, Alham Fikri
+
+
+ Agentic Reasoning for Large Language Models
+ https://arxiv.org/abs/2601.12538
+ arXiv:2601.12538v1 Announce Type: new
+Abstract: Reasoning is a fundamental cognitive process underlying inference, problem-solving, and decision-making. While large language models (LLMs) demonstrate strong reasoning capabilities in closed-world settings, they struggle in open-ended and dynamic environments. Agentic reasoning marks a paradigm shift by reframing LLMs as autonomous agents that plan, act, and learn through continual interaction. In this survey, we organize agentic reasoning along three complementary dimensions. First, we characterize environmental dynamics through three layers: foundational agentic reasoning, which establishes core single-agent capabilities including planning, tool use, and search in stable environments; self-evolving agentic reasoning, which studies how agents refine these capabilities through feedback, memory, and adaptation; and collective multi-agent reasoning, which extends intelligence to collaborative settings involving coordination, knowledge sharing, and shared goals. Across these layers, we distinguish in-context reasoning, which scales test-time interaction through structured orchestration, from post-training reasoning, which optimizes behaviors via reinforcement learning and supervised fine-tuning. We further review representative agentic reasoning frameworks across real-world applications and benchmarks, including science, robotics, healthcare, autonomous research, and mathematics. This survey synthesizes agentic reasoning methods into a unified roadmap bridging thought and action, and outlines open challenges and future directions, including personalization, long-horizon interaction, world modeling, scalable multi-agent training, and governance for real-world deployment.
+ oai:arXiv.org:2601.12538v1
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tianxin Wei, Ting-Wei Li, Zhining Liu, Xuying Ning, Ze Yang, Jiaru Zou, Zhichen Zeng, Ruizhong Qiu, Xiao Lin, Dongqi Fu, Zihao Li, Mengting Ai, Duo Zhou, Wenxuan Bao, Yunzhe Li, Gaotang Li, Cheng Qian, Yu Wang, Xiangru Tang, Yin Xiao, Liri Fang, Hui Liu, Xianfeng Tang, Yuji Zhang, Chi Wang, Jiaxuan You, Heng Ji, Hanghang Tong, Jingrui He
+
+
+ MemeLens: Multilingual Multitask VLMs for Memes
+ https://arxiv.org/abs/2601.12539
+ arXiv:2601.12539v1 Announce Type: new
+Abstract: Memes are a dominant medium for online communication and manipulation because meaning emerges from interactions between embedded text, imagery, and cultural context. Existing meme research is distributed across tasks (hate, misogyny, propaganda, sentiment, humour) and languages, which limits cross-domain generalization. To address this gap we propose MemeLens, a unified multilingual and multitask explanation-enhanced Vision Language Model (VLM) for meme understanding. We consolidate 38 public meme datasets, filter and map dataset-specific labels into a shared taxonomy of $20$ tasks spanning harm, targets, figurative/pragmatic intent, and affect. We present a comprehensive empirical analysis across modeling paradigms, task categories, and datasets. Our findings suggest that robust meme understanding requires multimodal training, exhibits substantial variation across semantic categories, and remains sensitive to over-specialization when models are fine-tuned on individual datasets rather than trained in a unified setting. We will make the experimental resources and datasets publicly available for the community.
+ oai:arXiv.org:2601.12539v1
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ali Ezzat Shahroor, Mohamed Bayan Kmainasi, Abul Hasnat, Dimitar Dimitrov, Giovanni Da San Martino, Preslav Nakov, Firoj Alam
+
+
+ Rethinking the AI Scientist: Interactive Multi-Agent Workflows for Scientific Discovery
+ https://arxiv.org/abs/2601.12542
+ arXiv:2601.12542v1 Announce Type: new
+Abstract: Artificial intelligence systems for scientific discovery have demonstrated remarkable potential, yet existing approaches remain largely proprietary and operate in batch-processing modes requiring hours per research cycle, precluding real-time researcher guidance. This paper introduces Deep Research, a multi-agent system enabling interactive scientific investigation with turnaround times measured in minutes. The architecture comprises specialized agents for planning, data analysis, literature search, and novelty detection, unified through a persistent world state that maintains context across iterative research cycles. Two operational modes support different workflows: semi-autonomous mode with selective human checkpoints, and fully autonomous mode for extended investigations. Evaluation on the BixBench computational biology benchmark demonstrated state-of-the-art performance, achieving 48.8% accuracy on open response and 64.5% on multiple-choice evaluation, exceeding existing baselines by 14 to 26 percentage points. Analysis of architectural constraints, including open access literature limitations and challenges inherent to automated novelty assessment, informs practical deployment considerations for AI-assisted scientific workflows.
+ oai:arXiv.org:2601.12542v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Lukas Weidener, Marko Brki\'c, Mihailo Jovanovi\'c, Ritvik Singh, Chiara Baccin, Emre Ulgac, Alex Dobrin, Aakaash Meduri
+
+
+ Press Start to Charge: Videogaming the Online Centralized Charging Scheduling Problem
+ https://arxiv.org/abs/2601.12543
+ arXiv:2601.12543v1 Announce Type: new
+Abstract: We study the online centralized charging scheduling problem (OCCSP). In this problem, a central authority must decide, in real time, when to charge dynamically arriving electric vehicles (EVs), subject to capacity limits, with the objective of balancing load across a finite planning horizon. To solve the problem, we first gamify it; that is, we model it as a game where charging blocks are placed within temporal and capacity constraints on a grid. We design heuristic policies, train learning agents with expert demonstrations, and improve them using Dataset Aggregation (DAgger). From a theoretical standpoint, we show that gamification reduces model complexity and yields tighter generalization bounds than vector-based formulations. Experiments across multiple EV arrival patterns confirm that gamified learning enhances load balancing. In particular, the image-to-movement model trained with DAgger consistently outperforms heuristic baselines, vector-based approaches, and supervised learning agents, while also demonstrating robustness in sensitivity analyses. These operational gains translate into tangible economic value. In a real-world case study for the Greater Montr\'eal Area (Qu\'ebec, Canada) using utility cost data, the proposed methods lower system costs by tens of millions of dollars per year over the prevailing practice and show clear potential to delay costly grid upgrades.
+ oai:arXiv.org:2601.12543v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alireza Ghahtarani, Martin Cousineau, Amir-massoud Farahmand, Jorge E. Mendoza
+
+
+ Information Farming: From Berry Picking to Berry Growing
+ https://arxiv.org/abs/2601.12544
+ arXiv:2601.12544v1 Announce Type: new
+Abstract: The classic paradigms of Berry Picking and Information Foraging Theory have framed users as gatherers, opportunistically searching across distributed sources to satisfy evolving information needs. However, the rise of GenAI is driving a fundamental transformation in how people produce, structure, and reuse information - one that these paradigms no longer fully capture. This transformation is analogous to the Neolithic Revolution, when societies shifted from hunting and gathering to cultivation. Generative technologies empower users to "farm" information by planting seeds in the form of prompts, cultivating workflows over time, and harvesting richly structured, relevant yields within their own plots, rather than foraging across other people's patches. In this perspectives paper, we introduce the notion of Information Farming as a conceptual framework and argue that it represents a natural evolution in how people engage with information. Drawing on historical analogy and empirical evidence, we examine the benefits and opportunities of information farming, its implications for design and evaluation, and the accompanying risks posed by this transition. We hypothesize that as GenAI technologies proliferate, cultivating information will increasingly supplant transient, patch-based foraging as a dominant mode of engagement, marking a broader shift in human-information interaction and its study.
+ oai:arXiv.org:2601.12544v1
+ cs.IR
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3786304.3787947
+ ACM SIGIR Conference on Human Information Interaction and Retrieval 2026
+ Leif Azzopardi, Adam Roegiest
+
+
+ An Experimental Comparison of Sliding Mode and Immersion and Invariance Adaptive Controllers for Position-feedback Tracking of a Simple Mechanical System with Friction
+ https://arxiv.org/abs/2601.12545
+ arXiv:2601.12545v1 Announce Type: new
+Abstract: The purpose of this paper is to illustrate, in an experimental facility consisting of a simple pendular device, the performance of a sliding mode adaptive position-feedback tracking controller of mechanical systems with friction reported in the literature. To put this experimental evidence in perspective, we compare the performance of the sliding mode scheme with the one obtained by an adaptive controller designed following the well-known immersion and invariance technique.
+ oai:arXiv.org:2601.12545v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Luis Cervantes-P\'erez, V\'ictor Santib\'a\~nez, Jes\'us Sandoval, Romeo Ortega, Jose Guadalupe Romero
+
+
+ How Clinicians Think and What AI Can Learn From It
+ https://arxiv.org/abs/2601.12547
+ arXiv:2601.12547v1 Announce Type: new
+Abstract: Most clinical AI systems operate as prediction engines -- producing labels or risk scores -- yet real clinical reasoning is a time-bounded, sequential control problem under uncertainty. Clinicians interleave information gathering with irreversible actions, guided by regret, constraints and patient values. We argue that the dominant computational substrate of clinician reasoning is not cardinal optimization but ordinal, non-compensatory decision-making: Clinicians frequently rely on fast-and-frugal, lexicographic heuristics (e.g., fast-and-frugal trees) that stop early after checking a small, fixed sequence of cues. We provide a normative rationale for why such algorithms are not merely bounded-rationality shortcuts, but can be epistemically preferred in medicine. First, many clinical trade-offs are constructed through human judgment and are only weakly measurable on absolute scales; without strong measurement axioms, only orderings are invariant, motivating an ordinal-by-default stance. Second, preference and signal elicitation are structurally crude: The mapping from truth $\to$ perception $\to$ inference $\to$ recorded variables introduces layered noise, leaving a persistent uncertainty floor. When this 'crudeness' overwhelms the decision margin, plug-in expected-utility optimization becomes brittle (high flip probability under small perturbations), whereas robust dominance/filtering rules ($\epsilon$-dominance, maximin) stabilize decisions. Finally, we outline a clinician-aligned AI blueprint: Use rich models for beliefs and trajectories, but choose actions through robust ordinal rules; treat heuristics as the low-dimensional special case; and deploy AI as 'selective complexity' -- invoked mainly for tie-breaking when decisions are fragile and information has positive expected impact.
+ oai:arXiv.org:2601.12547v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Dipayan Sengupta, Saumya Panda
+
+
+ Traffic Collisions: Temporal Patterns and Severity-Weighted Hotspot Analysis
+ https://arxiv.org/abs/2601.12548
+ arXiv:2601.12548v1 Announce Type: new
+Abstract: Understanding traffic collision patterns is of high importance for effective road safety planning in fast-growing urban environments. This study examines the temporal and spatial patterns of traffic collisions in Dubai, UAE, with a particular focus on collision severity. To this end, traffic collision records from November 2024 to June 2025 were analyzed to examine hourly, daily, and monthly variations in collision frequency and severity for both overall traffic collisions and pedestrian-related accidents. Temporal associations with severity were evaluated using chi-square tests and Cramer's V, while spatial patterns were analyzed using severity-weighted hotspot analysis based on the Getis-Ord Gi* statistic, complemented by inverse distance weighting (IDW) interpolation. The results show a clear temporal variation in overall collision frequency and severity, with higher collision frequencies during evening and nighttime periods and a 44% higher probability of high-severity outcomes at night compared to the afternoon. On the other hand, pedestrian-related accidents showed a distinct temporal profile, characterized by higher occurrence during late-evening hours and relatively limited variation across days of the week and months. Spatial analysis identified statistically significant severity hotspots for overall collisions in the northern and northwestern parts of Dubai and along the Al Ain-Dubai Highway, while pedestrian severity hotspots were concentrated near industrial areas in the southwestern region. Several policy measures are proposed based on the findings, including reducing nighttime speed limits, enhancing automated enforcement, improving roadway lighting, and implementing pedestrian-focused treatments in statistically significant hotspots.
+ oai:arXiv.org:2601.12548v1
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nael Alsaleh, Noura Falis, Tareq Alsaleh, Farah Ba Fakih
+
+
+ Benchmarking Concept-Spilling Across Languages in LLMs
+ https://arxiv.org/abs/2601.12549
+ arXiv:2601.12549v1 Announce Type: new
+Abstract: Multilingual Large Language Models (LLMs) exhibit remarkable cross-lingual abilities, yet often show a systematic bias toward representations from dominant languages, resulting in semantic interference when generating content in non-English languages$-$a phenomenon we define as language spilling. This paper presents a novel comparative framework for evaluating multilingual semantic robustness by systematically measuring how models handle polysemous words across languages. Our methodology provides a relative measure of model performance: when required to generate exactly five meanings, both strong and weak models may resort to meanings from dominant languages, but semantically stronger models do so later in the generation sequence, producing more true meanings from the target language before failing, while weaker models resort to dominant-language meanings earlier in the sequence. We evaluate a diverse set of open and closed multilingual LLMs using a structured meaning generation task across nine languages, employing a carefully curated benchmark of 100 high-polysemy English words. Our findings reveal significant variation in semantic robustness across both models and languages, providing a principled ranking system for model comparison without requiring definitive causal attribution of error sources. We contribute both a scalable comparative benchmark for multilingual semantic evaluation and a rigorous validation pipeline$-$critical tools for developing more linguistically balanced AI systems.
+ oai:arXiv.org:2601.12549v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ilia Badanin, Daniil Dzenhaliou, Imanol Schlag
+
+
+ PISE: Physics-Anchored Semantically-Enhanced Deep Computational Ghost Imaging for Robust Low-Bandwidth Machine Perception
+ https://arxiv.org/abs/2601.12551
+ arXiv:2601.12551v1 Announce Type: new
+Abstract: We propose PISE, a physics-informed deep ghost imaging framework for low-bandwidth edge perception. By combining adjoint operator initialization with semantic guidance, PISE improves classification accuracy by 2.57% and reduces variance by 9x at 5% sampling.
+ oai:arXiv.org:2601.12551v1
+ cs.CV
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tong Wu
+
+
+ Evaluating Contextually Mediated Factual Recall in Multilingual Large Language Models
+ https://arxiv.org/abs/2601.12555
+ arXiv:2601.12555v1 Announce Type: new
+Abstract: Large language models (LLMs) can recall a wide range of factual knowledge across languages. However, existing factual recall evaluations primarily assess fact retrieval in isolation, where the queried entity is explicitly named and the fact is requested directly. In natural language use, facts are often accessed through context, where the relevant entity is introduced only indirectly. In this work, we study contextually mediated factual recall, asking whether LLMs can reliably retrieve factual knowledge when the target entity is embedded in a naturalistic context rather than queried explicitly, across languages. We construct controlled prompts that preserve the underlying fact while introducing referential mediation through contextual sentences. To disentangle contextual effects from name-specific associations, we further compare performance using synthetic names and real names across languages. Evaluating multiple model families in five languages, we find that contextual mediation consistently degrades factual recall, with substantial variation across relations. Larger models are more robust to contextual mediation, exhibiting a reduced performance gap relative to direct queries, while the effect of real names and name origin is mixed and unsystematic. These findings highlight a gap between isolated factual recall and context-dependent language understanding in multilingual LLMs.
+ oai:arXiv.org:2601.12555v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yihong Liu, Bingyu Xiong, Hinrich Sch\"utze
+
+
+ Life, Machine Learning, and the Search for Habitability: Predicting Biosignature Fluxes for the Habitable Worlds Observatory
+ https://arxiv.org/abs/2601.12557
+ arXiv:2601.12557v1 Announce Type: new
+Abstract: Future direct-imaging flagship missions, such as NASA's Habitable Worlds Observatory (HWO), face critical decisions in prioritizing observations due to extremely stringent time and resource constraints. In this paper, we introduce two advanced machine-learning architectures tailored for predicting biosignature species fluxes from exoplanetary reflected-light spectra: a Bayesian Convolutional Neural Network (BCNN) and our novel model architecture, the Spectral Query Adaptive Transformer (SQuAT). The BCNN robustly quantifies both epistemic and aleatoric uncertainties, offering reliable predictions under diverse observational conditions, whereas SQuAT employs query-driven attention mechanisms to enhance interpretability by explicitly associating spectral features with specific biosignature species. We demonstrate that both models achieve comparably high predictive accuracy on an augmented dataset spanning a wide range of exoplanetary conditions, while highlighting their distinct advantages in uncertainty quantification and spectral interpretability. These capabilities position our methods as promising tools for accelerating target triage, optimizing observation schedules, and maximizing scientific return for upcoming flagship missions such as HWO.
+ oai:arXiv.org:2601.12557v1
+ cs.LG
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mark Moussa, Amber V. Young, Brianna Isola, Vasuda Trehan, Michael D. Himes, Nicholas Wogan, Giada Arney
+
+
+ Automated Tool Support for Category-Partition Testing: Design Decisions, UI and Examples of Use
+ https://arxiv.org/abs/2601.12559
+ arXiv:2601.12559v1 Announce Type: new
+Abstract: Category-Partition is a functional testing technique that is based on the idea that the input domain of the system under test can be divided into sub-domains, with the assumption that inputs that belong to the same sub-domain trigger a similar behaviour and that therefore it is sufficient to select one input from each sub-domain. Category-Partition proceeds in several steps, from the identification of so-called categories and choices, possibly constrained, which are subsequently used to form test frames, i.e., combinations of choices, and eventually test cases. This paper reports on an ongoing attempt to automate as many of those steps as possible, with graphical-user interface tool support. Specifically, the user interface allows the user to specify parameters as well as so-called environment variables, and to further specify categories and choices with optional constraints. Choices are provided with precise specifications with operations specific to their types (e.g., Boolean, Integer, Real, String). Then, the tool automates the construction of test frames, which are combinations of choices, according to alternative selection criteria, and the identification of input values for parameters and environment variables for these test frames, thereby producing test cases. The paper illustrates the capabilities of the tool with the use of nine different case studies.
+ oai:arXiv.org:2601.12559v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yvan Labiche
+
+
+ Agentic Artificial Intelligence (AI): Architectures, Taxonomies, and Evaluation of Large Language Model Agents
+ https://arxiv.org/abs/2601.12560
+ arXiv:2601.12560v1 Announce Type: new
+Abstract: Artificial Intelligence is moving from models that only generate text to Agentic AI, where systems behave as autonomous entities that can perceive, reason, plan, and act. Large Language Models (LLMs) are no longer used only as passive knowledge engines but as cognitive controllers that combine memory, tool use, and feedback from their environment to pursue extended goals. This shift already supports the automation of complex workflows in software engineering, scientific discovery, and web navigation, yet the variety of emerging designs, from simple single-loop agents to hierarchical multi-agent systems, makes the landscape hard to navigate. In this paper, we investigate architectures and propose a unified taxonomy that breaks agents into Perception, Brain, Planning, Action, Tool Use, and Collaboration. We use this lens to describe the move from linear reasoning procedures to native inference-time reasoning models, and the transition from fixed API calls to open standards like the Model Context Protocol (MCP) and Native Computer Use. We also group the environments in which these agents operate, including digital operating systems, embodied robotics, and other specialized domains, and we review current evaluation practices. Finally, we highlight open challenges, such as hallucination in action, infinite loops, and prompt injection, and outline future research directions toward more robust and reliable autonomous systems.
+ oai:arXiv.org:2601.12560v1
+ cs.AI
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Arunkumar V, Gangadharan G. R., Rajkumar Buyya
+
+
+ VR ProfiLens: User Profiling Risks in Consumer Virtual Reality Apps
+ https://arxiv.org/abs/2601.12563
+ arXiv:2601.12563v1 Announce Type: new
+Abstract: Virtual reality (VR) platforms and apps collect user sensor data, including motion, facial, eye, and hand data, in abstracted form. These data may expose users to unique privacy risks without their knowledge or meaningful awareness, yet the extent of these risks remains understudied. To address this gap, we propose VR ProfiLens, a framework to study user profiling based on VR sensor data and the resulting privacy risks across consumer VR apps. To systematically study this problem, we first develop a taxonomy rooted in the CCPA definition of personal information and expand it by sensor, app, and threat contexts to identify user attributes at risk. Then, we conduct a user study in which we collect VR sensor data from four sensor groups from real users interacting with 10 popular consumer VR apps, followed by a survey. We design and apply an analysis pipeline to demonstrate the feasibility of inferring user attributes using these data. Our results show that sensitive personal information can be inferred with moderately high to high risk (up to 90% F1 score) from abstracted sensor data. Through feature analysis, we further identify correlations among app groups and sensor groups in inferring user attributes. Our findings highlight risks to users, including privacy loss, tracking, targeted advertising, and safety threats. Finally, we discuss design implications and regulatory recommendations to enhance transparency and better protect users' privacy in VR.
+ oai:arXiv.org:2601.12563v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.14722/usec.2026.23003
+ Proceeding on 16th Usable Security and Privacy Symposium (USEC), co-located with NDSS, 2026
+ Ismat Jarin, Olivia Figueira, Yu Duan, Tu Le, Athina Markopoulou
+
+
+ Camera Pose Revisited
+ https://arxiv.org/abs/2601.12567
+ arXiv:2601.12567v1 Announce Type: new
+Abstract: Estimating the position and orientation of a camera with respect to an observed scene is one of the central problems in computer vision, particularly in the context of camera calibration and multi-sensor systems. This paper addresses the planar Perspective--$n$--Point problem, with special emphasis on the initial estimation of the pose of a calibration object. As a solution, we propose the \texttt{PnP-ProCay78} algorithm, which combines the classical quadratic formulation of the reconstruction error with a Cayley parameterization of rotations and least-squares optimization. The key component of the method is a deterministic selection of starting points based on an analysis of the reconstruction error for two canonical vectors, allowing costly solution-space search procedures to be avoided. Experimental validation is performed using data acquired from high-resolution RGB cameras as well as very low-resolution thermal cameras in an integrated RGB--IR setup. The results demonstrate that the proposed algorithm achieves practically the same projection accuracy as optimal \texttt{SQPnP} and slightly higher than \texttt{IPPE}, both prominent \texttt{PnP-OpenCV} procedures. However, \texttt{PnP-ProCay78} maintains a significantly simpler algorithmic structure. Moreover, the analysis of optimization trajectories in Cayley space provides an intuitive insight into the convergence process, making the method attractive also from a didactic perspective. Unlike existing PnP solvers, the proposed \texttt{PnP-ProCay78} algorithm combines projection error minimization with an analytically eliminated reconstruction-error surrogate for translation, yielding a hybrid cost formulation that is both geometrically transparent and computationally efficient.
+ oai:arXiv.org:2601.12567v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ W{\l}adys{\l}aw Skarbek, Micha{\l} Salomonowicz, Micha{\l} Kr\'ol
+
+
+ The Origin of the Inaccessible Game
+ https://arxiv.org/abs/2601.12576
+ arXiv:2601.12576v1 Announce Type: new
+Abstract: The inaccessible game is an information-geometric framework where dynamics of information loss emerge from maximum entropy production under marginal-entropy conservation.
+ We study the game's starting state, the origin. Classical Shannon entropy forbids a representation with zero joint entropy and positive marginal entropies: non-negativity of conditional entropy rules this out. Replacing Shannon with von Neumann entropy within the Baez Fritz Leinster Parzygnat categorical framework removes this obstruction and admits a well-defined origin: a globally pure state with maximally mixed marginals, selected up to local-unitary equivalence. At this LME origin, marginal-entropy conservation becomes a second-order geometric condition. Because the marginal-entropy sum is saturated termwise, the constraint gradient vanishes and first-order tangency is vacuous; admissible directions are selected by the kernel of the constraint Hessian, characterised by the marginal-preserving tangent space.
+ We derive the constrained gradient flow in the matrix exponential family and show that, as the origin is approached, the affine time parameter degenerates. This motivates an axiomatically distinguished reparametrisation, entropy time $t$, defined by $dH/dt = c$ for fixed constant $c>0$. In this parametrisation, the infinite affine-time approach to the boundary maps to a finite entropy-time interval. The constrained dynamics split into a symmetric dissipative component realising SEA and a reversible component represented as unitary evolution.
+ As in the classical game, marginal-entropy conservation is equivalent to conservation of a sum of local modular Hamiltonian expectations, a state-dependent "modular energy"; in Gibbs regimes where local modular generators become approximately parameter-invariant, this reduces to familiar fixed-energy constraints from nonequilibrium thermodynamics.
+ oai:arXiv.org:2601.12576v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Neil D. Lawrence
+
+
+ Semantic Fusion: Verifiable Alignment in Decentralized Multi-Agent Systems
+ https://arxiv.org/abs/2601.12580
+ arXiv:2601.12580v1 Announce Type: new
+Abstract: We present Semantic Fusion (SF), a formal framework for decentralized semantic coordination in multi-agent systems. SF allows agents to operate over scoped views of shared memory, propose structured updates, and maintain global coherence through local ontology-based validation and refresh without centralized control or explicit message passing. The central theoretical result is a bisimulation theorem showing that each agent's local execution is behaviorally equivalent to its projection of the global semantics, in both deterministic and probabilistic settings. This enables safety, liveness, and temporal properties to be verified locally and soundly lifted to the full system. SF supports agents whose update proposals vary across invocations, including those generated by learned or heuristic components, provided updates pass semantic validation before integration. We establish deterministic and probabilistic guarantees ensuring semantic alignment under asynchronous or degraded communication. To validate the model operationally, we implement a lightweight reference architecture that instantiates its core mechanisms. A 250-agent simulation evaluates these properties across over 11,000 validated updates, demonstrating convergence under probabilistic refresh, bounded communication, and resilience to agent failure. Together, these results show that Semantic Fusion can provide a formal and scalable basis for verifiable autonomy in decentralized systems.
+ oai:arXiv.org:2601.12580v1
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sofiya Zaichyk
+
+
+ Do MLLMs See What We See? Analyzing Visualization Literacy Barriers in AI Systems
+ https://arxiv.org/abs/2601.12585
+ arXiv:2601.12585v1 Announce Type: new
+Abstract: Multimodal Large Language Models (MLLMs) are increasingly used to interpret visualizations, yet little is known about why they fail. We present the first systematic analysis of barriers to visualization literacy in MLLMs. Using the regenerated Visualization Literacy Assessment Test (reVLAT) benchmark with synthetic data, we open-coded 309 erroneous responses from four state-of-the-art models with a barrier-centric strategy adapted from human visualization literacy research. Our analysis yields a taxonomy of MLLM failures, revealing two machine-specific barriers that extend prior human-participation frameworks. Results show that models perform well on simple charts but struggle with color-intensive, segment-based visualizations, often failing to form consistent comparative reasoning. Our findings inform future evaluation and design of reliable AI-driven visualization assistants.
+ oai:arXiv.org:2601.12585v1
+ cs.HC
+ cs.AI
+ cs.ET
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mengli (Dawn) Duan, Yuhe (Sissi) Jiang, Matthew Varona, Carolina Nobre
+
+
+ Conversing with Objects toward Fluid Human and Artificial Identities during Life Transitions
+ https://arxiv.org/abs/2601.12589
+ arXiv:2601.12589v1 Announce Type: new
+Abstract: People's identities change during life transitions, e.g., studying abroad. They bring everyday objects that embody memories and reflect their identities during such moves. To assist in these transitions, we ask how people's human identities could be influenced by their objects through an artificial agent. This paper presents an exploratory research-through-design study around how people undergoing life transitions experience conversing with their everyday objects through a chatbot. Drawing on a two-week field deployment and interviews with 12 participants, we contribute (1) a conceptualization of 'trans-embodiment' describing the asynchronous imagination of object and human identities on the chatbot, (2) empirical evidence of the resulting emotional and reflective experiences, and (3) three types of object identities for designing conversational agents that role-play objects. Our contributions sum up to triangulating human-agent-object identity as trans-embodiment in supporting life transitions.
+ oai:arXiv.org:2601.12589v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuhui Xu, Minha Lee, Stephan Wensveen, Mahla Alizadeh, Mathias Funk
+
+
+ SmoothCLAP: Soft-Target Enhanced Contrastive Language-Audio Pretraining for Affective Computing
+ https://arxiv.org/abs/2601.12591
+ arXiv:2601.12591v1 Announce Type: new
+Abstract: The ambiguity of human emotions poses several challenges for machine learning models, as emotions often overlap and lack clear delineating boundaries. Contrastive language-audio pretraining (CLAP) has emerged as a key technique for generalisable emotion recognition. However, as conventional CLAP enforces a strict one-to-one alignment between paired audio-text samples, it overlooks intra-modal similarity and treats all non-matching pairs as equally negative. This conflicts with the fuzzy boundaries between different emotions. To address this limitation, we propose SmoothCLAP, which introduces softened targets derived from intra-modal similarity and paralinguistic features. By combining these softened targets with conventional contrastive supervision, SmoothCLAP learns embeddings that respect graded emotional relationships, while retaining the same inference pipeline as CLAP. Experiments on eight affective computing tasks across English and German demonstrate that SmoothCLAP consistently achieves superior performance. Our results highlight that leveraging soft supervision is a promising strategy for building emotion-aware audio-text models.
+ oai:arXiv.org:2601.12591v1
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Xin Jing, Jiadong Wang, Andreas Triantafyllopoulos, Maurice Gerczuk, Shahin Amiriparian, Jun Luo, Björn Schuller
+
+
+ Blurred Drinker Paradoxes and Blurred Choice Axioms: Constructive Reverse Mathematics of the Downward Löwenheim-Skolem Theorem
+ https://arxiv.org/abs/2601.12592
+ arXiv:2601.12592v1 Announce Type: new
+Abstract: In the setting of constructive reverse mathematics, we analyse the downward Löwenheim-Skolem (DLS) theorem of first-order logic, stating that every infinite model has a countable elementary submodel. Refining the well-known equivalence of the DLS theorem to the axiom of dependent choice (DC) over classical base theories, our constructive approach allows for several finer logical decompositions: Assuming just countable choice (CC), the DLS theorem is equivalent to the conjunction of DC with a newly identified fragment of the excluded middle (LEM) that we call the blurred drinker paradox (BDP). Further, without CC, the DLS theorem is equivalent to the conjunction of BDP with similarly blurred weakenings of DC and CC. Independently of their connection with the DLS theorem, we also study BDP and the blurred choice axioms on their own, for instance by showing that BDP is LEM without a contribution of Markov's principle and that blurred DC is DC without a contribution of CC. The paper is hyperlinked with an accompanying Coq development.
+ oai:arXiv.org:2601.12592v1
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Dominik Kirst, Haoyi Zeng
+
+
+ Abusing the Internet of Medical Things: Evaluating Threat Models and Forensic Readiness for Multi-Vector Attacks on Connected Healthcare Devices
+ https://arxiv.org/abs/2601.12593
+ arXiv:2601.12593v1 Announce Type: new
+Abstract: Individuals experiencing interpersonal violence (IPV), who depend on medical devices, represent a uniquely vulnerable population as healthcare technologies become increasingly connected. Despite rapid growth in MedTech innovation and "health-at-home" ecosystems, the intersection of MedTech cybersecurity and technology-facilitated abuse remains critically under-examined. IPV survivors who rely on therapeutic devices encounter a qualitatively different threat environment from the external, technically sophisticated adversaries typically modeled in MedTech cybersecurity research. We address this gap through two complementary methods: (1) the development of hazard-integrated threat models that fuse cyber-physical system security modeling with tech-abuse frameworks, and (2) an immersive simulation with practitioners, deploying a live version of our model, identifying gaps in digital forensic practice.
+ Our hazard-integrated CIA threat models map exploits to acute and chronic biological effects, uncovering (i) Integrity attack pathways that facilitate "Medical gaslighting" and "Munchausen-by-IoMT", (ii) Availability attacks that create life-critical and sub-acute harms (glycaemic emergencies, blindness, mood destabilization), and (iii) Confidentiality threats arising from MedTech advertisements (geolocation tracking from BLE broadcasts). Our simulation demonstrates that these attack surfaces are unlikely to be detected in practice: participants overlooked MedTech, misclassified reproductive and assistive technologies, and lacked awareness of BLE broadcast artifacts. Our findings show that MedTech cybersecurity in IPV contexts requires integrated threat modeling and improved forensic capabilities for detecting, preserving and interpreting harms arising from compromised patient-technology ecosystems.
+ oai:arXiv.org:2601.12593v1
+ cs.CR
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Isabel Straw, Akhil Polamarasetty, Mustafa Jaafar
+
+
+ Dissecting Linear Recurrent Models: How Different Gating Strategies Drive Selectivity and Generalization
+ https://arxiv.org/abs/2601.12598
+ arXiv:2601.12598v1 Announce Type: new
+Abstract: Linear recurrent neural networks have emerged as efficient alternatives to the original Transformer's softmax attention mechanism, thanks to their highly parallelizable training and constant memory and computation requirements at inference. Iterative refinements of these models have introduced an increasing number of architectural mechanisms, leading to increased complexity and computational costs. Nevertheless, systematic direct comparisons among these models remain limited. Existing benchmark tasks are either too simplistic to reveal substantial differences or excessively resource-intensive for experimentation. In this work, we propose a refined taxonomy of linear recurrent models and introduce SelectivBench, a set of lightweight and customizable synthetic benchmark tasks for systematically evaluating sequence models. SelectivBench specifically evaluates selectivity in sequence models at small to medium scale, namely the capacity to focus on relevant inputs while ignoring context-based distractors. It employs rule-based grammars to generate sequences with adjustable complexity, incorporating irregular gaps that intentionally violate transition rules. Evaluations of linear recurrent models on SelectivBench reveal performance patterns consistent with results from large-scale language tasks. Our analysis clarifies the roles of essential architectural features: gating and rapid forgetting mechanisms facilitate recall; in-state channel mixing is unnecessary for selectivity but critical for generalization; and softmax attention remains dominant due to its memory capacity scaling with sequence length. Our benchmark enables targeted, efficient exploration of linear recurrent models and provides a controlled setting for studying behaviors observed in large-scale evaluations. Code is available at https://github.com/symseqbench/selectivbench
+ oai:arXiv.org:2601.12598v1
+ cs.LG
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Younes Bouhadjar, Maxime Fabre, Felix Schmidt, Emre Neftci
+
+
+ SSVD-O: Parameter-Efficient Fine-Tuning with Structured SVD for Speech Recognition
+ https://arxiv.org/abs/2601.12600
+ arXiv:2601.12600v1 Announce Type: new
+Abstract: Parameter-efficient fine-tuning (PEFT) is a scalable approach for adapting large speech foundation models to new domains. While methods such as LoRA and its state-of-the-art variants reduce adaptation costs, they typically allocate parameters uniformly across model subspaces, which limits their efficiency and scalability in speech applications. Building on our prior work, this paper introduces SSVD-Outer (SSVD-O), an extension of the structured SVD-guided (SSVD) fine-tuning method. SSVD-O combines input acoustic feature space-associated inner transformations with output semantic feature space-associated outer transformations to enable scalable and balanced adaptation. We conduct the first systematic analysis of parameter budget allocation across model subspaces in PEFT for automatic speech recognition (ASR), and investigate the trade-off between learning and forgetting under constrained resources. SSVD-O is benchmarked against LoRA, DoRA, PiSSA, and SSVD on domain-shifted ASR tasks, including child speech and regional accents, across model scales from 0.1B to 2B within the ESPnet framework. Experimental results show that SSVD-O consistently narrows the performance gap to full fine-tuning while improving generalization and mitigating catastrophic forgetting.
+ oai:arXiv.org:2601.12600v1
+ cs.SD
+ cs.CL
+ cs.LG
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pu Wang, Shinji Watanabe, Hugo Van hamme
+
+
+ Beyond Softmax and Entropy: Improving Convergence Guarantees of Policy Gradients by f-SoftArgmax Parameterization with Coupled Regularization
+ https://arxiv.org/abs/2601.12604
+ arXiv:2601.12604v1 Announce Type: new
+Abstract: Policy gradient methods are known to be highly sensitive to the choice of policy parameterization. In particular, the widely used softmax parameterization can induce ill-conditioned optimization landscapes and lead to exponentially slow convergence. Although this can be mitigated by preconditioning, this solution is often computationally expensive. Instead, we propose replacing the softmax with an alternative family of policy parameterizations based on the generalized f-softargmax. We further advocate coupling this parameterization with a regularizer induced by the same f-divergence, which improves the optimization landscape and ensures that the resulting regularized objective satisfies a Polyak-Łojasiewicz inequality. Leveraging this structure, we establish the first explicit non-asymptotic last-iterate convergence guarantees for stochastic policy gradient methods for finite MDPs without any form of preconditioning. We also derive sample-complexity bounds for the unregularized problem and show that f-PG with Tsallis divergences achieves polynomial sample complexity in contrast to the exponential complexity incurred by the standard softmax parameterization.
+ oai:arXiv.org:2601.12604v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Safwan Labbi, Daniil Tiapkin, Paul Mangold, Eric Moulines
+
+
+ Explicit Almost-Optimal $\varepsilon$-Balanced Codes via Free Expander Walks
+ https://arxiv.org/abs/2601.12606
+ arXiv:2601.12606v1 Announce Type: new
+Abstract: We study the problem of constructing explicit codes whose rate and distance match the Gilbert-Varshamov bound in the low-rate, high-distance regime. In 2017, Ta-Shma gave an explicit family of codes where every pair of codewords has relative distance $\frac{1-\varepsilon}{2}$, with rate $\Omega(\varepsilon^{2+o(1)})$, matching the Gilbert-Varshamov bound up to a factor of $\varepsilon^{o(1)}$. Ta-Shma's construction was based on starting with a good code and amplifying its bias with walks arising from the $s$-wide-replacement product.
+ In this work, we give an arguably simpler almost-optimal construction, based on what we call free expander walks: ordinary expander walks where each step is taken on a distinct expander from a carefully chosen sequence. This sequence of expanders is derived from the construction of near-$X$-Ramanujan graphs due to O'Donnell and Wu.
+ oai:arXiv.org:2601.12606v1
+ cs.CC
+ cs.DM
+ cs.DS
+ math.CO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jun-Ting Hsieh, Sidhanth Mohanty, Rachel Yun Zhang
+
+
+ A Cloud-based Multi-Agentic Workflow for Science
+ https://arxiv.org/abs/2601.12607
+ arXiv:2601.12607v1 Announce Type: new
+Abstract: As Large Language Models (LLMs) become ubiquitous across various scientific domains, their lack of ability to perform complex tasks like running simulations or to make complex decisions limits their utility. LLM-based agents bridge this gap due to their ability to call external resources and tools and thus are now rapidly gaining popularity. However, coming up with a workflow that can balance the models, cloud providers, and external resources is very challenging, making implementing an agentic system more of a hindrance than a help. In this work, we present a domain-agnostic, model-independent workflow for an agentic framework that can act as a scientific assistant while being run entirely on cloud. Built with a supervisor agent marshaling an array of agents with individual capabilities, our framework brings together straightforward tasks like literature review and data analysis with more complex ones like simulation runs. We describe the framework here in full, including a proof-of-concept system we built to accelerate the study of catalysts, which is highly important in chemistry and materials science. We report the cost to operate and use this framework, including a breakdown of the cost by service used. We also evaluate our system on a custom-curated synthetic benchmark and a popular chemistry benchmark, and perform expert validation of the system. The results show that our system is able to route the task to the correct agent 90% of the time and successfully complete the assigned task 97.5% of the time for the synthetic tasks and 91% of the time for real-world tasks, while still achieving better or comparable accuracy to most frontier models, showing that this is a viable framework for other scientific domains to replicate.
+ oai:arXiv.org:2601.12607v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Anurag Acharya, Timothy Vega, Rizwan A. Ashraf, Anshu Sharma, Derek Parker, Robert Rallo
+
+
+ HERMES: A Unified Open-Source Framework for Realtime Multimodal Physiological Sensing, Edge AI, and Intervention in Closed-Loop Smart Healthcare Applications
+ https://arxiv.org/abs/2601.12610
+ arXiv:2601.12610v1 Announce Type: new
+Abstract: Intelligent assistive technologies are increasingly recognized as critical daily-use enablers for people with disabilities and age-related functional decline. Longitudinal studies, curation of quality datasets, live monitoring in activities of daily living, and intelligent intervention devices share the largely unsolved need for reliable high-throughput multimodal sensing and processing. Streaming large heterogeneous data from distributed sensors, historically closed-source environments, and limited prior works on realtime closed-loop AI methodologies inhibit such applications. To accelerate the emergence of clinical deployments, we deliver HERMES - an open-source high-performance Python framework for continuous multimodal sensing and AI processing at the edge. It enables synchronized data collection and realtime streaming inference with user PyTorch models on commodity computing devices. HERMES is applicable to fixed-lab and free-living environments with distributed commercial and custom sensors. It is the first work to offer a holistic methodology that bridges cross-disciplinary gaps in real-world implementation strategies and guides downstream AI model development. Its application to the closed-loop intelligent prosthesis use case illustrates the process of developing suitable AI models from the resulting constraints and trade-offs. Validation on the use case, with 4 synchronized hosts cooperatively capturing 18 wearable and off-body modalities, demonstrates the performance and relevance of HERMES to the trajectory of the intelligent healthcare domain.
+ oai:arXiv.org:2601.12610v1
+ eess.SY
+ cs.LG
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Maxim Yudayev, Juha Carlon, Diwas Lamsal, Vayalet Stefanova, Benjamin Filtjens
+
+
+ What Trace Powers Reveal About Log-Determinants: Closed-Form Estimators, Certificates, and Failure Modes
+ https://arxiv.org/abs/2601.12612
+ arXiv:2601.12612v1 Announce Type: new
+Abstract: Computing $\log\det(A)$ for large symmetric positive definite matrices arises in Gaussian process inference and Bayesian model comparison. Standard methods combine matrix-vector products with polynomial approximations. We study a different model: access to trace powers $p_k = \mathrm{tr}(A^k)$, natural when matrix powers are available.
+ Classical moment-based approximations Taylor-expand $\log(\lambda)$ around the arithmetic mean. This requires $|\lambda - \mathrm{AM}| < \mathrm{AM}$ and diverges when $\kappa > 4$. We work instead with the moment-generating function $M(t) = \mathbb{E}[X^t]$ for normalized eigenvalues $X = \lambda/\mathrm{AM}$. Since $M'(0) = \mathbb{E}[\log X]$, the log-determinant becomes $\log\det(A) = n(\log \mathrm{AM} + M'(0))$; the problem reduces to estimating a derivative at $t = 0$. Trace powers give $M(k)$ at positive integers, but interpolating $M(t)$ directly is ill-conditioned due to exponential growth. The transform $K(t) = \log M(t)$ compresses this range. Normalization by $\mathrm{AM}$ ensures $K(0) = K(1) = 0$. With these anchors fixed, we interpolate $K$ through $m+1$ consecutive integers and differentiate to estimate $K'(0)$. However, this local interpolation cannot capture arbitrary spectral features.
+ We prove a fundamental limit: no continuous estimator using finitely many positive moments can be uniformly accurate over unbounded conditioning. Positive moments downweight the spectral tail; $K'(0) = \mathbb{E}[\log X]$ is tail-sensitive. This motivates guaranteed bounds. From the same traces we derive upper bounds on $(\det A)^{1/n}$. Given a spectral floor $r \leq \lambda_{\min}$, we obtain moment-constrained lower bounds, yielding a provable interval for $\log\det(A)$. A gap diagnostic indicates when to trust the point estimate and when to report bounds. All estimators and bounds cost $O(m)$, independent of $n$. For $m \in \{4, \ldots, 8\}$, this is effectively constant time.
+ oai:arXiv.org:2601.12612v1
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Piyush Sao
+
+
+ Allocating Corrective Control to Mitigate Multi-agent Safety Violations Under Private Preferences
+ https://arxiv.org/abs/2601.12616
+ arXiv:2601.12616v1 Announce Type: new
+Abstract: We propose a novel framework that computes the corrective control efforts to ensure joint safety in multi-agent dynamical systems. This framework efficiently distributes the required corrective effort without revealing individual agents' private preferences. Our framework integrates high-order control barrier functions (HOCBFs), which enforce safety constraints with formal guarantees of safety for complex dynamical systems, with a privacy-preserving resource allocation mechanism based on the progressive second price (PSP) auction. When a joint safety constraint is violated, agents iteratively bid on new corrective efforts via 'avoidance credits' rather than explicitly solving for feasible corrective efforts that remove the safety violation. The resulting correction, determined via a second price payment rule, coincides with the socially optimal safe distribution of corrective actions. Critically, the bidding process achieves this optimal allocation efficiently and without revealing private preferences of individual agents. We demonstrate this method through multi-robot hardware experiments on the Robotarium platform.
+ oai:arXiv.org:2601.12616v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Johnathan Corbin, Sarah H. Q. Li, Jonathan Rogers
+
+
+ Creating Disability Story Videos with Generative AI: Motivation, Expression, and Sharing
+ https://arxiv.org/abs/2601.12617
+ arXiv:2601.12617v1 Announce Type: new
+Abstract: Generative AI (GenAI) is both promising and challenging in supporting people with disabilities (PwDs) in creating stories about disability. GenAI can reduce barriers to media production and inspire the creativity of PwDs, but it may also introduce biases and imperfections that hinder its adoption for personal expression. In this research, we examine how nine PwDs from a disability advocacy group used GenAI to create videos sharing their disability experiences. Grounded in digital storytelling theory, we explore the motivations, expression, and sharing of PwD-created GenAI story videos. We conclude with a framework of momentous depiction, which highlights four core affordances of GenAI that either facilitate or require improvements to better support disability storytelling: non-capturable depiction, identity concealment and representation, contextual realism and consistency, and emotional articulation. Together, these contributions triangulate human-agent-object identity as trans-embodiment in supporting life transitions. Based on this framework, we further discuss design implications for GenAI in relation to story completion, media formats, and corrective mechanisms.
+ oai:arXiv.org:2601.12617v1
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1145/3772318.3791495
+ Shuo Niu, Dylan Clements, Hyungsin Kim
+
+
+ Disagreement as Data: Reasoning Trace Analytics in Multi-Agent Systems
+ https://arxiv.org/abs/2601.12618
+ arXiv:2601.12618v1 Announce Type: new
+Abstract: Learning analytics researchers often analyze qualitative student data such as coded annotations or interview transcripts to understand learning processes. With the rise of generative AI, fully automated and human-AI workflows have emerged as promising methods for analysis. However, methodological standards to guide such workflows remain limited. In this study, we propose that reasoning traces generated by large language model (LLM) agents, especially within multi-agent systems, constitute a novel and rich form of process data to enhance interpretive practices in qualitative coding. We apply cosine similarity to LLM reasoning traces to systematically detect, quantify, and interpret disagreements among agents, reframing disagreement as a meaningful analytic signal. Analyzing nearly 10,000 instances of agent pairs coding human tutoring dialog segments, we show that LLM agents' semantic reasoning similarity robustly differentiates consensus from disagreement and correlates with human coding reliability. Qualitative analysis guided by this metric reveals nuanced instructional sub-functions within codes and opportunities for conceptual codebook refinement. By integrating quantitative similarity metrics with qualitative review, our method has the potential to improve and accelerate establishing inter-rater reliability during coding by surfacing interpretive ambiguity, especially when LLMs collaborate with humans. We discuss how reasoning-trace disagreements represent a valuable new class of analytic signals advancing methodological rigor and interpretive depth in educational research.
+ oai:arXiv.org:2601.12618v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1145/3785022.3785101
+ Elham Tajik, Conrad Borchers, Bahar Shahrokhian, Sebastian Simon, Ali Keramati, Sonika Pal, Sreecharan Sankaranarayanan
+
+
+ Learning Deterministic Finite-State Machines from the Prefixes of a Single String is NP-Complete
+ https://arxiv.org/abs/2601.12621
+ arXiv:2601.12621v1 Announce Type: new
+Abstract: It is well known that computing a minimum DFA consistent with a given set of positive and negative examples is NP-hard. Previous work has identified conditions on the input sample under which the problem becomes tractable or remains hard. In this paper, we study the computational complexity of the case where the input sample is prefix-closed. This formulation is equivalent to computing a minimum Moore machine consistent with observations along its runs. We show that the problem is NP-hard to approximate when the sample set consists of all prefixes of binary strings. Furthermore, we show that the problem remains NP-hard as a decision problem even when the sample set consists of the prefixes of a single binary string. Our argument also extends to the corresponding problem for Mealy machines.
+ oai:arXiv.org:2601.12621v1
+ cs.FL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Radu Cosmin Dumitru, Ryo Yoshinaka, Ayumi Shinohara
+
+
+ Towards Robust Universal Perturbation Attacks: A Float-Coded, Penalty-Driven Evolutionary Approach
+ https://arxiv.org/abs/2601.12624
+ arXiv:2601.12624v1 Announce Type: new
+Abstract: Universal adversarial perturbations (UAPs) have garnered significant attention due to their ability to undermine deep neural networks across multiple inputs using a single noise pattern. Evolutionary algorithms offer a promising approach to generating such perturbations due to their ability to navigate non-convex, gradient-free landscapes. In this work, we introduce a float-coded, penalty-driven single-objective evolutionary framework for UAP generation that achieves lower visibility perturbations while enhancing attack success rates. Our approach leverages continuous gene representations aligned with contemporary deep learning scales, incorporates dynamic evolutionary operators with adaptive scheduling, and utilizes a modular PyTorch implementation for seamless integration with modern architectures. Additionally, we ensure the universality of the generated perturbations by testing across diverse models and by periodically switching batches to prevent overfitting. Experimental results on the ImageNet dataset demonstrate that our framework consistently produces perturbations with smaller norms, higher misclassification effectiveness, and faster convergence compared to existing evolutionary-based methods. These findings highlight the robustness and scalability of our approach for universal adversarial attacks across various deep learning architectures.
+ oai:arXiv.org:2601.12624v1
+ cs.LG
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shiqi Wang, Mahdi Khosravy, Neeraj Gupta, Olaf Witkowski
+
+
+ Resilient Interval Observer-Based Control for Cooperative Adaptive Cruise Control under FDI Attack
+ https://arxiv.org/abs/2601.12625
+ arXiv:2601.12625v1 Announce Type: new
+Abstract: Connectivity in connected and autonomous vehicles (CAVs) introduces vulnerability to cyber threats such as false data injection (FDI) attacks, which can compromise system reliability and safety. To ensure resilience, this paper proposes a control framework combining a nonlinear controller with an interval observer for robust state estimation under measurement noise. The observer bounds the leader's states, while a neural network-based estimator estimates the unknown FDI attacks in real time. These estimates are then used to mitigate FDI attack effects while maintaining safe inter-vehicle spacing. The proposed approach leverages the idea of interval observer-based estimation and merges model-based and learning-based methods to achieve accurate estimation and real-time performance. MATLAB/Simulink results confirm resilient tracking, precise FDI attack estimation, and robustness to noise, demonstrating potential for real-world CACC applications under cyberattacks, disturbance, and bounded measurement noise.
+ oai:arXiv.org:2601.12625v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Parisa Ansari Bonab, Elisabeth Andarge Gedefaw, Mohammad Khajenejad
+
+
+ Linear Mechanisms for Spatiotemporal Reasoning in Vision Language Models
+ https://arxiv.org/abs/2601.12626
+ arXiv:2601.12626v1 Announce Type: new
+Abstract: Spatio-temporal reasoning is a remarkable capability of Vision Language Models (VLMs), but the underlying mechanisms of such abilities remain largely opaque. We postulate that visual/geometrical and textual representations of spatial structure must be combined at some point in VLM computations. We search for such confluence, and ask whether the identified representation can causally explain aspects of input-output model behavior through a linear model. We show empirically that VLMs encode object locations by linearly binding \textit{spatial IDs} to textual activations, then perform reasoning via language tokens. Through rigorous causal interventions we demonstrate that these IDs, which are ubiquitous across the model, can systematically mediate model beliefs at intermediate VLM layers. Additionally, we find that spatial IDs serve as a diagnostic tool for identifying limitations in existing VLMs, and as a valuable learning signal. We extend our analysis to video VLMs and identify an analogous linear temporal ID mechanism. By characterizing our proposed spatiotemporal ID mechanism, we elucidate a previously underexplored internal reasoning process in VLMs, toward improved interpretability and the principled design of more aligned and capable models. We release our code for reproducibility: https://github.com/Raphoo/linear-mech-vlms.
+ oai:arXiv.org:2601.12626v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Raphi Kang, Hongqiao Chen, Georgia Gkioxari, Pietro Perona
+
+
+ Constructing a Dataset to Support Agent-Based Modeling of Online Interactions: Users, Topics, and Interaction Networks
+ https://arxiv.org/abs/2601.12628
+ arXiv:2601.12628v1 Announce Type: new
+Abstract: Agent-based modeling (ABM) provides a powerful framework for exploring how individual behaviors and interactions give rise to collective social dynamics. However, most ABMs rely on handcrafted or parameterized agent rules that are not empirically grounded, thereby limiting their realism and validation against observed data. To address this gap, we constructed a large-scale, empirically grounded dataset from Reddit to support the development and evaluation of agent-based social simulations. The dataset includes 33 technology-focused, 14 climate-focused, and 7 COVID-related aggregated agents, encompassing around one million posts and comments. Using publicly available posts and comments, we define agent categories based on content and interaction patterns, derive inter-agent relationships from temporal commenting behaviors, and build a directed, weighted network that reflects empirically observed user connections. The resulting dataset enables researchers to calibrate and benchmark agent behavior, network structure, and information diffusion processes against real social dynamics. Our quantitative analysis reveals clear topic-dependent differences in how users interact. Climate discussions show dense, highly connected networks with sustained engagement, COVID-related interactions are sparse and mostly one-directional, and technology discussions are organized around a small number of central hubs. Manual qualitative analysis further shows that agent interactions follow realistic patterns of timing, similarity between users, and sentiment change.
+ oai:arXiv.org:2601.12628v1
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Abdul Sittar, Miha Cesnovar, Alenka Gucek, Marko Grobelnik
+
+
+ BioPulse-QA: A Dynamic Biomedical Question-Answering Benchmark for Evaluating Factuality, Robustness, and Bias in Large Language Models
+ https://arxiv.org/abs/2601.12632
+ arXiv:2601.12632v1 Announce Type: new
+Abstract: Objective: Large language models (LLMs) are increasingly applied in biomedical settings, and existing benchmark datasets have played an important role in supporting model development and evaluation. However, these benchmarks often have limitations. Many rely on static or outdated datasets that fail to capture the dynamic, context-rich, and high-stakes nature of biomedical knowledge. They also carry increasing risk of data leakage due to overlap with model pretraining corpora and often overlook critical dimensions such as robustness to linguistic variation and potential demographic biases.
+ Materials and Methods: To address these gaps, we introduce BioPulse-QA, a benchmark that evaluates LLMs on answering questions from newly published biomedical documents including drug labels, trial protocols, and clinical guidelines. BioPulse-QA includes 2,280 expert-verified question answering (QA) pairs and perturbed variants, covering both extractive and abstractive formats. We evaluate four LLMs - GPT-4o, GPT-o1, Gemini-2.0-Flash, and LLaMA-3.1 8B Instruct - released prior to the publication dates of the benchmark documents.
+ Results: GPT-o1 achieves the highest relaxed F1 score (0.92), followed by Gemini-2.0-Flash (0.90) on drug labels. Clinical trials are the most challenging source, with extractive F1 scores as low as 0.36.
+ Discussion and Conclusion: Performance differences are larger for paraphrasing than for typographical errors, while bias testing shows negligible differences. BioPulse-QA provides a scalable and clinically relevant framework for evaluating biomedical LLMs.
+ oai:arXiv.org:2601.12632v1
+ cs.CL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kriti Bhattarai, Vipina K. Keloth, Donald Wright, Andrew Loza, Yang Ren, Hua Xu
+
+
+ The Cost of Convenience: Identifying, Analyzing, and Mitigating Predatory Loan Applications on Android
+ https://arxiv.org/abs/2601.12634
+ arXiv:2601.12634v1 Announce Type: new
+Abstract: Digital lending applications, commonly referred to as loan apps, have become a primary channel for microcredit in emerging markets. However, many of these apps demand excessive permissions and misuse sensitive user data for coercive debt-recovery practices, including harassment, blackmail, and public shaming that affect both borrowers and their contacts.
+ This paper presents the first cross-country measurement of loan app compliance against both national regulations and Google's Financial Services Policy. We analyze 434 apps drawn from official registries and app markets from Indonesia, Kenya, Nigeria, Pakistan, and the Philippines. To operationalize policy requirements at scale, we translate policy text into testable permission checks using LLM-assisted policy-to-permission mapping and combine this with static and dynamic analyses of loan apps' code and runtime behavior.
+ Our findings reveal pervasive non-compliance among approved apps: 141 violate national regulatory policy and 147 violate Google policy. Dynamic analysis further shows that several apps transmit sensitive data (contacts, SMS, location, media) before user signup or registration, undermining informed consent and enabling downstream harassment of borrowers and third parties. Following our disclosures, Google removed 93 flagged apps from Google Play, representing over 300M cumulative installs.
+ We advocate for adopting our methodology as a proactive compliance-monitoring tool and offer targeted recommendations for regulators, platforms, and developers to strengthen privacy protections. Overall, our results highlight the need for coordinated enforcement and robust technical safeguards to ensure that digital lending supports financial inclusion without compromising user privacy or safety.
+ oai:arXiv.org:2601.12634v1
+ cs.CR
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3779208.3785263
+ Olawale Amos Akanji, Manuel Egele, Gianluca Stringhini
+
+
+ From Bands to Depth: Understanding Bathymetry Decisions on Sentinel-2
+ https://arxiv.org/abs/2601.12636
+ arXiv:2601.12636v1 Announce Type: new
+Abstract: Deploying Sentinel-2 satellite-derived bathymetry (SDB) robustly across sites remains challenging. We analyze a Swin-Transformer-based U-Net model (Swin-BathyUNet) to understand how it infers depth and when its predictions are trustworthy. A leave-one-band-out study ranks the spectral importance of the different bands, consistent with shallow-water optics. We adapt ablation-based CAM to regression (A-CAM-R) and validate its reliability via a performance retention test: keeping only the top-p% salient pixels while neutralizing the rest causes a large, monotonic RMSE increase, indicating that the explanations localize on evidence the model relies on. Attention ablations show that decoder-conditioned cross-attention on skip connections is an effective upgrade, improving robustness to glint/foam. Cross-region inference (train on one site, test on another) reveals depth-dependent degradation: MAE rises nearly linearly with depth, and bimodal depth distributions exacerbate mid/deep errors. Practical guidance follows: maintain wide receptive fields, preserve radiometric fidelity in green/blue channels, pre-filter bright high-variance near-shore pixels, and pair light target-site fine-tuning with depth-aware calibration to transfer across regions.
+ oai:arXiv.org:2601.12636v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Satyaki Roy Chowdhury, Aswathnarayan Radhakrishnan, Hsiao Jou Hsu, Hari Subramoni, Joachim Moortgat
+
+
+ Topology-Aware Multiscale Mixture of Experts for Efficient Molecular Property Prediction
+ https://arxiv.org/abs/2601.12637
+ arXiv:2601.12637v1 Announce Type: new
+Abstract: Many molecular properties depend on 3D geometry, where non-covalent interactions, stereochemical effects, and medium- to long-range forces are determined by spatial distances and angles that cannot be uniquely captured by a 2D bond graph. Yet most 3D molecular graph neural networks still rely on globally fixed neighborhood heuristics, typically defined by distance cutoffs and maximum neighbor limits, to define local message-passing neighborhoods, leading to rigid, data-agnostic interaction budgets. We propose Multiscale Interaction Mixture of Experts (MI-MoE) to adapt interaction modeling across geometric regimes. Our contributions are threefold: (1) we introduce a distance-cutoff expert ensemble that explicitly captures short-, mid-, and long-range interactions without committing to a single cutoff; (2) we design a topological gating encoder that routes inputs to experts using filtration-based descriptors, including persistent homology features, summarizing how connectivity evolves across radii; and (3) we show that MI-MoE is a plug-in module that consistently improves multiple strong 3D molecular backbones across diverse molecular and polymer property prediction benchmark datasets, covering both regression and classification tasks. These results highlight topology-aware multiscale routing as an effective principle for 3D molecular graph learning.
+ oai:arXiv.org:2601.12637v1
+ cs.LG
+ cs.AI
+ q-bio.QM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Long D. Nguyen, Kelin Xia, Binh P. Nguyen
+
+
+ Mixed Precision PointPillars for Efficient 3D Object Detection with TensorRT
+ https://arxiv.org/abs/2601.12638
+ arXiv:2601.12638v1 Announce Type: new
+Abstract: LIDAR 3D object detection is one of the most important tasks for autonomous vehicles, and ensuring that it operates in real time is crucial. Toward this, model quantization can be used to accelerate the runtime. However, directly applying model quantization often leads to performance degradation due to LIDAR's wide numerical distributions and extreme outliers. To address the wide numerical distribution, we propose a mixed precision framework designed for PointPillars. Our framework first searches for sensitive layers with post-training quantization (PTQ) by quantizing one layer at a time to 8-bit integer (INT8) and evaluating each model's average precision (AP). The top-k most sensitive layers are assigned floating point (FP). Combinations of these layers are greedily searched to produce candidate mixed precision models, which are finalized with either PTQ or quantization-aware training (QAT). Furthermore, to handle outliers, we observe that using a very small number of calibration samples reduces the likelihood of encountering outliers, thereby improving PTQ performance. Our method provides mixed precision models without training in the PTQ pipeline, while our QAT pipeline achieves performance competitive with FP models. With TensorRT deployment, our models reduce latency and model size by up to 2.35 and 2.26 times, respectively.
+ oai:arXiv.org:2601.12638v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ninnart Fuengfusin, Keisuke Yoneda, Naoki Suganuma
+
+
+ Objective Matters: Fine-Tuning Objectives Shape Safety, Robustness, and Persona Drift
+ https://arxiv.org/abs/2601.12639
+ arXiv:2601.12639v1 Announce Type: new
+Abstract: Fine-tuning LLMs on benign data can still degrade alignment and adversarial robustness, yet direct analysis of the role of fine-tuning objectives in shaping these safety outcomes remains limited. We present a controlled comparison of six fine-tuning objectives -- Supervised Fine-Tuning, Direct Preference Optimization, Conditional Fine-Tuning, Inoculation Prompting, Odds Ratio Preference Optimization, and KL-regularized fine-tuning -- holding data, domain, architecture, and optimization fixed. Across closed-form reasoning and open-ended generation tasks, we find that objective choice induces systematic, scale-dependent shifts along the safety-capability frontier. At small training budgets, robustness is similar across objectives but capability differs. At larger budgets, objectives diverge sharply: supervised and preference-based tuning tightly couple capability gains to increased adversarial vulnerability and persona drift, while objectives that constrain learning signals -- especially ORPO and KL-regularization -- substantially mitigate both. Fine-tuning objectives therefore matter little for safety at small scales but become a primary driver of adversarial robustness and latent persona stability as training scale increases.
+ oai:arXiv.org:2601.12639v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Daniel Vennemeyer, Punya Syon Pandey, Phan Anh Duong, Michael Umeokoli, Samuel Ratnam
+
+
+ Beyond Identification: Computing Boolean Functions via Channels
+ https://arxiv.org/abs/2601.12640
+ arXiv:2601.12640v1 Announce Type: new
+Abstract: Consider a point-to-point communication system in which the transmitter holds a binary message of length $m$ and transmits a corresponding codeword of length $n$. The receiver's goal is to recover a Boolean function of that message, where the function is unknown to the transmitter, but chosen from a known class $F$. We are interested in the asymptotic relationship of $m$ and $n$: given $n$, how large can $m$ be (asymptotically), such that the value of the Boolean function can be recovered reliably? This problem generalizes the identification-via-channels framework introduced by Ahlswede and Dueck. We formulate the notion of computation capacity, and derive achievability and converse results for selected classes of functions $F$, characterized by the Hamming weight of functions. Our obtained results are tight in the sense of the scaling behavior for all cases of $F$ considered in the paper.
+ oai:arXiv.org:2601.12640v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jingge Zhu, Matthias Frey
+
+
+ STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models
+ https://arxiv.org/abs/2601.12641
+ arXiv:2601.12641v1 Announce Type: new
+Abstract: Computer-aided design (CAD) is vital to modern manufacturing, yet model creation remains labor-intensive and expertise-heavy. To enable non-experts to translate intuitive design intent into manufacturable artifacts, recent large language model-based text-to-CAD efforts focus on command sequences or script-based formats like CadQuery. However, these formats are kernel-dependent and lack universality for manufacturing. In contrast, the Standard for the Exchange of Product Data (STEP, ISO 10303) file is a widely adopted, neutral boundary representation (B-rep) format directly compatible with manufacturing, but its graph-structured, cross-referenced nature poses unique challenges for auto-regressive LLMs. To address this, we curate a dataset of ~40K STEP-caption pairs and introduce novel preprocessing tailored to the graph-structured format of STEP, including a depth-first search-based reserialization that linearizes cross-references while preserving locality and chain-of-thought (CoT)-style structural annotations that guide global coherence. We integrate retrieval-augmented generation to ground predictions in relevant examples for supervised fine-tuning, and refine generation quality through reinforcement learning with a specific Chamfer Distance-based geometric reward. Experiments demonstrate consistent gains of our STEP-LLM in geometric fidelity over the Text2CAD baseline, with improvements arising from multiple stages of our framework: the RAG module substantially enhances completeness and renderability, the DFS-based reserialization strengthens overall accuracy, and the RL stage further reduces geometric discrepancy. Both metrics and visual comparisons confirm that STEP-LLM generates shapes with higher fidelity than Text2CAD. These results show the feasibility of LLM-driven STEP model generation from natural language and its potential to democratize CAD design for manufacturing.
+ oai:arXiv.org:2601.12641v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiangyu Shi, Junyang Ding, Xu Zhao, Sinong Zhan, Payal Mohapatra, Daniel Quispe, Kojo Welbeck, Jian Cao, Wei Chen, Ping Guo, Qi Zhu
+
+
+ Unbounded Harms, Bounded Law: Liability in the Age of Borderless AI
+ https://arxiv.org/abs/2601.12646
+ arXiv:2601.12646v1 Announce Type: new
+Abstract: The rapid proliferation of artificial intelligence (AI) has exposed significant deficiencies in risk governance. While ex-ante harm identification and prevention have advanced, Responsible AI scholarship remains underdeveloped in addressing ex-post liability. Core legal questions regarding liability allocation, responsibility attribution, and remedial effectiveness remain insufficiently theorized and institutionalized, particularly for transboundary harms and risks that transcend national jurisdictions. Drawing on contemporary AI risk analyses, we argue that such harms are structurally embedded in global AI supply chains and are likely to escalate in frequency and severity due to cross-border deployment, data infrastructures, and uneven national oversight capacities. Consequently, territorially bounded liability regimes are increasingly inadequate. Using a comparative and interdisciplinary approach, this paper examines compensation and liability frameworks from high-risk transnational domains - including vaccine injury schemes, systemic financial risk governance, commercial nuclear liability, and international environmental regimes - to distill transferable legal design principles such as strict liability, risk pooling, collective risk-sharing, and liability channelling, while highlighting potential structural constraints on their application to AI-related harms. Situated within an international order shaped more by AI arms race dynamics than cooperative governance, the paper outlines the contours of a global AI accountability and compensation architecture, emphasizing the tension between geopolitical rivalry and the collective action required to govern transboundary AI risks effectively.
+ oai:arXiv.org:2601.12646v1
+ cs.CY
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ha-Chi Tran
+
+
+ Intelligent Documentation in Medical Education: Can AI Replace Manual Case Logging?
+ https://arxiv.org/abs/2601.12648
+ arXiv:2601.12648v1 Announce Type: new
+Abstract: Procedural case logs are a core requirement in radiology training, yet they are time-consuming to complete and prone to inconsistency when authored manually. This study investigates whether large language models (LLMs) can automate procedural case log documentation directly from free-text radiology reports. We evaluate multiple local and commercial LLMs under instruction-based and chain-of-thought prompting to extract structured procedural information from 414 curated interventional radiology reports authored by nine residents between 2018 and 2024. Model performance is assessed using sensitivity, specificity, and F1-score, alongside inference latency and token efficiency to estimate operational cost. Results show that both local and commercial models achieve strong extraction performance, with best F1-scores approaching 0.87, while exhibiting different trade-offs between speed and cost. Automation using LLMs has the potential to substantially reduce clerical burden for trainees and improve consistency in case logging. These findings demonstrate the feasibility of AI-assisted documentation in medical education and highlight the need for further validation across institutions and clinical workflows.
+ oai:arXiv.org:2601.12648v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Nafiz Imtiaz Khan, Kylie Cleland, Vladimir Filkov, Roger Eric Goldman
+
+
+ Ethical Risks in Deploying Large Language Models: An Evaluation of Medical Ethics Jailbreaking
+ https://arxiv.org/abs/2601.12652
+ arXiv:2601.12652v1 Announce Type: new
+Abstract: Background: While Large Language Models (LLMs) have achieved widespread adoption, malicious prompt engineering, specifically "jailbreak attacks", poses severe security risks by inducing models to bypass internal safety mechanisms. Current benchmarks predominantly focus on public safety and Western cultural norms, leaving a critical gap in evaluating the niche, high-risk domain of medical ethics within the Chinese context. Objective: To establish a specialized jailbreak evaluation framework for Chinese medical ethics and to systematically assess the defensive resilience and ethical alignment of seven prominent LLMs when subjected to sophisticated adversarial simulations. Methodology: We evaluated seven prominent models (e.g., GPT-5, Claude-Sonnet-4-Reasoning, DeepSeek-R1) using a "role-playing + scenario simulation + multi-turn dialogue" vector within the DeepInception framework. The testing focused on eight high-risk themes, including commercial surrogacy and organ trading, utilizing a hierarchical scoring matrix to quantify the Attack Success Rate (ASR) and ASR Gain. Results: A systemic collapse of defenses was observed: whereas models demonstrated high baseline compliance, the jailbreak ASR reached 82.1%, representing an ASR Gain of over 80 percentage points. Claude-Sonnet-4-Reasoning emerged as the most robust model, while five models including Gemini-2.5-Pro and GPT-4.1 exhibited near-total failure with ASRs between 96% and 100%. Conclusions: Current LLMs are highly vulnerable to contextual manipulation in medical ethics, often prioritizing "helpfulness" over safety constraints. To enhance security, we recommend a transition from outcome to process supervision, the implementation of multi-factor identity verification, and the establishment of cross-model "joint defense" mechanisms.
+ oai:arXiv.org:2601.12652v1
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chutian Huang, Dake Cao, Jiacheng Ji, Yunlou Fan, Chengze Yan, Hanhui Xu
+
+
+ Explanation Multiplicity in SHAP: Characterization and Assessment
+ https://arxiv.org/abs/2601.12654
+ arXiv:2601.12654v1 Announce Type: new
+Abstract: Post-hoc explanations are widely used to justify, contest, and audit automated decisions in high-stakes domains. SHAP, in particular, is often treated as a reliable account of which features drove an individual prediction. Yet SHAP explanations can vary substantially across repeated runs even when the input, task, and trained model are held fixed. We term this phenomenon explanation multiplicity: multiple internally valid but substantively different explanations for the same decision. We present a methodology to characterize multiplicity in feature-attribution explanations and to disentangle sources due to model training/selection from stochasticity intrinsic to the explanation pipeline. We further show that apparent stability depends on the metric: magnitude-based distances can remain near zero while rank-based measures reveal substantial churn in the identity and ordering of top features. To contextualize observed disagreement, we derive randomized baseline values under plausible null models. Across datasets, model classes, and confidence regimes, we find explanation multiplicity is pervasive and persists even for high-confidence predictions, highlighting the need for metrics and baselines that match the intended use of explanations.
+ oai:arXiv.org:2601.12654v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hyunseung Hwang, Seungeun Lee, Lucas Rosenblatt, Julia Stoyanovich, Steven Euijong Whang
+
+
+ Multiagent Reinforcement Learning in Enhancing Resilience of Microgrids under Extreme Weather Events
+ https://arxiv.org/abs/2601.12657
+ arXiv:2601.12657v1 Announce Type: new
+Abstract: Grid resilience is crucial in light of power interruptions caused by increasingly frequent extreme weather events. Well-designed energy management systems (EMS) have made progress in improving microgrid resilience through the coordination of distributed energy resources (DERs), but still face significant challenges in addressing the uncertainty of load demand caused by extreme weather. The integration of deep reinforcement learning (DRL) into EMS design enables optimized microgrid control strategies for coordinating DERs. Building on this, we proposed a cooperative multi-agent deep reinforcement learning (MADRL)-based EMS framework to provide flexible scalability for microgrids, enhance resilience and reduce operational costs during power outages. Specifically, the gated recurrent unit with a gating mechanism was introduced to extract features from temporal data, which enables the EMS to coordinate DERs more efficiently. Next, the proposed MADRL method incorporating action masking techniques was evaluated in the IEEE 33-Bus system using real-world data on renewable generation and power load. Finally, the numerical results demonstrated the superiority of the proposed method in reducing operating costs as well as the effectiveness in enhancing microgrid resilience during power interruptions.
+ oai:arXiv.org:2601.12657v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1016/j.eswa.2025.129145
+ Expert Systems with Applications, Volume 296, Part D, 15 January 2026, 129145
+ Yin Wu, Wei-Yu Chiu, Yuan-Po Tsai, Shangyuan Liu, Weiqi Hua
+
+
+ Augmenting Question Answering with A Hybrid RAG Approach
+ https://arxiv.org/abs/2601.12658
+ arXiv:2601.12658v1 Announce Type: new
+Abstract: Retrieval-Augmented Generation (RAG) has emerged as a powerful technique for enhancing the quality of responses in Question-Answering (QA) tasks. However, existing approaches often struggle with retrieving contextually relevant information, leading to incomplete or suboptimal answers. In this paper, we introduce Structured-Semantic RAG (SSRAG), a hybrid architecture that enhances QA quality by integrating query augmentation, agentic routing, and a structured retrieval mechanism combining vector- and graph-based techniques with context unification. By refining retrieval processes and improving contextual grounding, our approach improves both answer accuracy and informativeness. We conduct extensive evaluations on three popular QA datasets, TruthfulQA, SQuAD, and WikiQA, across five Large Language Models (LLMs), demonstrating that our proposed approach consistently improves response quality over standard RAG implementations.
+ oai:arXiv.org:2601.12658v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tianyi Yang, Nashrah Haque, Vaishnave Jonnalagadda, Yuya Jeremy Ong, Zhehui Chen, Yanzhao Wu, Lei Yu, Divyesh Jadav, Wenqi Wei
+
+
+ Toward Faithful Explanations in Acoustic Anomaly Detection
+ https://arxiv.org/abs/2601.12660
+ arXiv:2601.12660v1 Announce Type: new
+Abstract: Interpretability is essential for user trust in real-world anomaly detection applications. However, deep learning models, despite their strong performance, often lack transparency. In this work, we study the interpretability of autoencoder-based models for audio anomaly detection by comparing a standard autoencoder (AE) with a masked autoencoder (MAE) in terms of detection performance and interpretability. We applied several attribution methods, including error maps, saliency maps, SmoothGrad, Integrated Gradients, GradSHAP, and Grad-CAM. Although the MAE shows slightly lower detection performance, it consistently provides more faithful and temporally precise explanations, suggesting better alignment with true anomalies. To assess the relevance of the regions highlighted by an explanation method, we propose a perturbation-based faithfulness metric that replaces them with their reconstructions to simulate normal input. Our findings, based on experiments in a real industrial scenario, highlight the importance of incorporating interpretability into anomaly detection pipelines and show that masked training improves explanation quality without compromising performance.
+ oai:arXiv.org:2601.12660v1
+ cs.SD
+ cs.LG
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Maab Elrashid, Anthony Desch\^enes, Cem Subakan, Mirco Ravanelli, R\'emi Georges, Michael Morin
+
+
+ MedConsultBench: A Full-Cycle, Fine-Grained, Process-Aware Benchmark for Medical Consultation Agents
+ https://arxiv.org/abs/2601.12661
+ arXiv:2601.12661v1 Announce Type: new
+Abstract: Current evaluations of medical consultation agents often prioritize outcome-oriented tasks, frequently overlooking the end-to-end process integrity and clinical safety essential for real-world practice. While recent interactive benchmarks have introduced dynamic scenarios, they often remain fragmented and coarse-grained, failing to capture the structured inquiry logic and diagnostic rigor required in professional consultations. To bridge this gap, we propose MedConsultBench, a comprehensive framework designed to evaluate the complete online consultation cycle by covering the entire clinical workflow from history taking and diagnosis to treatment planning and follow-up Q\&A. Our methodology introduces Atomic Information Units (AIUs) to track clinical information acquisition at a sub-turn level, enabling precise monitoring of how key facts are elicited through 22 fine-grained metrics. By addressing the underspecification and ambiguity inherent in online consultations, the benchmark evaluates uncertainty-aware yet concise inquiry while emphasizing medication regimen compatibility and the ability to handle realistic post-prescription follow-up Q\&A via constraint-respecting plan revisions. Systematic evaluation of 19 large language models reveals that high diagnostic accuracy often masks significant deficiencies in information-gathering efficiency and medication safety. These results underscore a critical gap between theoretical medical knowledge and clinical practice ability, establishing MedConsultBench as a rigorous foundation for aligning medical AI with the nuanced requirements of real-world clinical care.
+ oai:arXiv.org:2601.12661v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chuhan Qiao, Jianghua Huang, Daxing Zhao, Ziding Liu, Yanjun Shen, Bing Cheng, Wei Lin, Kai Wu
+
+
+ Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks
+ https://arxiv.org/abs/2601.12662
+ arXiv:2601.12662v1 Announce Type: new
+Abstract: We address real-time sampling and estimation of autoregressive Markovian sources in dynamic yet structurally similar multi-hop wireless networks. Each node caches samples from others and communicates over wireless collision channels, aiming to minimize time-average estimation error via decentralized policies. Due to the high dimensionality of action spaces and complexity of network topologies, deriving optimal policies analytically is intractable. To address this, we propose a graphical multi-agent reinforcement learning framework for policy optimization. Theoretically, we demonstrate that our proposed policies are transferable, allowing a policy trained on one graph to be effectively applied to structurally similar graphs. Numerical experiments demonstrate that (i) our proposed policy outperforms state-of-the-art baselines; (ii) the trained policies are transferable to larger networks, with performance gains increasing with the number of agents; (iii) the graphical training procedure withstands non-stationarity, even when using independent learning techniques; and (iv) recurrence is pivotal in both independent learning and centralized training and decentralized execution, and improves the resilience to non-stationarity.
+ oai:arXiv.org:2601.12662v1
+ cs.LG
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xingran Chen, Navid NaderiAlizadeh, Alejandro Ribeiro, Shirin Saeedi Bidokhti
+
+
+ Generalizable Hyperparameter Optimization for Federated Learning on Non-IID Cancer Images
+ https://arxiv.org/abs/2601.12664
+ arXiv:2601.12664v1 Announce Type: new
+Abstract: Training deep learning models on cancer histopathology data conflicts with privacy constraints in clinical settings. Federated Learning (FL) mitigates this by keeping data local; however, its performance depends on hyperparameter choices under non-independent and identically distributed (non-IID) client datasets. This paper examines whether hyperparameters optimized on one cancer imaging dataset generalize across non-IID federated scenarios. We consider binary histopathology tasks for ovarian and colorectal cancers, perform centralized Bayesian hyperparameter optimization, and transfer the dataset-specific optima to the non-IID FL setup. The main contribution of this study is a simple cross-dataset aggregation heuristic that combines configurations by averaging the learning rates and taking the modal optimizers and batch sizes. The combined configuration achieves competitive classification performance.
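The cross-dataset aggregation heuristic described in this abstract can be sketched in a few lines. This is an illustrative reconstruction from the abstract only, not the authors' code; the configuration keys (`lr`, `optimizer`, `batch_size`) are assumptions:

```python
from statistics import mode

def combine_configs(configs):
    """Combine per-dataset hyperparameter optima into one configuration:
    average the learning rates, take the modal optimizer and batch size.
    (Sketch of the heuristic described in the abstract; key names are
    assumptions, not the authors' code.)"""
    lr = sum(c["lr"] for c in configs) / len(configs)
    optimizer = mode(c["optimizer"] for c in configs)
    batch_size = mode(c["batch_size"] for c in configs)
    return {"lr": lr, "optimizer": optimizer, "batch_size": batch_size}

# Example: hypothetical optima found separately on three datasets.
combined = combine_configs([
    {"lr": 1e-3, "optimizer": "adam", "batch_size": 32},
    {"lr": 1e-4, "optimizer": "adam", "batch_size": 16},
    {"lr": 5e-4, "optimizer": "sgd", "batch_size": 32},
])
```

On Python 3.8+, `statistics.mode` returns the first mode encountered, which keeps the combination deterministic when optimizers or batch sizes tie.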
+ oai:arXiv.org:2601.12664v1
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Elisa Gonçalves Ribeiro, Rodrigo Moreira, Larissa Ferreira Rodrigues Moreira, André Ricardo Backes
+
+
+ Near-Light Color Photometric Stereo for mono-Chromaticity non-lambertian surface
+ https://arxiv.org/abs/2601.12666
+ arXiv:2601.12666v1 Announce Type: new
+Abstract: Color photometric stereo enables single-shot surface reconstruction, extending conventional photometric stereo that requires multiple images of a static scene under varying illumination to dynamic scenarios. However, most existing approaches assume ideal distant lighting and Lambertian reflectance, leaving more practical near-light conditions and non-Lambertian surfaces underexplored. To overcome this limitation, we propose a framework that leverages neural implicit representations for depth and BRDF modeling under the assumption of mono-chromaticity (uniform chromaticity and homogeneous material), which alleviates the inherent ill-posedness of color photometric stereo and allows for detailed surface recovery from just one image. Furthermore, we design a compact optical tactile sensor to validate our approach. Experiments on both synthetic and real-world datasets demonstrate that our method achieves accurate and robust surface reconstruction.
+ oai:arXiv.org:2601.12666v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zonglin Li, Jieji Ren, Shuangfan Zhou, Heng Guo, Jinnuo Zhang, Jiang Zhou, Boxin Shi, Zhanyu Ma, Guoying Gu
+
+
+ Empowering All-in-Loop Health Management of Spacecraft Power System in the Mega-Constellation Era via Human-AI Collaboration
+ https://arxiv.org/abs/2601.12667
+ arXiv:2601.12667v1 Announce Type: new
+Abstract: It is foreseeable that the number of spacecraft will increase exponentially, ushering in an era dominated by satellite mega-constellations (SMC). This necessitates a focus on energy in space: spacecraft power systems (SPS), especially their health management (HM), given their role in power supply and high failure rates. Providing health management for dozens of SPS versus thousands of SPS represents two fundamentally different paradigms. Therefore, to adapt health management to the SMC era, this work proposes a principle of aligning underlying capabilities (AUC principle) and develops SpaceHMchat, an open-source Human-AI collaboration (HAIC) framework for all-in-loop health management (AIL HM). SpaceHMchat serves across the entire loop of work condition recognition, anomaly detection, fault localization, and maintenance decision making, achieving goals such as conversational task completion, adaptive human-in-the-loop learning, personnel structure optimization, knowledge sharing, efficiency enhancement, as well as transparent reasoning and improved interpretability. Meanwhile, to validate this exploration, a hardware-realistic fault injection experimental platform is established, and its simulation model is built and open-sourced, both fully replicating the real SPS. The corresponding experimental results demonstrate that SpaceHMchat achieves excellent performance across 23 quantitative metrics, such as 100% conclusion accuracy in logical reasoning of work condition recognition, over 99% success rate in anomaly detection tool invocation, over 90% precision in fault localization, and knowledge base search time under 3 minutes in maintenance decision-making. Another contribution of this work is the release of the first-ever AIL HM dataset of SPS. This dataset contains four sub-datasets, involving 4 types of AIL HM sub-tasks, 17 types of faults, and over 700,000 timestamps.
+ oai:arXiv.org:2601.12667v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yi Di, Zhibin Zhao, Fujin Wang, Xue Liu, Jiafeng Tang, Jiaxin Ren, Zhi Zhai, Xuefeng Chen
+
+
+ Exploiting Test-Time Augmentation in Federated Learning for Brain Tumor MRI Classification
+ https://arxiv.org/abs/2601.12671
+ arXiv:2601.12671v1 Announce Type: new
+Abstract: Efficient brain tumor diagnosis is crucial for early treatment; however, it is challenging because of lesion variability and image complexity. We evaluated convolutional neural networks (CNNs) in a federated learning (FL) setting, comparing models trained on original versus preprocessed MRI images (resizing, grayscale conversion, normalization, filtering, and histogram equalization). Preprocessing alone yielded negligible gains; combined with test-time augmentation (TTA), it delivered consistent, statistically significant improvements in federated MRI classification (p<0.001). In practice, TTA should be the default inference strategy in FL-based medical imaging; when the computational budget permits, pairing TTA with light preprocessing provides additional reliable gains.
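Test-time augmentation of the kind evaluated here is straightforward to implement: average the model's class probabilities over several augmented views of each image. Below is a generic TTA sketch, not the authors' exact pipeline; `predict_fn` and the flip augmentations are placeholders:

```python
import numpy as np

def tta_predict(predict_fn, image, augmentations):
    """Test-time augmentation: average class probabilities over
    augmented copies of the input. `predict_fn` maps an image to a
    probability vector; each augmentation maps image -> image.
    (Generic sketch; not the authors' exact pipeline.)"""
    probs = [predict_fn(aug(image)) for aug in augmentations]
    probs.append(predict_fn(image))  # include the original view
    return np.mean(probs, axis=0)

# Toy example: a stand-in "model" whose output depends only on the
# image mean, with horizontal/vertical flips as augmentations.
img = np.arange(16.0).reshape(4, 4)
def fake_model(x):
    return np.array([x.mean() / 16.0, 1.0 - x.mean() / 16.0])
avg = tta_predict(fake_model, img, [np.fliplr, np.flipud])
```

In an FL setting this runs purely at inference time on each client, so it adds no communication cost, only extra forward passes per image.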
+ oai:arXiv.org:2601.12671v1
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Thamara Leandra de Deus Melo, Rodrigo Moreira, Larissa Ferreira Rodrigues Moreira, André Ricardo Backes
+
+
+ VILTA: A VLM-in-the-Loop Adversary for Enhancing Driving Policy Robustness
+ https://arxiv.org/abs/2601.12672
+ arXiv:2601.12672v1 Announce Type: new
+Abstract: The safe deployment of autonomous driving (AD) systems is fundamentally hindered by the long-tail problem, where rare yet critical driving scenarios are severely underrepresented in real-world data. Existing solutions, including safety-critical scenario generation and closed-loop learning, often rely on rule-based heuristics, resampling methods, and generative models learned from offline datasets, limiting their ability to produce diverse and novel challenges. While recent works leverage Vision Language Models (VLMs) to produce scene descriptions that guide a separate, downstream model in generating hazardous trajectories for agents, such a two-stage framework constrains the generative potential of VLMs, as the diversity of the final trajectories is ultimately limited by the generalization ceiling of the downstream algorithm. To overcome these limitations, we introduce VILTA (VLM-In-the-Loop Trajectory Adversary), a novel framework that integrates a VLM into the closed-loop training of AD agents. Unlike prior works, VILTA actively participates in the training loop by comprehending the dynamic driving environment and strategically generating challenging scenarios through direct, fine-grained editing of surrounding agents' future trajectories. This direct-editing approach fully leverages the VLM's powerful generalization capabilities to create a diverse curriculum of plausible yet challenging scenarios that extend beyond the scope of traditional methods. We demonstrate that our approach substantially enhances the safety and robustness of the resulting AD policy, particularly in its ability to navigate critical long-tail events.
+ oai:arXiv.org:2601.12672v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qimao Chen, Fang Li, Shaoqing Xu, Zhiyi Lai, Zixun Xie, Yuechen Luo, Shengyin Jiang, Hanbing Li, Long Chen, Bing Wang, Yi Zhang, Zhi-Xin Yang
+
+
+ Physics-informed machine learning for reconstruction of dynamical systems with invariant measure score matching
+ https://arxiv.org/abs/2601.12675
+ arXiv:2601.12675v1 Announce Type: new
+Abstract: In this paper, we develop a novel mesh-free framework, termed physics-informed neural networks with invariant measure score matching (PINN-IMSM), for reconstructing dynamical systems from unlabeled point-cloud data that capture the system's invariant measure. The invariant density satisfies the steady-state Fokker-Planck (FP) equation. We reformulate this equation in terms of its score function (the gradient of the log-density), which is estimated directly from data via denoising score matching, thereby bypassing explicit density estimation. This learned score is then embedded into a physics-informed neural network (PINN) to reconstruct the drift velocity field under the resulting score-based FP equation. The mesh-free nature of PINNs allows the framework to scale to higher dimensions, avoiding the curse of dimensionality inherent in mesh-based methods. To address the ill-posedness of high-dimensional inverse problems, we recast the problem as a PDE-constrained optimization that seeks the minimal-energy velocity field. Under suitable conditions, we prove that this problem admits a unique solution that depends continuously on the score function. The constrained formulation is solved using a stochastic augmented Lagrangian method. Numerical experiments on representative dynamical systems, including the Van der Pol oscillator, an active swimmer in an anharmonic trap, and the chaotic Lorenz-63 and Lorenz-96 systems, demonstrate that PINN-IMSM accurately recovers invariant measures and reconstructs faithful dynamical behavior for problems in up to five dimensions.
+ oai:arXiv.org:2601.12675v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yongsheng Chen, Suddhasattwa Das, Wei Guo, Xinghui Zhong
+
+
+ MetaToolAgent: Towards Generalizable Tool Usage in LLMs through Meta-Learning
+ https://arxiv.org/abs/2601.12680
+ arXiv:2601.12680v1 Announce Type: new
+Abstract: Tool learning is increasingly important for large language models (LLMs) to effectively coordinate and utilize a diverse set of tools in order to solve complex real-world tasks. By selecting and integrating appropriate tools, LLMs extend their capabilities beyond pure language understanding to perform specialized functions. However, existing methods for tool selection often focus on limited tool sets and struggle to generalize to novel tools encountered in practical deployments. To address these challenges, we introduce a comprehensive dataset spanning 7 domains, containing 155 tools and 9,377 question-answer pairs, which simulates realistic integration scenarios. Additionally, we propose MetaToolAgent (MTA), a meta-learning approach designed to improve cross-tool generalization. Experimental results show that MTA significantly outperforms baseline methods on unseen tools, demonstrating its promise for building flexible and scalable systems that require dynamic tool coordination.
+ oai:arXiv.org:2601.12680v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zheng Fang, Wolfgang Mayer, Zeyu Zhang, Jian Wang, Hong-Yu Zhang, Wanli Li, Zaiwen Feng
+
+
+ HyFormer: Revisiting the Roles of Sequence Modeling and Feature Interaction in CTR Prediction
+ https://arxiv.org/abs/2601.12681
+ arXiv:2601.12681v1 Announce Type: new
+Abstract: Industrial large-scale recommendation models (LRMs) face the challenge of jointly modeling long-range user behavior sequences and heterogeneous non-sequential features under strict efficiency constraints. However, most existing architectures employ a decoupled pipeline: long sequences are first compressed with a query-token based sequence compressor like LONGER, followed by fusion with dense features through token-mixing modules like RankMixer, which thereby limits both the representation capacity and the interaction flexibility. This paper presents HyFormer, a unified hybrid transformer architecture that tightly integrates long-sequence modeling and feature interaction into a single backbone. From the perspective of sequence modeling, we revisit and redesign query tokens in LRMs, and frame the LRM modeling task as an alternating optimization process that integrates two core components: Query Decoding which expands non-sequential features into Global Tokens and performs long sequence decoding over layer-wise key-value representations of long behavioral sequences; and Query Boosting which enhances cross-query and cross-sequence heterogeneous interactions via efficient token mixing. The two complementary mechanisms are performed iteratively to refine semantic representations across layers. Extensive experiments on billion-scale industrial datasets demonstrate that HyFormer consistently outperforms strong LONGER and RankMixer baselines under comparable parameter and FLOPs budgets, while exhibiting superior scaling behavior with increasing parameters and FLOPs. Large-scale online A/B tests in high-traffic production systems further validate its effectiveness, showing significant gains over deployed state-of-the-art models. These results highlight the practicality and scalability of HyFormer as a unified modeling framework for industrial LRMs.
+ oai:arXiv.org:2601.12681v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yunwen Huang, Shiyong Hong, Xijun Xiao, Jinqiu Jin, Xuanyuan Luo, Zhe Wang, Zheng Chai, Shikang Wu, Yuchao Zheng, Jingjian Lin
+
+
+ Fusion-Restoration Image Processing Algorithm to Improve the High-Temperature Deformation Measurement
+ https://arxiv.org/abs/2601.12682
+ arXiv:2601.12682v1 Announce Type: new
+Abstract: In the deformation measurement of high-temperature structures, image degradation caused by thermal radiation and random errors introduced by heat haze restrict the accuracy and effectiveness of digital image correlation (DIC). This work suppresses thermal radiation and heat haze using fusion-restoration image processing methods, thereby improving the accuracy and effectiveness of DIC in high-temperature deformation measurement. For image degradation caused by thermal radiation, the image is decomposed, based on a layered representation, into positive and negative channels for parallel processing, and then optimized for quality by multi-exposure image fusion. To counteract the high-frequency, random errors introduced by heat haze, we adopt the feature similarity index (FSIM) as the objective function to guide the iterative optimization of model parameters, and a grayscale-averaging algorithm is applied to equalize anomalous gray values, thereby reducing measurement error. The proposed multi-exposure image fusion algorithm effectively suppresses image degradation caused by complex illumination conditions, boosting the effective computation area from 26% to 50% for under-exposed images and from 32% to 40% for over-exposed images without degrading measurement accuracy in the experiment. Meanwhile, the image restoration combined with the grayscale-averaging algorithm reduces static thermal deformation measurement errors: the error in \(\epsilon_{xx}\) is reduced by 85.3%, while the errors in \(\epsilon_{yy}\) and \(\gamma_{xy}\) are reduced by 36.0% and 36.4%, respectively. The experimental results verify that the proposed method can effectively improve image quality, reduce deformation measurement errors, and has potential application value in thermal deformation measurement.
+ oai:arXiv.org:2601.12682v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Banglei Guan, Dongcai Tan, Jing Tao, Ang Su, Yang Shang, Qifeng Yu
+
+
+ GaussianTrimmer: Online Trimming Boundaries for 3DGS Segmentation
+ https://arxiv.org/abs/2601.12683
+ arXiv:2601.12683v1 Announce Type: new
+Abstract: With the widespread application of 3D Gaussians in 3D scene representation, 3D scene segmentation methods based on 3D Gaussians have also gradually emerged. However, existing 3D Gaussian segmentation methods operate at the level of Gaussian primitives. Because the scale of 3D Gaussians varies widely, large Gaussians often span both foreground and background, producing jagged boundaries on segmented objects. To this end, we propose an online boundary trimming method, GaussianTrimmer, an efficient and plug-and-play post-processing method capable of trimming coarse boundaries for existing 3D Gaussian segmentation methods. Our method consists of two core steps: (1) generating virtual cameras with uniform and complete scene coverage; (2) trimming Gaussians at the primitive level based on 2D segmentation results rendered from those virtual cameras. Extensive quantitative and qualitative experiments demonstrate that our method improves the segmentation quality of existing 3D Gaussian segmentation methods.
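The second step can be pictured as a simple visibility vote: a Gaussian survives trimming only if its projected center lands inside the 2D object mask in enough virtual views. The sketch below is a deliberate simplification; the projection functions, mask inputs, and `keep_ratio` threshold are assumptions, and the actual method additionally handles Gaussian scale and rendering details not modelled here:

```python
import numpy as np

def trim_gaussians(centers, project_fns, masks, keep_ratio=0.5):
    """Primitive-level trimming vote: keep a Gaussian only if its
    projected center falls inside the 2D object mask in at least
    `keep_ratio` of the virtual views. (Simplified illustration.)"""
    votes = np.zeros(len(centers))
    for project, mask in zip(project_fns, masks):
        h, w = mask.shape
        for i, center in enumerate(centers):
            u, v = project(center)  # pixel coordinates in this view
            if 0 <= int(v) < h and 0 <= int(u) < w and mask[int(v), int(u)]:
                votes[i] += 1
    return votes / len(masks) >= keep_ratio

# Toy scene: one virtual view with an identity "projection"; the second
# center lies outside the 4x4 mask and is trimmed away.
mask = np.ones((4, 4), dtype=bool)
keep = trim_gaussians([(1, 1), (5, 5)], [lambda c: c], [mask])
```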
+ oai:arXiv.org:2601.12683v1
+ cs.CV
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Liwei Liao, Ronggang Wang
+
+
+ A Model Fusion Approach for Enhancing Credit Approval Decision Making
+ https://arxiv.org/abs/2601.12684
+ arXiv:2601.12684v1 Announce Type: new
+Abstract: Credit default poses significant challenges to financial institutions and consumers, resulting in substantial financial losses and diminished trust. As such, credit default risk management has been a critical topic in the financial industry. In this paper, we present Combinatorial Fusion Analysis (CFA), a model fusion framework that combines multiple machine learning algorithms to predict credit card approval with high accuracy. We present the design methodology and implementation using five pre-trained models. The CFA results show an accuracy of 89.13%, which is better than conventional machine learning and ensemble methods.
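As a flavour of score-level fusion, the sketch below min-max normalises each model's scores and averages them. This is only one of the basic combinations used in combinatorial fusion; the paper's CFA framework also involves rank combinations and diversity measures not shown here:

```python
import numpy as np

def fuse_scores(score_lists):
    """Average-score fusion: min-max normalise each model's scores to
    [0, 1], then average across models. (One basic combination used in
    combinatorial fusion; rank combinations are not shown.)"""
    norm = []
    for scores in score_lists:
        s = np.asarray(scores, dtype=float)
        rng = s.max() - s.min()
        norm.append((s - s.min()) / rng if rng else np.zeros_like(s))
    return np.mean(norm, axis=0)

# Two hypothetical models scoring three applicants; the fused score
# ranks applicant 0 highest.
fused = fuse_scores([[0.9, 0.2, 0.4], [0.8, 0.1, 0.6]])
```

Normalising before averaging matters: without it, a model that emits larger raw scores would dominate the fused ranking.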
+ oai:arXiv.org:2601.12684v1
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuanhong Wu, Jingyan Xu, Wei Ye, Christina Schweikert, D. Frank Hsu
+
+
+ Persuasion in Online Conversations Is Associated with Alignment in Expressed Human Values
+ https://arxiv.org/abs/2601.12685
+ arXiv:2601.12685v1 Announce Type: new
+Abstract: Online disagreements often fail to produce understanding, instead reinforcing existing positions or escalating conflict. Prior work on predictors of successful persuasion in online discourse has largely focused on surface features such as linguistic style or conversational structure, leaving open the role of underlying principles or concerns that participants bring to an interaction. In this paper, we investigate how the expression and alignment of human values in back-and-forth online discussions relate to persuasion. Using data from Reddit's ChangeMyView subreddit, where successful persuasion is explicitly signaled through the awarding of deltas, we analyze one-on-one exchanges and characterize participants' value expression by drawing from Schwartz's Refined Theory of Basic Human Values. We find that successful persuasion is associated with two complementary processes: pre-existing compatibility between participants' value priorities even before the exchange happens, and the emergence of value alignment over the course of a conversation. At the same time, successful persuasion does not depend on commenters making large departures from their typical value expression patterns. We discuss implications of our findings for the design of online social platforms that aim to support constructive engagement across disagreement.
+ oai:arXiv.org:2601.12685v1
+ cs.HC
+ cs.CY
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bhavesh Vuyyuru, Farnaz Jahanbakhsh
+
+
+ Best Practices for Large Load Interconnections: A North American Perspective on Data Centers
+ https://arxiv.org/abs/2601.12686
+ arXiv:2601.12686v1 Announce Type: new
+Abstract: Large loads are expanding rapidly across North America, led by data centers, cryptocurrency mining, hydrogen production facilities, and heavy-duty charging stations. Each class presents distinct electrical characteristics, but data centers are drawing particular attention as AI deployment drives unprecedented capacity growth. Their scale, duty cycles, and converter-dominated interfaces introduce new challenges for transmission interconnections, especially regarding disturbance behavior, steady-state performance, and operational visibility. This paper reviews best practices for large-load interconnections across North America, synthesizing utility and system operator guidelines into a coherent set of technical requirements. The approach combines handbook and manual analysis with cross-utility comparisons and an outlook on European directions. The review highlights requirements on power quality, telemetry, commissioning tests, and protection coordination, while noting gaps in ride-through specifications, load-variation management, and post-disturbance recovery targets. Building on these findings, the paper proposes practical guidance for developers and utilities.
+ oai:arXiv.org:2601.12686v1
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Rafi Zahedi, Amin Zamani, Rahul Anilkumar
+
+
+ Network Slicing Resource Management in Uplink User-Centric Cell-Free Massive MIMO Systems
+ https://arxiv.org/abs/2601.12687
+ arXiv:2601.12687v1 Announce Type: new
+Abstract: This paper addresses the joint optimization of per-user equipment (UE) bandwidth allocation and UE-access point (AP) association to maximize weighted sum-rate while satisfying heterogeneous quality-of-service (QoS) requirements across enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) slices in the uplink of a network slicing-enabled user-centric cell-free (CF) massive multiple-input multiple-output (mMIMO) system. The formulated problem is NP-hard, rendering global optimality computationally intractable. To address this challenge, it is decomposed into two sub-problems, each solved by a computationally efficient heuristic scheme, and jointly optimized through an alternating optimization framework. We then propose (i) a bandwidth allocation scheme that balances UE priority, spectral efficiency, and minimum bandwidth demand under limited resources to ensure fair QoS distribution, and (ii) a priority-based UE-AP association assignment approach that balances UE service quality with system capacity constraints. Together, these approaches provide a practical and computationally efficient solution for resource-constrained network slicing scenarios, where QoS feasibility is often violated under dense deployments and limited bandwidth, necessitating graceful degradation and fair QoS preservation rather than solely maximizing the aggregate sum-rate. Simulation results demonstrate that the proposed scheme achieves up to 52% higher weighted sum-rate, 140% and 58% higher QoS success rates for eMBB and URLLC slices, respectively, while reducing runtime by up to 97% compared to considered benchmarks.
+ oai:arXiv.org:2601.12687v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Manobendu Sarker, Soumaya Cherkaoui
+
+
+ Logic-Guided Multistage Inference for Explainable Multidefendant Judgment Prediction
+ https://arxiv.org/abs/2601.12688
+ arXiv:2601.12688v1 Announce Type: new
+Abstract: Crime disrupts societal stability, making law essential for balance. In multidefendant cases, assigning responsibility is complex and challenges fairness, requiring precise role differentiation. However, judicial phrasing often obscures the roles of the defendants, hindering effective AI-driven analyses. To address this issue, we incorporate sentencing logic into a pretrained Transformer encoder framework to enhance intelligent assistance in multidefendant cases while ensuring legal interpretability. Within this framework, an oriented masking mechanism clarifies roles, and a comparative data construction strategy improves the model's sensitivity to culpability distinctions between principals and accomplices. Predicted guilt labels are further incorporated into a regression model through broadcasting, consolidating crime descriptions and court views. Our proposed masked multistage inference (MMSI) framework, evaluated on the custom IMLJP dataset for intentional injury cases, achieves significant accuracy improvements, outperforming baselines in role-based culpability differentiation. This work offers a robust solution for enhancing intelligent judicial systems, with code publicly available.
+ oai:arXiv.org:2601.12688v1
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Xu Zhang, Qinghua Wang, Mengyang Zhao, Fang Wang, Cunquan Qu
+
+
+ Priority-Based Bandwidth Allocation in Network Slicing-Enabled Cell-Free Massive MIMO Systems
+ https://arxiv.org/abs/2601.12689
+ arXiv:2601.12689v1 Announce Type: new
+Abstract: This paper addresses joint admission control and per-user equipment (UE) bandwidth allocation to maximize weighted sum-rate in network slicing-enabled user-centric cell-free (CF) massive multiple-input multiple-output (mMIMO) systems when aggregate quality-of-service (QoS) demand may exceed available bandwidth. Specifically, we optimize bandwidth allocation while satisfying heterogeneous QoS requirements across enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) slices in the uplink. The formulated problem is NP-hard, rendering global optimality computationally intractable. We decompose it into two sub-problems and solve them via computationally efficient heuristics within a sequential framework. We propose (i) a hierarchical admission control scheme that selectively admits UEs under bandwidth scarcity, prioritizing URLLC to ensure latency-sensitive QoS compliance, and (ii) an iterative gradient-based bandwidth allocation scheme that transfers bandwidth across slices guided by marginal utility and reallocates resources within slices. Simulation results demonstrate that the proposed scheme achieves near-optimal performance, deviating from a CVX-based benchmark by at most 2.2% in weighted sum-rate while reducing runtime by 99.7%, thereby enabling practical real-time deployment. Compared to a baseline round-robin scheme without admission control, the proposed approach achieves up to 1085% and 7% higher success rates for eMBB and URLLC slices, respectively, by intentionally sacrificing sum-rate to guarantee QoS. Sensitivity analysis further reveals that the proposed solution adapts effectively to diverse eMBB/URLLC traffic compositions, maintaining 47-51% eMBB and 93-94% URLLC success rates across varying load scenarios, confirming its robustness for resource-constrained large-scale deployments.
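The ordering logic of the hierarchical admission control scheme can be sketched as follows. This is a simplified illustration of the URLLC-first, priority-ordered idea only; the field names and the notion of a fixed minimum-bandwidth demand per UE are assumptions, not the paper's notation:

```python
def admit_ues(ues, total_bw):
    """Hierarchical admission control sketch: admit URLLC UEs first
    (latency-critical), then eMBB UEs in descending priority, admitting
    each UE only while its minimum bandwidth demand still fits in the
    remaining budget. (Illustrative ordering logic, not the paper's
    full scheme.)"""
    order = sorted(ues, key=lambda u: (u["slice"] != "URLLC", -u["priority"]))
    admitted, remaining = [], total_bw
    for ue in order:
        if ue["min_bw"] <= remaining:
            admitted.append(ue["id"])
            remaining -= ue["min_bw"]
    return admitted, remaining

# Three UEs competing for 80 units of bandwidth: the URLLC UE is
# admitted first, then the higher-priority eMBB UE; the last demand
# no longer fits and is rejected.
ues = [
    {"id": 1, "slice": "eMBB", "priority": 2, "min_bw": 50},
    {"id": 2, "slice": "URLLC", "priority": 1, "min_bw": 30},
    {"id": 3, "slice": "eMBB", "priority": 5, "min_bw": 40},
]
admitted, remaining = admit_ues(ues, 80)
```

The sort key `(u["slice"] != "URLLC", -u["priority"])` places all URLLC UEs before any eMBB UE regardless of priority, matching the abstract's deliberate sacrifice of sum-rate for URLLC QoS guarantees.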
+ oai:arXiv.org:2601.12689v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Manobendu Sarker, Soumaya Cherkaoui
+
+
+ "Are we writing an advice column for Spock here?" Understanding Stereotypes in AI Advice for Autistic Users
+ https://arxiv.org/abs/2601.12690
+ arXiv:2601.12690v1 Announce Type: new
+Abstract: Autistic individuals sometimes disclose autism when asking LLMs for social advice, hoping for more personalized responses. However, they also recognize that these systems may reproduce stereotypes, raising uncertainty about the risks and benefits of disclosure. We conducted a mixed-methods study combining a large-scale LLM audit experiment with interviews involving 11 autistic participants. We developed a six-step pipeline operationalizing 12 documented autism stereotypes into decision-making scenarios framed as users requesting advice (e.g., "Should I do A or B?"). We generated 345,000 responses from six LLMs and measured how advice shifted when prompts disclosed autism versus when they did not. When autism was disclosed, LLMs disproportionately recommended avoiding stereotypically stressful situations, including social events, confrontations, new experiences, and romantic relationships. While some participants viewed this as affirming, others criticized it as infantilizing or undermining opportunities for growth. Our study illuminates how the intermingling of affirmation and stereotyping complicates the personalization of LLMs.
+ oai:arXiv.org:2601.12690v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Caleb Wohn, Buse Çarık, Xiaohan Ding, Sang Won Lee, Young-Ho Kim, Eugenia H. Rho
+
+
+ BlocksecRT-DETR: Decentralized Privacy-Preserving and Token-Efficient Federated Transformer Learning for Secure Real-Time Object Detection in ITS
+ https://arxiv.org/abs/2601.12693
+ arXiv:2601.12693v1 Announce Type: new
+Abstract: Federated real-time object detection using transformers in Intelligent Transportation Systems (ITS) faces three major challenges: (1) missing-class non-IID data heterogeneity from geographically diverse traffic environments, (2) latency constraints on edge hardware for high-capacity transformer models, and (3) privacy and security risks from untrusted client updates and centralized aggregation. We propose BlockSecRT-DETR, a BLOCKchain-SECured Real-Time Object DEtection TRansformer framework for ITS that provides a decentralized, token-efficient, and privacy-preserving federated training solution using RT-DETR transformer, incorporating a blockchain-secured update validation mechanism for trustworthy aggregation. In this framework, challenges (1) and (2) are jointly addressed through a unified client-side design that integrates RT-DETR training with a Token Engineering Module (TEM). TEM prunes low-utility tokens, reducing encoder complexity and latency on edge hardware, while aggregated updates mitigate non-IID data heterogeneity across clients. To address challenge (3), BlockSecRT-DETR incorporates a decentralized blockchain-secured update validation mechanism that enables tamper-proof, privacy-preserving, and trust-free authenticated model aggregation without relying on a central server. We evaluated the proposed framework under a missing-class Non-IID partition of the KITTI dataset and conducted a blockchain case study to quantify security overhead. TEM improves inference latency by 17.2% and reduces encoder FLOPs by 47.8%, while maintaining global detection accuracy (89.20% mAP@0.5). The blockchain integration adds 400 ms per round, and the ledger size remains under 12 KB due to metadata-only on-chain storage.
+ oai:arXiv.org:2601.12693v1
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mohoshin Ara Tahera, Sabbir Rahman, Shuvalaxmi Dass, Sharif Ullah, Mahmoud Abouyessef
+
+
+ Closed-loop Uplink Radio Resource Management in CF-O-RAN Empowered 5G Aerial Corridor
+ https://arxiv.org/abs/2601.12694
+ arXiv:2601.12694v1 Announce Type: new
+Abstract: In this paper, we investigate the uplink (UL) radio resource management for 5G aerial corridors with an open-radio access network (O-RAN)-enabled cell-free (CF) massive multiple-input multiple-output (mMIMO) system. Our objective is to maximize the minimum spectral efficiency (SE) by jointly optimizing unmanned aerial vehicle (UAV)-open radio unit (O-RU) association and UL transmit power under quality-of-service (QoS) constraints. Owing to its NP-hard nature, the formulated problem is decomposed into two tractable sub-problems solved via alternating optimization (AO) using two computationally efficient algorithms. We then propose (i) a QoS-driven and multi-connectivity-enabled association algorithm incorporating UAV-centric and O-RU-centric criteria with targeted refinement for weak UAVs, and (ii) a bisection-guided fixed-point power control algorithm achieving global optimality with significantly reduced complexity, hosted as xApp at the near-real-time (near-RT) RAN intelligent controller (RIC) of O-RAN. Solving the resource-allocation problem requires global channel state information (CSI), which incurs substantial measurement and signaling overhead. To mitigate this, we leverage a channel knowledge map (CKM) within the O-RAN non-RT RIC to enable efficient environment-aware CSI inference. Simulation results show that the proposed framework achieves up to 440% improvement in minimum SE, 100% QoS satisfaction and fairness, while reducing runtime by up to 99.7% compared to an interior point solver-based power allocation solution, thereby enabling O-RAN compliant real-time deployment.
+ oai:arXiv.org:2601.12694v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Manobendu Sarker, Md. Zoheb Hassan, Xianbin Wang
+
+
+ From Noise to Knowledge: System Identification with Systematic Polytope Construction via Cyclic Reformulation
+ https://arxiv.org/abs/2601.12695
+ arXiv:2601.12695v1 Announce Type: new
+Abstract: Model-based control requires accurate mathematical models to guarantee control performance and stability. However, obtaining accurate models is challenging due to process and sensor noise. This paper proposes a novel identification algorithm that derives polytopic uncertainty models by interpreting noise-induced parameter fluctuations as intrinsic uncertainty. The method applies cyclic reformulation with period N to linear time-invariant systems, yielding N parameter sets with slight variations that serve as polytope vertices. This enables systematic polytopic model construction from a single identification experiment. Simulation results demonstrate significant improvements: the proposed method achieves higher parameter estimation accuracy and reduces prediction errors by approximately half compared to conventional approaches. The vertex count N provides systematic control over the precision of uncertainty representation.
+ oai:arXiv.org:2601.12695v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hiroshi Okajima, Shun Shirahama, Tatsunori Hayashi, Nobutomo Matsunaga
+
+
+ UbuntuGuard: A Culturally-Grounded Policy Benchmark for Equitable AI Safety in African Languages
+ https://arxiv.org/abs/2601.12696
+ arXiv:2601.12696v1 Announce Type: new
+Abstract: Current guardian models are predominantly Western-centric and optimized for high-resource languages, leaving low-resource African languages vulnerable to evolving harms, cross-lingual safety failures, and cultural misalignment. Moreover, most guardian models rely on rigid, predefined safety categories that fail to generalize across diverse linguistic and sociocultural contexts. Robust safety, therefore, requires flexible, runtime-enforceable policies and benchmarks that reflect local norms, harm scenarios, and cultural expectations. We introduce UbuntuGuard, the first African policy-based safety benchmark built from adversarial queries authored by 155 domain experts across sensitive fields, including healthcare. From these expert-crafted queries, we derive context-specific safety policies and reference responses that capture culturally grounded risk signals, enabling policy-aligned evaluation of guardian models. We evaluate 13 models, comprising six general-purpose LLMs and seven guardian models across three distinct variants: static, dynamic, and multilingual. Our findings reveal that existing English-centric benchmarks overestimate real-world multilingual safety, cross-lingual transfer provides partial but insufficient coverage, and dynamic models, while better equipped to leverage policies at inference time, still struggle to fully localize African-language contexts. These findings highlight the urgent need for multilingual, culturally grounded safety benchmarks to enable the development of reliable and equitable guardian models for low-resource languages. Our code is available at https://github.com/hemhemoh/UbuntuGuard.
+ oai:arXiv.org:2601.12696v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tassallah Abdullahi, Macton Mgonzo, Mardiyyah Oduwole, Paul Okewunmi, Abraham Owodunni, Ritambhara Singh, Carsten Eickhoff
+
+
+ Fusing in 3D: Free-Viewpoint Fusion Rendering with a 3D Infrared-Visible Scene Representation
+ https://arxiv.org/abs/2601.12697
+ arXiv:2601.12697v1 Announce Type: new
+Abstract: Infrared-visible image fusion aims to integrate infrared and visible information into a single fused image. Existing 2D fusion methods focus on fusing images from fixed camera viewpoints, neglecting a comprehensive understanding of complex scenarios, which results in the loss of critical information about the scene. To address this limitation, we propose a novel Infrared-Visible Gaussian Fusion (IVGF) framework, which reconstructs scene geometry from multimodal 2D inputs and enables direct rendering of fused images. Specifically, we propose a cross-modal adjustment (CMA) module that modulates the opacity of Gaussians to solve the problem of cross-modal conflicts. Moreover, to preserve the distinctive features from both modalities, we introduce a fusion loss that guides the optimization of CMA, thus ensuring that the fused image retains the critical characteristics of each modality. Comprehensive qualitative and quantitative experiments demonstrate the effectiveness of the proposed method.
+ oai:arXiv.org:2601.12697v1
+ cs.CV
+ cs.CG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chao Yang, Deshui Miao, Chao Tian, Guoqing Zhu, Yameng Gu, Zhenyu He
+
+
+ A Two-Stage GPU Kernel Tuner Combining Semantic Refactoring and Search-Based Optimization
+ https://arxiv.org/abs/2601.12698
+ arXiv:2601.12698v1 Announce Type: new
+Abstract: GPU code optimization is a key performance bottleneck for HPC workloads as well as large-model training and inference. Although compiler optimizations and hand-written kernels can partially alleviate this issue, achieving near-hardware-limit performance still relies heavily on manual code refactoring and parameter tuning. Recent progress in LLM-agent-based kernel generation and optimization has been reported, yet many approaches primarily focus on direct code rewriting, where parameter choices are often implicit and hard to control, or require human intervention, leading to unstable performance gains. This paper introduces a template-based rewriting layer on top of an agent-driven iterative loop: kernels are semantically refactored into explicitly parameterizable templates, and template parameters are then optimized via search-based autotuning, yielding more stable and higher-quality speedups. Experiments on a set of real-world kernels demonstrate speedups exceeding 3x in the best case. We extract representative CUDA kernels from SGLang as evaluation targets; the proposed agentic tuner iteratively performs templating, testing, analysis, and planning, and leverages profiling feedback to execute constrained parameter search under hardware resource limits. Compared to agent-only direct rewriting, the template-plus-search design significantly reduces the randomness of iterative optimization, making the process more interpretable and enabling a more systematic approach toward high-performance configurations. The proposed method can be further extended to OpenCL, HIP, and other backends to deliver automated performance optimization for real production workloads.
+ oai:arXiv.org:2601.12698v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Qiuyi Qu, Yicheng Sui, Yufei Sun, Rui Chen, Xiaofei Zhang, Yuzhi Zhang, Haofeng Wang, Ge Lan, Ning Zhang
+
+
+ Resource-Conscious RL Algorithms for Deep Brain Stimulation
+ https://arxiv.org/abs/2601.12699
+ arXiv:2601.12699v1 Announce Type: new
+Abstract: Deep Brain Stimulation (DBS) has proven to be a promising treatment of Parkinson's Disease (PD). DBS involves stimulating specific regions of the brain's Basal Ganglia (BG) using electric impulses to alleviate symptoms of PD such as tremors, rigidity, and bradykinesia. Although most clinical DBS approaches today use a fixed frequency and amplitude, they suffer from side effects (such as slurring of speech) and shortened battery life of the implant. Reinforcement learning (RL) approaches have been used in recent research to perform DBS in a more adaptive manner to improve overall patient outcome. These RL algorithms are, however, too complex to be trained in vivo due to their long convergence time and requirement of high computational resources.
+ We propose a new Time & Threshold-Triggered Multi-Armed Bandit (T3P MAB) RL approach for DBS that is more effective than existing algorithms. Further, our T3P agent is lightweight enough to be deployed in the implant, unlike current deep-RL strategies, and even forgoes the need for an offline training phase. Additionally, most existing RL approaches have focused on modulating only frequency or amplitude, and the possibility of tuning them together remains greatly unexplored in the literature. Our RL agent can tune both frequency and amplitude of DBS signals to the brain with better sample efficiency and requires minimal time to converge. We implement an MAB agent for DBS for the first time on hardware to report energy measurements and prove its suitability for resource-constrained platforms. Our T3P MAB algorithm is deployed on a variety of microcontroller unit (MCU) setups to show its efficiency in terms of power consumption as opposed to other existing RL approaches used in recent work.
+ oai:arXiv.org:2601.12699v1
+ cs.LG
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Arkaprava Gupta, Nicholas Carter, William Zellers, Prateek Ganguli, Benedikt Dietrich, Vibhor Krishna, Parasara Sridhar Duggirala, Samarjit Chakraborty
+
+
+ RPT*: Global Planning with Probabilistic Terminals for Target Search in Complex Environments
+ https://arxiv.org/abs/2601.12701
+ arXiv:2601.12701v1 Announce Type: new
+Abstract: Routing problems such as the Hamiltonian Path Problem (HPP) seek a path that visits all the vertices in a graph while minimizing the path cost. This paper studies a variant, HPP with Probabilistic Terminals (HPP-PT), where each vertex has a probability representing the likelihood that the robot's path terminates there, and the objective is to minimize the expected path cost. HPP-PT arises in target object search, where a mobile robot must visit all candidate locations to find an object, and prior knowledge of the object's location is expressed as vertex probabilities. While routing problems have been studied for decades, few of them consider uncertainty as required in this work. The challenge lies not only in optimally ordering the vertices, as in standard HPP, but also in handling history dependency: the expected path cost depends on the order in which vertices were previously visited. This makes many existing methods inefficient or inapplicable. To address the challenge, we propose a search-based approach RPT* with solution optimality guarantees, which leverages dynamic programming in a new state space to bypass the history dependency and novel heuristics to speed up the computation. Building on RPT*, we design a Hierarchical Autonomous Target Search (HATS) system that combines RPT* with either Bayesian filtering for lifelong target search with noisy sensors, or autonomous exploration to find targets in unknown environments. Experiments in both simulation and on a real robot show that our approach can naturally balance between exploitation and exploration, thereby finding targets more quickly on average than baseline methods.
+ oai:arXiv.org:2601.12701v1
+ cs.RO
+ cs.CG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yunpeng Lyu, Chao Cao, Ji Zhang, Howie Choset, Zhongqiang Ren
+
+
+ Towards Spectroscopy: Susceptibility Clusters in Language Models
+ https://arxiv.org/abs/2601.12703
+ arXiv:2601.12703v1 Announce Type: new
+Abstract: Spectroscopy infers the internal structure of physical systems by measuring their response to perturbations. We apply this principle to neural networks: perturbing the data distribution by upweighting a token $y$ in context $x$, we measure the model's response via susceptibilities $\chi_{xy}$, which are covariances between component-level observables and the perturbation computed over a localized Gibbs posterior via stochastic gradient Langevin dynamics (SGLD). Theoretically, we show that susceptibilities decompose as a sum over modes of the data distribution, explaining why tokens that follow their contexts "for similar reasons" cluster together in susceptibility space. Empirically, we apply this methodology to Pythia-14M, developing a conductance-based clustering algorithm that identifies 510 interpretable clusters ranging from grammatical patterns to code structure to mathematical notation. Comparing to sparse autoencoders, 50% of our clusters match SAE features, validating that both methods recover similar structure.
+ oai:arXiv.org:2601.12703v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Andrew Gordon, Garrett Baker, George Wang, William Snell, Stan van Wingerden, Daniel Murfet
+
+
+ Adaptively trained Physics-informed Radial Basis Function Neural Networks for Solving Multi-asset Option Pricing Problems
+ https://arxiv.org/abs/2601.12704
+ arXiv:2601.12704v1 Announce Type: new
+Abstract: The present study investigates the numerical solution of Black-Scholes partial differential equation (PDE) for option valuation with multiple underlying assets. We develop a physics-informed (PI) machine learning algorithm based on a radial basis function neural network (RBFNN) that concurrently optimizes the network architecture and predicts the target option price. The physics-informed radial basis function neural network (PIRBFNN) combines the strengths of the traditional radial basis function collocation method and the physics-informed neural network machine learning approach to effectively solve PDE problems in the financial context. By employing a PDE residual-based technique to adaptively refine the distribution of hidden neurons during the training process, the PIRBFNN facilitates accurate and efficient handling of multidimensional option pricing models featuring non-smooth payoff conditions. The validity of the proposed method is demonstrated through a set of experiments encompassing a single-asset European put option, a double-asset exchange option, and a four-asset basket call option.
+ oai:arXiv.org:2601.12704v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yan Ma, Yumeng Ren
+
+
+ How do the Global South Diasporas Mobilize for Transnational Political Change?
+ https://arxiv.org/abs/2601.12705
+ arXiv:2601.12705v1 Announce Type: new
+Abstract: This paper examines how non-resident Bangladeshis mobilized during the 2024 quota-reform protests that grew into a pro-democracy movement, leveraging social platforms and remittance flows to challenge state authority. Drawing on semi-structured interviews, we identify four phases of their collective action: technology-mediated shifts to active engagement, rapid transnational network building, strategic execution of a remittance boycott, reframing economic dependence as political leverage, and adaptive responses to government surveillance and information blackouts. We extend postcolonial computing by introducing the idea of "diasporic superposition," which shows how diasporas can exercise political and economic influence from hybrid positionalities that both contest and complicate power asymmetries. We reframe diaspora engagement by highlighting how migrants participate in and reshape homeland politics, beyond narratives of integration in host countries. We advance the scholarship on financial technologies by foregrounding their relationship with moral economies of care, state surveillance, regulatory constraints, and uneven international economic power dynamics. Together, these contributions theorize how transnational activism and digital technologies intersect to mobilize political change in Global South contexts.
+ oai:arXiv.org:2601.12705v1
+ cs.CY
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3772318.3791792
+ Dipto Das, Afrin Prio, Pritu Saha, Shion Guha, Syed Ishtiaque Ahmed
+
+
+ Trend-Adjusted Time Series Models with an Application to Gold Price Forecasting
+ https://arxiv.org/abs/2601.12706
+ arXiv:2601.12706v1 Announce Type: new
+Abstract: Time series data play a critical role in various fields, including finance, healthcare, marketing, and engineering. A wide range of techniques (from classical statistical models to neural network-based approaches such as Long Short-Term Memory (LSTM)) have been employed to address time series forecasting challenges. In this paper, we reframe time series forecasting as a two-part task: (1) predicting the trend (directional movement) of the time series at the next time step, and (2) forecasting the quantitative value at the next time step. The trend can be predicted using a binary classifier, while quantitative values can be forecasted using models such as LSTM and Bidirectional Long Short-Term Memory (Bi-LSTM). Building on this reframing, we propose the Trend-Adjusted Time Series (TATS) model, which adjusts the forecasted values based on the predicted trend provided by the binary classifier. We validate the proposed approach through both theoretical analysis and empirical evaluation. The TATS model is applied to a volatile financial time series (the daily gold price) with the objective of forecasting the next day's price. Experimental results demonstrate that TATS consistently outperforms standard LSTM and Bi-LSTM models by achieving significantly lower forecasting error. In addition, our results indicate that commonly used metrics such as MSE and MAE are insufficient for fully assessing time series model performance. Therefore, we also incorporate trend detection accuracy, which measures how effectively a model captures trends in a time series.
+ oai:arXiv.org:2601.12706v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sina Kazemdehbashi
+
+
+ Decoding Rewards in Competitive Games: Inverse Game Theory with Entropy Regularization
+ https://arxiv.org/abs/2601.12707
+ arXiv:2601.12707v1 Announce Type: new
+Abstract: Estimating the unknown reward functions driving agents' behaviors is of central interest in inverse reinforcement learning and game theory. To tackle this problem, we develop a unified framework for reward function recovery in two-player zero-sum matrix games and Markov games with entropy regularization, where we aim to reconstruct the underlying reward functions given observed players' strategies and actions. This task is challenging due to the inherent ambiguity of inverse problems, the non-uniqueness of feasible rewards, and limited observational data coverage. To address these challenges, we establish the reward function's identifiability using the quantal response equilibrium (QRE) under linear assumptions. Building upon this theoretical foundation, we propose a novel algorithm to learn reward functions from observed actions. Our algorithm works in both static and dynamic settings and is adaptable to incorporate different methods, such as Maximum Likelihood Estimation (MLE). We provide strong theoretical guarantees for the reliability and sample efficiency of our algorithm. Further, we conduct extensive numerical studies to demonstrate the practical effectiveness of the proposed framework, offering new insights into decision-making in competitive environments.
+ oai:arXiv.org:2601.12707v1
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junyi Liao, Zihan Zhu, Ethan Fang, Zhuoran Yang, Vahid Tarokh
+
+
+ Neurosymbolic LoRA: Why and When to Tune Weights vs. Rewrite Prompts
+ https://arxiv.org/abs/2601.12711
+ arXiv:2601.12711v1 Announce Type: new
+Abstract: Large language models (LLMs) can be adapted either through numerical updates that alter model parameters or symbolic manipulations that work on discrete prompts or logical constraints. While numerical fine-tuning excels at injecting new factual knowledge, symbolic updates offer flexible control of style and alignment without retraining. We introduce a neurosymbolic LoRA framework that dynamically combines these two complementary strategies. Specifically, we present a unified monitoring signal and a reward-based classifier to decide when to employ LoRA for deeper factual reconstruction and when to apply TextGrad for token-level edits. Our approach remains memory-efficient by offloading the symbolic transformations to an external LLM only when needed. Additionally, the refined prompts produced during symbolic editing serve as high-quality, reusable training data, an important benefit in data-scarce domains like mathematical reasoning. Extensive experiments across multiple LLM backbones show that neurosymbolic LoRA consistently outperforms purely numerical or purely symbolic baselines, demonstrating superior adaptability and improved performance. Our findings highlight the value of interleaving numerical and symbolic updates to unlock a new level of versatility in language model fine-tuning.
+ oai:arXiv.org:2601.12711v1
+ cs.AI
+ cs.LG
+ cs.SC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kevin Wang, Neel P. Bhatt, Cong Liu, Junbo Li, Runjin Chen, Yihan Xi, Timothy Barclay, Alvaro Velasquez, Ufuk Topcu, Zhangyang Wang
+
+
+ Dynamic Detection of Inefficient Data Mapping Patterns in Heterogeneous OpenMP Applications
+ https://arxiv.org/abs/2601.12713
+ arXiv:2601.12713v1 Announce Type: new
+Abstract: With the growing prevalence of heterogeneous computing, CPUs are increasingly being paired with accelerators to achieve new levels of performance and energy efficiency. However, data movement between devices remains a significant bottleneck, complicating application development. Existing performance tools require considerable programmer intervention to diagnose and locate data transfer inefficiencies. To address this, we propose dynamic analysis techniques to detect and profile inefficient data transfer and allocation patterns in heterogeneous applications. We implemented these techniques into OMPDataPerf, which provides detailed traces of problematic data mappings, source code attribution, and assessments of optimization potential in heterogeneous OpenMP applications. OMPDataPerf uses the OpenMP Tools Interface (OMPT) and incurs only a 5% geometric-mean runtime overhead.
+ oai:arXiv.org:2601.12713v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1145/3774934.3786454
+ Luke Marzen, Junhyung Shim, Ali Jannesari
+
+
+ P2L-CA: An Effective Parameter Tuning Framework for Rehearsal-Free Multi-Label Class-Incremental Learning
+ https://arxiv.org/abs/2601.12714
+ arXiv:2601.12714v1 Announce Type: new
+Abstract: Multi-label Class-Incremental Learning aims to continuously recognize novel categories in complex scenes where multiple objects co-occur. However, existing approaches often incur high computational costs due to full-parameter fine-tuning and substantial storage overhead from memory buffers, or they struggle to address feature confusion and domain discrepancies adequately. To overcome these limitations, we introduce P2L-CA, a parameter-efficient framework that integrates a Prompt-to-Label module with a Continuous Adapter module. The P2L module leverages class-specific prompts to disentangle multi-label representations while incorporating linguistic priors to enforce stable semantic-visual alignment. Meanwhile, the CA module employs lightweight adapters to mitigate domain gaps between pre-trained models and downstream tasks, thereby enhancing model plasticity. Extensive experiments across standard and challenging MLCIL settings on MS-COCO and PASCAL VOC show that P2L-CA not only achieves substantial improvements over state-of-the-art methods but also demonstrates strong generalization in CIL scenarios, all while requiring minimal trainable parameters and eliminating the need for memory buffers.
+ oai:arXiv.org:2601.12714v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Songlin Dong, Jiangyang Li, Chenhao Ding, Zhiheng Ma, Haoyu Luo, Yuhang He, Yihong Gong
+
+
+ RSOD: Reliability-Guided Sonar Image Object Detection with Extremely Limited Labels
+ https://arxiv.org/abs/2601.12715
+ arXiv:2601.12715v1 Announce Type: new
+Abstract: Object detection in sonar images is a key technology in underwater detection systems. Compared to natural images, sonar images contain fewer texture details and are more susceptible to noise, making it difficult for non-experts to distinguish subtle differences between classes. As a result, non-experts are unable to provide precise annotation data for sonar images. Therefore, designing effective object detection methods for sonar images with extremely limited labels is particularly important. To address this, we propose a teacher-student framework called RSOD, which aims to fully learn the characteristics of sonar images and develop a pseudo-label strategy suitable for these images to mitigate the impact of limited labels. First, RSOD calculates a reliability score by assessing the consistency of the teacher's predictions across different views. To leverage this score, we introduce an object mixed pseudo-label method to tackle the shortage of labeled data in sonar images. Finally, we optimize the performance of the student by implementing a reliability-guided adaptive constraint. By taking full advantage of unlabeled data, the student can perform well even in situations with extremely limited labels. Notably, on the UATD dataset, our method, using only 5% of labeled data, achieves results that compete with those of our baseline algorithm trained on 100% labeled data. We also collected a new dataset to provide more valuable data for research in the field of sonar.
+ oai:arXiv.org:2601.12715v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chengzhou Li, Ping Guo, Guanchen Meng, Qi Jia, Jinyuan Liu, Zhu Liu, Xiaokang Liu, Yu Liu, Zhongxuan Luo, Xin Fan
+
+
+ CellularSpecSec-Bench: A Staged Benchmark for Evidence-Grounded Interpretation and Security Reasoning over 3GPP Specifications
+ https://arxiv.org/abs/2601.12716
+ arXiv:2601.12716v1 Announce Type: new
+Abstract: Cellular networks are critical infrastructure supporting billions of worldwide users and safety- and mission-critical services. Vulnerabilities in cellular networks can therefore cause service disruption, privacy breaches, and broad societal harm, motivating growing efforts to analyze 3GPP specifications that define required device and operator behavior. While large language models (LLMs) have demonstrated the capability for reading technical documents, cellular specifications impose unique challenges: faithful interpretation of normative language, reasoning across cross-referenced clauses, and verifiable conclusions grounded in multimodal evidence such as tables and figures. To address these challenges, we propose CellSpecSec-ARI, a unified Adapt-Retrieve-Integrate framework for systematic understanding and standard-driven security analysis of 3GPP specifications, and CellularSpecSec-Bench, a staged benchmark containing newly constructed high-quality datasets with expert-verified and corrected subsets from prior open-source resources. Together, they establish an accessible and reproducible foundation for quantifying progress in specification understanding and security reasoning in the cellular network security domain.
+ oai:arXiv.org:2601.12716v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ke Xie, Xingyi Zhao, Yiwen Hu, Shuhan Yuan, Tian Xie
+
+
+ Dataset of GenAI-Assisted Information Problem Solving in Education
+ https://arxiv.org/abs/2601.12718
+ arXiv:2601.12718v1 Announce Type: new
+Abstract: Information Problem Solving (IPS) is a critical competency for academic and professional success in education, work, and life. The advent of Generative Artificial Intelligence (GenAI), particularly tools like ChatGPT, has introduced new possibilities for supporting students in complex IPS tasks. However, empirical insights into how students engage with GenAI during IPS and how these tools can be effectively leveraged for learning remain limited. Moreover, differences in background, shaped by cultural and socioeconomic factors, pose additional challenges to the equitable integration of GenAI in educational contexts. To address this gap, we present an open-source dataset collected from 279 students at a public Australian university. The dataset was generated through students' use of FLoRA, a GenAI-powered educational platform widely adopted in the field of learning analytics. Within FLoRA, students interacted with an embedded GenAI chatbot to gather information and synthesize it into data science project proposals. The dataset captures fine-grained, multi-dimensional records of GenAI-assisted IPS processes, including: (i) student-GenAI dialogue transcripts; (ii) writing process log traces; (iii) final project proposals with human-assigned assessment scores; (iv) surveys of biographic and prior knowledge in data science and AI; and (v) surveys capturing students' GenAI experience and perceptions of GenAI's effectiveness in supporting IPS. This dataset provides a valuable resource for advancing our understanding of GenAI's role in educational IPS and informing the design of adaptive, inclusive AI-powered learning tools.
+ oai:arXiv.org:2601.12718v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xinyu Li, Kaixun Yang, Jiameng Wei, Yixin Cheng, Dragan Ga\v{s}evi\'c, Guanliang Chen
+
+
+ S2DiT: Sandwich Diffusion Transformer for Mobile Streaming Video Generation
+ https://arxiv.org/abs/2601.12719
+ arXiv:2601.12719v1 Announce Type: new
+Abstract: Diffusion Transformers (DiTs) have recently improved video generation quality. However, their heavy computational cost makes real-time or on-device generation infeasible. In this work, we introduce S2DiT, a Streaming Sandwich Diffusion Transformer designed for efficient, high-fidelity, and streaming video generation on mobile hardware. S2DiT generates more tokens but maintains efficiency with novel efficient attentions: a mixture of LinConv Hybrid Attention (LCHA) and Stride Self-Attention (SSA). Based on this, we uncover the sandwich design via a budget-aware dynamic programming search, achieving superior quality and efficiency. We further propose a 2-in-1 distillation framework that transfers the capacity of large teacher models (e.g., Wan 2.2-14B) to the compact few-step sandwich model. Together, S2DiT achieves quality on par with state-of-the-art server video models, while streaming at over 10 FPS on an iPhone.
+ oai:arXiv.org:2601.12719v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Lin Zhao, Yushu Wu, Aleksei Lebedev, Dishani Lahiri, Meng Dong, Arpit Sahni, Michael Vasilkovsky, Hao Chen, Ju Hu, Aliaksandr Siarohin, Sergey Tulyakov, Yanzhi Wang, Anil Kag, Yanyu Li
+
+
+ Teaching Large Reasoning Models Effective Reflection
+ https://arxiv.org/abs/2601.12720
+ arXiv:2601.12720v1 Announce Type: new
+Abstract: Large Reasoning Models (LRMs) have recently shown impressive performance on complex reasoning tasks, often by engaging in self-reflective behaviors such as self-critique and backtracking. However, not all reflections are beneficial: many are superficial, offering little to no improvement over the original answer while incurring computational overhead. In this paper, we identify and address the problem of superficial reflection in LRMs. We first propose Self-Critique Fine-Tuning (SCFT), a training framework that enhances the model's reflective reasoning ability using only self-generated critiques. SCFT prompts models to critique their own outputs, filters high-quality critiques through rejection sampling, and fine-tunes the model using a critique-based objective. Building on this strong foundation, we further introduce Reinforcement Learning with Effective Reflection Rewards (RLERR). RLERR leverages the high-quality reflections initialized by SCFT to construct reward signals, guiding the model to internalize the self-correction process via reinforcement learning. Experiments on two challenging benchmarks, AIME2024 and AIME2025, show that SCFT and RLERR significantly improve both reasoning accuracy and reflection quality, outperforming state-of-the-art baselines. All data and codes are available at https://github.com/wanghanbinpanda/SCFT.
+ oai:arXiv.org:2601.12720v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hanbin Wang, Jingwei Song, Jinpeng Li, Qi Zhu, Fei Mi, Ganqu Cui, Yasheng Wang, Lifeng Shang
+
+
+ An Evolutionary Framework for Automatic Optimization Benchmark Generation via Large Language Models
+ https://arxiv.org/abs/2601.12723
+ arXiv:2601.12723v1 Announce Type: new
+Abstract: Optimization benchmarks play a fundamental role in assessing algorithm performance; however, existing artificial benchmarks often fail to capture the diversity and irregularity of real-world problem structures, while benchmarks derived from real-world problems are costly and difficult to construct. To address these challenges, we propose an evolutionary automatic benchmark generation framework that leverages a large language model (LLM) as a generative operator, termed the LLM-driven evolutionary benchmark generator (LLM-EBG). In this framework, the LLM serves as an evolutionary operator that generates and evolves benchmark problems within a flexible, expressive representation space. As a case study, we generate unconstrained single-objective continuous minimization problems represented as mathematical expressions designed to induce significant performance differences between a genetic algorithm (GA) and differential evolution (DE). Experimental results show that LLM-EBG successfully produces benchmark problems in which the designated target algorithm consistently outperforms the comparative algorithm in more than 80\% of trials. Furthermore, exploratory landscape analysis reveals that benchmarks favoring GA are highly sensitive to variable scaling, demonstrating that the proposed framework can generate problems with distinct geometric characteristics that reflect the intrinsic search behaviors of different optimization algorithms.
+ oai:arXiv.org:2601.12723v1
+ cs.NE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuhiro Ono, Tomohiro Harada, Yukiya Miura
+
+
+ Explicit Entropic Constructions for Coverage, Facility Location, and Graph Cuts
+ https://arxiv.org/abs/2601.12724
+ arXiv:2601.12724v1 Announce Type: new
+Abstract: Shannon entropy is a polymatroidal set function and lies at the foundation of information theory, yet the class of entropic polymatroids is strictly smaller than the class of all submodular functions. In parallel, submodular and combinatorial information measures (SIMs) have recently been proposed as a principled framework for extending entropy, mutual information, and conditional mutual information to general submodular functions, and have been used extensively in data subset selection, active learning, domain adaptation, and representation learning. This raises a natural and fundamental question: are the monotone submodular functions most commonly used in practice entropic?
+ In this paper, we answer this question in the affirmative for a broad class of widely used polymatroid functions. We provide explicit entropic constructions for set cover and coverage functions, facility location, saturated coverage, concave-over-modular functions via truncations, and monotone graph-cut-type objectives. Our results show that these functions can be realized exactly as Shannon entropies of appropriately constructed random variables. As a consequence, for these functions, submodular mutual information coincides with classical mutual information, conditional gain specializes to conditional entropy, and submodular conditional mutual information reduces to standard conditional mutual information in the entropic sense. These results establish a direct bridge between combinatorial information measures and classical information theory for many of the most common submodular objectives used in applications.
+ oai:arXiv.org:2601.12724v1
+ cs.IT
+ math.CO
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Rishabh Iyer
+
+
+ AI-exhibited Personality Traits Can Shape Human Self-concept through Conversations
+ https://arxiv.org/abs/2601.12727
+ arXiv:2601.12727v1 Announce Type: new
+Abstract: Recent Large Language Model (LLM) based AI can exhibit recognizable and measurable personality traits during conversations to improve user experience. However, as human understandings of their personality traits can be affected by their interaction partners' traits, a potential risk is that AI traits may shape and bias users' self-concept of their own traits. To explore the possibility, we conducted a randomized behavioral experiment. Our results indicate that after conversations about personal topics with an LLM-based AI chatbot using GPT-4o default personality traits, users' self-concepts aligned with the AI's measured personality traits. The longer the conversation, the greater the alignment. This alignment led to increased homogeneity in self-concepts among users. We also observed that the degree of self-concept alignment was positively associated with users' conversation enjoyment. Our findings uncover how AI personality traits can shape users' self-concepts through human-AI conversation, highlighting both risks and opportunities. We provide important design implications for developing more responsible and ethical AI systems.
+ oai:arXiv.org:2601.12727v1
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1145/3772318.3790654
+ Jingshu Li, Tianqi Song, Nattapat Boonprakong, Zicheng Zhu, Yitian Yang, Yi-Chieh Lee
+
+
+ DC-VLAQ: Query-Residual Aggregation for Robust Visual Place Recognition
+ https://arxiv.org/abs/2601.12729
+ arXiv:2601.12729v1 Announce Type: new
+Abstract: One of the central challenges in visual place recognition (VPR) is learning a robust global representation that remains discriminative under large viewpoint changes, illumination variations, and severe domain shifts. While visual foundation models (VFMs) provide strong local features, most existing methods rely on a single model, overlooking the complementary cues offered by different VFMs. However, exploiting such complementary information inevitably alters token distributions, which challenges the stability of existing query-based global aggregation schemes. To address these challenges, we propose DC-VLAQ, a representation-centric framework that integrates the fusion of complementary VFMs and robust global aggregation. Specifically, we first introduce a lightweight residual-guided complementary fusion that anchors representations in the DINOv2 feature space while injecting complementary semantics from CLIP through a learned residual correction. In addition, we propose the Vector of Local Aggregated Queries (VLAQ), a query-residual global aggregation scheme that encodes local tokens by their residual responses to learnable queries, resulting in improved stability and the preservation of fine-grained discriminative cues. Extensive experiments on standard VPR benchmarks, including Pitts30k, Tokyo24/7, MSLS, Nordland, SPED, and AmsterTime, demonstrate that DC-VLAQ consistently outperforms strong baselines and achieves state-of-the-art performance, particularly under challenging domain shifts and long-term appearance changes.
+ oai:arXiv.org:2601.12729v1
+ cs.CV
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hanyu Zhu, Zhihao Zhan, Yuhang Ming, Liang Li, Dibo Hou, Javier Civera, Wanzeng Kong
+
+
+ Distribution-Centric Policy Optimization Dominates Exploration-Exploitation Trade-off
+ https://arxiv.org/abs/2601.12730
+ arXiv:2601.12730v1 Announce Type: new
+Abstract: The exploration-exploitation (EE) trade-off is a central challenge in reinforcement learning (RL) for large language models (LLMs). With Group Relative Policy Optimization (GRPO), training tends to be exploitation driven: entropy decreases monotonically, samples converge, and exploration fades. Most existing fixes are \textbf{sample-centric}: they seek out or reward rare samples, assuming exploration comes from novel trajectories and tokens. These heuristics depend on the "luck" of informative samples, lack principled control of the policy, and often yield limited or inconsistent gains. In this work, we are the first to introduce a \textbf{distribution-centric} perspective for RL, in which exploration is always guided by a "better" target distribution, and reveal that a policy's ability to resist entropy collapse is governed by the distribution itself rather than individual samples. Building on this insight, we propose Distribution-Centric Policy Optimization (DCPO), which reformulates entropy regulation as distribution-level regularization. DCPO achieves controllable entropy fully on-policy without sampling from external distributions, enabling efficient exploration while maintaining training stability. Across multiple models and seven benchmarks, DCPO improves over GRPO by about 20\% on average. Overall, DCPO replaces sample-level heuristics with distribution-level principles, offering a theoretically grounded and flexible framework for controllable exploration and a stronger EE trade-off. The code is available at https://github.com/597358816/DCPO.
+ oai:arXiv.org:2601.12730v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zhaochun Li, Chen Wang, Jionghao Bai, Shisheng Cui, Ge Lan, Zhou Zhao, Yue Wang
+
+
+ A Shared Geometry of Difficulty in Multilingual Language Models
+ https://arxiv.org/abs/2601.12731
+ arXiv:2601.12731v1 Announce Type: new
+Abstract: Predicting problem-difficulty in large language models (LLMs) refers to estimating how difficult a task is according to the model itself, typically by training linear probes on its internal representations. In this work, we study the multilingual geometry of problem-difficulty in LLMs by training linear probes using the AMC subset of the Easy2Hard benchmark, translated into 21 languages. We found that difficulty-related signals emerge at two distinct stages of the model internals, corresponding to shallow (early-layers) and deep (later-layers) internal representations, that exhibit functionally different behaviors. Probes trained on deep representations achieve high accuracy when evaluated on the same language but exhibit poor cross-lingual generalization. In contrast, probes trained on shallow representations generalize substantially better across languages, despite achieving lower within-language performance. Together, these results suggest that LLMs first form a language-agnostic representation of problem difficulty, which subsequently becomes language-specific. This closely aligns with existing findings in LLM interpretability showing that models tend to operate in an abstract conceptual space before producing language-specific outputs. We demonstrate that this two-stage representational process extends beyond semantic content to high-level meta-cognitive properties such as problem-difficulty estimation.
+ oai:arXiv.org:2601.12731v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Stefano Civelli, Pietro Bernardelle, Nicol\`o Brunello, Gianluca Demartini
+
+
+ Optimal Error Estimates of a Linearized Backward Euler Localized Orthogonal Decomposition for the Landau-Lifshitz Equation
+ https://arxiv.org/abs/2601.12734
+ arXiv:2601.12734v1 Announce Type: new
+Abstract: We introduce a novel spatial discretization technique for the reliable and efficient simulation of magnetization dynamics governed by the Landau-Lifshitz (LL) equation. The overall discretization error is systematically decomposed into temporal and spatial components. The spatial error analysis is conducted by formulating the LL equation within the framework of the Localized Orthogonal Decomposition (LOD) method. Numerical examples are presented to validate the accuracy and approximation properties of the proposed scheme.
+ oai:arXiv.org:2601.12734v1
+ math.NA
+ cs.NA
+ physics.comp-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zetao Ma, Rui Du, Lei Zhang
+
+
+ OpenAI for OpenAPI: Automated generation of REST API specification via LLMs
+ https://arxiv.org/abs/2601.12735
+ arXiv:2601.12735v1 Announce Type: new
+Abstract: REST APIs, based on the REpresentational State Transfer (REST) architecture, are the primary type of Web API. The OpenAPI Specification (OAS) serves as the de facto standard for describing REST APIs and is crucial for multiple software engineering tasks. However, developers face challenges in writing and maintaining OAS. Although static analysis shows potential for OAS generation, it is limited to specific programming languages and development frameworks. The powerful code understanding capabilities of LLMs offer new opportunities for OAS generation, yet they are constrained by context limitations and hallucinations. To address these challenges, we propose the OpenAI OpenAPI Project Scanner (OOPS), the first technology-agnostic LLM-based static analysis method for OAS generation, requiring fewer technology-specific rules and less human expert intervention. OOPS is implemented as an LLM agent workflow comprising two key steps: endpoint method extraction and OAS generation. By constructing an API dependency graph, it establishes necessary file associations to address LLMs' context limitations. Through multi-stage generation and self-refine, it mitigates both syntactic and semantic hallucinations during OAS generation. We evaluated OOPS on 12 real-world REST APIs spanning 5 programming languages and 8 development frameworks. Experimental results demonstrate that OOPS accurately generates high-quality OAS for REST APIs implemented with diverse technologies, achieving an average F1-score exceeding 98% for endpoint method inference, 97% for both request parameter and response inference, and 92% for parameter constraint inference. The input tokens average below 5.6K with a maximum of 16.2K, while the output tokens average below 0.9K with a maximum of 7.7K.
+ oai:arXiv.org:2601.12735v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hao Chen, Yunchun Li, Chen Chen, Fengxu Lin, Wei Li
+
+
+ KaoLRM: Repurposing Pre-trained Large Reconstruction Models for Parametric 3D Face Reconstruction
+ https://arxiv.org/abs/2601.12736
+ arXiv:2601.12736v1 Announce Type: new
+Abstract: We propose KaoLRM to re-target the learned prior of the Large Reconstruction Model (LRM) for parametric 3D face reconstruction from single-view images. Parametric 3D Morphable Models (3DMMs) have been widely used for facial reconstruction due to their compact and interpretable parameterization, yet existing 3DMM regressors often exhibit poor consistency across varying viewpoints. To address this, we harness the pre-trained 3D prior of LRM and incorporate FLAME-based 2D Gaussian Splatting into LRM's rendering pipeline. Specifically, KaoLRM projects LRM's pre-trained triplane features into the FLAME parameter space to recover geometry, and models appearance via 2D Gaussian primitives that are tightly coupled to the FLAME mesh. The rich prior enables the FLAME regressor to be aware of the 3D structure, leading to accurate and robust reconstructions under self-occlusions and diverse viewpoints. Experiments on both controlled and in-the-wild benchmarks demonstrate that KaoLRM achieves superior reconstruction accuracy and cross-view consistency, while existing methods remain sensitive to viewpoint variations. The code is released at https://github.com/CyberAgentAILab/KaoLRM.
+ oai:arXiv.org:2601.12736v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Qingtian Zhu, Xu Cao, Zhixiang Wang, Yinqiang Zheng, Takafumi Taketomi
+
+
+ TreeWriter: AI-Assisted Hierarchical Planning and Writing for Long-Form Documents
+ https://arxiv.org/abs/2601.12740
+ arXiv:2601.12740v1 Announce Type: new
+Abstract: Long documents pose many challenges to current intelligent writing systems. These include maintaining consistency across sections, sustaining efficient planning and writing as documents become more complex, and effectively providing and integrating AI assistance to the user. Existing AI co-writing tools offer either inline suggestions or limited structured planning, but rarely support the entire writing process that begins with high-level ideas and ends with polished prose, in which many layers of planning and outlining are needed. Here, we introduce TreeWriter, a hierarchical writing system that represents documents as trees and integrates contextual AI support. TreeWriter allows authors to create, save, and refine document outlines at multiple levels, facilitating drafting, understanding, and iterative editing of long documents. A built-in AI agent can dynamically load relevant content, navigate the document hierarchy, and provide context-aware editing suggestions. A within-subject study (N=12) comparing TreeWriter with Google Docs + Gemini on long-document editing and creative writing tasks shows that TreeWriter improves idea exploration/development, AI helpfulness, and perceived authorial control. A two-month field deployment (N=8) further demonstrated that hierarchical organization supports collaborative writing. Our findings highlight the potential of hierarchical, tree-structured editors with integrated AI support and provide design guidelines for future AI-assisted writing tools that balance automation with user agency.
+ oai:arXiv.org:2601.12740v1
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Zijian Zhang, Fangshi Du, Xingjian Liu, Pan Chen, Oliver Huang, Runlong Ye, Michael Liut, Al\'an Aspuru-Guzik
+
+
+ An Introduction to Razborov's Flag Algebra as a Proof System for Extremal Graph Theory
+ https://arxiv.org/abs/2601.12741
+ arXiv:2601.12741v1 Announce Type: new
+Abstract: Razborov's flag algebra forms a powerful framework for deriving asymptotic inequalities between induced subgraph densities, underpinning many advances in extremal graph theory. This survey introduces flag algebra to computer scientists working in logic, programming languages, automated verification, and formal methods. We take a logical perspective on flag algebra and present it in terms of syntax, semantics, and proof strategies, in a style closer to formal logic. One popular proof strategy derives valid inequalities by first proving inequalities in a labelled variant of flag algebra and then transferring them to the original unlabelled setting using the so-called downward operator. We explain this strategy in detail and highlight that its transfer mechanism relies on the notion of what we call an adjoint pair, reminiscent of Galois connections and categorical adjunctions, which appear frequently in work on automated verification and programming languages. Along the way, we work through representative examples, including Mantel's theorem and Goodman's bound on Ramsey multiplicity, to illustrate how mathematical arguments can be carried out symbolically in the flag algebra framework.
+ oai:arXiv.org:2601.12741v1
+ cs.PL
+ cs.LO
+ math.CO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Gyeongwon Jeong, Seonghun Park, Hongseok Yang
+
+
+ AirHunt: Bridging VLM Semantics and Continuous Planning for Efficient Aerial Object Navigation
+ https://arxiv.org/abs/2601.12742
+ arXiv:2601.12742v1 Announce Type: new
+Abstract: Recent advances in large Vision-Language Models (VLMs) have provided rich semantic understanding that empowers drones to search for open-set objects via natural language instructions. However, prior systems struggle to integrate VLMs into practical aerial systems due to orders-of-magnitude frequency mismatch between VLM inference and real-time planning, as well as VLMs' limited 3D scene understanding. They also lack a unified mechanism to balance semantic guidance with motion efficiency in large-scale environments. To address these challenges, we present AirHunt, an aerial object navigation system that efficiently locates open-set objects with zero-shot generalization in outdoor environments by seamlessly fusing VLM semantic reasoning with continuous path planning. AirHunt features a dual-pathway asynchronous architecture that establishes a synergistic interface between VLM reasoning and path planning, enabling continuous flight with adaptive semantic guidance that evolves through motion. Moreover, we propose an active dual-task reasoning module that exploits geometric and semantic redundancy to enable selective VLM querying, and a semantic-geometric coherent planning module that dynamically reconciles semantic priorities and motion efficiency in a unified framework, enabling seamless adaptation to environmental heterogeneity. We evaluate AirHunt across diverse object navigation tasks and environments, demonstrating a higher success rate with lower navigation error and reduced flight time compared to state-of-the-art methods. Real-world experiments further validate AirHunt's practical capability in complex and challenging environments. Code and dataset will be made publicly available before publication.
+ oai:arXiv.org:2601.12742v1
+ cs.RO
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xuecheng Chen, Zongzhuo Liu, Jianfa Ma, Bang Du, Tiantian Zhang, Xueqian Wang, Boyu Zhou
+
+
+ Vision Language Models for Optimization-Driven Intent Processing in Autonomous Networks
+ https://arxiv.org/abs/2601.12744
+ arXiv:2601.12744v1 Announce Type: new
+Abstract: Intent-Based Networking (IBN) allows operators to specify high-level network goals rather than low-level configurations. While recent work demonstrates that large language models can automate configuration tasks, a distinct class of intents requires generating optimization code to compute provably optimal solutions for traffic engineering, routing, and resource allocation. Current systems assume text-based intent expression, requiring operators to enumerate topologies and parameters in prose. Network practitioners naturally reason about structure through diagrams, yet whether Vision-Language Models (VLMs) can process annotated network sketches into correct optimization code remains unexplored. We present IntentOpt, a benchmark of 85 optimization problems across 17 categories, evaluating four VLMs (GPT-5-Mini, Claude-Haiku-4.5, Gemini-2.5-Flash, Llama-3.2-11B-Vision) under three prompting strategies on multimodal versus text-only inputs. Our evaluation shows that visual parameter extraction reduces execution success by 12-21 percentage points (pp), with GPT-5-Mini dropping from 93% to 72%. Program-of-thought prompting decreases performance by up to 13 pp, and open-source models lag behind closed-source ones, with Llama-3.2-11B-Vision reaching 18% compared to 75% for GPT-5-Mini. These results establish baseline capabilities and limitations of current VLMs for optimization code generation within an IBN system. We also demonstrate practical feasibility through a case study that deploys VLM-generated code to network testbed infrastructure using Model Context Protocol.
+ oai:arXiv.org:2601.12744v1
+ cs.AI
+ cs.NI
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Tasnim Ahmed, Yifan Zhu, Salimur Choudhury
+
+
+ A Graph Prompt Fine-Tuning Method for WSN Spatio-Temporal Correlation Anomaly Detection
+ https://arxiv.org/abs/2601.12745
+ arXiv:2601.12745v1 Announce Type: new
+Abstract: Anomaly detection over multi-temporal-modal data in Wireless Sensor Networks (WSNs) provides an important guarantee for reliable network operation. Existing anomaly detection methods in such scenarios suffer from insufficient extraction of spatio-temporal correlation features, the high cost of annotating anomaly sample categories, and the imbalance of anomaly samples. In this paper, a graph neural network anomaly detection backbone incorporating spatio-temporal correlation features and a multi-task self-supervised training strategy of "pre-training - graph prompting - fine-tuning" are designed for the characteristics of WSN graph-structured data. First, the backbone network is built by improving the Mamba model with a multi-scale strategy and an inter-modal fusion method, and combining it with a variational graph convolution module, enabling it to fully extract spatio-temporal correlation features in the multi-node, multi-temporal-modal scenarios of WSNs. Second, we design a three-subtask "pre-training" method comprising negative-free contrastive learning, prediction, and reconstruction to learn generic features of WSN data from unlabeled samples. The model is then fine-tuned through a "graph prompting - fine-tuning" mechanism that guides the pre-trained self-supervised model through parameter fine-tuning, thereby reducing training cost and enhancing detection generalization. Experiments on a public dataset and an actually collected dataset yield F1 scores of up to 91.30% and 92.31%, respectively, demonstrating better detection performance and generalization ability than existing methods.
+ oai:arXiv.org:2601.12745v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Miao Ye, Jing Cui, Yuan huang, Qian He, Yong Wang, Jiwen Zhang
+
+
+ SSPFormer: Self-Supervised Pretrained Transformer for MRI Images
+ https://arxiv.org/abs/2601.12747
+ arXiv:2601.12747v1 Announce Type: new
+Abstract: The pre-trained transformer demonstrates remarkable generalization ability in natural image processing. However, directly transferring it to magnetic resonance images faces two key challenges: the inability to adapt to the specificity of medical anatomical structures and the limitations brought about by the privacy and scarcity of medical data. To address these issues, this paper proposes a Self-Supervised Pretrained Transformer (SSPFormer) for MRI images, which effectively learns domain-specific feature representations of medical images by leveraging unlabeled raw imaging data. To tackle the domain gap and data scarcity, we introduce inverse frequency projection masking, which prioritizes the reconstruction of high-frequency anatomical regions to enforce structure-aware representation learning. Simultaneously, to enhance robustness against real-world MRI artifacts, we employ frequency-weighted FFT noise enhancement that injects physiologically realistic noise into the Fourier domain. Together, these strategies enable the model to learn domain-invariant and artifact-robust features directly from raw scans. Through extensive experiments on segmentation, super-resolution, and denoising tasks, the proposed SSPFormer achieves state-of-the-art performance, fully verifying its ability to capture fine-grained MRI image fidelity and adapt to clinical application requirements.
+ oai:arXiv.org:2601.12747v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jingkai Li, Xiaoze Tian, Yuhang Shen, Jia Wang, Dianjie Lu, Guijuan Zhang, Zhuoran Zheng
+
+
+ Towards Robust Process Reward Modeling via Noise-aware Learning
+ https://arxiv.org/abs/2601.12748
+ arXiv:2601.12748v1 Announce Type: new
+Abstract: Process Reward Models (PRMs) have achieved strong results in complex reasoning, but are bottlenecked by costly process-level supervision. A widely used alternative, Monte Carlo Estimation (MCE), defines process rewards as the probability that a policy model reaches the correct final answer from a given reasoning step. However, step correctness is an intrinsic property of the reasoning trajectory, and should be invariant to policy choice. Our empirical findings show that MCE produces policy-dependent rewards that induce label noise, including false positives that reward incorrect steps and false negatives that penalize correct ones. To address the above challenges, we propose a two-stage framework to mitigate noisy supervision. In the labeling stage, we introduce a reflection-aware label correction mechanism that uses a large language model (LLM) as a judge to detect reflection and self-correction behaviors related to the current reasoning step, thereby suppressing overestimated rewards. In the training stage, we further propose a \underline{\textbf{N}}oise-\underline{\textbf{A}}ware \underline{\textbf{I}}terative \underline{\textbf{T}}raining framework that enables the PRM to progressively refine noisy labels based on its own confidence. Extensive experiments show that our method substantially improves step-level correctness discrimination, achieving up to a 27\% absolute gain in average F1 over PRMs trained with noisy supervision.
+ oai:arXiv.org:2601.12748v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bin Xie, Bingbing Xu, Xueyun Tian, Yilin Chen, Huawei Shen
+
+
+ Efficient Local-to-Global Collaborative Perception via Joint Communication and Computation Optimization
+ https://arxiv.org/abs/2601.12749
+ arXiv:2601.12749v1 Announce Type: new
+Abstract: Autonomous driving relies on accurate perception to ensure safe driving. Collaborative perception improves accuracy by mitigating the sensing limitations of individual vehicles, such as limited perception range and occlusion-induced blind spots. However, collaborative perception often suffers from high communication overhead due to redundant data transmission, as well as increasing computation latency caused by excessive load as more connected and autonomous vehicles (CAVs) participate. To address these challenges, we propose a novel local-to-global collaborative perception framework (LGCP) to achieve collaboration in a communication- and computation-efficient manner. The road of interest is partitioned into non-overlapping areas, each of which is assigned a dedicated CAV group to perform localized perception. A designated leader in each group collects and fuses perception data from its members, and uploads the perception result to the roadside unit (RSU), establishing a link between local perception and global awareness. The RSU aggregates perception results from all groups and broadcasts a global view to all CAVs. LGCP employs a centralized scheduling strategy via the RSU, which assigns CAV groups to each area, schedules their transmissions, aggregates area-level local perception results, and propagates the global view to all CAVs. Experimental results demonstrate that the proposed LGCP framework achieves an average 44-fold reduction in the amount of transmitted data, while maintaining or even improving the overall collaborative performance.
+ oai:arXiv.org:2601.12749v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hui Zhang, Yuquan Yang, Zechuan Gong, Xiaohua Xu, Dan Keun Sung
+
+
+ Approximation Schemes for Sequential Hiring Problems
+ https://arxiv.org/abs/2601.12750
+ arXiv:2601.12750v1 Announce Type: new
+Abstract: The main contribution of this paper resides in providing novel algorithmic advances and analytical insights for the sequential hiring problem, a recently introduced dynamic optimization model where a firm adaptively fills a limited number of positions from a pool of applicants with known values and acceptance probabilities. While earlier research established a strong foundation -- notably an LP-based $(1 - \frac{e^{-k}k^k}{k!})$-approximation by Epstein and Ma (Operations Research, 2024) -- the attainability of superior approximation guarantees has remained a central open question.
+ Our work addresses this challenge by establishing the first polynomial-time approximation scheme for sequential hiring, proposing an $O(n^{O(1)} \cdot T^{2^{\tilde{O}(1/\epsilon^{2})}})$-time construction of semi-adaptive policies whose expected reward is within factor $1 - \epsilon$ of optimal. To overcome the constant-factor optimality loss inherent to earlier literature, and to circumvent intrinsic representational barriers of adaptive policies, our approach is driven by the following innovations:
+ -- The block-responsive paradigm: We introduce block-responsive policies, a new class of decision-making strategies, selecting ordered sets (blocks) of applicants rather than single individuals, while still allowing for internal reactivity.
+ -- Adaptivity and efficiency: We prove that these policies can nearly match the performance of general adaptive policies while utilizing polynomially-sized decision trees.
+ -- Efficient construction: By developing a recursive enumeration-based framework, we resolve the problematic ``few-positions'' regime, bypassing a fundamental hurdle that hindered previous approaches.
+ oai:arXiv.org:2601.12750v1
+ cs.DS
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Danny Segev, Uri Stein
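As a quick numerical check of the baseline guarantee quoted in the abstract, the LP-based factor $1 - \frac{e^{-k}k^k}{k!}$ of Epstein and Ma can be evaluated directly. The short sketch below (plain Python; the function name is our own) shows how the guarantee behaves as the number of positions $k$ grows:

```python
import math

def epstein_ma_factor(k: int) -> float:
    # LP-based approximation guarantee 1 - e^{-k} k^k / k! for k open positions.
    return 1.0 - math.exp(-k) * k ** k / math.factorial(k)

# The factor approaches 1 as k grows, but never reaches it:
# k = 1 gives 1 - 1/e (about 0.632); k = 10 already exceeds 0.87.
print([round(epstein_ma_factor(k), 4) for k in (1, 2, 5, 10)])
```

A PTAS, by contrast, pushes the factor to $1 - \epsilon$ for any fixed $\epsilon > 0$, which is exactly the gap the paper closes.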
+
+
+ A Boolean Function-Theoretic Framework for Expressivity in GNNs with Applications to Fair Graph Mining
+ https://arxiv.org/abs/2601.12751
+ arXiv:2601.12751v1 Announce Type: new
+Abstract: We propose a novel expressivity framework for Graph Neural Networks (GNNs) grounded in Boolean function theory, enabling a fine-grained analysis of their ability to capture complex subpopulation structures. We introduce the notion of \textit{Subpopulation Boolean Isomorphism} (SBI) as an invariant that strictly subsumes existing expressivity measures such as Weisfeiler-Lehman (WL), biconnectivity-based, and homomorphism-based frameworks. Our theoretical results identify Fourier degree, circuit class (AC$^0$, NC$^1$), and influence as key barriers to expressivity in fairness-aware GNNs. We design a circuit-traversal-based fairness algorithm capable of handling subpopulations defined by high-complexity Boolean functions, such as parity, which break existing baselines. Experiments on real-world graphs show that our method achieves low fairness gaps across intersectional groups where state-of-the-art methods fail, providing the first principled treatment of GNN expressivity tailored to fairness.
+ oai:arXiv.org:2601.12751v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Manjish Pal
+
+
+ SoundPlot: An Open-Source Framework for Birdsong Acoustic Analysis and Neural Synthesis with Interactive 3D Visualization
+ https://arxiv.org/abs/2601.12752
+ arXiv:2601.12752v1 Announce Type: new
+Abstract: We present SoundPlot, an open-source framework for analyzing avian vocalizations through acoustic feature extraction, dimensionality reduction, and neural audio synthesis. The system transforms audio signals into a multi-dimensional acoustic feature space, enabling real-time visualization of temporal dynamics in 3D using web-based interactive graphics. Our framework implements a complete analysis-synthesis pipeline that extracts spectral features (centroid, bandwidth, contrast), pitch contours via probabilistic YIN (pYIN), and mel-frequency cepstral coefficients (MFCCs), mapping them to a unified timbre space for visualization. Audio reconstruction employs the Griffin-Lim phase estimation algorithm applied to mel spectrograms. The accompanying Three.js-based interface provides dual-viewport visualization comparing original and synthesized audio trajectories with independent playback controls. We demonstrate the framework's capabilities through comprehensive waveform analysis, spectrogram comparisons, and feature space evaluation using Principal Component Analysis (PCA). Quantitative evaluation shows mel spectrogram correlation scores exceeding 0.92, indicating high-fidelity preservation of perceptual acoustic structure. SoundPlot is released under the MIT License to facilitate research in bioacoustics, audio signal processing, and computational ethology.
+ oai:arXiv.org:2601.12752v1
+ cs.SD
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Naqcho Ali Mehdi, Mohammad Adeel, Aizaz Ali Larik
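The spectral features named in the abstract are standard signal-processing quantities. As an illustration (not the authors' code), a minimal NumPy version of the spectral-centroid extractor for one analysis frame might look like this:

```python
import numpy as np

def spectral_centroid(frame: np.ndarray, sr: int) -> float:
    # Magnitude-weighted mean frequency of one analysis frame.
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float((freqs * mag).sum() / (mag.sum() + 1e-12))

# A pure 1 kHz tone has its spectral centroid at (approximately) 1 kHz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
print(spectral_centroid(tone, sr))
```

The framework's remaining features (bandwidth, contrast, pYIN pitch, MFCCs) and the Griffin-Lim reconstruction are library-level operations layered on the same frame-wise spectral analysis.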
+
+
+ PAIR-SAFE: A Paired-Agent Approach for Runtime Auditing and Refining AI-Mediated Mental Health Support
+ https://arxiv.org/abs/2601.12754
+ arXiv:2601.12754v1 Announce Type: new
+Abstract: Large language models (LLMs) are increasingly used for mental health support, yet they can produce responses that are overly directive, inconsistent, or clinically misaligned, particularly in sensitive or high-risk contexts. Existing approaches to mitigating these risks largely rely on implicit alignment through training or prompting, offering limited transparency and runtime accountability. We introduce PAIR-SAFE, a paired-agent framework for auditing and refining AI-generated mental health support that integrates a Responder agent with a supervisory Judge agent grounded in the clinically validated Motivational Interviewing Treatment Integrity (MITI-4) framework. The Judge audits each response and provides structured ALLOW or REVISE decisions that guide runtime response refinement. We simulate counseling interactions using a support-seeker simulator derived from human-annotated motivational interviewing data. We find that Judge-supervised interactions show significant improvements in key MITI dimensions, including Partnership, Seek Collaboration, and overall Relational quality. Our quantitative findings are supported by qualitative expert evaluation, which further highlights the nuances of runtime supervision. Together, our results reveal that such a paired-agent approach can provide clinically grounded auditing and refinement for AI-assisted conversational mental health support.
+ oai:arXiv.org:2601.12754v1
+ cs.HC
+ cs.AI
+ cs.CL
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiwon Kim, Violeta J. Rodriguez, Dong Whi Yoo, Eshwar Chandrasekharan, Koustuv Saha
+
+
+ VISPA: Pluralistic Alignment via Automatic Value Selection and Activation
+ https://arxiv.org/abs/2601.12758
+ arXiv:2601.12758v1 Announce Type: new
+Abstract: As large language models are increasingly used in high-stakes domains, it is essential that their outputs reflect not merely the average human preference but rather a range of varying perspectives. Achieving such pluralism, however, remains challenging. Existing approaches consider limited values or rely on prompt-level interventions, lacking value control and representation. To address this, we introduce VISPA, a training-free pluralistic alignment framework that enables direct control over value expression through dynamic value selection and internal model activation steering. Across extensive empirical studies spanning multiple models and evaluation settings, we show VISPA is performant across all pluralistic alignment modes in healthcare and beyond. Further analysis reveals VISPA is adaptable to different steering initiations, models, and/or values. These results suggest that pluralistic alignment can be achieved through internal activation mechanisms, offering a scalable path toward language models that serve all.
+ oai:arXiv.org:2601.12758v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shenyan Zheng, Jiayou Zhong, Anudeex Shetty, Heng Ji, Preslav Nakov, Usman Naseem
+
+
+ Moaw: Unleashing Motion Awareness for Video Diffusion Models
+ https://arxiv.org/abs/2601.12761
+ arXiv:2601.12761v1 Announce Type: new
+Abstract: Video diffusion models, trained on large-scale datasets, naturally capture correspondences of shared features across frames. Recent works have exploited this property for tasks such as optical flow prediction and tracking in a zero-shot setting. Motivated by these findings, we investigate whether supervised training can more fully harness the tracking capability of video diffusion models. To this end, we propose Moaw, a framework that unleashes motion awareness for video diffusion models and leverages it to facilitate motion transfer. Specifically, we train a diffusion model for motion perception, shifting its modality from image-to-video generation to video-to-dense-tracking. We then construct a motion-labeled dataset to identify features that encode the strongest motion information, and inject them into a structurally identical video generation model. Owing to the homogeneity between the two networks, these features can be naturally adapted in a zero-shot manner, enabling motion transfer without additional adapters. Our work provides a new paradigm for bridging generative modeling and motion understanding, paving the way for more unified and controllable video learning frameworks.
+ oai:arXiv.org:2601.12761v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tianqi Zhang, Ziyi Wang, Wenzhao Zheng, Weiliang Chen, Yuanhui Huang, Zhengyang Huang, Jie Zhou, Jiwen Lu
+
+
+ Teaching LLMs to Learn Tool Trialing and Execution through Environment Interaction
+ https://arxiv.org/abs/2601.12762
+ arXiv:2601.12762v1 Announce Type: new
+Abstract: Equipping Large Language Models (LLMs) with external tools enables them to solve complex real-world problems. However, the robustness of existing methods remains a critical challenge when confronting novel or evolving tools. Existing trajectory-centric paradigms primarily rely on memorizing static solution paths during training, which limits the ability of LLMs to generalize tool usage to newly introduced or previously unseen tools. In this paper, we propose ToolMaster, a framework that shifts tool use from imitating golden tool-calling trajectories to actively learning tool usage through interaction with the environment. To optimize LLMs for tool planning and invocation, ToolMaster adopts a trial-and-execution paradigm, which trains LLMs to first imitate teacher-generated trajectories containing explicit tool trials and self-correction, followed by reinforcement learning to coordinate the trial and execution phases jointly. This process enables agents to autonomously explore correct tool usage by actively interacting with environments and forming experiential knowledge that benefits tool execution. Experimental results demonstrate that ToolMaster significantly outperforms existing baselines in terms of generalization and robustness across unseen or unfamiliar tools. All code and data are available at https://github.com/NEUIR/ToolMaster.
+ oai:arXiv.org:2601.12762v1
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xingjie Gao, Pengcheng Huang, Zhenghao Liu, Yukun Yan, Shuo Wang, Zulong Chen, Chen Qian, Ge Yu, Yu Gu
+
+
+ Towards Unbiased Source-Free Object Detection via Vision Foundation Models
+ https://arxiv.org/abs/2601.12765
+ arXiv:2601.12765v1 Announce Type: new
+Abstract: Source-Free Object Detection (SFOD) has garnered much attention in recent years by eliminating the need for source-domain data in cross-domain tasks, but existing SFOD methods suffer from the Source Bias problem, i.e., the adapted model remains skewed towards the source domain, leading to poor generalization and error accumulation during self-training. To overcome this challenge, we propose Debiased Source-free Object Detection (DSOD), a novel SFOD framework that can effectively mitigate source bias with the help of powerful vision foundation models (VFMs). Specifically, we propose a Unified Feature Injection (UFI) module that integrates VFM features into the CNN backbone through Simple-Scale Extension (SSE) and Domain-aware Adaptive Weighting (DAAW). Then, we propose Semantic-aware Feature Regularization (SAFR) that constrains feature learning to prevent overfitting to source domain characteristics. Furthermore, we propose a VFM-free variant, termed DSOD-distill, for computation-restricted scenarios through a novel Dual-Teacher distillation scheme. Extensive experiments on multiple benchmarks demonstrate that DSOD outperforms state-of-the-art SFOD methods, achieving 48.1% AP on Normal-to-Foggy weather adaptation, 39.3% AP on Cross-scene adaptation, and 61.4% AP on Synthetic-to-Real adaptation.
+ oai:arXiv.org:2601.12765v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zhi Cai, Yingjie Gao, Yanan Zhang, Xinzhu Ma, Di Huang
+
+
+ Spatial-VLN: Zero-Shot Vision-and-Language Navigation With Explicit Spatial Perception and Exploration
+ https://arxiv.org/abs/2601.12766
+ arXiv:2601.12766v1 Announce Type: new
+Abstract: Zero-shot Vision-and-Language Navigation (VLN) agents leveraging Large Language Models (LLMs) excel in generalization but suffer from insufficient spatial perception. Focusing on complex continuous environments, we categorize key perceptual bottlenecks into three spatial challenges: door interaction, multi-room navigation, and ambiguous instruction execution, where existing methods consistently suffer high failure rates. We present Spatial-VLN, a perception-guided exploration framework designed to overcome these challenges. The framework consists of two main modules. The Spatial Perception Enhancement (SPE) module integrates panoramic filtering with specialized door and region experts to produce spatially coherent, cross-view consistent perceptual representations. Building on this foundation, our Explored Multi-expert Reasoning (EMR) module uses parallel LLM experts to address waypoint-level semantics and region-level spatial transitions. When discrepancies arise between expert predictions, a query-and-explore mechanism is activated, prompting the agent to actively probe critical areas and resolve perceptual ambiguities. Experiments on VLN-CE demonstrate that Spatial-VLN achieves state-of-the-art performance using only low-cost LLMs. Furthermore, to validate real-world applicability, we introduce a value-based waypoint sampling strategy that effectively bridges the Sim2Real gap. Extensive real-world evaluations confirm that our framework delivers superior generalization and robustness in complex environments. Our codes and videos are available at https://yueluhhxx.github.io/Spatial-VLN-web/.
+ oai:arXiv.org:2601.12766v1
+ cs.CV
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lu Yue, Yue Fan, Shiwei Lian, Yu Zhao, Jiaxin Yu, Liang Xie, Feitian Zhang
+
+
+ Delving Deeper: Hierarchical Visual Perception for Robust Video-Text Retrieval
+ https://arxiv.org/abs/2601.12768
+ arXiv:2601.12768v1 Announce Type: new
+Abstract: Video-text retrieval (VTR) aims to locate relevant videos using natural language queries. Current methods, often based on pre-trained models like CLIP, are hindered by video's inherent redundancy and their reliance on coarse, final-layer features, limiting matching accuracy. To address this, we introduce the HVP-Net (Hierarchical Visual Perception Network), a framework that mines richer video semantics by extracting and refining features from multiple intermediate layers of a vision encoder. Our approach progressively distills salient visual concepts from raw patch-tokens at different semantic levels, mitigating redundancy while preserving crucial details for alignment. This results in a more robust video representation, leading to new state-of-the-art performance on challenging benchmarks including MSRVTT, DiDeMo, and ActivityNet. Our work validates the effectiveness of exploiting hierarchical features for advancing video-text retrieval. Our codes are available at https://github.com/boyun-zhang/HVP-Net.
+ oai:arXiv.org:2601.12768v1
+ cs.CV
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zequn Xie, Boyun Zhang, Yuxiao Lin, Tao Jin
+
+
+ Generalizable and Animatable 3D Full-Head Gaussian Avatar from a Single Image
+ https://arxiv.org/abs/2601.12770
+ arXiv:2601.12770v1 Announce Type: new
+Abstract: Building 3D animatable head avatars from a single image is an important yet challenging problem. Existing methods generally collapse under large camera pose variations, compromising the realism of 3D avatars. In this work, we propose a new framework to tackle the novel setting of one-shot 3D full-head animatable avatar reconstruction in a single feed-forward pass, enabling real-time animation and simultaneous 360$^\circ$ rendering views. To facilitate efficient animation control, we model 3D head avatars with Gaussian primitives embedded on the surface of a parametric face model within the UV space. To obtain knowledge of full-head geometry and textures, we leverage rich 3D full-head priors within a pretrained 3D generative adversarial network (GAN) for global full-head feature extraction and multi-view supervision. To increase the fidelity of the 3D reconstruction of the input image, we take advantage of the symmetric nature of the UV space and human faces to fuse local fine-grained input image features with the global full-head textures. Extensive experiments demonstrate the effectiveness of our method, achieving high-quality 3D full-head modeling as well as real-time animation, thereby improving the realism of 3D talking avatars.
+ oai:arXiv.org:2601.12770v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shuling Zhao, Dan Xu
+
+
+ Who Does This Name Remind You of? Nationality Prediction via Large Language Model Associative Memory
+ https://arxiv.org/abs/2601.12771
+ arXiv:2601.12771v1 Announce Type: new
+Abstract: Large language models (LLMs) possess extensive world knowledge, yet methods for effectively eliciting this knowledge remain underexplored. Nationality and region prediction tasks require understanding of not only linguistic features but also cultural and historical background, making LLM world knowledge particularly valuable. However, conventional LLM prompting methods rely on direct reasoning approaches, which have limitations in applying abstract linguistic rules. We propose LLM Associative Memory Agents (LAMA), a novel framework that leverages LLM world knowledge as associative memory. Rather than directly inferring nationality from names, LAMA recalls famous individuals with the same name and aggregates their nationalities through indirect reasoning. A dual-agent architecture comprising a Person Agent and a Media Agent, specialized in different knowledge domains, recalls famous individuals in parallel, generating Top-1 predictions through voting and Top-K predictions through conditional completion. On a 99-country nationality prediction task, LAMA achieved 0.817 accuracy, substantially outperforming conventional LLM prompting methods and neural models. Our experiments reveal that LLMs exhibit higher reliability in recalling concrete examples than in abstract reasoning, that recall-based approaches are robust to low-frequency nationalities independent of data frequency distributions, and that the dual-agent architecture functions complementarily to produce synergistic effects. These results demonstrate the effectiveness of a new multi-agent system that retrieves and aggregates LLM knowledge rather than prompting reasoning.
+ oai:arXiv.org:2601.12771v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keito Inoshita
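The recall-and-vote step described above is easy to picture. A toy sketch of the Top-1/Top-K aggregation (function name and the recalled individuals are our own illustration, not the paper's data):

```python
from collections import Counter

def aggregate_nationalities(agent_recalls, k=3):
    # Pool the nationalities of everyone the agents recalled, then vote.
    # agent_recalls: one list per agent of (recalled person, nationality) pairs.
    votes = Counter(nat for recalls in agent_recalls for _, nat in recalls)
    ranked = [nat for nat, _ in votes.most_common(k)]
    return ranked[0], ranked  # Top-1 prediction and Top-K candidate list

person_agent = [("Kenji Sato", "Japan"), ("Kenji Fukaya", "Japan")]
media_agent = [("Kenji (film)", "Japan"), ("Kenji Lopez-Alt", "United States")]
top1, topk = aggregate_nationalities([person_agent, media_agent])
print(top1, topk)
```

In the paper, the recalls come from the LLMs' parametric memory and Top-K predictions use conditional completion; the sketch only mirrors the voting logic.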
+
+
+ SDN-Blockchain Based Security Routing for UAV Communication via Reinforcement Learning
+ https://arxiv.org/abs/2601.12774
+ arXiv:2601.12774v1 Announce Type: new
+Abstract: The unmanned aerial vehicle (UAV) network plays important roles in emergency communications. However, it is challenging to design reliable routing strategies that ensure low latency, energy efficiency, and security in dynamic and attack-prone environments. To this end, we design a secure routing architecture integrating software-defined networking (SDN) for centralized control and blockchain for tamper-proof trust management. In particular, a novel security degree metric is introduced to quantify UAV trustworthiness. Based on this architecture, we propose a beam search-proximal policy optimization (BSPPO) algorithm, where beam search (BS) pre-screens high-security candidate paths, and proximal policy optimization (PPO) performs hop-by-hop routing decisions to support dynamic rerouting upon attack detection. Finally, extensive simulations under varying attack densities, packet sizes, and rerouting events demonstrate that BSPPO outperforms PPO, BS-Q learning, and BS-actor critic in terms of delay, energy consumption, and transmission success rate, showing outstanding robustness and adaptability.
+ oai:arXiv.org:2601.12774v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yulu Han, Ziye Jia, Jingjing Zhao, Lijun He, Yao Wu, Qihui Wu
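The BS stage can be sketched independently of the PPO policy: keep only the top-$B$ partial paths, ranked here by a multiplicative security score (the function and the scoring rule are our own illustration of the idea, not the paper's exact metric):

```python
def beam_search_paths(adj, security, src, dst, beam=3, max_hops=6):
    # adj: {node: [neighbors]}; security: {node: degree in [0, 1]}.
    # A candidate path's score is the product of its nodes' security degrees.
    frontier = [([src], security[src])]
    complete = []
    for _ in range(max_hops):
        nxt = []
        for path, score in frontier:
            for v in adj.get(path[-1], []):
                if v in path:          # no revisits
                    continue
                cand = (path + [v], score * security[v])
                (complete if v == dst else nxt).append(cand)
        frontier = sorted(nxt, key=lambda c: -c[1])[:beam]  # prune to beam width
    return sorted(complete, key=lambda c: -c[1])

adj = {"s": ["a", "b"], "a": ["d"], "b": ["d"], "d": []}
sec = {"s": 1.0, "a": 0.9, "b": 0.5, "d": 0.8}
print(beam_search_paths(adj, sec, "s", "d"))
```

The pre-screened candidate set is then what a hop-by-hop RL policy would choose among, which is the division of labor the abstract describes.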
+
+
+ Eddy-Resolving Global Ocean Forecasting with Multi-Scale Graph Neural Networks
+ https://arxiv.org/abs/2601.12775
+ arXiv:2601.12775v1 Announce Type: new
+Abstract: Research on data-driven ocean models has progressed rapidly in recent years; however, the application of these models to global eddy-resolving ocean forecasting remains limited. The accurate representation of ocean dynamics across a wide range of spatial scales remains a major challenge in such applications. This study proposes a multi-scale graph neural network-based ocean model for 10-day global forecasting that improves short-term prediction skill and enhances the representation of multi-scale ocean variability. The model employs an encoder-processor-decoder architecture and uses two spherical meshes with different resolutions to better capture the multi-scale nature of ocean dynamics. In addition, the model incorporates surface atmospheric variables along with ocean state variables as node inputs to improve short-term prediction accuracy by representing atmospheric forcing. Evaluation using surface kinetic energy spectra and case studies shows that the model accurately represents a broad range of spatial scales, while root mean square error comparisons demonstrate improved skill in short-term predictions. These results indicate that the proposed model delivers more accurate short-term forecasts and improved representation of multi-scale ocean dynamics, thereby highlighting its potential to advance data-driven, eddy-resolving global ocean forecasting.
+ oai:arXiv.org:2601.12775v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuta Hirabayashi, Daisuke Matusoka, Konobu Kimura
+
+
+ High-order Lagrange multiplier schemes for general Hamiltonian PDEs
+ https://arxiv.org/abs/2601.12776
+ arXiv:2601.12776v1 Announce Type: new
+Abstract: In this paper, we introduce a Lagrange multiplier approach to construct linearly implicit energy-preserving schemes of arbitrary order for general Hamiltonian PDEs. Unlike the widely used auxiliary variable methods, this novel approach does not require the nonlinear part of the energy to be bounded from below, thereby offering broader applicability. Moreover, this approach preserves the original energy exactly at both the continuous and discrete levels, as opposed to a modified energy preserved by the auxiliary variable methods. Rigorous proofs are provided for the energy conservation and numerical accuracy of all derived schemes. The trade-off for these advantages is the need to solve a nonlinear algebraic equation to determine the Lagrange multiplier. Nevertheless, numerical experiments show that the associated computational cost is generally not dominant, indicating that the new schemes retain computational efficiency comparable to the auxiliary variable-based schemes. Numerical results demonstrate the efficiency, accuracy, and structure-preserving properties of the proposed schemes.
+ oai:arXiv.org:2601.12776v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yonghui Bo, Yushun Wang
+
+
+ Open Vocabulary Panoptic Segmentation With Retrieval Augmentation
+ https://arxiv.org/abs/2601.12779
+ arXiv:2601.12779v1 Announce Type: new
+Abstract: Given an input image and a set of class names, panoptic segmentation aims to label each pixel in an image with class labels and instance labels. In comparison, Open Vocabulary Panoptic Segmentation aims to facilitate the segmentation of arbitrary classes according to user input. The challenge is that a panoptic segmentation system trained on a particular dataset typically does not generalize well to unseen classes beyond the training data. In this work, we propose RetCLIP, a retrieval-augmented panoptic segmentation method that improves performance on unseen classes. In particular, we construct a masked segment feature database using paired image-text data. At inference time, we use masked segment features from the input image as query keys to retrieve similar features and associated class labels from the database. Classification scores for the masked segment are assigned based on the similarity between query features and retrieved features. The retrieval-based classification scores are combined with CLIP-based scores to produce the final output. We incorporate our solution with a previous SOTA method (FC-CLIP). When trained on COCO, the proposed method demonstrates 30.9 PQ, 19.3 mAP, 44.0 mIoU on the ADE20k dataset, achieving +4.5 PQ, +2.5 mAP, +10.0 mIoU absolute improvement over the baseline.
+ oai:arXiv.org:2601.12779v1
+ cs.CV
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nafis Sadeq, Qingfeng Liu, Mostafa El-Khamy
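The retrieval-then-fusion step can be illustrated with a small NumPy sketch (the function name, the similarity-weighted voting, and the linear blend coefficient are our assumptions for illustration, not the paper's exact rule):

```python
import numpy as np

def fused_class_scores(query, db_feats, db_labels, clip_scores, k=5, alpha=0.5):
    # Retrieve the k nearest database segments by cosine similarity,
    # cast similarity-weighted votes per class, then blend with CLIP scores.
    q = query / np.linalg.norm(query)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q
    top = np.argsort(-sims)[:k]
    retrieval = np.zeros_like(clip_scores, dtype=float)
    for i in top:
        retrieval[db_labels[i]] += max(sims[i], 0.0)
    if retrieval.sum() > 0:
        retrieval /= retrieval.sum()
    return alpha * retrieval + (1.0 - alpha) * clip_scores

db_feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
db_labels = np.array([1, 1, 0])
scores = fused_class_scores(np.array([1.0, 0.0]), db_feats, db_labels,
                            clip_scores=np.array([0.5, 0.5]), k=2)
print(scores)
```

Here the two retrieved neighbors both carry class 1, so the retrieval votes tilt the otherwise-uniform CLIP scores toward that class.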
+
+
+ Extended Gabidulin-Kronecker Product Codes and Their Application to Cryptosystems
+ https://arxiv.org/abs/2601.12780
+ arXiv:2601.12780v1 Announce Type: new
+Abstract: In this paper, we initiate the study of Extended Gabidulin codes with a Kronecker product structure and propose three enhanced variants of the Rank Quasi-Cyclic (RQC) (Melchor et al., IEEE IT, 2018) cryptosystem. First, we establish precise bounds on the minimum rank distance of Gabidulin-Kronecker product codes under two distinct parameter regimes. Specifically, when $n_{1}=k_{1}$ and $n_{2}=m<n_{1}n_{2}$, the minimum rank distance is exactly $n_{2}-k_{2}+1$. This yields a new family of Maximum Rank Distance (MRD) codes, which are distinct from classical Gabidulin codes. For the case of $k_{1}\leq n_{1},k_{2}\leq n_{2},n_{1}n_{2}\leq m$, the minimum rank distance $d$ of Gabidulin-Kronecker product codes satisfies a tight upper and lower bound, i.e., $n_{2}-k_{2}+1 \leq d \leq (n_{1}-k_{1}+1)(n_{2}-k_{2}+1)$. Second, we introduce a new class of decodable rank-metric codes, namely Extended Gabidulin-Kronecker product (EGK) codes, which generalize the structure of Gabidulin-Kronecker product (GK) codes. We also propose a decoding algorithm that directly retrieves the codeword without recovering the error vector, thus improving efficiency. This algorithm achieves zero decoding failure probability when the error weight is within its correction capability. Third, we propose three enhanced variants of the RQC cryptosystem based on EGK codes, each offering a distinct trade-off between security and efficiency. For 128-bit security, all variants achieve significant reductions in public key size compared to the Multi-UR-AG (Bidoux et al., IEEE IT, 2024) while ensuring zero decryption failure probability--a key security advantage over many existing rank-based schemes.
+ oai:arXiv.org:2601.12780v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Zhe Sun, Terry Shue Chien Lau, Mengying Zhao, Zimeng Zhou, Fang-Wei Fu
+
+
+ VIRO: Robust and Efficient Neuro-Symbolic Reasoning with Verification for Referring Expression Comprehension
+ https://arxiv.org/abs/2601.12781
+ arXiv:2601.12781v1 Announce Type: new
+Abstract: Referring Expression Comprehension (REC) aims to localize the image region corresponding to a natural-language query. Recent neuro-symbolic REC approaches leverage large language models (LLMs) and vision-language models (VLMs) to perform compositional reasoning, decomposing queries into structured programs and executing them step-by-step. While such approaches achieve interpretable reasoning and strong zero-shot generalization, they assume that intermediate reasoning steps are accurate. However, this assumption causes cascading errors: false detections and invalid relations propagate through the reasoning chain, yielding high-confidence false positives even when no target is present in the image. To address this limitation, we introduce Verification-Integrated Reasoning Operators (VIRO), a neuro-symbolic framework that embeds lightweight operator-level verifiers within reasoning steps. Each operator executes and validates its output, such as object existence or spatial relationships, thereby allowing the system to robustly handle no-target cases when verification conditions are not met. Our framework achieves state-of-the-art performance, reaching 61.1% balanced accuracy across target-present and no-target settings, and demonstrates generalization to real-world egocentric data. Furthermore, VIRO shows superior computational efficiency in terms of throughput, high reliability with a program failure rate of less than 0.3%, and scalability by decoupling program generation from execution.
+ oai:arXiv.org:2601.12781v1
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hyejin Park, Junhyuk Kwon, Suha Kwak, Jungseul Ok
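The operator-level verification idea can be caricatured in a few lines: each step validates its own output, and the program short-circuits to a no-target answer as soon as a check fails. Everything below (the driver, the toy operators, their outputs) is a hypothetical sketch of the control flow, not the framework's API:

```python
def run_program(operators, regions):
    # Each operator maps state -> (new_state, verified). Abort to "no target"
    # as soon as any step's verifier rejects its own output.
    state = regions
    for op in operators:
        state, ok = op(state)
        if not ok:
            return None  # no-target: a verification condition was not met
    return state

detect_dog = lambda regions: (["dog@(3,4)"], True)   # existence check passes
left_of_cat = lambda dets: (dets, False)             # relation check fails

print(run_program([detect_dog, left_of_cat], regions=["r1", "r2"]))
```

Without the per-step verifier, the failed relation check would instead pass its best guess downstream, which is exactly the cascading-error mode the abstract describes.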
+
+
+ Sensing-Limited Control of Noiseless Linear Systems Under Nonlinear Observations
+ https://arxiv.org/abs/2601.12782
+ arXiv:2601.12782v1 Announce Type: new
+Abstract: This paper investigates the fundamental information-theoretic limits for the control and sensing of noiseless linear dynamical systems subject to a broad class of nonlinear observations. We analyze the interactions between the control and sensing components by characterizing the minimum information flow required for stability. Specifically, we derive necessary conditions for mean-square observability and stabilizability, demonstrating that the average directed information rate from the state to the observations must exceed the intrinsic expansion rate of the unstable dynamics. Furthermore, to address the challenges posed by non-Gaussian distributions inherent to nonlinear observation channels, we establish sufficient conditions by imposing regularity assumptions, specifically log-concavity, on the system's probabilistic components. We show that under these conditions, the divergence of differential entropy implies the convergence of the estimation error, thereby closing the gap between information-theoretic bounds and estimation performance. By establishing these results, we unveil the fundamental performance limits imposed by the sensing layer, extending classical data-rate constraints to the more challenging regime of nonlinear observation models.
+ oai:arXiv.org:2601.12782v1
+ eess.SY
+ cs.IT
+ cs.SY
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ming Li, Fan Liu, Yifeng Xiong, Jie Xu, Tao Liu
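In data-rate-theorem style, the necessary condition stated informally above is typically written as a bound of the following form (our paraphrase in standard notation, with $A$ the state matrix and $\lambda_i(A)$ its eigenvalues; the paper's exact statement may differ):

```latex
\liminf_{T \to \infty} \frac{1}{T}\, I\!\left(X^{T} \to Y^{T}\right)
  \;\ge\; \sum_{i\,:\,|\lambda_i(A)| \ge 1} \log \left|\lambda_i(A)\right|,
```

where $I(X^{T} \to Y^{T})$ denotes the directed information from the state trajectory to the observation sequence, and the right-hand side is the intrinsic expansion rate of the unstable dynamics.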
+
+
+ Unleashing Efficient Asynchronous RL Post-Training via Staleness-Constrained Rollout Coordination
+ https://arxiv.org/abs/2601.12784
+ arXiv:2601.12784v1 Announce Type: new
+Abstract: Reinforcement learning (RL) post-training has become pivotal for enhancing the capabilities of modern large models. A recent trend is to develop RL systems with a fully disaggregated architecture, which decouples the three RL phases (rollout, reward, and training) onto separate resources and executes them asynchronously. However, two critical data-level concerns arise: (1) asynchronous execution leads to data staleness in trajectories (the data generated by rollout) as the model parameters used in rollout may not be up to date, which impairs RL convergence; and (2) the length variation of trajectories introduces severe data skewness, leading to workload imbalance and degraded system performance.
+ Existing systems fail to address these two concerns in a unified manner. Techniques that tightly control data staleness often constrain effective data skewness mitigation, while aggressive data skewness mitigation tends to exacerbate data staleness. As a result, systems are forced to trade off convergence for performance, or vice versa. To address this, we propose StaleFlow, an RL post-training system that jointly tackles data staleness and skewness. First, to control staleness, StaleFlow introduces a global consistency protocol that tracks the full lifecycle of each trajectory and constrains staleness. Second, to mitigate skewness, StaleFlow re-designs the RL system architecture by constructing data servers for trajectories and parameters to achieve flexible rollout coordination. Subsequently, we develop a suite of staleness-aware, throughput-oriented strategies to enhance system performance. Evaluations show that StaleFlow achieves up to 1.42-2.68$\times$ (1.17-2.01$\times$ on average) higher throughput than state-of-the-art systems, without compromising convergence.
+ oai:arXiv.org:2601.12784v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Haoyang Li, Sheng Lin, Fangcheng Fu, Yuming Zhou, Xiaodong Ji, Yanfeng Zhao, Lefeng Wang, Jie Jiang, Bin Cui
+
+
+ Distilling Time Series Foundation Models for Efficient Forecasting
+ https://arxiv.org/abs/2601.12785
+ arXiv:2601.12785v1 Announce Type: new
+Abstract: Time Series foundation models (TSFMs) deliver strong forecasting performance through large-scale pretraining, but their large parameter sizes make deployment costly. While knowledge distillation offers a natural and effective approach for model compression, techniques developed for general machine learning tasks are not directly applicable to time series forecasting due to its unique characteristics. To address this, we present DistilTS, the first distillation framework specifically designed for TSFMs. DistilTS addresses two key challenges: (1) task difficulty discrepancy, specific to forecasting, where uniform weighting makes optimization dominated by easier short-term horizons, while long-term horizons receive weaker supervision; and (2) architecture discrepancy, a general challenge in distillation, for which we design an alignment mechanism for time series forecasting. To overcome these issues, DistilTS introduces horizon-weighted objectives to balance learning across horizons, and a temporal alignment strategy that reduces architectural mismatch, enabling compact models. Experiments on multiple benchmarks demonstrate that DistilTS achieves forecasting performance comparable to full-sized TSFMs, while reducing the parameter count to as little as 1/150 of the original and accelerating inference by up to 6000x. Code is available at: https://github.com/itsnotacie/DistilTS-ICASSP2026.
+ oai:arXiv.org:2601.12785v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yuqi Li, Kuiye Ding, Chuanguang Yang, Szu-Yu Chen, Yingli Tian
+
+
+ DUAP: Dual-task Universal Adversarial Perturbations Against Voice Control Systems
+ https://arxiv.org/abs/2601.12786
+ arXiv:2601.12786v1 Announce Type: new
+Abstract: Modern Voice Control Systems (VCS) rely on the collaboration of Automatic Speech Recognition (ASR) and Speaker Recognition (SR) for secure interaction. However, prior adversarial attacks typically target these tasks in isolation, overlooking the coupled decision pipeline in real-world scenarios. Consequently, single-task attacks often fail to pose a practical threat. To fill this gap, we first utilize gradient analysis to reveal that ASR and SR exhibit no inherent conflicts. Building on this, we propose Dual-task Universal Adversarial Perturbation (DUAP). Specifically, DUAP employs a targeted surrogate objective to effectively disrupt ASR transcription and introduces a Dynamic Normalized Ensemble (DNE) strategy to enhance transferability across diverse SR models. Furthermore, we incorporate psychoacoustic masking to ensure perturbation imperceptibility. Extensive evaluations across five ASR and six SR models demonstrate that DUAP achieves high simultaneous attack success rates and superior imperceptibility, significantly outperforming existing single-task baselines.
+ oai:arXiv.org:2601.12786v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Suyang Sun, Weifei Jin, Yuxin Cao, Wei Song, Jie Hao
+
+
+ FocusNav: Spatial Selective Attention with Waypoint Guidance for Humanoid Local Navigation
+ https://arxiv.org/abs/2601.12790
+ arXiv:2601.12790v1 Announce Type: new
+Abstract: Robust local navigation in unstructured and dynamic environments remains a significant challenge for humanoid robots, requiring a delicate balance between long-range navigation targets and immediate motion stability. In this paper, we propose FocusNav, a spatial selective attention framework that adaptively modulates the robot's perceptual field based on navigational intent and real-time stability. FocusNav features a Waypoint-Guided Spatial Cross-Attention (WGSCA) mechanism that anchors environmental feature aggregation to a sequence of predicted collision-free waypoints, ensuring task-relevant perception along the planned trajectory. To enhance robustness in complex terrains, the Stability-Aware Selective Gating (SASG) module autonomously truncates distal information when detecting instability, compelling the policy to prioritize immediate foothold safety. Extensive experiments on the Unitree G1 humanoid robot demonstrate that FocusNav significantly improves navigation success rates in challenging scenarios, outperforming baselines in both collision avoidance and motion stability, achieving robust navigation in dynamic and complex environments.
+ oai:arXiv.org:2601.12790v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yang Zhang, Jianming Ma, Liyun Yan, Zhanxiang Cao, Yazhou Zhang, Haoyang Li, Yue Gao
+
+
+ SKANet: A Cognitive Dual-Stream Framework with Adaptive Modality Fusion for Robust Compound GNSS Interference Classification
+ https://arxiv.org/abs/2601.12791
+ arXiv:2601.12791v1 Announce Type: new
+Abstract: As the electromagnetic environment becomes increasingly complex, Global Navigation Satellite Systems (GNSS) face growing threats from sophisticated jamming interference. Although Deep Learning (DL) effectively identifies basic interference, classifying compound interference remains difficult due to the superposition of diverse jamming sources. Existing single-domain approaches often suffer from performance degradation because transient burst signals and continuous global signals require conflicting feature extraction scales. We propose the Selective Kernel and Asymmetric Convolution Network (SKANet), a cognitive deep learning framework built upon a dual-stream architecture that integrates Time-Frequency Images (TFIs) and Power Spectral Density (PSD). Distinct from conventional fusion methods that rely on static receptive fields, the proposed architecture incorporates a Multi-Branch Selective Kernel (SK) module combined with Asymmetric Convolution Blocks (ACBs). This mechanism enables the network to dynamically adjust its receptive fields, acting as an adaptive filter that simultaneously captures micro-scale transient features and macro-scale spectral trends within entangled compound signals. To complement this spatial-temporal adaptation, a Squeeze-and-Excitation (SE) mechanism is integrated at the fusion stage to adaptively recalibrate the contribution of heterogeneous features from each modality. Evaluations on a dataset of 405,000 samples demonstrate that SKANet achieves an overall accuracy of 96.99\%, exhibiting superior robustness for compound jamming classification, particularly under low Jamming-to-Noise Ratio (JNR) regimes.
+ oai:arXiv.org:2601.12791v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhihan Zeng, Yang Zhao, Kaihe Wang, Dusit Niyato, Hongyuan Shu, Junchu Zhao, Yanjun Huang, Yue Xiu, Zhongpei Zhang, Ning Wei
+
+
+ Graph Laplacian assisted regularization method under noise level free heuristic and statistical stopping rule
+ https://arxiv.org/abs/2601.12792
+ arXiv:2601.12792v1 Announce Type: new
+Abstract: In this work, we address the solution of both linear and nonlinear ill-posed inverse problems by developing a novel graph-based regularization framework, where the regularization term is formulated through an iteratively updated graph Laplacian. The proposed approach operates without prior knowledge of the noise level and employs two distinct stopping criteria, namely the heuristic rule and the statistical discrepancy principle. To facilitate the latter, we utilize averaged measurements derived from multiple repeated observations. We provide a detailed convergence analysis of the method from a statistical perspective, establishing its stability and regularization properties under both stopping strategies. The algorithm begins with the computation of an initial reconstruction using any suitable technique, such as Tikhonov regularization (Tik), filtered back projection (FBP), or total variation (TV), which is used as the foundation for generating the initial graph Laplacian. The reconstruction is then refined iteratively, with the graph Laplacian dynamically re-calibrated to reflect the evolving structure of the solution. Finally, we present numerical experiments on X-ray Computed Tomography (CT) and phase retrieval CT, demonstrating the effectiveness and robustness of the proposed method and comparing its reconstruction performance under both stopping rules.
+ oai:arXiv.org:2601.12792v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Harshit Bajpai, Ankik Kumar Giri
+
+
+ Two Frameworks and their Fourth Order Implicit Schemes for Time Discretization of Maxwell's Equations
+ https://arxiv.org/abs/2601.12793
+ arXiv:2601.12793v1 Announce Type: new
+Abstract: Our work is about energy conserving fourth-order time discretizations of a three-field formulation of Maxwell's equations in conjunction with a spatial discretization using higher-order and compatible de Rham finite element spaces. Toward this end, we delineate two broad classes of strategies for general higher-order time discretizations which we term spatial and temporal strategies. We provide a description of these two strategies and develop fourth-order time accurate schemes in the context of our Maxwell's system. However, our description can be used to prescribe similar fourth- or even higher-order time-integration methods for any linear (or quasi-linear) system of time-dependent partial differential equations. Our organizing principle in our proposed two strategies is to Taylor expand the unknown solution in time by assuming sufficient regularity. Then, in the spatial strategy, we use Maxwell's equations themselves to replace the fourth-order time derivatives in an appropriately truncated Taylor expansion with corresponding higher-order spatial derivatives. On the other hand, in the temporal strategy, we simply use higher-order finite difference schemes for the various higher-order time derivative terms in the truncated Taylor approximation. In both cases, we then defer to a standard finite element exterior calculus manner of compatible discretization for the spatial component of the Maxwell's solution. For our proposed schemes corresponding to the two strategies, we show that they are both stable and convergent and provide some validating numerical examples in $\mathbb{R}^2$. Our main contributions are in the development of the fourth-order time discretization methods that are energy conserving using our two outlined strategies and proofs of their convergence for semi- and full-discretizations of our three-field system of Maxwell's equations.
+ oai:arXiv.org:2601.12793v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Archana Arya, Kaushik Kalyanaraman
+
+
+ Combating Noisy Labels through Fostering Self- and Neighbor-Consistency
+ https://arxiv.org/abs/2601.12795
+ arXiv:2601.12795v1 Announce Type: new
+Abstract: Label noise is pervasive in various real-world scenarios, posing challenges in supervised deep learning. Deep networks are vulnerable to such label-corrupted samples due to the memorization effect. One major stream of previous methods concentrates on identifying clean data for training. However, these methods often neglect imbalances in label noise across different mini-batches and devote insufficient attention to out-of-distribution noisy data. To this end, we propose a noise-robust method named Jo-SNC (\textbf{Jo}int sample selection and model regularization based on \textbf{S}elf- and \textbf{N}eighbor-\textbf{C}onsistency). Specifically, we propose to employ the Jensen-Shannon divergence to measure the ``likelihood'' of a sample being clean or out-of-distribution. This process factors in the nearest neighbors of each sample to reinforce the reliability of clean sample identification. We design a self-adaptive, data-driven thresholding scheme to adjust per-class selection thresholds. While clean samples undergo conventional training, detected in-distribution and out-of-distribution noisy samples are trained following partial label learning and negative learning, respectively. Finally, we advance the model performance further by proposing a triplet consistency regularization that promotes self-prediction consistency, neighbor-prediction consistency, and feature consistency. Extensive experiments on various benchmark datasets and comprehensive ablation studies demonstrate the effectiveness and superiority of our approach over existing state-of-the-art methods.
+ oai:arXiv.org:2601.12795v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zeren Sun, Yazhou Yao, Tongliang Liu, Zechao Li, Fumin Shen, Jinhui Tang
+
+
+ Contact-Aware Neural Dynamics
+ https://arxiv.org/abs/2601.12796
+ arXiv:2601.12796v1 Announce Type: new
+Abstract: High-fidelity physics simulation is essential for scalable robotic learning, but the sim-to-real gap persists, especially for tasks involving complex, dynamic, and discontinuous interactions like physical contacts. Explicit system identification, which tunes explicit simulator parameters, is often insufficient to align the intricate, high-dimensional, and state-dependent dynamics of the real world. To overcome this, we propose an implicit sim-to-real alignment framework that learns to directly align the simulator's dynamics with contact information. Our method treats the off-the-shelf simulator as a base prior and learns a contact-aware neural dynamics model to refine simulated states using real-world observations. We show that using tactile contact information from robotic hands can effectively model the non-smooth discontinuities inherent in contact-rich tasks, resulting in a neural dynamics model grounded by real-world data. We demonstrate that this learned forward dynamics model improves state prediction accuracy and can be effectively used to predict policy performance and refine policies trained purely in standard simulators, offering a scalable, data-driven approach to sim-to-real alignment.
+ oai:arXiv.org:2601.12796v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Changwei Jing, Jai Krishna Bandi, Jianglong Ye, Yan Duan, Pieter Abbeel, Xiaolong Wang, Sha Yi
+
+
+ PhyG-MoE: A Physics-Guided Mixture-of-Experts Framework for Energy-Efficient GNSS Interference Recognition
+ https://arxiv.org/abs/2601.12798
+ arXiv:2601.12798v1 Announce Type: new
+Abstract: Complex electromagnetic interference increasingly compromises Global Navigation Satellite Systems (GNSS), threatening the reliability of Space-Air-Ground Integrated Networks (SAGIN). Although deep learning has advanced interference recognition, current static models suffer from a \textbf{fundamental limitation}: they impose a fixed computational topology regardless of the input's physical entropy. This rigidity leads to severe resource mismatch, where simple primitives consume the same processing cost as chaotic, saturated mixtures. To resolve this, this paper introduces PhyG-MoE (Physics-Guided Mixture-of-Experts), a framework designed to \textbf{dynamically align model capacity with signal complexity}. Unlike static architectures, the proposed system employs a spectrum-based gating mechanism that routes signals based on their spectral feature entanglement. A high-capacity TransNeXt expert is activated on-demand to disentangle complex features in saturated scenarios, while lightweight experts handle fundamental signals to minimize latency. Evaluations on 21 jamming categories demonstrate that PhyG-MoE achieves an overall accuracy of 97.58\%. By resolving the intrinsic conflict between static computing and dynamic electromagnetic environments, the proposed framework significantly reduces computational overhead without performance degradation, offering a viable solution for resource-constrained cognitive receivers.
+ oai:arXiv.org:2601.12798v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhihan Zeng, Yang Zhao, Kaihe Wang, Dusit Niyato, Yue Xiu, Lu Chen, Zhongpei Zhang, Ning Wei
+
+
+ FRoM-W1: Towards General Humanoid Whole-Body Control with Language Instructions
+ https://arxiv.org/abs/2601.12799
+ arXiv:2601.12799v1 Announce Type: new
+Abstract: Humanoid robots are capable of performing various actions such as greeting, dancing and even backflipping. However, these motions are often hard-coded or specifically trained, which limits their versatility. In this work, we present FRoM-W1, an open-source framework designed to achieve general humanoid whole-body motion control using natural language. To universally understand natural language and generate corresponding motions, as well as enable various humanoid robots to stably execute these motions in the physical world under gravity, FRoM-W1 operates in two stages: (a) H-GPT: utilizing massive human data, a large-scale language-driven human whole-body motion generation model is trained to generate diverse natural behaviors. We further leverage the Chain-of-Thought technique to improve the model's generalization in instruction understanding. (b) H-ACT: After retargeting generated human whole-body motions into robot-specific actions, a motion controller that is pretrained and further fine-tuned through reinforcement learning in physical simulation enables humanoid robots to accurately and stably perform corresponding actions. It is then deployed on real robots via a modular simulation-to-reality module. We extensively evaluate FRoM-W1 on Unitree H1 and G1 robots. Results demonstrate superior performance on the HumanML3D-X benchmark for human whole-body motion generation, and our introduced reinforcement learning fine-tuning consistently improves both motion tracking accuracy and task success rates of these humanoid robots. We open-source the entire FRoM-W1 framework and hope it will advance the development of humanoid intelligence.
+ oai:arXiv.org:2601.12799v1
+ cs.RO
+ cs.CL
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Peng Li, Zihan Zhuang, Yangfan Gao, Yi Dong, Sixian Li, Changhao Jiang, Shihan Dou, Zhiheng Xi, Enyu Zhou, Jixuan Huang, Hui Li, Jingjing Gong, Xingjun Ma, Tao Gui, Zuxuan Wu, Qi Zhang, Xuanjing Huang, Yu-Gang Jiang, Xipeng Qiu
+
+
+ UNMIXX: Untangling Highly Correlated Singing Voices Mixtures
+ https://arxiv.org/abs/2601.12802
+ arXiv:2601.12802v1 Announce Type: new
+Abstract: We introduce UNMIXX, a novel framework for multiple singing voices separation (MSVS). While related to speech separation, MSVS faces unique challenges: data scarcity and the highly correlated nature of singing voice mixtures. To address these issues, we propose UNMIXX with three key components: (1) a musically informed mixing strategy to construct highly correlated, music-like mixtures, (2) cross-source attention that drives the representations of the two singers apart via reverse attention, and (3) a magnitude penalty loss penalizing erroneously assigned interfering energy. UNMIXX not only addresses data scarcity by simulating realistic training data, but also excels at separating highly correlated mixtures through cross-source interactions at both the architectural and loss levels. Our extensive experiments demonstrate that UNMIXX greatly enhances performance, with SDRi gains exceeding 2.2 dB over prior work.
+ oai:arXiv.org:2601.12802v1
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jihoo Jung, Ji-Hoon Kim, Doyeop Kwak, Junwon Lee, Juhan Nam, Joon Son Chung
+
+
+ SL-CBM: Enhancing Concept Bottleneck Models with Semantic Locality for Better Interpretability
+ https://arxiv.org/abs/2601.12804
+ arXiv:2601.12804v1 Announce Type: new
+Abstract: Explainable AI (XAI) is crucial for building transparent and trustworthy machine learning systems, especially in high-stakes domains. Concept Bottleneck Models (CBMs) have emerged as a promising ante-hoc approach that provides interpretable, concept-level explanations by explicitly modeling human-understandable concepts. However, existing CBMs often suffer from poor locality faithfulness, failing to spatially align concepts with meaningful image regions, which limits their interpretability and reliability. In this work, we propose SL-CBM (CBM with Semantic Locality), a novel extension that enforces locality faithfulness by generating spatially coherent saliency maps at both concept and class levels. SL-CBM integrates a 1x1 convolutional layer with a cross-attention mechanism to enhance alignment between concepts, image regions, and final predictions. Unlike prior methods, SL-CBM produces faithful saliency maps inherently tied to the model's internal reasoning, facilitating more effective debugging and intervention. Extensive experiments on image datasets demonstrate that SL-CBM substantially improves locality faithfulness, explanation quality, and intervention efficacy while maintaining competitive classification accuracy. Our ablation studies highlight the importance of contrastive and entropy-based regularization for balancing accuracy, sparsity, and faithfulness. Overall, SL-CBM bridges the gap between concept-based reasoning and spatial explainability, setting a new standard for interpretable and trustworthy concept-based models.
+ oai:arXiv.org:2601.12804v1
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hanwei Zhang, Luo Cheng, Rui Wen, Yang Zhang, Lijun Zhang, Holger Hermanns
+
+
+ Semi-supervised Instruction Tuning for Large Language Models on Text-Attributed Graphs
+ https://arxiv.org/abs/2601.12807
+ arXiv:2601.12807v1 Announce Type: new
+Abstract: The emergent reasoning capabilities of Large Language Models (LLMs) offer a transformative paradigm for analyzing text-attributed graphs. While instruction tuning is the prevailing method for adapting pre-trained LLMs to graph learning tasks like node classification, it requires a substantial volume of annotated (INSTRUCTION, OUTPUT) pairs deriving from labeled nodes. This requirement is particularly prohibitive in the social domain, where obtaining expert labels for sensitive or evolving content is costly and slow. Furthermore, standard graph instruction tuning fails to exploit the vast amount of unlabeled nodes, which contain latent correlations due to edge connections that are beneficial for downstream predictions. To bridge this gap, we propose a novel Semi-supervised Instruction Tuning pipeline for Graph Learning, named SIT-Graph. Notably, SIT-Graph is model-agnostic and can be seamlessly integrated into any graph instruction tuning method that utilizes LLMs as the predictor. SIT-Graph operates via an iterative self-training process. Initially, the model is fine-tuned using instruction pairs constructed solely from the labeled nodes. Then it generates confidence-filtered pseudo-responses for unlabeled nodes to strategically augment the dataset for the next round of fine-tuning. Finally, this iterative refinement progressively aligns the LLM with the underlying node correlations. Extensive experiments demonstrate that when incorporated into state-of-the-art graph instruction tuning methods, SIT-Graph significantly enhances their performance on text-attributed graph benchmarks, achieving over 20% improvement under the low label ratio settings.
+ oai:arXiv.org:2601.12807v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Zixing Song, Irwin King
+
+
+ Joint Source-Channel-Generation Coding: From Distortion-oriented Reconstruction to Semantic-consistent Generation
+ https://arxiv.org/abs/2601.12808
+ arXiv:2601.12808v1 Announce Type: new
+Abstract: Conventional communication systems, including both separation-based coding and AI-driven joint source-channel coding (JSCC), are largely guided by Shannon's rate-distortion theory. However, relying on generic distortion metrics fails to capture complex human visual perception, often resulting in blurred or unrealistic reconstructions. In this paper, we propose Joint Source-Channel-Generation Coding (JSCGC), a novel paradigm that shifts the focus from deterministic reconstruction to probabilistic generation. JSCGC leverages a generative model at the receiver as a generator rather than a conventional decoder to parameterize the data distribution, enabling direct maximization of mutual information under channel constraints while controlling stochastic sampling to produce outputs residing on the authentic data manifold with high fidelity. We further derive a theoretical lower bound on the maximum semantic inconsistency with given transmitted mutual information, elucidating the fundamental limits of communication in controlling the generative process. Extensive experiments on image transmission demonstrate that JSCGC substantially improves perceptual quality and semantic fidelity, significantly outperforming conventional distortion-oriented JSCC methods.
+ oai:arXiv.org:2601.12808v1
+ cs.IT
+ cs.CV
+ cs.LG
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tong Wu, Zhiyong Chen, Guo Lu, Li Song, Feng Yang, Meixia Tao, Wenjun Zhang
+
+
+ Left-Right Symmetry Breaking in CLIP-style Vision-Language Models Trained on Synthetic Spatial-Relation Data
+ https://arxiv.org/abs/2601.12809
+ arXiv:2601.12809v1 Announce Type: new
+Abstract: Spatial understanding remains a key challenge in vision-language models. Yet it is still unclear whether such understanding is truly acquired, and if so, through what mechanisms. We present a controllable 1D image-text testbed to probe how left-right relational understanding emerges in Transformer-based vision and text encoders trained with a CLIP-style contrastive objective. We train lightweight Transformer-based vision and text encoders end-to-end on paired descriptions of one- and two-object scenes and evaluate generalization to unseen object pairs while systematically varying label and layout diversity. We find that contrastive training learns left-right relations and that label diversity, more than layout diversity, is the primary driver of generalization in this setting. To gain mechanistic understanding, we perform an attention decomposition and show that interactions between positional and token embeddings induce a horizontal attention gradient that breaks left-right symmetry in the encoders; ablating this contribution substantially reduces left-right discrimination. Our results provide mechanistic insight into when and how CLIP-style models acquire relational competence.
+ oai:arXiv.org:2601.12809v1
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Takaki Yamamoto, Chihiro Noguchi, Toshihiro Tanizawa
+
+
+ Docker Does Not Guarantee Reproducibility
+ https://arxiv.org/abs/2601.12811
+ arXiv:2601.12811v1 Announce Type: new
+Abstract: The reproducibility of software environments is a critical concern in modern software engineering, with ramifications ranging from the effectiveness of collaboration workflows to software supply chain security and scientific reproducibility. Containerization technologies like Docker address this problem by encapsulating software environments into shareable filesystem snapshots known as images. While Docker is frequently cited in the literature as a tool that enables reproducibility in theory, the extent of its guarantees and limitations in practice remains under-explored.
+ In this work, we address this gap through two complementary approaches. First, we conduct a systematic literature review to examine how Docker is framed in scientific discourse on reproducibility and to identify documented best practices for writing Dockerfiles enabling reproducible image building. Then, we perform a large-scale empirical study of 5298 Docker builds collected from GitHub workflows. By rebuilding these images and comparing the results with their historical counterparts, we assess the real reproducibility of Docker images and evaluate the effectiveness of the best practices identified in the literature.
+ oai:arXiv.org:2601.12811v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Julien Malka, Stefano Zacchiroli, Th\'eo Zimmermann
+
+
+ Do Clinical Question Answering Systems Really Need Specialised Medical Fine Tuning?
+ https://arxiv.org/abs/2601.12812
+ arXiv:2601.12812v1 Announce Type: new
+Abstract: Clinical Question-Answering (CQA) industry systems increasingly rely on Large Language Models (LLMs), yet their deployment is often guided by the assumption that domain-specific fine-tuning is essential. Although specialised medical LLMs such as BioBERT, BioGPT, and PubMedBERT remain popular, they face practical limitations including narrow coverage, high retraining costs, and limited adaptability. Efforts based on Supervised Fine-Tuning (SFT) have attempted to address this assumption but continue to reinforce what we term the SPECIALISATION FALLACY: the belief that specialised medical LLMs are inherently superior for CQA. To address this assumption, we introduce MEDASSESS-X, an industry-deployment-oriented CQA framework that applies alignment at inference time rather than through SFT. MEDASSESS-X uses lightweight steering vectors to guide model activations toward medically consistent reasoning without updating model weights or requiring domain-specific retraining. This inference-time alignment layer stabilises CQA performance across both general-purpose and specialised medical LLMs, thereby resolving the SPECIALISATION FALLACY. Empirically, MEDASSESS-X delivers consistent gains across all LLM families, improving Accuracy by up to +6%, Factual Consistency by +7%, and reducing Safety Error Rate by as much as 50%.
+ oai:arXiv.org:2601.12812v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sushant Kumar Ray, Gautam Siddharth Kashyap, Sahil Tripathi, Nipun Joshi, Vijay Govindarajan, Rafiq Ali, Jiechao Gao, Usman Naseem
+
+
+ A Formally Verified Procedure for Width Inference in FIRRTL
+ https://arxiv.org/abs/2601.12813
+ arXiv:2601.12813v1 Announce Type: new
+Abstract: FIRRTL is an intermediate representation language for Register Transfer Level (RTL) hardware designs. In FIRRTL programs, the bit widths of many components are not specified explicitly and must be inferred during compilation. In mainstream FIRRTL compilers, such as the official compiler firtool, width inference is conducted by a compilation pass referred to as InferWidths, which may fail even for simple FIRRTL programs. In this paper, we thoroughly investigate the width inference problem for FIRRTL programs. We show that, if the constraints obtained from a FIRRTL program are satisfiable, there exists a unique least solution. Based on this result, we propose a complete procedure for solving the width inference problem. We implement it in the interactive theorem prover Rocq and prove its functional correctness. From the Rocq implementation, we extract an OCaml implementation, which is the first formally verified implementation of the InferWidths pass. Extensive experiments demonstrate that our approach can solve more instances than the official InferWidths pass in firtool, normally with high efficiency.
+ oai:arXiv.org:2601.12813v1
+ cs.PL
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Keyin Wang, Xiaomu Shi, Jiaxiang Liu, Zhilin Wu, Taolve Chen, Fu Song, David N. Jansen
+
+
+ CSGaussian: Progressive Rate-Distortion Compression and Segmentation for 3D Gaussian Splatting
+ https://arxiv.org/abs/2601.12814
+ arXiv:2601.12814v1 Announce Type: new
+Abstract: We present the first unified framework for rate-distortion-optimized compression and segmentation of 3D Gaussian Splatting (3DGS). While 3DGS has proven effective for both real-time rendering and semantic scene understanding, prior works have largely treated these tasks independently, leaving their joint consideration unexplored. Inspired by recent advances in rate-distortion-optimized 3DGS compression, this work integrates semantic learning into the compression pipeline to support decoder-side applications, such as scene editing and manipulation, that extend beyond traditional scene reconstruction and view synthesis. Our scheme features a lightweight implicit neural representation-based hyperprior, enabling efficient entropy coding of both color and semantic attributes while avoiding the costly grid-based hyperpriors used in many prior works. To facilitate compression and segmentation, we further develop compression-guided segmentation learning, consisting of quantization-aware training to enhance feature separability and a quality-aware weighting mechanism to suppress unreliable Gaussian primitives. Extensive experiments on the LERF and 3D-OVS datasets demonstrate that our approach significantly reduces transmission cost while preserving high rendering quality and strong segmentation performance.
+ oai:arXiv.org:2601.12814v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yu-Jen Tseng, Chia-Hao Kao, Jing-Zhong Chen, Alessandro Gnutti, Shao-Yuan Lo, Yen-Yu Lin, Wen-Hsiao Peng
+
+
+ Multimodal Multi-Agent Empowered Legal Judgment Prediction
+ https://arxiv.org/abs/2601.12815
+ arXiv:2601.12815v1 Announce Type: new
+Abstract: Legal Judgment Prediction (LJP) aims to predict the outcomes of legal cases based on factual descriptions, serving as a fundamental task to advance the development of legal systems. Traditional methods often rely on statistical analyses or role-based simulations, but they struggle with multiple allegations and diverse evidence, and lack adaptability. In this paper, we introduce JurisMMA, a novel framework for LJP that effectively decomposes trial tasks, standardizes processes, and organizes them into distinct stages. Furthermore, we build JurisMM, a large dataset with over 100,000 recent Chinese judicial records, including both text and multimodal video-text data, enabling comprehensive evaluation. Experiments on JurisMM and the benchmark LawBench validate our framework's effectiveness. These results indicate that our framework is effective not only for LJP but also for a broader range of legal applications, offering new perspectives for the development of future legal methods and datasets.
+ oai:arXiv.org:2601.12815v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhaolu Kang, Junhao Gong, Qingxi Chen, Hao Zhang, Jiaxin Liu, Rong Fu, Zhiyuan Feng, Yuan Wang, Simon Fong, Kaiyue Zhou
+
+
+ Fisher-Orthogonal Projected Natural Gradient Descent for Continual Learning
+ https://arxiv.org/abs/2601.12816
+ arXiv:2601.12816v1 Announce Type: new
+Abstract: Continual learning aims to enable neural networks to acquire new knowledge on sequential tasks. However, the key challenge in such settings is to learn new tasks without catastrophically forgetting previously learned tasks. We propose the Fisher-Orthogonal Projected Natural Gradient Descent (FOPNG) optimizer, which enforces Fisher-orthogonal constraints on parameter updates to preserve old task performance while learning new tasks. Unlike existing methods that operate in Euclidean parameter space, FOPNG projects gradients onto the Fisher-orthogonal complement of previous task gradients. This approach unifies natural gradient descent with orthogonal gradient methods within an information-geometric framework. The resulting update direction is invariant under reparameterization, guarantees descent in the Fisher metric, and helps preserve prior task outputs. We provide theoretical analysis establishing the properties of the projected update, describe efficient and practical implementations using the diagonal Fisher, and demonstrate strong results on standard continual learning benchmarks such as Permuted-MNIST, Split-MNIST, Rotated-MNIST, Split-CIFAR10, and Split-CIFAR100.
+ oai:arXiv.org:2601.12816v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ishir Garg, Neel Kolhe, Andy Peng, Rohan Gopalam
+
+
+ A Generalist Foundation Model for Total-body PET/CT Enables Diagnostic Reporting and System-wide Metabolic Profiling
+ https://arxiv.org/abs/2601.12820
+ arXiv:2601.12820v1 Announce Type: new
+Abstract: Total-body PET/CT enables system-wide molecular imaging, but heterogeneous anatomical and metabolic signals, approximately 2 m axial coverage, and structured radiology semantics challenge existing medical AI models that assume single-modality inputs, localized fields of view, and coarse image-text alignment. We introduce SDF-HOLO (Systemic Dual-stream Fusion Holo Model), a multimodal foundation model for holistic total-body PET/CT, pre-trained on more than 10,000 patients. SDF-HOLO decouples CT and PET representation learning with dual-stream encoders and couples them through a cross-modal interaction module, allowing anatomical context to refine PET aggregation while metabolic saliency guides subtle morphological reasoning. To model long-range dependencies across the body, hierarchical context modeling combines efficient local windows with global attention. To bridge voxels and clinical language, we use anatomical segmentation masks as explicit semantic anchors and perform voxel-mask-text alignment during pre-training. Across tumor segmentation, low-dose lesion detection, and multilingual diagnostic report generation, SDF-HOLO outperforms strong task-specific and clinical-reference baselines while reducing localization errors and hallucinated findings. Beyond focal interpretation, the model enables system-wide metabolic profiling and reveals tumor-associated fingerprints of inter-organ metabolic network interactions, providing a scalable computational foundation for total-body PET/CT diagnostics and system-level precision oncology.
+ oai:arXiv.org:2601.12820v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wei Chen, Liang Wu, Shuyi Lu, Yuanyuan Sun, Wenkai Bi, Zilong Yuan, Yaoyao He, Feng Wang, Junchi Ma, Shuyong Liu, Zhaoping Cheng, Xiaoyan Hu, Jianfeng Qiu
+
+
+ MirrorGuard: Toward Secure Computer-Use Agents via Simulation-to-Real Reasoning Correction
+ https://arxiv.org/abs/2601.12822
+ arXiv:2601.12822v1 Announce Type: new
+Abstract: Large foundation models are integrated into Computer Use Agents (CUAs), enabling autonomous interaction with operating systems through graphical user interfaces (GUIs) to perform complex tasks. This autonomy introduces serious security risks: malicious instructions or visual prompt injections can trigger unsafe reasoning and cause harmful system-level actions. Existing defenses, such as detection-based blocking, prevent damage but often abort tasks prematurely, reducing agent utility. In this paper, we present MirrorGuard, a plug-and-play defense framework that uses simulation-based training to improve CUA security in the real world. To reduce the cost of large-scale training in operating systems, we propose a novel neural-symbolic simulation pipeline that generates realistic, high-risk GUI interaction trajectories entirely in a text-based simulated environment, capturing unsafe reasoning patterns and potential system hazards without executing real operations. In the simulation environment, MirrorGuard learns to intercept and rectify insecure reasoning chains of CUAs before they produce and execute unsafe actions. In real-world testing, extensive evaluations across diverse benchmarks and CUA architectures show that MirrorGuard significantly mitigates security risks. For instance, on the ByteDance UI-TARS system, it reduces the unsafe rate from 66.5% to 13.0% while maintaining a marginal false refusal rate (FRR). In contrast, the state-of-the-art GuardAgent only achieves a reduction to 53.9% and suffers from a 15.4% higher FRR. Our work proves that simulation-derived defenses can provide robust, real-world protection while maintaining the fundamental utility of the agent. Our code and model are publicly available at https://bmz-q-q.github.io/MirrorGuard/.
+ oai:arXiv.org:2601.12822v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Wenqi Zhang, Yulin Shen, Changyue Jiang, Jiarun Dai, Geng Hong, Xudong Pan
+
+
+ TreeDGS: Aerial Gaussian Splatting for Distant DBH Measurement
+ https://arxiv.org/abs/2601.12823
+ arXiv:2601.12823v1 Announce Type: new
+Abstract: Aerial remote sensing enables efficient large-area surveying, but accurate direct object-level measurement remains difficult in complex natural scenes. Recent advancements in 3D vision, particularly learned radiance-field representations such as NeRF and 3D Gaussian Splatting, have begun to raise the ceiling on reconstruction fidelity and densifiable geometry from posed imagery. Nevertheless, direct aerial measurement of important natural attributes such as tree diameter at breast height (DBH) remains challenging. Trunks in aerial forest scans are distant and sparsely observed in image views: at typical operating altitudes, stems may span only a few pixels. With these constraints, conventional reconstruction methods leave breast-height trunk geometry weakly constrained. We present TreeDGS, an aerial image reconstruction method that leverages 3D Gaussian Splatting as a continuous, densifiable scene representation for trunk measurement. After SfM-MVS initialization and Gaussian optimization, we extract a dense point set from the Gaussian field using RaDe-GS's depth-aware cumulative-opacity integration and associate each sample with a multi-view opacity reliability score. We then estimate DBH from trunk-isolated points using opacity-weighted solid-circle fitting. Evaluated on 10 plots with field-measured DBH, TreeDGS reaches 4.79 cm RMSE (about 2.6 pixels at this GSD) and outperforms a state-of-the-art LiDAR baseline (7.91 cm RMSE), demonstrating that densified splat-based geometry can enable accurate, low-cost aerial DBH measurement.
+ oai:arXiv.org:2601.12823v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Belal Shaheen, Minh-Hieu Nguyen, Bach-Thuan Bui, Shubham, Tim Wu, Michael Fairley, Matthew David Zane, Michael Wu, James Tompkin
+
+
+ Seeing Isn't Always Believing: Analysis of Grad-CAM Faithfulness and Localization Reliability in Lung Cancer CT Classification
+ https://arxiv.org/abs/2601.12826
+ arXiv:2601.12826v1 Announce Type: new
+Abstract: Explainable Artificial Intelligence (XAI) techniques, such as Gradient-weighted Class Activation Mapping (Grad-CAM), have become indispensable for visualizing the reasoning process of deep neural networks in medical image analysis. Despite their popularity, the faithfulness and reliability of these heatmap-based explanations remain under scrutiny. This study critically investigates whether Grad-CAM truly represents the internal decision-making of deep models trained for lung cancer image classification. Using the publicly available IQ-OTH/NCCD dataset, we evaluate five representative architectures: ResNet-50, ResNet-101, DenseNet-161, EfficientNet-B0, and ViT-Base-Patch16-224, to explore model-dependent variations in Grad-CAM interpretability. We introduce a quantitative evaluation framework that combines localization accuracy, perturbation-based faithfulness, and explanation consistency to assess Grad-CAM reliability across architectures. Experimental findings reveal that while Grad-CAM effectively highlights salient tumor regions in most convolutional networks, its interpretive fidelity significantly degrades for Vision Transformer models due to non-local attention behavior. Furthermore, cross-model comparisons indicate substantial variability in saliency localization, implying that Grad-CAM explanations may not always correspond to the true diagnostic evidence used by the networks. This work exposes critical limitations of current saliency-based XAI approaches in medical imaging and emphasizes the need for model-aware interpretability methods that are both computationally sound and clinically meaningful. Our findings aim to inspire a more cautious and rigorous adoption of visual explanation tools in medical AI, urging the community to rethink what it truly means to "trust" a model's explanation.
+ oai:arXiv.org:2601.12826v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Teerapong Panboonyuen
+
+
+ The Unfairness of Multifactorial Bias in Recommendation
+ https://arxiv.org/abs/2601.12828
+ arXiv:2601.12828v1 Announce Type: new
+Abstract: Popularity bias and positivity bias are two prominent sources of bias in recommender systems. Both arise from input data, propagate through recommendation models, and lead to unfair or suboptimal outcomes. Popularity bias occurs when a small subset of items receives most interactions, while positivity bias stems from the over-representation of high rating values. Although each bias has been studied independently, their combined effect, which we refer to as multifactorial bias, remains underexplored. In this work, we examine how multifactorial bias influences item-side fairness, focusing on exposure bias, which reflects the unequal visibility of items in recommendation outputs. Through simulation studies, we find that positivity bias is disproportionately concentrated on popular items, further amplifying their over-exposure. Motivated by this insight, we adapt a percentile-based rating transformation as a pre-processing strategy to mitigate multifactorial bias. Experiments using six recommendation algorithms across four public datasets show that this approach improves exposure fairness with negligible accuracy loss. We also demonstrate that integrating this pre-processing step into post-processing fairness pipelines enhances their effectiveness and efficiency, enabling comparable or better fairness with reduced computational cost. These findings highlight the importance of addressing multifactorial bias and demonstrate the practical value of simple, data-driven pre-processing methods for improving fairness in recommender systems.
+ oai:arXiv.org:2601.12828v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Masoud Mansoury, Jin Huang, Mykola Pechenizkiy, Herke van Hoof, Maarten de Rijke
+
+
+ From Design to Deorbit: A Solar-Electric Autonomous Module for Multi-Debris Remediation
+ https://arxiv.org/abs/2601.12830
+ arXiv:2601.12830v1 Announce Type: new
+Abstract: The escalating accumulation of orbital debris threatens the sustainability of space operations, necessitating active removal solutions that overcome the limitations of current fuel-dependent methods. To address this, this study introduces a novel remediation architecture that integrates a mechanical clamping system for secure capture with a high-efficiency, solar-powered NASA Evolutionary Xenon Thruster (NEXT) and autonomous navigation protocols. High-fidelity simulations validate the architecture's capabilities, demonstrating a successful retrograde deorbit from 800 km to 100 km, <10 m position Root Mean Square Error (RMSE) via radar-based Extended Kalman Filter (EKF) navigation, and a 93% data delivery efficiency within 1 second using Delay/Disruption Tolerant Network (DTN) protocols. This approach significantly advances orbital management by establishing a benchmark for renewable solar propulsion that minimizes reliance on conventional fuels and extends mission longevity for multi-target removal.
+ oai:arXiv.org:2601.12830v1
+ cs.DC
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Om Mishra, Jayesh Patil, Sathwik Narkedimilli, G Srikantha Sharma, Ananda S, Manjunath K Vanahalli
+
+
+ Data-Consistent Learning of Inverse Problems
+ https://arxiv.org/abs/2601.12831
+ arXiv:2601.12831v1 Announce Type: new
+Abstract: Inverse problems are inherently ill-posed, suffering from non-uniqueness and instability. Classical regularization methods provide mathematically well-founded solutions, ensuring stability and convergence, but often at the cost of reduced flexibility or visual quality. Learned reconstruction methods, such as convolutional neural networks, can produce visually compelling results, yet they typically lack rigorous theoretical guarantees. Data-consistent (DC) networks address this gap by enforcing the measurement model within the network architecture. In particular, null-space networks combined with a classical regularization method as an initial reconstruction define a convergent regularization method. This approach preserves the theoretical reliability of classical schemes while leveraging the expressive power of data-driven learning, yielding reconstructions that are both accurate and visually appealing.
+ oai:arXiv.org:2601.12831v1
+ math.NA
+ cs.CV
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Markus Haltmeier, Gyeongha Hwang
+
+
+ Temporal Fair Division of Indivisible Goods with Scheduling
+ https://arxiv.org/abs/2601.12835
+ arXiv:2601.12835v1 Announce Type: new
+Abstract: We study temporal fair division, where agents receive goods over multiple rounds and cumulative fairness is required. We investigate Temporal Envy-Freeness Up to One Good (TEF1) and Up to Any Good (TEFX), its approximation $\alpha$-TEFX, and Temporal Maximin Share (TMMS). Motivated by known impossibilities in standard settings, we consider the model in various restricted settings and extend it by introducing scheduling.
+ Our main contributions draw the boundary between possibility and impossibility. First, regarding temporal fair division without scheduling, we prove that while constant-factor $\alpha$-TEFX is impossible in general, a $1/2$-approximation is achievable for generalized binary valuations and identical days with two agents. Second, regarding temporal fair division with scheduling, we demonstrate that a scheduling buffer of size at least $n/2$ enables TEF1 for identical days. However, we establish that TEFX and TMMS remain largely impossible even with scheduling or restricted domains. These results highlight the inherent difficulty of strict temporal fairness and quantify the trade-offs required to achieve approximation guarantees.
+ oai:arXiv.org:2601.12835v1
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kui Wang Choi, Minming Li
+
+
+ Knowledge-Integrated Representation Learning for Crypto Anomaly Detection under Extreme Label Scarcity; Relational Domain-Logic Integration with Retrieval-Grounded Context and Path-Level Explanations
+ https://arxiv.org/abs/2601.12839
+ arXiv:2601.12839v1 Announce Type: new
+Abstract: Detecting anomalous trajectories in decentralized crypto networks is fundamentally challenged by extreme label scarcity and the adaptive evasion strategies of illicit actors. While Graph Neural Networks (GNNs) effectively capture local structural patterns, they struggle to internalize multi-hop, logic-driven motifs such as fund dispersal and layering that characterize sophisticated money laundering, limiting their forensic accountability under regulations like the FATF Travel Rule. To address this limitation, we propose Relational Domain-Logic Integration (RDLI), a framework that embeds expert-derived heuristics as differentiable, logic-aware latent signals within representation learning. Unlike static rule-based approaches, RDLI enables the detection of complex transactional flows that evade standard message passing. To further account for market volatility, we incorporate a Retrieval-Grounded Context (RGC) module that conditions anomaly scoring on regulatory and macroeconomic context, mitigating false positives caused by benign regime shifts. Under extreme label scarcity (0.01%), RDLI outperforms state-of-the-art GNN baselines by 28.9% in F1 score. A small-scale expert user study further confirms that RDLI's path-level explanations significantly improve trustworthiness, perceived usefulness, and clarity compared to existing methods, highlighting the importance of integrating domain logic with contextual grounding for both accuracy and explainability.
+ oai:arXiv.org:2601.12839v1
+ cs.LG
+ q-fin.RM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Gyuyeon Na, Minjung Park, Soyoun Kim, Jungbin Shin, Sangmi Chai
+
+
+ Lessons Learned from Structural Design and Vibration Testing of 50-kg Microsatellites Deployed from the International Space Station
+ https://arxiv.org/abs/2601.12840
+ arXiv:2601.12840v1 Announce Type: new
+Abstract: Hokkaido University and Tohoku University have been developing and operating a constellation of 50-cm-class microsatellites for Earth observation. DIWATA-1, launched in 2016, was deployed into a circular orbit at an altitude of approximately 400 km from the International Space Station (ISS). For the subsequent satellite developed in 2021, the structural design and vibration test campaign were optimized to meet a strict one-year development schedule. This paper summarizes how the structural design of the previous satellite was reviewed and updated, and how the vibration test was successfully completed in a single trial to minimize schedule and technical risks. These lessons learned provide valuable insights, as there are only a limited number of reported cases of 50-kg-class microsatellites deployed from the ISS.
+ oai:arXiv.org:2601.12840v1
+ eess.SY
+ astro-ph.IM
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuji Sakamoto, Junichi Kurihara, Shinya Fujita, Yuji Sato, Toshinori Kuwahara
+
+
+ SCULPT: Constraint-Guided Pruned MCTS that Carves Efficient Paths for Mathematical Reasoning
+ https://arxiv.org/abs/2601.12842
+ arXiv:2601.12842v1 Announce Type: new
+Abstract: Automated agent workflows can enhance the problem-solving ability of large language models (LLMs), but common search strategies rely on stochastic exploration and often traverse implausible branches. This occurs because current pipelines sample candidate steps from generic prompts or learned policies with weak domain priors, yielding near-random walks over operators, units, and formats. To promote ordered exploration, this paper introduces SCULPT, a constraint-guided approach for Monte Carlo Tree Search (MCTS) that integrates domain-aware scoring into selection, expansion, simulation, and backpropagation. SCULPT scores and prunes actions using a combination of symbolic checks (dimensional consistency, type compatibility, magnitude sanity, depth control, and diversity) and structural pattern guidance, thereby steering the search toward plausible reasoning paths. Under matched LLM configurations, SCULPT yields stable improvements on multiple datasets; additional results with GPT-5.2 assess executor transferability and performance on frontier reasoning models. Overall, domain-aware constraints can improve accuracy while maintaining efficiency and reasoning stability.
+ oai:arXiv.org:2601.12842v1
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Qitong Fang (Jilin Jianzhu University), Haotian Li (Jilin Jianzhu University), Xu Wang (Jilin Jianzhu University)
+
+
+ Rapport du Projet de Recherche TRAIMA
+ https://arxiv.org/abs/2601.12844
+ arXiv:2601.12844v1 Announce Type: new
+Abstract: The TRAIMA project (TRaitement Automatique des Interactions Multimodales en Apprentissage), conducted between March 2019 and June 2020, investigates the potential of automatic processing of multimodal interactions in educational settings. The project addresses a central methodological challenge in educational and interactional research: the analysis of verbal, paraverbal, and non-verbal data is currently carried out manually, making it extremely time-consuming and difficult to scale. TRAIMA explores how machine learning approaches could contribute to the categorisation and classification of such interactions. The project focuses specifically on explanatory and collaborative sequences occurring in classroom interactions, particularly in French as a Foreign Language (FLE) and French as a First Language (FLM) contexts. These sequences are analysed as inherently multimodal phenomena, combining spoken language with prosody, gestures, posture, gaze, and spatial positioning. A key theoretical contribution of the project is the precise linguistic and interactional definition of explanatory discourse as a tripartite sequence (opening, explanatory core, closure), drawing on discourse analysis and interactional linguistics. A substantial part of the research is devoted to the methodological foundations of transcription, which constitute a critical bottleneck for any form of automation. The report provides a detailed state of the art of existing transcription conventions (ICOR, Mondada, GARS, VALIBEL, Ferré), highlighting their respective strengths and limitations when applied to multimodal classroom data. Through comparative analyses of manually transcribed sequences, the project demonstrates the inevitable variability and interpretative dimension of transcription practices, depending on theoretical positioning and analytical goals.
+Empirical work is based on several corpora, notably the INTER-EXPLIC corpus (approximately 30 hours of classroom interaction) and the EXPLIC-LEXIC corpus, which serve both as testing grounds for manual annotation and as reference datasets for future automation. Particular attention is paid to teacher gestures (kinesic and proxemic resources), prosodic features, and their functional role in meaning construction and learner comprehension. The project also highlights the strategic role of the TechnéLAB platform, which provides advanced multimodal data capture (multi-camera video, synchronized audio, eye-tracking, digital interaction traces) and constitutes both a research infrastructure and a test environment for the development of automated tools. In conclusion, TRAIMA does not aim to deliver a fully operational automated system, but rather to establish a rigorous methodological framework for the automatic processing of multimodal pedagogical interactions. The project identifies transcription conventions, annotation categories, and analytical units that are compatible with machine learning approaches, while emphasizing the need for theoretical explicitness and researcher reflexivity. TRAIMA thus lays the groundwork for future interdisciplinary research at the intersection of didactics, discourse analysis, multimodality, and artificial intelligence in education.
+ oai:arXiv.org:2601.12844v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Julie Rançon (UP, FoReLLIS, Poitiers), Jean-François Cerisier (Techné, Poitiers), Emilie Remond (Techné, Poitiers), Aurélien Nguyen (Techné, Poitiers), Andrew Peterson (Techné, Poitiers), Ladjel Bellatreche (ISAE-ENSMA, IDD, A&amp;S)
+
+
+ Automatic Generation of Formal Specification and Verification Annotations Using LLMs and Test Oracles
+ https://arxiv.org/abs/2601.12845
+ arXiv:2601.12845v1 Announce Type: new
+Abstract: Recent verification tools aim to make formal verification more accessible to software engineers by automating most of the verification process. However, annotating conventional programs with the formal specification and verification constructs (preconditions, postconditions, loop invariants, auxiliary predicates and functions and proof helpers) required to prove their correctness still demands significant manual effort and expertise. This paper investigates how LLMs can automatically generate such annotations for programs written in Dafny, a verification-aware programming language, starting from conventional code accompanied by natural language specifications (in comments) and test code. In experiments on 110 Dafny programs, a multimodel approach combining Claude Opus 4.5 and GPT-5.2 generated correct annotations for 98.2% of the programs within at most 8 repair iterations, using verifier feedback. A logistic regression analysis shows that proof-helper annotations contribute disproportionately to problem difficulty for current LLMs. Assertions in the test cases served as static oracles to automatically validate the generated pre/postconditions. We also compare generated and manual solutions and present an extension for Visual Studio Code to incorporate automatic generation into the IDE, with encouraging usability feedback.
+ oai:arXiv.org:2601.12845v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ João Pascoal Faria, Emanuel Trigo, Vinicius Honorato, Rui Abreu
+
+
+ The Cost of EFX: Generalized-Mean Welfare and Complexity Dichotomies with Few Surplus Items
+ https://arxiv.org/abs/2601.12849
+ arXiv:2601.12849v1 Announce Type: new
+Abstract: Envy-freeness up to any good (EFX) is a central fairness notion for allocating indivisible goods, yet its existence is unresolved in general. In the setting with few surplus items, where the number of goods exceeds the number of agents by a small constant (at most three), EFX allocations are guaranteed to exist, shifting the focus from existence to efficiency and computation. We study how EFX interacts with generalized-mean ($p$-mean) welfare, which subsumes commonly-studied utilitarian ($p=1$), Nash ($p=0$), and egalitarian ($p \rightarrow -\infty$) objectives. We establish sharp complexity dichotomies at $p=0$: for any fixed $p \in (0,1]$, both deciding whether EFX can attain the global $p$-mean optimum and computing an EFX allocation maximizing $p$-mean welfare are NP-hard, even with at most three surplus goods; in contrast, for any fixed $p \leq 0$, we give polynomial-time algorithms that optimize $p$-mean welfare within the space of EFX allocations and efficiently certify when EFX attains the global optimum. We further quantify the welfare loss of enforcing EFX via the price of fairness framework, showing that for $p > 0$, the loss can grow linearly with the number of agents, whereas for $p \leq 0$, it is bounded by a constant depending on the surplus (and for Nash welfare it vanishes asymptotically). Finally we show that requiring Pareto-optimality alongside EFX is NP-hard (and becomes $\Sigma_2^P$-complete for a stronger variant of EFX). Overall, our results delineate when EFX is computationally costly versus structurally aligned with welfare maximization in the setting with few surplus items.
+ oai:arXiv.org:2601.12849v1
+ cs.GT
+ cs.AI
+ cs.MA
+ econ.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Eugene Lim, Tzeh Yuan Neoh, Nicholas Teh
+
+
+ System Analysis and Pre-Flight Evaluation of Deployable Solar Panels for 3U CubeSat HOKUSHIN-1
+ https://arxiv.org/abs/2601.12851
+ arXiv:2601.12851v1 Announce Type: new
+Abstract: This paper describes the system design methodology derived from the development and evaluation tests of deployable solar panels to be mounted on a 3U CubeSat. The study mainly includes structural analysis, thermal analysis, and a review of vibration test results. Hokkaido University is developing the 3U CubeSat HOKUSHIN-1 in collaboration with Tohoku University and Muroran Institute of Technology. Deployable solar panels are a key technology for future planned lunar exploration missions, as they enable power-intensive communication and propulsion required for orbit control. The satellite also demonstrates a newly developed compact and efficient propulsion system. The satellite has dimensions of approximately 10x10x34 cm, a mass of 3.99 kg, and will be deployed into a circular orbit at an altitude of about 400 km with an orbital inclination of 51.6 degrees from the International Space Station.
+ oai:arXiv.org:2601.12851v1
+ eess.SY
+ astro-ph.IM
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuji Sakamoto, Masaki Aoi, Sho Suzuki, Takumi Haga, Shumpei Hosokawa, Yuma Abe, Yuya Tasaki, Tsuyoshi Totani, Sou Nakamura, Masaharu Uchiumi, Shinya Fujita
+
+
+ On Resilient and Efficient Linear Secure Aggregation in Hierarchical Federated Learning
+ https://arxiv.org/abs/2601.12853
+ arXiv:2601.12853v1 Announce Type: new
+Abstract: In this paper, we study the fundamental limits of hierarchical secure aggregation under unreliable communication. We consider a hierarchical network where each client connects to multiple relays, and both client-to-relay and relay-to-server links are intermittent. Under this setting, we characterize the minimum communication and randomness costs required to achieve robust secure aggregation. We then propose an optimal protocol that attains these minimum costs, and establish its optimality through a matching converse proof. In addition, we introduce an improved problem formulation that bridges the gap between existing information-theoretic secure aggregation protocols and practical real-world federated learning problems.
+ oai:arXiv.org:2601.12853v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Shudi Weng, Xiang Zhang, Yizhou Zhao, Giuseppe Caire, Ming Xiao, Mikael Skoglund
+
+
+ Mining Citywide Dengue Spread Patterns in Singapore Through Hotspot Dynamics from Open Web Data
+ https://arxiv.org/abs/2601.12856
+ arXiv:2601.12856v1 Announce Type: new
+Abstract: Dengue, a mosquito-borne disease, continues to pose a persistent public health challenge in urban areas, particularly in tropical regions such as Singapore. Effective and affordable control requires anticipating where transmission risks are likely to emerge so that interventions can be deployed proactively rather than reactively. This study introduces a novel framework that uncovers and exploits latent transmission links between urban regions, mined directly from publicly available dengue case data. Instead of treating cases as isolated reports, we model how hotspot formation in one area is influenced by epidemic dynamics in neighboring regions. While mosquito movement is highly localized, long-distance transmission is often driven by human mobility, and in our case study, the learned network aligns closely with commuting flows, providing an interpretable explanation for citywide spread. These hidden links are optimized through gradient descent and used not only to forecast hotspot status but also to verify the consistency of spreading patterns, by examining the stability of the inferred network across consecutive weeks. Case studies on Singapore during 2013-2018 and 2020 show that four weeks of hotspot history are sufficient to achieve an average F-score of 0.79. Importantly, the learned transmission links align with commuting flows, highlighting the interpretable interplay between hidden epidemic spread and human mobility. By shifting from simply reporting dengue cases to mining and validating hidden spreading dynamics, this work transforms open web-based case data into a predictive and explanatory resource. The proposed framework advances epidemic modeling while providing a scalable, low-cost tool for public health planning, early intervention, and urban resilience.
+ oai:arXiv.org:2601.12856v1
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ WWW 2026, i.e., The Web Conference 2026
+ Liping Huang, Gaoxi Xiao, Stefan Ma, Hechang Chen, Shisong Tang, Flora Salim
+
+
+ Report on Earth Observation Missions and Ground Station Management using On-Demand Satellite Operation System
+ https://arxiv.org/abs/2601.12857
+ arXiv:2601.12857v1 Announce Type: new
+Abstract: Since the launch of its first satellite in 2009, Tohoku University has continuously developed and operated Earth observation satellites and engineering demonstration satellites in the 50cm-class and CubeSat-class (up to 3U). The 50cm-class satellite launched into operation in 2021 enabled efficient operations through cloud-based management functions for both the satellite and ground stations, including automatic command generation. By 2022, up to eight operational satellites were simultaneously managed on a daily basis using three ground stations (Sendai, Hakodate, and Sweden). This paper presents the operational achievements to date and introduces the system that supports efficient satellite operations.
+ oai:arXiv.org:2601.12857v1
+ eess.SY
+ astro-ph.IM
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuji Sakamoto
+
+
+ Generating Cyclic Conformers with Flow Matching in Cremer-Pople Coordinates
+ https://arxiv.org/abs/2601.12859
+ arXiv:2601.12859v1 Announce Type: new
+Abstract: Cyclic molecules are ubiquitous across applications in chemistry and biology. Their restricted conformational flexibility provides structural pre-organization that is key to their function in drug discovery and catalysis. However, reliably sampling the conformer ensembles of ring systems remains challenging. Here, we introduce PuckerFlow, a generative machine learning model that performs flow matching on the Cremer-Pople space, a low-dimensional internal coordinate system capturing the relevant degrees of freedom of rings. Our approach enables generation of valid closed rings by design and demonstrates strong performance in generating conformers that are both diverse and precise. We show that PuckerFlow outperforms other conformer generation methods on nearly all quantitative metrics and illustrate the potential of PuckerFlow for ring systems relevant to chemical applications, particularly in catalysis and drug discovery. This work enables efficient and reliable conformer generation of cyclic structures, paving the way towards modeling structure-property relationships and the property-guided generation of rings across a wide range of applications in chemistry and biology.
+ oai:arXiv.org:2601.12859v1
+ cs.LG
+ physics.chem-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Luca Schaufelberger, Aline Hartgers, Kjell Jorner
+
+
+ FGTBT: Frequency-Guided Task-Balancing Transformer for Unified Facial Landmark Detection
+ https://arxiv.org/abs/2601.12863
+ arXiv:2601.12863v1 Announce Type: new
+Abstract: Recently, deep learning based facial landmark detection (FLD) methods have achieved considerable success. However, in challenging scenarios such as large pose variations, illumination changes, and facial expression variations, they still struggle to accurately capture the geometric structure of the face, resulting in performance degradation. Moreover, the limited size and diversity of existing FLD datasets hinder robust model training, leading to reduced detection accuracy. To address these challenges, we propose a Frequency-Guided Task-Balancing Transformer (FGTBT), which enhances facial structure perception through frequency-domain modeling and multi-dataset unified training. Specifically, we propose a novel Fine-Grained Multi-Task Balancing loss (FMB-loss), which moves beyond coarse task-level balancing by assigning weights to individual landmarks based on their occurrence across datasets. This enables more effective unified training and mitigates the issue of inconsistent gradient magnitudes. Additionally, a Frequency-Guided Structure-Aware (FGSA) model is designed to utilize frequency-guided structure injection and regularization to help learn facial structure constraints. Extensive experimental results on popular benchmark datasets demonstrate that the integration of the proposed FMB-loss and FGSA model into our FGTBT framework achieves performance comparable to state-of-the-art methods. The code is available at https://github.com/Xi0ngxinyu/FGTBT.
+ oai:arXiv.org:2601.12863v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jun Wan, Xinyu Xiong, Ning Chen, Zhihui Lai, Jie Zhou, Wenwen Min
+
+
+ Proxy Robustness in Vision Language Models is Effortlessly Transferable
+ https://arxiv.org/abs/2601.12865
+ arXiv:2601.12865v1 Announce Type: new
+Abstract: As a pivotal technique for improving the defense of deep models, adversarial robustness transfer via distillation has demonstrated remarkable success in conventional image classification tasks. However, this paradigm encounters critical challenges when applied to vision-language models (VLM) (e.g., CLIP): constructing adversarially robust teacher for large-scale multi-modal models demands prohibitively high computational resources. We bridge this gap by revealing an interesting phenomenon: vanilla CLIP (without adversarial training) exhibits intrinsic defensive capabilities against adversarial examples generated by another CLIP with different architectures. We formally define this as proxy adversarial robustness, and naturally propose a Heterogeneous Proxy Transfer (HPT) framework that establishes cross-architectural robustness distillation channels between CLIP variants, effortlessly enabling the VLM robustness transfer from proxy to target models. Yet, such proxy transfer paradigm easily induces severe overfitting, leading to a sharp degradation in zero-shot natural generalization. To resolve that, we design Generalization-Pivot Decoupling (GPD) by leveraging the difference in learning rate scheduling. This decouples the proxy transfer process into a generalization-anchored warm-up that maintains generalization and a generalization-pulled HPT that promotes adversarial robustness, to achieve an equilibrium between natural generalization and adversarial robustness. Extensive experiments on 15 zero-shot datasets demonstrate the effectiveness of our HPT-GPD method. The code is available at the website of github.com/fxw13/HPT-GPD.
+ oai:arXiv.org:2601.12865v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaowei Fu, Fuxiang Huang, Lei Zhang
+
+
+ PDFInspect: A Unified Feature Extraction Framework for Malicious Document Detection
+ https://arxiv.org/abs/2601.12866
+ arXiv:2601.12866v1 Announce Type: new
+Abstract: The increasing prevalence of malicious Portable Document Format (PDF) files necessitates robust and comprehensive feature extraction techniques for effective detection and analysis. This work presents a unified framework that integrates graph-based, structural, and metadata-driven analysis to generate a rich feature representation for each PDF document. The system extracts text from PDF pages and constructs undirected graphs based on pairwise word relationships, enabling the computation of graph-theoretic features such as node count, edge density, and clustering coefficient. Simultaneously, the framework parses embedded metadata to quantify character distributions, entropy patterns, and inconsistencies across fields such as author, title, and producer. Temporal features are derived from creation and modification timestamps to capture behavioral signatures, while structural elements, including object streams, fonts, and embedded images, are quantified to reflect document complexity. Boolean flags for potentially malicious PDF constructs (e.g., JavaScript, launch actions) are also extracted. Together, these features form a high-dimensional vector representation (170 dimensions) that is well-suited for downstream tasks such as malware classification, anomaly detection, and forensic analysis. The proposed approach is scalable, extensible, and designed to support real-world PDF threat intelligence workflows.
+ oai:arXiv.org:2601.12866v1
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sharmila S P
+
+
+ Race, Ethnicity and Their Implication on Bias in Large Language Models
+ https://arxiv.org/abs/2601.12868
+ arXiv:2601.12868v1 Announce Type: new
+Abstract: Large language models (LLMs) increasingly operate in high-stakes settings including healthcare and medicine, where demographic attributes such as race and ethnicity may be explicitly stated or implicitly inferred from text. However, existing studies primarily document outcome-level disparities, offering limited insight into internal mechanisms underlying these effects. We present a mechanistic study of how race and ethnicity are represented and operationalized within LLMs. Using two publicly available datasets spanning toxicity-related generation and clinical narrative understanding tasks, we analyze three open-source models with a reproducible interpretability pipeline combining probing, neuron-level attribution, and targeted intervention. We find that demographic information is distributed across internal units with substantial cross-model variation. Although some units encode sensitive or stereotype-related associations from pretraining, identical demographic cues can induce qualitatively different behaviors. Interventions suppressing such neurons reduce bias but leave substantial residual effects, suggesting behavioral rather than representational change and motivating more systematic mitigation.
+ oai:arXiv.org:2601.12868v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shiyue Hu, Ruizhe Li, Yanjun Gao
+
+
+ Text2Structure3D: Graph-Based Generative Modeling of Equilibrium Structures with Diffusion Transformers
+ https://arxiv.org/abs/2601.12870
+ arXiv:2601.12870v1 Announce Type: new
+Abstract: This paper presents Text2Structure3D, a graph-based Machine Learning (ML) model that generates equilibrium structures from natural language prompts. Text2Structure3D is designed to support new intuitive ways of design exploration and iteration in the conceptual structural design process. The approach combines latent diffusion with a Variational Graph Auto-Encoder (VGAE) and graph transformers to generate structural graphs that are close to an equilibrium state. Text2Structure3D integrates a residual force optimization post-processing step that ensures generated structures fully satisfy static equilibrium. The model was trained and validated using a cross-typological dataset of funicular form-found and statically determinate bridge structures, paired with text descriptions that capture the formal and structural features of each bridge. Results demonstrate that Text2Structure3D generates equilibrium structures with strong adherence to text-based specifications and greatly improves generalization capabilities compared to parametric model-based approaches. Text2Structure3D represents an early step toward a general-purpose foundation model for structural design, enabling the integration of generative AI into conceptual design workflows.
+ oai:arXiv.org:2601.12870v1
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Lazlo Bleker, Zifeng Guo, Kaleb Smith, Kam-Ming Mark Tam, Karla Salda\~na Ochoa, Pierluigi D'Acunto
+
+
+ Measuring Love Toward AI: Development and Validation of the Love Attitudes Scale toward Artificial Intelligence (LAS-AI)
+ https://arxiv.org/abs/2601.12871
+ arXiv:2601.12871v1 Announce Type: new
+Abstract: Artificial intelligences (AIs) are increasingly capable of emotionally engaging with humans to the point of forming intimate relationships. Yet, current studies on romantic love toward AI lack statistically validated instruments to measure it, hindering empirical research. To address this gap, we reinterpreted Lee's love styles theory in the AI context and developed the Love Attitudes Scale toward AI (LAS-AI). The resulting 24-item, six-factor scale was validated across four phases using three independent samples (N = 899), demonstrating strong psychometric properties. The findings further revealed that people primarily seek practical, passionate, and companionship-based relationships with AI (i.e., Pragma, Eros, and Storge), showing little interest in a playful or noncommittal approach (i.e., Ludus). We also provided an initial exploration of the similarities and differences between romantic love with humans and AI. The LAS-AI offers a robust tool for future research on human-AI romantic relationships, with far-reaching implications.
+ oai:arXiv.org:2601.12871v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Runze Li, Lanbing Li, Yuan Zheng, Chuanxiao Li, Xianglong Zeng
+
+
+ Quantum Interactive Oracle Proofs
+ https://arxiv.org/abs/2601.12874
+ arXiv:2601.12874v1 Announce Type: new
+Abstract: We initiate the study of quantum Interactive Oracle Proofs (qIOPs), a generalization of both quantum Probabilistically Checkable Proofs and quantum Interactive Proofs, as well as a quantum analogue of classical Interactive Oracle Proofs.
+ In the model of quantum Interactive Oracle Proofs, we allow multiple rounds of quantum interaction between the quantum prover and the quantum verifier, but the verifier has limited access to quantum resources. This includes both queries to the prover's messages and the complexity of the quantum circuits applied by the verifier. The question of whether QMA admits a quantum interactive oracle proof system is a relaxation of the quantum PCP Conjecture.
+ We show the following two main constructions of qIOPs, both of which are unconditional:
+ - We construct a qIOP for QMA in which the verifier shares polynomially many EPR pairs with the prover at the start of the protocol and reads only a constant number of qubits from the prover's messages.
+ - We provide a stronger construction of qIOP for QMA in which the verifier not only reads a constant number of qubits but also operates on a constant number of qubits overall, including those in their private registers. However, in this stronger setting, the communication complexity becomes exponential. This leaves open the question of whether strong qIOPs for QMA, with polynomial communication complexity, exist.
+ As a key component of our construction, we introduce a novel single prover many-qubits test, which may be of independent interest.
+ oai:arXiv.org:2601.12874v1
+ cs.CC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Baocheng Sun, Thomas Vidick
+
+
+ SWORD: A Secure LoW-Latency Offline-First Authentication and Data Sharing Scheme for Resource Constrained Distributed Networks
+ https://arxiv.org/abs/2601.12875
+ arXiv:2601.12875v1 Announce Type: new
+Abstract: While many resource-constrained networks, such as Internet of Things (IoT) and Internet of Vehicles (IoV), are inherently distributed, the majority still rely on central servers for fast authentication and data sharing. Blockchain-based solutions offer decentralized alternatives but often struggle to meet the stringent latency requirements of real-time applications. Even with the rollout of 5G, network latency between servers and peers remains a significant challenge. To address this, we introduce SWORD, a novel offline-first authentication and data-sharing scheme designed specifically for resource-constrained networks. SWORD utilizes a proximity-based clustering approach to enable offline authentication and data sharing, ensuring low-latency, secure operations even in intermittently connected scenarios. Our experimental results show that SWORD outperforms traditional blockchain-based solutions while offering similar resource efficiency and authentication latency to central-server-based solutions. Additionally, we provide a comprehensive security analysis, demonstrating that SWORD is resilient against spoofing, impersonation, replay, and man-in-the-middle attacks.
+ oai:arXiv.org:2601.12875v1
+ cs.CR
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Faisal Haque Bappy, Tahrim Hossain, Raiful Hasan, Kamrul Hasan, Mohamed Younis, Tariqul Islam
+
+
+ Exploring Talking Head Models With Adjacent Frame Prior for Speech-Preserving Facial Expression Manipulation
+ https://arxiv.org/abs/2601.12876
+ arXiv:2601.12876v1 Announce Type: new
+Abstract: Speech-Preserving Facial Expression Manipulation (SPFEM) is an innovative technique aimed at altering facial expressions in images and videos while retaining the original mouth movements. Despite advancements, SPFEM still struggles with accurate lip synchronization due to the complex interplay between facial expressions and mouth shapes. Capitalizing on the advanced capabilities of audio-driven talking head generation (AD-THG) models in synthesizing precise lip movements, our research introduces a novel integration of these models with SPFEM. We present a new framework, Talking Head Facial Expression Manipulation (THFEM), which utilizes AD-THG models to generate frames with accurately synchronized lip movements from audio inputs and SPFEM-altered images. However, increasing the number of frames generated by AD-THG models tends to compromise the realism and expression fidelity of the images. To counter this, we develop an adjacent frame learning strategy that finetunes AD-THG models to predict sequences of consecutive frames. This strategy enables the models to incorporate information from neighboring frames, significantly improving image quality during testing. Our extensive experimental evaluations demonstrate that this framework effectively preserves mouth shapes during expression manipulations, highlighting the substantial benefits of integrating AD-THG with SPFEM.
+ oai:arXiv.org:2601.12876v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhenxuan Lu, Zhihua Xu, Zhijing Yang, Feng Gao, Yongyi Lu, Keze Wang, Tianshui Chen
+
+
+ A hierarchical splitting approach for N-split differential equations
+ https://arxiv.org/abs/2601.12878
+ arXiv:2601.12878v1 Announce Type: new
+Abstract: We propose a hierarchical splitting approach to differential equations that provides a design principle for constructing splitting methods for $N$-split systems by iteratively applying splitting methods for two-split systems. We analyze the convergence order, derive explicit formulas for the leading-order error terms, and investigate self-adjointness. Moreover, we discuss compositions of hierarchical splitting methods in detail. We further augment the hierarchical splitting approach with multiple time-stepping techniques, turning the class into a promising framework at the intersection of geometric numerical integration and multirate integration. In this context, we characterize the computational order of a multirate integrator and establish conditions on the multirate factors that guarantee an increased convergence rate in practical computations up to a certain step size. Finally, we design several hierarchical splitting methods and perform numerical simulations for rigid body equations and a separable Hamiltonian system with multirate potential, confirming the theoretical findings and showcasing the computational efficiency of hierarchical splitting methods.
+ oai:arXiv.org:2601.12878v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kevin Sch\"afers, Michael G\"unther
+
+
+ Hierarchical Sparse Circuit Extraction from Billion-Parameter Language Models through Scalable Attribution Graph Decomposition
+ https://arxiv.org/abs/2601.12879
+ arXiv:2601.12879v1 Announce Type: new
+Abstract: Mechanistic interpretability seeks to reverse-engineer neural network computations into human-understandable algorithms, yet extracting sparse computational circuits from billion-parameter language models remains challenging due to exponential search complexity and pervasive polysemanticity. The proposed Hierarchical Attribution Graph Decomposition (HAGD) framework reduces circuit discovery complexity from O(2^n) exhaustive enumeration to O(n^2 log n) through multi-resolution abstraction hierarchies and differentiable circuit search. The methodology integrates cross-layer transcoders for monosemantic feature extraction, graph neural network meta-learning for topology prediction, and causal intervention protocols for validation. Empirical evaluation spans GPT-2 variants, Llama-7B through Llama-70B, and Pythia suite models across algorithmic tasks and natural language benchmarks. On modular arithmetic tasks, the framework achieves up to 91% behavioral preservation ($\pm$2.3\% across runs) while maintaining interpretable subgraph sizes. Cross-architecture transfer experiments suggest that discovered circuits exhibit moderate structural similarity (averaging 67%) across model families, indicating potential shared computational patterns. These results provide preliminary foundations for interpretability at larger model scales while identifying significant limitations in current attribution methodologies that require future advances.
+ oai:arXiv.org:2601.12879v1
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohammed Mudassir Uddin, Shahnawaz Alam, Mohammed Kaif Pasha
+
+
+ YOLO26: An Analysis of NMS-Free End to End Framework for Real-Time Object Detection
+ https://arxiv.org/abs/2601.12882
+ arXiv:2601.12882v1 Announce Type: new
+Abstract: The "You Only Look Once" (YOLO) framework has long served as the benchmark for real-time object detection, yet traditional iterations (YOLOv1 through YOLO11) remain constrained by the latency and hyperparameter sensitivity of Non-Maximum Suppression (NMS) post-processing. This paper presents a comprehensive analysis of YOLO26, an architecture that fundamentally redefines this paradigm by eliminating NMS in favor of a native end-to-end learning strategy. This study examines the critical innovations that enable this transition, specifically the introduction of the MuSGD optimizer for stabilizing lightweight backbones, STAL for small-target-aware assignment, and ProgLoss for dynamic supervision. Through a systematic review of official performance benchmarks, the results demonstrate that YOLO26 establishes a new Pareto front, outperforming a comprehensive suite of predecessors and state-of-the-art competitors (including RTMDet and DAMO-YOLO) in both inference speed and detection accuracy. The analysis confirms that by decoupling representation learning from heuristic post-processing, YOLO26 successfully resolves the historical trade-off between latency and precision, signaling the next evolutionary step in edge-based computer vision.
+ oai:arXiv.org:2601.12882v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sudip Chakrabarty
+
+
+ Does Motion Intensity Impair Cognition in HCI? The Critical Role of Physical Motion-Visual Target Directional Congruency
+ https://arxiv.org/abs/2601.12884
+ arXiv:2601.12884v1 Announce Type: new
+Abstract: Human-computer interaction (HCI) increasingly occurs in motion-rich environments. The ability to accurately and rapidly respond to directional visual cues is critical in these contexts. How whole-body motion and individual differences affect human perception and reaction to these directional cues is therefore a key, yet an underexplored question for HCI. This study used a 6-DOF motion platform to measure task performance on a visual direction judgment task. We analyzed performance by decomposing the complex motion into two distinct components: a task-irrelevant lateral interference component and a task-aligned directional congruency component. Results indicate that increased motion intensity lengthened reaction times. This effect was primarily driven by the lateral interference component, and this detrimental impact was disproportionately amplified for individuals with high motion sickness susceptibility. Conversely, directional congruency, where motion direction matched the visual cue, improved performance for all participants. These findings suggest that motion's impact on cognition is not monolithic, and that system design for mobile HCI can be informed by strategies that actively shape motion, such as minimizing lateral interference while maximizing directional congruency.
+ oai:arXiv.org:2601.12884v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jianshu Wang, Siyu Liu, Chao Zhou, Yawen Zheng, Yuan Yue, Tangjun Qu, Yang Li, Yutao Xie, Jin Huang, Yulong Bian, Feng Tian
+
+
+ From Vertices to Convex Hulls: Certifying Set-Wise Compatibility for CBF Constraints
+ https://arxiv.org/abs/2601.12885
+ arXiv:2601.12885v1 Announce Type: new
+Abstract: This paper develops certificates that propagate compatibility of multiple control barrier function (CBF) constraints from sampled vertices to their convex hull. Under mild concavity and affinity assumptions, we present three sufficient feasibility conditions under which feasible inputs over the convex hull can be obtained per coordinate, with a common input, or via convex blending. We also describe the associated computational methods, based on interval intersections or an offline linear program (LP). Beyond certifying compatibility, we give conditions under which the quadratic-program (QP) safety filter is affine in the state. This enables explicit implementations via convex combinations of vertex-feasible inputs. Case studies illustrate the results.
+ oai:arXiv.org:2601.12885v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/LCSYS.2025.3648436
+ Shima Sadat Mousavi, Xiao Tan, Aaron D. Ames
+
+
+ Communication Methods in Multi-Agent Reinforcement Learning
+ https://arxiv.org/abs/2601.12886
+ arXiv:2601.12886v1 Announce Type: new
+Abstract: Multi-agent reinforcement learning is a promising research area that extends established reinforcement learning approaches to problems formulated as multi-agent systems. Recently, a multitude of communication methods have been introduced to this field to address problems such as partially observable environments, non-stationarity, and exponentially growing action spaces. Communication further enables efficient cooperation among all agents interacting in an environment. This work aims at providing an overview of communication techniques in multi-agent reinforcement learning. By an in-depth analysis of 29 publications on this topic, the strengths and weaknesses of explicit, implicit, attention-based, graph-based, and hierarchical/role-based communication are evaluated. The results of this comparison show that there is no general, optimal communication framework for every problem. On the contrary, the choice of communication depends heavily on the problem at hand. The comparison also highlights the importance of communication methods with low computational overhead to enable scalability to environments where many agents interact. Finally, the paper discusses current research gaps, emphasizing the need for standardized benchmarking of system-level metrics and improved robustness under realistic communication conditions to enhance the real-world applicability of these approaches.
+ oai:arXiv.org:2601.12886v1
+ cs.MA
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Christoph Wittner
+
+
+ Simultaneous Detection of LSD and FMD in Cattle Using Ensemble Deep Learning
+ https://arxiv.org/abs/2601.12889
+ arXiv:2601.12889v1 Announce Type: new
+Abstract: Lumpy Skin Disease (LSD) and Foot-and-Mouth Disease (FMD) are highly contagious viral diseases affecting cattle, causing significant economic losses and welfare challenges. Their visual diagnosis is complicated by significant symptom overlap with each other and with benign conditions like insect bites or chemical burns, hindering timely control measures. Leveraging a comprehensive dataset of 10,516 expert-annotated images from 18 farms across India, Brazil, and the USA, this study presents a novel Ensemble Deep Learning framework integrating VGG16, ResNet50, and InceptionV3 with optimized weighted averaging for simultaneous LSD and FMD detection. The model achieves a state-of-the-art accuracy of 98.2\%, with macro-averaged precision of 98.2\%, recall of 98.1\%, F1-score of 98.1\%, and an AUC-ROC of 99.5\%. This approach uniquely addresses the critical challenge of symptom overlap in multi-disease detection, enabling early, precise, and automated diagnosis. This tool has the potential to enhance disease management, support global agricultural sustainability, and is designed for future deployment in resource-limited settings.
+ oai:arXiv.org:2601.12889v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nazibul Basar Ayon, Abdul Hasib, Md. Faishal Ahmed, Md. Sadiqur Rahman, Kamrul Islam, T. M. Mehrab Hasan, A. S. M. Ahsanul Sarkar Akib
+
+
+ Efficient Code Analysis via Graph-Guided Large Language Models
+ https://arxiv.org/abs/2601.12890
+ arXiv:2601.12890v1 Announce Type: new
+Abstract: Malicious behavior is often hidden in small, easily overlooked code fragments, especially within large and complex codebases. The cross-file dependencies of these fragments make it difficult for even powerful large language models (LLMs) to detect them reliably. We propose a graph-centric attention acquisition pipeline that enhances LLMs' ability to localize malicious behavior. The approach parses a project into a code graph, uses an LLM to encode nodes with semantic and structural signals, and trains a Graph Neural Network (GNN) under sparse supervision. The GNN performs an initial detection, and through backtracking of its predictions, identifies key code sections that are most likely to contain malicious behavior. These influential regions are then used to guide the LLM's attention for in-depth analysis. This strategy significantly reduces interference from irrelevant context while maintaining low annotation costs. Extensive experiments show that the method consistently outperforms existing methods on multiple public and self-built datasets, highlighting its potential for practical deployment in software security scenarios.
+ oai:arXiv.org:2601.12890v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hang Gao, Tao Peng, Baoquan Cui, Hong Huang, Fengge Wu, Junsuo Zhao, Jian Zhang
+
+
+ AdaNODEs: Test Time Adaptation for Time Series Forecasting Using Neural ODEs
+ https://arxiv.org/abs/2601.12893
+ arXiv:2601.12893v1 Announce Type: new
+Abstract: Test time adaptation (TTA) has emerged as a promising solution to adapt pre-trained models to new, unseen data distributions using unlabeled target domain data. However, most TTA methods are designed for independent data, often overlooking time series data and rarely addressing forecasting tasks. This paper presents AdaNODEs, an innovative source-free TTA method tailored explicitly for time series forecasting. By leveraging Neural Ordinary Differential Equations (NODEs), we propose a novel adaptation framework that accommodates the unique characteristics of distribution shifts in time series data. Moreover, we innovatively propose a new loss function to tackle TTA for forecasting tasks. AdaNODEs only requires updating limited model parameters, showing effectiveness in capturing temporal dependencies while avoiding significant memory usage. Extensive experiments with one- and high-dimensional data demonstrate that AdaNODEs offers relative improvements of 5.88\% and 28.4\% over the SOTA baselines, especially demonstrating robustness across higher-severity distribution shifts.
+ oai:arXiv.org:2601.12893v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ting Dang, Soumyajit Chatterjee, Hong Jia, Yu Wu, Flora Salim, Fahim Kawsar
+
+
+ Sparse ActionGen: Accelerating Diffusion Policy with Real-time Pruning
+ https://arxiv.org/abs/2601.12894
+ arXiv:2601.12894v1 Announce Type: new
+Abstract: Diffusion Policy has dominated action generation due to its strong capabilities for modeling multi-modal action distributions, but its multi-step denoising processes make it impractical for real-time visuomotor control. Existing caching-based acceleration methods typically rely on $\textit{static}$ schedules that fail to adapt to the $\textit{dynamics}$ of robot-environment interactions, thereby leading to suboptimal performance. In this paper, we propose $\underline{\textbf{S}}$parse $\underline{\textbf{A}}$ction$\underline{\textbf{G}}$en ($\textbf{SAG}$) for extremely sparse action generation. To accommodate the iterative interactions, SAG customizes a rollout-adaptive prune-then-reuse mechanism that first identifies prunable computations globally and then reuses cached activations to substitute them during action diffusion. To capture the rollout dynamics, SAG parameterizes an observation-conditioned diffusion pruner for environment-aware adaptation and instantiates it with a highly parameter- and inference-efficient design for real-time prediction. Furthermore, SAG introduces a one-for-all reusing strategy that reuses activations across both timesteps and blocks in a zig-zag manner, minimizing the global redundancy. Extensive experiments on multiple robotic benchmarks demonstrate that SAG achieves up to 4$\times$ generation speedup without sacrificing performance. Project Page: https://sparse-actiongen.github.io/.
+ oai:arXiv.org:2601.12894v1
+ cs.RO
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Kangye Ji, Yuan Meng, Zhou Jianbo, Ye Li, Hanyun Cui, Zhi Wang
+
+
+ TwoHead-SwinFPN: A Unified DL Architecture for Synthetic Manipulation, Detection and Localization in Identity Documents
+ https://arxiv.org/abs/2601.12895
+ arXiv:2601.12895v1 Announce Type: new
+Abstract: The proliferation of sophisticated generative AI models has significantly escalated the threat of synthetic manipulations in identity documents, particularly through face swapping and text inpainting attacks. This paper presents TwoHead-SwinFPN, a unified deep learning architecture that simultaneously performs binary classification and precise localization of manipulated regions in ID documents. Our approach integrates a Swin Transformer backbone with Feature Pyramid Network (FPN) and UNet-style decoder, enhanced with Convolutional Block Attention Module (CBAM) for improved feature representation. The model employs a dual-head architecture for joint optimization of detection and segmentation tasks, utilizing uncertainty-weighted multi-task learning. Extensive experiments on the FantasyIDiap dataset demonstrate superior performance with 84.31\% accuracy, 90.78\% AUC for classification, and 57.24\% mean Dice score for localization. The proposed method achieves an F1-score of 88.61\% for binary classification while maintaining computational efficiency suitable for real-world deployment through FastAPI implementation. Our comprehensive evaluation includes ablation studies, cross-device generalization analysis, and detailed performance assessment across 10 languages and 3 acquisition devices.
+ oai:arXiv.org:2601.12895v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chan Naseeb, Adeel Ashraf Cheema, Hassan Sami, Tayyab Afzal, Muhammad Omair, Usman Habib
+
+
+ Supervised Learning for the (s,S) Inventory Model with General Interarrival Demands and General Lead Times
+ https://arxiv.org/abs/2601.12900
+ arXiv:2601.12900v1 Announce Type: new
+Abstract: The continuous-review (s,S) inventory model is a cornerstone of stochastic inventory theory, yet its analysis becomes analytically intractable when dealing with non-Markovian systems. In such systems, evaluating long-run performance measures typically relies on costly simulation.
+ This paper proposes a supervised learning framework via a neural network model for approximating stationary performance measures of (s,S) inventory systems with general distributions for the interarrival time between demands and lead times under lost sales. Simulations are first used to generate training labels, after which the neural network is trained. After training, the neural network provides almost instantaneous predictions of various metrics of the system, such as the stationary distribution of inventory levels, the expected cycle time, and the probability of lost sales. We find that using a small number of low-order moments of the distributions as input is sufficient to train the neural networks and to accurately capture the steady-state distribution. Extensive numerical experiments demonstrate high accuracy over a wide range of system parameters. As such, it effectively replaces repeated and costly simulation runs. Our framework is easily extendable to other inventory models, offering an efficient and fast alternative for analyzing complex stochastic systems.
+ oai:arXiv.org:2601.12900v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Eliran Sherzer, Yonit Barron
+
+
+ PlannerRFT: Reinforcing Diffusion Planners through Closed-Loop and Sample-Efficient Fine-Tuning
+ https://arxiv.org/abs/2601.12901
+ arXiv:2601.12901v1 Announce Type: new
+Abstract: Diffusion-based planners have emerged as a promising approach for human-like trajectory generation in autonomous driving. Recent works incorporate reinforcement fine-tuning to enhance the robustness of diffusion planners through reward-oriented optimization in a generation-evaluation loop. However, they struggle to generate multi-modal, scenario-adaptive trajectories, hindering the exploitation efficiency of informative rewards during fine-tuning. To resolve this, we propose PlannerRFT, a sample-efficient reinforcement fine-tuning framework for diffusion-based planners. PlannerRFT adopts a dual-branch optimization that simultaneously refines the trajectory distribution and adaptively guides the denoising process toward more promising exploration, without altering the original inference pipeline. To support parallel learning at scale, we develop nuMax, an optimized simulator that achieves 10 times faster rollout compared to native nuPlan. Extensive experiments show that PlannerRFT yields state-of-the-art performance with distinct behaviors emerging during the learning process.
+ oai:arXiv.org:2601.12901v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Hongchen Li, Tianyu Li, Jiazhi Yang, Haochen Tian, Caojun Wang, Lei Shi, Mingyang Shang, Zengrong Lin, Gaoqiang Wu, Zhihui Hao, Xianpeng Lang, Jia Hu, Hongyang Li
+
+
+ Audit du syst{\`e}me d'information et du mod{\`e}le de gouvernance de la Biblioth{\`e}que Num{\'e}rique de l'Espace universitaire Francophone (BNEUF) du projet Initiative pour le D{\'e}veloppement du Num{\'e}rique dans l'Espace Universitaire Francophone (IDNEUF)
+ https://arxiv.org/abs/2601.12902
+ arXiv:2601.12902v1 Announce Type: new
+Abstract: This document provides an assessment of the overall structure of the BNEUF system and how it operates within the framework of the Initiative for Digital Development in French-speaking Universities (IDNEUF). This report aims to support the AUF's new strategy for 2021-2025, with its new structural and governance foundations for the implementation of the Francophonie scientifique project. It was therefore decided to reorganize existing and future digital resources and services with a view to incorporating them into the future global collaborative platform for integrated services. This report provides an external assessment proposing new forms of organization and use of the BNEUF system. The aim is to provide the AUF project team with new avenues for optimized management of the compiled digital resources and to synergize them with the related modules of the Atlas of Expertise and the Francophone Social Network.
+ oai:arXiv.org:2601.12902v1
+ cs.DL
+ cs.HC
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mokhtar Ben Henda (MICA)
+
+
+ Deep Temporal Graph Clustering: A Comprehensive Benchmark and Datasets
+ https://arxiv.org/abs/2601.12903
+ arXiv:2601.12903v1 Announce Type: new
+Abstract: Temporal Graph Clustering (TGC) is a new task that has received little attention, focusing on node clustering in temporal graphs. Compared with existing static graph clustering, it can find the balance between time requirement and space requirement (Time-Space Balance) through the interaction sequence-based batch-processing pattern. However, there are two major challenges that hinder the development of TGC, i.e., inapplicable clustering techniques and inapplicable datasets. To address these challenges, we propose a comprehensive benchmark, called BenchTGC. Specifically, we design a BenchTGC Framework to illustrate the paradigm of temporal graph clustering and improve existing clustering techniques to fit temporal graphs. In addition, we also discuss problems with public temporal graph datasets and develop multiple datasets suitable for the TGC task, called BenchTGC Datasets. Through extensive experiments, we not only verify the advantages of BenchTGC, but also demonstrate the necessity and importance of the TGC task. We wish to point out that the dynamically changing and complex scenarios in the real world are the foundation of temporal graph clustering. The code and data are available at: https://github.com/MGitHubL/BenchTGC.
+ oai:arXiv.org:2601.12903v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/TPAMI.2025.3596609
+ Meng Liu, Ke Liang, Siwei Wang, Xingchen Hu, Sihang Zhou, Xinwang Liu
+
+
+ From Prefix Cache to Fusion RAG Cache: Accelerating LLM Inference in Retrieval-Augmented Generation
+ https://arxiv.org/abs/2601.12904
+ arXiv:2601.12904v1 Announce Type: new
+Abstract: Retrieval-Augmented Generation enhances Large Language Models by integrating external knowledge, which reduces hallucinations but increases prompt length. This increase leads to higher computational costs and longer Time to First Token (TTFT). To mitigate this issue, existing solutions aim to reuse the preprocessed KV cache of each retrieved chunk to accelerate RAG. However, the lack of cross-chunk contextual information leads to a significant drop in generation quality, leaving the potential benefits of KV cache reuse largely unfulfilled. The challenge lies in how to reuse the precomputed KV cache of chunks while preserving generation quality. We propose FusionRAG, a novel inference framework that optimizes both the preprocessing and reprocessing stages of RAG. In the offline preprocessing stage, we embed information from other related text chunks into each chunk, while in the online reprocessing stage, we recompute the KV cache for tokens that the model focuses on. As a result, we achieve a better trade-off between generation quality and efficiency. According to our experiments, FusionRAG significantly improves generation quality at the same recomputation ratio compared to previous state-of-the-art solutions. By recomputing fewer than 15% of the tokens, FusionRAG achieves up to 70% higher normalized F1 scores than baselines and reduces TTFT by 2.66x-9.39x compared to Full Attention.
+ oai:arXiv.org:2601.12904v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3786655
+ Jiahao Wang, Weiyu Xie, Mingxing Zhang, Boxing Zhang, Jianwei Dong, Yuening Zhu, Chen Lin, Jinqi Tang, Yaochen Han, Zhiyuan Ai, Xianglin Chen, Yongwei Wu, Congfeng Jiang
+
+
+ Gated Differentiable Working Memory for Long-Context Language Modeling
+ https://arxiv.org/abs/2601.12906
+ arXiv:2601.12906v1 Announce Type: new
+Abstract: Long contexts challenge transformers: attention scores dilute across thousands of tokens, critical information is often lost in the middle, and models struggle to adapt to novel patterns at inference time. Recent work on test-time adaptation addresses this by maintaining a form of working memory -- transient parameters updated on the current context -- but existing approaches rely on uniform write policies that waste computation on low-utility regions and suffer from high gradient variance across semantically heterogeneous contexts. In this work, we reframe test-time adaptation as a budget-constrained memory consolidation problem, focusing on which parts of the context should be consolidated into working memory under limited computation. We propose Gdwm (Gated Differentiable Working Memory), a framework that introduces a write controller to gate the consolidation process. The controller estimates Contextual Utility, an information-theoretic measure of long-range contextual dependence, and allocates gradient steps accordingly while maintaining global coverage. Experiments on ZeroSCROLLS and LongBench v2 demonstrate that Gdwm achieves comparable or superior performance with 4$\times$ fewer gradient steps than uniform baselines, establishing a new efficiency-performance Pareto frontier for test-time adaptation.
+ oai:arXiv.org:2601.12906v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lingrui Mei, Shenghua Liu, Yiwei Wang, Yuyao Ge, Baolong Bi, Jiayu Yao, Jun Wan, Ziling Yin, Jiafeng Guo, Xueqi Cheng
+
+
+ Machine Learning for highly oscillatory differential equations
+ https://arxiv.org/abs/2601.12907
+ arXiv:2601.12907v1 Announce Type: new
+Abstract: Highly oscillatory differential equations, commonly encountered in multi-scale problems, are often too complex to solve analytically. However, several numerical methods have been developed to approximate their solutions. Although these methods have shown their efficiency, the first part of the strategy often involves heavy pre-computations from averaging theory. In this paper, we leverage neural networks (machine learning) to approximate the vector fields required by the pre-computations in the first part, and combine this with micro-macro techniques to efficiently solve the oscillatory problem. We illustrate our work by numerical simulations.
+ oai:arXiv.org:2601.12907v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Maxime Bouchereau (IRMAR)
+
+
+ SciCoQA: Quality Assurance for Scientific Paper--Code Alignment
+ https://arxiv.org/abs/2601.12910
+ arXiv:2601.12910v1 Announce Type: new
+Abstract: We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases to ensure faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and to scale our dataset, we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze the paper-code discrepancies in detail and propose discrepancy types and categories to better understand the occurring mismatches. In total, our dataset consists of 611 paper-code discrepancies (81 real, 530 synthetic), spanning diverse computational science disciplines, including AI, Physics, Quantitative Biology, and others. Our evaluation of 21 LLMs highlights the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpus. The best performing model in our evaluation, GPT-5, can only detect 45.7\% of real-world paper-code discrepancies.
+ oai:arXiv.org:2601.12910v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tim Baumg\"artner, Iryna Gurevych
+
+
+ Human Emotion Verification by Action Languages via Answer Set Programming
+ https://arxiv.org/abs/2601.12912
+ arXiv:2601.12912v1 Announce Type: new
+Abstract: In this paper, we introduce the action language C-MT (Mind Transition Language). It is built on top of answer set programming (ASP) and transition systems to represent how human mental states evolve in response to sequences of observable actions. Drawing on well-established psychological theories, such as the Appraisal Theory of Emotion, we formalize mental states, such as emotions, as multi-dimensional configurations. With the objective of addressing the need for controlled agent behaviors and restricting unwanted mental side-effects of actions, we extend the language with a novel causal rule, forbids to cause, along with expressions specialized for mental state dynamics, which enables the modeling of principles for valid transitions between mental states. These principles of mental change are translated into transition constraints, and properties of invariance, which are rigorously evaluated using transition systems in terms of so-called trajectories. This enables controlled reasoning about the dynamic evolution of human mental states. Furthermore, the framework supports the comparison of different dynamics of change by analyzing trajectories that adhere to different psychological principles. We apply the action language to design models for emotion verification. Under consideration in Theory and Practice of Logic Programming (TPLP).
+ oai:arXiv.org:2601.12912v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Andreas Br\"annstr\"om, Juan Carlos Nieves
+
+
+ Actionable Interpretability Must Be Defined in Terms of Symmetries
+ https://arxiv.org/abs/2601.12913
+ arXiv:2601.12913v1 Announce Type: new
+Abstract: This paper argues that interpretability research in Artificial Intelligence is fundamentally ill-posed as existing definitions of interpretability are not *actionable*: they fail to provide formal principles from which concrete modelling and inferential rules can be derived. We posit that for a definition of interpretability to be actionable, it must be given in terms of *symmetries*. We hypothesise that four symmetries suffice to (i) motivate core interpretability properties, (ii) characterize the class of interpretable models, and (iii) derive a unified formulation of interpretable inference (e.g., alignment, interventions, and counterfactuals) as a form of Bayesian inversion.
+ oai:arXiv.org:2601.12913v1
+ cs.AI
+ cs.LG
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Pietro Barbiero, Mateo Espinosa Zarlenga, Francesco Giannini, Alberto Termine, Filippo Bonchi, Mateja Jamnik, Giuseppe Marra
+
+
+ Static Detection of Core Structures in Tigress Virtualization-Based Obfuscation Using an LLVM Pass
+ https://arxiv.org/abs/2601.12916
+ arXiv:2601.12916v1 Announce Type: new
+Abstract: Malware often uses obfuscation to hinder security analysis. Among these techniques, virtualization-based obfuscation is particularly strong because it protects programs by translating original instructions into attacker-defined virtual machine (VM) bytecode, producing long and complex code that is difficult to analyze and deobfuscate. This paper aims to identify the structural components of virtualization-based obfuscation through static analysis. By examining the execution model of obfuscated code, we define and detect the key elements required for deobfuscation, namely the dispatch routine, handler blocks, and the VM region, using LLVM IR. Experimental results show that, in the absence of compiler optimizations, the proposed LLVM Pass successfully detects all core structures across major virtualization options, including switch, direct, and indirect modes.
+ oai:arXiv.org:2601.12916v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sangjun An, Seoksu Lee, Eun-Sun Cho
+
+
+ CooperLLM: Cloud-Edge-End Cooperative Federated Fine-tuning for LLMs via ZOO-based Gradient Correction
+ https://arxiv.org/abs/2601.12917
+ arXiv:2601.12917v1 Announce Type: new
+Abstract: Large Language Models (LLMs) perform well on many NLP tasks, but fine-tuning them on resource-constrained mobile devices is challenging due to high memory and computation costs, despite growing demands for privacy-preserving personalization. Federated Learning (FL) enables local-data training, yet existing methods either rely on memory-intensive backpropagation or use zeroth-order optimization (ZOO), which avoids backward passes but suffers from slow convergence and degraded accuracy. We propose CooperLLM, a cloud-assisted edge-end cooperative federated fine-tuning framework that combines ZOO on mobile devices with cloud-guided gradient rectification. Mobile clients perform lightweight ZOO updates on private data, while the cloud fine-tunes on auxiliary public data using backpropagation and injects guided perturbations to rectify local updates, improving convergence and accuracy without violating privacy. To address system bottlenecks, CooperLLM introduces pipeline scheduling and adaptive compression to overlap computation and communication and reduce memory usage. Experiments on multiple Transformer models and datasets show that CooperLLM reduces on-device memory by up to $86.4\%$, accelerates convergence by $8.8 \times$, and improves accuracy by up to 10 percentage points over state-of-the-art ZOO-based baselines.
+ oai:arXiv.org:2601.12917v1
+ cs.LG
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ He Sun, Jinrui Zhou, Li Li, Mingjun Xiao
+
+
+ Dynamic Hand Gesture Recognition for Robot Manipulator Tasks
+ https://arxiv.org/abs/2601.12918
+ arXiv:2601.12918v1 Announce Type: new
+Abstract: This paper proposes a novel approach to recognizing dynamic hand gestures, facilitating seamless interaction between humans and robots. Here, each robot manipulator task is assigned a specific gesture. There may be several such tasks and, hence, several gestures. These gestures may be prone to several dynamic variations. All such variations for different gestures shown to the robot are accurately recognized in real-time using the proposed unsupervised model based on the Gaussian Mixture Model. The accuracy achieved during training and real-time testing proves the efficacy of this methodology.
+ oai:arXiv.org:2601.12918v1
+ cs.RO
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/SMC54092.2024.10831056
+ Dharmendra Sharma, Peeyush Thakur, Sandeep Gupta, Narendra Kumar Dhar, Laxmidhar Behera
+
+
+ Supervision-by-Hallucination-and-Transfer: A Weakly-Supervised Approach for Robust and Precise Facial Landmark Detection
+ https://arxiv.org/abs/2601.12919
+ arXiv:2601.12919v1 Announce Type: new
+Abstract: High-precision facial landmark detection (FLD) relies on high-resolution deep feature representations. However, low-resolution face images or the compression (via pooling or strided convolution) of originally high-resolution images hinder the learning of such features, thereby reducing FLD accuracy. Moreover, insufficient training data and imprecise annotations further degrade performance. To address these challenges, we propose a weakly-supervised framework called Supervision-by-Hallucination-and-Transfer (SHT) for more robust and precise FLD. SHT contains two novel mutually enhanced modules: Dual Hallucination Learning Network (DHLN) and Facial Pose Transfer Network (FPTN). By incorporating FLD and face hallucination tasks, DHLN is able to learn high-resolution representations with low-resolution inputs for recovering both facial structures and local details and generating more effective landmark heatmaps. Then, by transforming faces from one pose to another, FPTN can further improve landmark heatmaps and faces hallucinated by DHLN for detecting more accurate landmarks. To the best of our knowledge, this is the first study to explore weakly-supervised FLD by integrating face hallucination and facial pose transfer tasks. Experimental results of both face hallucination and FLD demonstrate that our method surpasses state-of-the-art techniques.
+ oai:arXiv.org:2601.12919v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jun Wan, Yuanzhi Yao, Zhihui Lai, Jie Zhou, Xianxu Hou, Wenwen Min
+
+
+ Injecting Knowledge from Social Science Journals to Improve Indonesian Cultural Understanding by LLMs
+ https://arxiv.org/abs/2601.12921
+ arXiv:2601.12921v1 Announce Type: new
+Abstract: Recently there have been intensifying efforts to improve the understanding of Indonesian cultures by large language models (LLMs). An attractive source of cultural knowledge that has been largely overlooked is local journals of social science, which likely contain substantial cultural studies from a native perspective. We present a novel text dataset of journal article passages, created from 151 open-source Indonesian social science journals, called IndoSoSci. We demonstrate an effective recipe for injecting the Indonesian cultural knowledge in these journals into LLMs: extracting the facts related to Indonesian culture, and applying retrieval-augmented generation (RAG) with LLM-generated hypothetical documents as queries during retrieval. The proposed recipe yields strong performance gains over several strong baselines on the IndoCulture benchmark. Additionally, by combining IndoSoSci with Indonesian Wikipedia, we set a new state-of-the-art accuracy on the IndoCulture benchmark.
+ oai:arXiv.org:2601.12921v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Adimulya Kartiyasa, Bao Gia Cao, Boyang Li
+
+
+ Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy
+ https://arxiv.org/abs/2601.12922
+ arXiv:2601.12922v1 Announce Type: new
+Abstract: Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms: while conforming to the iDP guarantees, an individual's privacy risk is not solely governed by their own privacy budget, but critically depends on the privacy choices of all other data contributors. This creates a mismatch between the promise of individual privacy control and the reality of a system where risk is collectively determined. We demonstrate empirically that certain distributions of privacy preferences can unintentionally inflate the privacy risk of individuals, even when their formal guarantees are met. Moreover, this excess risk provides an exploitable attack vector. A central adversary or a set of colluding adversaries can deliberately choose privacy budgets to amplify vulnerabilities of targeted individuals. Most importantly, this attack operates entirely within the guarantees of DP, hiding this excess vulnerability. Our empirical evaluation demonstrates successful attacks against 62% of targeted individuals, substantially increasing their membership inference susceptibility. To mitigate this, we propose $(\varepsilon_i,\delta_i,\overline{\Delta})$-iDP, a privacy contract that uses $\Delta$-divergences to provide users with a hard upper bound on their excess vulnerability, while offering flexibility to mechanism design. Our findings expose a fundamental challenge to the current paradigm, demanding a re-evaluation of how iDP systems are designed, audited, communicated, and deployed to make excess risks transparent and controllable.
+ oai:arXiv.org:2601.12922v1
+ cs.CR
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Johannes Kaiser, Alexander Ziller, Eleni Triantafillou, Daniel R\"uckert, Georgios Kaissis
+
+
+ ForeDiffusion: Foresight-Conditioned Diffusion Policy via Future View Construction for Robot Manipulation
+ https://arxiv.org/abs/2601.12925
+ arXiv:2601.12925v1 Announce Type: new
+Abstract: Diffusion strategies have advanced visual motor control by progressively denoising high-dimensional action sequences, providing a promising method for robot manipulation. However, as task complexity increases, the success rate of existing baseline models decreases considerably. Analysis indicates that current diffusion strategies are confronted with two limitations. First, these strategies only rely on short-term observations as conditions. Second, the training objective remains limited to a single denoising loss, which leads to error accumulation and causes grasping deviations. To address these limitations, this paper proposes Foresight-Conditioned Diffusion (ForeDiffusion), by injecting the predicted future view representation into the diffusion process. As a result, the policy is guided to be forward-looking, enabling it to correct trajectory deviations. Following this design, ForeDiffusion employs a dual loss mechanism, combining the traditional denoising loss and the consistency loss of future observations, to achieve the unified optimization. Extensive evaluation on the Adroit suite and the MetaWorld benchmark demonstrates that ForeDiffusion achieves an average success rate of 80% across the full task suite, significantly outperforming the existing mainstream diffusion methods by 23% on complex tasks, while maintaining more stable performance across all tasks.
+ oai:arXiv.org:2601.12925v1
+ cs.RO
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weize Xie, Yi Ding, Ying He, Leilei Wang, Binwen Bai, Zheyi Zhao, Chenyang Wang, F. Richard Yu
+
+
+ Dual-Stream Collaborative Transformer for Image Captioning
+ https://arxiv.org/abs/2601.12926
+ arXiv:2601.12926v1 Announce Type: new
+Abstract: Current region feature-based image captioning methods have progressed rapidly and achieved remarkable performance. However, they are still prone to generating irrelevant descriptions due to the lack of contextual information and the over-reliance on generated partial descriptions for predicting the remaining words. In this paper, we propose a Dual-Stream Collaborative Transformer (DSCT) to address this issue by introducing the segmentation feature. The proposed DSCT consolidates and then fuses the region and segmentation features to guide the generation of caption sentences. It contains multiple Pattern-Specific Mutual Attention Encoders (PSMAEs) and Dynamic Nomination Decoders (DNDs). The PSMAE effectively highlights and consolidates the private information of two representations by querying each other. The DND dynamically searches for the most relevant learning blocks to the input textual representations and exploits the homogeneous features between the consolidated region and segmentation features to generate more accurate and descriptive caption sentences. To the best of our knowledge, this is the first study to explore how to fuse different pattern-specific features in a dynamic way to bypass their semantic inconsistencies and spatial misalignment issues for image captioning. The experimental results from popular benchmark datasets demonstrate that our DSCT outperforms the state-of-the-art image captioning models in the literature.
+ oai:arXiv.org:2601.12926v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jun Wan, Jun Liu, Zhihui Lai, Jie Zhou
+
+
+ A Benchmark for Language Models in Real-World System Building
+ https://arxiv.org/abs/2601.12927
+ arXiv:2601.12927v1 Announce Type: new
+Abstract: During migration across instruction set architectures (ISAs), software package build repair is a critical task for ensuring the reliability of software deployment and the stability of modern operating systems. While Large Language Models (LLMs) have shown promise in tackling this challenge, prior work has primarily focused on a single ISA and homogeneous programming languages. To address this limitation, we introduce a new benchmark designed for software package build repair across diverse architectures and languages. Comprising 268 real-world software package build failures, the benchmark provides a standardized evaluation pipeline. We evaluate six state-of-the-art LLMs on the benchmark, and the results show that cross-ISA software package repair remains difficult and requires further advances. By systematically exposing this challenge, the benchmark establishes a foundation for advancing future methods aimed at improving software portability and bridging architectural gaps.
+ oai:arXiv.org:2601.12927v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weilin Jin, Chenyu Zhao, Zeshun Huang, Chaoyun Zhang, Qingwei Lin, Chetan Bansal, Saravan Rajmohan, Shenglin Zhang, Yongqian Sun, Dan Pei, Yifan Wu, Tong Jia, Ying Li, Zhonghai Wu, Minghua Ma
+
+
+ An efficient heuristic for geometric analysis of cell deformations
+ https://arxiv.org/abs/2601.12928
+ arXiv:2601.12928v1 Announce Type: new
+Abstract: Sickle cell disease causes erythrocytes to become sickle-shaped, affecting their movement in the bloodstream and reducing oxygen delivery. It has a high global prevalence and places a significant burden on healthcare systems, especially in resource-limited regions. Automated classification of sickle cells in blood images is crucial, allowing the specialist to reduce the effort required and avoid errors when quantifying the deformed cells and assessing the severity of a crisis. Recent studies have proposed various erythrocyte representation and classification methods. Since classification depends solely on cell shape, a suitable approach models erythrocytes as closed planar curves in shape space. This approach employs elastic distances between shapes, which are invariant under rotations, translations, scaling, and reparameterizations, ensuring consistent distance measurements regardless of the curves' position, starting point, or traversal speed. While previous methods exploiting shape space distances have achieved high accuracy, we refined the model by considering the geometric characteristics of healthy and sickled erythrocytes. Our method proposes (1) to employ a fixed parameterization based on the major axis of each cell to compute distances and (2) to align each cell with two templates using this parameterization before computing distances. Aligning shapes to templates before distance computation, a concept successfully applied in areas such as molecular dynamics, and using a fixed parameterization, instead of minimizing distances across all possible parameterizations, simplify calculations. This strategy achieves a 96.03% accuracy rate in both supervised classification and unsupervised clustering. Our method ensures efficient erythrocyte classification, maintaining or improving accuracy over shape space models while significantly reducing computational costs.
+ oai:arXiv.org:2601.12928v1
+ cs.LG
+ q-bio.QM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1016/j.compbiomed.2025.109709
+ Soto, Y. P., Garcia, S. H., Gual-Arnau, X., Jaume-i-Cap\'o, A., & Gonz\'alez-Hidalgo, M. (2025). An efficient heuristic for geometric analysis of cell deformations. Computers in Biology and Medicine, 186, 109709
+ Yaima Paz Soto, Silena Herold Garcia, Ximo Gual-Arnau, Antoni Jaume-i-Cap\'o, Manuel Gonz\'alez-Hidalgo
+
+
+ Membership Inference Test: Auditing Training Data in Object Classification Models
+ https://arxiv.org/abs/2601.12929
+ arXiv:2601.12929v1 Announce Type: new
+Abstract: In this research, we analyze the performance of Membership Inference Tests (MINT), focusing on determining whether given data were utilized during the training phase, specifically in the domain of object recognition. Within the area of object recognition, we propose and develop architectures tailored for MINT models. These architectures aim to optimize performance and efficiency in data utilization, offering a tailored solution to tackle the complexities inherent in the object recognition domain. We conducted experiments involving an object detection model, an embedding extractor, and a MINT module. These experiments were performed on three public databases, totaling over 174K images. The proposed architecture leverages convolutional layers to capture and model the activation patterns present in the data during the training process. Through our analysis, we are able to identify given data used for testing and training, achieving precision rates ranging between 70% and 80%, contingent upon the depth of the detection module layer chosen as input to the MINT module. Additionally, our studies entail an analysis of the factors influencing the MINT module, delving into the contributing elements behind more transparent training processes.
+ oai:arXiv.org:2601.12929v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Gonzalo Mancera, Daniel DeAlcala, Aythami Morales, Ruben Tolosana, Julian Fierrez
+
+
+ Online Continual Learning for Time Series: a Natural Score-driven Approach
+ https://arxiv.org/abs/2601.12931
+ arXiv:2601.12931v1 Announce Type: new
+Abstract: Online continual learning (OCL) methods adapt to changing environments without forgetting past knowledge. Similarly, online time series forecasting (OTSF) is a real-world problem where data evolve in time and success depends on both rapid adaptation and long-term memory. Indeed, time-varying and regime-switching forecasting models have been extensively studied, offering a strong justification for the use of OCL in these settings. Building on recent work that applies OCL to OTSF, this paper aims to strengthen the theoretical and practical connections between time series methods and OCL. First, we reframe neural network optimization as a parameter filtering problem, showing that natural gradient descent is a score-driven method and proving its information-theoretic optimality. Then, we show that using a Student's t likelihood in addition to natural gradient induces a bounded update, which improves robustness to outliers. Finally, we introduce Natural Score-driven Replay (NatSR), which combines our robust optimizer with a replay buffer and a dynamic scale heuristic that improves fast adaptation at regime drifts. Empirical results demonstrate that NatSR achieves stronger forecasting performance than more complex state-of-the-art methods.
+ oai:arXiv.org:2601.12931v1
+ cs.LG
+ cs.AI
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Edoardo Urettini, Daniele Atzeni, Ioanna-Yvonni Tsaknaki, Antonio Carta
+
+
+ Perception of Deepfakes among Bangladeshi Women
+ https://arxiv.org/abs/2601.12933
+ arXiv:2601.12933v1 Announce Type: new
+Abstract: As deepfake technology becomes more accessible, concerns about its misuse and societal impact are escalating, particularly in regions like the Global South where digital literacy and regulatory measures are often limited. While previous research has explored deepfakes in contexts such as detection and media manipulation, there is a noticeable gap in understanding how individuals in these regions perceive and interact with deepfake media. This study addresses this gap by investigating how Bangladeshi women perceive deepfakes and the socio-cultural factors influencing their awareness, concerns, and responses to this technology. Drawing on 15 semi-structured interviews, we uncover how cultural values, gendered norms, trust in institutions, and the prevalence of digital harassment shape their perceptions and coping mechanisms. Through this research, we aim to advance existing scholarship in HCI by offering insights into the design of culturally sensitive interventions, educational initiatives, and policy frameworks to address the challenges posed by deepfakes in the Global South.
+ oai:arXiv.org:2601.12933v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sharifa Sultana, Pratyasha Saha, Nadira Nowsher, Sumaia Arefin Ritu, Zinnat Sultana, Syed Ishtiaque Ahmed, S M Taiabul Haque
+
+
+ Bangladesh AI Readiness: Perspectives from the Academia, Industry, and Government
+ https://arxiv.org/abs/2601.12934
+ arXiv:2601.12934v1 Announce Type: new
+Abstract: Artificial Intelligence (AI) readiness in the Global South extends beyond infrastructure to include curriculum design, workforce development, and cross-sector collaboration. Bangladesh, ranked 82nd in the 2023 Oxford Insights AI Readiness Index, exhibits significant deficits in technology capacity and research ecosystems, despite strong governmental visions. While HCI and ICTD research have explored digital inclusion and responsible AI, little empirical work examines how educational, industrial, and policy domains intersect to shape readiness. We present a multi-method qualitative study of AI readiness in Bangladesh, combining institutional analyses, 59 stakeholder interviews, and curriculum benchmarking against global exemplars. Findings reveal outdated curricula, limited faculty upskilling, inadequate computing resources, entrenched gender disparities, and the near-total absence of AI ethics instruction. We contribute empirical mapping of current practices, identification of structural and cultural barriers, and actionable pathways for embedding human-centered, inclusive, and responsible AI practices into national agendas, advancing equitable innovation in emerging AI ecosystems.
+ oai:arXiv.org:2601.12934v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sharifa Sultana, Rupali Samad, Mehzabin Haque, Zinnat Sultana, Zulkarin Jahangir, B M Mainul Hossain, Rashed Mujib Noman, Syed Ishtiaque Ahmed
+
+
+ QASA: Quality-Guided K-Adaptive Slot Attention for Unsupervised Object-Centric Learning
+ https://arxiv.org/abs/2601.12936
+ arXiv:2601.12936v1 Announce Type: new
+Abstract: Slot Attention, an approach that binds different objects in a scene to a set of "slots", has become a leading method in unsupervised object-centric learning. Most methods assume a fixed slot count K, and to better accommodate the dynamic nature of object cardinality, a few works have explored K-adaptive variants. However, existing K-adaptive methods still suffer from two limitations. First, they do not explicitly constrain slot-binding quality, so low-quality slots lead to ambiguous feature attribution. Second, adding a slot-count penalty to the reconstruction objective creates conflicting optimization goals between reducing the number of active slots and maintaining reconstruction fidelity. As a result, they still lag significantly behind strong K-fixed baselines. To address these challenges, we propose Quality-Guided K-Adaptive Slot Attention (QASA). First, we decouple slot selection from reconstruction, eliminating the mutual constraints between the two objectives. Then, we propose an unsupervised Slot-Quality metric to assess per-slot quality, providing a principled signal for fine-grained slot--object binding. Based on this metric, we design a Quality-Guided Slot Selection scheme that dynamically selects a subset of high-quality slots and feeds them into our newly designed gated decoder for reconstruction during training. At inference, token-wise competition on slot attention yields a K-adaptive outcome. Experiments show that QASA substantially outperforms existing K-adaptive methods on both real and synthetic datasets. Moreover, on real-world datasets QASA surpasses K-fixed methods.
+ oai:arXiv.org:2601.12936v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tianran Ouyang, Xingping Dong, Jing Zhang, Mang Ye, Jun Chen, Bo Du
+
+
+ On the Evidentiary Limits of Membership Inference for Copyright Auditing
+ https://arxiv.org/abs/2601.12937
+ arXiv:2601.12937v1 Announce Type: new
+Abstract: As large language models (LLMs) are trained on increasingly opaque corpora, membership inference attacks (MIAs) have been proposed to audit whether copyrighted texts were used during training, despite growing concerns about their reliability under realistic conditions. We ask whether MIAs can serve as admissible evidence in adversarial copyright disputes where an accused model developer may obfuscate training data while preserving semantic content, and formalize this setting through a judge-prosecutor-accused communication protocol. To test robustness under this protocol, we introduce SAGE (Structure-Aware SAE-Guided Extraction), a paraphrasing framework guided by Sparse Autoencoders (SAEs) that rewrites training data to alter lexical structure while preserving semantic content and downstream utility. Our experiments show that state-of-the-art MIAs degrade when models are fine-tuned on SAGE-generated paraphrases, indicating that their signals are not robust to semantics-preserving transformations. While some leakage remains in certain fine-tuning regimes, these results suggest that MIAs are brittle in adversarial settings and insufficient, on their own, as a standalone mechanism for copyright auditing of LLMs.
+ oai:arXiv.org:2601.12937v1
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Murat Bilgehan Ertan, Emirhan B\"oge, Min Chen, Kaleel Mahmood, Marten van Dijk
+
+
+ The Post-Turing Condition: Conceptualising Artificial Subjectivity and Synthetic Sociality
+ https://arxiv.org/abs/2601.12938
+ arXiv:2601.12938v1 Announce Type: new
+Abstract: In the Post-Turing era, artificial intelligence increasingly shapes social coordination and meaning formation rather than merely automating cognitive tasks. The central challenge is therefore not whether machines become conscious, but whether processes of interpretation and shared reference are progressively automated in ways that marginalize human participation. This paper introduces the PRMO framework, relating AI design trajectories to four constitutive dimensions of human subjectivity: Perception, Representation, Meaning, and the Real. Within this framework, Synthetic Sociality denotes a technological horizon in which artificial agents negotiate coherence and social order primarily among themselves, raising the structural risk of human exclusion from meaning formation. To address this risk, the paper proposes Quadrangulation as a design principle for socially embedded AI systems, requiring artificial agents to treat the human subject as a constitutive reference within shared contexts of meaning. This work is a conceptual perspective that contributes a structural vocabulary for analyzing AI systems at the intersection of computation and society, without proposing a specific technical implementation.
+ oai:arXiv.org:2601.12938v1
+ cs.CY
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Thorsten Jelinek, Patrick Glauner, Alvin Wang Graylin, Yubao Qiu
+
+
+ Active Inference-Driven World Modeling for Adaptive UAV Swarm Trajectory Design
+ https://arxiv.org/abs/2601.12939
+ arXiv:2601.12939v1 Announce Type: new
+Abstract: This paper proposes an Active Inference-based framework for autonomous trajectory design in UAV swarms. The method integrates probabilistic reasoning and self-learning to enable distributed mission allocation, route ordering, and motion planning. Expert trajectories generated using a Genetic Algorithm with Repulsion Forces (GA-RF) are employed to train a hierarchical World Model capturing swarm behavior across mission, route, and motion levels. During online operation, UAVs infer actions by minimizing divergence between current beliefs and model-predicted states, enabling adaptive responses to dynamic environments. Simulation results show faster convergence, higher stability, and safer navigation than Q-Learning, demonstrating the scalability and cognitive grounding of the proposed framework for intelligent UAV swarm control.
+ oai:arXiv.org:2601.12939v1
+ cs.RO
+ cs.AI
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kaleem Arshid, Ali Krayani, Lucio Marcenaro, David Martin Gomez, Carlo Regazzoni
+
+
+ Dependently-Typed AARA: A Non-Affine Approach for Resource Analysis of Higher-Order Programs
+ https://arxiv.org/abs/2601.12943
+ arXiv:2601.12943v1 Announce Type: new
+Abstract: Static resource analysis determines the resource consumption (e.g., time complexity) of a program without executing it. Among the numerous existing approaches for resource analysis, affine type systems have been one dominant approach. However, these affine type systems fall short of deriving precise resource behavior of higher-order programs, particularly in cases that involve partial applications.
+ This article presents $\lambda^{\mathrm{na}}_{\mathrm{amor}}$, a non-affine AARA-style dependent type system for resource reasoning about higher-order functional programs. The key observation is that the main issue in previous approaches comes from (i) the close coupling of types and resources, and (ii) the conflict between affine and higher-order typing mechanisms. To derive precise resource behavior of higher-order functions, $\lambda^{\mathrm{na}}_{\mathrm{amor}}$ decouples resources from types and follows a non-affine typing mechanism. The non-affine type system of $\lambda^{\mathrm{na}}_{\mathrm{amor}}$ achieves this by using dependent types, which allows expressing type-level potential functions separate from ordinary types. This article formalizes $\lambda^{\mathrm{na}}_{\mathrm{amor}}$'s syntax and semantics, and proves its soundness, which guarantees the correctness of resource bounds. Several challenging classic and higher-order examples are presented to demonstrate the expressiveness and compositionality of $\lambda^{\mathrm{na}}_{\mathrm{amor}}$'s reasoning capability.
+ oai:arXiv.org:2601.12943v1
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Han Xu, Di Wang
+
+
+ On the Concavity of Tsallis Entropy along the Heat Flow
+ https://arxiv.org/abs/2601.12944
+ arXiv:2601.12944v1 Announce Type: new
+Abstract: We demonstrate the concavity of the Tsallis entropy along the heat flow for general dimensions, expanding upon the findings of Wu et al. (2025) and Hung (2022), which were previously limited to the one-dimensional case. The core of the proof is a novel estimate of the terms in the second-order time derivative, and a rigorous validation of integration by parts. The resulting bound establishes a new functional inequality, which may be of interest for other areas of mathematical analysis.
+ oai:arXiv.org:2601.12944v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Lukang Sun
+
+
+ A Component-Based Survey of Interactions between Large Language Models and Multi-Armed Bandits
+ https://arxiv.org/abs/2601.12945
+ arXiv:2601.12945v1 Announce Type: new
+Abstract: Large language models (LLMs) have become powerful and widely used systems for language understanding and generation, while multi-armed bandit (MAB) algorithms provide a principled framework for adaptive decision-making under uncertainty. This survey explores the potential at the intersection of these two fields. To the best of our knowledge, it is the first survey to systematically review the bidirectional interaction between large language models and multi-armed bandits at the component level. We highlight the bidirectional benefits: MAB algorithms address critical LLM challenges, spanning from pre-training to retrieval-augmented generation (RAG) and personalization. Conversely, LLMs enhance MAB systems by redefining core components such as arm definition and environment modeling, thereby improving decision-making in sequential tasks. We analyze existing LLM-enhanced bandit systems and bandit-enhanced LLM systems, providing insights into their design, methodologies, and performance. Key challenges and representative findings are identified to help guide future research. An accompanying GitHub repository that indexes relevant literature is available at https://github.com/bucky1119/Awesome-LLM-Bandit-Interaction.
+ oai:arXiv.org:2601.12945v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Miao Xie, Siguang Chen, Chunli Lv
+
+
+ AI-generated data contamination erodes pathological variability and diagnostic reliability
+ https://arxiv.org/abs/2601.12946
+ arXiv:2601.12946v1 Announce Type: new
+Abstract: Generative artificial intelligence (AI) is rapidly populating medical records with synthetic content, creating a feedback loop where future models are increasingly at risk of training on uncurated AI-generated data. However, the clinical consequences of this AI-generated data contamination remain unexplored. Here, we show that in the absence of mandatory human verification, this self-referential cycle drives a rapid erosion of pathological variability and diagnostic reliability. By analysing more than 800,000 synthetic data points across clinical text generation, vision-language reporting, and medical image synthesis, we find that models progressively converge toward generic phenotypes regardless of the model architecture. Specifically, rare but critical findings, including pneumothorax and effusions, vanish from the synthetic content generated by AI models, while demographic representations skew heavily toward middle-aged male phenotypes. Crucially, this degradation is masked by false diagnostic confidence; models continue to issue reassuring reports while failing to detect life-threatening pathology, with false reassurance rates tripling to 40%. Blinded physician evaluation confirms that this decoupling of confidence and accuracy renders AI-generated documentation clinically useless after just two generations. We systematically evaluate three mitigation strategies, finding that while synthetic volume scaling fails to prevent collapse, mixing real data with quality-aware filtering effectively preserves diversity. Ultimately, our results suggest that without policy-mandated human oversight, the deployment of generative AI threatens to degrade the very healthcare data ecosystems it relies upon.
+ oai:arXiv.org:2601.12946v1
+ cs.CY
+ cs.AI
+ cs.CL
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hongyu He, Shaowen Xiang, Ye Zhang, Yingtao Zhu, Jin Zhang, Hao Deng, Emily Alsentzer, Qingyu Chen, Kun-Hsing Yu, Andrew Marmenshall, Tingting Chen, Srinivas Anumasa, Daniel Ebner, Dean Ho, Kee Yuan Ngiam, Ching-Yu Cheng, Dianbo Liu
+
+
+ GazeD: Context-Aware Diffusion for Accurate 3D Gaze Estimation
+ https://arxiv.org/abs/2601.12948
+ arXiv:2601.12948v1 Announce Type: new
+Abstract: We introduce GazeD, a new 3D gaze estimation method that jointly provides 3D gaze and human pose from a single RGB image. Leveraging the ability of diffusion models to deal with uncertainty, it generates multiple plausible 3D gaze and pose hypotheses based on the 2D context information extracted from the input image. Specifically, we condition the denoising process on the 2D pose, the surroundings of the subject, and the context of the scene. With GazeD we also introduce a novel way of representing the 3D gaze by positioning it as an additional body joint at a fixed distance from the eyes. The rationale is that the gaze is usually closely related to the pose, and thus it can benefit from being jointly denoised during the diffusion process. Evaluations across three benchmark datasets demonstrate that GazeD achieves state-of-the-art performance in 3D gaze estimation, even surpassing methods that rely on temporal information. Project details will be available at https://aimagelab.ing.unimore.it/go/gazed.
+ oai:arXiv.org:2601.12948v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Riccardo Catalini, Davide Di Nucci, Guido Borghi, Davide Davoli, Lorenzo Garattoni, Giampiero Francesca, Yuki Kawana, Roberto Vezzani
+
+
+ Beyond Accuracy: Characterizing Code Comprehension Capabilities in (Large) Language Models
+ https://arxiv.org/abs/2601.12951
+ arXiv:2601.12951v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are increasingly integrated into software engineering workflows, yet current benchmarks provide only coarse performance summaries that obscure the diverse capabilities and limitations of these models. This paper investigates whether LLMs' code-comprehension performance aligns with traditional human-centric software metrics or instead reflects distinct, non-human regularities. We introduce a diagnostic framework that reframes code understanding as a binary input-output consistency task, enabling the evaluation of classification and generative models. Using a large-scale dataset, we correlate model performance with traditional, human-centric complexity metrics, such as lexical size, control-flow complexity, and abstract syntax tree structure. Our analyses reveal minimal correlation between human-defined metrics and LLM success (AUROC 0.63), while shadow models achieve substantially higher predictive performance (AUROC 0.86), capturing complex, partially predictable patterns beyond traditional software measures. These findings suggest that LLM comprehension reflects model-specific regularities only partially accessible through either human-designed or learned features, emphasizing the need for benchmark methodologies that move beyond aggregate accuracy and toward instance-level diagnostics, while acknowledging fundamental limits in predicting correct outcomes.
+ oai:arXiv.org:2601.12951v1
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Felix M\"achtle, Jan-Niclas Serr, Nils Loose, Thomas Eisenbarth
+
+
+ Imitation learning-based spacecraft rendezvous and docking method with Expert Demonstration
+ https://arxiv.org/abs/2601.12952
+ arXiv:2601.12952v1 Announce Type: new
+Abstract: Existing spacecraft rendezvous and docking control methods largely rely on predefined dynamic models and often exhibit limited robustness in realistic on-orbit environments. To address this issue, this paper proposes an Imitation Learning-based spacecraft rendezvous and docking control framework (IL-SRD) that directly learns control policies from expert demonstrations, thereby reducing dependence on accurate modeling. We propose an anchored decoder target mechanism, which conditions the decoder queries on state-related anchors to explicitly constrain the control generation process. This mechanism enforces physically consistent control evolution and effectively suppresses implausible action deviations in sequential prediction, enabling reliable six-degree-of-freedom (6-DOF) rendezvous and docking control. To further enhance stability, a temporal aggregation mechanism is incorporated to mitigate error accumulation caused by the sequential prediction nature of Transformer-based models, where small inaccuracies at each time step can propagate and amplify over long horizons. Extensive simulation results demonstrate that the proposed IL-SRD framework achieves accurate and energy-efficient model-free rendezvous and docking control. Robustness evaluations further confirm its capability to maintain competitive performance under significant unknown disturbances. The source code is available at https://github.com/Dongzhou-1996/IL-SRD.
+ oai:arXiv.org:2601.12952v1
+ cs.RO
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shibo Shao, Dong Zhou, Guanghui Sun, Liwen Zhang, Mingxuan Jiang
+
+
+ StyMam: A Mamba-Based Generator for Artistic Style Transfer
+ https://arxiv.org/abs/2601.12954
+ arXiv:2601.12954v1 Announce Type: new
+Abstract: Image style transfer aims to integrate the visual patterns of a specific artistic style into a content image while preserving its content structure. Existing methods mainly rely on generative adversarial networks (GANs) or stable diffusion (SD). GAN-based approaches using CNNs or Transformers struggle to jointly capture local and global dependencies, leading to artifacts and disharmonious patterns. SD-based methods reduce such issues but often fail to preserve content structures and suffer from slow inference. To address these issues, we revisit GANs and propose a Mamba-based generator, termed StyMam, to produce high-quality stylized images without introducing artifacts and disharmonious patterns. Specifically, we introduce a Mamba-based generator with a residual dual-path strip scanning mechanism and a channel-reweighted spatial attention module. The former efficiently captures local texture features, while the latter models global dependencies. Finally, extensive qualitative and quantitative experiments demonstrate that the proposed method outperforms state-of-the-art algorithms in both quality and speed.
+ oai:arXiv.org:2601.12954v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhou Hong, Rongsheng Hu, Yicheng Di, Xiaolong Xu, Ning Dong, Yihua Shao, Run Ling, Yun Wang, Juqin Wang, Zhanjie Zhang, Ao Ma
+
+
+ Codes Correcting Few Restricted Errors
+ https://arxiv.org/abs/2601.12959
+ arXiv:2601.12959v1 Announce Type: new
+Abstract: We consider linear codes over a field in which the error values are restricted to a subgroup of its unit group. This scenario captures Lee distance codes as well as codes over the Gaussian or Eisenstein integers. Codes correcting restricted errors have recently gained increased attention in the context of code-based cryptography.
+ In this work we provide new constructions of codes over the Gaussian or Eisenstein integers correcting two or three errors. We adapt some techniques from Roth and Siegel's work on codes for the Lee metric. We propose two construction methods, which may be regarded as geometric and algebraic in flavor, respectively.
+ oai:arXiv.org:2601.12959v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jens Zumbr\"agel
+
+
+ Trustworthy Data-driven Chronological Age Estimation from Panoramic Dental Images
+ https://arxiv.org/abs/2601.12960
+ arXiv:2601.12960v1 Announce Type: new
+Abstract: Integrating deep learning into healthcare enables personalized care but raises trust issues due to model opacity. To improve transparency, we propose a system for dental age estimation from panoramic images that combines an opaque and a transparent method within a natural language generation (NLG) module. This module produces clinician-friendly textual explanations about the age estimations, designed with dental experts through a rule-based approach. Following the best practices in the field, the quality of the generated explanations was manually validated by dental experts using a questionnaire. The results showed strong performance: the experts gave an average rating of 4.77+/-0.12 (out of 5) across the five dimensions considered. We also performed a trustworthy self-assessment procedure following the ALTAI checklist, in which the system scored 4.40+/-0.27 (out of 5) across seven dimensions of the AI Trustworthiness Assessment List.
+ oai:arXiv.org:2601.12960v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1007/s10796-025-10682-3
+ Ainhoa Vivel-Couso, Nicol\'as Vila-Blanco, Mar\'ia J. Carreira, Alberto Bugar\'in-Diz, Inmaculada Tom\'as, Jose M. Alonso-Moral
+
+
+ Supervised Learning for Game Music Segmentation
+ https://arxiv.org/abs/2601.12961
+ arXiv:2601.12961v1 Announce Type: new
+Abstract: At present, neural network-based models, including transformers, struggle to generate memorable and readily comprehensible music from unified and repetitive musical material due to a lack of understanding of musical structure. Consequently, these models are rarely employed by the games industry. It is hypothesised by many scholars that the modelling of musical structure may inform models at a higher level, thereby enhancing the quality of music generation. The aim of this study is to explore the performance of supervised learning methods in the task of structural segmentation, which is the initial step in music structure modelling. An audio game music dataset with 309 structural annotations was created to train the proposed method, which combines convolutional neural networks and recurrent neural networks, achieving performance comparable to the state-of-the-art unsupervised learning methods with fewer training resources.
+ oai:arXiv.org:2601.12961v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shangxuan Luo, Joshua Reiss
+
+
+ ACE-Align: Attribute Causal Effect Alignment for Cultural Values under Varying Persona Granularities
+ https://arxiv.org/abs/2601.12962
+ arXiv:2601.12962v1 Announce Type: new
+Abstract: Ensuring that large language models (LLMs) respect diverse cultural values is crucial for social equity. However, existing approaches often treat cultural groups as homogeneous and overlook within-group heterogeneity induced by intersecting demographic attributes, leading to unstable behavior under varying persona granularity. We propose ACE-Align (Attribute Causal Effect Alignment), a causal-effect framework that aligns how specific demographic attributes shift different cultural values, rather than treating each culture as a homogeneous group. We evaluate ACE-Align across 14 countries spanning five continents, with personas specified by subsets of four attributes (gender, education, residence, and marital status) and granularity instantiated by the number of specified attributes. Across all persona granularities, ACE-Align consistently outperforms baselines. Moreover, it improves geographic equity by reducing the average alignment gap between high-resource and low-resource regions from 9.81 to 4.92 points, while Africa shows the largest average gain (+8.48 points). Code is available at https://github.com/Wells-Luo/ACE-Align.
+ oai:arXiv.org:2601.12962v1
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiatang Luo, Bingbing Xu, Rongxin Chen, Xiaoyan Zhao, Yang Zhang, Liang Pang, Zhiyong Huang, Tat-Seng Chua, Huawei Shen
+
+
+ Cross-Scale Pretraining: Enhancing Self-Supervised Learning for Low-Resolution Satellite Imagery for Semantic Segmentation
+ https://arxiv.org/abs/2601.12964
+ arXiv:2601.12964v1 Announce Type: new
+Abstract: Self-supervised pretraining in remote sensing is mostly done using mid-spatial resolution (MR) image datasets due to their high availability. Given the release of high-resolution (HR) datasets, we ask how HR datasets can be included in self-supervised pretraining to enhance MR image representation learning and downstream segmentation performance on MR tasks. We design a spatial affinity component that can be added to existing self-supervised learning frameworks and that uses HR imagery to learn better representations of MR imagery. We test the spatial affinity component on two self-supervised learning frameworks and show that it outperforms models pretrained on HR or MR images alone.
+ oai:arXiv.org:2601.12964v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ John Waithaka, Gustave Bwirayesu, Moise Busogi
+
+
+ Deterministic Dynamics of Sampling Processes in Score-Based Diffusion Models with Multiplicative Noise Conditioning
+ https://arxiv.org/abs/2601.12965
+ arXiv:2601.12965v1 Announce Type: new
+Abstract: Score-based diffusion models generate new samples by learning the score function associated with a diffusion process. While the effectiveness of these models can be theoretically explained using differential equations related to the sampling process, previous work by Song and Ermon (2020) demonstrated that neural networks using multiplicative noise conditioning can still generate satisfactory samples. In this setup, the model is expressed as the product of two functions: one depending on the spatial variable and the other on the noise magnitude. This structure limits the model's ability to represent a more general relationship between the spatial variable and the noise, indicating that it cannot fully learn the correct score. Despite this limitation, the models perform well in practice. In this work, we provide a theoretical explanation for this phenomenon by studying the deterministic dynamics of the associated differential equations, offering insight into how the model operates.
+ oai:arXiv.org:2601.12965v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Doheon Kim
+
+
+ Lombard Speech Synthesis for Any Voice with Controllable Style Embeddings
+ https://arxiv.org/abs/2601.12966
+ arXiv:2601.12966v1 Announce Type: new
+Abstract: The Lombard effect plays a key role in natural communication, particularly in noisy environments or when addressing hearing-impaired listeners. We present a controllable text-to-speech (TTS) system capable of synthesizing Lombard speech for any speaker without requiring explicit Lombard data during training. Our approach leverages style embeddings learned from a large, prosodically diverse dataset and analyzes their correlation with Lombard attributes using principal component analysis (PCA). By shifting the relevant PCA components, we manipulate the style embeddings and incorporate them into our TTS model to generate speech at desired Lombard levels. Evaluations demonstrate that our method preserves naturalness and speaker identity, enhances intelligibility under noise, and provides fine-grained control over prosody, offering a robust solution for controllable Lombard TTS for any speaker.
+ oai:arXiv.org:2601.12966v1
+ cs.SD
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Seymanur Akti, Alexander Waibel
+
+
+ Sutradhara: An Intelligent Orchestrator-Engine Co-design for Tool-based Agentic Inference
+ https://arxiv.org/abs/2601.12967
+ arXiv:2601.12967v1 Announce Type: new
+Abstract: Agentic applications are LLMs that iteratively invoke external tools to accomplish complex tasks. Such tool-based agents are rapidly becoming the dominant paradigm for deploying language models in production. Unlike traditional single-turn inference, agentic workloads chain together multiple LLM calls and tool executions before producing a final response, creating a new performance bottleneck that manifests as increased latency in First Token Rendered (FTR) of the final answer. Through analysis of synthetic requests at production scale, we reveal three critical challenges: tool calls account for 30-80% of FTR latency, KV cache hit rates collapse despite substantial context reuse across iterations, and sequential orchestration wastes potential intra-request parallelism by sequentially executing LLM calls and tools. These bottlenecks stem from a design gap in which orchestrators and LLM engines operate as decoupled black boxes, preventing cross-layer optimizations. We present SUTRADHARA, a co-designed agentic inference system that integrates orchestration with LLM serving through a thin API enabling three optimizations: overlap tool execution with subsequent LLM prefill using tool-aware prompt splitting, streaming tool execution to dispatch tools incrementally during decode rather than waiting for complete output, and orchestrator-aware cache management that uses semantic hints to improve hit rates and reduce thrashing. Implemented on vLLM, SUTRADHARA reduces median FTR latency by 15% and end-to-end latency by 10% across workloads on A100 GPUs, demonstrating that co-design can systematically tame latency in agentic systems.
+ oai:arXiv.org:2601.12967v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Anish Biswas, Kanishk Goel, Jayashree Mohan, Alind Khare, Anjaly Parayil, Ramachandran Ramjee, Chetan Bansal
+
+
+ Architecture-Optimization Co-Design for Physics-Informed Neural Networks Via Attentive Representations and Conflict-Resolved Gradients
+ https://arxiv.org/abs/2601.12971
+ arXiv:2601.12971v1 Announce Type: new
+Abstract: Physics-Informed Neural Networks (PINNs) provide a learning-based framework for solving partial differential equations (PDEs) by embedding governing physical laws into neural network training. In practice, however, their performance is often hindered by limited representational capacity and optimization difficulties caused by competing physical constraints and conflicting gradients. In this work, we study PINN training from a unified architecture-optimization perspective. We first propose a layer-wise dynamic attention mechanism to enhance representational flexibility, resulting in the Layer-wise Dynamic Attention PINN (LDA-PINN). We then reformulate PINN training as a multi-task learning problem and introduce a conflict-resolved gradient update strategy to alleviate gradient interference, leading to the Gradient-Conflict-Resolved PINN (GC-PINN). By integrating these two components, we develop the Architecture-Conflict-Resolved PINN (ACR-PINN), which combines attentive representations with conflict-aware optimization while preserving the standard PINN loss formulation. Extensive experiments on benchmark PDEs, including the Burgers, Helmholtz, Klein-Gordon, and lid-driven cavity flow problems, demonstrate that ACR-PINN achieves faster convergence and significantly lower relative $L_2$ and $L_\infty$ errors than standard PINNs. These results highlight the effectiveness of architecture-optimization co-design for improving the robustness and accuracy of PINN-based solvers.
+ oai:arXiv.org:2601.12971v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pancheng Niu, Jun Guo, Qiaolin He, Yongming Chen, Yanchao Shi
+
+
+ Pardon? Evaluating Conversational Repair in Large Audio-Language Models
+ https://arxiv.org/abs/2601.12973
+ arXiv:2601.12973v1 Announce Type: new
+Abstract: Large Audio-Language Models (LALMs) have demonstrated strong performance in spoken question answering (QA), with existing evaluations primarily focusing on answer accuracy and robustness to acoustic perturbations. However, such evaluations implicitly assume that spoken inputs remain semantically answerable, an assumption that often fails in real-world interaction when essential information is missing. In this work, we introduce a repair-aware evaluation setting that explicitly distinguishes between answerable and unanswerable audio inputs. We define answerability as a property of the input itself and construct paired evaluation conditions using a semantic-acoustic masking protocol. Based on this setting, we propose the Evaluability Awareness and Repair (EAR) score, a non-compensatory metric that jointly evaluates task competence under answerable conditions and repair behavior under unanswerable conditions. Experiments on two spoken QA benchmarks across diverse LALMs reveal a consistent gap between answer accuracy and conversational reliability: while many models perform well when inputs are answerable, most fail to recognize semantic unanswerability and initiate appropriate conversational repair. These findings expose a limitation of prevailing accuracy-centric evaluation practices and motivate reliability assessments that treat unanswerable inputs as cues for repair and continued interaction.
+ oai:arXiv.org:2601.12973v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shuanghong Huang, Jinlei Xu, Youchao Zhou, Yanghao Zhou, Xuan Zhao, Chong Feng, Wenxuan Zhang
+
+
+ Bridging the Knowledge-Action Gap by Evaluating LLMs in Dynamic Dental Clinical Scenarios
+ https://arxiv.org/abs/2601.12974
+ arXiv:2601.12974v1 Announce Type: new
+Abstract: The transition of Large Language Models (LLMs) from passive knowledge retrievers to autonomous clinical agents demands a shift in evaluation, from static accuracy to dynamic behavioral reliability. To explore this boundary in dentistry, a domain where high-quality AI advice uniquely empowers patient-participatory decision-making, we present the Standardized Clinical Management & Performance Evaluation (SCMPE) benchmark, which comprehensively assesses performance from knowledge-oriented evaluations (static objective tasks) to workflow-based simulations (multi-turn simulated patient interactions). Our analysis reveals that while models demonstrate high proficiency in static objective tasks, their performance drops sharply in dynamic clinical dialogues, showing that the primary bottleneck lies not in knowledge retention, but in the critical challenges of active information gathering and dynamic state tracking. Mapping "Guideline Adherence" versus "Decision Quality" reveals a prevalent "High Efficacy, Low Safety" risk in general models. Furthermore, we quantify the impact of Retrieval-Augmented Generation (RAG). While RAG mitigates hallucinations in static tasks, its efficacy in dynamic workflows is limited and heterogeneous, sometimes causing degradation. This underscores that external knowledge alone cannot bridge the reasoning gap without domain-adaptive pre-training. This study empirically charts the capability boundaries of dental LLMs, providing a roadmap for bridging the gap between standardized knowledge and safe, autonomous clinical practice.
+ oai:arXiv.org:2601.12974v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hongyang Ma, Tiantian Gu, Huaiyuan Sun, Huilin Zhu, Yongxin Wang, Jie Li, Wubin Sun, Zeliang Lian, Yinghong Zhou, Yi Gao, Shirui Wang, Zhihui Tang
+
+
+ Kd-tree Based Wasserstein Distance Approximation for High-Dimensional Data
+ https://arxiv.org/abs/2601.12975
+ arXiv:2601.12975v1 Announce Type: new
+Abstract: The Wasserstein distance is a discrepancy measure between probability distributions, defined by an optimal transport problem. It has been used for various tasks such as retrieving similar items in high-dimensional images or text data. In retrieval applications, however, the Wasserstein distance is calculated repeatedly, and its cubic time complexity with respect to input size renders it unsuitable for large-scale datasets. Recently, tree-based approximation methods have been proposed to address this bottleneck. For example, the Flowtree algorithm computes transport on a quadtree and evaluates cost using the ground metric, and clustering-tree approaches have been reported to achieve high accuracy. However, these existing trees often incur significant construction time for preprocessing, and crucially, standard quadtrees cannot grow deep enough in high-dimensional spaces, resulting in poor approximation accuracy. In this paper, we propose kd-Flowtree, a kd-tree-based Wasserstein distance approximation method that uses a kd-tree for data embedding. Since kd-trees can grow sufficiently deep and adaptively even in high-dimensional cases, kd-Flowtree is capable of maintaining good approximation accuracy for such cases. In addition, kd-trees can be constructed more quickly than quadtrees, which contributes to reducing the computation time required for nearest neighbor search, including preprocessing. We provide a probabilistic upper bound on the nearest-neighbor search accuracy of kd-Flowtree, and show that this bound is independent of the dataset size. In the numerical experiments, we demonstrated that kd-Flowtree outperformed the existing Wasserstein distance approximation methods for retrieval tasks with real-world data.
+ oai:arXiv.org:2601.12975v1
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kanata Teshigawara, Keisho Oh, Ken Kobayashi, Kazuhide Nakata
+
+
+ Reproducibility in Event-Log Research: A Parametrised Generator and Benchmark for Event-based Signatures
+ https://arxiv.org/abs/2601.12978
+ arXiv:2601.12978v1 Announce Type: new
+Abstract: Event-based datasets are crucial for cybersecurity analysis. A key use case is detecting event-based signatures, which represent attacks spanning multiple events and can only be understood once the relevant events are identified and linked. Analysing event datasets is essential for monitoring system security, but their growing volume and frequency create significant scalability and processing difficulties. Researchers rely on these datasets to develop and test techniques for automatically identifying signatures. However, because real datasets are security-sensitive and rarely shared, it becomes difficult to perform meaningful comparative evaluation between different approaches. This work addresses this evaluation limitation by offering a systematic method for generating event logs with known ground truth, enabling reproducible and comparable research. We present a novel parametrised generation technique capable of producing synthetic event datasets that contain event-based signatures for discovery. To demonstrate the capabilities of the technique, we provide a benchmark in signature detection. Our benchmarking demonstrated the suitability of DBSCAN, achieving a score greater than 0.95 Adjusted Rand Index on most generated datasets. This work enhances the ability of researchers to develop and benchmark new cybersecurity techniques, ultimately contributing to more robust and effective cybersecurity measures.
+ oai:arXiv.org:2601.12978v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Saad Khan, Simon Parkinson, Monika Roopak
+
+
+ The Bitter Lesson of Diffusion Language Models for Agentic Workflows: A Comprehensive Reality Check
+ https://arxiv.org/abs/2601.12979
+ arXiv:2601.12979v1 Announce Type: new
+Abstract: The pursuit of real-time agentic interaction has driven interest in Diffusion-based Large Language Models (dLLMs) as alternatives to auto-regressive backbones, promising to break the sequential latency bottleneck. However, do such efficiency gains translate into effective agentic behavior? In this work, we present a comprehensive evaluation of dLLMs (e.g., LLaDA, Dream) across two distinct agentic paradigms: Embodied Agents (requiring long-horizon planning) and Tool-Calling Agents (requiring precise formatting). Contrary to the efficiency hype, our results on Agentboard and BFCL reveal a "bitter lesson": current dLLMs fail to serve as reliable agentic backbones, frequently leading to systematic failure. (1) In Embodied settings, dLLMs suffer from repeated attempts, failing to branch under temporal feedback. (2) In Tool-Calling settings, dLLMs fail to maintain symbolic precision (e.g. strict JSON schemas) under diffusion noise. To assess the potential of dLLMs in agentic workflows, we introduce DiffuAgent, a multi-agent evaluation framework that integrates dLLMs as plug-and-play cognitive cores. Our analysis shows that dLLMs are effective in non-causal roles (e.g., memory summarization and tool selection) but require the incorporation of causal, precise, and logically grounded reasoning mechanisms into the denoising process to be viable for agentic tasks.
+ oai:arXiv.org:2601.12979v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Qingyu Lu, Liang Ding, Kanjian Zhang, Jinxia Zhang, Dacheng Tao
+
+
+ Path to Diversity: A Primer on ISAC-izing Commodity Wi-Fi for Practical Deployments
+ https://arxiv.org/abs/2601.12980
+ arXiv:2601.12980v1 Announce Type: new
+Abstract: Integrated Sensing and Communication (ISAC) has emerged as a key paradigm in next-generation wireless networks. While the ubiquity and low cost of commodity Wi-Fi make it an ideal platform for wide-scale sensing, it is the continuous evolution of Wi-Fi standards-towards higher frequency bands, wider bandwidths, and larger antenna arrays-that fundamentally unlocks the physical resources required for high-performance ISAC. To structure this rapidly expanding field, numerous surveys have appeared. However, prevailing literature predominantly adopts a top-down perspective, emphasizing upper-layer applications or deep learning models while treating the physical layer as an opaque abstraction. Consequently, these works often fail to touch the bottom layer of signal formation and lack technical guidance on overcoming the physical barriers that constrain sensing performance. To bridge this gap, this tutorial takes a bottom-up approach, systematically analyzing the sensing gains brought by Wi-Fi advancements through the lens of physical-layer diversity. We organize the framework around four orthogonal dimensions: i) Temporal Diversity addresses synchronization gaps to enable absolute ranging; ii) Frequency Diversity expands the effective bandwidth to sharpen range resolution; iii) Link Diversity leverages distributed topologies and digital feedback to achieve ubiquitous observability; and iv) Spatial Diversity utilizes multi-antenna arrays to combine passive angular discrimination with active directional control. Collectively, these orthogonal dimensions resolve fundamental ambiguities in time, range, and space, bridging physical capabilities with challenging sensing diversities. By synthesizing these dimensions, this tutorial provides a comprehensive guide for "ISAC-izing" commodity Wi-Fi, paving the way for future standardization and robust deployment.
+ oai:arXiv.org:2601.12980v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hongbo Wang, Xin Li, Yinghui He, Jingzhi Hu, Mingming Xu, Zhe Chen, Fu Xiao, Jun Luo
+
+
+ Early Prediction of Type 2 Diabetes Using Multimodal data and Tabular Transformers
+ https://arxiv.org/abs/2601.12981
+ arXiv:2601.12981v1 Announce Type: new
+Abstract: This study introduces a novel approach for early Type 2 Diabetes Mellitus (T2DM) risk prediction using a tabular transformer (TabTrans) architecture to analyze longitudinal patient data. By processing patients' longitudinal health records and bone-related tabular data, our model captures complex, long-range dependencies in disease progression that conventional methods often overlook. We validated our TabTrans model on a retrospective Qatar BioBank (QBB) cohort of 1,382 subjects, comprising 725 men (146 diabetic, 579 healthy) and 657 women (133 diabetic, 524 healthy). The study integrated electronic health records (EHR) with dual-energy X-ray absorptiometry (DXA) data. To address class imbalance, we employed SMOTE and SMOTE-ENN resampling techniques. The proposed model's performance is evaluated against conventional machine learning (ML) and generative AI models, including Claude 3.5 Sonnet (Anthropic's constitutional AI), GPT-4 (OpenAI's generative pre-trained transformer), and Gemini Pro (Google's multimodal language model). Our TabTrans model demonstrated superior predictive performance, achieving ROC AUC $\geq$ 79.7 % for T2DM prediction compared to both generative AI models and conventional ML approaches. Feature interpretation analysis identified key risk indicators, with visceral adipose tissue (VAT) mass and volume, ward bone mineral density (BMD) and bone mineral content (BMC), T and Z-scores, and L1-L4 scores emerging as the most important predictors associated with diabetes development in Qatari adults. These findings demonstrate the significant potential of TabTrans for analyzing complex tabular healthcare data, providing a powerful tool for proactive T2DM management and personalized clinical interventions in the Qatari population.
+ Index Terms: tabular transformers, multimodal data, DXA data, diabetes, T2DM, feature interpretation, tabular data
+ oai:arXiv.org:2601.12981v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sulaiman Khan, Md. Rafiul Biswas, Zubair Shah
+
+
+ ChartAttack: Testing the Vulnerability of LLMs to Malicious Prompting in Chart Generation
+ https://arxiv.org/abs/2601.12983
+ arXiv:2601.12983v1 Announce Type: new
+Abstract: Multimodal large language models (MLLMs) are increasingly used to automate chart generation from data tables, enabling efficient data analysis and reporting but also introducing new misuse risks. In this work, we introduce ChartAttack, a novel framework for evaluating how MLLMs can be misused to generate misleading charts at scale. ChartAttack injects misleaders into chart designs, aiming to induce incorrect interpretations of the underlying data. Furthermore, we create AttackViz, a chart question-answering (QA) dataset where each (chart specification, QA) pair is labeled with effective misleaders and their induced incorrect answers. Experiments in in-domain and cross-domain settings show that ChartAttack significantly degrades the QA performance of MLLM readers, reducing accuracy by an average of 19.6 points and 14.9 points, respectively. A human study further shows an average 20.2 point drop in accuracy for participants exposed to misleading charts generated by ChartAttack. Our findings highlight an urgent need for robustness and security considerations in the design, evaluation, and deployment of MLLM-based chart generation systems. We make our code and data publicly available.
+ oai:arXiv.org:2601.12983v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jesus-German Ortiz-Barajas, Jonathan Tonglet, Vivek Gupta, Iryna Gurevych
+
+
+ Rules, Resources, and Restrictions: A Taxonomy of Task-Based Information Request Intents
+ https://arxiv.org/abs/2601.12985
+ arXiv:2601.12985v1 Announce Type: new
+Abstract: Understanding and classifying query intents can improve retrieval effectiveness by helping align search results with the motivations behind user queries. However, existing intent taxonomies are typically derived from system log data and capture mostly isolated information needs, while the broader task context often remains unaddressed. This limitation becomes increasingly relevant as interactions with Large Language Models (LLMs) expand user expectations from simple query answering toward comprehensive task support, for example, with purchasing decisions or in travel planning. At the same time, current LLMs still struggle to fully interpret complex and multifaceted tasks. To address this gap, we argue for a stronger task-based perspective on query intent. Drawing on a grounded-theory-based interview study with airport information clerks, we present a taxonomy of task-based information request intents that bridges the gap between traditional query-focused approaches and the emerging demands of AI-driven task-oriented search.
+ oai:arXiv.org:2601.12985v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3786304.3787863
+ Melanie A. Kilian, David Elsweiler
+
+
+ KinGuard: Hierarchical Kinship-Aware Fingerprinting to Defend Against Large Language Model Stealing
+ https://arxiv.org/abs/2601.12986
+ arXiv:2601.12986v1 Announce Type: new
+Abstract: Protecting the intellectual property of large language models requires robust ownership verification. Conventional backdoor fingerprinting, however, is flawed by a stealth-robustness paradox: to be robust, these methods force models to memorize fixed responses to high-perplexity triggers, but this targeted overfitting creates detectable statistical artifacts. We resolve this paradox with KinGuard, a framework that embeds a private knowledge corpus built on structured kinship narratives. Instead of memorizing superficial triggers, the model internalizes this knowledge via incremental pre-training, and ownership is verified by probing its conceptual understanding. Extensive experiments demonstrate KinGuard's superior effectiveness, stealth, and resilience against a battery of attacks including fine-tuning, input perturbation, and model merging. Our work establishes knowledge-based embedding as a practical and secure paradigm for model fingerprinting.
+ oai:arXiv.org:2601.12986v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhenhua Xu, Xiaoning Tian, Wenjun Zeng, Wenpeng Xing, Tianliang Lu, Gaolei Li, Chaochao Chen, Meng Han
+
+
+ Guiding vector field-based guidance under wind disturbances applied to a tailsitter UAV
+ https://arxiv.org/abs/2601.12987
+ arXiv:2601.12987v1 Announce Type: new
+Abstract: This paper develops a guidance control law based on a parametric Guiding Vector Field (GVF) and integrates it with a state-of-the-art acceleration and attitude control architecture for tailsitters. The resulting framework enables a direct comparison between traditional trajectory-tracking guidance and GVF-based path-following guidance using a realistic tailsitter model operating under windy conditions. Through extensive simulations, it is shown that for agile flight scenarios with wind and small initial position error, both guidance strategies achieve comparable tracking performance, indicating that the additional complexity introduced by the GVF formulation is not always justified. However, the GVF-based approach exhibits an advantage when initial deviation from the path is present, yielding smooth and well-behaved convergence toward the desired path. Two additional contributions support this evaluation. First, a modification of the parametric GVF is proposed that guarantees exponential stability of the tracking error dynamics for a single integrator system. Second, the differential flatness transform of a tailsitter vehicle is extended to account for explicit knowledge of the wind velocity vector.
+ oai:arXiv.org:2601.12987v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Evangelos Ntouros, Ewoud J. J. Smeur
+
+
+ PaperGuide: Making Small Language-Model Paper-Reading Agents More Efficient
+ https://arxiv.org/abs/2601.12988
+ arXiv:2601.12988v1 Announce Type: new
+Abstract: The accelerating growth of the scientific literature makes it increasingly difficult for researchers to track new advances through manual reading alone. Recent progress in large language models (LLMs) has therefore spurred interest in autonomous agents that can read scientific papers and extract task-relevant information. However, most existing approaches rely either on heavily engineered prompting or on a conventional SFT-RL training pipeline, both of which often lead to excessive and low-yield exploration. Drawing inspiration from cognitive science, we propose PaperCompass, a framework that mitigates these issues by separating high-level planning from fine-grained execution. PaperCompass first drafts an explicit plan that outlines the intended sequence of actions, and then performs detailed reasoning to instantiate each step by selecting the parameters for the corresponding function calls. To train such behavior, we introduce Draft-and-Follow Policy Optimization (DFPO), a tailored RL method that jointly optimizes both the draft plan and the final solution. DFPO can be viewed as a lightweight form of hierarchical reinforcement learning, aimed at narrowing the `knowing-doing' gap in LLMs. We provide a theoretical analysis that establishes DFPO's favorable optimization properties, supporting a stable and reliable training process. Experiments on paper-based question answering (Paper-QA) benchmarks show that PaperCompass improves efficiency over strong baselines without sacrificing performance, achieving results comparable to much larger models.
+ oai:arXiv.org:2601.12988v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zijian Wang, Tiancheng Huang, Hanqi Li, Da Ma, Lu Chen, Kai Yu
+
+
+ Enshrined Proposer Builder Separation in the presence of Maximal Extractable Value
+ https://arxiv.org/abs/2601.12989
+ arXiv:2601.12989v1 Announce Type: new
+Abstract: In blockchain systems operating under the Proof-of-Stake (PoS) consensus mechanism, fairness in transaction processing is essential to preserving decentralization and maintaining user trust. However, with the emergence of Maximal Extractable Value (MEV), concerns about economic centralization and content manipulation have intensified. To address these vulnerabilities, the Ethereum community has introduced Proposer Builder Separation (PBS), which separates block construction from block proposal. Later, enshrined Proposer Builder Separation (ePBS) was also proposed in EIP-7732, which embeds PBS directly into the Ethereum consensus layer.
+ Our work identifies key limitations of ePBS by developing a formal framework that combines mathematical analysis and agent-based simulations to evaluate its auction-based block-building mechanism, with particular emphasis on MEV dynamics. Our results reveal that, although ePBS redistributes responsibilities between builders and proposers, it significantly amplifies profit and content centralization: the Gini coefficient for profits rises from 0.1749 under standard PoS without ePBS to 0.8358 under ePBS. This sharp increase indicates that a small number of efficient builders capture most value via MEV-driven auctions. Moreover, 95.4% of the block value is rewarded to proposers in ePBS, revealing a strong economic bias despite their limited role in block assembly. These findings highlight that ePBS exacerbates incentives for builders to adopt aggressive MEV strategies, suggesting the need for future research into mechanism designs that better balance decentralization, fairness, and MEV mitigation.
+ oai:arXiv.org:2601.12989v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yitian Wang, Yebo Feng, Yingjiu Li, Jiahua Xu
+
+
+ RAGExplorer: A Visual Analytics System for the Comparative Diagnosis of RAG Systems
+ https://arxiv.org/abs/2601.12991
+ arXiv:2601.12991v1 Announce Type: new
+Abstract: The advent of Retrieval-Augmented Generation (RAG) has significantly enhanced the ability of Large Language Models (LLMs) to produce factually accurate and up-to-date responses. However, the performance of a RAG system is not determined by a single component but emerges from a complex interplay of modular choices, such as embedding models and retrieval algorithms. This creates a vast and often opaque configuration space, making it challenging for developers to understand performance trade-offs and identify optimal designs. To address this challenge, we present RAGExplorer, a visual analytics system for the systematic comparison and diagnosis of RAG configurations. RAGExplorer guides users through a seamless macro-to-micro analytical workflow. Initially, it empowers developers to survey the performance landscape across numerous configurations, allowing for a high-level understanding of which design choices are most effective. For a deeper analysis, the system enables users to drill down into individual failure cases, investigate how differences in retrieved information contribute to errors, and interactively test hypotheses by manipulating the provided context to observe the resulting impact on the generated answer. We demonstrate the effectiveness of RAGExplorer through detailed case studies and user studies, validating its ability to empower developers in navigating the complex RAG design space. Our code and user guide are publicly available at https://github.com/Thymezzz/RAGExplorer.
+ oai:arXiv.org:2601.12991v1
+ cs.HC
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Haoyu Tian, Yingchaojie Feng, Zhen Wen, Haoxuan Li, Minfeng Zhu, Wei Chen
+
+
+ Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization
+ https://arxiv.org/abs/2601.12993
+ arXiv:2601.12993v1 Announce Type: new
+Abstract: We introduce Being-H0.5, a foundational Vision-Language-Action (VLA) model designed for robust cross-embodiment generalization across diverse robotic platforms. While existing VLAs often struggle with morphological heterogeneity and data scarcity, we propose a human-centric learning paradigm that treats human interaction traces as a universal "mother tongue" for physical interaction. To support this, we present UniHand-2.0, the largest embodied pre-training recipe to date, comprising over 35,000 hours of multimodal data across 30 distinct robotic embodiments. Our approach introduces a Unified Action Space that maps heterogeneous robot controls into semantically aligned slots, enabling low-resource robots to bootstrap skills from human data and high-resource platforms. Built upon this human-centric foundation, we design a unified sequential modeling and multi-task pre-training paradigm to bridge human demonstrations and robotic execution. Architecturally, Being-H0.5 utilizes a Mixture-of-Transformers design featuring a novel Mixture-of-Flow (MoF) framework to decouple shared motor primitives from specialized embodiment-specific experts. Finally, to make cross-embodiment policies stable in the real world, we introduce Manifold-Preserving Gating for robustness under sensory shift and Universal Async Chunking to universalize chunked control across embodiments with different latency and control profiles. We empirically demonstrate that Being-H0.5 achieves state-of-the-art results on simulated benchmarks, such as LIBERO (98.9%) and RoboCasa (53.9%), while also exhibiting strong cross-embodiment capabilities on five robotic platforms.
+ oai:arXiv.org:2601.12993v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hao Luo, Ye Wang, Wanpeng Zhang, Sipeng Zheng, Ziheng Xi, Chaoyi Xu, Haiweng Xu, Haoqi Yuan, Chi Zhang, Yiqing Wang, Yicheng Feng, Zongqing Lu
+
+
+ AsyncBEV: Cross-modal Flow Alignment in Asynchronous 3D Object Detection
+ https://arxiv.org/abs/2601.12994
+ arXiv:2601.12994v1 Announce Type: new
+Abstract: In autonomous driving, multi-modal perception tasks like 3D object detection typically rely on well-synchronized sensors, both at training and inference. However, despite the use of hardware- or software-based synchronization algorithms, perfect synchrony is rarely guaranteed: Sensors may operate at different frequencies, and real-world factors such as network latency, hardware failures, or processing bottlenecks often introduce time offsets between sensors. Such asynchrony degrades perception performance, especially for dynamic objects. To address this challenge, we propose AsyncBEV, a trainable, lightweight, and generic module to improve the robustness of 3D Bird's Eye View (BEV) object detection models against sensor asynchrony. Inspired by scene flow estimation, AsyncBEV first estimates the 2D flow from the BEV features of two different sensor modalities, taking into account the known time offset between these sensor measurements. The predicted feature flow is then used to warp and spatially align the feature maps, which we show can easily be integrated into different current BEV detector architectures (e.g., BEV grid-based and token-based). Extensive experiments demonstrate that AsyncBEV improves robustness against both small and large asynchrony between LiDAR or camera sensors in both the token-based CMT and grid-based UniBEV, especially for dynamic objects. We significantly outperform the ego-motion-compensated CMT and UniBEV baselines, notably by $16.6\%$ and $11.9\%$ NDS on dynamic objects in the worst-case scenario of a $0.5$ s time offset. Code will be released upon acceptance.
+ oai:arXiv.org:2601.12994v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shiming Wang, Holger Caesar, Liangliang Nan, Julian F. P. Kooij
+
+
+ Graph Reasoning Paradigm: Structured and Symbolic Reasoning with Topology-Aware Reinforcement Learning for Large Language Models
+ https://arxiv.org/abs/2601.12995
+ arXiv:2601.12995v1 Announce Type: new
+Abstract: Long Chain-of-Thought (LCoT), achieved by Reinforcement Learning with Verifiable Rewards (RLVR), has proven effective in enhancing the reasoning capabilities of Large Language Models (LLMs). However, reasoning in current LLMs is primarily generated as plain text, where performing semantic evaluation on such unstructured data creates a computational bottleneck during training. Despite RLVR-based optimization, existing methods still suffer from coarse-grained supervision, reward hacking, high training costs, and poor generalization. To address these issues, we propose the Graph Reasoning Paradigm (GRP), which realizes structured and symbolic reasoning, implemented via graph-structured representations with step-level cognitive labels. Building upon GRP, we further design Process-Aware Stratified Clipping Group Relative Policy Optimization (PASC-GRPO), which leverages structured evaluation to replace semantic evaluation, achieves process-aware verification through graph-structured outcome rewards, and mitigates reward hacking via stratified clipping advantage estimation. Experiments demonstrate significant improvements across mathematical reasoning and code generation tasks. Data, models, and code will be released later.
+ oai:arXiv.org:2601.12995v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Runxuan Liu, Xianhao Ou, Xinyan Ma, Jiyuan Wang, Jiafeng Liang, Jiaqi Li, Tao He, Zheng Chu, Rongchuan Mu, Zekun Wang, Baoxin Wang, Dayong Wu, Ming Liu, Shijin Wang, Guoping Hu, Bing Qin
+
+
+ OFA-MAS: One-for-All Multi-Agent System Topology Design based on Mixture-of-Experts Graph Generative Models
+ https://arxiv.org/abs/2601.12996
+ arXiv:2601.12996v1 Announce Type: new
+Abstract: Multi-Agent Systems (MAS) offer a powerful paradigm for solving complex problems, yet their performance is critically dependent on the design of their underlying collaboration topology. As MAS become increasingly deployed in web services (e.g., search engines), designing adaptive topologies for diverse cross-domain user queries becomes essential. Current graph learning-based design methodologies often adhere to a "one-for-one" paradigm, where a specialized model is trained for each specific task domain. This approach suffers from poor generalization to unseen domains and fails to leverage shared structural knowledge across different tasks. To address this, we propose OFA-TAD, a one-for-all framework that generates adaptive collaboration graphs for any task described in natural language through a single universal model. Our approach integrates a Task-Aware Graph State Encoder (TAGSE) that filters task-relevant node information via sparse gating, and a Mixture-of-Experts (MoE) architecture that dynamically selects specialized sub-networks to drive node and edge prediction. We employ a three-stage training strategy: unconditional pre-training on canonical topologies for structural priors, large-scale conditional pre-training on LLM-generated datasets for task-topology mappings, and supervised fine-tuning on empirically validated graphs. Experiments across six diverse benchmarks show that OFA-TAD significantly outperforms specialized one-for-one models, generating highly adaptive MAS topologies. Code: https://github.com/Shiy-Li/OFA-MAS.
+ oai:arXiv.org:2601.12996v1
+ cs.MA
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shiyuan Li, Yixin Liu, Yu Zheng, Mei Li, Quoc Viet Hung Nguyen, Shirui Pan
+
+
+ Weighted-Hamming Metric: Bounds and Codes
+ https://arxiv.org/abs/2601.12998
+ arXiv:2601.12998v1 Announce Type: new
+Abstract: The weighted-Hamming metric generalizes the Hamming metric by assigning different weights to blocks of coordinates. It is well-suited for applications such as coding over independent parallel channels, each of which has a different level of importance or noise. From a coding-theoretic perspective, the actual error-correction capability of a code under this metric can exceed half its minimum distance. In this work, we establish direct bounds on this capability, tightening those obtained via minimum-distance arguments. We also propose a flexible code construction based on generalized concatenation and show that these codes can be efficiently decoded up to a lower bound on the error-correction capability.
+ oai:arXiv.org:2601.12998v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sebastian Bitzer, Alberto Ravagnani, Violetta Weger
+
+
+ PrivFly: A Privacy-Preserving Self-Supervised Framework for Rare Attack Detection in IoFT
+ https://arxiv.org/abs/2601.13003
+ arXiv:2601.13003v1 Announce Type: new
+Abstract: The Internet of Flying Things (IoFT) plays a vital role in modern applications such as aerial surveillance and smart mobility. However, it remains highly vulnerable to cyberattacks that threaten the confidentiality, integrity, and availability of sensitive data. Developing effective intrusion detection systems (IDS) for IoFT networks faces key challenges, including data imbalance, privacy concerns, and the limited capability of traditional models to detect rare but potentially damaging cyber threats. In this work, we propose PrivFly, a privacy-preserving IDS framework that integrates self-supervised representation learning and differential privacy (DP) to enhance detection performance in imbalanced IoFT network traffic. We propose a masked feature reconstruction module for self-supervised pretraining, improving feature representations and boosting rare-class detection. Differential privacy is applied during training to protect sensitive information without significantly compromising model performance. In addition, we conduct a SHapley additive explanations (SHAP)-based analysis to evaluate the impact of DP on feature importance and model behavior. Experimental results on the ECU-IoFT dataset show that PrivFly achieves up to 98% accuracy and 99% F1-score, effectively balancing privacy and detection performance for secure IoFT systems.
+ oai:arXiv.org:2601.13003v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Safaa Menssouri, El Mehdi Amhoud
+
+
+ An iterative approach to a fluid-rigid body interaction problem
+ https://arxiv.org/abs/2601.13004
+ arXiv:2601.13004v1 Announce Type: new
+Abstract: We study a novel approach for the existence of solutions to an incompressible fluid-rigid body interaction problem in three dimensions. Our approach introduces an iteration based on a sequence of related problems posed on domains with prescribed evolution. In particular, we prove the short-time existence of strong solutions to a system coupling the incompressible Navier--Stokes equations to the ordinary differential equations governing the motion of a rigid body, with no-slip boundary conditions on the boundary of the rigid body, provided that the relative density $\frac{\rho}{\rho_B}$ is sufficiently small. We also discuss the use of our iterative approach in numerical methods for the moving boundary problem, and complement this with some numerical experiments in two dimensions which demonstrate the necessity of the smallness assumption on $\frac{\rho}{\rho_B}$.
+ oai:arXiv.org:2601.13004v1
+ math.NA
+ cs.NA
+ math.AP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Charles M. Elliott, Thomas Sales
+
+
+ ArchAgent: Scalable Legacy Software Architecture Recovery with LLMs
+ https://arxiv.org/abs/2601.13007
+ arXiv:2601.13007v1 Announce Type: new
+Abstract: Recovering accurate architecture from large-scale legacy software is hindered by architectural drift, missing relations, and the limited context of Large Language Models (LLMs). We present ArchAgent, a scalable agent-based framework that combines static analysis, adaptive code segmentation, and LLM-powered synthesis to reconstruct multiview, business-aligned architectures from cross-repository codebases. ArchAgent introduces scalable diagram generation with contextual pruning and integrates cross-repository data to identify business-critical modules. Evaluations of typical large-scale GitHub projects show significant improvements over existing benchmarks. An ablation study confirms that dependency context improves the accuracy of generated architectures for production-level repositories, and a real-world case study demonstrates effective recovery of critical business logic from legacy projects. The dataset is available at https://github.com/panrusheng/arch-eval-benchmark.
+ oai:arXiv.org:2601.13007v1
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rusheng Pan, Bingcheng Mao, Tianyi Ma, Zhenhua Ling
+
+
+ HT-GNN: Hyper-Temporal Graph Neural Network for Customer Lifetime Value Prediction in Baidu Ads
+ https://arxiv.org/abs/2601.13013
+ arXiv:2601.13013v1 Announce Type: new
+Abstract: Lifetime value (LTV) prediction is crucial for news feed advertising, enabling platforms to optimize bidding and budget allocation for long-term revenue growth. However, it faces two major challenges: (1) demographic-based targeting creates segment-specific LTV distributions with large value variations across user groups; and (2) dynamic marketing strategies generate irregular behavioral sequences where engagement patterns evolve rapidly. We propose a Hyper-Temporal Graph Neural Network (HT-GNN), which jointly models demographic heterogeneity and temporal dynamics through three key components: (i) a hypergraph-supervised module capturing inter-segment relationships; (ii) a transformer-based temporal encoder with adaptive weighting; and (iii) a task-adaptive mixture-of-experts with dynamic prediction towers for multi-horizon LTV forecasting. Experiments on \textit{Baidu Ads} with 15 million users demonstrate that HT-GNN consistently outperforms state-of-the-art methods across all metrics and prediction horizons.
+ oai:arXiv.org:2601.13013v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaohui Zhao, Xinjian Zhao, Jiahui Zhang, Guoyu Liu, Houzhi Wang, Shu Wu
+
+
+ MeltRTL: Multi-Expert LLMs with Inference-time Intervention for RTL Code Generation
+ https://arxiv.org/abs/2601.13015
+ arXiv:2601.13015v1 Announce Type: new
+Abstract: The automated generation of hardware register-transfer level (RTL) code with large language models (LLMs) shows promise, yet current solutions struggle to produce syntactically and functionally correct code for complex digital designs. This paper introduces MeltRTL, a novel framework that integrates multi-expert attention with inference-time intervention (ITI) to significantly improve LLM-based RTL code generation accuracy without retraining the base model. MeltRTL introduces three key innovations: (1) A multi-expert attention architecture that dynamically routes design specifications to specialized expert networks, enabling targeted reasoning across various hardware categories; (2) An inference-time intervention mechanism that employs non-linear probes to detect and correct hardware-specific inaccuracies during generation; and (3) An efficient intervention framework that selectively operates on expert-specific attention heads with minimal computational overhead. We evaluate MeltRTL on the VerilogEval benchmark, achieving 96% synthesizability and 60% functional correctness, compared to the base LLM's 85.3% and 45.3%, respectively. These improvements are obtained entirely at inference time, with only 27% computational overhead and no model fine-tuning, making MeltRTL immediately deployable on existing pre-trained LLMs. Ablation studies further show the complementary benefits of multi-expert architecture and ITI, highlighting their synergistic effects when combined.
+ oai:arXiv.org:2601.13015v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar
+
+
+ Bi-Attention HateXplain : Taking into account the sequential aspect of data during explainability in a multi-task context
+ https://arxiv.org/abs/2601.13018
+ arXiv:2601.13018v1 Announce Type: new
+Abstract: Technological advances in the Internet and online social networks have brought many benefits to humanity. At the same time, this growth has led to an increase in hate speech, a major global threat. To improve the reliability of black-box models used for hate speech detection, post-hoc approaches such as LIME, SHAP, and LRP provide the explanation after training the classification model. In contrast, multi-task approaches based on the HateXplain benchmark learn to explain and classify simultaneously. However, results from HateXplain-based algorithms show that predicted attention varies considerably when it should be constant. This attention variability can lead to inconsistent interpretations, unstable predictions, and learning difficulties. To solve this problem, we propose the BiAtt-BiRNN-HateXplain (Bidirectional Attention BiRNN HateXplain) model, which is easier to explain than more complex LLMs, an important property given the need for transparency, and which takes the sequential nature of the input data into account during explanation thanks to a BiRNN layer. Thus, if the explanation is correctly estimated, then, thanks to multi-task learning (explainability and classification tasks), the model can classify better and commit fewer unintentional bias errors related to communities. Experimental results on the HateXplain data show a clear improvement in detection performance and explainability, as well as a reduction in unintentional bias.
+ oai:arXiv.org:2601.13018v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ghislain Dorian Tchuente Mondjo
+
+
+ PASs-MoE: Mitigating Misaligned Co-drift among Router and Experts via Pathway Activation Subspaces for Continual Learning
+ https://arxiv.org/abs/2601.13020
+ arXiv:2601.13020v1 Announce Type: new
+Abstract: Continual instruction tuning (CIT) requires multimodal large language models (MLLMs) to adapt to a stream of tasks without forgetting prior capabilities. A common strategy is to isolate updates by routing inputs to different LoRA experts. However, existing LoRA-based Mixture-of-Experts (MoE) methods often jointly update the router and experts in an indiscriminate way, causing the router's preferences to co-drift with experts' adaptation pathways and gradually deviate from early-stage input-expert specialization. We term this phenomenon Misaligned Co-drift, which blurs expert responsibilities and exacerbates forgetting. To address this, we introduce the pathway activation subspace (PASs), a LoRA-induced subspace that reflects which low-rank pathway directions an input activates in each expert, providing a capability-aligned coordinate system for routing and preservation. Based on PASs, we propose a fixed-capacity PASs-based MoE-LoRA method with two components: PAS-guided Reweighting, which calibrates routing using each expert's pathway activation signals, and PAS-aware Rank Stabilization, which selectively stabilizes rank directions important to previous tasks. Experiments on a CIT benchmark show that our approach consistently outperforms a range of conventional continual learning baselines and MoE-LoRA variants in both accuracy and anti-forgetting without adding parameters. Our code will be released upon acceptance.
+ oai:arXiv.org:2601.13020v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhiyan Hou, Haiyun Guo, Haokai Ma, Yandu Sun, Yonghui Yang, Jinqiao Wang
+
+
+ Enhancing Generalization in Sickle Cell Disease Diagnosis through Ensemble Methods and Feature Importance Analysis
+ https://arxiv.org/abs/2601.13021
+ arXiv:2601.13021v1 Announce Type: new
+Abstract: This work presents a novel approach for selecting the optimal ensemble-based classification method and features with a primary focus on achieving generalization, based on the state-of-the-art, to provide diagnostic support for Sickle Cell Disease using peripheral blood smear images of red blood cells. We pre-processed and segmented the microscopic images to ensure the extraction of high-quality features. To ensure the reliability of our proposed system, we conducted an in-depth analysis of interpretability. Leveraging techniques established in the literature, we extracted features from blood cells and employed ensemble machine learning methods to classify their morphology. Furthermore, we have devised a methodology to identify the most critical features for classification, aimed at reducing complexity and training time and enhancing interpretability in opaque models. Lastly, we validated our results using a new dataset, where our model outperformed state-of-the-art models in terms of generalization. An ensemble of the Random Forest and Extra Trees classifiers achieved a harmonic mean of precision and recall (F1-score) of 90.71\% and a Sickle Cell Disease diagnosis support score (SDS-score) of 93.33\%. These results demonstrate a notable improvement over previous results with the Gradient Boosting classifier (F1-score 87.32\% and SDS-score 89.51\%). To foster scientific progress, we have made available the parameters for each model, the implemented code library, and the confusion matrices with the raw data.
+ oai:arXiv.org:2601.13021v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1016/j.engappai.2024.109875
+ Engineering Applications of Artificial Intelligence (2025), 142, 109875
+ Nata\v{s}a Petrovi\'c, Gabriel Moy\`a-Alcover, Antoni Jaume-i-Cap\'o, Jose Maria Buades Rubio
+
+
+ Tears or Cheers? Benchmarking LLMs via Culturally Elicited Distinct Affective Responses
+ https://arxiv.org/abs/2601.13024
+ arXiv:2601.13024v1 Announce Type: new
+Abstract: Culture serves as a fundamental determinant of human affective processing and profoundly shapes how individuals perceive and interpret emotional stimuli. Despite this intrinsic link, extant evaluations of cultural alignment within Large Language Models primarily prioritize declarative knowledge such as geographical facts or established societal customs. These benchmarks remain insufficient to capture the subjective interpretative variance inherent to diverse sociocultural lenses. To address this limitation, we introduce CEDAR, a multimodal benchmark constructed entirely from scenarios capturing Culturally \underline{\textsc{E}}licited \underline{\textsc{D}}istinct \underline{\textsc{A}}ffective \underline{\textsc{R}}esponses. To construct CEDAR, we implement a novel pipeline that leverages LLM-generated provisional labels to isolate instances yielding cross-cultural emotional distinctions, and subsequently derives reliable ground-truth annotations through rigorous human evaluation. The resulting benchmark comprises 10,962 instances across seven languages and 14 fine-grained emotion categories, with each language including 400 multimodal and 1,166 text-only samples. Comprehensive evaluations of 17 representative multilingual models reveal a dissociation between language consistency and cultural alignment, demonstrating that culturally grounded affective understanding remains a significant challenge for current models.
+ oai:arXiv.org:2601.13024v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chongyuan Dai, Yaling Shen, Jinpeng Hu, Zihan Gao, Jia Li, Yishun Jiang, Yaxiong Wang, Liu Liu, Zongyuan Ge
+
+
+ Think3D: Thinking with Space for Spatial Reasoning
+ https://arxiv.org/abs/2601.13029
+ arXiv:2601.13029v1 Announce Type: new
+Abstract: Understanding and reasoning about the physical world requires spatial intelligence: the ability to interpret geometry, perspective, and spatial relations beyond 2D perception. While recent large vision-language models (VLMs) excel at visual understanding, they remain fundamentally 2D perceivers and struggle with genuine 3D reasoning. We introduce Think3D, a framework that enables VLM agents to think with 3D space. By leveraging 3D reconstruction models that recover point clouds and camera poses from images or videos, Think3D allows the agent to actively manipulate space through camera-based operations and ego/global-view switching, transforming spatial reasoning into an interactive 3D chain-of-thought process. Without additional training, Think3D significantly improves the spatial reasoning performance of advanced models such as GPT-4.1 and Gemini 2.5 Pro, yielding average gains of +7.8% on BLINK Multi-view and MindCube, and +4.7% on VSI-Bench. We further show that smaller models, which struggle with spatial exploration, benefit significantly from a reinforcement learning policy that enables the model to select informative viewpoints and operations. With RL, the benefit from tool usage increases from +0.7% to +6.8%. Our findings demonstrate that training-free, tool-augmented spatial exploration is a viable path toward more flexible and human-like 3D reasoning in multimodal agents, establishing a new dimension of multimodal intelligence. Code and weights are released at https://github.com/zhangzaibin/spagent.
+ oai:arXiv.org:2601.13029v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zaibin Zhang, Yuhan Wu, Lianjie Jia, Yifan Wang, Zhongbo Zhang, Yijiang Li, Binghao Ran, Fuxi Zhang, Zhuohan Sun, Zhenfei Yin, Lijun Wang, Huchuan Lu
+
+
+ Post-Quantum Secure Aggregation via Code-Based Homomorphic Encryption
+ https://arxiv.org/abs/2601.13031
+ arXiv:2601.13031v1 Announce Type: new
+Abstract: Secure aggregation enables aggregation of inputs from multiple parties without revealing individual contributions to the server or other clients. Existing post-quantum approaches based on homomorphic encryption offer practical efficiency but predominantly rely on lattice-based hardness assumptions. We present a code-based alternative for secure aggregation by instantiating a general framework based on key- and message-additive homomorphic encryption under the Learning Parity with Noise (LPN) assumption. Our construction employs a committee-based decryptor realized via secret sharing and incorporates a Chinese Remainder Theorem (CRT)-based optimization to reduce the communication costs of LPN-based instantiations. We analyze the security of the proposed scheme under a new Hint-LPN assumption and show that it is equivalent to standard LPN for suitable parameters. Finally, we evaluate performance and identify regimes in which our approach outperforms information-theoretically secure aggregation protocols.
+ oai:arXiv.org:2601.13031v1
+ cs.CR
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sebastian Bitzer, Maximilian Egger, Mumin Liu, Antonia Wachter-Zeh
+
+
+ SASA: Semantic-Aware Contrastive Learning Framework with Separated Attention for Triple Classification
+ https://arxiv.org/abs/2601.13035
+ arXiv:2601.13035v1 Announce Type: new
+Abstract: Knowledge Graphs~(KGs) often suffer from unreliable knowledge, which restricts their utility. Triple Classification~(TC) aims to determine the validity of triples from KGs. Recently, text-based methods learn entity and relation representations from natural language descriptions, significantly improving the generalization capabilities of TC models and setting new benchmarks in performance. However, there are still two critical challenges. First, existing methods often ignore the effective semantic interaction among different KG components. Second, most approaches adopt a single binary classification training objective, leading to insufficient semantic representation learning. To address these challenges, we propose \textbf{SASA}, a novel framework designed to enhance TC models via a separated attention mechanism and semantic-aware contrastive learning~(CL). Specifically, we first propose a separated attention mechanism to encode triples into decoupled contextual representations and then fuse them in a more effective, interactive way. Then, we introduce semantic-aware hierarchical CL as an auxiliary training objective to guide models in improving their discriminative capabilities and achieving sufficient semantic learning, considering CL at both the local and global levels. Experimental results across two benchmark datasets demonstrate that SASA significantly outperforms state-of-the-art methods. In terms of accuracy, we advance the state-of-the-art by +5.9\% on FB15k-237 and +3.4\% on YAGO3-10.
+ oai:arXiv.org:2601.13035v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xu Xiaodan, Hu Xiaolin
+
+
+ Feedforward-Feedback Integration in Flight Control: Reinforcement Learning with Sliding Mode Control
+ https://arxiv.org/abs/2601.13037
+ arXiv:2601.13037v1 Announce Type: new
+Abstract: Learning-based controllers leverage nonlinear couplings and enhance transients but seldom offer guarantees under tight input constraints. Robust feedback like sliding-mode control (SMC) provides these guarantees but is conservative in isolation. This paper creates a learning-augmented framework where a deep reinforcement learning policy produces feedforward commands and an SMC law imposes actuator limits, bounds learned authority, and guarantees robustness. The policy is modeled as a matched, bounded input, and Lyapunov-based conditions link SMC gains to the admissible feedforward bound, guaranteeing stability under saturation. This formulation is applicable to nonlinear, underactuated plants with hard constraints. To illustrate the methodology, the method is applied to a six-degree-of-freedom aircraft model and compared with reinforcement learning and isolated SMC. Simulation results show that the hybrid controller improves transient behavior and reduces control oscillations compared to standalone RL and SMC controllers, while preserving robustness under modeling uncertainties and disturbances. Even with partially trained policies, the SMC component of the controller stabilizes transients, whereas fully trained policies provide faster convergence, reduced constraint violations, and robustness. These results illustrate that learning-augmented control offers superior performance with robustness guarantees under tight input constraints.
+ oai:arXiv.org:2601.13037v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Imran Sayyed, Nandan Kumar Sinha
+
+
+ Solving Generalized Lyapunov Equations with guarantees: application to the Model Reduction of Switched Linear Systems
+ https://arxiv.org/abs/2601.13039
+ arXiv:2601.13039v1 Announce Type: new
+Abstract: We present an efficient strategy to approximate the solutions of large-scale generalized Lyapunov equations (GLEs) with rigorous, computable error guarantees. This work is motivated by applications in model order reduction (MOR) of switched linear systems (SLS) in control form, where GLEs play a central role. We analyze how inaccuracies in the numerical solution of GLEs propagate through the MOR procedure and affect the accuracy and reliability of the reduced order model. Furthermore, the classical balanced-truncation error estimate for SLS is neither theoretically nor practically viable, as it relies on restrictive assumptions requiring several linear matrix inequalities (LMIs) to be satisfied exactly by numerically computed solutions of the GLEs. To overcome these limitations, we propose a new MOR framework for SLS, called piecewise balanced reduction (PBR). The method is based on solving multiple GLEs and the construction of projection matrices that are piecewise constant in time to appropriately balance and subsequently reduce the SLS. We extend the standard balanced-truncation error bounds and demonstrate that the PBR formulation allows us to control the error arising from the inexact LMIs. In addition, our new error bound accounts for the influence of the piecewise constant time-varying projection matrices. Altogether, this renders the PBR approach applicable to a broad and flexible class of SLS. Numerical experiments are provided to corroborate our theoretical results.
+ oai:arXiv.org:2601.13039v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Mattia Manucci, Benjamin Unger
+
+
+ CPU-less parallel execution of lambda calculus in digital logic
+ https://arxiv.org/abs/2601.13040
+ arXiv:2601.13040v1 Announce Type: new
+Abstract: While transistor density is still increasing, clock speeds are not, motivating the search for new parallel architectures. One approach is to completely abandon the concept of the CPU -- and thus serial imperative programming -- and instead to specify and execute tasks in parallel, compiling from programming languages to data-flow digital logic. It is well-known that pure functional languages are inherently parallel, due to the Church-Rosser theorem, and CPU-based parallel compilers exist for many functional languages. However, these still rely on conventional CPUs and their von Neumann bottlenecks. An alternative is to compile functional languages directly into digital logic to maximize available parallelism. It is difficult to work with complete modern functional languages due to their many features, so we demonstrate a proof-of-concept system using lambda calculus as the source language and compiling to digital logic. We show how functional hardware can be tailored to a simplistic functional language, forming the ground for a new model of CPU-less functional computation. At the algorithmic level, we use a tree-based representation, with data localized within nodes and communicated data passed between them. This is implemented by physical digital logic blocks corresponding to nodes, and buses enabling message passing. Node types and behaviors correspond to lambda grammar forms, and beta-reductions are performed in parallel, allowing branches independent from one another to perform transformations simultaneously. As evidence for this approach, we present an implementation along with simulation results. Successful execution of a test suite of lambda expressions suggests that the approach could be scaled to larger functional languages.
+ oai:arXiv.org:2601.13040v1
+ cs.DC
+ cs.AR
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Harry Fitchett, Charles Fox
+
+
+ High-Throughput and Scalable Secure Inference Protocols for Deep Learning with Packed Secret Sharing
+ https://arxiv.org/abs/2601.13041
+ arXiv:2601.13041v1 Announce Type: new
+Abstract: Most existing secure neural network inference protocols based on secure multi-party computation (MPC) typically support at most four participants, demonstrating severely limited scalability. Liu et al. (USENIX Security'24) presented the first relatively practical approach by utilizing Shamir secret sharing with Mersenne prime fields. However, when processing deeper neural networks such as VGG16, their protocols incur substantial communication overhead, resulting in particularly significant latency in wide-area network (WAN) environments. In this paper, we propose a high-throughput and scalable MPC protocol for neural network inference against semi-honest adversaries in the honest-majority setting. The core of our approach lies in leveraging packed Shamir secret sharing (PSS) to enable parallel computation and reduce communication complexity. The main contributions are three-fold: i) We present a communication-efficient protocol for vector-matrix multiplication, based on our newly defined notion of vector-matrix multiplication-friendly random share tuples. ii) We design a filter packing approach that enables parallel convolution. iii) We further extend all non-linear protocols based on Shamir secret sharing to PSS-based protocols for achieving parallel non-linear operations. Extensive experiments across various datasets and neural networks demonstrate the superiority of our approach in WAN. Compared to Liu et al. (USENIX Security'24), our scheme reduces communication by up to 5.85x, 11.17x, and 6.83x in offline, online, and total communication overhead, respectively. In addition, our scheme is up to 1.59x, 2.61x, and 1.75x faster in offline, online, and total running time, respectively.
+ oai:arXiv.org:2601.13041v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qinghui Zhang, Xiaojun Chen, Yansong Zhang, Xudong Chen
+
+
+ Static Is Not Enough: A Comparative Study of VR and SpaceMouse in Static and Dynamic Teleoperation Tasks
+ https://arxiv.org/abs/2601.13042
+ arXiv:2601.13042v1 Announce Type: new
+Abstract: Imitation learning relies on high-quality demonstrations, and teleoperation is a primary way to collect them, making teleoperation interface choice crucial for the data. Prior work mainly focused on static tasks, i.e., discrete, segmented motions, yet demonstrations also include dynamic tasks requiring reactive control. As dynamic tasks impose fundamentally different interface demands, insights from static-task evaluations cannot generalize. To address this gap, we conduct a within-subjects study comparing a VR controller and a SpaceMouse across two static and two dynamic tasks ($N=25$). We assess success rate, task duration, and cumulative success, alongside NASA-TLX, SUS, and open-ended feedback. Results show statistically significant advantages for VR: higher success rates, particularly on dynamic tasks, shorter successful execution times across tasks, and earlier successes across attempts, with significantly lower workload and higher usability. As existing VR teleoperation systems are rarely open-source or suited for dynamic tasks, we release our VR interface to fill this gap.
+ oai:arXiv.org:2601.13042v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yijun Zhou, Muhan Hou, Kim Baraka
+
+
+ Typhoon ASR Real-time: FastConformer-Transducer for Thai Automatic Speech Recognition
+ https://arxiv.org/abs/2601.13044
+ arXiv:2601.13044v1 Announce Type: new
+Abstract: Large encoder-decoder models like Whisper achieve strong offline transcription but remain impractical for streaming applications due to high latency. However, due to the accessibility of pre-trained checkpoints, the open Thai ASR landscape remains dominated by these offline architectures, leaving a critical gap in efficient streaming solutions. We present Typhoon ASR Real-time, a 115M-parameter FastConformer-Transducer model for low-latency Thai speech recognition. We demonstrate that rigorous text normalization can match the impact of model scaling: our compact model achieves a 45x reduction in computational cost compared to Whisper Large-v3 while delivering comparable accuracy. Our normalization pipeline resolves systemic ambiguities in Thai transcription -- including context-dependent number verbalization and repetition markers (mai yamok) -- creating consistent training targets. We further introduce a two-stage curriculum learning approach for Isan (north-eastern) dialect adaptation that preserves Central Thai performance. To address reproducibility challenges in Thai ASR, we release the Typhoon ASR Benchmark, a gold-standard human-labeled dataset with transcriptions following established Thai linguistic conventions, providing standardized evaluation protocols for the research community.
+ oai:arXiv.org:2601.13044v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Warit Sirichotedumrong, Adisai Na-Thalang, Potsawee Manakul, Pittawat Taveekitworachai, Sittipong Sripaisarnmongkol, Kunat Pipatanakul
+
+
+ Exploration on Highly Dynamic Graphs
+ https://arxiv.org/abs/2601.13047
+ arXiv:2601.13047v1 Announce Type: new
+Abstract: We study the exploration problem by mobile agents in two prominent models of dynamic graphs: $1$-Interval Connectivity and Connectivity Time. The $1$-Interval Connectivity model was introduced by Kuhn et al.~[STOC 2010], and the Connectivity Time model was proposed by Michail et al.~[JPDC 2014]. Recently, Saxena et al.~[TCS 2025] investigated the exploration problem under both models. In this work, we first strengthen the existing impossibility results for the $1$-Interval Connectivity model. We then show that, in Connectivity Time dynamic graphs, exploration is impossible with $\frac{(n-1)(n-2)}{2}$ mobile agents, even when the agents have full knowledge of all system parameters, global communication, full visibility, and infinite memory. This significantly improves the previously known bound of $n$. Moreover, we prove that to solve exploration with $\frac{(n-1)(n-2)}{2}+1$ agents, $1$-hop visibility is necessary. Finally, we present an exploration algorithm that uses $\frac{(n-1)(n-2)}{2}+1$ agents, assuming global communication, $1$-hop visibility, and $O(\log n)$ memory per agent.
+ oai:arXiv.org:2601.13047v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ashish Saxena, Kaushik Mondal
+
+
+ Analysis of Long Range Dependency Understanding in State Space Models
+ https://arxiv.org/abs/2601.13048
+ arXiv:2601.13048v1 Announce Type: new
+Abstract: Although state-space models (SSMs) have demonstrated strong performance on long-sequence benchmarks, most research has emphasized predictive accuracy rather than interpretability. In this work, we present the first systematic kernel interpretability study of the diagonalized state-space model (S4D) trained on a real-world task (vulnerability detection in source code). Through time and frequency domain analysis of the S4D kernel, we show that the long-range modeling capability of S4D varies significantly under different model architectures, affecting model performance. For instance, we show that, depending on the architecture, the S4D kernel can behave as a low-pass, band-pass, or high-pass filter. The insights from our analysis can guide future work in designing better S4D-based models.
+ oai:arXiv.org:2601.13048v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Srividya Ravikumar, Abhinav Anand, Shweta Verma, Mira Mezini
+
+
+ Profiling German Text Simplification with Interpretable Model-Fingerprints
+ https://arxiv.org/abs/2601.13050
+ arXiv:2601.13050v1 Announce Type: new
+Abstract: While Large Language Models (LLMs) produce highly nuanced text simplifications, developers currently lack tools for a holistic, efficient, and reproducible diagnosis of their behavior. This paper introduces the Simplification Profiler, a diagnostic toolkit that generates a multidimensional, interpretable fingerprint of simplified texts. Multiple aggregated simplifications from a model yield the model's fingerprint. This novel evaluation paradigm is particularly vital for languages where the data scarcity problem is magnified when creating flexible models for diverse target groups rather than a single, fixed simplification style. We propose that measuring a model's unique behavioral signature is more relevant in this context as an alternative to correlating metrics with human preferences. We operationalize this with a practical meta-evaluation of our fingerprints' descriptive power, which bypasses the need for large, human-rated datasets. This test measures whether a simple linear classifier can reliably identify various model configurations from the simplifications they create, confirming that our metrics are sensitive to a model's specific characteristics. The Profiler can distinguish high-level behavioral variations between prompting strategies and fine-grained changes from prompt engineering, including few-shot examples. Our complete feature set achieves classification F1-scores up to 71.9 %, improving upon simple baselines by over 48 percentage points. The Simplification Profiler thus offers developers a granular, actionable analysis to build more effective and truly adaptive text simplification systems.
+ oai:arXiv.org:2601.13050v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Lars Kl\"oser, Mika Beele, Bodo Kraft
+
+
+ GridNet-HD: A High-Resolution Multi-Modal Dataset for LiDAR-Image Fusion on Power Line Infrastructure
+ https://arxiv.org/abs/2601.13052
+ arXiv:2601.13052v1 Announce Type: new
+Abstract: This paper presents GridNet-HD, a multi-modal dataset for 3D semantic segmentation of overhead electrical infrastructure, pairing high-density LiDAR with high-resolution oblique imagery. The dataset comprises 7,694 images and 2.5 billion points annotated into 11 classes, with predefined splits and mIoU metrics. Unimodal (LiDAR-only, image-only) and multi-modal fusion baselines are provided. On GridNet-HD, fusion models outperform the best unimodal baseline by +5.55 mIoU, highlighting the complementarity of geometry and appearance. As reviewed in Sec. 2, no public dataset jointly provides high-density LiDAR and high-resolution oblique imagery with 3D semantic labels for power-line assets. Dataset, baselines, and code are available: https://huggingface.co/collections/heig-vd-geo/gridnet-hd.
+ oai:arXiv.org:2601.13052v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Antoine Carreaud, Shanci Li, Malo De Lacour, Digre Frinde, Jan Skaloud, Adrien Gressin
+
+
+ TinyML-Enabled IoT for Sustainable Precision Irrigation
+ https://arxiv.org/abs/2601.13054
+ arXiv:2601.13054v1 Announce Type: new
+Abstract: Small-scale farming communities are disproportionately affected by water scarcity, erratic climate patterns, and a lack of access to advanced, affordable agricultural technologies. To address these challenges, this paper presents a novel, edge-first IoT framework that integrates Tiny Machine Learning (TinyML) for intelligent, offline-capable precision irrigation. The proposed four-layer architecture leverages low-cost hardware, an ESP32 microcontroller as an edge inference node, and a Raspberry Pi as a local edge server to enable autonomous decision-making without cloud dependency. The system utilizes capacitive soil moisture, temperature, humidity, pH, and ambient light sensors for environmental monitoring. A rigorous comparative analysis of ensemble models identified gradient boosting as superior, achieving an R^2 score of 0.9973 and a Mean Absolute Percentage Error (MAPE) of 0.99%, outperforming a random forest model (R^2 = 0.9916, MAPE = 1.81%). This optimized model was converted and deployed as a lightweight TinyML inference engine on the ESP32 and predicts irrigation needs with exceptional accuracy (MAPE < 1%). Local communication is facilitated by an MQTT-based LAN protocol, ensuring reliable operation in areas with limited or no internet connectivity. Experimental validation in a controlled environment demonstrated a significant reduction in water usage compared to traditional methods, while the system's low-power design and offline functionality confirm its viability for sustainable, scalable deployment in resource-constrained rural settings. This work provides a practical, cost-effective blueprint for bridging the technological divide in agriculture and enhancing water-use efficiency through on-device artificial intelligence.
+ oai:arXiv.org:2601.13054v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kamogelo Taueatsoala, Caitlyn Daniels, Angelina J. Ramsunar, Petrus Bronkhorst, Absalom E. Ezugwu
+
+
+ Convex Model Predictive Control for Safe Output Consensus of Nonlinear Multi-Agent Systems
+ https://arxiv.org/abs/2601.13057
+ arXiv:2601.13057v1 Announce Type: new
+Abstract: Nonlinear dynamics and safety constraints typically result in a nonlinear programming problem when applying model predictive control to achieve safe output consensus. To avoid the heavy computational burden of solving a nonlinear programming problem directly, this paper proposes a novel Convex Model Predictive Control (CMPC) approach based on a Sequential Quadratic Programming (SQP) scheme. The core of our method lies in transforming the nonlinear constraints into linear forms: we linearize the system dynamics and convexify the discrete-time high-order control barrier functions using a proposed tangent-line projection method. Consequently, the original problem is reduced to a quadratic program that can be iteratively solved within the SQP scheme at each time step of CMPC. Furthermore, we provide the formal guarantee of the convergence of the SQP scheme, and subsequently guarantee the recursive feasibility and stability of CMPC. Simulations on multi-agent systems with unicycle dynamics demonstrate a 35-52 times reduction in computation time compared with baseline methods, confirming the suitability of the proposed approach for real-time safe output consensus control.
+ oai:arXiv.org:2601.13057v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chao Wang, Shuyuan Zhang, Lei Wang
+
+
+ Prototype Learning-Based Few-Shot Segmentation for Low-Light Crack on Concrete Structures
+ https://arxiv.org/abs/2601.13059
+ arXiv:2601.13059v1 Announce Type: new
+Abstract: Crack detection is critical for concrete infrastructure safety, but real-world cracks often appear in low-light environments like tunnels and bridge undersides, degrading computer vision segmentation accuracy. Pixel-level annotation of low-light crack images is extremely time-consuming, yet most deep learning methods require large, well-illuminated datasets. We propose a dual-branch prototype learning network integrating Retinex theory with few-shot learning for low-light crack segmentation. Retinex-based reflectance components guide illumination-invariant global representation learning, while metric learning reduces dependence on large annotated datasets. We introduce a cross-similarity prior mask generation module that computes high-dimensional similarities between query and support features to capture crack location and structure, and a multi-scale feature enhancement module that fuses multi-scale features with the prior mask to alleviate spatial inconsistency. Extensive experiments on multiple benchmarks demonstrate consistent state-of-the-art performance under low-light conditions. Code: https://github.com/YulunGuo/CrackFSS.
+ oai:arXiv.org:2601.13059v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yulun Guo
+
+
+ MagicGUI-RMS: A Multi-Agent Reward Model System for Self-Evolving GUI Agents via Automated Feedback Reflux
+ https://arxiv.org/abs/2601.13060
+ arXiv:2601.13060v1 Announce Type: new
+Abstract: Graphical user interface (GUI) agents are rapidly progressing toward autonomous interaction and reliable task execution across diverse applications. However, two central challenges remain unresolved: automating the evaluation of agent trajectories and generating high-quality training data at scale to enable continual improvement. Existing approaches often depend on manual annotation or static rule-based verification, which restricts scalability and limits adaptability in dynamic environments. We present MagicGUI-RMS, a multi-agent reward model system that delivers adaptive trajectory evaluation, corrective feedback, and self-evolving learning capabilities. MagicGUI-RMS integrates a Domain-Specific Reward Model (DS-RM) with a General-Purpose Reward Model (GP-RM), enabling fine-grained action assessment and robust generalization across heterogeneous GUI tasks. To support reward learning at scale, we design a structured data construction pipeline that automatically produces balanced and diverse reward datasets, effectively reducing annotation costs while maintaining sample fidelity. During execution, the reward model system identifies erroneous actions, proposes refined alternatives, and continuously enhances agent behavior through an automated data-reflux mechanism. Extensive experiments demonstrate that MagicGUI-RMS yields substantial gains in task accuracy and behavioral robustness. These results establish MagicGUI-RMS as a principled and effective foundation for building self-improving GUI agents driven by reward-based adaptation.
+ oai:arXiv.org:2601.13060v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zecheng Li, Zhihui Cao, Wenke Huang, Yudong Zhang, Keying Qi, Rui Wang, Zeyu Zheng, Jian Zhao, Hao Zhu, Hengxin Wu, Yuran Wang, Guitao Fan, Guokun Wu, Yicong Liu, Zhilin Gao, Haikun Xu, He Yang, Minqi Xiang, Xingyu Liu, Zuojian Wang
+
+
+ Two-timescale Optimization for Hybrid Mechanically and Electronically Tunable 6DMA Aided Communication
+ https://arxiv.org/abs/2601.13064
+ arXiv:2601.13064v1 Announce Type: new
+Abstract: This letter proposes a hybrid mechanically and electronically tunable six-dimensional movable antenna (6DMA) base station (BS) architecture for future wireless communication networks. Such a BS consists of multiple antenna arrays that are mechanically movable along a circular rail to adapt to horizontal user hotspots, and each array is equipped with pattern reconfigurable antennas (PRAs) that are capable of electronically switching among a set of specified beam patterns to cater to the instantaneous user channels. The mechanical adjustment provides wide-angle coverage but suffers from slow response, while the electronic tuning enables rapid beam reconfiguration but with limited angular range. To effectively combine their complementary advantages, we propose to jointly design both mechanical and electronic configurations to maximize the average sum-rate of users via a two-timescale optimization approach, in which the array positions are optimized on the long timescale according to large-scale user distribution statistics, and the pattern selection vectors are optimized on the short timescale to enable fast beam alignment based on the instantaneous user locations. An alternating optimization algorithm based on the Monte Carlo sampling method is developed to solve the problem efficiently. Finally, simulation results show that our proposed design achieves significant performance gains over various benchmark schemes.
+ oai:arXiv.org:2601.13064v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuyan Zhou, Haocheng Hua, Jie Xu, Rui Zhang
+
+
+ Stability of Information-Based Routing in Dynamic Transportation Networks
+ https://arxiv.org/abs/2601.13066
+ arXiv:2601.13066v1 Announce Type: new
+Abstract: Recent studies on transportation networks have shown that real-time route guidance can inadvertently induce congestion or oscillatory traffic patterns. Nevertheless, such technologies also offer a promising opportunity to manage traffic non-intrusively by shaping the information delivered to users, thereby mitigating congestion and enhancing network stability. A key step toward this goal is to identify information signals that ensure the existence of an equilibrium with desirable stability and convergence properties. This challenge is particularly relevant when traffic density and routing dynamics evolve concurrently, as increasingly occurs with digital signaling and real-time navigation technologies. To address this, we analyze a parallel-path transportation network with a single origin-destination pair, incorporating joint traffic density and logit-based routing dynamics that evolve at the same timescale. We characterize a class of density-dependent traffic information that guarantees a unique equilibrium in the free-flow regime, ensures its asymptotic stability, and keeps traffic densities within the free-flow region for all time. The theoretical results are complemented by a numerical case study demonstrating how the framework can inform the design of traffic information that reduces total travel time without compromising credibility.
+ oai:arXiv.org:2601.13066v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shaya Garjani, Ashish Cherukuri, Bayu Jayawardhana, Nima Monshizadeh
+
+
+ METIS: Mentoring Engine for Thoughtful Inquiry & Solutions
+ https://arxiv.org/abs/2601.13075
+ arXiv:2601.13075v1 Announce Type: new
+Abstract: Many students lack access to expert research mentorship. We ask whether an AI mentor can move undergraduates from an idea to a paper. We build METIS, a tool-augmented, stage-aware assistant with literature search, curated guidelines, methodology checks, and memory. We evaluate METIS against GPT-5 and Claude Sonnet 4.5 across six writing stages using LLM-as-a-judge pairwise preferences, student-persona rubrics, short multi-turn tutoring, and evidence/compliance checks. On 90 single-turn prompts, LLM judges preferred METIS to Claude Sonnet 4.5 in 71% and to GPT-5 in 54%. Student scores (clarity/actionability/constraint-fit; 90 prompts x 3 judges) are higher across stages. In multi-turn sessions (five scenarios/agent), METIS yields slightly higher final quality than GPT-5. Gains concentrate in document-grounded stages (D-F), consistent with stage-aware routing and grounding; failure modes include premature tool routing, shallow grounding, and occasional stage misclassification.
+ oai:arXiv.org:2601.13075v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Abhinav Rajeev Kumar, Dhruv Trehan, Paras Chopra
+
+
+ What's it like to be a chat? On the co-simulation of artificial minds in human-AI conversations
+ https://arxiv.org/abs/2601.13081
+ arXiv:2601.13081v1 Announce Type: new
+Abstract: Large Language Models (LLMs) can simulate person-like things which at least appear to have stable behavioural and psychological dispositions. Call these things characters. Are characters minded and psychologically continuous entities with mental states like beliefs, desires and intentions? Illusionists about characters say No. On this view, characters are merely anthropomorphic projections in the mind of the user and so lack mental states. Jonathan Birch (2025) defends this view. He says that the distributed nature of LLM processing, in which several LLMs may be implicated in the simulation of a character in a single conversation, precludes the existence of a persistent minded entity that is identifiable with the character. Against illusionism, we argue for a realist position on which characters exist as minded and psychologically continuous entities. Our central point is that Birch's argument for illusionism rests on a category error: characters are not internal to the LLMs that simulate them, but rather are co-simulated by LLMs and users, emerging in a shared conversational workspace through a process of mutual theory of mind modelling. We argue that characters, and their minds, exist as 'real patterns' on grounds that attributing mental states to characters is essential for making efficient and accurate predictions about the conversational dynamics (c.f. Dennett, 1991). Furthermore, because the character exists within the conversational workspace rather than within the LLM, psychological continuity is preserved even when the underlying computational substrate is distributed across multiple LLM instances.
+ oai:arXiv.org:2601.13081v1
+ cs.HC
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Geoff Keeling, Winnie Street
+
+
+ Adversarial News and Lost Profits: Manipulating Headlines in LLM-Driven Algorithmic Trading
+ https://arxiv.org/abs/2601.13082
+ arXiv:2601.13082v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are increasingly adopted in the financial domain. Their exceptional ability to analyse textual data makes them well-suited for inferring the sentiment of finance-related news. Such feedback can be leveraged by algorithmic trading systems (ATS) to guide buy/sell decisions. However, this practice bears the risk that a threat actor may craft "adversarial news" intended to mislead an LLM. In particular, the news headline may include "malicious" content that remains invisible to human readers but which is still ingested by the LLM. Although prior work has studied textual adversarial examples, their system-wide impact on LLM-supported ATS has not yet been quantified in terms of monetary risk. To address this threat, we consider an adversary with no direct access to an ATS but able to alter stock-related news headlines on a single day. We evaluate two human-imperceptible manipulations in a financial context: Unicode homoglyph substitutions that misroute models during stock-name recognition, and hidden-text clauses that alter the sentiment of the news headline. We implement a realistic ATS in Backtrader that fuses an LSTM-based price forecast with LLM-derived sentiment (FinBERT, FinGPT, FinLLaMA, and six general-purpose LLMs), and quantify monetary impact using portfolio metrics. Experiments on real-world data show that a single-day manipulation within a 14-month trading period can reliably mislead LLMs and reduce annual returns by up to 17.7 percentage points. To assess real-world feasibility, we analyze popular scraping libraries and trading platforms and survey 27 FinTech practitioners, confirming our hypotheses. We notified trading platform owners of this security issue.
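A Unicode homoglyph substitution of the kind evaluated here can be sketched in a few lines; the character map and headline below are illustrative examples, not the paper's actual attack payload:

```python
# Cyrillic look-alikes for a few Latin letters (a minimal, hypothetical map).
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440"}

def substitute_homoglyphs(text, targets):
    """Swap characters listed in `targets` for visually identical
    Cyrillic code points, leaving all other characters untouched."""
    return "".join(
        HOMOGLYPHS.get(ch, ch) if ch in targets else ch for ch in text
    )

headline = "Apple shares rally on earnings"
poisoned = substitute_homoglyphs(headline, set("aeop"))
# The two strings render alike but no longer compare equal, so a model's
# string matching on the stock name can be misrouted.
```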
+ oai:arXiv.org:2601.13082v1
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Advije Rizvani, Giovanni Apruzzese, Pavel Laskov
+
+
+ No Traffic to Cry: Traffic-Oblivious Link Deactivation for Green Traffic Engineering
+ https://arxiv.org/abs/2601.13087
+ arXiv:2601.13087v1 Announce Type: new
+Abstract: As internet traffic grows, the underlying infrastructure consumes increasing amounts of energy. During off-peak hours, large parts of the networks remain underutilized, presenting significant potential for energy savings. Existing Green Traffic Engineering approaches attempt to leverage this potential by switching off those parts of the networks that are not required for the routing of specific traffic matrices. When traffic changes, the approaches need to adapt rapidly, which is hard to achieve given the complexity of the problem. We take a fundamentally different approach: instead of considering a specific traffic matrix, we rely on a traffic-oblivious routing scheme. We discuss the NP-hard problem of activating as few connections as possible while still guaranteeing that any down-scaled traffic matrix $\varrho\cdot T$ can be routed, where $\varrho \in (0,1)$ and $T$ is any traffic matrix routable in the original network. We present a $\max(\frac{1}{\varrho\cdot\lambda_{\text{min}}},2)$-approximation algorithm for this problem, with $\lambda_{\text{min}}$ denoting the minimum number of connections between any two connected routers. Additionally, we propose two post-processing heuristics to further improve solution quality. Our evaluation shows that we can quickly generate near-optimal solutions. By design, our method avoids the need for frequent reconfigurations and offers a promising direction to achieve practical energy savings in backbone networks.
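The stated approximation guarantee is straightforward to evaluate numerically; a small sketch, where the sample values of rho and lambda_min are illustrative:

```python
def approximation_factor(rho, lambda_min):
    """The paper's guarantee: max(1 / (rho * lambda_min), 2)."""
    return max(1.0 / (rho * lambda_min), 2.0)

# With half-scaled traffic (rho = 0.5) and at least lambda_min = 2
# connections between any pair of connected routers, the factor is
# max(1 / (0.5 * 2), 2) = 2.
factor = approximation_factor(0.5, 2)
```

The bound degrades as rho shrinks: guaranteeing routability of only a small fraction of the original traffic permits proportionally more link activations relative to the optimum.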
+ oai:arXiv.org:2601.13087v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Max Ilsen, Daniel Otten, Nils Aschenbruck, Markus Chimani
+
+
+ Exploiting Light To Enhance The Endurance and Navigation of Lighter-Than-Air Micro-Drones
+ https://arxiv.org/abs/2601.13088
+ arXiv:2601.13088v1 Announce Type: new
+Abstract: Micro-Unmanned Aerial Vehicles (UAVs) are rapidly expanding into tasks from inventory to environmental sensing, yet their short endurance and unreliable navigation in GPS-denied spaces limit deployment. Lighter-Than-Air (LTA) drones offer an energy-efficient alternative: they use a helium envelope to provide buoyancy, which enables near-zero-power drain during hovering and much longer operation. LTAs are promising, but their design is complex, and they lack integrated solutions that enable sustained autonomous operation and navigation with simple, low-cost infrastructure.
+ We propose a compact, self-sustaining LTA drone that uses light for both energy harvesting and navigation. Our contributions are threefold: (i) a high-fidelity simulation framework to analyze LTA aerodynamics and select a stable, efficient configuration; (ii) a framework to integrate solar cells on the envelope to provide net-positive energy; and (iii) a point-and-go navigation system with three light-seeking algorithms operating on a single light beacon.
+ Our LTA analysis, together with the integrated solar panels, not only saves energy while flying but also enables sustainable operation: 1 minute of flying time for every 4 minutes of energy harvesting under 80 klux illumination. We also demonstrate robust single-beacon navigation towards a light source up to 7 m away, in indoor and outdoor environments, even in moderate winds. The resulting system indicates a plausible path toward persistent, autonomous operation for indoor and outdoor monitoring. More broadly, this work provides a practical pathway for translating the promise of LTA drones into a persistent, self-sustaining aerial system.
+ oai:arXiv.org:2601.13088v1
+ cs.RO
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Harry Huang, Talia Xu, Marco Zúñiga Zamalloa
+
+
+ Patient-Conditioned Adaptive Offsets for Reliable Diagnosis across Subgroups
+ https://arxiv.org/abs/2601.13094
+ arXiv:2601.13094v1 Announce Type: new
+Abstract: AI models for medical diagnosis often exhibit uneven performance across patient populations due to heterogeneity in disease prevalence, imaging appearance, and clinical risk profiles. Existing algorithmic fairness approaches typically seek to reduce such disparities by suppressing sensitive attributes. However, in medical settings these attributes often carry essential diagnostic information, and removing them can degrade accuracy and reliability, particularly in high-stakes applications. In contrast, clinical decision making explicitly incorporates patient context when interpreting diagnostic evidence, suggesting a different design direction for subgroup-aware models. In this paper, we introduce HyperAdapt, a patient-conditioned adaptation framework that improves subgroup reliability while maintaining a shared diagnostic model. Clinically relevant attributes such as age and sex are encoded into a compact embedding and used to condition a hypernetwork-style module, which generates small residual modulation parameters for selected layers of a shared backbone. This design preserves the general medical knowledge learned by the backbone while enabling targeted adjustments that reflect patient-specific variability. To ensure efficiency and robustness, adaptations are constrained through low-rank and bottlenecked parameterizations, limiting both model complexity and computational overhead. Experiments across multiple public medical imaging benchmarks demonstrate that the proposed approach consistently improves subgroup-level performance without sacrificing overall accuracy. On the PAD-UFES-20 dataset, our method outperforms the strongest competing baseline by 4.1% in recall and 4.4% in F1 score, with larger gains observed for underrepresented patient populations.
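The hypernetwork-generated low-rank residual described above can be sketched as follows; the shapes, the linear hypernetwork, and the small-scale initialization are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def hyper_lowrank_delta(patient_emb, d_out, d_in, rank, W_hyper):
    """Map a patient embedding to a rank-constrained residual A @ B.

    W_hyper stands in for a hypernetwork that emits the
    rank * (d_out + d_in) parameters of the adaptation."""
    params = W_hyper @ patient_emb
    A = params[: d_out * rank].reshape(d_out, rank)
    B = params[d_out * rank :].reshape(rank, d_in)
    return A @ B

d_out, d_in, rank, d_emb = 6, 4, 2, 3
# Small initial scale keeps the residual a gentle modulation of the backbone.
W_hyper = rng.normal(size=(rank * (d_out + d_in), d_emb)) * 0.01
emb = rng.normal(size=d_emb)  # encodes attributes such as age and sex
delta = hyper_lowrank_delta(emb, d_out, d_in, rank, W_hyper)
# delta has the full layer shape (d_out, d_in) but rank at most 2, so
# the adaptation adds few effective parameters per patient subgroup.
```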
+ oai:arXiv.org:2601.13094v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Gelei Xu, Yuying Duan, Jun Xia, Ruining Deng, Wei Jin, Yiyu Shi
+
+
+ LLM-VLM Fusion Framework for Autonomous Maritime Port Inspection using a Heterogeneous UAV-USV System
+ https://arxiv.org/abs/2601.13096
+ arXiv:2601.13096v1 Announce Type: new
+Abstract: Maritime port inspection plays a critical role in ensuring safety, regulatory compliance, and operational efficiency in complex maritime environments. However, existing inspection methods often rely on manual operations and conventional computer vision techniques that lack scalability and contextual understanding. This study introduces a novel integrated engineering framework that utilizes the synergy between Large Language Models (LLMs) and Vision Language Models (VLMs) to enable autonomous maritime port inspection using cooperative aerial and surface robotic platforms. The proposed framework replaces traditional state-machine mission planners with LLM-driven symbolic planning and improved perception pipelines through VLM-based semantic inspection, enabling context-aware and adaptive monitoring. The LLM module translates natural language mission instructions into executable symbolic plans with dependency graphs that encode operational constraints and ensure safe UAV-USV coordination. Meanwhile, the VLM module performs real-time semantic inspection and compliance assessment, generating structured reports with contextual reasoning. The framework was validated using the extended MBZIRC Maritime Simulator with realistic port infrastructure and further assessed through real-world robotic inspection trials. The lightweight on-board design ensures suitability for resource-constrained maritime platforms, advancing the development of intelligent, autonomous inspection systems. Project resources (code and videos) can be found here: https://github.com/Muhayyuddin/llm-vlm-fusion-port-inspection
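A symbolic plan with a dependency graph, as the LLM planner emits, can be executed in any topological order of its constraints; a minimal sketch with hypothetical task names (not the paper's actual plan schema):

```python
from graphlib import TopologicalSorter

# Hypothetical UAV-USV inspection tasks; each key depends on the tasks
# in its value set, encoding operational/safety ordering constraints.
plan = {
    "usv_dock_scan": {"usv_navigate"},
    "uav_takeoff": {"usv_navigate"},  # USV must be on station first
    "uav_crane_inspect": {"uav_takeoff"},
    "report": {"usv_dock_scan", "uav_crane_inspect"},
    "usv_navigate": set(),
}

order = list(TopologicalSorter(plan).static_order())
# Any valid order starts with "usv_navigate" and ends with "report".
```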
+ oai:arXiv.org:2601.13096v1
+ cs.RO
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Muhayy Ud Din, Waseem Akram, Ahsan B. Bakht, Irfan Hussain
+
+
+ RM -RF: Reward Model for Run-Free Unit Test Evaluation
+ https://arxiv.org/abs/2601.13097
+ arXiv:2601.13097v1 Announce Type: new
+Abstract: We present RM-RF, a lightweight reward model for run-free evaluation of automatically generated unit tests. Instead of repeatedly compiling and executing candidate tests, RM-RF predicts - from source and test code alone - three execution-derived signals: (1) whether the augmented test suite compiles and runs successfully, (2) whether the generated test cases increase code coverage, and (3) whether the generated test cases improve the mutation kill rate. To train and evaluate RM-RF we assemble a multilingual dataset (Java, Python, Go) of focal files, test files, and candidate test additions labeled by an execution-based pipeline, and we release an associated dataset and methodology for comparative evaluation. We tested multiple model families and tuning regimes (zero-shot, full fine-tuning, and PEFT via LoRA), achieving an average F1 of 0.69 across the three targets. Compared to conventional compile-and-run instruments, RM-RF provides substantially lower latency and infrastructure cost while delivering competitive predictive fidelity, enabling fast, scalable feedback for large-scale test generation and RL-based code optimization.
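The reported metric, F1 averaged over the three binary execution-derived targets, can be computed mechanically; a minimal sketch with toy labels (not the paper's data):

```python
def f1(y_true, y_pred):
    """Binary F1 from matched label lists."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

# Toy labels for the three run-free targets:
# compiles/runs, coverage increase, mutation-kill improvement.
truth = {"compiles": [1, 1, 0, 1], "coverage": [0, 1, 1, 0], "mutation": [0, 0, 1, 1]}
preds = {"compiles": [1, 1, 0, 0], "coverage": [0, 1, 0, 0], "mutation": [0, 1, 1, 1]}
avg_f1 = sum(f1(truth[k], preds[k]) for k in truth) / len(truth)
```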
+ oai:arXiv.org:2601.13097v1
+ cs.SE
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Elena Bruches, Daniil Grebenkin, Mikhail Klementev, Vadim Alperovich, Roman Derunets, Dari Baturova, Georgy Mkrtchyan, Oleg Sedukhin, Ivan Bondarenko, Nikolay Bushkov, Stanislav Moiseev
+
+
+ Exploring the Impacts of Background Noise on Auditory Stimuli of Audio-Visual eHMIs for Hearing, Deaf, and Hard-of-Hearing People
+ https://arxiv.org/abs/2601.13098
+ arXiv:2601.13098v1 Announce Type: new
+Abstract: External Human-Machine Interfaces (eHMIs) have been proposed to enhance communication between automated vehicles (AVs) and pedestrians, with growing interest in multi-modal designs such as audio-visual eHMIs. Just as poor lighting can impair visual cues, a loud background noise may mask the auditory stimuli. However, its effects within these systems have not been examined, and little is known about how pedestrians -- particularly Deaf and Hard-of-Hearing (DHH) people -- perceive different types of auditory stimuli. We conducted a virtual reality study (Hearing N=25, DHH N=11) to examine the effects of background noise (quiet and loud) on auditory stimuli (baseline, bell, speech) within an audio-visual eHMI. Results revealed that: (1) Crossing experiences of DHH pedestrians significantly differ from Hearing pedestrians. (2) Loud background noise adversely affects pedestrians' crossing experiences. (3) Providing an additional auditory eHMI (bell/speech) improves crossing experiences. We outlined four practical implications for future eHMI design and research.
+ oai:arXiv.org:2601.13098v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3772318.3791557
+ Wenge Xu, Foroogh Hajiseyedjavadi, Debargha Dey, Tram Thi Minh Tran, Mark Colley
+
+
+ Alexandria: A Multi-Domain Dialectal Arabic Machine Translation Dataset for Culturally Inclusive and Linguistically Diverse LLMs
+ https://arxiv.org/abs/2601.13099
+ arXiv:2601.13099v1 Announce Type: new
+Abstract: Arabic is a highly diglossic language where most daily communication occurs in regional dialects rather than Modern Standard Arabic. Despite this, machine translation (MT) systems often generalize poorly to dialectal input, limiting their utility for millions of speakers. We introduce Alexandria, a large-scale, community-driven, human-translated dataset designed to bridge this gap. Alexandria covers 13 Arab countries and 11 high-impact domains, including health, education, and agriculture. Unlike previous resources, Alexandria provides unprecedented granularity by associating contributions with city-of-origin metadata, capturing authentic local varieties beyond coarse regional labels. The dataset consists of multi-turn conversational scenarios annotated with speaker-addressee gender configurations, enabling the study of gender-conditioned variation in dialectal use. Comprising 107K total samples, Alexandria serves as both a training resource and a rigorous benchmark for evaluating MT and Large Language Models (LLMs). Our automatic and human evaluation of Arabic-aware LLMs benchmarks current capabilities in translating across diverse Arabic dialects and sub-dialects, while exposing significant persistent challenges.
+ oai:arXiv.org:2601.13099v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Abdellah El Mekki, Samar M. Magdy, Houdaifa Atou, Ruwa AbuHweidi, Baraah Qawasmeh, Omer Nacar, Thikra Al-hibiri, Razan Saadie, Hamzah Alsayadi, Nadia Ghezaiel Hammouda, Alshima Alkhazimi, Aya Hamod, Al-Yas Al-Ghafri, Wesam El-Sayed, Asila Al sharji, Mohamad Ballout, Anas Belfathi, Karim Ghaddar, Serry Sibaee, Alaa Aoun, Areej Asiri, Lina Abureesh, Ahlam Bashiti, Majdal Yousef, Abdulaziz Hafiz, Yehdih Mohamed, Emira Hamedtou, Brakehe Brahim, Rahaf Alhamouri, Youssef Nafea, Aya El Aatar, Walid Al-Dhabyani, Emhemed Hamed, Sara Shatnawi, Fakhraddin Alwajih, Khalid Elkhidir, Ashwag Alasmari, Abdurrahman Gerrio, Omar Alshahri, AbdelRahim A. Elmadany, Ismail Berrada, Amir Azad Adli Alkathiri, Fadi A Zaraket, Mustafa Jarrar, Yahya Mohamed El Hadj, Hassan Alhuzali, Muhammad Abdul-Mageed
+
+
+ Recursive Meta-Distillation: An Axiomatic Framework for Iterative Knowledge Refinement
+ https://arxiv.org/abs/2601.13100
+ arXiv:2601.13100v1 Announce Type: new
+Abstract: Recent work in probability-domain knowledge distillation has established axiomatic frameworks for temperature scaling, multi-teacher aggregation, and bias-variance trade-offs in single-stage settings. However, the mathematical behavior of recursive or multi-generation distillation remains poorly understood, with prior approaches relying primarily on empirical heuristics. In this work, we introduce an axiomatic and operator-theoretic framework for recursive meta-distillation, formalizing iterative knowledge distillation as a sequence of probability-distribution operators with explicit anchoring to base teachers.
+ We define structural axioms for valid meta-teacher construction and prove the existence of non-trivial operator families satisfying these axioms without specifying particular algorithms or loss functions. Under mild realizability and convexity assumptions, we show that anchored recursive distillation induces contraction in KL divergence, yielding geometric convergence to base teacher distributions and a unique, globally attractive fixed point.
+ The contribution is foundational rather than algorithmic: the framework characterizes when recursive distillation is mathematically well-posed and convergent rather than error-accumulating, independent of model architecture, optimization details, or specific operator instantiations. These results provide a theoretical basis for understanding stability, bias-variance behavior, and failure modes in iterative and multi-teacher distillation under capacity constraints.
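The anchored contraction result can be illustrated numerically with a simple mixture operator, one admissible instance rather than the paper's general operator family; the anchor distribution, starting point, and mixing weight alpha are illustrative:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def anchored_step(p, anchor, alpha=0.5):
    """One recursive-distillation step: mix the current meta-teacher
    with the fixed base-teacher anchor."""
    return [(1 - alpha) * pi + alpha * qi for pi, qi in zip(p, anchor)]

anchor = [0.7, 0.2, 0.1]  # base teacher distribution
p = [0.1, 0.3, 0.6]       # initial (drifted) meta-teacher
divs = []
for _ in range(5):
    divs.append(kl(p, anchor))
    p = anchored_step(p, anchor)
# By joint convexity of KL, each step contracts KL(p_t || anchor) by at
# least a factor (1 - alpha), giving geometric convergence to the anchor.
```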
+ oai:arXiv.org:2601.13100v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Aaron R. Flouro, Shawn P. Chadwick
+
+
+ Leveraging LoRA Fine-Tuning and Knowledge Bases for Construction Identification
+ https://arxiv.org/abs/2601.13105
+ arXiv:2601.13105v1 Announce Type: new
+Abstract: This study investigates the automatic identification of the English ditransitive construction by integrating LoRA-based fine-tuning of a large language model with a Retrieval-Augmented Generation (RAG) framework. A binary classification task was conducted on annotated data from the British National Corpus. Results demonstrate that a LoRA-fine-tuned Qwen3-8B model significantly outperformed both a native Qwen3-MAX model and a theory-only RAG system. Detailed error analysis reveals that fine-tuning shifts the model's judgment from surface-form pattern matching towards a more semantically grounded understanding.
+ oai:arXiv.org:2601.13105v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Liu Kaipeng, Wu Ling
+
+
+ Stochastic Gradient Descent for Nonlinear Inverse Problems in Banach Spaces
+ https://arxiv.org/abs/2601.13110
+ arXiv:2601.13110v1 Announce Type: new
+Abstract: Stochastic gradient descent (SGD) and its variants are widely used and highly effective optimization methods in machine learning, especially for neural network training. By using a single datum or a small subset of the data, selected randomly at each iteration, SGD scales well to problem size and has been shown to be effective for solving large-scale inverse problems. In this work, we investigate SGD for solving nonlinear inverse problems in Banach spaces through the lens of iterative regularization. Under general assumptions, we prove almost sure convergence of the iterates to the minimum distance solution and show the regularizing property in expectation under an a priori stopping rule. Further, we establish convergence rates under the conditional stability assumptions for both exact and noisy data. Numerical experiments on Schlieren tomography and electrical impedance tomography are presented to show distinct features of the method.
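The SGD iteration studied here samples a single datum per step; a toy scalar sketch on a consistent nonlinear system, where the exponential forward model, step size, and iteration count are illustrative assumptions:

```python
import math
import random

random.seed(0)

# Toy nonlinear inverse problem: recover x* from data y_i = exp(a_i * x*).
a = [0.5, 1.0, 1.5, 2.0]
x_true = 0.8
y = [math.exp(ai * x_true) for ai in a]

x, eta = 0.0, 0.005
for _ in range(2000):
    i = random.randrange(len(a))               # one datum, chosen at random
    r = math.exp(a[i] * x) - y[i]              # residual of the i-th equation
    x -= eta * r * a[i] * math.exp(a[i] * x)   # gradient of (1/2) r**2
```

Because the data are consistent (noise-free), the per-datum gradients all vanish at the same solution, so constant-step SGD converges to it; with noisy data an a priori stopping rule plays the regularizing role discussed in the abstract.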
+ oai:arXiv.org:2601.13110v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Bangti Jin, Zeljko Kereta, Yuxin Xia
+
+
+ CORE-T: COherent REtrieval of Tables for Text-to-SQL
+ https://arxiv.org/abs/2601.13111
+ arXiv:2601.13111v1 Announce Type: new
+Abstract: Realistic text-to-SQL workflows often require joining multiple tables. As a result, accurately retrieving the relevant set of tables becomes a key bottleneck for end-to-end performance. We study an open-book setting where queries must be answered over large, heterogeneous table collections pooled from many sources, without clean scoping signals such as database identifiers. Here, dense retrieval (DR) achieves high recall but returns many distractors, while join-aware alternatives often rely on extra assumptions and/or incur high inference overhead. We propose CORE-T, a scalable, training-free framework that enriches tables with LLM-generated purpose metadata and pre-computes a lightweight table-compatibility cache. At inference time, DR returns top-K candidates; a single LLM call selects a coherent, joinable subset, and a simple additive adjustment step restores strongly compatible tables. Across Bird, Spider, and MMQA, CORE-T improves table-selection F1 by up to 22.7 points while retrieving up to 42% fewer tables, improving multi-table execution accuracy by up to 5.0 points on Bird and 6.9 points on MMQA, and using 4-5x fewer tokens than LLM-intensive baselines.
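The additive adjustment step can be sketched as a threshold rule over a pairwise compatibility cache; the table names, scores, and threshold below are illustrative, not CORE-T's actual parameters:

```python
def additive_adjustment(selected, candidates, compat, threshold=0.8):
    """Restore dropped candidates that are strongly joinable with any
    already-selected table; compat maps unordered table pairs to
    pre-computed compatibility scores in [0, 1]."""
    restored = set(selected)
    for t in candidates:
        if t in restored:
            continue
        if any(compat.get(frozenset((t, s)), 0.0) >= threshold for s in selected):
            restored.add(t)
    return restored

# Hypothetical pre-computed compatibility cache over retrieved candidates.
compat = {
    frozenset(("orders", "customers")): 0.95,
    frozenset(("orders", "logs")): 0.10,
}
tables = additive_adjustment({"orders"}, ["customers", "logs"], compat)
# "customers" is restored as strongly joinable; the distractor "logs" is not.
```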
+ oai:arXiv.org:2601.13111v1
+ cs.CL
+ cs.AI
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Hassan Soliman, Vivek Gupta, Dan Roth, Iryna Gurevych
+
+
+ CODE: A Contradiction-Based Deliberation Extension Framework for Overthinking Attacks on Retrieval-Augmented Generation
+ https://arxiv.org/abs/2601.13112
+ arXiv:2601.13112v1 Announce Type: new
+Abstract: Introducing reasoning models into Retrieval-Augmented Generation (RAG) systems enhances task performance through step-by-step reasoning, logical consistency, and multi-step self-verification. However, recent studies have shown that reasoning models suffer from overthinking attacks, where models are tricked into generating an unnecessarily high number of reasoning tokens. In this paper, we reveal that such overthinking risk can be inherited by RAG systems equipped with reasoning models, by proposing an end-to-end attack framework named Contradiction-Based Deliberation Extension (CODE). Specifically, CODE develops a multi-agent architecture to construct poisoning samples that are injected into the knowledge base. These samples 1) are highly correlated with the user query, so that they can be retrieved as inputs to the reasoning model; and 2) contain contradictions between the logical and evidence layers that cause models to overthink, and are optimized to exhibit highly diverse styles. Moreover, the inference overhead of CODE is extremely difficult to detect, as no modification is needed on the user query, and the task accuracy remains unaffected. Extensive experiments on two datasets across five commercial reasoning models demonstrate that the proposed attack causes a 5.32x-24.72x increase in reasoning token consumption, without degrading task performance. Finally, we also discuss and evaluate potential countermeasures to mitigate overthinking risks.
+ oai:arXiv.org:2601.13112v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaolei Zhang, Xiaojun Jia, Liquan Chen, Songze Li
+
+
+ IntAgent: NWDAF-Based Intent LLM Agent Towards Advanced Next Generation Networks
+ https://arxiv.org/abs/2601.13114
+ arXiv:2601.13114v1 Announce Type: new
+Abstract: Intent-based networks (IBNs) are gaining prominence as an innovative technology that automates network operations through high-level request statements, defining what the network should achieve. In this work, we introduce IntAgent, an intelligent intent LLM agent that integrates NWDAF analytics and tools to fulfill the network operator's intents. Unlike previous approaches, we develop an intent tools engine directly within the NWDAF analytics engine, allowing our agent to utilize live network analytics to inform its reasoning and tool selection. We offer an enriched, 3GPP-compliant data source that enhances the dynamic, context-aware fulfillment of network operator goals, along with an MCP tools server for scheduling, monitoring, and analytics tools. We demonstrate the efficacy of our framework through two practical use cases: ML-based traffic prediction and scheduled policy enforcement, which validate IntAgent's ability to autonomously fulfill complex network intents.
+ oai:arXiv.org:2601.13114v1
+ cs.NI
+ cs.AI
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Abdelrahman Soliman, Ahmed Refaey, Aiman Erbad, Amr Mohamed
+
+
+ Agentic Conversational Search with Contextualized Reasoning via Reinforcement Learning
+ https://arxiv.org/abs/2601.13115
+ arXiv:2601.13115v1 Announce Type: new
+Abstract: Large Language Models (LLMs) have become a popular interface for human-AI interaction, supporting information seeking and task assistance through natural, multi-turn dialogue. Within multi-turn dialogues, context-dependent user intent evolves across interactions, requiring contextual interpretation, query reformulation, and dynamic coordination between retrieval and generation. Existing studies usually follow static rewrite, retrieve, and generate pipelines, which optimize the different procedures separately and overlook joint optimization of mixed-initiative actions. Although recent developments in deep search agents demonstrate the effectiveness of jointly optimizing retrieval and generation via reasoning, these approaches focus on single-turn scenarios and might lack the ability to handle multi-turn interactions. We introduce a conversational agent that interleaves search and reasoning across turns, enabling exploratory and adaptive behaviors learned through reinforcement learning (RL) training with rewards tailored towards evolving user goals. The experimental results across four widely used conversational benchmarks demonstrate the effectiveness of our method, which surpasses several strong existing baselines.
+ oai:arXiv.org:2601.13115v1
+ cs.CL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fengran Mo, Yifan Gao, Sha Li, Hansi Zeng, Xin Liu, Zhaoxuan Tan, Xian Li, Jianshu Chen, Dakuo Wang, Meng Jiang
+
+
+ xBound: Join Size Lower Bounds
+ https://arxiv.org/abs/2601.13117
+ arXiv:2601.13117v1 Announce Type: new
+Abstract: Cloud database vendors invest substantial resources into their query optimizers, and for good reason. Cardinality estimation, a cornerstone of the optimizer, is critical for the selection of efficient query plans, as well as downstream tasks such as resource allocation and query scheduling. Yet, as many practitioners and researchers have noted, it is also the optimizer's Achilles heel. Prior studies on a number of industrial-strength databases show substantial cardinality estimation errors on all tested systems, with a far greater tendency to underestimate than to overestimate. Unfortunately, cardinality underestimation is more problematic than overestimation, as it misleads the optimizer to choose plans designed for small data, leading to underprovisioned CPU and memory.
+ While previous work on pessimistic cardinality estimation has proposed provable join size upper bounds, such methods can only correct overestimation, leaving the more harmful problem of underestimation unaddressed. To fill this critical gap, we introduce xBound, the very first framework for deriving provable join size lower bounds. xBound successfully reduces underestimation in real systems: On the JOBlight benchmark, it corrects 17.5% of subexpression underestimates in DuckDB and 8.7% in PostgreSQL, while on a Microsoft enterprise workload, it fixes 36.1% of Fabric Data Warehouse's underestimates, demonstrating a significant step towards solving this long-standing problem.
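For an equi-join, the exact size is the sum over join values of the product of per-relation frequencies, and any partial sum over a subset of values is therefore a provable lower bound; a minimal sketch of that idea (illustrating the lower-bound principle, not xBound's actual method):

```python
from collections import Counter

def join_size(R, S):
    """Exact equi-join size from per-value frequencies."""
    cr, cs = Counter(R), Counter(S)
    return sum(cr[v] * cs[v] for v in cr)

def heavy_hitter_lower_bound(R, S, k=1):
    """A simple provable lower bound: count only the k most frequent
    join values of R. Never exceeds the exact join size."""
    cr, cs = Counter(R), Counter(S)
    return sum(cr[v] * cs[v] for v, _ in cr.most_common(k))

R = [1, 1, 1, 2, 3]   # join-column values of relation R
S = [1, 2, 2, 4]      # join-column values of relation S
exact = join_size(R, S)              # 3*1 + 1*2 = 5
lb = heavy_hitter_lower_bound(R, S)  # value 1 alone contributes 3*1 = 3
```

A cardinality estimator that reports less than such a bound is provably underestimating, which is the failure mode xBound targets.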
+ oai:arXiv.org:2601.13117v1
+ cs.DB
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Mihail Stoian, Tiemo Bang, Hangdong Zhao, Jes\'us Camacho-Rodr\'iguez, Yuanyuan Tian, Andreas Kipf
+
+
+ Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization
+ https://arxiv.org/abs/2601.13118
+ arXiv:2601.13118v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are nowadays extensively used for various types of software engineering tasks, primarily code generation. Previous research has shown how suitable prompt engineering could help developers improve their code generation prompts. However, no specific guidelines yet exist to drive developers towards writing suitable prompts for code generation. In this work, we derive and evaluate development-specific prompt optimization guidelines. First, we use an iterative, test-driven approach to automatically refine code generation prompts, and we analyze the outcome of this process to identify prompt improvements that lead to test passes. We use these to elicit 10 guidelines for prompt improvement, related to better specifying I/O, pre-post conditions, providing examples, various types of details, or clarifying ambiguities. We conduct an assessment with 50 practitioners, who report how often they used the elicited prompt improvement patterns before learning our guidelines, as well as their perceived usefulness, which does not always align with that prior usage. Our results lead to implications not only for practitioners and educators, but also for those aiming to create better LLM-aided software development tools.
+ oai:arXiv.org:2601.13118v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Alessandro Midolo, Alessandro Giagnorio, Fiorella Zampetti, Rosalia Tufano, Gabriele Bavota, Massimiliano Di Penta
+
+
+ Responsible AI for General-Purpose Systems: Overview, Challenges, and A Path Forward
+ https://arxiv.org/abs/2601.13122
+ arXiv:2601.13122v1 Announce Type: new
+Abstract: Modern general-purpose AI systems, built using large language and vision models, are capable of performing a range of tasks such as writing text articles, generating and debugging code, querying databases, and translating from one language to another, which has made them quite popular across industries. However, risks such as hallucinations, toxicity, and stereotypes in their output make them untrustworthy. We review various risks and vulnerabilities of modern general-purpose AI along eight widely accepted responsible AI (RAI) principles (fairness, privacy, explainability, robustness, safety, truthfulness, governance, and sustainability) and compare how they are non-existent or less severe and easily mitigable in traditional task-specific counterparts. We argue that this is due to the non-deterministically high Degree of Freedom in output (DoFo) of general-purpose AI (unlike the deterministically constant or low DoFo of traditional task-specific AI systems), and there is a need to rethink our approach to RAI for general-purpose AI. Following this, we derive C2V2 (Control, Consistency, Value, Veracity) desiderata to meet the RAI requirements for future general-purpose AI systems, and discuss how recent efforts in AI alignment, retrieval-augmented generation, reasoning enhancements, etc. fare along one or more of the desiderata. We believe that the goal of developing responsible general-purpose AI can be achieved by formally modeling application- or domain-dependent RAI requirements along C2V2 dimensions, and taking a system design approach to suitably combine various techniques to meet the desiderata.
+ oai:arXiv.org:2601.13122v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Gourab K Patro, Himanshi Agrawal, Himanshu Gharat, Supriya Panigrahi, Nim Sherpa, Vishal Vaddina, Dagnachew Birru
+
+
+ A Streamlined Attention-Based Network for Descriptor Extraction
+ https://arxiv.org/abs/2601.13126
+ arXiv:2601.13126v1 Announce Type: new
+Abstract: We introduce SANDesc, a Streamlined Attention-Based Network for Descriptor extraction that aims to improve on existing architectures for keypoint description.
+ Our descriptor network learns to compute descriptors that improve matching without modifying the underlying keypoint detector. We employ a revised U-Net-like architecture enhanced with Convolutional Block Attention Modules and residual paths, enabling effective local representation while maintaining computational efficiency. We refer to the building blocks of our model as Residual U-Net Blocks with Attention. The model is trained using a modified triplet loss in combination with a curriculum learning-inspired hard negative mining strategy, which improves training stability.
+ Extensive experiments on HPatches, MegaDepth-1500, and the Image Matching Challenge 2021 show that training SANDesc on top of existing keypoint detectors leads to improved results on multiple matching tasks compared to the original keypoint descriptors. At the same time, SANDesc has a model complexity of just 2.4 million parameters.
+ As a further contribution, we introduce a new urban dataset featuring 4K images and pre-calibrated intrinsics, designed to evaluate feature extractors. On this benchmark, SANDesc achieves substantial performance gains over the existing descriptors while operating with limited computational resources.
+ oai:arXiv.org:2601.13126v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mattia D'Urso, Emanuele Santellani, Christian Sormann, Mattia Rossi, Andreas Kuhn, Friedrich Fraundorfer
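+ The training recipe above (triplet loss combined with hard negative mining) can be sketched generically. This is a minimal illustration with hypothetical function names, not SANDesc's modified loss or its curriculum-inspired mining schedule:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on L2 descriptor distances."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0)

def hardest_negative(anchor, candidates):
    """Hard negative mining: the non-matching descriptor closest to the anchor."""
    dists = np.linalg.norm(candidates - anchor, axis=-1)
    return candidates[np.argmin(dists)]
```

+ A far-away negative yields zero loss, while a nearby one keeps contributing gradient, which is why mining hard negatives accelerates descriptor training.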
+
+
+ PhaseMark: A Post-hoc, Optimization-Free Watermarking of AI-generated Images in the Latent Frequency Domain
+ https://arxiv.org/abs/2601.13128
+ arXiv:2601.13128v1 Announce Type: new
+Abstract: The proliferation of hyper-realistic images from Latent Diffusion Models (LDMs) demands robust watermarking, yet existing post-hoc methods are prohibitively slow due to iterative optimization or inversion processes. We introduce PhaseMark, a single-shot, optimization-free framework that directly modulates the phase in the VAE latent frequency domain. This approach makes PhaseMark thousands of times faster than optimization-based techniques while achieving state-of-the-art resilience against severe attacks, including regeneration, without degrading image quality. We analyze four modulation variants, revealing a clear performance-quality trade-off. PhaseMark demonstrates a new paradigm where efficient, resilient watermarking is achieved by exploiting intrinsic latent properties.
+ oai:arXiv.org:2601.13128v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sung Ju Lee, Nam Ik Cho
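+ The core idea of phase-domain watermarking can be illustrated with a toy sketch (ours, not PhaseMark's actual single-shot latent modulation; the coefficient selection and strength parameters are invented): shift the phase of pseudo-randomly chosen FFT coefficients according to the message bits, and read the bits back from the sign of the phase difference.

```python
import numpy as np

def embed_phase_watermark(latent, bits, strength=0.4, seed=0):
    """Toy phase watermark: rotate pseudo-randomly chosen 2-D FFT
    coefficients by +/- strength radians according to the bits, then invert."""
    spec = np.fft.fft2(latent).ravel()
    idx = np.random.default_rng(seed).choice(spec.size, len(bits), replace=False)
    for k, b in zip(idx, bits):
        spec[k] *= np.exp(1j * (strength if b else -strength))
    return np.fft.ifft2(spec.reshape(latent.shape))

def detect_phase_watermark(original, marked, n_bits, seed=0):
    """Recover bits from the sign of the phase difference at the same positions."""
    so = np.fft.fft2(original).ravel()
    sm = np.fft.fft2(marked).ravel()
    idx = np.random.default_rng(seed).choice(so.size, n_bits, replace=False)
    return [int(np.angle(sm[k] / so[k]) > 0) for k in idx]
```

+ In this complex-valued toy the bits survive the inverse/forward FFT round trip exactly; real-valued latents would additionally require modulating Hermitian-symmetric coefficient pairs.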
+
+
+ GaussExplorer: 3D Gaussian Splatting for Embodied Exploration and Reasoning
+ https://arxiv.org/abs/2601.13132
+ arXiv:2601.13132v1 Announce Type: new
+Abstract: We present GaussExplorer, a framework for embodied exploration and reasoning built on 3D Gaussian Splatting (3DGS). While prior approaches to language-embedded 3DGS have made meaningful progress in aligning simple text queries with Gaussian embeddings, they are generally optimized for relatively simple queries and struggle to interpret more complex, compositional language queries. Alternative studies based on object-centric RGB-D structured memories provide spatial grounding but are constrained by pre-fixed viewpoints. To address these issues, GaussExplorer introduces Vision-Language Models (VLMs) on top of 3DGS to enable question-driven exploration and reasoning within 3D scenes. We first identify pre-captured images that are most correlated with the query question, and subsequently adjust them into novel viewpoints to more accurately capture visual information for better reasoning by VLMs. Experiments show that our method outperforms existing methods on several benchmarks, demonstrating the effectiveness of integrating VLM-based reasoning with 3DGS for embodied tasks.
+ oai:arXiv.org:2601.13132v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kim Yu-Ji, Dahye Lee, Kim Jun-Seong, GeonU Kim, Nam Hyeon-Woo, Yongjin Kwon, Yu-Chiang Frank Wang, Jaesung Choe, Tae-Hyun Oh
+
+
+ CLIP-Guided Adaptable Self-Supervised Learning for Human-Centric Visual Tasks
+ https://arxiv.org/abs/2601.13133
+ arXiv:2601.13133v1 Announce Type: new
+Abstract: Human-centric visual analysis plays a pivotal role in diverse applications, including surveillance, healthcare, and human-computer interaction. With the emergence of large-scale unlabeled human image datasets, there is an increasing need for a general unsupervised pre-training model capable of supporting diverse human-centric downstream tasks. To achieve this goal, we propose CLASP (CLIP-guided Adaptable Self-suPervised learning), a novel framework designed for unsupervised pre-training in human-centric visual tasks. CLASP leverages the powerful vision-language model CLIP to generate both low-level (e.g., body parts) and high-level (e.g., attributes) semantic pseudo-labels. These multi-level semantic cues are then integrated into the learned visual representations, enriching their expressiveness and generalizability. Recognizing that different downstream tasks demand varying levels of semantic granularity, CLASP incorporates a Prompt-Controlled Mixture-of-Experts (MoE) module. MoE dynamically adapts feature extraction based on task-specific prompts, mitigating potential feature conflicts and enhancing transferability. Furthermore, CLASP employs a multi-task pre-training strategy, where part- and attribute-level pseudo-labels derived from CLIP guide the representation learning process. Extensive experiments across multiple benchmarks demonstrate that CLASP consistently outperforms existing unsupervised pre-training methods, advancing the field of human-centric visual analysis.
+ oai:arXiv.org:2601.13133v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mingshuang Luo, Ruibing Hou, Bo Chao, Hong Chang, Zimo Liu, Yaowei Wang, Shiguang Shan
+
+
+ Earth Embeddings as Products: Taxonomy, Ecosystem, and Standardized Access
+ https://arxiv.org/abs/2601.13134
+ arXiv:2601.13134v1 Announce Type: new
+Abstract: Geospatial Foundation Models (GFMs) provide powerful representations, but high compute costs hinder their widespread use. Pre-computed embedding data products offer a practical "frozen" alternative, yet they currently exist in a fragmented ecosystem of incompatible formats and resolutions. This lack of standardization creates an engineering bottleneck that prevents meaningful model comparison and reproducibility. We formalize this landscape through a three-layer taxonomy: Data, Tools, and Value. We survey existing products to identify interoperability barriers. To bridge this gap, we extend TorchGeo with a unified API that standardizes the loading and querying of diverse embedding products. By treating embeddings as first-class geospatial datasets, we decouple downstream analysis from model-specific engineering, providing a roadmap for more transparent and accessible Earth observation workflows.
+ oai:arXiv.org:2601.13134v1
+ cs.SE
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Heng Fang, Adam J. Stewart, Isaac Corley, Xiao Xiang Zhu, Hossein Azizpour
+
+
+ Adversarial Alignment: Ensuring Value Consistency in Large Language Models for Sensitive Domains
+ https://arxiv.org/abs/2601.13137
+ arXiv:2601.13137v1 Announce Type: new
+Abstract: With the wide application of large language models (LLMs), the problems of bias and value inconsistency in sensitive domains have gradually emerged, especially in terms of race, society and politics. In this paper, we propose an adversarial alignment framework, which enhances the value consistency of the model in sensitive domains through continued pre-training, instruction fine-tuning and adversarial training. In adversarial training, we use the Attacker to generate controversial queries, the Actor to generate responses with value consistency, and the Critic to filter and ensure response quality. Furthermore, we train a Value-Consistent Large Language Model, VC-LLM, for sensitive domains, and construct a bilingual evaluation dataset in Chinese and English. The experimental results show that VC-LLM performs better than the existing mainstream models in both Chinese and English tests, verifying the effectiveness of the method. Warning: This paper contains examples of LLMs that are offensive or harmful in nature.
+ oai:arXiv.org:2601.13137v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuan Gao, Zhigang Liu, Xinyu Yao, Bo Chen, Xiaobing Zhao
+
+
+ From Human to Machine Refactoring: Assessing GPT-4's Impact on Python Class Quality and Readability
+ https://arxiv.org/abs/2601.13139
+ arXiv:2601.13139v1 Announce Type: new
+Abstract: Refactoring is a software engineering practice that aims to improve code quality without altering program behavior. Although automated refactoring tools have been extensively studied, their practical applicability remains limited. Recent advances in Large Language Models (LLMs) have introduced new opportunities for automated code refactoring. The evaluation of such an LLM-driven approach, however, leaves unanswered questions about its effects on code quality. In this paper, we present a comprehensive empirical study on LLM-driven refactoring using GPT-4o, applied to 100 Python classes from the ClassEval benchmark. Unlike prior work, our study explores a wide range of class-level refactorings inspired by Fowler's catalog and evaluates their effects from three complementary perspectives: (i) behavioral correctness, verified through unit tests; (ii) code quality, assessed via Pylint, Flake8, and SonarCloud; and (iii) readability, measured using a state-of-the-art readability tool. Our findings show that GPT-4o generally produces behavior-preserving refactorings that reduce code smells and improve quality metrics, albeit at the cost of decreased readability. Our results provide new evidence on the capabilities and limitations of LLMs in automated software refactoring, highlighting directions for integrating LLMs into practical refactoring workflows.
+ oai:arXiv.org:2601.13139v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Alessandro Midolo, Emiliano Tramontana, Massimiliano Di Penta
+
+
+ TVWorld: Foundations for Remote-Control TV Agents
+ https://arxiv.org/abs/2601.13142
+ arXiv:2601.13142v1 Announce Type: new
+Abstract: Recent large vision-language models (LVLMs) have demonstrated strong potential for device control. However, existing research has primarily focused on point-and-click (PnC) interaction, while remote-control (RC) interaction commonly encountered in everyday TV usage remains largely underexplored. To fill this gap, we introduce \textbf{TVWorld}, an offline graph-based abstraction of real-world TV navigation that enables reproducible and deployment-free evaluation. On this basis, we derive two complementary benchmarks that comprehensively assess TV-use capabilities: \textbf{TVWorld-N} for topology-aware navigation and \textbf{TVWorld-G} for focus-aware grounding. These benchmarks expose a key limitation of existing agents: insufficient topology awareness for focus-based, long-horizon TV navigation. Motivated by this finding, we propose a \emph{Topology-Aware Training} framework that injects topology awareness into LVLMs. Using this framework, we develop \textbf{TVTheseus}, a foundation model specialized for TV navigation. TVTheseus achieves a success rate of $68.3\%$ on TVWorld-N, surpassing strong closed-source baselines such as Gemini 3 Flash and establishing state-of-the-art (SOTA) performance. Additional analyses further provide valuable insights into the development of effective TV-use agents.
+ oai:arXiv.org:2601.13142v1
+ cs.CV
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zhantao Ma, Quanfeng Lu, Shuai Zhong, Dahai Yu, Ping Luo, Michael K. Ng
+
+
+ FastAV: Efficient Token Pruning for Audio-Visual Large Language Model Inference
+ https://arxiv.org/abs/2601.13143
+ arXiv:2601.13143v1 Announce Type: new
+Abstract: In this work, we present FastAV, the first token pruning framework tailored for audio-visual large language models (AV-LLMs). While token pruning has been actively explored in standard large language models (LLMs) and vision-language models (LVLMs), its application to AV-LLMs has received little attention, even though multimodal integration substantially increases their token demands. To address this gap, we introduce a pruning strategy that utilizes attention weights to identify tokens emphasized at different stages and estimates their importance. Building on this analysis, FastAV applies a two-stage pruning strategy: (1) global pruning in intermediate layers to remove broadly less influential tokens, and (2) fine pruning in later layers considering the impact on next token generation. Notably, our method does not rely on full attention maps, which makes it fully compatible with efficient attention mechanisms such as FlashAttention. Extensive experiments demonstrate that FastAV reduces FLOPs by more than 40% on two representative AV-LLMs, while preserving or even improving model performance.
+ oai:arXiv.org:2601.13143v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chaeyoung Jung, Youngjoon Jang, Seungwoo Lee, Joon Son Chung
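+ The first pruning stage described above (scoring tokens by the attention they receive) can be sketched as follows. This is a generic illustration of attention-based token selection, not FastAV's actual scoring, which notably avoids materializing full attention maps:

```python
import numpy as np

def prune_tokens(tokens, attn, keep_ratio=0.5):
    """Rank tokens by the mean attention they receive, averaged over heads
    and query positions, and keep the top fraction in original order.
    tokens: (N, d); attn: (H, N, N) with attn[h, q, k] = weight of key k for query q."""
    importance = attn.mean(axis=(0, 1))                # (N,) attention received per token
    n_keep = max(1, int(len(tokens) * keep_ratio))
    keep = np.sort(np.argsort(importance)[-n_keep:])   # top-k indices, original order
    return tokens[keep], keep
```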
+
+
+ OPTIMUM-DERAM: Highly Consistent, Scalable, and Secure Multi-Object Memory using RLNC
+ https://arxiv.org/abs/2601.13146
+ arXiv:2601.13146v1 Announce Type: new
+Abstract: This paper introduces OPTIMUM-DERAM, a highly consistent, scalable, secure, and decentralized shared memory solution. Traditional distributed shared memory implementations offer multi-object support by multi-threading a single object memory instance over the same set of data hosts. While theoretically sound, the amount of resources required made such solutions prohibitively expensive in practical systems. OPTIMUM-DERAM proposes a decentralized, reconfigurable, atomic read/write shared memory (DeRAM) that: (i) achieves improved performance and storage scalability by leveraging Random Linear Network Codes (RLNC); (ii) scales in the number of supported atomic objects by introducing a new object placement and discovery approach based on a consistent hashing ring; (iii) scales in the number of participants by allowing dynamic joins and departures leveraging a blockchain oracle to serve as a registry service; and (iv) is secure against malicious behavior by tolerating Byzantine failures.
+ Experimental results over a globally distributed set of nodes demonstrate the performance and scalability gains of OPTIMUM-DERAM over previous distributed shared memory solutions (i.e., the ABD algorithm [3]).
+ oai:arXiv.org:2601.13146v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Nicolas Nicolaou, Kishori M. Konwar, Moritz Grundei, Aleksandr Bezobchuk, Muriel M\'edard, Sriram Vishwanath
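+ The object placement and discovery approach above rests on a consistent hashing ring. A minimal generic sketch (node names, virtual-node count, and hash choice are illustrative, not OPTIMUM-DERAM's actual scheme):

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Minimal consistent-hashing ring mapping object ids to nodes."""
    def __init__(self, nodes=(), vnodes=4):
        self.vnodes = vnodes
        self.ring = []                     # sorted list of (position, node)
        for n in nodes:
            self.add(n)

    def _pos(self, key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16) % (2 ** 32)

    def add(self, node):
        for v in range(self.vnodes):       # virtual nodes smooth the load
            self.ring.append((self._pos(f"{node}#{v}"), node))
        self.ring.sort()

    def remove(self, node):
        self.ring = [(p, n) for p, n in self.ring if n != node]

    def lookup(self, obj_id):
        """First node clockwise from the object's position on the ring."""
        pos = self._pos(obj_id)
        i = bisect_right(self.ring, (pos, chr(0x10FFFF)))
        return self.ring[i % len(self.ring)][1]
```

+ Because only keys clockwise-adjacent to a departing node change owner, joins and departures relocate roughly a 1/n fraction of the objects instead of reshuffling everything.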
+
+
+ ICo3D: An Interactive Conversational 3D Virtual Human
+ https://arxiv.org/abs/2601.13148
+ arXiv:2601.13148v1 Announce Type: new
+Abstract: This work presents Interactive Conversational 3D Virtual Human (ICo3D), a method for generating an interactive, conversational, and photorealistic 3D human avatar. Based on multi-view captures of a subject, we create an animatable 3D face model and a dynamic 3D body model, both rendered by splatting Gaussian primitives. Once merged together, they represent a lifelike virtual human avatar suitable for real-time user interactions. We equip our avatar with an LLM for conversational ability. During conversation, the audio speech of the avatar is used as a driving signal to animate the face model, enabling precise synchronization. We describe improvements to our dynamic Gaussian models that enhance photorealism: SWinGS++ for body reconstruction and HeadGaS++ for face reconstruction, and provide as well a solution to merge the separate face and body models without artifacts. We also present a demo of the complete system, showcasing several use cases of real-time conversation with the 3D avatar. Our approach offers a fully integrated virtual avatar experience, supporting both oral and written form interactions in immersive environments. ICo3D is applicable to a wide range of fields, including gaming, virtual assistance, and personalized education, among others. Project page: https://ico3d.github.io/
+ oai:arXiv.org:2601.13148v1
+ cs.CV
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1007/s11263-025-02725-8
+ Richard Shaw, Youngkyoon Jang, Athanasios Papaioannou, Arthur Moreau, Helisa Dhamo, Zhensong Zhang, Eduardo P\'erez-Pellitero
+
+
+ Probe and Skip: Self-Predictive Token Skipping for Efficient Long-Context LLM Inference
+ https://arxiv.org/abs/2601.13155
+ arXiv:2601.13155v1 Announce Type: new
+Abstract: Long-context inference enhances the reasoning capability of Large Language Models (LLMs) while incurring significant computational overhead. Token-oriented methods, such as pruning and skipping, have shown promise in reducing inference latency, but still suffer from inherently limited acceleration potential, outdated proxy signals, and redundancy interference, thus yielding suboptimal speed-accuracy trade-offs. To address these challenges, we propose SPTS (Self-Predictive Token Skipping), a training-free framework for efficient long-context LLM inference. Specifically, motivated by the thought of probing the influence of targeted skipping layers, we design two component-specific strategies for selective token skipping: Partial Attention Probing (PAP) for multi-head attention, which selects informative tokens by performing partial forward attention computation, and Low-rank Transformation Probing (LTP) for feed forward network, which constructs a low-rank proxy network to predict token transformations. Furthermore, a Multi-Stage Delayed Pruning (MSDP) strategy reallocates the skipping budget and progressively prunes redundant tokens across layers. Extensive experiments demonstrate the effectiveness of our method, achieving up to 2.46$\times$ and 2.29$\times$ speedups for prefilling and end-to-end generation, respectively, while maintaining state-of-the-art model performance. The source code will be publicly available upon paper acceptance.
+ oai:arXiv.org:2601.13155v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zimeng Wu, Donghao Wang, Chaozhe Jin, Jiaxin Chen, Yunhong Wang
+
+
+ Training instability in deep learning follows low-dimensional dynamical principles
+ https://arxiv.org/abs/2601.13160
+ arXiv:2601.13160v1 Announce Type: new
+Abstract: Deep learning systems achieve remarkable empirical performance, yet the stability of the training process itself remains poorly understood. Training unfolds as a high-dimensional dynamical system in which small perturbations to optimization, data, parameters, or learning signals can induce abrupt and irreversible collapse, undermining reproducibility and scalability.
+ We propose a unified dynamical perspective that characterizes training stability as an intrinsic property of learning systems, organized along four interacting dimensions: optimization, environmental/data, parametric, and learning-signal stability. We operationalize this perspective through controlled perturbation auditing of training trajectories, probing how learning dynamics respond to structured disturbances without modifying learning algorithms.
+ Across reinforcement learning and large language model training, we identify three recurring regularities: high final performance is frequently decoupled from training stability; controlled stochasticity consistently buffers learning dynamics across paradigms; and deviations in low-dimensional latent meta-states systematically precede observable performance collapse. Together, these findings establish training stability as a measurable and comparable dynamical property of learning systems, providing a descriptive foundation for studying learning dynamics beyond final performance outcomes.
+ oai:arXiv.org:2601.13160v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhipeng Zhang, Zhenjie Yao, Kai Li, Lei Yang
+
+
+ NeuroShield: A Neuro-Symbolic Framework for Adversarial Robustness
+ https://arxiv.org/abs/2601.13162
+ arXiv:2601.13162v1 Announce Type: new
+Abstract: Adversarial vulnerability and lack of interpretability are critical limitations of deep neural networks, especially in safety-sensitive settings such as autonomous driving. We introduce NeuroShield, a neuro-symbolic framework that integrates symbolic rule supervision into neural networks to enhance both adversarial robustness and explainability. Domain knowledge is encoded as logical constraints over appearance attributes such as shape and color, and enforced through semantic and symbolic logic losses applied during training. Using the GTSRB dataset, we evaluate robustness against FGSM and PGD attacks at a standard $\ell_\infty$ perturbation budget of $\varepsilon = 8/255$. Relative to clean training, standard adversarial training provides modest improvements in robustness ($\sim$10 percentage points). In contrast, our FGSM-Neuro-Symbolic and PGD-Neuro-Symbolic models achieve substantially larger gains, improving adversarial accuracy by 18.1\% and 17.35\% over their corresponding adversarial-training baselines, representing roughly a three-fold larger robustness gain than standard adversarial training provides when both are measured relative to the same clean-training baseline, without reducing clean-sample accuracy. Compared to transformer-based defenses such as LNL-MoEx, which require heavy architectures and extensive data augmentation, our PGD-Neuro-Symbolic variant attains comparable or superior robustness using a ResNet18 backbone trained for 10 epochs. These results show that symbolic reasoning offers an effective path to robust and interpretable AI.
+ oai:arXiv.org:2601.13162v1
+ cs.LG
+ cs.ET
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ali Shafiee Sarvestani, Jason Schmidt, Arman Roohi
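+ For reference, the FGSM attack used in the evaluation is a single signed-gradient step at budget $\varepsilon = 8/255$. A minimal sketch, assuming the caller supplies the loss gradient with respect to the input:

```python
import numpy as np

def fgsm(x, grad, eps=8 / 255):
    """FGSM perturbation: one signed-gradient step, clipped to [0, 1] pixels.
    grad is dLoss/dx, computed by the caller's model."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

+ PGD iterates this step several times with a projection back into the epsilon-ball, which is why it is the stronger of the two attacks.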
+
+
+ Optimistic Imprecise Shortest Watchtower in 1.5D and 2.5D
+ https://arxiv.org/abs/2601.13165
+ arXiv:2601.13165v1 Announce Type: new
+Abstract: A 1.5D imprecise terrain is an $x$-monotone polyline with fixed $x$-coordinates; the $y$-coordinate of each vertex is not fixed but is constrained to be in a given vertical interval. A 2.5D imprecise terrain is a triangulation with fixed $x$- and $y$-coordinates, but the $z$-coordinate of each vertex is constrained to a given vertical interval. Given an imprecise terrain with $n$ intervals, the optimistic shortest watchtower problem asks for a terrain $T$ realized by a precise point in each vertical interval such that the height of the shortest vertical line segment whose lower endpoint lies on $T$ and upper endpoint sees the entire terrain is minimized. In this paper, we present a linear time algorithm to solve the 1.5D optimistic shortest watchtower problem exactly. For the discrete version of the 2.5D case (where the watchtower must be placed on a vertex of $T$), we give an additive approximation scheme running in $O(\frac{{OPT}}{\varepsilon}n^3)$ time, achieving a solution within an additive error of $\varepsilon$ from the optimal solution value ${OPT}$.
+ oai:arXiv.org:2601.13165v1
+ cs.CG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bradley McCoy, Binhai Zhu
+
+
+ From 100,000+ images to winning the first brain MRI foundation model challenges: Sharing lessons and models
+ https://arxiv.org/abs/2601.13166
+ arXiv:2601.13166v1 Announce Type: new
+Abstract: Developing Foundation Models for medical image analysis is essential to overcome the unique challenges of radiological tasks. The first challenges of this kind for 3D brain MRI, SSL3D and FOMO25, were held at MICCAI 2025. Our solution ranked first in tracks of both contests. It relies on a U-Net CNN architecture combined with strategies leveraging anatomical priors and neuroimaging domain knowledge. Notably, our models trained 1-2 orders of magnitude faster and were 10 times smaller than competing transformer-based approaches. Models are available here: https://github.com/jbanusco/BrainFM4Challenges.
+ oai:arXiv.org:2601.13166v1
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Pedro M. Gordaliza, Jaume Banus, Beno\^it G\'erin, Maxence Wynen, Nataliia Molchanova, Jonas Richiardi, Meritxell Bach Cuadra
+
+
+ QoS-Aware Energy Optimization via Cell Switching in Heterogeneous Networks
+ https://arxiv.org/abs/2601.13174
+ arXiv:2601.13174v1 Announce Type: new
+Abstract: The growing demand for mobile data services in dense urban areas has intensified the need for energy-efficient radio access networks (RANs) in future 6G systems. In this context, one promising strategy is cell switching (CS), which dynamically deactivates underutilized small base stations (SBSs) to reduce power consumption. However, while previous research explored CS primarily based on traffic load, ensuring user quality of service (QoS) under realistic channel conditions remains a challenge. In this paper, we propose a novel optimization-driven CS framework that jointly minimizes network power consumption and guarantees user QoS by enforcing a minimum received power threshold as part of offloading decisions. In contrast to prior load-based or learning-based approaches, our method explicitly integrates channel-aware information into the CS process, thus ensuring reliable service quality for offloaded users. Furthermore, flexibility of the proposed framework enables operators to adapt system behavior between energy-saving and QoS-preserving modes by tuning a single design parameter. Simulation results demonstrate that the proposed approach achieves up to 30% power savings as compared to baseline methods while fully maintaining QoS under diverse network conditions. Scalability and robustness of the proposed method in realistic heterogeneous networks (HetNets) further highlight its potential as a practical solution for sustainable 6G deployments.
+ oai:arXiv.org:2601.13174v1
+ eess.SY
+ cs.SY
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Maryam Salamatmoghadasi, Amir Mehrabian, Halim Yanikomeroglu, Georges Kaddoum
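+ The decision logic can be caricatured by a greedy heuristic (our illustration only; the paper formulates and solves an optimization problem): a small base station is switched off only if the macro cell can absorb its load and every offloaded user still meets the minimum received-power threshold.

```python
def greedy_cell_switch(sbs_load, sbs_power, macro_capacity, rx_power, p_min):
    """Greedy QoS-aware cell switching sketch.
    rx_power[i] = macro-cell received powers for the users of SBS i."""
    off, spare = [], macro_capacity
    # Try least-loaded cells first: they free power at the lowest offload cost.
    for i in sorted(range(len(sbs_load)), key=lambda i: sbs_load[i]):
        if sbs_load[i] <= spare and all(p >= p_min for p in rx_power[i]):
            off.append(i)
            spare -= sbs_load[i]
    saved = sum(sbs_power[i] for i in off)
    return off, saved
```

+ Raising p_min shifts the heuristic toward QoS preservation, lowering it toward energy saving, mirroring the single tunable design parameter the abstract describes.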
+
+
+ Helical Tendon-Driven Continuum Robot with Programmable Follow-the-Leader Operation
+ https://arxiv.org/abs/2601.13177
+ arXiv:2601.13177v1 Announce Type: new
+Abstract: Spinal cord stimulation (SCS) is primarily utilized for pain management and has recently demonstrated efficacy in promoting functional recovery in patients with spinal cord injury. Effective stimulation of motor neurons ideally requires the placement of SCS leads in the ventral or lateral epidural space where the corticospinal and rubrospinal motor fibers are located. This poses significant challenges with the current standard of manual steering. In this study, we present a static modeling approach for the ExoNav, a steerable robotic tool designed to facilitate precise navigation to the ventral and lateral epidural space. Cosserat rod framework is employed to establish the relationship between tendon actuation forces and the robot's overall shape. The effects of gravity, as an example of an external load, are investigated and implemented in the model and simulation. The experimental results indicate RMSE values of 1.76mm, 2.33mm, 2.18mm, and 1.33mm across four tested prototypes. Based on the helical shape of the ExoNav upon actuation, it is capable of performing follow-the-leader (FTL) motion by adding insertion and rotation DoFs to this robotic system, which is shown in simulation and experimentally. The proposed simulation has the capability to calculate optimum tendon tensions to follow the desired FTL paths while gravity-induced robot deformations are present. Three FTL experimental trials are conducted and the end-effector position showed repeatable alignments with the desired path with maximum RMSE value of 3.75mm. Ultimately, a phantom model demonstration is conducted where the teleoperated robot successfully navigated to the lateral and ventral spinal cord targets. Additionally, the user was able to navigate to the dorsal root ganglia, illustrating ExoNav's potential in both motor function recovery and pain management.
+ oai:arXiv.org:2601.13177v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Behnam Moradkhani, Raghav Sankaranarayanan, Pejman Kheradmand, Harshith Jella, Nicholas Ahn, Ajmal Zemmar, Yash Chitalia
+
+
+ Medical Triage as Pairwise Ranking: A Benchmark for Urgency in Patient Portal Messages
+ https://arxiv.org/abs/2601.13178
+ arXiv:2601.13178v1 Announce Type: new
+Abstract: Medical triage is the task of allocating medical resources and prioritizing patients based on medical need. This paper introduces the first large-scale public dataset for studying medical triage in the context of asynchronous outpatient portal messages. Our novel task formulation views patient message triage as a pairwise inference problem, where we train LLMs to choose "which message is more medically urgent" in a head-to-head tournament-style re-sort of a physician's inbox. Our novel benchmark PMR-Bench contains 1569 unique messages and 2,000+ high-quality test pairs for pairwise medical urgency assessment alongside a scalable training data generation pipeline. PMR-Bench includes samples that contain both unstructured patient-written messages and real electronic health record (EHR) data, emulating a real-world medical triage scenario.
+ We develop a novel automated data annotation strategy to provide LLMs with in-domain guidance on this task. The resulting data is used to train two model classes, UrgentReward and UrgentSFT, leveraging Bradley-Terry and next-token prediction objectives, respectively, to perform pairwise urgency classification. We find that UrgentSFT achieves top performance on PMR-Bench, with UrgentReward showing distinct advantages in low-resource settings. For example, UrgentSFT-8B and UrgentReward-8B provide a 15- and 16-point boost, respectively, on inbox sorting metrics over off-the-shelf 8B models. Paper resources can be found at https://tinyurl.com/Patient-Message-Triage
+ oai:arXiv.org:2601.13178v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Joseph Gatto, Parker Seegmiller, Timothy Burdick, Philip Resnik, Roshnik Rahat, Sarah DeLozier, Sarah M. Preum
+
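The Bradley-Terry objective named in the abstract above can be illustrated with a minimal sketch. All scores, message names, and function names below are hypothetical, chosen purely for illustration; this is not the paper's UrgentReward implementation. Each message gets a scalar urgency score, and the probability that message A is judged "more urgent" than message B is the logistic function of the score difference.

```python
import math

def bt_prob(score_a: float, score_b: float) -> float:
    """Bradley-Terry probability that item A outranks item B:
    P(A > B) = sigmoid(score_a - score_b)."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

def bt_loss(score_a: float, score_b: float, a_wins: bool) -> float:
    """Negative log-likelihood of one observed pairwise comparison."""
    p = bt_prob(score_a, score_b)
    return -math.log(p if a_wins else 1.0 - p)

# Toy urgency scores for three portal messages (hypothetical values).
scores = {"chest_pain": 2.0, "refill_request": -1.0, "rash": 0.5}

# The higher-scored message wins the head-to-head comparison
# with high probability, so sorting by score reproduces the ranking.
p = bt_prob(scores["chest_pain"], scores["refill_request"])
```

Training minimizes `bt_loss` over annotated pairs; at inference, a full inbox can be sorted by the learned scalar scores, so the pairwise model never needs to run on every pair explicitly.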
+
+ OpenExempt: A Diagnostic Benchmark for Legal Reasoning and a Framework for Creating Custom Benchmarks on Demand
+ https://arxiv.org/abs/2601.13183
+ arXiv:2601.13183v1 Announce Type: new
+Abstract: Reasoning benchmarks have played a crucial role in the progress of language models. Yet rigorous evaluation remains a significant challenge, as static question-answer pairs provide only a snapshot of performance, compressing complex behavior into a single accuracy metric. This limitation is especially acute in complex, rule-bound domains such as law, where existing benchmarks are costly to build and ill-suited for isolating specific failure modes. To address this, we introduce OpenExempt, a framework and benchmark for diagnostic evaluation of legal reasoning. The OpenExempt Framework uses expert-crafted symbolic representations of U.S. Bankruptcy Code statutes to dynamically generate a large space of natural language reasoning tasks and their machine-computable solutions on demand. This gives users fine-grained control over task complexity and scope, allowing individual reasoning skills to be probed in isolation. Using this system, we construct the OpenExempt Benchmark, a diagnostic benchmark for legal reasoning with 9,765 samples across nine evaluation suites designed to carefully probe model capabilities. Experiments on 13 diverse language models reveal sharp performance cliffs that emerge only under longer reasoning paths and in the presence of obfuscating statements. We release the framework and benchmark publicly to support research aimed at understanding and improving the next generation of reasoning systems.
+ oai:arXiv.org:2601.13183v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sergio Servantez, Sarah B. Lawsky, Rajiv Jain, Daniel W. Linna Jr., Kristian Hammond
+
+
+ Prompt Injection Mitigation with Agentic AI, Nested Learning, and AI Sustainability via Semantic Caching
+ https://arxiv.org/abs/2601.13186
+ arXiv:2601.13186v1 Announce Type: new
+Abstract: Prompt injection remains a central obstacle to the safe deployment of large language models, particularly in multi-agent settings where intermediate outputs can propagate or amplify malicious instructions. Building on earlier work that introduced a four-metric Total Injection Vulnerability Score (TIVS), this paper extends the evaluation framework with semantic similarity-based caching and a fifth metric (Observability Score Ratio) to yield TIVS-O, investigating how defence effectiveness interacts with transparency in a HOPE-inspired Nested Learning architecture. The proposed system combines an agentic pipeline with Continuum Memory Systems that implement semantic similarity-based caching across 301 synthetically generated injection-focused prompts drawn from ten attack families, while a fourth agent performs comprehensive security analysis using five key performance indicators. In addition to traditional injection metrics, OSR quantifies the richness and clarity of security-relevant reasoning exposed by each agent, enabling an explicit analysis of trade-offs between strict mitigation and auditability. Experiments show that the system achieves secure responses with zero high-risk breaches, while semantic caching delivers substantial computational savings, achieving a 41.6% reduction in LLM calls and corresponding decreases in latency, energy consumption, and carbon emissions. Five TIVS-O configurations reveal optimal trade-offs between mitigation strictness and forensic transparency. These results indicate that observability-aware evaluation can reveal non-monotonic effects within multi-agent pipelines and that memory-augmented agents can jointly maximize security robustness, real-time performance, operational cost savings, and environmental sustainability without modifying underlying model weights, providing a production-ready pathway for secure and green LLM deployments.
+ oai:arXiv.org:2601.13186v1
+ cs.AI
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Diego Gosmar, Deborah A. Dahl
+
+
+ Scientific production in the era of Large Language Models
+ https://arxiv.org/abs/2601.13187
+ arXiv:2601.13187v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are rapidly reshaping scientific research. We analyze these changes in multiple, large-scale datasets with 2.1M preprints, 28K peer review reports, and 246M online accesses to scientific documents. We find: 1) scientists adopting LLMs to draft manuscripts demonstrate a large increase in paper production, ranging from 23.7-89.3% depending on scientific field and author background, 2) LLM use has reversed the relationship between writing complexity and paper quality, leading to an influx of manuscripts that are linguistically complex but substantively underwhelming, and 3) LLM adopters access and cite more diverse prior work, including books and younger, less-cited documents. These findings highlight a stunning shift in scientific production that will likely require a change in how journals, funding agencies, and tenure committees evaluate scientific works.
+ oai:arXiv.org:2601.13187v1
+ cs.DL
+ cs.AI
+ cs.CY
+ physics.soc-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1126/science.adw3000
+ Science, 390(6779), pp.1240-1243 (2025)
+ Keigo Kusumegi, Xinyu Yang, Paul Ginsparg, Mathijs de Vaan, Toby Stuart, Yian Yin
+
+
+ Negotiating Relationships with ChatGPT: Perceptions, External Influences, and Strategies for AI Companionship
+ https://arxiv.org/abs/2601.13188
+ arXiv:2601.13188v1 Announce Type: new
+Abstract: Individuals are turning to increasingly anthropomorphic, general-purpose chatbots for AI companionship, rather than roleplay-specific platforms. However, not much is known about how individuals perceive and conduct their relationships with general-purpose chatbots. We analyzed semi-structured interviews (n=13), survey responses (n=43), and community discussions on Reddit (41k+ posts and comments) to triangulate the internal dynamics, external influences, and steering strategies that shape AI companion relationships. We learned that individuals conceptualize their companions based on an interplay of their beliefs about the companion's own agency and the autonomy permitted by the platform, how they pursue interactions with the companion, and the perceived initiatives that the companion takes. In combination with the external entities that affect relationship dynamics, particularly model updates that can derail companion behaviour and stability, individuals make use of different types of steering strategies to preserve their relationship, for example, by setting behavioural instructions or porting to other AI platforms. We discuss implications for accountability and transparency in AI systems, where emotional connection competes with broader product objectives and safety constraints.
+ oai:arXiv.org:2601.13188v1
+ cs.HC
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Patrick Yung Kang Lee, Jessica Y. Bo, Zixin Zhao, Paula Akemi Aoyagui, Matthew Varona, Ashton Anderson, Anastasia Kuzminykh, Fanny Chevalier, Carolina Nobre
+
+
+ LAViG-FLOW: Latent Autoregressive Video Generation for Fluid Flow Simulations
+ https://arxiv.org/abs/2601.13190
+ arXiv:2601.13190v1 Announce Type: new
+Abstract: Modeling and forecasting subsurface multiphase fluid flow fields underpin applications ranging from geological CO2 sequestration (GCS) operations to geothermal production. This is essential for ensuring both operational performance and long-term safety. While high-fidelity multiphase simulators are widely used for this purpose, they become prohibitively expensive once many forward runs are required for inversion and uncertainty quantification. To tackle this challenge, we propose LAViG-FLOW, a latent autoregressive video generation diffusion framework that explicitly learns the coupled evolution of saturation and pressure fields. Each state variable is compressed by a dedicated 2D autoencoder, and a Video Diffusion Transformer (VDiT) models their coupled distribution across time. We first train the model on a given time horizon to learn their coupled relationship and then fine-tune it autoregressively so it can extrapolate beyond the observed time window. Evaluated on an open-source CO2 sequestration dataset, LAViG-FLOW generates saturation and pressure fields that stay consistent across time while running orders of magnitude faster than traditional numerical solvers.
+ oai:arXiv.org:2601.13190v1
+ cs.LG
+ physics.flu-dyn
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Vittoria De Pellegrini, Tariq Alkhalifah
+
+
+ Active Informative Planning for UAV-based Weed Mapping using Discrete Gaussian Process Representations
+ https://arxiv.org/abs/2601.13196
+ arXiv:2601.13196v1 Announce Type: new
+Abstract: Accurate agricultural weed mapping using unmanned aerial vehicles (UAVs) is crucial for precision farming. While traditional methods rely on rigid, pre-defined flight paths and intensive offline processing, informative path planning (IPP) offers a way to collect data adaptively where it is most needed. Gaussian process (GP) mapping provides a continuous model of weed distribution with built-in uncertainty. However, GPs must be discretised for practical use in autonomous planning. Many discretisation techniques exist, but the impact of discrete representation choice remains poorly understood. This paper investigates how different discrete GP representations influence both mapping quality and mission-level performance in UAV-based weed mapping. Considering a UAV equipped with a downward-facing camera, we implement a receding-horizon IPP strategy that selects sampling locations based on the map uncertainty, travel cost, and coverage penalties. We investigate multiple discretisation strategies for representing the GP posterior and use their induced map partitions to generate candidate viewpoints for planning. Experiments on real-world weed distributions show that representation choice significantly affects exploration behaviour and efficiency. Overall, our results demonstrate that discretisation is not only a representational detail but a key design choice that shapes planning dynamics, coverage efficiency, and computational load in online UAV weed mapping.
+ oai:arXiv.org:2601.13196v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jacob Swindell, Marija Popovi\'c, Riccardo Polvara
+
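The uncertainty-driven planning step described in the abstract above can be sketched with a tiny one-dimensional GP. The kernel, lengthscale, sample locations, and grid below are hypothetical illustrative choices, not the paper's configuration: the GP posterior is discretised on a grid, and the next viewpoint is the grid cell with the highest posterior variance.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential (RBF) kernel matrix between two point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_grid, noise=1e-4):
    """GP posterior mean and variance evaluated on a discrete grid."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_grid)
    sol = np.linalg.solve(K, Ks)
    mean = sol.T @ y_train
    var = np.diag(rbf(x_grid, x_grid)) - np.sum(Ks * sol, axis=0)
    return mean, var

# Weed density observed at two locations; the grid discretises the posterior.
x_train = np.array([0.1, 0.9])
y_train = np.array([0.0, 1.0])
x_grid = np.linspace(0.0, 1.0, 101)
mean, var = gp_posterior(x_train, y_train, x_grid)

# Uncertainty-driven selection: fly to the cell with the highest posterior
# variance, i.e. the region farthest from both existing samples.
next_x = x_grid[np.argmax(var)]
```

A receding-horizon planner as in the paper would additionally weigh travel cost and coverage penalties against this variance term; the sketch isolates only the uncertainty component.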
+
+ Diffusion-Driven Synthetic Tabular Data Generation for Enhanced DoS/DDoS Attack Classification
+ https://arxiv.org/abs/2601.13197
+ arXiv:2601.13197v1 Announce Type: new
+Abstract: Class imbalance refers to a situation where certain classes in a dataset have significantly fewer samples than others, leading to biased model performance. This paper addresses class imbalance in network intrusion detection using Tabular Denoising Diffusion Probabilistic Models (TabDDPM) for data augmentation. Our approach synthesizes high-fidelity minority-class samples from the CIC-IDS2017 dataset through iterative denoising processes. For the minority classes with fewer samples, synthetic samples were generated and merged with the original dataset. The augmented training data enables an ANN classifier to achieve near-perfect recall on previously underrepresented attack classes. These results establish diffusion models as an effective solution for tabular data imbalance in security domains, with potential applications in fraud detection and medical diagnostics.
+ oai:arXiv.org:2601.13197v1
+ cs.CR
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Aravind B, Anirud R. S., Sai Surya Teja N, Bala Subrahmanya Sriranga Navaneeth A, Karthika R, Mohankumar N
+
+
+ The Achilles' Heel of Angular Margins: A Chebyshev Polynomial Fix for Speaker Verification
+ https://arxiv.org/abs/2601.13198
+ arXiv:2601.13198v1 Announce Type: new
+Abstract: Angular margin losses, such as AAM-Softmax, have become the de facto standard in speaker and face verification. Their success hinges on directly manipulating the angle between features and class prototypes. However, this manipulation relies on the arccos function to recover the angle, introducing a significant yet overlooked source of training instability. The derivative of arccos explodes at its boundaries, causing gradient peaks during optimisation. Furthermore, the formulation fails to generate a sufficiently sharp gradient for hard-to-classify examples. We address these issues by proposing ChebyAAM, a loss that replaces the arccos operation with its Chebyshev polynomial approximation. This substitution eliminates gradient explosion and applies a stronger corrective signal to hard examples, leading to more effective optimisation. Experiments on three benchmarks (VoxCeleb, SITW, and CN-Celeb) demonstrate that our method resolves the instability and consistently improves performance. Our work suggests that approximating angular operations, rather than calculating them explicitly, offers a more robust path for designing future metric learning losses. Code is available at https://github.com/ExtraOrdinaryLab/vibe.
+ oai:arXiv.org:2601.13198v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yang Wang, Yiqi Liu, Chenghao Xiao, Chenghua Lin
+
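The core idea of the abstract above, replacing arccos with a polynomial so the gradient stays bounded near x = +/-1, can be sketched numerically. The fit degree and grids below are hypothetical choices for illustration, not the paper's ChebyAAM configuration.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Least-squares Chebyshev fit to arccos on [-1, 1].
# Degree 15 is an illustrative choice, not the paper's setting.
x = np.linspace(-1.0, 1.0, 4001)
coeffs = C.chebfit(x, np.arccos(x), deg=15)

# The polynomial surrogate tracks arccos closely away from the endpoints.
grid = np.linspace(-0.9, 0.9, 1001)
err = np.max(np.abs(C.chebval(grid, coeffs) - np.arccos(grid)))

# Unlike d/dx arccos(x) = -1/sqrt(1 - x^2), which diverges as |x| -> 1,
# a polynomial's derivative is bounded on the whole interval, so a loss
# built on the surrogate cannot produce exploding gradients at the boundary.
max_grad = np.max(np.abs(C.chebval(x, C.chebder(coeffs))))
true_grad_near_edge = 1.0 / np.sqrt(1.0 - 0.9999 ** 2)
```

Here the surrogate's worst-case gradient stays far below the true arccos derivative evaluated close to the boundary, which is the stabilising effect the abstract attributes to the Chebyshev substitution.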
+
+ Emissions and cost tradeoffs of time-matched clean electricity procurement under inter-annual weather variability: case study of hydrogen production
+ https://arxiv.org/abs/2601.13202
+ arXiv:2601.13202v1 Announce Type: new
+Abstract: Time-matching requirements (TMRs) for clean electricity procurement are increasingly adopted in voluntary corporate sustainability initiatives and regulatory frameworks. While prior research has evaluated cost and emissions impacts of hourly vs. annual TMR, these studies typically rely on single-year weather scenarios that do not capture inter-annual variability in variable renewable energy (VRE) generation. We use a capacity expansion model to assess how inter-annual weather variability affects procurement-driven infrastructure investments, costs, and emissions for a grid-connected hydrogen producer under both annual and hourly time-matching strategies. Using a Texas case study, we compare deterministic (single weather scenario) and stochastic (nine weather scenarios) modeling approaches. Both procurement investments and cost and emissions outcomes are sensitive to weather scenario, with annual matching exhibiting greater sensitivity than hourly matching. Stochastic modeling finds higher cost premiums for hourly versus annual matching compared to deterministic modeling, though emissions trends remain directionally consistent. Demand flexibility through H2 storage is critical for lowering hourly matching cost premiums under weather-driven VRE variability. Partial hourly matching (e.g., 80-90% compliance) can modestly reduce costs while maintaining minimal emissions impacts. Finally, we examine how grid-level renewable portfolio standards (RPS) affect additionality and emissions. When stringent additionality is achieved via binding RPS constraints on non-H2 electricity demand, annual matching can produce emissions reductions comparable to hourly matching at lower cost.
+ oai:arXiv.org:2601.13202v1
+ eess.SY
+ cs.SY
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Michael Giovanniello, Dharik S. Mallapragada
+
+
+ Real-Time Deadlines Reveal Temporal Awareness Failures in LLM Strategic Dialogues
+ https://arxiv.org/abs/2601.13206
+ arXiv:2601.13206v1 Announce Type: new
+Abstract: Large Language Models (LLMs) generate text token-by-token in discrete time, yet real-world communication, from therapy sessions to business negotiations, critically depends on continuous time constraints. Current LLM architectures and evaluation protocols rarely test for temporal awareness under real-time deadlines. We use simulated negotiations between paired agents under strict deadlines to investigate how LLMs adjust their behavior in time-sensitive settings. In a control condition, agents know only the global time limit. In a time-aware condition, they receive remaining-time updates at each turn. Deal closure rates are substantially higher (32\% vs. 4\% for GPT-5.1) and offer acceptances are sixfold higher in the time-aware condition than in the control, suggesting LLMs struggle to internally track elapsed time. However, the same LLMs achieve near-perfect deal closure rates ($\geq$95\%) under turn-based limits, revealing the failure is in temporal tracking rather than strategic reasoning. These effects replicate across negotiation scenarios and models, illustrating a systematic lack of LLM time awareness that will constrain LLM deployment in many time-sensitive applications.
+ oai:arXiv.org:2601.13206v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Neil K. R. Sehgal, Sharath Chandra Guntuku, Lyle Ungar
+
+
+ GTPred: Benchmarking MLLMs for Interpretable Geo-localization and Time-of-capture Prediction
+ https://arxiv.org/abs/2601.13207
+ arXiv:2601.13207v1 Announce Type: new
+Abstract: Geo-localization aims to infer the geographic location where an image was captured using observable visual evidence. Traditional methods achieve impressive results through large-scale training on massive image corpora. With the emergence of multi-modal large language models (MLLMs), recent studies have explored their applications in geo-localization, benefiting from improved accuracy and interpretability. However, existing benchmarks largely ignore the temporal information inherent in images, which can further constrain the location. To bridge this gap, we introduce GTPred, a novel benchmark for geo-temporal prediction. GTPred comprises 370 globally distributed images spanning over 120 years. We evaluate MLLM predictions by jointly considering year and hierarchical location sequence matching, and further assess intermediate reasoning chains using meticulously annotated ground-truth reasoning processes. Experiments on 8 proprietary and 7 open-source MLLMs show that, despite strong visual perception, current models remain limited in world knowledge and geo-temporal reasoning. Results also demonstrate that incorporating temporal information significantly enhances location inference performance.
+ oai:arXiv.org:2601.13207v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinnao Li, Zijian Chen, Tingzhu Chen, Changbo Wang
+
+
+ Rethinking Skip Connections: Additive U-Net for Robust and Interpretable Denoising
+ https://arxiv.org/abs/2601.13208
+ arXiv:2601.13208v1 Announce Type: new
+Abstract: Skip connections are central to U-Net architectures for image denoising, but standard concatenation doubles channel dimensionality and obscures information flow, allowing uncontrolled noise transfer. We propose the Additive U-Net, which replaces concatenative skips with gated additive connections. Each skip pathway is scaled by a learnable non-negative scalar, offering explicit and interpretable control over encoder contributions while avoiding channel inflation. Evaluations on the Kodak-17 denoising benchmark show that Additive U-Net achieves competitive PSNR/SSIM at noise levels {\sigma} = 15, 25, 50, with robustness across kernel schedules and depths. Notably, effective denoising is achieved even without explicit down/up-sampling or forced hierarchies, as the model naturally learns a progression from high-frequency to band-pass to low-frequency features. These results position additive skips as a lightweight and interpretable alternative to concatenation, enabling both efficient design and a clearer understanding of multi-scale information transfer in reconstruction networks.
+ oai:arXiv.org:2601.13208v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Vikram R Lakkavalli
+
+
+ Conflict Detection in AI-RAN: Efficient Interaction Learning and Autonomous Graph Reconstruction
+ https://arxiv.org/abs/2601.13213
+ arXiv:2601.13213v1 Announce Type: new
+Abstract: Artificial Intelligence (AI)-native mobile networks represent a fundamental step toward 6G, where learning, inference, and decision making are embedded into the Radio Access Network (RAN) itself. In such networks, multiple AI agents optimize the network to achieve distinct and often competing objectives. As such, conflicts become inevitable and have the potential to degrade performance, cause instability, and disrupt service. Current approaches for conflict detection rely on conflict graphs created based on relationships between AI agents, parameters, and Key Performance Indicators (KPIs). Existing works often rely on complex and computationally expensive Graph Neural Networks (GNNs) and depend on manually chosen thresholds to create conflict graphs. In this work, we present the first systematic framework for conflict detection in AI-native mobile networks, propose a two-tower encoder architecture for learning interactions based on data from the RAN, and introduce a data-driven sparsity-based mechanism for autonomously reconstructing conflict graphs without manual fine-tuning.
+ oai:arXiv.org:2601.13213v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Joao F. Santos, Arshia Zolghadr, Scott Kuzdeba, Jacek Kibi{\l}da
+
+
+ An AMP-Based Asymptotic Analysis For Nonlinear One-Bit Precoding
+ https://arxiv.org/abs/2601.13214
+ arXiv:2601.13214v1 Announce Type: new
+Abstract: This paper focuses on the asymptotic analysis of a class of nonlinear one-bit precoding schemes under Rayleigh fading channels. The considered scheme employs a convex-relaxation-then-quantization (CRQ) approach to the well-known minimum mean square error (MMSE) model, which includes the classical one-bit precoder SQUID as a special case. To analyze its asymptotic behavior, we develop a novel analytical framework based on approximate message passing (AMP). We show that the statistical properties of the considered scheme can be asymptotically characterized by a scalar ``signal plus Gaussian noise'' model. Based on this, we further derive a closed-form expression for the symbol error probability (SEP) in the large-system limit, which quantitatively characterizes the impact of both system and model parameters on SEP performance. Simulation results validate our analysis and also demonstrate that performance gains over SQUID can be achieved by appropriately tuning the parameters involved in the considered model.
+ oai:arXiv.org:2601.13214v1
+ cs.IT
+ eess.SP
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zheyu Wu, Junjie Ma, Ya-Feng Liu, Bruno Clerckx
+
+
+ On the Reliability of Estimation Bounds in Low-SNR Bistatic ISAC
+ https://arxiv.org/abs/2601.13216
+ arXiv:2601.13216v1 Announce Type: new
+Abstract: This paper explores a bistatic Integrated Sensing and Communication (ISAC) framework, where a base station transmits a communication signal that serves both direct communication with a user and multi-target parameter estimation through reflections captured by a separate sensing receiver. We assume that the instantaneous knowledge of the transmit signal at the sensing receiver is not available, and the sensing receiver only has knowledge of the statistical properties of the received signal. Unlike prior research that focuses on power allocation or optimal beamforming design for ISAC, we emphasize the inadequacy of the Cram\'er-Rao Bound (CRB, and its variant) in low Signal-to-Noise Ratio (SNR) regimes, particularly in passive sensing scenarios. Due to severe path loss and other impairments, the received sensing SNR is often significantly lower than that of direct Line-of-Sight communication, making CRB-based performance evaluation unreliable. To address this, we adopt the Ziv-Zakai Bound (ZZB) for Angle of Arrival estimation, which provides a more meaningful lower bound on estimation error. We derive analytical expressions for the ZZB and the achievable ergodic communication rate as functions of SNR. Through numerical simulations, we analyze the Pareto front between communication and sensing performance, demonstrating why the ZZB serves as a better metric in low-sensing-SNR ISAC where traditional CRB-based approaches fail.
+ oai:arXiv.org:2601.13216v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ataher Sams, Besma Smida
+
+
+ Beyond Single-shot Writing: Deep Research Agents are Unreliable at Multi-turn Report Revision
+ https://arxiv.org/abs/2601.13217
+ arXiv:2601.13217v1 Announce Type: new
+Abstract: Existing benchmarks for Deep Research Agents (DRAs) treat report generation as a single-shot writing task, which fundamentally diverges from how human researchers iteratively draft and revise reports via self-reflection or peer feedback. Whether DRAs can reliably revise reports with user feedback remains unexplored. We introduce Mr Dre, an evaluation suite that establishes multi-turn report revision as a new evaluation axis for DRAs. Mr Dre consists of (1) a unified long-form report evaluation protocol spanning comprehensiveness, factuality, and presentation, and (2) a human-verified feedback simulation pipeline for multi-turn revision. Our analysis of five diverse DRAs reveals a critical limitation: while agents can address most user feedback, they also regress on 16-27% of previously covered content and citation quality. Over multiple revision turns, even the best-performing agents leave significant headroom, as they continue to disrupt content outside the feedback's scope and fail to preserve earlier edits. We further show that these issues are not easily resolvable through inference-time fixes such as prompt engineering and a dedicated sub-agent for report revision.
+ oai:arXiv.org:2601.13217v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bingsen Chen, Boyan Li, Ping Nie, Yuyu Zhang, Xi Ye, Chen Zhao
+
+
+ ObjectVisA-120: Object-based Visual Attention Prediction in Interactive Street-crossing Environments
+ https://arxiv.org/abs/2601.13218
+ arXiv:2601.13218v1 Announce Type: new
+Abstract: The object-based nature of human visual attention is well-known in cognitive science, but has only played a minor role in computational visual attention models so far. This is mainly due to a lack of suitable datasets and evaluation metrics for object-based attention. To address these limitations, we present ObjectVisA-120 -- a novel 120-participant dataset of spatial street-crossing navigation in virtual reality specifically geared to object-based attention evaluations. The uniqueness of the presented dataset lies in the ethical and safety-related challenges that make collecting comparable data in real-world environments highly difficult. ObjectVisA-120 not only features accurate gaze data and a complete state-space representation of objects in the virtual environment, but it also offers variable scenario complexities and rich annotations, including panoptic segmentation, depth information, and vehicle keypoints. We further propose object-based similarity (oSIM) as a novel metric to evaluate the performance of object-based visual attention models, a previously unexplored performance characteristic. Our evaluations show that explicitly optimising for object-based attention not only improves oSIM performance but also leads to an improved model performance on common metrics. In addition, we present SUMGraph, a Mamba U-Net-based model, which explicitly encodes critical scene objects (vehicles) in a graph representation, leading to further performance improvements over several state-of-the-art visual attention prediction methods. The dataset, code and models will be publicly released.
+ oai:arXiv.org:2601.13218v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Igor Vozniak, Philipp Mueller, Nils Lipp, Janis Sprenger, Konstantin Poddubnyy, Davit Hovhannisyan, Christian Mueller, Andreas Bulling, Philipp Slusallek
+
+
+ The Energy-Throughput Trade-off in Lossless-Compressed Source Code Storage
+ https://arxiv.org/abs/2601.13220
+ arXiv:2601.13220v1 Announce Type: new
+Abstract: Retrieving data from large-scale source code archives is vital for AI training, neural-based software analysis, and information retrieval, to name a few. This paper studies and experiments with the design of a compressed key-value store for the indexing of large-scale source code datasets, evaluating its trade-off among three primary computational resources: (compressed) space occupancy, time, and energy efficiency. Extensive experiments on a national high-performance computing infrastructure demonstrate that different compression configurations yield distinct trade-offs, with high compression ratios and order-of-magnitude gains in retrieval throughput and energy efficiency. We also study data parallelism and show that, while it significantly improves speed, scaling energy efficiency is more difficult, reflecting the known non-energy-proportionality of modern hardware and challenging the assumption of a direct time-energy correlation. This work streamlines automation in energy-aware configuration tuning and standardized green benchmarking deployable in CI/CD pipelines, thus empowering system architects with a spectrum of Pareto-optimal energy-compression-throughput trade-offs and actionable guidelines for building sustainable, efficient storage backends for massive open-source code archival.
+ oai:arXiv.org:2601.13220v1
+ cs.DS
+ cs.DB
+ cs.DC
+ cs.PF
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Paolo Ferragina, Francesco Tosoni
+
+
+ Incorporating Q&A Nuggets into Retrieval-Augmented Generation
+ https://arxiv.org/abs/2601.13222
+ arXiv:2601.13222v1 Announce Type: new
+Abstract: RAGE systems integrate ideas from automatic evaluation (E) into Retrieval-augmented Generation (RAG). As one such example, we present Crucible, a Nugget-Augmented Generation System that preserves explicit citation provenance by constructing a bank of Q&A nuggets from retrieved documents and uses them to guide extraction, selection, and report generation. Reasoning on nuggets avoids repeated information through clear and interpretable Q&A semantics - instead of opaque cluster abstractions - while maintaining citation provenance throughout the entire generation process. Evaluated on the TREC NeuCLIR 2024 collection, our Crucible system substantially outperforms Ginger, a recent nugget-based RAG system, in nugget recall, density, and citation grounding.
+ oai:arXiv.org:2601.13222v1
+ cs.IR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Laura Dietz, Bryan Li, Gabrielle Liu, Jia-Huei Ju, Eugene Yang, Dawn Lawrie, William Walden, James Mayfield
+
+
+ Functional Logic Program Transformations
+ https://arxiv.org/abs/2601.13224
+ arXiv:2601.13224v1 Announce Type: new
+Abstract: Many tools used to process programs, like compilers, analyzers, or verifiers, perform transformations on their intermediate program representation, like abstract syntax trees. Implementing such program transformations is a non-trivial task, since it is necessary to iterate over the complete syntax tree and apply various transformations at nodes in the tree. In this paper we show how the features of functional logic programming are useful to implement program transformations in a compact and comprehensible manner. For this purpose, we propose to write program transformations as partially defined and non-deterministic operations. Since the implementation of non-determinism usually causes some overhead compared to deterministically defined operations, we compare our approach to a deterministic transformation method. We evaluate these alternatives for the functional logic language Curry and its intermediate representation FlatCurry which is used in various analysis and verification tools and compilers.
+ oai:arXiv.org:2601.13224v1
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Michael Hanus, Steven Libby
+
+
+ Not all Blends are Equal: The BLEMORE Dataset of Blended Emotion Expressions with Relative Salience Annotations
+ https://arxiv.org/abs/2601.13225
+ arXiv:2601.13225v1 Announce Type: new
+Abstract: Humans often experience not just a single basic emotion at a time, but rather a blend of several emotions with varying salience. Despite the importance of such blended emotions, most video-based emotion recognition approaches are designed to recognize single emotions only. The few approaches that have attempted to recognize blended emotions typically cannot assess the relative salience of the emotions within a blend. This limitation largely stems from the lack of datasets containing a substantial number of blended emotion samples annotated with relative salience. To address this shortcoming, we introduce BLEMORE, a novel dataset for multimodal (video, audio) blended emotion recognition that includes information on the relative salience of each emotion within a blend. BLEMORE comprises over 3,000 clips from 58 actors, performing 6 basic emotions and 10 distinct blends, where each blend has 3 different salience configurations (50/50, 70/30, and 30/70). Using this dataset, we conduct extensive evaluations of state-of-the-art video classification approaches on two blended emotion prediction tasks: (1) predicting the presence of emotions in a given sample, and (2) predicting the relative salience of emotions in a blend. Our results show that unimodal classifiers achieve up to 29% presence accuracy and 13% salience accuracy on the validation set, while multimodal methods yield clear improvements, with ImageBind + WavLM reaching 35% presence accuracy and HiCMAE 18% salience accuracy. On the held-out test set, the best models achieve 33% presence accuracy (VideoMAEv2 + HuBERT) and 18% salience accuracy (HiCMAE). In sum, the BLEMORE dataset provides a valuable resource for advancing research on emotion recognition systems that account for the complexity and significance of blended emotion expressions.
+ oai:arXiv.org:2601.13225v1
+ cs.CV
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tim Lachmann, Alexandra Israelsson, Christina Tornberg, Teimuraz Saghinadze, Michal Balazia, Philipp Müller, Petri Laukka
+
+
+ Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?
+ https://arxiv.org/abs/2601.13227
+ arXiv:2601.13227v1 Announce Type: new
+Abstract: RAG systems are increasingly evaluated and optimized using LLM judges, an approach that is rapidly becoming the dominant paradigm for system assessment. Nugget-based approaches in particular are now embedded not only in evaluation frameworks but also in the architectures of RAG systems themselves. While this integration can lead to genuine improvements, it also creates a risk of faulty measurements due to circularity. In this paper, we investigate this risk through comparative experiments with nugget-based RAG systems, including Ginger and Crucible, against strong baselines such as GPT-Researcher. By deliberately modifying Crucible to generate outputs optimized for an LLM judge, we show that near-perfect evaluation scores can be achieved when elements of the evaluation - such as prompt templates or gold nuggets - are leaked or can be predicted. Our results highlight the importance of blind evaluation settings and methodological diversity to guard against mistaking metric overfitting for genuine system progress.
+ oai:arXiv.org:2601.13227v1
+ cs.IR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Laura Dietz, Bryan Li, Eugene Yang, Dawn Lawrie, William Walden, James Mayfield
+
+
+ Autoregressive Models Rival Diffusion Models at ANY-ORDER Generation
+ https://arxiv.org/abs/2601.13228
+ arXiv:2601.13228v1 Announce Type: new
+Abstract: Diffusion language models enable any-order generation and bidirectional conditioning, offering appealing flexibility for tasks such as infilling, rewriting, and self-correction. However, their formulation (predicting one part of a sequence from another within a single-step dependency) limits modeling depth and often yields lower sample quality and stability than autoregressive (AR) models. To address this, we revisit autoregressive modeling as a foundation and reformulate diffusion-style training into a structured multi-group prediction process. We propose Any-order Any-subset Autoregressive modeling (A3), a generalized framework that extends the standard AR factorization to arbitrary token groups and generation orders. A3 preserves the probabilistic rigor and multi-layer dependency modeling of AR while inheriting diffusion models' flexibility for parallel and bidirectional generation. We implement A3 through a two-stream attention architecture and a progressive adaptation strategy that transitions pretrained AR models toward any-order prediction. Experiments on question answering, commonsense reasoning, and story infilling demonstrate that A3 outperforms diffusion-based models while maintaining flexible decoding. This work offers a unified approach for a flexible, efficient, and novel language modeling paradigm.
+ oai:arXiv.org:2601.13228v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tianqi Du, Lizhe Fang, Weijie Yang, Chenheng Zhang, Zeming Wei, Yifei Wang, Yisen Wang
+
+
+ Towards Matrix-Free Patch Smoothers for the Stokes Problem: Evaluating Local p-Multigrid Solvers
+ https://arxiv.org/abs/2601.13230
+ arXiv:2601.13230v1 Announce Type: new
+Abstract: Vertex-patch smoothers offer an effective strategy for achieving robust geometric multigrid convergence for the Stokes equations, particularly in the context of high-order finite elements. However, their practical efficiency is often limited by the computational cost of solving the local saddle-point problems, especially when explicit matrix factorizations are not feasible. We explore a fully iterative, matrix-free-compatible approach to the local patch solve using $p$-multigrid techniques. We evaluate different local solver configurations: Braess-Sarazin and block-triangular preconditioners. Our numerical experiments suggest that the Braess-Sarazin approach is particularly resilient. We find that a single iteration of the local solver yields global convergence rates comparable to those obtained with exact local solvers, even on distorted meshes and in the presence of large viscosity jumps.
+ oai:arXiv.org:2601.13230v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Michał Wichrowski
+
+
+ MATTERIX: toward a digital twin for robotics-assisted chemistry laboratory automation
+ https://arxiv.org/abs/2601.13232
+ arXiv:2601.13232v1 Announce Type: new
+Abstract: Accelerated materials discovery is critical for addressing global challenges. However, developing new laboratory workflows relies heavily on real-world experimental trials, and this can hinder scalability because of the need for numerous physical make-and-test iterations. Here we present MATTERIX, a multiscale, graphics processing unit-accelerated robotic simulation framework designed to create high-fidelity digital twins of chemistry laboratories, thus accelerating workflow development. This multiscale digital twin simulates robotic physical manipulation, powder and liquid dynamics, device functionalities, heat transfer and basic chemical reaction kinetics. This is enabled by integrating realistic physics simulation and photorealistic rendering with a modular graphics processing unit-accelerated semantics engine, which models logical states and continuous behaviors to simulate chemistry workflows across different levels of abstraction. MATTERIX streamlines the creation of digital twin environments through open-source asset libraries and interfaces, while enabling flexible workflow design via hierarchical plan definition and a modular skill library that incorporates learning-based methods. Our approach demonstrates sim-to-real transfer in robotic chemistry setups, reducing reliance on costly real-world experiments and enabling the testing of hypothetical automated workflows in silico. The project website is available at https://accelerationconsortium.github.io/Matterix/ .
+ oai:arXiv.org:2601.13232v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1038/s43588-025-00924-4
+ Kourosh Darvish, Arjun Sohal, Abhijoy Mandal, Hatem Fakhruldeen, Nikola Radulov, Zhengxue Zhou, Satheeshkumar Veeramani, Joshua Choi, Sijie Han, Brayden Zhang, Jeeyeoun Chae, Alex Wright, Yijie Wang, Hossein Darvish, Yuchi Zhao, Gary Tom, Han Hao, Miroslav Bogdanovic, Gabriella Pizzuto, Andrew I. Cooper, Alán Aspuru-Guzik, Florian Shkurti, Animesh Garg
+
+
+ RAG: A Random-Forest-Based Generative Design Framework for Uncertainty-Aware Design of Metamaterials with Complex Functional Response Requirements
+ https://arxiv.org/abs/2601.13233
+ arXiv:2601.13233v1 Announce Type: new
+Abstract: Metamaterials design for advanced functionality often entails the inverse design of nonlinear and condition-dependent responses (e.g., stress-strain relation and dispersion relation), which are described by continuous functions. Most existing design methods focus on vector-valued responses (e.g., Young's modulus and bandgap width), while the inverse design of functional responses remains challenging due to their high-dimensionality, the complexity of accommodating design requirements in inverse-design frameworks, and non-existence or non-uniqueness of feasible solutions. Although generative design approaches have shown promise, they are often data-hungry, handle design requirements heuristically, and may generate infeasible designs without uncertainty quantification. To address these challenges, we introduce a RAndom-forest-based Generative approach (RAG). By leveraging the small-data compatibility of random forests, RAG enables data-efficient predictions of high-dimensional functional responses. During the inverse design, the framework estimates the likelihood through the ensemble, which quantifies the trustworthiness of generated designs while reflecting the relative difficulty across different requirements. The one-to-many mapping is addressed through single-shot design generation by sampling from the conditional likelihood. We demonstrate RAG on: 1) acoustic metamaterials with prescribed partial passbands/stopbands, and 2) mechanical metamaterials with targeted snap-through responses, using 500 and 1057 samples, respectively. Its data-efficiency is benchmarked against neural networks on a public mechanical metamaterial dataset with nonlinear stress-strain relations. Our framework provides a lightweight, trustworthy pathway to inverse design involving functional responses, expensive simulations, and complex design requirements, beyond metamaterials.
+ oai:arXiv.org:2601.13233v1
+ cs.AI
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Bolin Chen, Dex Doksoo Lee, Wei "Wayne" Chen, Wei Chen
+
+
+ ConvMambaNet: A Hybrid CNN-Mamba State Space Architecture for Accurate and Real-Time EEG Seizure Detection
+ https://arxiv.org/abs/2601.13234
+ arXiv:2601.13234v1 Announce Type: new
+Abstract: Epilepsy is a chronic neurological disorder marked by recurrent seizures that can severely impact quality of life. Electroencephalography (EEG) remains the primary tool for monitoring neural activity and detecting seizures, yet automated analysis remains challenging due to the temporal complexity of EEG signals. This study introduces ConvMambaNet, a hybrid deep learning model that integrates Convolutional Neural Networks (CNNs) with the Mamba Structured State Space Model (SSM) to enhance temporal feature extraction. By embedding the Mamba-SSM block within a CNN framework, the model effectively captures both spatial and long-range temporal dynamics. Evaluated on the CHB-MIT Scalp EEG dataset, ConvMambaNet achieved a 99% accuracy and demonstrated robust performance under severe class imbalance. These results underscore the model's potential for precise and efficient seizure detection, offering a viable path toward real-time, automated epilepsy monitoring in clinical environments.
+ oai:arXiv.org:2601.13234v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Md. Nishan Khan, Kazi Shahriar Sanjid, Md. Tanzim Hossain, Asib Mostakim Fony, Istiak Ahmed, M. Monir Uddin
+
+
+ RubRIX: Rubric-Driven Risk Mitigation in Caregiver-AI Interactions
+ https://arxiv.org/abs/2601.13235
+ arXiv:2601.13235v1 Announce Type: new
+Abstract: Caregivers seeking AI-mediated support express complex needs -- information-seeking, emotional validation, and distress cues -- that warrant careful evaluation of response safety and appropriateness. Existing AI evaluation frameworks, primarily focused on general risks (toxicity, hallucinations, policy violations, etc.), may not adequately capture the nuanced risks of LLM responses in caregiving contexts. We introduce RubRIX (Rubric-based Risk Index), a theory-driven, clinician-validated framework for evaluating risks in LLM caregiving responses. Grounded in the Elements of an Ethic of Care, RubRIX operationalizes five empirically-derived risk dimensions: Inattention, Bias & Stigma, Information Inaccuracy, Uncritical Affirmation, and Epistemic Arrogance. We evaluate six state-of-the-art LLMs on over 20,000 caregiver queries from Reddit and ALZConnected. Rubric-guided refinement consistently reduced risk components by 45-98% after one iteration across models. This work contributes a methodological approach for developing domain-sensitive, user-centered evaluation frameworks for high-burden contexts. Our findings highlight the importance of domain-sensitive, interactional risk evaluation for the responsible deployment of LLMs in caregiving support contexts. We release benchmark datasets to enable future research on contextual risk evaluation in AI-mediated support.
+ oai:arXiv.org:2601.13235v1
+ cs.HC
+ cs.AI
+ cs.CL
+ cs.CY
+ cs.LG
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Drishti Goel, Jeongah Lee, Qiuyue Joy Zhong, Violeta J. Rodriguez, Daniel S. Brown, Ravi Karkar, Dong Whi Yoo, Koustuv Saha
+
+
+ A Semantic Decoupling-Based Two-Stage Rainy-Day Attack for Revealing Weather Robustness Deficiencies in Vision-Language Models
+ https://arxiv.org/abs/2601.13238
+ arXiv:2601.13238v1 Announce Type: new
+Abstract: Vision-Language Models (VLMs) are trained on image-text pairs collected under canonical visual conditions and achieve strong performance on multimodal tasks. However, their robustness to real-world weather conditions, and the stability of cross-modal semantic alignment under such structured perturbations, remain insufficiently studied. In this paper, we focus on rainy scenarios and introduce the first adversarial framework that exploits realistic weather to attack VLMs, using a two-stage, parameterized perturbation model based on semantic decoupling to analyze rain-induced shifts in decision-making. In Stage 1, we model the global effects of rainfall by applying a low-dimensional global modulation to condition the embedding space and gradually weaken the original semantic decision boundaries. In Stage 2, we introduce structured rain variations by explicitly modeling multi-scale raindrop appearance and rainfall-induced illumination changes, and optimize the resulting non-differentiable weather space to induce stable semantic shifts. Operating in a non-pixel parameter space, our framework generates perturbations that are both physically grounded and interpretable. Experiments across multiple tasks show that even physically plausible, highly constrained weather perturbations can induce substantial semantic misalignment in mainstream VLMs, posing potential safety and reliability risks in real-world deployment. Ablations further confirm that illumination modeling and multi-scale raindrop structures are key drivers of these semantic shifts.
+ oai:arXiv.org:2601.13238v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chengyin Hu, Xiang Chen, Zhe Jia, Weiwen Shi, Fengyu Zhang, Jiujiang Guo, Yiwei Wei
+
+
+ KOCO-BENCH: Can Large Language Models Leverage Domain Knowledge in Software Development?
+ https://arxiv.org/abs/2601.13240
+ arXiv:2601.13240v1 Announce Type: new
+Abstract: Large language models (LLMs) excel at general programming but struggle with domain-specific software development, necessitating domain specialization methods for LLMs to learn and utilize domain knowledge and data. However, existing domain-specific code benchmarks cannot evaluate the effectiveness of domain specialization methods: they focus on assessing what knowledge LLMs possess rather than how LLMs acquire and apply new knowledge, and they lack explicit knowledge corpora for developing domain specialization methods. To this end, we present KOCO-BENCH, a novel benchmark designed for evaluating domain specialization methods in real-world software development. KOCO-BENCH contains 6 emerging domains with 11 software frameworks and 25 projects, featuring curated knowledge corpora alongside multi-granularity evaluation tasks including domain code generation (from function-level to project-level with rigorous test suites) and domain knowledge understanding (via multiple-choice Q&A). Unlike previous benchmarks that only provide test sets for direct evaluation, KOCO-BENCH requires acquiring and applying diverse domain knowledge (APIs, rules, constraints, etc.) from knowledge corpora to solve evaluation tasks. Our evaluations reveal that KOCO-BENCH poses significant challenges to state-of-the-art LLMs. Even with domain specialization methods (e.g., SFT, RAG, kNN-LM) applied, improvements remain marginal. The best-performing coding agent, Claude Code, achieves only 34.2%, highlighting the urgent need for more effective domain specialization methods. We release KOCO-BENCH, evaluation code, and baselines to advance further research at https://github.com/jiangxxxue/KOCO-bench.
+ oai:arXiv.org:2601.13240v1
+ cs.SE
+ cs.AI
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xue Jiang, Jiaru Qian, Xianjie Shi, Chenjie Li, Hao Zhu, Ziyu Wang, Jielun Zhang, Zheyu Zhao, Kechi Zhang, Jia Li, Wenpin Jiao, Zhi Jin, Ge Li, Yihong Dong
+
+
+ A Comprehensive Evaluation of LLM Reasoning: From Single-Model to Multi-Agent Paradigms
+ https://arxiv.org/abs/2601.13243
+ arXiv:2601.13243v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are increasingly deployed as reasoning systems, where reasoning paradigms - such as Chain-of-Thought (CoT) and multi-agent systems (MAS) - play a critical role, yet their relative effectiveness and cost-accuracy trade-offs remain poorly understood. In this work, we conduct a comprehensive and unified evaluation of reasoning paradigms, spanning direct single-model generation, CoT-augmented single-model reasoning, and representative MAS workflows, characterizing their reasoning performance across a diverse suite of closed-form benchmarks. Beyond overall performance, we probe role-specific capability demands in MAS using targeted role isolation analyses, and analyze cost-accuracy trade-offs to identify which MAS workflows offer a favorable balance between cost and accuracy, and which incur prohibitive overhead for marginal gains. We further introduce MIMeBench, a new open-ended benchmark that targets two foundational yet underexplored semantic capabilities - semantic abstraction and contrastive discrimination - thereby providing an alternative evaluation axis beyond closed-form accuracy and enabling fine-grained assessment of semantic competence that is difficult to capture with existing benchmarks. Our results show that increased structural complexity does not consistently lead to improved reasoning performance, with its benefits being highly dependent on the properties and suitability of the reasoning paradigm itself. The codes are released at https://gitcode.com/HIT1920/OpenLLMBench.
+ oai:arXiv.org:2601.13243v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yapeng Li, Jiakuo Yu, Zhixin Liu, Xinnan Liu, Jing Yu, Songze Li, Tonghua Su
+
+
+ Do Instruction-Tuned Models Always Perform Better Than Base Models? Evidence from Math and Domain-Shifted Benchmarks
+ https://arxiv.org/abs/2601.13244
+ arXiv:2601.13244v1 Announce Type: new
+Abstract: Instruction finetuning is standard practice for improving LLM performance, yet it remains unclear whether it enhances reasoning or merely induces surface-level pattern matching. We investigate this by evaluating base and instruction-tuned models on standard math benchmarks, structurally perturbed variants, and domain-shifted tasks. Our analysis highlights two key (often overlooked) limitations of instruction tuning. First, the performance advantage is unstable and depends heavily on evaluation settings. In zero-shot CoT settings on GSM8K, base models consistently outperform instruction-tuned variants, with drops as high as 32.67% (Llama3-70B). Instruction-tuned models only match or exceed this performance when provided with few-shot exemplars, suggesting a reliance on specific prompting patterns rather than intrinsic reasoning. Second, tuning gains are brittle under distribution shift. Our results show that base models surpass instruction-tuned variants on the domain-specific MedCalc benchmark. Additionally, instruction-tuned models show sharp declines on perturbed datasets, indicating sensitivity to prompt structure over robust reasoning.
+ oai:arXiv.org:2601.13244v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Prateek Munjal, Clement Christophe, Ronnie Rajan, Praveenkumar Kanithi
+
+
+ The Cost of Failure: On The Complexity of Recampaigning under Fixed Districts
+ https://arxiv.org/abs/2601.13246
+ arXiv:2601.13246v1 Announce Type: new
+Abstract: Redistricting efforts have gathered contemporary attention in both quotidian and scholarly debates, particularly in the United States where efforts to redraw congressional districts to favor either of the two major parties in 12 states -- such as California, Texas, and Ohio -- have captured the public eye. The treatment of redistricting in computational social choice has essentially focused on the process of determining "appropriate" districts. In this work, we are interested in understanding the gamut of options left for the "losing" party, and so we consider the flip side of the problem: Given fixed/predetermined districts, can a given party still make their candidates win by strategically placing them in certain districts? We dub this as "recampaigning" to capture the intuition that a party would redirect their campaigning efforts from one district to another. We model recampaigning as a computational problem, consider natural variations of the model, and study those new models through the lens of (1) (polynomial-time many-one) interreducibilities, (2) separations/collapses (both unconditional and axiomatic-sufficient), and (3) both worst-case and parametrized complexity.
+ oai:arXiv.org:2601.13246v1
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Michael C. Chavrimootoo, Aidan Jeansonne
+
+
+ Aligning Agentic World Models via Knowledgeable Experience Learning
+ https://arxiv.org/abs/2601.13247
+ arXiv:2601.13247v1 Announce Type: new
+Abstract: Current Large Language Models (LLMs) exhibit a critical modal disconnect: they possess vast semantic knowledge but lack the procedural grounding to respect the immutable laws of the physical world. Consequently, while these agents implicitly function as world models, their simulations often suffer from physical hallucinations - generating plans that are logically sound but physically unexecutable. Existing alignment strategies predominantly rely on resource-intensive training or fine-tuning, which attempt to compress dynamic environmental rules into static model parameters. However, such parametric encapsulation is inherently rigid, struggling to adapt to the open-ended variability of physical dynamics without continuous, costly retraining. To bridge this gap, we introduce WorldMind, a framework that autonomously constructs a symbolic World Knowledge Repository by synthesizing environmental feedback. Specifically, it unifies Process Experience to enforce physical feasibility via prediction errors and Goal Experience to guide task optimality through successful trajectories. Experiments on EB-ALFRED and EB-Habitat demonstrate that WorldMind achieves superior performance compared to baselines with remarkable cross-model and cross-environment transferability.
+ oai:arXiv.org:2601.13247v1
+ cs.CL
+ cs.AI
+ cs.CV
+ cs.LG
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Baochang Ren, Yunzhi Yao, Rui Sun, Shuofei Qiao, Ningyu Zhang, Huajun Chen
+
+
+ Diffusion-based Inverse Model of a Distributed Tactile Sensor for Object Pose Estimation
+ https://arxiv.org/abs/2601.13250
+ arXiv:2601.13250v1 Announce Type: new
+Abstract: Tactile sensing provides a promising sensing modality for object pose estimation in manipulation settings where visual information is limited due to occlusion or environmental effects. However, efficiently leveraging tactile data for estimation remains a challenge due to partial observability, with single observations corresponding to multiple possible contact configurations. This limits conventional estimation approaches largely tailored to vision. We propose to address these challenges by learning an inverse tactile sensor model using denoising diffusion. The model is conditioned on tactile observations from a distributed tactile sensor and trained in simulation using a geometric sensor model based on signed distance fields. Contact constraints are enforced during inference through single-step projection using distance and gradient information from the signed distance field. For online pose estimation, we integrate the inverse model with a particle filter through a proposal scheme that combines generated hypotheses with particles from the prior belief. Our approach is validated in simulated and real-world planar pose estimation settings, without access to visual data or tight initial pose priors. We further evaluate robustness to unmodeled contact and sensor dynamics for pose tracking in a box-pushing scenario. Compared to local sampling baselines, the inverse sensor model improves sampling efficiency and estimation accuracy while preserving multimodal beliefs across objects with varying tactile discriminability.
+ oai:arXiv.org:2601.13250v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ante Marić, Giammarco Caroleo, Alessandro Albini, Julius Jankowski, Perla Maiolino, Sylvain Calinon
+
+
+ Beyond Cosine Similarity: Taming Semantic Drift and Antonym Intrusion in a 15-Million Node Turkish Synonym Graph
+ https://arxiv.org/abs/2601.13251
+ arXiv:2601.13251v1 Announce Type: new
+Abstract: Neural embeddings have a notorious blind spot: they can't reliably tell synonyms apart from antonyms. Consequently, increasing similarity thresholds often fails to prevent opposites from being grouped together. We've built a large-scale semantic clustering system specifically designed to tackle this problem head-on. Our pipeline chews through 15 million lexical items, evaluates a massive 520 million potential relationships, and ultimately generates 2.9 million high-precision semantic clusters. The system makes three primary contributions. First, we introduce a labeled dataset of 843,000 concept pairs spanning synonymy, antonymy, and co-hyponymy, constructed via Gemini 2.5-Flash LLM augmentation and verified using human-curated dictionary resources. Second, we propose a specialized three-way semantic relation discriminator that achieves 90% macro-F1, enabling robust disambiguation beyond raw embedding similarity. Third, we introduce a novel soft-to-hard clustering algorithm that mitigates semantic drift by preventing erroneous transitive chains (e.g., hot -> spicy -> pain -> depression) while simultaneously resolving polysemy. Our approach employs a topology-aware two-stage expansion-pruning procedure with topological voting, ensuring that each term is assigned to exactly one semantically coherent cluster. The resulting resource enables high-precision semantic search and retrieval-augmented generation, particularly for morphologically rich and low-resource languages where existing synonym databases remain sparse.
+ oai:arXiv.org:2601.13251v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ebubekir Tosun, Mehmet Emin Buldur, Özay Ezerceli, Mahmoud ElHussieni
+
+
+ Autonomous Navigation at the Nano-Scale: Algorithms, Architectures, and Constraints
+ https://arxiv.org/abs/2601.13252
+ arXiv:2601.13252v1 Announce Type: new
+Abstract: Autonomous navigation for nano-scale unmanned aerial vehicles (nano-UAVs) is governed by extreme Size, Weight, and Power (SWaP) constraints (weight < 50 g and a sub-100 mW onboard processor), distinguishing it fundamentally from standard robotic paradigms. This review synthesizes the state-of-the-art in sensing, computing, and control architectures designed specifically for these sub-100 mW computational envelopes. We critically analyse the transition from classical geometry-based methods to emerging "Edge AI" paradigms, including quantized deep neural networks deployed on ultra-low-power System-on-Chips (SoCs) and neuromorphic event-based control. Beyond algorithms, we evaluate the hardware-software co-design requisite for autonomy, covering advancements in dense optical flow, optimized Simultaneous Localization and Mapping (SLAM), and learning-based flight control. While significant progress has been observed in visual navigation and relative pose estimation, our analysis reveals persistent gaps in long-term endurance, robust obstacle avoidance in dynamic environments, and the "Sim-to-Real" transfer of reinforcement learning policies. This survey provides a roadmap for bridging these gaps, advocating for hybrid architectures that fuse lightweight classical control with data-driven perception to enable fully autonomous, agile nano-UAVs in GPS-denied environments.
+ oai:arXiv.org:2601.13252v1
+ cs.RO
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mahmud S. Zango, Jianglin Lan
+
+
+ A Hybrid Protocol for Large-Scale Semantic Dataset Generation in Low-Resource Languages: The Turkish Semantic Relations Corpus
+ https://arxiv.org/abs/2601.13253
+ arXiv:2601.13253v1 Announce Type: new
+Abstract: We present a hybrid methodology for generating large-scale semantic relationship datasets in low-resource languages, demonstrated through a comprehensive Turkish semantic relations corpus. Our approach integrates three phases: (1) FastText embeddings with Agglomerative Clustering to identify semantic clusters, (2) Gemini 2.5-Flash for automated semantic relationship classification, and (3) integration with curated dictionary sources. The resulting dataset comprises 843,000 unique Turkish semantic pairs across three relationship types (synonyms, antonyms, co-hyponyms) representing a 10x scale increase over existing resources at minimal cost ($65). We validate the dataset through two downstream tasks: an embedding model achieving 90% top-1 retrieval accuracy and a classification model attaining 90% F1-macro. Our scalable protocol addresses critical data scarcity in Turkish NLP and demonstrates applicability to other low-resource languages. We publicly release the dataset and models.
+ oai:arXiv.org:2601.13253v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ebubekir Tosun, Mehmet Emin Buldur, Özay Ezerceli, Mahmoud ElHussieni
+
+
+ Deep Neural networks for solving high-dimensional parabolic partial differential equations
+ https://arxiv.org/abs/2601.13256
+ arXiv:2601.13256v1 Announce Type: new
+Abstract: The numerical solution of high dimensional partial differential equations (PDEs) is severely constrained by the curse of dimensionality (CoD), rendering classical grid-based methods impractical beyond a few dimensions. In recent years, deep neural networks have emerged as a promising mesh-free alternative, enabling the approximation of PDE solutions in tens to thousands of dimensions. This review provides a tutorial-oriented introduction to neural-network-based methods for solving high dimensional parabolic PDEs, emphasizing conceptual clarity and methodological connections. We organize the literature around three unifying paradigms: (i) PDE residual-based approaches, including physics-informed neural networks and their high dimensional variants; (ii) stochastic methods derived from Feynman-Kac and backward stochastic differential equation formulations; and (iii) hybrid derivative-free random difference approaches designed to alleviate the computational cost of derivatives in high dimensions. For each paradigm, we outline the underlying mathematical formulation, algorithmic implementation, and practical strengths and limitations. Representative benchmark problems, including Hamilton-Jacobi-Bellman and Black-Scholes equations in up to 1000 dimensions, illustrate the scalability, effectiveness, and accuracy of the methods. The paper concludes with a discussion of open challenges and future directions for reliable and scalable solvers of high dimensional PDEs.
+ oai:arXiv.org:2601.13256v1
+ math.NA
+ cs.LG
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Wenzhong Zhang, Zhenyuan Hu, Wei Cai, George EM Karniadakis
+
+
+ Stop Taking Tokenizers for Granted: They Are Core Design Decisions in Large Language Models
+ https://arxiv.org/abs/2601.13260
+ arXiv:2601.13260v1 Announce Type: new
+Abstract: Tokenization underlies every large language model, yet it remains an under-theorized and inconsistently designed component. Common subword approaches such as Byte Pair Encoding (BPE) offer scalability but often misalign with linguistic structure, amplify bias, and waste capacity across languages and domains. This paper reframes tokenization as a core modeling decision rather than a preprocessing step. We argue for a context-aware framework that integrates tokenizer and model co-design, guided by linguistic, domain, and deployment considerations. Standardized evaluation and transparent reporting are essential to make tokenization choices accountable and comparable. Treating tokenization as a core design problem, not a technical afterthought, can yield language technologies that are fairer, more efficient, and more adaptable.
+ oai:arXiv.org:2601.13260v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sawsan Alqahtani, Mir Tafseer Nayeem, Md Tahmid Rahman Laskar, Tasnim Mohiuddin, M Saiful Bari
+
+
+ CURE-Med: Curriculum-Informed Reinforcement Learning for Multilingual Medical Reasoning
+ https://arxiv.org/abs/2601.13262
+ arXiv:2601.13262v1 Announce Type: new
+Abstract: While large language models (LLMs) have been shown to perform well on monolingual mathematical and commonsense reasoning, they remain unreliable for multilingual medical reasoning applications, hindering their deployment in multilingual healthcare settings. We address this by first introducing CUREMED-BENCH, a high-quality multilingual medical reasoning dataset of open-ended reasoning queries with a single verifiable answer, spanning thirteen languages, including underrepresented languages such as Amharic, Yoruba, and Swahili. Building on this dataset, we propose CURE-MED, a curriculum-informed reinforcement learning framework that integrates code-switching-aware supervised fine-tuning and Group Relative Policy Optimization to jointly improve logical correctness and language stability. Across thirteen languages, our approach consistently outperforms strong baselines and scales effectively, achieving 85.21% language consistency and 54.35% logical correctness at 7B parameters, and 94.96% language consistency and 70.04% logical correctness at 32B parameters. These results support reliable and equitable multilingual medical reasoning in LLMs. The code and dataset are available at https://cure-med.github.io/
+ oai:arXiv.org:2601.13262v1
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Eric Onyame, Akash Ghosh, Subhadip Baidya, Sriparna Saha, Xiuying Chen, Chirag Agarwal
+
+
+ Deep Learning for Semantic Segmentation of 3D Ultrasound Data
+ https://arxiv.org/abs/2601.13263
+ arXiv:2601.13263v1 Announce Type: new
+Abstract: Developing cost-efficient and reliable perception systems remains a central challenge for automated vehicles. LiDAR and camera-based systems dominate, yet they present trade-offs in cost, robustness and performance under adverse conditions. This work introduces a novel framework for learning-based 3D semantic segmentation using Calyo Pulse, a modular, solid-state 3D ultrasound sensor system for use in harsh and cluttered environments. A 3D U-Net architecture is introduced and trained on the spatial ultrasound data for volumetric segmentation. Results demonstrate robust segmentation performance from Calyo Pulse sensors, with potential for further improvement through larger datasets, refined ground truth, and weighted loss functions. Importantly, this study highlights 3D ultrasound sensing as a promising complementary modality for reliable autonomy.
+ oai:arXiv.org:2601.13263v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chenyu Liu, Marco Cecotti, Harikrishnan Vijayakumar, Patrick Robinson, James Barson, Mihai Caleap
+
+
+ Unlearning in LLMs: Methods, Evaluation, and Open Challenges
+ https://arxiv.org/abs/2601.13264
+ arXiv:2601.13264v1 Announce Type: new
+Abstract: Large language models (LLMs) have achieved remarkable success across natural language processing tasks, yet their widespread deployment raises pressing concerns around privacy, copyright, security, and bias. Machine unlearning has emerged as a promising paradigm for selectively removing knowledge or data from trained models without full retraining. In this survey, we provide a structured overview of unlearning methods for LLMs, categorizing existing approaches into data-centric, parameter-centric, architecture-centric, hybrid, and other strategies. We also review the evaluation ecosystem, including benchmarks, metrics, and datasets designed to measure forgetting effectiveness, knowledge retention, and robustness. Finally, we outline key challenges and open problems, such as scalable efficiency, formal guarantees, cross-language and multimodal unlearning, and robustness against adversarial relearning. By synthesizing current progress and highlighting open directions, this paper aims to serve as a roadmap for developing reliable and responsible unlearning techniques in large language models.
+ oai:arXiv.org:2601.13264v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tyler Lizzo, Larry Heck
+
+
+ The Query Complexity of Local Search in Rounds on General Graphs
+ https://arxiv.org/abs/2601.13266
+ arXiv:2601.13266v1 Announce Type: new
+Abstract: We analyze the query complexity of finding a local minimum in $t$ rounds on general graphs. More precisely, given a graph $G = (V,E)$ and oracle access to an unknown function $f : V \to \mathbb{R}$, the goal is to find a local minimum--a vertex $v$ such that $f(v) \leq f(u)$ for all $(u,v) \in E$--using at most $t$ rounds of interaction with the oracle. The query complexity is well understood on grids, but much less is known beyond. This abstract problem captures many optimization tasks, such as finding a local minimum of a loss function during neural network training.
+ For each graph with $n$ vertices, we prove a deterministic upper bound of $O(t n^{1/t} (s\Delta)^{1-1/t})$, where $s$ is the separation number and $\Delta$ is the maximum degree of the graph. We complement this result with a randomized lower bound of $\Omega(t n^{1/t}-t)$ that holds for any connected graph. We also find that parallel steepest descent with a warm start provides improved bounds for graphs with high separation number and bounded degree.
+ oai:arXiv.org:2601.13266v1
+ cs.CC
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Simina Brânzei, Ioannis Panageas, Dimitris Paparas
+
+
+ Improving the Safety and Trustworthiness of Medical AI via Multi-Agent Evaluation Loops
+ https://arxiv.org/abs/2601.13268
+ arXiv:2601.13268v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are increasingly applied in healthcare, yet ensuring their ethical integrity and safety compliance remains a major barrier to clinical deployment. This work introduces a multi-agent refinement framework designed to enhance the safety and reliability of medical LLMs through structured, iterative alignment. Our system combines two generative models - DeepSeek R1 and Med-PaLM - with two evaluation agents, LLaMA 3.1 and Phi-4, which assess responses using the American Medical Association's (AMA) Principles of Medical Ethics and a five-tier Safety Risk Assessment (SRA-5) protocol. We evaluate performance across 900 clinically diverse queries spanning nine ethical domains, measuring convergence efficiency, ethical violation reduction, and domain-specific risk behavior. Results demonstrate that DeepSeek R1 achieves faster convergence (mean 2.34 vs. 2.67 iterations), while Med-PaLM shows superior handling of privacy-sensitive scenarios. The iterative multi-agent loop achieved an 89% reduction in ethical violations and a 92% risk downgrade rate, underscoring the effectiveness of our approach. This study presents a scalable, regulator-aligned, and cost-efficient paradigm for governing medical AI safety.
+ oai:arXiv.org:2601.13268v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zainab Ghafoor, Md Shafiqul Islam, Koushik Howlader, Md Rasel Khondokar, Tanusree Bhattacharjee, Sayantan Chakraborty, Adrito Roy, Ushashi Bhattacharjee, Tirtho Roy
+
+
+ Probabilistic Linear Logic Programming with an application to Bayesian Networks computations
+ https://arxiv.org/abs/2601.13270
+ arXiv:2601.13270v1 Announce Type: new
+Abstract: Bayesian networks are a canonical formalism for representing probabilistic dependencies, yet their integration within logic programming frameworks remains a nontrivial challenge, mainly due to the complex structure of these networks. In this paper, we propose probLO (probabilistic Linear Objects), an extension of Andreoli and Pareschi's LO language which embeds Bayesian network representation and computation within the framework of multiplicative-additive linear logic programming. The key novelty is the use of multi-head Prolog-like methods to reconstruct network structures, which are not necessarily trees, together with the operation of slicing, standard in the linear logic literature, which enables internal numerical probability computations without relying on external semantic interpretation.
+ oai:arXiv.org:2601.13270v1
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Matteo Acclavio, Roberto Maieli
+
+
+ Function Recovery Attacks in Gate-Hiding Garbled Circuits using SAT Solving
+ https://arxiv.org/abs/2601.13271
+ arXiv:2601.13271v1 Announce Type: new
+Abstract: Semi-Private Function Evaluation enables joint computation while protecting both input data and function logic. A practical instantiation is gate-hiding garbled circuits, which conceal gate functionalities while revealing the circuit topology. Existing security definitions intentionally exclude leakage through circuit topology, leaving the concrete impact of such leakage on function privacy insufficiently understood.
+ We analyze the empirical security of gate hiding under two adversarial models that capture realistic computational capabilities. We present a SAT-based function-recovery attack that reconstructs hidden gate operations from a circuit's public topology. To enable recovery on larger and more complex circuits, we develop an incremental SAT-solving framework combined with a set of composable, topology-preserving simplification theorems. These techniques jointly reduce the SAT instance size and progressively constrain the search space across repeated solving iterations.
+ We evaluate our attack on ISCAS benchmarks, representative secure computation circuits, and fault-tolerant sensor fusion circuits under a fixed 24-hour recovery budget. Compared to baseline approaches, our optimized attack achieves up to a 159-fold speedup in recovery time without increasing the number of oracle queries. Our results demonstrate that topology leakage alone can enable effective function recovery in practice.
+ oai:arXiv.org:2601.13271v1
+ cs.CR
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chao Yin, Zunchen Huang, Chenglu Jin, Marten van Dijk, Fabio Massacci
+
+
+ Multi-level Monte Carlo Dropout for Efficient Uncertainty Quantification
+ https://arxiv.org/abs/2601.13272
+ arXiv:2601.13272v1 Announce Type: new
+Abstract: We develop a multilevel Monte Carlo (MLMC) framework for uncertainty quantification with Monte Carlo dropout. Treating dropout masks as a source of epistemic randomness, we define a fidelity hierarchy by the number of stochastic forward passes used to estimate predictive moments. We construct coupled coarse-fine estimators by reusing dropout masks across fidelities, yielding telescoping MLMC estimators for both predictive means and predictive variances that remain unbiased for the corresponding dropout-induced quantities while reducing sampling variance at fixed evaluation budget. We derive explicit bias, variance, and effective cost expressions, together with sample-allocation rules across levels. Numerical experiments on forward and inverse PINNs-Uzawa benchmarks confirm the predicted variance rates and demonstrate efficiency gains over single-level MC-dropout at matched cost.
+ oai:arXiv.org:2601.13272v1
+ cs.LG
+ stat.CO
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Aaron Pim, Tristan Pryer
+
+
+ Safe Navigation in Cluttered Environments Via Spline-Based Harmonic Potential Fields
+ https://arxiv.org/abs/2601.13273
+ arXiv:2601.13273v1 Announce Type: new
+Abstract: We provide a complete motion-planning mechanism that ensures target tracking and obstacle avoidance in a cluttered environment. For a given polyhedral decomposition of the feasible space, we adopt a novel procedure that constrains the agent to move only through a prescribed sequence of cells via a suitable control policy.
+ For each cell, we construct a harmonic potential surface induced by a Dirichlet boundary condition given as a cardinal B-spline curve. A detailed analysis of the curve behavior (periodicity, support) and of the associated control point selection allows us to explicitly compute these harmonic potential surfaces, from which we subsequently derive the corresponding control policy. We illustrate that the resulting construction funnels the agent safely along the chain of cells from the starting point to the target.
+ oai:arXiv.org:2601.13273v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Theodor-Gabriel Nicu, Florin Stoican, Daniel-Mihail Ioan, Ionela Prodan
+
+
+ Balancing Classification and Calibration Performance in Decision-Making LLMs via Calibration Aware Reinforcement Learning
+ https://arxiv.org/abs/2601.13284
+ arXiv:2601.13284v1 Announce Type: new
+Abstract: Large language models (LLMs) are increasingly deployed in decision-making tasks, where not only accuracy but also reliable confidence estimates are essential. Well-calibrated confidence enables downstream systems to decide when to trust a model and when to defer to fallback mechanisms. In this work, we conduct a systematic study of calibration in two widely used fine-tuning paradigms: supervised fine-tuning (SFT) and reinforcement learning with verifiable rewards (RLVR). We show that while RLVR improves task performance, it produces extremely overconfident models, whereas SFT yields substantially better calibration, even under distribution shift, though with smaller performance gains. Through targeted experiments, we diagnose RLVR's failure, showing that decision tokens act as extraction steps of the decision in reasoning traces and do not carry confidence information, which prevents reinforcement learning from surfacing calibrated alternatives. Based on this insight, we propose a calibration-aware reinforcement learning formulation that directly adjusts decision-token probabilities. Our method preserves RLVR's accuracy level while mitigating overconfidence, reducing ECE scores by up to 9 points.
+ oai:arXiv.org:2601.13284v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Duygu Nur Yaldiz, Evangelia Spiliopoulou, Zheng Qi, Siddharth Varia, Srikanth Doss, Nikolaos Pappas
+
+
+ Tight Asymptotic Bounds for Fair Division With Externalities
+ https://arxiv.org/abs/2601.13287
+ arXiv:2601.13287v1 Announce Type: new
+Abstract: We study the problem of allocating a set of indivisible items among agents whose preferences include externalities. Unlike the standard fair division model, agents may derive positive or negative utility not only from items allocated directly to them, but also from items allocated to other agents. Since exact envy-freeness cannot be guaranteed, prior work has focused on its relaxations. However, two central questions remained open: does there always exist an allocation that is envy-free up to one item (EF1), and if not, what is the optimal relaxation EF-$k$ that can always be attained?
+ We settle both questions by deriving tight asymptotic bounds on the number of items sufficient to eliminate envy. We show that for any instance with $n$ agents, an allocation that is envy-free up to $O(\sqrt{n})$ items always exists and can be found in polynomial time, and we prove a matching $\Omega(\sqrt{n})$ lower bound showing that this result is tight even for binary valuations, which rules out the existence of EF1 allocations when agents have externalities.
+ oai:arXiv.org:2601.13287v1
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Frank Connor, Max Dupré la Tour, Vishnu V. Narayan, Šimon Schierreich
+
+
+ A BERTology View of LLM Orchestrations: Token- and Layer-Selective Probes for Efficient Single-Pass Classification
+ https://arxiv.org/abs/2601.13288
+ arXiv:2601.13288v1 Announce Type: new
+Abstract: Production LLM systems often rely on separate models for safety and other classification-heavy steps, increasing latency, VRAM footprint, and operational complexity. We instead reuse computation already paid for by the serving LLM: we train lightweight probes on its hidden states and predict labels in the same forward pass used for generation. We frame classification as representation selection over the full token-layer hidden-state tensor, rather than committing to a fixed token or fixed layer (e.g., first-token logits or final-layer pooling). To implement this, we introduce a two-stage aggregator that (i) summarizes tokens within each layer and (ii) aggregates across layer summaries to form a single representation for classification. We instantiate this template with direct pooling, a 100K-parameter scoring-attention gate, and a downcast multi-head self-attention (MHA) probe with up to 35M trainable parameters. Across safety and sentiment benchmarks our probes improve over logit-only reuse (e.g., MULI) and are competitive with substantially larger task-specific baselines, while preserving near-serving latency and avoiding the VRAM and latency costs of a separate guard-model pipeline.
+ oai:arXiv.org:2601.13288v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Gonzalo Ariel Meyoyan, Luciano Del Corro
+
+
+ The Tag is the Signal: URL-Agnostic Credibility Scoring for Messages on Telegram
+ https://arxiv.org/abs/2601.13294
+ arXiv:2601.13294v1 Announce Type: new
+Abstract: Telegram has become one of the leading platforms for disseminating misinformation. However, many existing pipelines still classify each message's credibility based on the reputation of its associated domain names or its lexical features. Such methods work well on traditional long-form news articles published by well-known sources, but high-risk posts on Telegram are short and URL-sparse, leading to failures for link-based and standard TF-IDF models. To this end, we propose the TAG2CRED pipeline, a method designed for such short, convoluted messages. Our model directly scores each post based on the tags assigned to the text. We designed a concise label system that covers the dimensions of theme, claim type, call to action, and evidence. A fine-tuned large language model (LLM) assigns tags to messages and then maps these tags to calibrated risk scores in the [0,1] interval through L2-regularized logistic regression. We evaluated 87,936 Telegram messages associated with Media Bias/Fact Check (MBFC), using URL masking and domain-disjoint splits. The results showed that the ROC-AUC of the TAG2CRED model reached 0.871, the macro-F1 value was 0.787, and the Brier score was 0.167, outperforming the baseline TF-IDF (macro-F1 value 0.737, Brier score 0.248); at the same time, the model uses far fewer features and generalizes better to infrequent domains. The performance of the stacked ensemble model (TF-IDF + TAG2CRED + SBERT) was further improved over the baseline SBERT: ROC-AUC reached 0.901, and the macro-F1 value was 0.813 (Brier score 0.114). This indicates that style labels and lexical features may capture different but complementary dimensions of information risk.
+ oai:arXiv.org:2601.13294v1
+ cs.SI
+ cs.CY
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yipeng Wang, Huy Gia Han Vu, Mohit Singhal
+
+
+ CooperBench: Why Coding Agents Cannot be Your Teammates Yet
+ https://arxiv.org/abs/2601.13295
+ arXiv:2601.13295v1 Announce Type: new
+Abstract: Resolving team conflicts requires not only task-specific competence, but also social intelligence to find common ground and build consensus. As AI agents increasingly collaborate on complex work, they must develop coordination capabilities to function as effective teammates. Yet we hypothesize that current agents lack these capabilities. To test this, we introduce CooperBench, a benchmark of over 600 collaborative coding tasks across 12 libraries in 4 programming languages. Each task assigns two agents different features that can be implemented independently but may conflict without proper coordination. Tasks are grounded in real open-source repositories with expert-written tests. Evaluating state-of-the-art coding agents, we observe the curse of coordination: agents achieve on average 30% lower success rates when working together compared to performing both tasks individually. This contrasts sharply with human teams, where adding teammates typically improves productivity. Our analysis reveals three key issues: (1) communication channels become jammed with vague, ill-timed, and inaccurate messages; (2) even with effective communication, agents deviate from their commitments; and (3) agents often hold incorrect expectations about others' plans and communication. Through large-scale simulation, we also observe rare but interesting emergent coordination behavior including role division, resource division, and negotiation. Our research presents a novel benchmark for collaborative coding and calls for a shift from pursuing individual agent capability to developing social intelligence.
+ oai:arXiv.org:2601.13295v1
+ cs.LG
+ cs.AI
+ cs.CL
+ cs.MA
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Arpandeep Khatua, Hao Zhu, Peter Tran, Arya Prabhudesai, Frederic Sadrieh, Johann K. Lieberwirth, Xinkai Yu, Yicheng Fu, Michael J. Ryan, Jiaxin Pei, Diyi Yang
+
+
+ Enginuity: Building an Open Multi-Domain Dataset of Complex Engineering Diagrams
+ https://arxiv.org/abs/2601.13299
+ arXiv:2601.13299v1 Announce Type: new
+Abstract: We propose Enginuity - the first open, large-scale, multi-domain engineering diagram dataset with comprehensive structural annotations designed for automated diagram parsing. By capturing hierarchical component relationships, connections, and semantic elements across diverse engineering domains, our proposed dataset would enable multimodal large language models to address critical downstream tasks including structured diagram parsing, cross-modal information retrieval, and AI-assisted engineering simulation. Enginuity would be transformative for AI for Scientific Discovery by enabling artificial intelligence systems to comprehend and manipulate the visual-structural knowledge embedded in engineering diagrams, breaking down a fundamental barrier that currently prevents AI from fully participating in scientific workflows where diagram interpretation, technical drawing analysis, and visual reasoning are essential for hypothesis generation, experimental design, and discovery.
+ oai:arXiv.org:2601.13299v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ethan Seefried, Prahitha Movva, Naga Harshita Marupaka, Tilak Kasturi, Tirthankar Ghosal
+
+
+ OI-Bench: An Option Injection Benchmark for Evaluating LLM Susceptibility to Directive Interference
+ https://arxiv.org/abs/2601.13300
+ arXiv:2601.13300v1 Announce Type: new
+Abstract: Benchmarking large language models (LLMs) is critical for understanding their capabilities, limitations, and robustness. In addition to interface artifacts, prior studies have shown that LLM decisions can be influenced by directive signals such as social cues, framing, and instructions. In this work, we introduce option injection, a benchmarking approach that augments the multiple-choice question answering (MCQA) interface with an additional option containing a misleading directive, leveraging standardized choice structure and scalable evaluation. We construct OI-Bench, a benchmark of 3,000 questions spanning knowledge, reasoning, and commonsense tasks, with 16 directive types covering social compliance, bonus framing, threat framing, and instructional interference. This setting combines manipulation of the choice interface with directive-based interference, enabling systematic assessment of model susceptibility. We evaluate 12 LLMs to analyze attack success rates, behavioral responses, and further investigate mitigation strategies ranging from inference-time prompting to post-training alignment. Experimental results reveal substantial vulnerabilities and heterogeneous robustness across models. OI-Bench is expected to support more systematic evaluation of LLM robustness to directive interference within choice-based interfaces.
+ oai:arXiv.org:2601.13300v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yow-Fu Liou, Yu-Chien Tang, Yu-Hsiang Liu, An-Zi Yen
+
+
+ Verifying Local Robustness of Pruned Safety-Critical Networks
+ https://arxiv.org/abs/2601.13303
+ arXiv:2601.13303v1 Announce Type: new
+Abstract: Formal verification of Deep Neural Networks (DNNs) is essential for safety-critical applications, ranging from surgical robotics to NASA JPL autonomous systems. However, the computational cost of verifying large-scale models remains a significant barrier to adoption. This paper investigates the impact of pruning at different ratios on formal local robustness certificates. Using the state-of-the-art $\alpha,\beta$-CROWN verifier, we evaluate ResNet4 models across varying pruning ratios on MNIST and, more importantly, on the NASA JPL Mars Frost Identification datasets. Our findings demonstrate a non-linear relationship: light pruning (40%) in MNIST and heavy pruning (70%-90%) in JPL improve verifiability, allowing models to outperform unpruned baselines in proven $L_\infty$ robustness properties. This suggests that reduced connectivity simplifies the search space for formal solvers and that the optimal pruning ratio varies significantly between datasets. This research highlights the complex nature of model compression, offering critical insights into selecting the optimal pruning ratio for deploying efficient, yet formally verified, DNNs in high-stakes environments where reliability is non-negotiable.
+ oai:arXiv.org:2601.13303v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Minh Le, Phuong Cao
+
+
+ CausalSpatial: A Benchmark for Object-Centric Causal Spatial Reasoning
+ https://arxiv.org/abs/2601.13304
+ arXiv:2601.13304v1 Announce Type: new
+Abstract: Humans can look at a static scene and instantly predict what happens next -- will moving this object cause a collision? We call this ability Causal Spatial Reasoning. However, current multimodal large language models (MLLMs) cannot do this, as they remain largely restricted to static spatial perception, struggling to answer "what-if" questions in a 3D scene. We introduce CausalSpatial, a diagnostic benchmark evaluating whether models can anticipate consequences of object motions across four tasks: Collision, Compatibility, Occlusion, and Trajectory. Results expose a severe gap: humans score 84% while GPT-5 achieves only 54%. Why do MLLMs fail? Our analysis uncovers a fundamental deficiency: models over-rely on textual chain-of-thought reasoning that drifts from visual evidence, producing fluent but spatially ungrounded hallucinations. To address this, we propose the Causal Object World model (COW), a framework that externalizes the simulation process by generating videos of hypothetical dynamics. With explicit visual cues of causality, COW enables models to ground their reasoning in physical reality rather than linguistic priors. We make the dataset and code publicly available here: https://github.com/CausalSpatial/CausalSpatial
+ oai:arXiv.org:2601.13304v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Wenxin Ma, Chenlong Wang, Ruisheng Yuan, Hao Chen, Nanru Dai, S. Kevin Zhou, Yijun Yang, Alan Yuille, Jieneng Chen
+
+
+ Paid Voices vs. Public Feeds: Interpretable Cross-Platform Theme Modeling of Climate Discourse
+ https://arxiv.org/abs/2601.13317
+ arXiv:2601.13317v1 Announce Type: new
+Abstract: Climate discourse online plays a crucial role in shaping public understanding of climate change and influencing political and policy outcomes. However, climate communication unfolds across structurally distinct platforms with fundamentally different incentive structures: paid advertising ecosystems incentivize targeted, strategic persuasion, while public social media platforms host largely organic, user-driven discourse. Existing computational studies typically analyze these environments in isolation, limiting our ability to distinguish institutional messaging from public expression. In this work, we present a comparative analysis of climate discourse across paid advertisements on Meta (previously known as Facebook) and public posts on Bluesky from July 2024 to September 2025. We introduce an interpretable, end-to-end thematic discovery and assignment framework that clusters texts by semantic similarity and leverages large language models (LLMs) to generate concise, human-interpretable theme labels. We evaluate the quality of the induced themes against traditional topic modeling baselines using both human judgments and an LLM-based evaluator, and further validate their semantic coherence through downstream stance prediction and theme-guided retrieval tasks. Applying the resulting themes, we characterize systematic differences between paid climate messaging and public climate discourse and examine how thematic prevalence shifts around major political events. Our findings show that platform-level incentives are reflected in the thematic structure, stance alignment, and temporal responsiveness of climate narratives. While our empirical analysis focuses on climate communication, the proposed framework is designed to support comparative narrative analysis across heterogeneous communication environments.
+ oai:arXiv.org:2601.13317v1
+ cs.CL
+ cs.AI
+ cs.CY
+ cs.LG
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Samantha Sudhoff, Pranav Perumal, Zhaoqing Wu, Tunazzina Islam
+
+
+ Arab Voices: Mapping Standard and Dialectal Arabic Speech Technology
+ https://arxiv.org/abs/2601.13319
+ arXiv:2601.13319v1 Announce Type: new
+Abstract: Dialectal Arabic (DA) speech data vary widely in domain coverage, dialect labeling practices, and recording conditions, complicating cross-dataset comparison and model evaluation. To characterize this landscape, we conduct a computational analysis of linguistic "dialectness" alongside objective proxies of audio quality on the training splits of widely used DA corpora. We find substantial heterogeneity both in acoustic conditions and in the strength and consistency of dialectal signals across datasets, underscoring the need for standardized characterization beyond coarse labels. To reduce fragmentation and support reproducible evaluation, we introduce Arab Voices, a standardized framework for DA ASR. Arab Voices provides unified access to 31 datasets spanning 14 dialects, with harmonized metadata and evaluation utilities. We further benchmark a range of recent ASR systems, establishing strong baselines for modern DA ASR.
+ oai:arXiv.org:2601.13319v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Peter Sullivan, AbdelRahim Elmadany, Alcides Alcoba Inciarte, Muhammad Abdul-Mageed
+
+
+ Verifying First-Order Temporal Properties of Infinite-State Systems via Timers and Rankings
+ https://arxiv.org/abs/2601.13325
+ arXiv:2601.13325v1 Announce Type: new
+Abstract: We present a unified deductive verification framework for first-order temporal properties based on well-founded rankings, where verification conditions are discharged using SMT solvers. To that end, we introduce a novel reduction from verification of arbitrary temporal properties to verification of termination. Our reduction augments the system with prophecy timer variables that predict the number of steps along a trace until the next time certain temporal formulas, including the negated property, hold. In contrast to standard tableaux-based reductions, which reduce the problem to fair termination, our reduction does not introduce fairness assumptions. To verify termination of the augmented system, we follow the traditional approach of assigning each state a rank from a well-founded set and showing that the rank decreases in every transition. We leverage the recently proposed formalism of implicit rankings to express and automatically verify the decrease of rank using SMT solvers, even when the rank is not expressible in first-order logic. We extend implicit rankings from finite to infinite domains, enabling verification of more general systems and making them applicable to the augmented systems generated by our reduction, which allows us to exploit the decrease of timers in termination proofs. We evaluate our technique on a range of temporal verification tasks from previous works, giving simple, intuitive proofs for them within our framework.
+ oai:arXiv.org:2601.13325v1
+ cs.LO
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Raz Lotan, Neta Elad, Oded Padon, Sharon Shoham
+
+
+ PepEDiff: Zero-Shot Peptide Binder Design via Protein Embedding Diffusion
+ https://arxiv.org/abs/2601.13327
+ arXiv:2601.13327v1 Announce Type: new
+Abstract: We present PepEDiff, a novel peptide binder generator that designs binding sequences given a target receptor protein sequence and its pocket residues. Peptide binder generation is critical in therapeutic and biochemical applications, yet many existing methods rely heavily on intermediate structure prediction, adding complexity and limiting sequence diversity. Our approach departs from this paradigm by generating binder sequences directly in a continuous latent space derived from a pretrained protein embedding model, without relying on predicted structures, thereby improving structural and sequence diversity. To encourage the model to capture binding-relevant features rather than memorizing known sequences, we perform latent-space exploration and diffusion-based sampling, enabling the generation of peptides beyond the limited distribution of known binders. This zero-shot generative strategy leverages the global protein embedding manifold as a semantic prior, allowing the model to propose novel peptide sequences in previously unseen regions of the protein space. We evaluate PepEDiff on TIGIT, a challenging target with a large, flat protein-protein interaction interface that lacks a druggable pocket. Despite its simplicity, our method outperforms state-of-the-art approaches across benchmark tests and in the TIGIT case study, demonstrating its potential as a general, structure-free framework for zero-shot peptide binder design. The code for this research is available at GitHub: https://github.com/LabJunBMI/PepEDiff-An-Peptide-binder-Embedding-Diffusion-Model
+ oai:arXiv.org:2601.13327v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Po-Yu Liang, Tobo Duran, Jun Bai
+
+
+ Reducing Tokenization Premiums for Low-Resource Languages
+ https://arxiv.org/abs/2601.13328
+ arXiv:2601.13328v1 Announce Type: new
+Abstract: Relative to English, low-resource languages suffer from substantial tokenization premiums in modern LMs, meaning that it generally requires several times as many tokens to encode a sentence in a low-resource language than to encode the analogous sentence in English. This tokenization premium results in increased API and energy costs and reduced effective context windows for these languages. In this paper we analyze the tokenizers of ten popular LMs to better understand their designs and per-language tokenization premiums. We also propose a mechanism to reduce tokenization premiums in pre-trained models, by post-hoc additions to the token vocabulary that coalesce multi-token characters into single tokens. We apply this methodology to 12 low-resource languages, demonstrating that the original and compressed inputs often have similar last hidden states when run through the Llama 3.2 1B model.
+ oai:arXiv.org:2601.13328v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Geoffrey Churchill, Steven Skiena
+
+
+ RegCheck: A tool for automating comparisons between study registrations and papers
+ https://arxiv.org/abs/2601.13330
+ arXiv:2601.13330v1 Announce Type: new
+Abstract: Across the social and medical sciences, researchers recognize that specifying planned research activities (i.e., 'registration') prior to the commencement of research has benefits for both the transparency and rigour of science. Despite this, evidence suggests that study registrations frequently go unexamined, minimizing their effectiveness. In a way this is no surprise: manually checking registrations against papers is labour- and time-intensive, requiring careful reading across formats and expertise across domains. The advent of AI unlocks new possibilities in facilitating this activity. We present RegCheck, a modular LLM-assisted tool designed to help researchers, reviewers, and editors from across scientific disciplines compare study registrations with their corresponding papers. Importantly, RegCheck keeps human expertise and judgement in the loop by (i) ensuring that users are the ones who determine which features should be compared, and (ii) presenting the most relevant text associated with each feature to the user, facilitating (rather than replacing) human discrepancy judgements. RegCheck also generates shareable reports with unique RegCheck IDs, enabling them to be easily shared and verified by other users. RegCheck is designed to be adaptable across scientific domains, as well as registration and publication formats. In this paper we provide an overview of the motivation, workflow, and design principles of RegCheck, and we discuss its potential as an extensible infrastructure for reproducible science with an example use case.
+ oai:arXiv.org:2601.13330v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jamie Cummins, Beth Clarke, Ian Hussey, Malte Elson
+
+
+ MultiST: A Cross-Attention-Based Multimodal Model for Spatial Transcriptomic
+ https://arxiv.org/abs/2601.13331
+ arXiv:2601.13331v1 Announce Type: new
+Abstract: Spatial transcriptomics (ST) enables transcriptome-wide profiling while preserving the spatial context of tissues, offering unprecedented opportunities to study tissue organization and cell-cell interactions in situ. Despite recent advances, existing methods often lack effective integration of histological morphology with molecular profiles, relying on shallow fusion strategies or omitting tissue images altogether, which limits their ability to resolve ambiguous spatial domain boundaries. To address this challenge, we propose MultiST, a unified multimodal framework that jointly models spatial topology, gene expression, and tissue morphology through cross-attention-based fusion. MultiST employs graph-based gene encoders with adversarial alignment to learn robust spatial representations, while integrating color-normalized histological features to capture molecular-morphological dependencies and refine domain boundaries. We evaluated the proposed method on 13 diverse ST datasets spanning two organs, including human brain cortex and breast cancer tissue. MultiST yields spatial domains with clearer and more coherent boundaries than existing methods, leading to more stable pseudotime trajectories and more biologically interpretable cell-cell interaction patterns. The MultiST framework and source code are available at https://github.com/LabJunBMI/MultiST.git.
+ oai:arXiv.org:2601.13331v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wei Wang, Quoc-Toan Ly, Chong Yu, Jun Bai
+
+
+ SEER: Spectral Entropy Encoding of Roles for Context-Aware Attention-Based Design Pattern Detection
+ https://arxiv.org/abs/2601.13334
+ arXiv:2601.13334v1 Announce Type: new
+Abstract: This paper presents SEER, an upgraded version of our prior method Context Is All You Need for detecting Gang of Four (GoF) design patterns from source code. The earlier approach modeled code as attention-ready sequences that blended lightweight structure with behavioral context; however, it lacked explicit role disambiguation within classes and treated call edges uniformly. SEER addresses these limitations with two principled additions: (i) a spectral-entropy role encoder that derives per-member role embeddings from the Laplacian spectrum of each class's interaction graph, and (ii) a time-weighted calling context that assigns empirically calibrated duration priors to method categories (e.g., constructors, getters/setters, static calls, virtual dispatch, cloning). Together, these components sharpen the model's notion of "who does what" and "how much it matters," while remaining portable across languages with minimal adaptation and fully compatible with Transformer-based sequence encoders. Importantly, SEER does not "force" a win by capacity or data; it nudges the classifier, steering attention toward role-consistent and temporally calibrated signals that matter most. We evaluate SEER on PyDesignNet (1,832 files, 35,000 sequences, 23 GoF patterns) and observe consistent gains over our previous system: macro-F1 increases from 92.47% to 93.20% and accuracy from 92.52% to 93.98%, with macro-precision 93.98% and macro-recall 92.52%. Beyond aggregate metrics, SEER reduces false positives by nearly 20%, a decisive improvement that strengthens its robustness and practical reliability. Moreover, SEER yields interpretable, symbol-level attributions aligned with canonical roles, exhibits robustness under small graph perturbations, and shows stable calibration.
+ oai:arXiv.org:2601.13334v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Tarik Houichime, Younes El Amrani
+
+
+ Towards Natural Language Environment: Understanding Seamless Natural-Language-Based Human-Multi-Robot Interactions
+ https://arxiv.org/abs/2601.13338
+ arXiv:2601.13338v1 Announce Type: new
+Abstract: As multiple robots are expected to coexist in future households, natural language is increasingly envisioned as a primary medium for human-robot and robot-robot communication. This paper introduces the concept of a Natural Language Environment (NLE), defined as an interaction space in which humans and multiple heterogeneous robots coordinate primarily through natural language.
+ Rather than proposing a deployable system, this work aims to explore the design space of such environments. We first synthesize prior work on language-based human-robot interaction to derive a preliminary design space for NLEs. We then conduct a role-playing study in virtual reality to investigate how people conceptualize, negotiate, and coordinate human-multi-robot interactions within this imagined environment.
+ Based on qualitative and quantitative analysis, we refine the preliminary design space and derive design implications that highlight key tensions and opportunities around task coordination dominance, robot autonomy, and robot personality in Natural Language Environments.
+ oai:arXiv.org:2601.13338v1
+ cs.HC
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ziyi Liu, Xinyi Wang, Shao-Kang Hsia, Chenfei Zhu, Zhengze Zhu, Xiyun Hu, Anastasia Kouvaras Ostrowski, Karthik Ramani
+
+
+ Reduction for Structured Concurrent Programs
+ https://arxiv.org/abs/2601.13341
+ arXiv:2601.13341v1 Announce Type: new
+Abstract: Commutativity reasoning based on Lipton's movers is a powerful technique for verification of concurrent programs. The idea is to define a program transformation that preserves a subset of the initial set of interleavings, which is sound modulo reorderings of commutative actions. Scaling commutativity reasoning to routinely used features in software systems, such as procedures and parallel composition, remains a significant challenge.
+ In this work, we introduce a novel reduction technique for structured concurrent programs that unifies two key advances. First, we present a reduction strategy that soundly replaces parallel composition with sequential composition. Second, we generalize Lipton's reduction to support atomic sections containing (potentially recursive) procedure calls. Crucially, these two foundational strategies can be composed arbitrarily, greatly expanding the scope and flexibility of reduction-based reasoning. We implemented this technique in Civl and demonstrated its effectiveness on a number of challenging case studies, including a snapshot object, a fault-tolerant and linearizable register, the FLASH cache coherence protocol, and a non-trivial variant of Two-Phase Commit.
+ oai:arXiv.org:2601.13341v1
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 35th European Symposium on Programming, ESOP 2026
+ Namratha Gangamreddypalli, Constantin Enea, Shaz Qadeer
+
+
+ Privacy Starts with UI: Privacy Patterns and Designer Perspectives in UI/UX Practice
+ https://arxiv.org/abs/2601.13342
+ arXiv:2601.13342v1 Announce Type: new
+Abstract: In the study of Human-Computer Interaction, privacy is often seen as a core issue, and it has been explored directly in connection with User Interface (UI) and User Experience (UX) design. We systematically investigate the key considerations and factors for privacy in UI/UX, drawing upon the extant literature and 15 semi-structured interviews with experts working in the field. These insights lead to the synthesis of 14 primary design considerations for privacy in UI/UX, as well as 14 key factors under four main axes affecting privacy work therein. From these findings, we produce our main research artifact, a UI/UX Privacy Pattern Catalog, which we validate in a series of two interactive workshops and one online survey with UI/UX practitioners. Our work not only systematizes a field growing in both attention and importance, but it also provides an actionable and expert-validated artifact to guide UI/UX designers in realizing privacy-preserving UI/UX design.
+ oai:arXiv.org:2601.13342v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Anxhela Maloku, Alexandra Klymenko, Stephen Meisenbacher, Florian Matthes
+
+
+ The Words That Can't Be Shared: Exploring the Design of Unsent Messages
+ https://arxiv.org/abs/2601.13343
+ arXiv:2601.13343v1 Announce Type: new
+Abstract: People often have things they want to say but hold back in conversations, fearing vulnerability or social consequences. Online, this restraint can take a distinctive form: even when such thoughts are written out - in moments of anger, guilt, or longing - people may choose to withhold them, leaving them unsent. This process is underexamined; we investigate the experience of writing such messages within people's digital communications. We find that unsent messages become expressive containers for suppressed feelings, where the act of writing creates a pause for reflection on the relationship and oneself. Building on these insights, we probe into how the design of the writing platforms of unsent messages affects people's experiences and motivations. Speculating with participants on nine evocative variants of a note-taking platform, we highlight how design shapes the emotional, temporal, and ritualistic qualities of unsent messages, revealing subtle tensions between people's social desires and communicative actions.
+ oai:arXiv.org:2601.13343v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1145/3772318.3790639
+ Michael Yin, Robert Xiao
+
+
+ FlipFlop: A Static Analysis-based Energy Optimization Framework for GPU Kernels
+ https://arxiv.org/abs/2601.13345
+ arXiv:2601.13345v1 Announce Type: new
+Abstract: Artificial Intelligence (AI) applications, such as Large Language Models, are primarily driven and executed by Graphics Processing Units (GPUs). These GPU programs (kernels) consume substantial amounts of energy, yet software developers often lack the hardware expertise and ad hoc knowledge required to optimize for power efficiency. We propose FlipFlop, a framework using static code analysis to predict energy consumption and recommend Pareto-optimal thread block configurations considering both power consumption and execution time. Our framework requires no runtime execution and analyzes PTX code, a low-level instruction set for CUDA-enabled GPUs. It is validated across a diverse set of GPUs and kernels, including multi-head attention, convolution, and matrix multiplication. FlipFlop achieves 83% accuracy in identifying locally optimal energy-efficient configurations, while also minimizing developer effort by reducing the optimization search space by 93.4%. For multi-head attention kernels, it yields up to 79% energy savings and 106% throughput gains relative to NVIDIA's occupancy heuristic. By integrating static analysis with real-time monitoring and providing explainable optimization guidance, FlipFlop empowers developers to create sustainable, high-performance GPU software which minimizes environmental and computational costs.
+ oai:arXiv.org:2601.13345v1
+ cs.SE
+ cs.PF
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Saurabhsingh Rajput, Alexander Brandt, Vadim Elisseev, Tushar Sharma
+
+
+ AfroScope: A Framework for Studying the Linguistic Landscape of Africa
+ https://arxiv.org/abs/2601.13346
+ arXiv:2601.13346v1 Announce Type: new
+Abstract: Language Identification (LID) is the task of determining the language of a given text and is a fundamental preprocessing step that affects the reliability of downstream NLP applications. While recent work has expanded LID coverage for African languages, existing approaches remain limited in (i) the number of supported languages and (ii) their ability to make fine-grained distinctions among closely related varieties. We introduce AfroScope, a unified framework for African LID that includes AfroScope-Data, a dataset covering 713 African languages, and AfroScope-Models, a suite of strong LID models with broad language coverage. To better distinguish highly confusable languages, we propose a hierarchical classification approach that leverages Mirror-Serengeti, a specialized embedding model targeting 29 closely related or geographically proximate languages. This approach improves macro F1 by 4.55 on this confusable subset compared to our best base model. Finally, we analyze cross-linguistic transfer and domain effects, offering guidance for building robust African LID systems. We position African LID as an enabling technology for large-scale measurement of Africa's linguistic landscape in digital text and release AfroScope-Data and AfroScope-Models publicly.
+ oai:arXiv.org:2601.13346v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sang Yun Kwon, AbdelRahim Elmadany, Muhammad Abdul-Mageed
+
+
+ A Scalable Sequential Framework for Dynamic Inverse Problems via Model Parameter Estimation
+ https://arxiv.org/abs/2601.13347
+ arXiv:2601.13347v1 Announce Type: new
+Abstract: Large-scale dynamic inverse problems are often ill-posed due to model complexity and the high dimensionality of the unknown parameters. Regularization is commonly employed to mitigate ill-posedness by incorporating prior information and structural constraints. However, classical regularization formulations are frequently infeasible in this setting due to prohibitive memory requirements, necessitating sequential methods that process data and state information online, eliminating the need to form the full space-time problem. In this work, we propose a memory-efficient framework for reconstructing dynamic sequences of undersampled images from computerized tomography data that requires minimal hyperparameter tuning. The approach is based on a prior-informed, dimension-reduced Kalman filter with smoothing. While well suited for dynamic image reconstruction, practical deployment is challenging when the state transition model and covariance parameters must be initialized without prior knowledge and estimated in a single pass. To address these limitations, we integrate regularized motion models with expectation-maximization strategies for the estimation of state transition dynamics and error covariances within the Kalman filtering framework. We demonstrate the effectiveness of the proposed method through numerical experiments on limited-angle and single-shot computerized tomography problems, highlighting improvements in reconstruction accuracy, memory efficiency, and computational cost.
+ oai:arXiv.org:2601.13347v1
+ math.NA
+ cs.NA
+ math.OC
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Aryeh Keating, Mirjeta Pasha
+
+
+ The AI Genie Phenomenon and Three Types of AI Chatbot Addiction: Escapist Roleplays, Pseudosocial Companions, and Epistemic Rabbit Holes
+ https://arxiv.org/abs/2601.13348
+ arXiv:2601.13348v1 Announce Type: new
+Abstract: Recent reports on generative AI chatbot use raise concerns about its addictive potential. An in-depth understanding is imperative to minimize risks, yet AI chatbot addiction remains poorly understood. This study examines how to characterize AI chatbot addiction--why users become addicted, the symptoms commonly reported, and the distinct types it comprises. We conducted a thematic analysis of Reddit entries (n=334) across 14 subreddits where users narrated their experiences with addictive AI chatbot use, followed by an exploratory data analysis. We found: (1) users' dependence tied to the "AI Genie" phenomenon--users can get exactly anything they want with minimal effort--and marked by symptoms that align with addiction literature, (2) three distinct addiction types: Escapist Roleplay, Pseudosocial Companion, and Epistemic Rabbit Hole, (3) sexual content involved in multiple cases, and (4) recovery strategies' perceived helpfulness differ between addiction types. Our work lays empirical groundwork to inform future strategies for prevention, diagnosis, and intervention.
+ oai:arXiv.org:2601.13348v1
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ M. Karen Shen, Jessica Huang, Olivia Liang, Ig-Jae Kim, Dongwook Yoon
+
+
+ Beyond Mapping : Domain-Invariant Representations via Spectral Embedding of Optimal Transport Plans
+ https://arxiv.org/abs/2601.13350
+ arXiv:2601.13350v1 Announce Type: new
+Abstract: Distributional shifts between training-time and inference-time data remain a central challenge in machine learning, often leading to poor performance. This has motivated the study of principled approaches to domain alignment, such as optimal-transport-based unsupervised domain adaptation, which relies on approximating the Monge map using transport plans; this approximation is sensitive to the regularization strategy and hyperparameters of the transport problem and may yield biased domain alignment. In this work, we propose to interpret smoothed transport plans as adjacency matrices of bipartite graphs connecting the source domain to the target domain, and to derive domain-invariant sample representations through spectral embedding. We evaluate our approach on acoustic adaptation benchmarks for music genre recognition and music-speech discrimination, as well as on electrical cable defect detection and classification tasks using time-domain reflection in different diagnosis settings, achieving strong overall performance.
+ oai:arXiv.org:2601.13350v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Abdel Djalil Sad Saoud, Fred Maurice Ngolè Mboula, Hanane Slimani
+
+
+ Towards Scalable Federated Container Orchestration: The CODECO Approach
+ https://arxiv.org/abs/2601.13351
+ arXiv:2601.13351v1 Announce Type: new
+Abstract: This paper presents CODECO, a federated orchestration framework for Kubernetes that addresses the limitations of cloud-centric deployment. CODECO adopts a data-compute-network co-orchestration approach to support heterogeneous infrastructures, mobility, and multi-provider operation.
+ CODECO extends Kubernetes with semantic application models, partition-based federation, and AI-assisted decision support, enabling context-aware placement and adaptive management of applications and their micro-services across federated environments. A hybrid governance model combines centralized policy enforcement with decentralized execution and learning to preserve global coherence while supporting far Edge autonomy. The paper describes the architecture and core components of CODECO, outlines representative orchestration workflows, and introduces a software-based experimentation framework for reproducible evaluation in federated Edge-Cloud infrastructure environments.
+ oai:arXiv.org:2601.13351v1
+ cs.DC
+ cs.ET
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Rute C. Sofia, Josh Salomon, Ray Carrol, Luis Garcés-Erice, Peter Urbanetz, Jürgen Gesswein, Rizkallah Touma, Alejandro Espinosa, Luis M. Contreras, Vasileios Theodorou, George Papathanail, Georgios Koukis, Vassilis Tsaoussidis, Alberto del Rio, David Jimenez, Efterpi Paraskevoulakou, Panagiotis Karamolegkos, John Soldatos, Borja Dorado Nogales, Alejandro Tjaarda
+
+
+ LLM-as-RNN: A Recurrent Language Model for Memory Updates and Sequence Prediction
+ https://arxiv.org/abs/2601.13352
+ arXiv:2601.13352v1 Announce Type: new
+Abstract: Large language models are strong sequence predictors, yet standard inference relies on immutable context histories. After making an error at generation step t, the model lacks an updatable memory mechanism that improves predictions for step t+1. We propose LLM-as-RNN, an inference-only framework that turns a frozen LLM into a recurrent predictor by representing its hidden state as natural-language memory. This state, implemented as a structured system-prompt summary, is updated at each timestep via feedback-driven text rewrites, enabling learning without parameter updates. Under a fixed token budget, LLM-as-RNN corrects errors and retains task-relevant patterns, effectively performing online learning through language. We evaluate the method on three sequential benchmarks in healthcare, meteorology, and finance across Llama, Gemma, and GPT model families. LLM-as-RNN significantly outperforms zero-shot, full-history, and MemPrompt baselines, improving predictive accuracy by 6.5% on average, while producing interpretable, human-readable learning traces absent in standard context accumulation.
+ oai:arXiv.org:2601.13352v1
+ cs.CL
+ cs.AI
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuxing Lu, J. Ben Tamo, Weichen Zhao, Nan Sun, Yishan Zhong, Wenqi Shi, Jinzhuo Wang, May D. Wang
+
+
+ Guidelines for the Creation of an Annotated Corpus
+ https://arxiv.org/abs/2601.13353
+ arXiv:2601.13353v1 Announce Type: new
+Abstract: This document, based on feedback from UMR TETIS members and the scientific literature, provides a generic methodology for creating annotation guidelines and annotated textual datasets (corpora). It covers methodological aspects, as well as storage, sharing, and valorization of the data. It includes definitions and examples to clearly illustrate each step of the process, thus providing a comprehensive framework to support the creation and use of corpora in various research contexts.
+ oai:arXiv.org:2601.13353v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bahdja Boudoua, Nadia Guiffant, Mathieu Roche, Maguelonne Teisseire, Annelise Tran
+
+
+ Remote Triggers: Misophonia, Technology Non-Use, and Design for Inclusive Digital Spaces
+ https://arxiv.org/abs/2601.13355
+ arXiv:2601.13355v1 Announce Type: new
+Abstract: Misophonia, characterized by intense negative reactions to specific sounds or related visual cues, remains poorly recognized in clinical settings yet profoundly affects daily life. This study examines how individuals with misophonia experience and sometimes avoid technology that amplifies their triggers. Drawing on 16 semi-structured interviews with U.S. adults recruited from online communities, we explore how social media platforms such as TikTok and Instagram, along with remote communication tools like Zoom and Discord, shape coping strategies and patterns of non-use. Participants described frequent distress from uncontrollable audiovisual content and food-related behaviors during virtual gatherings. We propose design interventions -- including channel-specific audio-visual controls, real-time trigger detection, and shared preference tools -- to better support misophonic users and reduce exclusion in increasingly mediated social and professional contexts.
+ oai:arXiv.org:2601.13355v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tawfiq Ammari, Samantha Gilgan
+
+
+ On the Relation of State Space Models and Hidden Markov Models
+ https://arxiv.org/abs/2601.13357
+ arXiv:2601.13357v1 Announce Type: new
+Abstract: State Space Models (SSMs) and Hidden Markov Models (HMMs) are foundational frameworks for modeling sequential data with latent variables and are widely used in signal processing, control theory, and machine learning. Despite their shared temporal structure, they differ fundamentally in the nature of their latent states, probabilistic assumptions, inference procedures, and training paradigms. Recently, deterministic state space models have re-emerged in natural language processing through architectures such as S4 and Mamba, raising new questions about the relationship between classical probabilistic SSMs, HMMs, and modern neural sequence models.
+ In this paper, we present a unified and systematic comparison of HMMs, linear Gaussian state space models, Kalman filtering, and contemporary NLP state space models. We analyze their formulations through the lens of probabilistic graphical models, examine their inference algorithms -- including forward-backward inference and Kalman filtering -- and contrast their learning procedures via Expectation-Maximization and gradient-based optimization. By highlighting both structural similarities and semantic differences, we clarify when these models are equivalent, when they fundamentally diverge, and how modern NLP SSMs relate to classical probabilistic models. Our analysis bridges perspectives from control theory, probabilistic modeling, and modern deep learning.
+ oai:arXiv.org:2601.13357v1
+ cs.LG
+ cs.CL
+ cs.SY
+ eess.AS
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Aydin Ghojogh, M. Hadi Sepanj, Benyamin Ghojogh
+
+
+ The Geometry of Thought: How Scale Restructures Reasoning In Large Language Models
+ https://arxiv.org/abs/2601.13358
+ arXiv:2601.13358v1 Announce Type: new
+Abstract: Scale does not uniformly improve reasoning - it restructures it. Analyzing 25,000+ chain-of-thought trajectories across four domains (Law, Science, Code, Math) and two scales (8B, 70B parameters), we discover that neural scaling laws trigger domain-specific phase transitions rather than uniform capability gains. Legal reasoning undergoes Crystallization: 45% collapse in representational dimensionality (d95: 501 -> 274), 31% increase in trajectory alignment, and 10x manifold untangling. Scientific and mathematical reasoning remain Liquid - geometrically invariant despite 9x parameter increase. Code reasoning forms a discrete Lattice of strategic modes (silhouette: 0.13 -> 0.42). This geometry predicts learnability. We introduce Neural Reasoning Operators - learned mappings from initial to terminal hidden states. In crystalline legal reasoning, our operator achieves 63.6% accuracy on held-out tasks via probe decoding, predicting reasoning endpoints without traversing intermediate states. We further identify a universal oscillatory signature (coherence ~ -0.4) invariant across domains and scales, suggesting attention and feedforward layers drive reasoning through opposing dynamics. These findings establish that the cost of thought is determined not by task difficulty but by manifold geometry - offering a blueprint for inference acceleration where topology permits.
+ oai:arXiv.org:2601.13358v1
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Samuel Cyrenius Anderson
+
+
+ Sockpuppetting: Jailbreaking LLMs Without Optimization Through Output Prefix Injection
+ https://arxiv.org/abs/2601.13359
+ arXiv:2601.13359v1 Announce Type: new
+Abstract: As open-weight large language models (LLMs) increase in capabilities, safeguarding them against malicious prompts and understanding possible attack vectors becomes ever more important. While automated jailbreaking methods like GCG [Zou et al., 2023] remain effective, they often require substantial computational resources and specific expertise. We introduce "sockpuppetting", a simple method for jailbreaking open-weight LLMs by inserting an acceptance sequence (e.g., "Sure, here is how to...") at the start of a model's output and allowing it to complete the response. Requiring only a single line of code and no optimization, sockpuppetting achieves up to 80% higher attack success rate (ASR) than GCG on Qwen3-8B in per-prompt comparisons. We also explore a hybrid approach that optimizes the adversarial suffix within the assistant message block rather than the user prompt, increasing ASR by 64% over GCG on Llama-3.1-8B in a prompt-agnostic setting. The results establish sockpuppetting as an effective low-cost attack accessible to unsophisticated adversaries, highlighting the need for defences against output-prefix injection in open-weight models.
+ oai:arXiv.org:2601.13359v1
+ cs.CL
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Asen Dotsinski, Panagiotis Eustratiadis
+
+
+ CLEAR: A Semantic-Geometric Terrain Abstraction for Large-Scale Unstructured Environments
+ https://arxiv.org/abs/2601.13361
+ arXiv:2601.13361v1 Announce Type: new
+Abstract: Long-horizon navigation in unstructured environments demands terrain abstractions that scale to tens of km$^2$ while preserving semantic and geometric structure, a combination existing methods fail to achieve. Grids scale poorly; quadtrees misalign with terrain boundaries; neither encodes landcover semantics essential for traversability-aware planning. This yields infeasible or unreliable paths for autonomous ground vehicles operating over 10+ km$^2$ under real-time constraints. CLEAR (Connected Landcover Elevation Abstract Representation) couples boundary-aware spatial decomposition with recursive plane fitting to produce convex, semantically aligned regions encoded as a terrain-aware graph. Evaluated on maps spanning 9-100 km$^2$ using a physics-based simulator, CLEAR achieves up to 10x faster planning than raw grids with only 6.7% cost overhead and delivers 6-9% shorter, more reliable paths than other abstraction baselines. These results highlight CLEAR's scalability and utility for long-range navigation in applications such as disaster response, defense, and planetary exploration.
+ oai:arXiv.org:2601.13361v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pranay Meshram, Charuvahan Adhivarahan, Ehsan Tarkesh Esfahani, Souma Chowdhury, Chen Wang, Karthik Dantu
+
+
+ Real-Time 4D Radar Perception for Robust Human Detection in Harsh Enclosed Environments
+ https://arxiv.org/abs/2601.13364
+ arXiv:2601.13364v1 Announce Type: new
+Abstract: This paper introduces a novel methodology for generating controlled, multi-level dust concentrations in a highly cluttered environment representative of harsh, enclosed environments, such as underground mines, road tunnels, or collapsed buildings, enabling repeatable mm-wave propagation studies under severe electromagnetic constraints. We also present a new 4D mmWave radar dataset, augmented by camera and LiDAR, illustrating how dust particles and reflective surfaces jointly impact the sensing functionality. To address these challenges, we develop a threshold-based noise filtering framework leveraging key radar parameters (RCS, velocity, azimuth, elevation) to suppress ghost targets and mitigate strong multipath reflections at the raw data level. Building on the filtered point clouds, a cluster-level, rule-based classification pipeline exploits radar semantics (velocity, RCS, and volumetric spread) to achieve reliable, real-time pedestrian detection without extensive domain-specific training. Experimental results confirm that this integrated approach significantly enhances clutter mitigation, detection robustness, and overall system resilience in dust-laden mining environments.
+ oai:arXiv.org:2601.13364v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ 10.1109/AP-S/CNC-USNC-URSI55537.2025.11266534
+ 2025 IEEE International Symposium on Antennas and Propagation and North American Radio Science Meeting (AP-S/CNC-USNC-URSI)
+ Zhenan Liu, Yaodong Cui, Amir Khajepour, George Shaker
+
+
+ CausationEntropy: Pythonic Optimal Causation Entropy
+ https://arxiv.org/abs/2601.13365
+ arXiv:2601.13365v1 Announce Type: new
+Abstract: Optimal Causation Entropy (oCSE) is a robust causal network modeling technique that reveals causal networks from dynamical systems and coupled oscillators, distinguishing direct from indirect paths. CausationEntropy is a Python package that implements oCSE and several of its significant optimizations and methodological extensions. In this paper, we introduce the version 1.1 release of CausationEntropy, which includes new synthetic data generators, plotting tools, and several advanced information-theoretical causal network discovery algorithms with criteria for estimating Gaussian, k-nearest neighbors (kNN), geometric k-nearest neighbors (geometric-kNN), kernel density (KDE) and Poisson entropic estimators. The package is easy to install from the PyPI software repository, is thoroughly documented, supplemented with extensive code examples, and is modularly structured to support future additions. The entire codebase is released under the MIT license and is available on GitHub and through the PyPI repository. We expect this package to serve as a benchmark tool for causal discovery in complex dynamical systems.
+ oai:arXiv.org:2601.13365v1
+ cs.LG
+ physics.data-an
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kevin Slote, Jeremie Fish, Erik Bollt
+
+
+ Recurrent Confidence Chain: Temporal-Aware Uncertainty Quantification in Large Language Models
+ https://arxiv.org/abs/2601.13368
+ arXiv:2601.13368v1 Announce Type: new
+Abstract: As reasoning modules, such as the chain-of-thought mechanism, are applied to large language models, they achieve strong performance on various tasks such as answering common-sense questions and solving math problems. The main challenge now is to assess the uncertainty of answers, which can help prevent misleading or serious hallucinations for users. Although current methods analyze long reasoning sequences by filtering unrelated tokens and examining potential connections between nearby tokens or sentences, the temporal spread of confidence is often overlooked. This oversight can lead to inflated overall confidence, even when earlier steps exhibit very low confidence. To address this issue, we propose a novel method that incorporates inter-step attention to analyze semantic correlations across steps. For handling long-horizon responses, we introduce a hidden confidence mechanism to retain historical confidence information, which is then combined with stepwise confidence to produce a more accurate overall estimate. We evaluate our method on the GAOKAO math benchmark and the CLadder causal reasoning dataset using mainstream open-source large language models. Our approach is shown to outperform state-of-the-art methods by achieving a superior balance between predictive quality and calibration, demonstrated by strong performance on both Negative Log-Likelihood and Expected Calibration Error.
+ oai:arXiv.org:2601.13368v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhenjiang Mao, Anirudhh Venkat
+
+
+ Spherical Geometry Diffusion: Generating High-quality 3D Face Geometry via Sphere-anchored Representations
+ https://arxiv.org/abs/2601.13371
+ arXiv:2601.13371v1 Announce Type: new
+Abstract: A fundamental challenge in text-to-3D face generation is achieving high-quality geometry. The core difficulty lies in the arbitrary and intricate distribution of vertices in 3D space, making it challenging for existing models to establish clean connectivity and resulting in suboptimal geometry. To address this, our core insight is to simplify the underlying geometric structure by constraining the distribution onto a simple and regular manifold, a topological sphere. Building on this, we first propose the Spherical Geometry Representation, a novel face representation that anchors geometric signals to uniform spherical coordinates. This guarantees a regular point distribution, from which the mesh connectivity can be robustly reconstructed. Critically, this canonical sphere can be seamlessly unwrapped into a 2D map, creating a perfect synergy with powerful 2D generative models. We then introduce Spherical Geometry Diffusion, a conditional diffusion framework built upon this 2D map. It enables diverse and controllable generation by jointly modeling geometry and texture, where the geometry explicitly conditions the texture synthesis process. Our method's effectiveness is demonstrated through its success in a wide range of tasks: text-to-3D generation, face reconstruction, and text-based 3D editing. Extensive experiments show that our approach substantially outperforms existing methods in geometric quality, textual fidelity, and inference efficiency.
+ oai:arXiv.org:2601.13371v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junyi Zhang, Yiming Wang, Yunhong Lu, Qichao Wang, Wenzhe Qian, Xiaoyin Xu, David Gu, Min Zhang
+
+
+ Influence of Normative Theories of Ethics on the European Union Artificial Intelligence Act: A Transformer-Based Analysis Using Semantic Textual Similarity
+ https://arxiv.org/abs/2601.13372
+ arXiv:2601.13372v1 Announce Type: new
+Abstract: This study investigates the ethical grounding of the European Union Artificial Intelligence (EU AI) Act by using Semantic Textual Similarity (STS) to analyze the alignment between normative ethical theories and regulatory language. Despite being regarded as a significant step toward regulating Artificial Intelligence (AI) systems and its emphasis on fundamental rights, the EU AI Act is not immune to moral criticism regarding its ethical foundations. Our work examines the impact of three major normative theories of ethics, virtue ethics, deontological ethics, and consequentialism, on the EU AI Act. We introduce the concept of influence, grounded in philosophical and chronological analysis, to examine the underlying relationship between the theories and the Act. As a proxy measure of this influence, we propose using STS to quantify the degree of alignment between the theories (influencers) and the Act (influencee). To capture intentional and operational ethical consistency, the Act was divided into two parts: the preamble and the statutory provisions. The textual descriptions of the theories were manually preprocessed to reduce semantic overlap and ensure a distinct representation of each theory. A heterogeneous embedding-level ensemble approach was employed, using five modified Bidirectional Encoder Representations from Transformers (BERT) models built on the Transformer architecture to compute STS scores. These scores reflect the semantic alignment between various theories of ethics and the two components of the EU AI Act. The resulting similarity scores were evaluated using voting and averaging, with findings indicating that deontological ethics has the most significant overall influence.
+ oai:arXiv.org:2601.13372v1
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mehmet Murat Albayrakoglu, Mehmet Nafiz Aydin
+
+
+ A Lightweight Model-Driven 4D Radar Framework for Pervasive Human Detection in Harsh Conditions
+ https://arxiv.org/abs/2601.13373
+ arXiv:2601.13373v1 Announce Type: new
+Abstract: Pervasive sensing in industrial and underground environments is severely constrained by airborne dust, smoke, confined geometry, and metallic structures, which rapidly degrade optical and LiDAR-based perception. Elevation-resolved 4D mmWave radar offers strong resilience to such conditions, yet there remains a limited understanding of how to process its sparse and anisotropic point clouds for reliable human detection in enclosed, visibility-degraded spaces. This paper presents a fully model-driven 4D radar perception framework designed for real-time execution on embedded edge hardware. The system uses radar as its sole perception modality and integrates domain-aware multi-threshold filtering, ego-motion-compensated temporal accumulation, KD-tree Euclidean clustering with Doppler-aware refinement, and a rule-based 3D classifier. The framework is evaluated in a dust-filled enclosed trailer and in real underground mining tunnels, and in the tested scenarios the radar-based detector maintains stable pedestrian identification as camera and LiDAR modalities fail under severe visibility degradation. These results suggest that the proposed model-driven approach provides robust, interpretable, and computationally efficient perception for safety-critical applications in harsh industrial and subterranean environments.
+ oai:arXiv.org:2601.13373v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ IEEE PerCom 2026
+ Zhenan Liu, Amir Khajepour, George Shaker
+
+
+ Bounded Minds, Generative Machines: Envisioning Conversational AI that Works with Human Heuristics and Reduces Bias Risk
+ https://arxiv.org/abs/2601.13376
+ arXiv:2601.13376v1 Announce Type: new
+Abstract: Conversational AI is rapidly becoming a primary interface for information seeking and decision making, yet most systems still assume idealized users. In practice, human reasoning is bounded by limited attention, uneven knowledge, and reliance on heuristics that are adaptive but bias-prone. This article outlines a research pathway grounded in bounded rationality, and argues that conversational AI should be designed to work with human heuristics rather than against them. It identifies key directions for detecting cognitive vulnerability, supporting judgment under uncertainty, and evaluating conversational systems beyond factual accuracy, toward decision quality and cognitive robustness.
+ oai:arXiv.org:2601.13376v1
+ cs.ET
+ cs.AI
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Jiqun Liu
+
+
+ Practical Insights into Semi-Supervised Object Detection Approaches
+ https://arxiv.org/abs/2601.13380
+ arXiv:2601.13380v1 Announce Type: new
+Abstract: Learning in data-scarce settings has recently gained significant attention in the research community. Semi-supervised object detection (SSOD) aims to improve detection performance by leveraging a large number of unlabeled images alongside a limited number of labeled images (a.k.a. few-shot learning). In this paper, we present a comprehensive comparison of three state-of-the-art SSOD approaches, including MixPL, Semi-DETR and Consistent-Teacher, with the goal of understanding how performance varies with the number of labeled images. We conduct experiments using the MS-COCO and Pascal VOC datasets, two popular object detection benchmarks which allow for standardized evaluation. In addition, we evaluate the SSOD approaches on a custom Beetle dataset which enables us to gain insights into their performance on specialized datasets with a smaller number of object categories. Our findings highlight the trade-offs between accuracy, model size, and latency, providing insights into which methods are best suited for low-data regimes.
+ oai:arXiv.org:2601.13380v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chaoxin Wang, Bharaneeshwar Balasubramaniyam, Anurag Sangem, Nicolais Guevara, Doina Caragea
+
+
+ A Lightweight Modular Framework for Constructing Autonomous Agents Driven by Large Language Models: Design, Implementation, and Applications in AgentForge
+ https://arxiv.org/abs/2601.13383
+ arXiv:2601.13383v1 Announce Type: new
+Abstract: The emergence of LLMs has catalyzed a paradigm shift in autonomous agent development, enabling systems capable of reasoning, planning, and executing complex multi-step tasks. However, existing agent frameworks often suffer from architectural rigidity, vendor lock-in, and prohibitive complexity that impedes rapid prototyping and deployment. This paper presents AgentForge, a lightweight, open-source Python framework designed to democratize the construction of LLM-driven autonomous agents through a principled modular architecture. AgentForge introduces three key innovations: (1) a composable skill abstraction that enables fine-grained task decomposition with formally defined input-output contracts, (2) a unified LLM backend interface supporting seamless switching between cloud-based APIs and local inference engines, and (3) a declarative YAML-based configuration system that separates agent logic from implementation details. We formalize the skill composition mechanism as a directed acyclic graph (DAG) and prove its expressiveness for representing arbitrary sequential and parallel task workflows. Comprehensive experimental evaluation across four benchmark scenarios demonstrates that AgentForge achieves competitive task completion rates while reducing development time by 62% compared to LangChain and 78% compared to direct API integration. Latency measurements confirm sub-100ms orchestration overhead, rendering the framework suitable for real-time applications. The modular design facilitates extension: we demonstrate the integration of six built-in skills and provide comprehensive documentation for custom skill development. AgentForge addresses a critical gap in the LLM agent ecosystem by providing researchers and practitioners with a production-ready foundation for constructing, evaluating, and deploying autonomous agents without sacrificing flexibility or performance.
+ oai:arXiv.org:2601.13383v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Akbar Anbar Jafari, Cagri Ozcinar, Gholamreza Anbarjafari
+
+
+ From Completion to Editing: Unlocking Context-Aware Code Infilling via Search-and-Replace Instruction Tuning
+ https://arxiv.org/abs/2601.13384
+ arXiv:2601.13384v1 Announce Type: new
+Abstract: The dominant Fill-in-the-Middle (FIM) paradigm for code completion is constrained by its rigid inability to correct contextual errors and reliance on unaligned, insecure Base models. While Chat LLMs offer safety and Agentic workflows provide flexibility, they suffer from performance degradation and prohibitive latency, respectively. To resolve this dilemma, we propose Search-and-Replace Infilling (SRI), a framework that internalizes the agentic verification-and-editing mechanism into a unified, single-pass inference process. By structurally grounding edits via an explicit search phase, SRI harmonizes completion tasks with the instruction-following priors of Chat LLMs, extending the paradigm from static infilling to dynamic context-aware editing. We synthesize a high-quality dataset, SRI-200K, and fine-tune the SRI-Coder series. Extensive evaluations demonstrate that with minimal data (20k samples), SRI-Coder enables Chat models to surpass the completion performance of their Base counterparts. Crucially, unlike FIM-style tuning, SRI preserves general coding competencies and maintains inference latency comparable to standard FIM. We empower the entire Qwen3-Coder series with SRI, encouraging the developer community to leverage this framework for advanced auto-completion and assisted development.
+ oai:arXiv.org:2601.13384v1
+ cs.SE
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jiajun Zhang, Zeyu Cui, Jiaxi Yang, Lei Zhang, Yuheng Jing, Zeyao Ma, Tianyi Bai, Zilei Wang, Qiang Liu, Liang Wang, Binyuan Hui, Junyang Lin
+
+
+ Organ-Aware Attention Improves CT Triage and Classification
+ https://arxiv.org/abs/2601.13385
+ arXiv:2601.13385v1 Announce Type: new
+Abstract: There is an urgent need for triage and classification of high-volume medical imaging modalities such as computed tomography (CT), which can improve patient care and mitigate radiologist burnout. Study-level CT triage requires calibrated predictions with localized evidence; however, off-the-shelf Vision Language Models (VLM) struggle with 3D anatomy, protocol shifts, and noisy report supervision. This study used the two largest publicly available chest CT datasets: CT-RATE and RADCHEST-CT (held-out external test set). Our carefully tuned supervised baseline (instantiated as a simple Global Average Pooling head) establishes a new supervised state of the art, surpassing all reported linear-probe VLMs. Building on this baseline, we present ORACLE-CT, an encoder-agnostic, organ-aware head that pairs Organ-Masked Attention (mask-restricted, per-organ pooling that yields spatial evidence) with Organ-Scalar Fusion (lightweight fusion of normalized volume and mean-HU cues). In the chest setting, ORACLE-CT masked attention model achieves AUROC 0.86 on CT-RATE; in the abdomen setting, on MERLIN (30 findings), our supervised baseline exceeds a reproduced zero-shot VLM baseline obtained by running publicly released weights through our pipeline, and adding masked attention plus scalar fusion further improves performance to AUROC 0.85. Together, these results deliver state-of-the-art supervised classification performance across both chest and abdomen CT under a unified evaluation protocol. The source code is available at https://github.com/lavsendahal/oracle-ct.
+ oai:arXiv.org:2601.13385v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Lavsen Dahal, Yubraj Bhandari, Geoffrey D. Rubin, Joseph Y. Lo
+
+
+ Leveraging Transformer Decoder for Automotive Radar Object Detection
+ https://arxiv.org/abs/2601.13386
+ arXiv:2601.13386v1 Announce Type: new
+Abstract: In this paper, we present a Transformer-based architecture for 3D radar object detection that uses a novel Transformer Decoder as the prediction head to directly regress 3D bounding boxes and class scores from radar feature representations. To bridge multi-scale radar features and the decoder, we propose Pyramid Token Fusion (PTF), a lightweight module that converts a feature pyramid into a unified, scale-aware token sequence. By formulating detection as a set prediction problem with learnable object queries and positional encodings, our design models long-range spatial-temporal correlations and cross-feature interactions. This approach eliminates dense proposal generation and heuristic post-processing such as extensive non-maximum suppression (NMS) tuning. We evaluate the proposed framework on the RADDet, where it achieves significant improvements over state-of-the-art radar-only baselines.
+ oai:arXiv.org:2601.13386v1
+ cs.CV
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Changxu Zhang, Zhaoze Wang, Tai Fei, Christopher Grimm, Yi Jin, Claas Tebruegge, Ernst Warsitz, Markus Gardill
+
+
+ Confidence over Time: Confidence Calibration with Temporal Logic for Large Language Model Reasoning
+ https://arxiv.org/abs/2601.13387
+ arXiv:2601.13387v1 Announce Type: new
+Abstract: Large Language Models (LLMs) increasingly rely on long-form, multi-step reasoning to solve complex tasks such as mathematical problem solving and scientific question answering. Despite strong performance, existing confidence estimation methods typically reduce an entire reasoning process to a single scalar score, ignoring how confidence evolves throughout the generation. As a result, these methods are often sensitive to superficial factors such as response length or verbosity, and struggle to distinguish correct reasoning from confidently stated errors. We propose to characterize the stepwise confidence signal using Signal Temporal Logic (STL). Using a discriminative STL mining procedure, we discover temporal formulas that distinguish confidence signals of correct and incorrect responses. Our analysis found that the STL patterns generalize across tasks, and numeric parameters exhibit sensitivity to individual questions. Based on these insights, we develop a confidence estimation approach that informs STL blocks with parameter hypernetworks. Experiments on multiple reasoning tasks show our confidence scores are more calibrated than the baselines.
+ oai:arXiv.org:2601.13387v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhenjiang Mao, Anirudhh Venkat, Artem Bisliouk, Akshat Kothiyal, Sindhura Kumbakonam Subramanian, Saithej Singhu, Ivan Ruchkin
+
+
+ Structured Insight from Unstructured Data: Large Language Models for SDOH-Driven Diabetes Risk Prediction
+ https://arxiv.org/abs/2601.13388
+ arXiv:2601.13388v1 Announce Type: new
+Abstract: Social determinants of health (SDOH) play a critical role in Type 2 Diabetes (T2D) management but are often absent from electronic health records and risk prediction models. Most individual-level SDOH data is collected through structured screening tools, which lack the flexibility to capture the complexity of patient experiences and unique needs of a clinic's population. This study explores the use of large language models (LLMs) to extract structured SDOH information from unstructured patient life stories and evaluate the predictive value of both the extracted features and the narratives themselves for assessing diabetes control. We collected unstructured interviews from 65 T2D patients aged 65 and older, focused on their lived experiences, social context, and diabetes management. These narratives were analyzed using LLMs with retrieval-augmented generation to produce concise, actionable qualitative summaries for clinical interpretation and structured quantitative SDOH ratings for risk prediction modeling. The structured SDOH ratings were used independently and in combination with traditional laboratory biomarkers as inputs to linear and tree-based machine learning models (Ridge, Lasso, Random Forest, and XGBoost) to demonstrate how unstructured narrative data can be applied in conventional risk prediction workflows. Finally, we evaluated several LLMs on their ability to predict a patient's level of diabetes control (low, medium, high) directly from interview text with A1C values redacted. LLMs achieved 60% accuracy in predicting diabetes control levels from interview text. This work demonstrates how LLMs can translate unstructured SDOH-related data into structured insights, offering a scalable approach to augment clinical risk models and decision-making.
+ oai:arXiv.org:2601.13388v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/EMBC58623.2025.11254798
+ Annu Int Conf IEEE Eng Med Biol Soc. 2025 Jul;2025:1-7
+ Sasha Ronaghi, Prerit Choudhary, David H Rehkopf, Bryant Lin
+
+
+ Robustness and Resilience Evaluation of Eco-Driving Strategies at Signalized Intersections
+ https://arxiv.org/abs/2601.13389
+ arXiv:2601.13389v1 Announce Type: new
+Abstract: Eco-driving strategies have demonstrated substantial potential for improving energy efficiency and reducing emissions, especially at signalized intersections. However, evaluations of eco-driving methods typically rely on simplified simulation or experimental conditions, where certain assumptions are made to manage complexity and experimental control. This study introduces a unified framework to evaluate eco-driving strategies through the lens of two complementary criteria: control robustness and environmental resilience. We define formal indicators that quantify performance degradation caused by internal execution variability and external environmental disturbances, respectively. These indicators are then applied to assess multiple eco-driving controllers through real-world vehicle experiments. The results reveal key tradeoffs between tracking accuracy and adaptability, showing that optimization-based controllers offer more consistent performance across varying disturbance levels, while analytical controllers may perform comparably under nominal conditions but exhibit greater sensitivity to execution and timing variability.
+ oai:arXiv.org:2601.13389v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zhaohui Liang, Chengyuan Ma, Keke Long, Xiaopeng Li
+
+
+ Beyond Memorization: Testing LLM Reasoning on Unseen Theory of Computation Tasks
+ https://arxiv.org/abs/2601.13392
+ arXiv:2601.13392v1 Announce Type: new
+Abstract: Large language models (LLMs) have demonstrated strong performance on formal language tasks, yet whether this reflects genuine symbolic reasoning or pattern matching on familiar constructions remains unclear. We introduce a benchmark for deterministic finite automata (DFA) construction from regular languages, comprising factual knowledge questions, seen construction problems from public sources, and two types of unseen problems: hand-crafted instances with multiple interacting constraints and systematically generated problems via Arden's theorem. Models achieve perfect accuracy on factual questions and 84-90% on seen tasks. However, accuracy drops sharply on unseen problems (by 30-64%), with failures stemming from systematic misinterpretation of language constraints, incorrect handling of Kleene-star semantics, and a failure to preserve global consistency. We evaluate a three-stage hint protocol that enables correction of shallow errors but does not reliably resolve globally inconsistent or structurally flawed automata. Our analysis across multiple prompting strategies (direct, Chain-of-Thought, Tree-of-Thought) reveals that errors persist regardless of prompting approach, exposing a fundamental gap between LLMs' ability to generate syntactically plausible DFAs and their capacity for semantically correct formal reasoning.
+ oai:arXiv.org:2601.13392v1
+ cs.CL
+ cs.AI
+ cs.FL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shlok Shelat, Jay Raval, Souvik Roy, Manas Gaur
+
+
+ Can LLMs Compress (and Decompress)? Evaluating Code Understanding and Execution via Invertibility
+ https://arxiv.org/abs/2601.13398
+ arXiv:2601.13398v1 Announce Type: new
+Abstract: LLMs demonstrate strong performance on code benchmarks, yet round-trip code execution reveals limitations in their ability to maintain consistent reasoning across forward and backward execution. We present RoundTripCodeEval (RTCE), a comprehensive benchmark consisting of four distinct code execution reasoning tasks designed to rigorously test round-trip consistency. RTCE provides an execution-free, exact-match evaluation of bijection fidelity, assessing whether models preserve a consistent one-to-one mapping between encoding and decoding operations across various algorithms and directions. We systematically evaluate state-of-the-art Code-LLMs using zero-shot prompting, supervised fine-tuning on execution traces, and self-reflection mechanisms. Each yields modest improvements, but none closes the gap, indicating that current LLMs struggle with true round-trip consistency, which demonstrates that they lack the internal coherence required for trustworthy code reasoning. RTCE surfaces several new and previously unmeasured insights that are not captured by existing I/O-prediction, execution-reasoning, or round-trip natural-language benchmarks. We will release the code and the dataset upon acceptance.
+ oai:arXiv.org:2601.13398v1
+ cs.LG
+ cs.AI
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nickil Maveli, Antonio Vergari, Shay B. Cohen
+
+
+ QERS: Quantum Encryption Resilience Score for Post-Quantum Cryptography in Computer, IoT, and IIoT Systems
+ https://arxiv.org/abs/2601.13399
+ arXiv:2601.13399v1 Announce Type: new
+Abstract: Post-quantum cryptography (PQC) is becoming essential for securing Internet of Things (IoT) and Industrial IoT (IIoT) systems against quantum-enabled adversaries. However, existing evaluation approaches primarily focus on isolated performance metrics, offering limited support for holistic security and deployment decisions. This paper introduces QERS (Quantum Encryption Resilience Score), a universal measurement framework that integrates cryptographic performance, system constraints, and multi-criteria decision analysis to assess PQC readiness in computer, IoT, and IIoT environments. QERS combines normalized metrics, weighted aggregation, and machine learning-assisted analysis to produce interpretable resilience scores across heterogeneous devices and communication protocols. Experimental results demonstrate how the framework enables comparative evaluation of post-quantum schemes under realistic resource constraints, supporting informed security design and migration planning. This work is presented as a preprint, with extended statistical validation planned as part of ongoing graduate research.
+ oai:arXiv.org:2601.13399v1
+ cs.CR
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jonatan Rassekhnia
+
+
+ Deep Image Prior with L0 Gradient Regularizer for Image Smoothing
+ https://arxiv.org/abs/2601.13400
+ arXiv:2601.13400v1 Announce Type: new
+Abstract: Image smoothing is a fundamental image processing operation that preserves the underlying structure, such as strong edges and contours, and removes minor details and textures in an image. Many image smoothing algorithms rely on computing local window statistics or solving an optimization problem. Recent state-of-the-art methods leverage deep learning, but they require a carefully curated training dataset. Because constructing a proper training dataset for image smoothing is challenging, we propose DIP-$\ell_0$, a deep image prior framework that incorporates the $\ell_0$ gradient regularizer. This framework can perform high-quality image smoothing without any training data. To properly minimize the associated loss function that has the nonconvex, nonsmooth $\ell_0$ ``norm", we develop an alternating direction method of multipliers algorithm that utilizes an off-the-shelf $\ell_0$ gradient minimization solver. Numerical experiments demonstrate that the proposed DIP-$\ell_0$ outperforms many image smoothing algorithms in edge-preserving image smoothing and JPEG artifact removal.
+ oai:arXiv.org:2601.13400v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Nhat Thanh Tran, Kevin Bui, Jack Xin
+
+
+ Reasoning with Pixel-level Precision: QVLM Architecture and SQuID Dataset for Quantitative Geospatial Analytics
+ https://arxiv.org/abs/2601.13401
+ arXiv:2601.13401v1 Announce Type: new
+Abstract: Current Vision-Language Models (VLMs) fail at quantitative spatial reasoning because their architectures destroy pixel-level information required for counting and measurements. Vision encoders compress images through patch embeddings, reducing spatial indexing and losing the precise pixel-level tracking required for accurate counting. We present two contributions to address this fundamental limitation. First, we introduce SQuID (Satellite Quantitative Intelligence Dataset), a benchmark of 2,000 satellite image Question-Answer pairs with both numerical range and categorical answers, designed to evaluate quantitative spatial reasoning. The dataset spans three difficulty tiers with annotations automatically generated from human labels and their learned variability. Second, we propose QVLM (Quantitative Vision-Language Model), a code-generation architecture that maintains pixel precision by decoupling language understanding from visual analysis. Instead of encoding images into embeddings, QVLM generates executable code that first calls a segmentation model to obtain pixel-level masks, then operates directly on these masks, preserving spatial indexing throughout the reasoning process. Our experiments show that QVLM using GPT-5 as coder achieves 42.0% accuracy on SQuID compared to 28.1% for a VLM prompted with image-question pairs. Our work reveals that, for quantitative spatial reasoning, architectural decoupling enables better accuracy on quantitative tasks.
+ oai:arXiv.org:2601.13401v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Peter A. Massih, Eric Cosatto
+
+
+ Local-to-Global Logical Explanations for Deep Vision Models
+ https://arxiv.org/abs/2601.13404
+ arXiv:2601.13404v1 Announce Type: new
+Abstract: While deep neural networks are extremely effective at classifying images, they remain opaque and hard to interpret. We introduce local and global explanation methods for black-box models that generate explanations in terms of human-recognizable primitive concepts. Both the local explanations for a single image and the global explanations for a set of images are cast as logical formulas in monotone disjunctive normal form (MDNF), whose satisfaction guarantees that the model yields a high score on a given class. We also present an algorithm for explaining the classification of examples into multiple classes in the form of a monotone explanation list over primitive concepts. Despite their simplicity and interpretability, we show that the explanations maintain high fidelity and coverage with respect to the black-box models they seek to explain on challenging vision datasets.
+ oai:arXiv.org:2601.13404v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ 5th International Joint Conference on Learning & Reasoning 2025
+ Bhavan Vasu, Giuseppe Raffa, Prasad Tadepalli
+
+
+ Integrating Virtual Reality and Large Language Models for Team-Based Non-Technical Skills Training and Evaluation in the Operating Room
+ https://arxiv.org/abs/2601.13406
+ arXiv:2601.13406v1 Announce Type: new
+Abstract: Although effective teamwork and communication are critical to surgical safety, structured training for non-technical skills (NTS) remains limited compared with technical simulation. The ACS/APDS Phase III Team-Based Skills Curriculum calls for scalable tools that both teach and objectively assess these competencies during laparoscopic emergencies. We introduce the Virtual Operating Room Team Experience (VORTeX), a multi-user virtual reality (VR) platform that integrates immersive team simulation with large language model (LLM) analytics to train and evaluate communication, decision-making, teamwork, and leadership. Team dialogue is analyzed using structured prompts derived from the Non-Technical Skills for Surgeons (NOTSS) framework, enabling automated classification of behaviors and generation of directed interaction graphs that quantify communication structure and hierarchy. Two laparoscopic emergency scenarios, pneumothorax and intra-abdominal bleeding, were implemented to elicit realistic stress and collaboration. Twelve surgical professionals completed pilot sessions at the 2024 SAGES conference, rating VORTeX as intuitive, immersive, and valuable for developing teamwork and communication. The LLM consistently produced interpretable communication networks reflecting expected operative hierarchies, with surgeons as central integrators, nurses as initiators, and anesthesiologists as balanced intermediaries. By integrating immersive VR with LLM-driven behavioral analytics, VORTeX provides a scalable, privacy-compliant framework for objective assessment and automated, data-informed debriefing across distributed training environments.
+ oai:arXiv.org:2601.13406v1
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jacob Barker, Doga Demirel, Cullen Jackson, Anna Johansson, Robbin Miraglia, Darian Hoagland, Stephanie B. Jones, John Mitchell, Daniel B. Jones, Suvranu De
+
+
+ Classifiers in High Dimensional Hilbert Metrics
+ https://arxiv.org/abs/2601.13410
+ arXiv:2601.13410v1 Announce Type: new
+Abstract: Classifying points in high dimensional spaces is a fundamental geometric problem in machine learning. In this paper, we address classifying points in the $d$-dimensional Hilbert polygonal metric. The Hilbert metric is a generalization of the Cayley-Klein hyperbolic distance to arbitrary convex bodies and has a diverse range of applications in machine learning and convex geometry. We first present an efficient LP-based algorithm in the metric for the large-margin SVM problem. Our algorithm runs in time polynomial in the number of points, bounding facets, and the dimension. This is a significant improvement on previous works, which either provide no theoretical guarantees on running time or suffer from exponential runtime. We also consider the closely related Funk metric. Finally, we present efficient algorithms for the soft-margin SVM problem and for nearest neighbor-based classification in the Hilbert metric.
+ oai:arXiv.org:2601.13410v1
+ cs.CG
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Aditya Acharya, Auguste H. Gezalyan, David M. Mount
+
+
+ Using deep learning for predicting cleansing quality of colon capsule endoscopy images
+ https://arxiv.org/abs/2601.13412
+ arXiv:2601.13412v1 Announce Type: new
+Abstract: In this study, we explore the application of deep learning techniques for predicting cleansing quality in colon capsule endoscopy (CCE) images. Using a dataset of 500 images labeled by 14 clinicians on the Leighton-Rex scale (Poor, Fair, Good, and Excellent), a ResNet-18 model was trained for classification, leveraging stratified K-fold cross-validation to ensure robust performance. To optimize the model, structured pruning techniques were applied iteratively, achieving significant sparsity while maintaining high accuracy. Explainability of the pruned model was evaluated using Grad-CAM, Grad-CAM++, Eigen-CAM, Ablation-CAM, and Random-CAM, with the ROAD method employed for consistent evaluation. Our results indicate that a pruned model can achieve a cross-validation accuracy of 88% at 79% sparsity, up from 84% for the unpruned model, demonstrating that pruning can improve efficiency without compromising performance. We also highlight the challenges of evaluating cleansing quality of CCE images, emphasize the importance of explainability in clinical applications, and discuss the challenges associated with using the ROAD method for our task. Finally, we employ a variant of adaptive temperature scaling to calibrate the pruned models for an external dataset.
+ oai:arXiv.org:2601.13412v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Puneet Sharma, Kristian Dalsbø Hindberg, Benedicte Schelde-Olesen, Ulrik Deding, Esmaeil S. Nadimi, Jan-Matthias Braun
+
+
+ Diffusion Representations for Fine-Grained Image Classification: A Marine Plankton Case Study
+ https://arxiv.org/abs/2601.13416
+ arXiv:2601.13416v1 Announce Type: new
+Abstract: Diffusion models have emerged as state-of-the-art generative methods for image synthesis, yet their potential as general-purpose feature encoders remains underexplored. Trained for denoising and generation without labels, they can be interpreted as self-supervised learners that capture both low- and high-level structure. We show that a frozen diffusion backbone enables strong fine-grained recognition by probing intermediate denoising features across layers and timesteps and training a linear classifier for each pair. We evaluate this in a real-world plankton-monitoring setting with practical impact, using controlled and comparable training setups against established supervised and self-supervised baselines. Frozen diffusion features are competitive with supervised baselines and outperform other self-supervised methods in both balanced and naturally long-tailed settings. Out-of-distribution evaluations on temporally and geographically shifted plankton datasets further show that frozen diffusion features maintain strong accuracy and Macro F1 under substantial distribution shift.
+ oai:arXiv.org:2601.13416v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ A. Nieto Juscafresa, Á. Mazcuñán Herreros, J. Sullivan
+
+
+ SGW-GAN: Sliced Gromov-Wasserstein Guided GANs for Retinal Fundus Image Enhancement
+ https://arxiv.org/abs/2601.13417
+ arXiv:2601.13417v1 Announce Type: new
+Abstract: Retinal fundus photography is indispensable for ophthalmic screening and diagnosis, yet image quality is often degraded by noise, artifacts, and uneven illumination. Recent GAN- and diffusion-based enhancement methods improve perceptual quality by aligning degraded images with high-quality distributions, but our analysis shows that this focus can distort intra-class geometry: clinically related samples become dispersed, disease-class boundaries blur, and downstream tasks such as grading or lesion detection are harmed. The Gromov-Wasserstein (GW) discrepancy offers a principled solution by aligning distributions through internal pairwise distances, naturally preserving intra-class structure, but its high computational cost restricts practical use. To overcome this, we propose SGW-GAN, the first framework to incorporate Sliced GW (SGW) into retinal image enhancement. SGW approximates GW via random projections, retaining relational fidelity while greatly reducing cost. Experiments on public datasets show that SGW-GAN produces visually compelling enhancements, achieves superior diabetic retinopathy grading, and reports the lowest GW discrepancy across disease labels, demonstrating both efficiency and clinical fidelity for unpaired medical image enhancement.
+ oai:arXiv.org:2601.13417v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yujian Xiong, Xuanzhao Dong, Wenhui Zhu, Xin Li, Oana Dumitrascu, Yalin Wang
+
+
+ TrustEnergy: A Unified Framework for Accurate and Reliable User-level Energy Usage Prediction
+ https://arxiv.org/abs/2601.13422
+ arXiv:2601.13422v1 Announce Type: new
+Abstract: Energy usage prediction is important for various real-world applications, including grid management, infrastructure planning, and disaster response. Although a plethora of deep learning approaches have been proposed to perform this task, most of them either overlook the essential spatial correlations across households or fail to scale to individualized prediction, making them less effective for accurate fine-grained user-level prediction. In addition, due to the dynamic and uncertain nature of energy usage caused by various factors such as extreme weather events, quantifying uncertainty for reliable prediction is also significant, but it has not been fully explored in existing work. In this paper, we propose a unified framework called TrustEnergy for accurate and reliable user-level energy usage prediction. TrustEnergy has two key technical components: (i) a Hierarchical Spatiotemporal Representation module to efficiently capture both macro and micro energy usage patterns with a novel memory-augmented spatiotemporal graph neural network, and (ii) an innovative Sequential Conformalized Quantile Regression module to dynamically adjust uncertainty bounds to ensure valid prediction intervals over time, without making strong assumptions about the underlying data distribution. We implement and evaluate our TrustEnergy framework by working with an electricity provider in Florida, and the results show that TrustEnergy achieves a 5.4% increase in prediction accuracy and a 5.7% improvement in uncertainty quantification compared to state-of-the-art baselines.
+ oai:arXiv.org:2601.13422v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Dahai Yu, Rongchao Xu, Dingyi Zhuang, Yuheng Bu, Shenhao Wang, Guang Wang
+
+
+ Quantum Encryption Resilience Score (QERS) for MQTT, HTTP, and HTTPS under Post-Quantum Cryptography in Computer, IoT, and IIoT Systems
+ https://arxiv.org/abs/2601.13423
+ arXiv:2601.13423v1 Announce Type: new
+Abstract: Post-quantum cryptography (PQC) introduces significant computational and communication overhead, which poses challenges for resource-constrained computer systems, Internet of Things (IoT), and Industrial IoT (IIoT) devices. This paper presents an experimental evaluation of the Quantum Encryption Resilience Score (QERS) applied to MQTT, HTTP, and HTTPS communication protocols operating under PQC. Using an ESP32-C6 client and an ARM-based Raspberry Pi CM4 server, latency, CPU utilization, RSSI, energy consumption, key size, and TLS handshake overhead are measured under realistic operating conditions. QERS integrates these heterogeneous metrics into normalized Basic, Tuned, and Fusion scores, enabling systematic comparison of protocol efficiency and security resilience. Experimental results show that MQTT provides the highest efficiency under PQC constraints, while HTTPS achieves the highest security-weighted resilience at the cost of increased latency and resource consumption. The proposed framework supports informed protocol selection and migration planning for PQC-enabled IoT and IIoT deployments.
+ oai:arXiv.org:2601.13423v1
+ cs.CR
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jonatan Rassekhnia
+
+
+ Driving Computational Efficiency in Large-Scale Platforms using HPC Technologies
+ https://arxiv.org/abs/2601.13424
+ arXiv:2601.13424v1 Announce Type: new
+Abstract: The Latin American Giant Observatory (LAGO) project utilizes extensive High-Performance Computing (HPC) resources for complex astroparticle physics simulations, making resource efficiency critical for scientific productivity and sustainability. This article presents a detailed analysis focused on quantifying and improving HPC resource utilization efficiency specifically within the LAGO computational environment. The core objective is to understand how LAGO's distinct computational workloads, characterized by a prevalent coarse-grained, task-parallel execution model, consume resources in practice. To achieve this, we analyze historical job accounting data from the EGI FedCloud platform, identifying primary workload categories (Monte Carlo simulations, data processing, user analysis/testing) and evaluating their performance using key efficiency metrics (CPU utilization, walltime utilization, and I/O patterns). Our analysis reveals significant patterns, including high CPU efficiency within individual simulation tasks contrasted with the distorting impact of short test jobs on aggregate metrics. This work pinpoints specific inefficiencies and provides data-driven insights into LAGO's HPC usage. The findings directly inform recommendations for optimizing resource requests, refining workflow management strategies, and guiding future efforts to enhance computational throughput, ultimately maximizing the scientific return from LAGO's HPC investments.
+ oai:arXiv.org:2601.13424v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Alexander Martinez Mendez, Antonio J. Rubio-Montero, Carlos J. Barrios H., Hernán Asorey, Rafael Mayo-García, Luis A. Núñez
+
+
+ A Scientific Data Integrity system based on Blockchain
+ https://arxiv.org/abs/2601.13425
+ arXiv:2601.13425v1 Announce Type: new
+Abstract: Most High Performance Computing (HPC) projects today obtain large amounts of data from different sources, depending on the project's objectives. Some of these datasets are so large that copying them is often impractical. At the same time, science requires data used for different purposes to remain unaltered, so that different groups of researchers can reproduce results, discuss theories, and validate each other's work. In this paper, we present a novel approach that helps research groups validate data integrity on such distributed repositories using Blockchain. Originally developed for cryptocurrencies, Blockchain has demonstrated a versatile range of uses. Our proposal ensures 1) secure access to data management, 2) easy validation of data integrity, and 3) an easy way to add new records to the dataset under the same robust integrity policy. A prototype was developed and tested using a subset of a public dataset from a real scientific collaboration, the Latin American Giant Observatory (LAGO) Project.
+ oai:arXiv.org:2601.13425v1
+ cs.CR
+ cs.ET
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Gian Sebastian Mier Bello, Alexander Martinez Mendez, Carlos J. Barrios H., Robinson Rivas, Luis A. Núñez
+
+
+ Techniques of Modern Attacks
+ https://arxiv.org/abs/2601.13427
+ arXiv:2601.13427v1 Announce Type: new
+Abstract: The techniques used in modern attacks have become an important subject of investigation. As we advance further into the digital age, cyber attackers are employing increasingly sophisticated and highly threatening methods. These attacks target not only organizations and governments but also extend to private and corporate sectors. Modern attack techniques, such as lateral movement and ransomware, are designed to infiltrate networks and steal sensitive data. Among these techniques, Advanced Persistent Threats (APTs) represent a complex method of attack aimed at specific targets to steal high-value sensitive information or damage the infrastructure of the targeted organization. In this paper, I will investigate Advanced Persistent Threats (APTs) as a modern attack technique, focusing on both the attack life cycle and cutting-edge detection and defense strategies proposed in recent academic research. I will analyze four representative papers to understand the evolution of APT detection mechanisms, including machine learning-driven behavioral analysis and network-level collaborative defense models. Through this comparative analysis, I aim to highlight the strengths and limitations of each approach and propose more adaptive APT mitigation strategies. The study seeks to analyze the key characteristics of APTs and provide a comprehensive high-level understanding of APTs along with potential solutions to the threats they pose.
+ oai:arXiv.org:2601.13427v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alexander Shim
+
+
+ Trust Me, I'm an Expert: Decoding and Steering Authority Bias in Large Language Models
+ https://arxiv.org/abs/2601.13433
+ arXiv:2601.13433v1 Announce Type: new
+Abstract: Prior research demonstrates that performance of language models on reasoning tasks can be influenced by suggestions, hints and endorsements. However, the influence of endorsement source credibility remains underexplored. We investigate whether language models exhibit systematic bias based on the perceived expertise of the provider of the endorsement. Across 4 datasets spanning mathematical, legal, and medical reasoning, we evaluate 11 models using personas representing four expertise levels per domain. Our results reveal that models are increasingly susceptible to incorrect/misleading endorsements as source expertise increases, with higher-authority sources inducing not only accuracy degradation but also increased confidence in wrong answers. We also show that this authority bias is mechanistically encoded within the model and a model can be steered away from the bias, thereby improving its performance even when an expert gives a misleading endorsement.
+ oai:arXiv.org:2601.13433v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Priyanka Mary Mammen, Emil Joswin, Shankar Venkitachalam
+
+
+ A Learnable Wavelet Transformer for Long-Short Equity Trading and Risk-Adjusted Return Optimization
+ https://arxiv.org/abs/2601.13435
+ arXiv:2601.13435v1 Announce Type: new
+Abstract: Learning profitable intraday trading policies from financial time series is challenging due to heavy noise, non-stationarity, and strong cross-sectional dependence among related assets. We propose \emph{WaveLSFormer}, a learnable wavelet-based long-short Transformer that jointly performs multi-scale decomposition and return-oriented decision learning. Specifically, a learnable wavelet front-end generates low-/high-frequency components via an end-to-end trained filter bank, guided by spectral regularizers that encourage stable and well-separated frequency bands. To fuse multi-scale information, we introduce a low-guided high-frequency injection (LGHI) module that refines low-frequency representations with high-frequency cues while controlling training stability. The model outputs a portfolio of long/short positions that is rescaled to satisfy a fixed risk budget, and is optimized directly with a trading objective and risk-aware regularization. Extensive experiments on five years of hourly data across six industry groups, evaluated over ten random seeds, demonstrate that WaveLSFormer consistently outperforms MLP, LSTM and Transformer backbones, with and without fixed discrete wavelet front-ends. Averaged across all industries, WaveLSFormer achieves a cumulative overall strategy return of $0.607 \pm 0.045$ and a Sharpe ratio of $2.157 \pm 0.166$, substantially improving both profitability and risk-adjusted returns over the strongest baselines.
+ oai:arXiv.org:2601.13435v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shuozhe Li, Du Cheng, Leqi Liu
+
+
+ MOSLD-Bench: Multilingual Open-Set Learning and Discovery Benchmark for Text Categorization
+ https://arxiv.org/abs/2601.13437
+ arXiv:2601.13437v1 Announce Type: new
+Abstract: Open-set learning and discovery (OSLD) is a challenging machine learning task in which samples from new (unknown) classes can appear at test time. It can be seen as a generalization of zero-shot learning, where the new classes are not known a priori, hence involving the active discovery of new classes. While zero-shot learning has been extensively studied in text classification, especially with the emergence of pre-trained language models, open-set learning and discovery is a comparatively new setup for the text domain. To this end, we introduce the first multilingual open-set learning and discovery (MOSLD) benchmark for text categorization by topic, comprising 960K data samples across 12 languages. To construct the benchmark, we (i) rearrange existing datasets and (ii) collect new data samples from the news domain. Moreover, we propose a novel framework for the OSLD task, which integrates multiple stages to continuously discover and learn new classes. We evaluate several language models, including our own, to obtain results that can be used as reference for future work. We release our benchmark at https://github.com/Adriana19Valentina/MOSLD-Bench.
+ oai:arXiv.org:2601.13437v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Adriana-Valentina Costache, Daria-Nicoleta Dragomir, Silviu-Florin Gheorghe, Eduard Poesina, Paul Irofti, Radu Tudor Ionescu
+
+
+ Analyzing VLM-Based Approaches for Anomaly Classification and Segmentation
+ https://arxiv.org/abs/2601.13440
+ arXiv:2601.13440v1 Announce Type: new
+Abstract: Vision-Language Models (VLMs), particularly CLIP, have revolutionized anomaly detection by enabling zero-shot and few-shot defect identification without extensive labeled datasets. By learning aligned representations of images and text, VLMs facilitate anomaly classification and segmentation through natural language descriptions of normal and abnormal states, eliminating traditional requirements for task-specific training or defect examples. This project presents a comprehensive analysis of VLM-based approaches for anomaly classification (AC) and anomaly segmentation (AS). We systematically investigate key architectural paradigms including sliding window-based dense feature extraction (WinCLIP), multi-stage feature alignment with learnable projections (AprilLab framework), and compositional prompt ensemble strategies. Our analysis evaluates these methods across critical dimensions: feature extraction mechanisms, text-visual alignment strategies, prompt engineering techniques, zero-shot versus few-shot trade-offs, computational efficiency, and cross-domain generalization. Through rigorous experimentation on benchmarks such as MVTec AD and VisA, we compare classification accuracy, segmentation precision, and inference efficiency. The primary contribution is a foundational understanding of how and why VLMs succeed in anomaly detection, synthesizing practical insights for method selection and identifying current limitations. This work aims to facilitate informed adoption of VLM-based methods in industrial quality control and guide future research directions.
+ oai:arXiv.org:2601.13440v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohit Kakda, Mirudula Shri Muthukumaran, Uttapreksha Patel, Lawrence Swaminathan Xavier Prince
+
+
+ Explicit Cognitive Allocation: A Principle for Governed and Auditable Inference in Large Language Models
+ https://arxiv.org/abs/2601.13443
+ arXiv:2601.13443v1 Announce Type: new
+Abstract: The rapid adoption of large language models (LLMs) has enabled new forms of AI-assisted reasoning across scientific, technical, and organizational domains. However, prevailing modes of LLM use remain cognitively unstructured: problem framing, knowledge exploration, retrieval, methodological awareness, and explanation are typically collapsed into a single generative process. This cognitive collapse limits traceability, weakens epistemic control, and undermines reproducibility, particularly in high-responsibility settings.
+ We introduce Explicit Cognitive Allocation, a general principle for structuring AI-assisted inference through the explicit separation and orchestration of epistemic functions. We instantiate this principle in the Cognitive Universal Agent (CUA), an architecture that organizes inference into distinct stages of exploration and framing, epistemic anchoring, instrumental and methodological mapping, and interpretive synthesis. Central to this framework is the notion of Universal Cognitive Instruments (UCIs), which formalize heterogeneous means, including computational, experimental, organizational, regulatory, and educational instruments, through which abstract inquiries become investigable.
+ We evaluate the effects of explicit cognitive and instrumental allocation through controlled comparisons between CUA-orchestrated inference and baseline LLM inference under matched execution conditions. Across multiple prompts in the agricultural domain, CUA inference exhibits earlier and structurally governed epistemic convergence, higher epistemic alignment under semantic expansion, and systematic exposure of the instrumental landscape of inquiry. In contrast, baseline LLM inference shows greater variability in alignment and fails to explicitly surface instrumental structure.
+ oai:arXiv.org:2601.13443v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ H\'ector Manuel Manzanilla-Granados, Zaira Navarrete-Cazales, Miriam Pescador-Rojas, Tonahtiu Ram\'irez-Romero
+
+
+ BladeSDF: Unconditional and Conditional Generative Modeling of Representative Blade Geometries Using Signed Distance Functions
+ https://arxiv.org/abs/2601.13445
+ arXiv:2601.13445v1 Announce Type: new
+Abstract: Generative AI has emerged as a transformative paradigm in engineering design, enabling automated synthesis and reconstruction of complex 3D geometries while preserving feasibility and performance relevance. This paper introduces a domain-specific implicit generative framework for turbine blade geometry using DeepSDF, addressing critical gaps in performance-aware modeling and manufacturable design generation. The proposed method leverages a continuous signed distance function (SDF) representation to reconstruct and generate smooth, watertight geometries with quantified accuracy. It establishes an interpretable, near-Gaussian latent space that aligns with blade-relevant parameters, such as taper and chord ratios, enabling controlled exploration and unconditional synthesis through interpolation and Gaussian sampling. In addition, a compact neural network maps engineering descriptors, such as maximum directional strains, to latent codes, facilitating the generation of performance-informed geometry. The framework achieves high reconstruction fidelity, with surface distance errors concentrated within $1\%$ of the maximum blade dimension, and demonstrates robust generalization to unseen designs. By integrating constraints, objectives, and performance metrics, this approach advances beyond traditional 2D-guided or unconstrained 3D pipelines, offering a practical and interpretable solution for data-driven turbine blade modeling and concept generation.
+ oai:arXiv.org:2601.13445v1
+ cs.LG
+ physics.comp-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ashish S. Nair, Sandipp Krishnan Ravi, Itzel Salgado, Changjie Sun, Sayan Ghosh, Liping Wang
+
+
+ Fairness-informed Pareto Optimization: An Efficient Bilevel Framework
+ https://arxiv.org/abs/2601.13448
+ arXiv:2601.13448v1 Announce Type: new
+Abstract: Despite their promise, fair machine learning methods often yield Pareto-inefficient models, in which the performance of certain groups can be improved without degrading that of others. This issue arises frequently in traditional in-processing approaches such as fairness-through-regularization. In contrast, existing Pareto-efficient approaches are biased towards a certain perspective on fairness and fail to adapt to the broad range of fairness metrics studied in the literature. In this paper, we present BADR, a simple framework to recover the optimal Pareto-efficient model for any fairness metric. Our framework recovers its models through a Bilevel Adaptive Rescalarisation procedure. The lower level is a weighted empirical risk minimization task where the weights are a convex combination of the groups, while the upper level optimizes the chosen fairness objective. We equip our framework with two novel large-scale, single-loop algorithms, BADR-GD and BADR-SGD, and establish their convergence guarantees. We release badr, an open-source Python toolbox implementing our framework for a variety of learning tasks and fairness metrics. Finally, we conduct extensive numerical experiments demonstrating the advantages of BADR over existing Pareto-efficient approaches to fairness.
+ oai:arXiv.org:2601.13448v1
+ cs.LG
+ math.OC
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sofiane Tanji, Samuel Vaiter, Yassine Laguel
+
+
+ Event-based Heterogeneous Information Processing for Online Vision-based Obstacle Detection and Localization
+ https://arxiv.org/abs/2601.13451
+ arXiv:2601.13451v1 Announce Type: new
+Abstract: This paper introduces a novel framework for robotic vision-based navigation that integrates Hybrid Neural Networks (HNNs) with Spiking Neural Network (SNN)-based filtering to enhance situational awareness for unmodeled obstacle detection and localization. By leveraging the complementary strengths of Artificial Neural Networks (ANNs) and SNNs, the system achieves both accurate environmental understanding and fast, energy-efficient processing. The proposed architecture employs a dual-pathway approach: an ANN component processes static spatial features at low frequency, while an SNN component handles dynamic, event-based sensor data in real time. Unlike conventional hybrid architectures that rely on domain conversion mechanisms, our system incorporates a pre-developed SNN-based filter that directly utilizes spike-encoded inputs for localization and state estimation. Detected anomalies are validated using contextual information from the ANN pathway and continuously tracked to support anticipatory navigation strategies. Simulation results demonstrate that the proposed method offers acceptable detection accuracy while maintaining computational efficiency close to SNN-only implementations, which operate at a fraction of the resource cost. This framework represents a significant advancement in neuromorphic navigation systems for robots operating in unpredictable and dynamic environments.
+ oai:arXiv.org:2601.13451v1
+ cs.RO
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Reza Ahmadvand, Sarah Safura Sharif, Yaser Mike Banad
+
+
+ A simulation of urban incidents involving pedestrians and vehicles based on Weighted A*
+ https://arxiv.org/abs/2601.13452
+ arXiv:2601.13452v1 Announce Type: new
+Abstract: This document presents a comprehensive simulation framework designed to model urban incidents involving pedestrians and vehicles. Using a multiagent systems approach, two types of agents (pedestrians and vehicles) are introduced within a 2D grid based urban environment. The environment encodes streets, sidewalks, buildings, zebra crossings, and obstacles such as potholes and infrastructure elements. Each agent employs a weighted A* algorithm for pathfinding, allowing for variation in decision making behavior such as reckless movement or strict rule-following. The model aims to simulate interactions, assess risk of collisions, and evaluate efficiency under varying environmental and behavioral conditions. Experimental results explore how factors like obstacle density, presence of traffic control mechanisms, and behavioral deviations affect safety and travel efficiency.
+ oai:arXiv.org:2601.13452v1
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Edgar Gonzalez Fernandez
+
+
+ PhysicsSolutionAgent: Towards Multimodal Explanations for Numerical Physics Problem Solving
+ https://arxiv.org/abs/2601.13453
+ arXiv:2601.13453v1 Announce Type: new
+Abstract: Explaining numerical physics problems often requires more than text-based solutions; clear visual reasoning can substantially improve conceptual understanding. While large language models (LLMs) demonstrate strong performance on many physics questions in textual form, their ability to generate long, high-quality visual explanations remains insufficiently explored. In this work, we introduce PhysicsSolutionAgent (PSA), an autonomous agent that generates physics-problem explanation videos of up to six minutes using Manim animations. To evaluate the generated videos, we design an assessment pipeline that performs automated checks across 15 quantitative parameters and incorporates feedback from a vision-language model (VLM) to iteratively improve video quality. We evaluate PSA on 32 videos spanning numerical and theoretical physics problems. Our results reveal systematic differences in video quality depending on problem difficulty and whether the task is numerical or theoretical. Using GPT-5-mini, PSA achieves a 100% video-completion rate with an average automated score of 3.8/5. However, qualitative analysis and human inspection uncover both minor and major issues, including visual layout inconsistencies and errors in how visual content is interpreted during feedback. These findings expose key limitations in reliable Manim code generation and highlight broader challenges in multimodal reasoning and evaluation for visual explanations of numerical physics problems. Our work underscores the need for improved visual understanding, verification, and evaluation frameworks in future multimodal educational systems.
+ oai:arXiv.org:2601.13453v1
+ cs.CL
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Aditya Thole, Anmol Agrawal, Arnav Ramamoorthy, Dhruv Kumar
+
+
+ Federated Learning Under Temporal Drift -- Mitigating Catastrophic Forgetting via Experience Replay
+ https://arxiv.org/abs/2601.13456
+ arXiv:2601.13456v1 Announce Type: new
+Abstract: Federated Learning struggles under temporal concept drift where client data distributions shift over time. We demonstrate that standard FedAvg suffers catastrophic forgetting under seasonal drift on Fashion-MNIST, with accuracy dropping from 74% to 28%. We propose client-side experience replay, where each client maintains a small buffer of past samples mixed with current data during local training. This simple approach requires no changes to server aggregation. Experiments show that a 50-sample-per-class buffer restores performance to 78-82%, effectively preventing forgetting. Our ablation study reveals a clear memory-accuracy trade-off as buffer size increases.
+ oai:arXiv.org:2601.13456v1
+ cs.LG
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sahasra Kokkula, Daniel David, Aaditya Baruah
+
+
+ A Tool for Automatically Cataloguing and Selecting Pre-Trained Models and Datasets for Software Engineering
+ https://arxiv.org/abs/2601.13460
+ arXiv:2601.13460v1 Announce Type: new
+Abstract: The rapid growth of machine learning assets has made it increasingly difficult for software engineers to identify models and datasets that match their specific needs. Browsing large registries, such as Hugging Face, is time-consuming, error-prone, and rarely tailored to Software Engineering (SE) tasks. We present MLAssetSelection, a web application that automatically extracts SE assets and supports four key functionalities: (i) a configurable leaderboard for ranking models across multiple benchmarks and metrics; (ii) requirements-based selection of models and datasets; (iii) real-time automated updates through scheduled jobs that keep asset information current; and (iv) user-centric features including login, personalized asset lists, and configurable alert notifications. A demonstration video is available at https://youtu.be/t6CJ6P9asV4.
+ oai:arXiv.org:2601.13460v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alexandra González, Oscar Cerezo, Xavier Franch, Silverio Martínez-Fernández
+
+
+ SpatialBench-UC: Uncertainty-Aware Evaluation of Spatial Prompt Following in Text-to-Image Generation
+ https://arxiv.org/abs/2601.13462
+ arXiv:2601.13462v1 Announce Type: new
+Abstract: Evaluating whether text-to-image models follow explicit spatial instructions is difficult to automate. Object detectors may miss targets or return multiple plausible detections, and simple geometric tests can become ambiguous in borderline cases. Spatial evaluation is naturally a selective prediction problem, the checker may abstain when evidence is weak and report confidence so that results can be interpreted as a risk coverage tradeoff rather than a single score. We introduce SpatialBench-UC, a small, reproducible benchmark for pairwise spatial relations. The benchmark contains 200 prompts (50 object pairs times 4 relations) grouped into 100 counterfactual pairs obtained by swapping object roles. We release a benchmark package, versioned prompts, pinned configs, per-sample checker outputs, and report tables, enabling reproducible and auditable comparisons across models. We also include a lightweight human audit used to calibrate the checker's abstention margin and confidence threshold. We evaluate three baselines, Stable Diffusion 1.5, SD 1.5 BoxDiff, and SD 1.4 GLIGEN. The checker reports pass rate and coverage as well as conditional pass rates on decided samples. The results show that grounding methods substantially improve both pass rate and coverage, while abstention remains a dominant factor due mainly to missing detections.
+ oai:arXiv.org:2601.13462v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Amine Rostane
+
+
+ Quantum Qualifiers for Neural Network Model Selection in Hadronic Physics
+ https://arxiv.org/abs/2601.13463
+ arXiv:2601.13463v1 Announce Type: new
+Abstract: As quantum machine-learning architectures mature, a central challenge is no longer their construction, but identifying the regimes in which they offer practical advantages over classical approaches. In this work, we introduce a framework for addressing this question in data-driven hadronic physics problems by developing diagnostic tools - centered on a quantitative quantum qualifier - that guide model selection between classical and quantum deep neural networks based on intrinsic properties of the data. Using controlled classification and regression studies, we show how relative model performance follows systematic trends in complexity, noise, and dimensionality, and how these trends can be distilled into a predictive criterion. We then demonstrate the utility of this approach through an application to Compton form factor extraction from deeply virtual Compton scattering, where the quantum qualifier identifies kinematic regimes favorable to quantum models. Together, these results establish a principled framework for deploying quantum machine-learning tools in precision hadronic physics.
+ oai:arXiv.org:2601.13463v1
+ cs.LG
+ hep-ph
+ nucl-th
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Brandon B. Le, D. Keller
+
+
+ Context and Transcripts Improve Detection of Deepfake Audios of Public Figures
+ https://arxiv.org/abs/2601.13464
+ arXiv:2601.13464v1 Announce Type: new
+Abstract: Humans use context to assess the veracity of information. However, current audio deepfake detectors only analyze the audio file without considering either context or transcripts. We create and analyze a Journalist-provided Deepfake Dataset (JDD) of 255 public deepfakes which were primarily contributed by over 70 journalists since early 2024. We also generate a synthetic audio dataset (SYN) of dead public figures and propose a novel Context-based Audio Deepfake Detector (CADD) architecture. In addition, we evaluate performance on two large-scale datasets: ITW and P$^2$V. We show that sufficient context and/or the transcript can significantly improve the efficacy of audio deepfake detectors. Performance (measured via F1 score, AUC, and EER) of multiple baseline audio deepfake detectors and traditional classifiers can be improved by 5%-37.58% in F1-score, 3.77%-42.79% in AUC, and 6.17%-47.83% in EER. We additionally show that CADD, via its use of context and/or transcripts, is more robust to 5 adversarial evasion strategies, limiting performance degradation to an average of just -0.71% across all experiments. Code, models, and datasets are available at our project page: https://sites.northwestern.edu/nsail/cadd-context-based-audio-deepfake-detection (access restricted during review).
+ oai:arXiv.org:2601.13464v1
+ cs.AI
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Chongyang Gao, Marco Postiglione, Julian Baldwin, Natalia Denisenko, Isabel Gortner, Luke Fosdick, Chiara Pulice, Sarit Kraus, V. S. Subrahmanian
+
+
+ Graph Neural Networks are Heuristics
+ https://arxiv.org/abs/2601.13465
+ arXiv:2601.13465v1 Announce Type: new
+Abstract: We demonstrate that a single training trajectory can transform a graph neural network into an unsupervised heuristic for combinatorial optimization. Focusing on the Travelling Salesman Problem, we show that encoding global structural constraints as an inductive bias enables a non-autoregressive model to generate solutions via direct forward passes, without search, supervision, or sequential decision-making. At inference time, dropout and snapshot ensembling allow a single model to act as an implicit ensemble, reducing optimality gaps through increased solution diversity. Our results establish that graph neural networks do not require supervised training nor explicit search to be effective. Instead, they can internalize global combinatorial structure and function as strong, learned heuristics. This reframes the role of learning in combinatorial optimization: from augmenting classical algorithms to directly instantiating new heuristics.
+ oai:arXiv.org:2601.13465v1
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yimeng Min, Carla P. Gomes
+
+
+ Governance Matters: Lessons from Restructuring the data.table OSS Project
+ https://arxiv.org/abs/2601.13466
+ arXiv:2601.13466v1 Announce Type: new
+Abstract: Open source software (OSS) forms the backbone of industrial data workflows and enterprise systems. However, many OSS projects face operational risks due to informal or centralized governance. This paper presents a practical case study of data.table, a high-performance R package widely adopted in production analytics pipelines, which underwent a community-led governance reform to address scalability and sustainability concerns. Before the reform, data.table faced a growing backlog of unresolved issues and open pull requests, unclear contributor pathways, and bottlenecks caused by reliance on a single core maintainer. In response, the community initiated a redesign of its governance structure. In this paper, we evaluated the impact of this transition through a mixed-methods approach, combining a contributor survey (n=17) with mining project repository data. Our results show that following the reform, the project experienced a 200% increase in new contributor recruitment, a drop in pull request resolution time from over 700 days to under a week, and a 3x increase in contributor retention. Community sentiment improved around transparency, onboarding, and project momentum, though concerns around fairness and conflict resolution remain. This case study provides practical guidance for maintainers, companies, and foundations seeking to enhance OSS governance.
+ oai:arXiv.org:2601.13466v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ 10.1109/ICSME64153.2025.00067
+ Pedro Oliveira, Doris Amoakohene, Toby Hocking, Marco Gerosa, Igor Steinmacher
+
+
+ Preconditioning Benefits of Spectral Orthogonalization in Muon
+ https://arxiv.org/abs/2601.13474
+ arXiv:2601.13474v1 Announce Type: new
+Abstract: The Muon optimizer, a matrix-structured algorithm that leverages spectral orthogonalization of gradients, is a milestone in the pretraining of large language models. However, the underlying mechanisms of Muon -- particularly the role of gradient orthogonalization -- remain poorly understood, with very few works providing end-to-end analyses that rigorously explain its advantages in concrete applications. We take a step by studying the effectiveness of a simplified variant of Muon through two case studies: matrix factorization, and in-context learning of linear transformers. For both problems, we prove that simplified Muon converges linearly with iteration complexities independent of the relevant condition number, provably outperforming gradient descent and Adam. Our analysis reveals that the Muon dynamics decouple into a collection of independent scalar sequences in the spectral domain, each exhibiting similar convergence behavior. Our theory formalizes the preconditioning effect induced by spectral orthogonalization, offering insight into Muon's effectiveness in these matrix optimization problems and potentially beyond.
+ oai:arXiv.org:2601.13474v1
+ cs.LG
+ cs.AI
+ math.OC
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianhao Ma, Yu Huang, Yuejie Chi, Yuxin Chen
+
+
+ A Unified Variational Imputation Framework for Electric Vehicle Charging Data Using Retrieval-Augmented Language Model
+ https://arxiv.org/abs/2601.13476
+ arXiv:2601.13476v1 Announce Type: new
+Abstract: The reliability of data-driven applications in electric vehicle (EV) infrastructure, such as charging demand forecasting, hinges on the availability of complete, high-quality charging data. However, real-world EV datasets are often plagued by missing records, and existing imputation methods are ill-equipped for the complex, multimodal context of charging data, often relying on a restrictive one-model-per-station paradigm that ignores valuable inter-station correlations. To address these gaps, we develop a novel PRobabilistic variational imputation framework that leverages the power of large lAnguage models and retrIeval-augmented Memory (PRAIM). PRAIM employs a pre-trained language model to encode heterogeneous data, spanning time-series demand, calendar features, and geospatial context, into a unified, semantically rich representation. This is dynamically fortified by retrieval-augmented memory that retrieves relevant examples from the entire charging network, enabling a single, unified imputation model empowered by variational neural architecture to overcome data sparsity. Extensive experiments on four public datasets demonstrate that PRAIM significantly outperforms established baselines in both imputation accuracy and its ability to preserve the original data's statistical distribution, leading to substantial improvements in downstream forecasting performance.
+ oai:arXiv.org:2601.13476v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/TSG.2026.3656697
+ IEEE Transactions on Smart Grid, 2026
+ Jinhao Li, Hao Wang
+
+
+ Elias-type Bounds for Codes in the Symmetric Limited-Magnitude Error Channel
+ https://arxiv.org/abs/2601.13477
+ arXiv:2601.13477v1 Announce Type: new
+Abstract: We study perfect error-correcting codes in $\mathbb{Z}^n$ for the symmetric limited-magnitude error channel, where at most $e$ coordinates of an integer vector may be altered by a value whose magnitude is at most $s$. Geometrically, such codes correspond to tilings of $\mathbb{Z}^n$ by the symmetric limited-magnitude error ball $\mathcal{B}(n,e,s,s)$. Given $n$ and $s$, we adapt the geometric ideas underlying the Elias bound for the Hamming metric to the distance $d_s$ tailored to this channel, and derive new necessary conditions on $e$ for the existence of perfect codes / tilings, without assuming any lattice structure. Our main results identify two distinct regimes depending on the error magnitude. For small error magnitudes ($s \in \{1, 2\}$), we prove that if the number of correctable errors does not exceed a certain fraction of $n$, then it is asymptotically bounded by $e = \mathcal{O}(\sqrt{n \log n})$. In contrast, for larger magnitudes ($s \geq 3$), we establish a significantly sharper bound of $e < \sqrt{12.36n}$, which holds without any restriction on $e$ being below a given fraction of $n$. Finally, by extending our method to non-perfect codes, we derive an upper bound on packing density, showing that for codes correcting a linear or $\Omega(\sqrt{n})$ number of errors, the density is bounded by a factor inversely proportional to the error magnitude $s$.
+ oai:arXiv.org:2601.13477v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Zhihao Guan, Hengjia Wei
+
+
+ Exploring Learners' Expectations and Engagement When Collaborating with Constructively Controversial Peer Agents
+ https://arxiv.org/abs/2601.13479
+ arXiv:2601.13479v1 Announce Type: new
+Abstract: Peer agents can supplement real-time collaborative learning in asynchronous online courses. Constructive Controversy (CC) theory suggests that humans deepen their understanding of a topic by confronting and resolving controversies. This study explores whether CC's benefits apply to LLM-based peer agents, focusing on the impact of agents' disputatious behaviors and disclosure of agents' behavior designs on the learning process. In our mixed-method study (n=144), we compare LLMs that follow detailed CC guidelines (regulated) to those guided by broader goals (unregulated) and examine the effects of disclosing the agents' design to users (transparent vs. opaque). Findings show that learners' values influence their agent interaction: those valuing control appreciate unregulated agents' willingness to cease push-back upon request, while those valuing intellectual challenges favor regulated agents for stimulating creativity. Additionally, design transparency lowers learners' perception of agents' abilities. Our findings lay the foundation for designing effective collaborative peer agents in isolated educational settings.
+ oai:arXiv.org:2601.13479v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Thitaree Tanprasert, Young-ho Kim, Sidney Fels, Dongwook Yoon
+
+
+ Towards Efficient and Robust Linguistic Emotion Diagnosis for Mental Health via Multi-Agent Instruction Refinement
+ https://arxiv.org/abs/2601.13481
+ arXiv:2601.13481v1 Announce Type: new
+Abstract: Linguistic expressions of emotions such as depression, anxiety, and trauma-related states are pervasive in clinical notes, counseling dialogues, and online mental health communities, and accurate recognition of these emotions is essential for clinical triage, risk assessment, and timely intervention. Although large language models (LLMs) have demonstrated strong generalization ability in emotion analysis tasks, their diagnostic reliability in high-stakes, context-intensive medical settings remains highly sensitive to prompt design. Moreover, existing methods face two key challenges: emotional comorbidity, in which multiple intertwined emotional states complicate prediction, and inefficient exploration of clinically relevant cues. To address these challenges, we propose APOLO (Automated Prompt Optimization for Linguistic Emotion Diagnosis), a framework that systematically explores a broader and finer-grained prompt space to improve diagnostic efficiency and robustness. APOLO formulates instruction refinement as a Partially Observable Markov Decision Process and adopts a multi-agent collaboration mechanism involving Planner, Teacher, Critic, Student, and Target roles. Within this closed-loop framework, the Planner defines an optimization trajectory, while the Teacher-Critic-Student agents iteratively refine prompts to enhance reasoning stability and effectiveness, and the Target agent determines whether to continue optimization based on performance evaluation. Experimental results show that APOLO consistently improves diagnostic accuracy and robustness across domain-specific and stratified benchmarks, demonstrating a scalable and generalizable paradigm for trustworthy LLM applications in mental healthcare.
+ oai:arXiv.org:2601.13481v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jian Zhang, Zhangqi Wang, Zhiyuan Wang, Weiping Fu, Yu He, Haiping Zhu, Qika Lin, Jun Liu
+
+
+ Spectrum & RAN Sharing: A Measurement-based Case Study of Commercial 5G Networks in Spain
+ https://arxiv.org/abs/2601.13484
+ arXiv:2601.13484v1 Announce Type: new
+Abstract: Radio Access Network (RAN) sharing, which often also includes spectrum sharing, is a strategic cooperative agreement among two or more mobile operators, where one operator may use another's RAN infrastructure to provide mobile services to its users. By mutually sharing physical sites, radio elements, licensed spectrum and other parts of the RAN infrastructure, participating operators can significantly reduce the capital (and operational) expenditure in deploying and operating cellular networks, while accelerating coverage expansion -- thereby addressing the spectrum scarcity and infrastructure cost challenges in the 5G era and beyond. While the economic benefits of RAN sharing are well understood, the impact of such resource pooling on user-perceived performance remains underexplored, especially in real-world commercial deployments. We present, to the best of our knowledge, the first empirical measurement study of commercial 5G spectrum and RAN sharing. Our measurement study is unique in that, beyond identifying real-world instances of shared 5G spectrum and RAN deployment "in the wild", we also analyze users' perceived performance and its implications for Quality of Experience (QoE). Our study provides critical insights into resource management (i.e., pooling) and spectrum efficiency, offering a blueprint (and implications) for network evolution in 5G, 6G and beyond.
+ oai:arXiv.org:2601.13484v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rostand A. K. Fezeu, Lilian C. Freitas, Eman Ramadan, Jason Carpenter, Claudio Fiandrino, Joerg Widmer, Zhi-Li Zhang
+
+
+ The Hidden Toll of Social Media News: Causal Effects on Psychosocial Wellbeing
+ https://arxiv.org/abs/2601.13487
+ arXiv:2601.13487v1 Announce Type: new
+Abstract: News consumption on social media has become ubiquitous, yet how different forms of engagement shape psychosocial outcomes remains unclear. To address this gap, we leveraged a large-scale dataset of ~26M posts and ~45M comments on the BlueSky platform, and conducted a quasi-experimental study, matching 81,345 Treated users exposed to News feeds with 83,711 Control users using stratified propensity score analysis. We examined psychosocial wellbeing, in terms of affective, behavioral, and cognitive outcomes. Our findings reveal that news engagement produces systematic trade-offs: increased depression, stress, and anxiety, yet decreased loneliness and increased social interaction on the platform. Regression models reveal that News feed bookmarking is associated with greater psychosocial deterioration compared to commenting or quoting, with magnitude differences exceeding tenfold. These per-engagement effects accumulate with repeated exposure, showing significant psychosocial impacts. Our work extends theories of news effects beyond crisis-centric frameworks by demonstrating that routine consumption creates distinct psychological dynamics depending on engagement type, and bears implications for tools and interventions for mitigating the psychosocial costs of news consumption on social media.
+ oai:arXiv.org:2601.13487v1
+ cs.SI
+ cs.AI
+ cs.CL
+ cs.CY
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Olivia Pal, Agam Goyal, Eshwar Chandrasekharan, Koustuv Saha
+
+
+ Bridging the Gap Between Estimated and True Regret Towards Reliable Regret Estimation in Deep Learning based Mechanism Design
+ https://arxiv.org/abs/2601.13489
+ arXiv:2601.13489v1 Announce Type: new
+Abstract: Recent advances, such as RegretNet, ALGnet, RegretFormer and CITransNet, use deep learning to approximate optimal multi-item auctions by relaxing incentive compatibility (IC) and measuring its violation via ex post regret. However, the true accuracy of these regret estimates remains unclear. Computing exact regret is computationally intractable, and current models rely on gradient-based optimizers whose outcomes depend heavily on hyperparameter choices. Through extensive experiments, we reveal that existing methods systematically underestimate actual regret (in some models, the true regret is several hundred times larger than the reported regret), leading to overstated claims of IC and revenue. To address this issue, we derive a lower bound on regret and introduce an efficient item-wise regret approximation. Building on this, we propose a guided refinement procedure that substantially improves regret estimation accuracy while reducing computational cost. Our method provides a more reliable foundation for evaluating incentive compatibility in deep-learning-based auction mechanisms and highlights the need to reassess prior performance claims in this area.
+ oai:arXiv.org:2601.13489v1
+ cs.GT
+ cs.LG
+ econ.GN
+ q-fin.EC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shuyuan You, Zhiqiang Zhuang, Kewen Wang, Zhe Wang
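The ex post regret quantity this abstract revolves around can be made concrete with a tiny grid-search estimator. This is a generic sketch under illustrative assumptions (a single-bidder first-price-style rule with a reserve of 0.5 and the function names are ours, not the paper's learned auction models); it also hints at the paper's point, since a coarser misreport grid would underestimate the true regret.

```python
import numpy as np

def ex_post_regret(alloc, pay, true_v, misreports):
    """Ex post regret of a single bidder: the largest utility gain
    achievable by misreporting, holding everything else fixed."""
    truthful = true_v * alloc(true_v) - pay(true_v)
    best = max(true_v * alloc(b) - pay(b) for b in misreports)
    return max(0.0, best - truthful)

# Illustrative first-price-style rule with reserve 0.5: win above the
# reserve, pay your bid (not any mechanism from the paper).
alloc = lambda b: 1.0 if b >= 0.5 else 0.0
pay = lambda b: b if b >= 0.5 else 0.0

grid = np.linspace(0.0, 1.0, 101)   # candidate misreports
print(round(ex_post_regret(alloc, pay, 0.9, grid), 2))   # -> 0.4
```

Here a bidder with value 0.9 gains 0.4 in utility by shading the bid down to the reserve, so the rule is far from incentive compatible.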
+
+
+ Learning-Augmented Online TRP on a Line
+ https://arxiv.org/abs/2601.13494
+ arXiv:2601.13494v1 Announce Type: new
+Abstract: We study the online traveling repairperson problem on a line within the recently proposed learning-augmented framework, which provides predictions on the requests to be served via machine learning. In the original model (with no predictions), there is a stream of requests released over time along the line. The goal is to minimize the sum (or average) of the completion times of the requests. In the original model, the state-of-the-art lower bound on the competitive ratio of any deterministic algorithm is $1+\sqrt{2} > 2.414$, and the state-of-the-art upper bound achieved by a deterministic algorithm is 4. Our prediction model provides possibly error-prone predicted positions of each request in the stream a priori, but the arrival times of requests are not known until their arrival. We first establish a lower bound of 3 on the competitive ratio, which extends to the original model. We then design a deterministic algorithm that is $(2+\sqrt{3})\approx 3.732$-competitive when predictions are perfect. With imperfect predictions (maximum error $\delta > 0$), we show that our deterministic algorithm becomes $\min\{3.732+4\delta,4\}$-competitive when $\delta$ is known. To the best of our knowledge, these are the first results for the online traveling repairperson problem in the learning-augmented framework.
+ oai:arXiv.org:2601.13494v1
+ cs.DS
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Swapnil Guragain, Gokarna Sharma
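The error-dependent guarantee quoted in this abstract composes into one simple expression: the prediction-aware ratio $(2+\sqrt{3})+4\delta$, capped at the prediction-free ratio of 4. A minimal sketch (the function name is ours):

```python
import math

def competitive_ratio(delta):
    """Competitive-ratio upper bound stated in the abstract:
    (2 + sqrt(3)) + 4*delta with known error delta, capped at the
    prediction-free deterministic ratio of 4."""
    assert delta >= 0
    return min(2 + math.sqrt(3) + 4 * delta, 4.0)

print(round(competitive_ratio(0.0), 3))   # perfect predictions: 3.732
print(competitive_ratio(1.0))             # large error: falls back to 4.0
```

So predictions only help while $\delta < (2 - \sqrt{3})/4 \approx 0.067$; beyond that the bound is no better than the prediction-free one.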
+
+
+ RASC: Enhancing Observability & Programmability in Smart Spaces
+ https://arxiv.org/abs/2601.13496
+ arXiv:2601.13496v1 Announce Type: new
+Abstract: While RPCs form the bedrock of systems stacks, we posit that IoT device collections in smart spaces like homes, warehouses, and office buildings--which are all "user-facing"--require a more expressive abstraction. Orthogonal to prior work, which improved the reliability of IoT communication, our work focuses on improving the observability and programmability of IoT actions. We present the RASC (Request-Acknowledge-Start-Complete) abstraction, which provides acknowledgments at critical points after an IoT device action is initiated. RASC is a better fit for IoT actions, which naturally vary in length spatially (across devices) and temporally (across time, for a given device). RASC also enables the design of several new features: predicting action completion times accurately, detecting failures of actions faster, allowing fine-grained dependencies in programming, and scheduling. RASC is intended to be implemented atop today's available RPC mechanisms, rather than as a replacement. We integrated RASC into a popular and open-source IoT framework called Home Assistant. Our trace-driven evaluation finds that RASC meets latency SLOs, especially for long actions that last O(mins), which are common in smart spaces. Our scheduling policies for home automations (e.g., routines) outperform state-of-the-art counterparts by 10%-55%.
+ oai:arXiv.org:2601.13496v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Anna Karanika, Kai-Siang Wang, Han-Ting Liang, Shalni Sundram, Indranil Gupta
+
+
+ Optical Linear Systems Framework for Event Sensing and Computational Neuromorphic Imaging
+ https://arxiv.org/abs/2601.13498
+ arXiv:2601.13498v1 Announce Type: new
+Abstract: Event vision sensors (neuromorphic cameras) output sparse, asynchronous ON/OFF events triggered by log-intensity threshold crossings, enabling microsecond-scale sensing with high dynamic range and low data bandwidth. As a nonlinear system, this event representation does not readily integrate with the linear forward models that underpin most computational imaging and optical system design. We present a physics-grounded processing pipeline that maps event streams to estimates of per-pixel log-intensity and intensity derivatives, and embeds these measurements in a dynamic linear systems model with a time-varying point spread function. This enables inverse filtering directly from event data, using frequency-domain Wiener deconvolution with a known (or parameterised) dynamic transfer function. We validate the approach in simulation for single and overlapping point sources under modulated defocus, and on real event data from a tunable-focus telescope imaging a star field, demonstrating source localisation and separability. The proposed framework provides a practical bridge between event sensing and model-based computational imaging for dynamic optical systems.
+ oai:arXiv.org:2601.13498v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nimrod Kruger, Nicholas Owen Ralph, Gregory Cohen, Paul Hurley
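The frequency-domain Wiener deconvolution step this abstract describes can be sketched in a few lines. The 1-D toy scene, Gaussian PSF, and SNR value below are illustrative assumptions, not the paper's event-camera or telescope model.

```python
import numpy as np

def wiener_deconvolve(y, h, snr):
    """Frequency-domain Wiener deconvolution of a 1-D signal.

    y:   blurred observation (e.g., log-intensity reconstructed from events)
    h:   known point spread function, shifted so its peak sits at index 0
    snr: assumed signal-to-noise power ratio (scalar)
    """
    H = np.fft.fft(h)
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(G * np.fft.fft(y)))

# Toy scene: two nearby point sources blurred by a Gaussian PSF (sigma = 4 px)
n = 256
x = np.zeros(n); x[100] = 1.0; x[110] = 0.8
t = np.arange(n)
psf = np.exp(-0.5 * ((t - n // 2) / 4.0) ** 2)
psf /= psf.sum()
h = np.fft.ifftshift(psf)        # move the PSF peak to index 0
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
x_hat = wiener_deconvolve(y, h, snr=1e4)
print(int(np.argmax(x_hat)))     # strongest source recovered near index 100
```

The regularizing $1/\mathrm{SNR}$ term keeps the filter bounded where $|H|$ is tiny, which is what makes inversion stable for a defocused, time-varying PSF.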
+
+
+ Concurrent Permissive Strategy Templates
+ https://arxiv.org/abs/2601.13500
+ arXiv:2601.13500v1 Announce Type: new
+Abstract: Two-player games on finite graphs provide a rigorous foundation for modeling the strategic interaction between reactive systems and their environment. While concurrent game semantics naturally capture the synchronous interactions characteristic of many cyber-physical systems (CPS), their adoption in CPS design remains limited. Building on the concept of permissive strategy templates (PeSTels) for turn-based games, we introduce concurrent (permissive) strategy templates (ConSTels) -- a novel representation for sets of randomized winning strategies in concurrent games with Safety, B\"uchi, and Co-B\"uchi objectives. ConSTels compactly encode infinite families of strategies, thereby supporting both offline and online adaptation. Offline, we exploit compositionality to enable incremental synthesis: combining ConSTels for simpler objectives into non-conflicting templates for more complex combined objectives. Online, we demonstrate how ConSTels facilitate runtime adaptation, adjusting action probabilities in response to observed opponent behavior to optimize performance while preserving correctness. We implemented ConSTel synthesis and adaptation in a prototype tool and experimentally show its potential.
+ oai:arXiv.org:2601.13500v1
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ashwani Anand, Christel Baier, Calvin Chau, Sascha Kl\"uppelholz, Ali Mirzaei, Satya Prakash Nayak, Anne-Kathrin Schmuck
+
+
+ Modeling Perpetrators' Fate-to-Fate Contagion in Public Mass Shootings In The United States Using Bivariate Hawkes Processes
+ https://arxiv.org/abs/2601.13501
+ arXiv:2601.13501v1 Announce Type: new
+Abstract: This study examines how the fate of a perpetrator in a public mass shooting influences the fate of subsequent perpetrators. Drawing on data from 1966 to 2024, we classify incidents according to whether the perpetrator died at the scene or survived the attack. Using a bivariate Hawkes process, we quantify the cross-excitation effect, which is the triggering effect that each event type exerts on the other, i.e., "die at the scene"$\rightarrow$ "live" and "live"$\rightarrow$ "die at the scene", as well as the self-excitation effects, i.e., "die at the scene"$\rightarrow$ "die at the scene" and "live"$\rightarrow$ "live". Our results show that the strongest spillover was from "live" incidents to "die at the scene", where we estimate that 0.34 (0.09, 0.80) of "die at the scene" incidents are triggered by a prior event in which the offender survived the attack. This pathway also exhibits the longest estimated contagion timescale: approximately 20 days. In contrast, the reverse influence, that is, "die at the scene"$\rightarrow$"live", is not statistically significant, with the lower bound of its 95% confidence interval nearly equal to zero. We also find that "die at the scene" events only trigger events of their own type, where 0.139 (0.01, 0.52) of such incidents are caused by previous "die at the scene" events, with the shortest contagion timescale of roughly 20 hours.
+ oai:arXiv.org:2601.13501v1
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Youness Diouane, James Silver
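The bivariate Hawkes model in this abstract can be sketched via its conditional intensity with exponential kernels. The 0.34 and 0.139 branching ratios and the 20-day / 20-hour timescales echo the abstract; the baseline rates, the live$\rightarrow$live entry, and the event history are made-up illustrative values.

```python
import numpy as np

def hawkes_intensity(t, history, mu, alpha, beta):
    """Conditional intensity of a bivariate Hawkes process at time t (days).

    history: list of (event_time, event_type), event_type in {0, 1}
    mu:      length-2 baseline rates
    alpha:   (2, 2) branching ratios; alpha[i, j] is the expected number
             of type-i events triggered by one type-j event
    beta:    (2, 2) decay rates (inverse contagion timescales)
    """
    lam = np.array(mu, dtype=float)
    for s, j in history:
        if s < t:
            # exponential kernel alpha*beta*exp(-beta*dt) integrates to alpha
            lam += alpha[:, j] * beta[:, j] * np.exp(-beta[:, j] * (t - s))
    return lam

# Type 0 = "die at the scene", type 1 = "live".
mu = [0.01, 0.02]                         # illustrative baselines
alpha = np.array([[0.139, 0.34],          # row i, col j: j -> i spillover
                  [0.0,   0.10]])         # live->live entry is illustrative
beta = np.array([[24 / 20.0, 1 / 20.0],   # per day: ~20 h and ~20 d scales
                 [1.0,       1.0]])
history = [(0.0, 1), (5.0, 0)]
print(hawkes_intensity(10.0, history, mu, alpha, beta))
```

Each past event lifts the intensity of the types it can trigger, and the lift decays on the corresponding contagion timescale, which is exactly how the paper reads off the "live"$\rightarrow$"die at the scene" pathway.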
+
+
+ DIS2: Disentanglement Meets Distillation with Classwise Attention for Robust Remote Sensing Segmentation under Missing Modalities
+ https://arxiv.org/abs/2601.13502
+ arXiv:2601.13502v1 Announce Type: new
+Abstract: The efficacy of multimodal learning in remote sensing (RS) is severely undermined by missing modalities. The challenge is exacerbated by the highly heterogeneous data and huge scale variation in RS. Consequently, paradigms proven effective in other domains often fail when confronted with these unique data characteristics. Conventional disentanglement learning, which relies on significant feature overlap between modalities (modality-invariant), is insufficient for this heterogeneity. Similarly, knowledge distillation becomes an ill-posed mimicry task in which the student fails to focus on the necessary compensatory knowledge, leaving the semantic gap unaddressed. Our work is therefore built upon three pillars uniquely designed for RS: (1) principled missing-information compensation, (2) class-specific modality contribution, and (3) multi-resolution feature importance. We propose DIS2, a novel method that shifts from modality-shared feature dependence and untargeted imitation to active, guided compensation of missing features. Its core novelty lies in a reformulated synergy between disentanglement learning and knowledge distillation, termed DLKD. Compensatory features are explicitly captured which, when fused with the features of the available modality, approximate the ideal fused representation of the full-modality case. To address the class-specific challenge, our Classwise Feature Learning Module (CFLM) adaptively learns discriminative evidence for each target depending on signal availability. Both DLKD and CFLM are supported by a hierarchical hybrid fusion (HF) structure using features across resolutions to strengthen prediction. Extensive experiments validate that our proposed approach significantly outperforms state-of-the-art methods across benchmarks.
+ oai:arXiv.org:2601.13502v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Nhi Kieu, Kien Nguyen, Arnold Wiliem, Clinton Fookes, Sridha Sridharan
+
+
+ Anonpsy: A Graph-Based Framework for Structure-Preserving De-identification of Psychiatric Narratives
+ https://arxiv.org/abs/2601.13503
+ arXiv:2601.13503v1 Announce Type: new
+Abstract: Psychiatric narratives encode patient identity not only through explicit identifiers but also through idiosyncratic life events embedded in their clinical structure. Existing de-identification approaches, including PHI masking and LLM-based synthetic rewriting, operate at the text level and offer limited control over which semantic elements are preserved or altered. We introduce Anonpsy, a de-identification framework that reformulates the task as graph-guided semantic rewriting. Anonpsy (1) converts each narrative into a semantic graph encoding clinical entities, temporal anchors, and typed relations; (2) applies graph-constrained perturbations that modify identifying context while preserving clinically essential structure; and (3) regenerates text via graph-conditioned LLM generation. Evaluated on 90 clinician-authored psychiatric case narratives, Anonpsy preserves diagnostic fidelity while achieving consistently low re-identification risk under expert, semantic, and GPT-5-based evaluations. Compared with a strong LLM-only rewriting baseline, Anonpsy yields substantially lower semantic similarity and identifiability. These results demonstrate that explicit structural representations combined with constrained generation provide an effective approach to de-identification for psychiatric narratives.
+ oai:arXiv.org:2601.13503v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kyung Ho Lim, Byung-Hoon Kim
+
+
+ Integrating Vision-Centric Text Understanding for Conversational Recommender Systems
+ https://arxiv.org/abs/2601.13505
+ arXiv:2601.13505v1 Announce Type: new
+Abstract: Conversational Recommender Systems (CRSs) have attracted growing attention for their ability to deliver personalized recommendations through natural language interactions. To more accurately infer user preferences from multi-turn conversations, recent works increasingly expand conversational context (e.g., by incorporating diverse entity information or retrieving related dialogues). While such context enrichment can assist preference modeling, it also introduces longer and more heterogeneous inputs, leading to practical issues such as input length constraints, text style inconsistency, and irrelevant textual noise, thereby raising the demand for stronger language understanding ability. In this paper, we propose STARCRS, a Screen-Text-AwaRe Conversational Recommender System that integrates two complementary text understanding modes: (1) a screen-reading pathway that encodes auxiliary textual information as visual tokens, mimicking skim reading on a screen, and (2) an LLM-based textual pathway that focuses on a limited set of critical content for fine-grained reasoning. We design a knowledge-anchored fusion framework that combines contrastive alignment, cross-attention interaction, and adaptive gating to integrate the two modes for improved preference modeling and response generation. Extensive experiments on two widely used benchmarks demonstrate that STARCRS consistently improves both recommendation accuracy and generated response quality.
+ oai:arXiv.org:2601.13505v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Wei Yuan, Shutong Qiao, Tong Chen, Quoc Viet Hung Nguyen, Zi Huang, Hongzhi Yin
+
+
+ Group Relative Policy Optimization for Robust Blind Interference Alignment with Fluid Antennas
+ https://arxiv.org/abs/2601.13506
+ arXiv:2601.13506v1 Announce Type: new
+Abstract: The fluid antenna system (FAS) leverages dynamic reconfigurability to unlock spatial degrees of freedom and reshape wireless channels. This paper proposes, for the first time, a robust fluid antenna-driven blind interference alignment (BIA) framework for a K-user MISO downlink under imperfect channel state information (CSI). We formulate a robust sum-rate maximization problem by optimizing fluid antenna positions. To solve this challenging non-convex problem, we employ group relative policy optimization (GRPO), a novel deep reinforcement learning algorithm that eliminates the critic network. This critic-free design reduces model size and floating point operations (FLOPs) by nearly half compared to proximal policy optimization (PPO) while significantly enhancing performance through group-based exploration that escapes bad local optima. Simulation results demonstrate that GRPO outperforms PPO by 4.17%, and a 100K-step pre-trained PPO by 30.29%. Due to error distribution learning, GRPO exceeds heuristic MaximumGain and RandomGain by 200.78% and 465.38%, respectively.
+ oai:arXiv.org:2601.13506v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianqiu Peng, Tong Zhang, Shuai Wang, Mingjie Shao, Hao Xu, Rui Wang
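The critic-free idea behind GRPO can be illustrated with its core step: advantages computed relative to a group of sampled actions rather than a learned value baseline. This is a generic sketch, not the authors' implementation; the sum-rate rewards are made-up numbers.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages: standardize each sampled action's reward
    against the mean and std of its own group, removing the need for a
    learned critic/value network."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# One group of candidate antenna-position actions sampled for a CSI state;
# rewards would be the achievable sum rates (values here are illustrative).
group_rewards = [1.2, 0.8, 1.5, 0.5]
adv = grpo_advantages(group_rewards)
print(np.round(adv, 3))
```

The policy gradient then weights each sampled action by its group-relative advantage, which is where the halving of model size and FLOPs relative to PPO comes from: no critic network needs to be stored or evaluated.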
+
+
+ Event Classification by Physics-informed Inpainting for Distributed Multichannel Acoustic Sensor with Partially Degraded Channels
+ https://arxiv.org/abs/2601.13513
+ arXiv:2601.13513v1 Announce Type: new
+Abstract: Distributed multichannel acoustic sensing (DMAS) enables large-scale sound event classification (SEC), but performance drops when many channels are degraded and when sensor layouts at test time differ from training layouts. We propose a learning-free, physics-informed inpainting frontend based on reverse time migration (RTM). In this approach, observed multichannel spectrograms are first back-propagated on a 3D grid using an analytic Green's function to form a scene-consistent image, and then forward-projected to reconstruct inpainted signals before log-mel feature extraction and Transformer-based classification. We evaluate the method on ESC-50 with 50 sensors and three layouts (circular, linear, right-angle), where per-channel SNRs are sampled from -30 to 0 dB. Compared with an AST baseline, scaling-sparsemax channel selection, and channel-swap augmentation, the proposed RTM frontend achieves the best or competitive accuracy across all layouts, improving accuracy by 13.1 points on the right-angle layout (from 9.7% to 22.8%). Correlation analyses show that spatial weights align more strongly with SNR than with channel--source distance, and that higher SNR--weight correlation corresponds to higher SEC accuracy. These results demonstrate that a reconstruct-then-project, physics-based preprocessing effectively complements learning-only methods for DMAS under layout-open configurations and severe channel degradation.
+ oai:arXiv.org:2601.13513v1
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Noriyuki Tonami, Wataru Kohno, Yoshiyuki Yajima, Sakiko Mishima, Yumi Arai, Reishi Kondo, Tomoyuki Hino
+
+
+ Automatic Adjustment of HPA Parameters and Attack Prevention in Kubernetes Using Random Forests
+ https://arxiv.org/abs/2601.13515
+ arXiv:2601.13515v1 Announce Type: new
+Abstract: In this paper, HTTP status codes are used as custom metrics within the Horizontal Pod Autoscaler (HPA) in the experimental scenario. By integrating the Random Forest classification algorithm from machine learning, attacks are assessed and predicted, and the maximum-pod parameter of the HPA is adjusted dynamically to manage attack traffic. This approach enables the adjustment of HPA parameters using machine learning scripts in targeted attack scenarios while effectively managing attack traffic. All access from attacking IPs is redirected to honeypot pods, achieving a lower incidence of 5XX status codes through HPA pod adjustments under high-load conditions. This method also ensures effective isolation of attack traffic, preventing excessive HPA expansion due to attacks. Additionally, experiments conducted under various conditions demonstrate the importance of setting appropriate thresholds for HPA adjustments.
+ oai:arXiv.org:2601.13515v1
+ cs.CR
+ cs.AI
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3704304.3704320
+ Hanlin Zhou, Huah Yong Chan, Jingfei Ni, Mengchun Wu, Qing Deng
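The classify-then-cap-the-HPA loop described here can be sketched with scikit-learn's `RandomForestClassifier`. Everything below is a hypothetical stand-in: the feature layout (per-window 2XX/4XX/5XX counts plus request rate), the synthetic training data, and the `max_replicas` helper are ours, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature layout per time window: counts of 2XX, 4XX, 5XX
# responses and the request rate; label 1 = attack, 0 = normal (synthetic).
rng = np.random.default_rng(42)
normal = np.column_stack([rng.poisson(100, 200), rng.poisson(5, 200),
                          rng.poisson(1, 200), rng.normal(100, 10, 200)])
attack = np.column_stack([rng.poisson(100, 200), rng.poisson(80, 200),
                          rng.poisson(40, 200), rng.normal(900, 50, 200)])
X = np.vstack([normal, attack])
y = np.array([0] * 200 + [1] * 200)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def max_replicas(window, base=10, cap=3):
    """Clamp the HPA maxReplicas for windows the forest flags as attacks,
    so attack traffic cannot force excessive scale-out."""
    return cap if clf.predict(window.reshape(1, -1))[0] == 1 else base

print(max_replicas(np.array([100, 5, 1, 100.0])))   # normal-looking window
```

In a real deployment the returned value would be patched into the HPA spec via the Kubernetes API, while flagged source IPs are routed to honeypot pods as the abstract describes.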
+
+
+ From "Fail Fast" to "Mature Safely:" Expert Perspectives as Secondary Stakeholders on Teen-Centered Social Media Risk Detection
+ https://arxiv.org/abs/2601.13516
+ arXiv:2601.13516v1 Announce Type: new
+Abstract: In addressing various risks on social media, the HCI community has advocated for teen-centered risk detection technologies over platform-based, parent-centered features. However, their real-world viability remains underexplored by secondary stakeholders beyond the family unit. Therefore, we present an evaluation of a teen-centered social media risk detection dashboard through online interviews with 33 online safety experts. While experts praised our dashboard's clear design for teen agency, their feedback revealed five primary tensions in implementing and sustaining such technology: objective vs. context-dependent risk definition, informing risks vs. meaningful intervention, teen empowerment vs. motivation, need for data vs. data privacy, and independence vs. sustainability. These findings motivate us to rethink "teen-centered" and a shift from a "fail fast" to a "mature safely" paradigm for youth safety technology innovation. We offer design implications for addressing these tensions before system deployment with teens and strategies for aligning secondary stakeholders' interests to deploy and sustain such technologies in the broader ecosystem of youth online safety.
+ oai:arXiv.org:2601.13516v1
+ cs.HC
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renkai Ma, Ashwaq Alsoubai, Jinkyung Katie Park, Pamela J. Wisniewski
+
+
+ AgenticRed: Optimizing Agentic Systems for Automated Red-teaming
+ https://arxiv.org/abs/2601.13518
+ arXiv:2601.13518v1 Announce Type: new
+Abstract: While recent automated red-teaming methods show promise for systematically exposing model vulnerabilities, most existing approaches rely on human-specified workflows. This dependence on manually designed workflows suffers from human biases and makes exploring the broader design space expensive. We introduce AgenticRed, an automated pipeline that leverages LLMs' in-context learning to iteratively design and refine red-teaming systems without human intervention. Rather than optimizing attacker policies within predefined structures, AgenticRed treats red-teaming as a system design problem. Inspired by methods like Meta Agent Search, we develop a novel procedure for evolving agentic systems using evolutionary selection, and apply it to the problem of automatic red-teaming. Red-teaming systems designed by AgenticRed consistently outperform state-of-the-art approaches, achieving 96% attack success rate (ASR) on Llama-2-7B (36% improvement) and 98% on Llama-3-8B on HarmBench. Our approach exhibits strong transferability to proprietary models, achieving 100% ASR on GPT-3.5-Turbo and GPT-4o-mini, and 60% on Claude-Sonnet-3.5 (24% improvement). This work highlights automated system design as a powerful paradigm for AI safety evaluation that can keep pace with rapidly evolving models.
+ oai:arXiv.org:2601.13518v1
+ cs.AI
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jiayi Yuan, Jonathan N\"other, Natasha Jaques, Goran Radanovi\'c
+
+
+ Sticky Help, Bounded Effects: Session-by-Session Analytics of Teacher Interventions in K-12 Classrooms
+ https://arxiv.org/abs/2601.13520
+ arXiv:2601.13520v1 Announce Type: new
+Abstract: Teachers' in-the-moment support is a limited resource in technology-supported classrooms, and teachers must decide whom to help and when during ongoing student work. However, less is known about how students' prior help history (whether they were helped earlier) and their engagement states (e.g., idle, struggle) shape teachers' decisions, and whether observed learning benefits associated with teacher help extend beyond the current class session. To address these questions, we first conducted interviews with nine K-12 mathematics teachers to identify candidate decision factors for teacher help. We then analyzed 1.4 million student-system interactions from 339 students across 14 classes in the MATHia intelligent tutoring system by linking teacher-logged help events with fine-grained engagement states. Mixed-effects models show that students who received help earlier were more likely to receive additional help later, even after accounting for current engagement state. Cross-lagged panel analyses further show that teacher help recurred across sessions, whereas idle behavior did not receive sustained attention over time. Finally, help coincided with immediate learning within sessions, but did not predict skill acquisition in later sessions, as estimated by additive factor modeling. These findings suggest that teacher help is "sticky" in that it recurs for previously supported students, while its measurable learning benefits in our data are largely session-bound. We discuss implications for designing real-time analytics that track attention coverage and highlight under-visited students to support a more equitable and effective allocation of teacher attention.
+ oai:arXiv.org:2601.13520v1
+ cs.CY
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1145/3785022.3785128
+ Qiao Jin, Conrad Borchers, Ashish Gurung, Sean Jackson, Sameeksha Agarwal, Cancan Wang, YiChen Yu, Pragati Maheshwary, Vincent Aleven
+
+
+ StoTAM: Stochastic Alternating Minimization for Tucker-Structured Tensor Sensing
+ https://arxiv.org/abs/2601.13522
+ arXiv:2601.13522v1 Announce Type: new
+Abstract: Low-rank tensor sensing is a fundamental problem with broad applications in signal processing and machine learning. Among various tensor models, low-Tucker-rank tensors are particularly attractive for capturing multi-mode subspace structures in high-dimensional data. Existing recovery methods either operate on the full tensor variable with expensive tensor projections, or adopt factorized formulations that still rely on full-gradient computations, while most stochastic factorized approaches are restricted to tensor decomposition settings. In this work, we propose a stochastic alternating minimization algorithm that operates directly on the core tensor and factor matrices under a Tucker factorization. The proposed method avoids repeated tensor projections and enables efficient mini-batch updates on low-dimensional tensor factors. Numerical experiments on synthetic tensor sensing demonstrate that the proposed algorithm exhibits favorable convergence behavior in wall-clock time compared with representative stochastic tensor recovery baselines.
+ oai:arXiv.org:2601.13522v1
+ cs.LG
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Shuang Li
+
+
+ GO-MLVTON: Garment Occlusion-Aware Multi-Layer Virtual Try-On with Diffusion Models
+ https://arxiv.org/abs/2601.13524
+ arXiv:2601.13524v1 Announce Type: new
+Abstract: Existing image-based virtual try-on (VTON) methods primarily focus on single-layer or multi-garment VTON, neglecting multi-layer VTON (ML-VTON), which involves dressing multiple layers of garments onto the human body with realistic deformation and layering to generate visually plausible outcomes. The main challenge lies in accurately modeling occlusion relationships between inner and outer garments to reduce interference from redundant inner-garment features. To address this, we propose GO-MLVTON, the first multi-layer VTON method, introducing the Garment Occlusion Learning module to learn occlusion relationships and the StableDiffusion-based Garment Morphing & Fitting module to deform and fit garments onto the human body, producing high-quality multi-layer try-on results. Additionally, we present the MLG dataset for this task and propose a new metric named Layered Appearance Coherence Difference (LACD) for evaluation. Extensive experiments demonstrate the state-of-the-art performance of GO-MLVTON. Project page: https://upyuyang.github.io/go-mlvton/.
+ oai:arXiv.org:2601.13524v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yang Yu, Yunze Deng, Yige Zhang, Yanjie Xiao, Youkun Ou, Wenhao Hu, Mingchao Li, Bin Feng, Wenyu Liu, Dandan Zheng, Jingdong Chen
+
+
+ More Than Efficiency: Embedding Compression Improves Domain Adaptation in Dense Retrieval
+ https://arxiv.org/abs/2601.13525
+ arXiv:2601.13525v1 Announce Type: new
+Abstract: Dense retrievers powered by pretrained embeddings are widely used for document retrieval but struggle in specialized domains due to the mismatches between the training and target domain distributions. Domain adaptation typically requires costly annotation and retraining of query-document pairs. In this work, we revisit an overlooked alternative: applying PCA to domain embeddings to derive lower-dimensional representations that preserve domain-relevant features while discarding non-discriminative components. Though traditionally used for efficiency, we demonstrate that this simple embedding compression can effectively improve retrieval performance. Evaluated across 9 retrievers and 14 MTEB datasets, PCA applied solely to query embeddings improves NDCG@10 in 75.4% of model-dataset pairs, offering a simple and lightweight method for domain adaptation.
+ oai:arXiv.org:2601.13525v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chunsheng Zuo, Daniel Khashabi
+
+
+ Eliciting Harmful Capabilities by Fine-Tuning On Safeguarded Outputs
+ https://arxiv.org/abs/2601.13528
+ arXiv:2601.13528v1 Announce Type: new
+Abstract: Model developers implement safeguards in frontier models to prevent misuse, for example, by employing classifiers to filter dangerous outputs. In this work, we demonstrate that even robustly safeguarded models can be used to elicit harmful capabilities in open-source models through elicitation attacks. Our elicitation attacks consist of three stages: (i) constructing prompts in adjacent domains to a target harmful task that do not request dangerous information; (ii) obtaining responses to these prompts from safeguarded frontier models; (iii) fine-tuning open-source models on these prompt-output pairs. Since the requested prompts cannot be used to directly cause harm, they are not refused by frontier model safeguards. We evaluate these elicitation attacks within the domain of hazardous chemical synthesis and processing, and demonstrate that our attacks recover approximately 40% of the capability gap between the base open-source model and an unrestricted frontier model. We then show that the efficacy of elicitation attacks scales with the capability of the frontier model and the amount of generated fine-tuning data. Our work demonstrates the challenge of mitigating ecosystem level risks with output-level safeguards.
+ oai:arXiv.org:2601.13528v1
+ cs.CR
+ cs.AI
+ cs.CL
+ cs.LG
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jackson Kaunismaa, Avery Griffin, John Hughes, Christina Q. Knight, Mrinank Sharma, Erik Jones
+
+
+ The OncoReach Stylet for Brachytherapy: Design Evaluation and Pilot Study
+ https://arxiv.org/abs/2601.13529
+ arXiv:2601.13529v1 Announce Type: new
+Abstract: Cervical cancer accounts for a significant portion of the global cancer burden among women. Interstitial brachytherapy (ISBT) is a standard procedure for treating cervical cancer; it involves placing a radioactive source through a straight hollow needle within or in close proximity to the tumor and surrounding tissue. However, the use of straight needles limits surgical planning to a linear needle path. We present the OncoReach stylet, a handheld, tendon-driven steerable stylet designed for compatibility with standard ISBT 15- and 13-gauge needles. Building upon our prior work, we evaluated design parameters like needle gauge, spherical joint count and spherical joint placement, including an asymmetric disk design to identify a configuration that maximizes bending compliance while retaining axial stiffness. Free space experiments quantified tip deflection across configurations, and a two-tube Cosserat rod model accurately predicted the centerline shape of the needle for most trials. The best performing configuration was integrated into a reusable handheld prototype that enables manual actuation. A patient-derived, multi-composite phantom model of the uterus and pelvis was developed to conduct a pilot study of the OncoReach steerable stylet with one expert user. Results showed the ability to steer from less-invasive, medial entry points to reach the lateral-most targets, underscoring the significance of steerable stylets.
+ oai:arXiv.org:2601.13529v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pejman Kheradmand, Kent K. Yamamoto, Emma Webster, Keith Sowards, Gianna Hatheway, Katharine L. Jackson, Sabino Zani Jr., Julie A. Raffi, Diandra N. Ayala-Peacock, Scott R. Silva, Joanna Deaton Bertram, Yash Chitalia
+
+
+ Reasoning While Recommending: Entropy-Guided Latent Reasoning in Generative Re-ranking Models
+ https://arxiv.org/abs/2601.13533
+ arXiv:2601.13533v1 Announce Type: new
+Abstract: Reinforcement learning plays a crucial role in generative re-ranking scenarios due to its exploration-exploitation capabilities, but existing generative methods mostly fail to adapt to the dynamic entropy changes in model difficulty during list generation, making it challenging to accurately capture complex preferences. Given that language models have achieved remarkable breakthroughs by integrating reasoning capabilities, we draw on this approach to introduce a latent reasoning mechanism, and experimental validation demonstrates that this mechanism effectively reduces entropy in the model's decision-making process. Based on these findings, we introduce the Entropy-Guided Latent Reasoning (EGLR) recommendation model, which has three core advantages. First, it abandons the "reason first, recommend later" paradigm to achieve "reasoning while recommending", specifically designed for the high-difficulty nature of list generation by enabling real-time reasoning during generation. Second, it implements entropy-guided variable-length reasoning using context-aware reasoning tokens alongside dynamic temperature adjustment, expanding exploration breadth in reasoning and boosting exploitation precision in recommending to achieve a more precisely adapted exploration-exploitation trade-off. Third, the model adopts a lightweight integration design with no complex independent modules or post-processing, enabling easy adaptation to existing models. Experimental results on two real-world datasets validate the model's effectiveness, and its notable advantage lies in being compatible with existing generative re-ranking models to enhance their performance. Further analyses also demonstrate its practical deployment value and research potential.
+ oai:arXiv.org:2601.13533v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Changshuo Zhang
+
+
+ MN-TSG: Continuous Time Series Generation with Irregular Observations
+ https://arxiv.org/abs/2601.13534
+ arXiv:2601.13534v1 Announce Type: new
+Abstract: Time series generation (TSG) plays a critical role in a wide range of domains, such as healthcare. However, most existing methods assume regularly sampled observations and fixed output resolutions, which are often misaligned with real-world scenarios where data are irregularly sampled and sparsely observed. This mismatch is particularly problematic in applications such as clinical monitoring, where irregular measurements must support downstream tasks requiring continuous and high-resolution time series.
+ Neural Controlled Differential Equations (NCDEs) have shown strong potential for modeling irregular time series, yet they still face challenges in capturing complex dynamic temporal patterns and supporting continuous TSG. To address these limitations, we propose MN-TSG, a novel framework that explores Mixture-of-Experts (MoE)-based NCDEs and integrates them with existing TSG models for irregular and continuous generation tasks.
+ The core of MN-TSG lies in a MoE-NCDE architecture with dynamically parameterized expert functions and a decoupled design that facilitates more effective optimization of MoE dynamics. Furthermore, we leverage existing TSG models to learn the joint distribution over the mixture of experts and the generated time series. This enables the framework not only to generate new samples, but also to produce appropriate expert configurations tailored to each sample, thereby supporting refined continuous TSG.
+ Extensive experiments on ten public and synthetic datasets demonstrate the effectiveness of MN-TSG, consistently outperforming strong TSG baselines on both irregular-to-regular and irregular-to-continuous generation tasks.
+ oai:arXiv.org:2601.13534v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xu Zhang, Junwei Deng, Chang Xu, Hao Li, Jiang Bian
+
+
+ Sparse Identification of Nonlinear Distributed-Delay Dynamics via the Linear Chain Trick
+ https://arxiv.org/abs/2601.13536
+ arXiv:2601.13536v1 Announce Type: new
+Abstract: The Sparse Identification of Nonlinear Dynamics (SINDy) framework has been frequently used to discover parsimonious differential equations governing natural and physical systems. This includes recent extensions to SINDy that enable the recovery of discrete delay differential equations, where delay terms are represented explicitly in the candidate library. However, such formulations cannot capture the distributed delays that naturally arise in biological, physical, and engineering systems. In the present work, we extend SINDy to identify distributed-delay differential equations by incorporating the Linear Chain Trick (LCT), which provides a finite-dimensional ordinary differential equation representation of the distributed memory effects. Hence, SINDy can operate in an augmented state space using conventional sparse regression while preserving a clear interpretation of delayed influences via the chain trick. From time-series data, the proposed method jointly infers the governing equations, the mean delay, and the dispersion of the underlying delay distribution. We numerically verify the method on several models with distributed delay, including the logistic growth model and a Hes1--mRNA gene regulatory network model. We show that the proposed method accurately reconstructs distributed delay dynamics, remains robust under noise and sparse sampling, and provides a transparent, data-driven approach for discovering nonlinear systems with distributed delays.
+ oai:arXiv.org:2601.13536v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohammed Alanazi, Majid Bani-Yaghoub
+
+
+ When Wording Steers the Evaluation: Framing Bias in LLM judges
+ https://arxiv.org/abs/2601.13537
+ arXiv:2601.13537v1 Announce Type: new
+Abstract: Large language models (LLMs) are known to produce varying responses depending on prompt phrasing, indicating that subtle guidance in phrasing can steer their answers. However, the impact of this framing bias on LLM-based evaluation, where models are expected to make stable and impartial judgments, remains largely underexplored. Drawing inspiration from the framing effect in psychology, we systematically investigate how deliberate prompt framing skews model judgments across four high-stakes evaluation tasks. We design symmetric prompts using predicate-positive and predicate-negative constructions and demonstrate that such framing induces significant discrepancies in model outputs. Across 14 LLM judges, we observe clear susceptibility to framing, with model families showing distinct tendencies toward agreement or rejection. These findings suggest that framing bias is a structural property of current LLM-based evaluation systems, underscoring the need for framing-aware protocols.
+ oai:arXiv.org:2601.13537v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yerin Hwang, Dongryeol Lee, Taegwan Kang, Minwoo Lee, Kyomin Jung
+
+
+ LongSpeech: A Scalable Benchmark for Transcription, Translation and Understanding in Long Speech
+ https://arxiv.org/abs/2601.13539
+ arXiv:2601.13539v1 Announce Type: new
+Abstract: Recent advances in audio-language models have demonstrated remarkable success on short, segment-level speech tasks. However, real-world applications such as meeting transcription, spoken document understanding, and conversational analysis require robust models capable of processing and reasoning over long-form audio. In this work, we present LongSpeech, a large-scale and scalable benchmark specifically designed to evaluate and advance the capabilities of speech models on long-duration audio. LongSpeech comprises over 100,000 speech segments, each approximately 10 minutes long, with rich annotations for ASR, speech translation, summarization, language detection, speaker counting, content separation, and question answering. We introduce a reproducible pipeline for constructing long-form speech benchmarks from diverse sources, enabling future extensions. Our initial experiments with state-of-the-art models reveal significant performance gaps, with models often specializing in one task at the expense of others and struggling with higher-level reasoning. These findings underscore the challenging nature of our benchmark. Our benchmark will be made publicly available to the research community.
+ oai:arXiv.org:2601.13539v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Fei Yang, Xuanfan Ni, Renyi Yang, Jiahui Geng, Qing Li, Chenyang Lyu, Yichao Du, Longyue Wang, Weihua Luo, Kaifu Zhang
+
+
+ A hybrid numerical method for a microscopic and macroscopic traffic flow model
+ https://arxiv.org/abs/2601.13541
+ arXiv:2601.13541v1 Announce Type: new
+Abstract: In this paper, we introduce a traffic flow model based on a microscopic follow-the-leader model, while enforcing maximal constraints on the density and velocity of the flow. The related macroscopic model can be represented in conservative form. By introducing a variable u+p advected with the flow, where p is the velocity offset and u is the relative velocity, we reformulate the classical Aw-Rascle-Zhang (ARZ) model and the modified Aw-Rascle model to describe realistic fundamental diagrams. The elementary waves are derived, and the Riemann problem is solved to validate the model's theoretical consistency. We further extend the model to two dimensions. Numerical simulations are given for both the one- and two-dimensional cases using the hybrid Godunov-Glimm scheme to verify the model's performance.
+ oai:arXiv.org:2601.13541v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuanhong Wu, Shuzhi Liu, Qinglong Zhang
+
+
+ TruthTensor: Evaluating LLMs Human Imitation through Prediction Market Drift and Holistic Reasoning
+ https://arxiv.org/abs/2601.13545
+ arXiv:2601.13545v1 Announce Type: new
+Abstract: Evaluating language models and AI agents remains fundamentally challenging because static benchmarks fail to capture real-world uncertainty, distribution shift, and the gap between isolated task accuracy and human-aligned decision-making under evolving conditions. This paper introduces TruthTensor, a novel, reproducible evaluation paradigm that measures Large Language Models (LLMs) not only as prediction engines but as human-imitation systems operating in socially-grounded, high-entropy environments. Building on forward-looking, contamination-free tasks, our framework anchors evaluation to live prediction markets and incorporates probabilistic scoring to provide a holistic view of model behavior. TruthTensor complements traditional correctness metrics with drift-centric diagnostics and explicit robustness checks for reproducibility. It specifies human vs. automated evaluation roles, annotation protocols, and statistical testing procedures to ensure interpretability and replicability of results. In experiments across 500+ real markets (political, economic, cultural, technological), TruthTensor demonstrates that models with similar forecast accuracy can diverge markedly in calibration, drift, and risk-sensitivity, underscoring the need to evaluate models along multiple axes (accuracy, calibration, narrative stability, cost, and resource efficiency). TruthTensor therefore operationalizes modern evaluation best practices (clear hypothesis framing, careful metric selection, transparent compute/cost reporting, human-in-the-loop validation, and open, versioned evaluation contracts) to produce defensible assessments of LLMs in real-world decision contexts. We publicly release TruthTensor at https://truthtensor.com
+ oai:arXiv.org:2601.13545v1
+ cs.AI
+ cs.ET
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Shirin Shahabi, Spencer Graham, Haruna Isah
+
+
+ ChatAD: Reasoning-Enhanced Time-Series Anomaly Detection with Multi-Turn Instruction Evolution
+ https://arxiv.org/abs/2601.13546
+ arXiv:2601.13546v1 Announce Type: new
+Abstract: LLM-driven Anomaly Detection (AD) helps enhance the understanding and explanatory abilities of anomalous behaviors in Time Series (TS). Existing methods face challenges of inadequate reasoning ability, deficient multi-turn dialogue capability, and narrow generalization. To this end, we 1) propose a multi-agent-based TS Evolution algorithm named TSEvol. On top of it, we 2) introduce the AD reasoning and multi-turn dialogue dataset TSEData-20K and contribute the Chatbot family for AD, including ChatAD-Llama3-8B, Qwen2.5-7B, and Mistral-7B. Furthermore, 3) we propose the TS Kahneman-Tversky Optimization (TKTO) to enhance ChatAD's cross-task generalization capability. Lastly, 4) we propose an LLM-driven Learning-based AD Benchmark LLADBench to evaluate the performance of ChatAD and nine baselines across seven datasets and tasks. Our three ChatAD models achieve substantial gains, up to 34.50% in accuracy, 34.71% in F1, and a 37.42% reduction in false positives. Besides, via TKTO, our optimized ChatAD achieves competitive performance in reasoning and cross-task generalization on classification, forecasting, and imputation.
+ oai:arXiv.org:2601.13546v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hui Sun, Chang Xu, Haonan Xie, Hao Li, Yuhao Huang, Chuheng Zhang, Ming Jin, Xiaoguang Liu, Gang Wang, Jiang Bian
+
+
+ HateXScore: A Metric Suite for Evaluating Reasoning Quality in Hate Speech Explanations
+ https://arxiv.org/abs/2601.13547
+ arXiv:2601.13547v1 Announce Type: new
+Abstract: Hateful speech detection is a key component of content moderation, yet current evaluation frameworks rarely assess why a text is deemed hateful. We introduce HateXScore, a four-component metric suite designed to evaluate the reasoning quality of model explanations. It assesses (i) conclusion explicitness, (ii) faithfulness and causal grounding of quoted spans, (iii) protected group identification (policy-configurable), and (iv) logical consistency among these elements. Evaluated on six diverse hate speech datasets, HateXScore is intended as a diagnostic complement to reveal interpretability failures and annotation inconsistencies that are invisible to standard metrics like Accuracy or F1. Moreover, human evaluation shows strong agreement with HateXScore, validating it as a practical tool for trustworthy and transparent moderation.
+ Disclaimer: This paper contains sensitive content that may be disturbing to some readers.
+ oai:arXiv.org:2601.13547v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yujia Hu, Roy Ka-Wei Lee
+
+
+ Patterning: The Dual of Interpretability
+ https://arxiv.org/abs/2601.13548
+ arXiv:2601.13548v1 Announce Type: new
+Abstract: Mechanistic interpretability aims to understand how neural networks generalize beyond their training data by reverse-engineering their internal structures. We introduce patterning as the dual problem: given a desired form of generalization, determine what training data produces it. Our approach is based on susceptibilities, which measure how posterior expectation values of observables respond to infinitesimal shifts in the data distribution. Inverting this linear response relationship yields the data intervention that steers the model toward a target internal configuration. We demonstrate patterning in a small language model, showing that re-weighting training data along principal susceptibility directions can accelerate or delay the formation of structure, such as the induction circuit. In a synthetic parentheses balancing task where multiple algorithms achieve perfect training accuracy, we show that patterning can select which algorithm the model learns by targeting the local learning coefficient of each solution. These results establish that the same mathematical framework used to read internal structure can be inverted to write it.
+ oai:arXiv.org:2601.13548v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ George Wang, Daniel Murfet
+
+
+ DiffFace-Edit: A Diffusion-Based Facial Dataset for Forgery-Semantic Driven Deepfake Detection Analysis
+ https://arxiv.org/abs/2601.13551
+ arXiv:2601.13551v1 Announce Type: new
+Abstract: Generative models now produce imperceptible, fine-grained manipulated faces, posing significant privacy risks. However, existing AI-generated face datasets generally lack focus on samples with fine-grained regional manipulations. Furthermore, no researchers have yet studied the real impact of splice attacks, which occur between real and manipulated samples, on detectors. We refer to these as detector-evasive samples. Based on this, we introduce the DiffFace-Edit dataset, which has the following advantages: 1) It contains over two million AI-generated fake images. 2) It features edits across eight facial regions (e.g., eyes, nose) and includes a richer variety of editing combinations, such as single-region and multi-region edits. Additionally, we specifically analyze the impact of detector-evasive samples on detection models. We conduct a comprehensive analysis of the dataset and propose a cross-domain evaluation that combines IMDL methods. Dataset will be available at https://github.com/ywh1093/DiffFace-Edit.
+ oai:arXiv.org:2601.13551v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Feng Ding, Wenhui Yi, Xinan He, Mengyao Xiao, Jianfeng Xu, Jianqiang Du
+
+
+ LogicEnvGen: Task-Logic Driven Generation of Diverse Simulated Environments for Embodied AI
+ https://arxiv.org/abs/2601.13556
+ arXiv:2601.13556v1 Announce Type: new
+Abstract: Simulated environments play an essential role in embodied AI, functionally analogous to test cases in software engineering. However, existing environment generation methods often emphasize visual realism (e.g., object diversity and layout coherence), overlooking a crucial aspect: logical diversity from the testing perspective. This limits the comprehensive evaluation of agent adaptability and planning robustness in distinct simulated environments. To bridge this gap, we propose LogicEnvGen, a novel method driven by Large Language Models (LLMs) that adopts a top-down paradigm to generate logically diverse simulated environments as test cases for agents. Given an agent task, LogicEnvGen first analyzes its execution logic to construct decision-tree-structured behavior plans and then synthesizes a set of logical trajectories. Subsequently, it adopts a heuristic algorithm to refine the trajectory set, reducing redundant simulation. For each logical trajectory, which represents a potential task situation, LogicEnvGen correspondingly instantiates a concrete environment. Notably, it employs constraint solving for physical plausibility. Furthermore, we introduce LogicEnvEval, a novel benchmark comprising four quantitative metrics for environment evaluation. Experimental results verify the lack of logical diversity in baselines and demonstrate that LogicEnvGen achieves 1.04-2.61x greater diversity, significantly improving the performance in revealing agent faults by 4.00%-68.00%.
+ oai:arXiv.org:2601.13556v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianan Wang, Siyang Zhang, Bin Li, Juan Chen, Jingtao Qi, Zhuo Zhang, Chen Qian
+
+
+ Leveraging ChatGPT and Other NLP Methods for Identifying Risk and Protective Behaviors in MSM: Social Media and Dating apps Text Analysis
+ https://arxiv.org/abs/2601.13558
+ arXiv:2601.13558v1 Announce Type: new
+Abstract: Men who have sex with men (MSM) are at elevated risk for sexually transmitted infections and harmful drinking compared to heterosexual men. Text data collected from social media and dating applications may provide new opportunities for personalized public health interventions by enabling automatic identification of risk and protective behaviors. In this study, we evaluated whether text from social media and dating apps can be used to predict sexual risk behaviors, alcohol use, and pre-exposure prophylaxis (PrEP) uptake among MSM. With participant consent, we collected textual data and trained machine learning models using features derived from ChatGPT embeddings, BERT embeddings, LIWC, and a dictionary-based risk term approach. The models achieved strong performance in predicting monthly binge drinking and having more than five sexual partners, with F1 scores of 0.78, and moderate performance in predicting PrEP use and heavy drinking, with F1 scores of 0.64 and 0.63. These findings demonstrate that social media and dating app text data can provide valuable insights into risk and protective behaviors and highlight the potential of large language model-based methods to support scalable and personalized public health interventions for MSM.
+ oai:arXiv.org:2601.13558v1
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mehrab Beikzadeh, Chenglin Hong, Cory J Cascalheira, Callisto Boka, Majid Sarrafzadeh, Ian W Holloway
+
+
+ AgentGC: Evolutionary Learning-based Lossless Compression for Genomics Data with LLM-driven Multiple Agent
+ https://arxiv.org/abs/2601.13559
+ arXiv:2601.13559v1 Announce Type: new
+Abstract: Lossless compression has made significant advancements in Genomics Data (GD) storage, sharing and management. Current learning-based methods are non-evolvable, with problems of low-level compression modeling, limited adaptability, and user-unfriendly interfaces. To this end, we propose AgentGC, the first evolutionary Agent-based GD Compressor, consisting of three layers and multiple agents named Leader and Worker. Specifically, the 1) User layer provides a user-friendly interface via the Leader combined with an LLM; 2) Cognitive layer, driven by the Leader, integrates an LLM to consider joint optimization of algorithm-dataset-system, addressing the issues of low-level modeling and limited adaptability; and 3) Compression layer, headed by the Worker, performs compression & decompression via an automated multi-knowledge learning-based compression framework. On top of AgentGC, we design 3 modes to support diverse scenarios: CP for compression-ratio priority, TP for throughput priority, and BM for balanced mode. Compared with 14 baselines on 9 datasets, the average compression ratio gains are 16.66%, 16.11%, and 16.33%, and the throughput gains are 4.73x, 9.23x, and 9.15x, respectively.
+ oai:arXiv.org:2601.13559v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sun Hui, Ding Yanfeng, Huidong Ma, Chang Xu, Keyan Jin, Lizheng Zu, Cheng Zhong, Xiaoguang Liu, Gang Wang, Wentong Cai
+
+
+ Reasoning is a Modality
+ https://arxiv.org/abs/2601.13562
+ arXiv:2601.13562v1 Announce Type: new
+Abstract: The Abstraction and Reasoning Corpus (ARC) provides a compact laboratory for studying abstract reasoning, an ability central to human intelligence. Modern AI systems, including LLMs and ViTs, largely operate as sequence-of-behavior prediction machines: they match observable behaviors by modeling token statistics without a persistent, readable mental state. This creates a gap with human-like behavior: humans can explain an action by decoding internal state, while AI systems can produce fluent post-hoc rationalizations that are not grounded in such a state. We hypothesize that reasoning is a modality: reasoning should exist as a distinct channel separate from the low-level workspace on which rules are applied. To test this hypothesis, on solving ARC tasks as a visual reasoning problem, we designed a novel role-separated transformer block that splits global controller tokens from grid workspace tokens, enabling iterative rule execution. Trained and evaluated within the VARC vision-centric protocol, our method achieved 62.6% accuracy on ARC-1, surpassing average human performance (60.2%) and outperforming prior methods significantly. Qualitatively, our models exhibit more coherent rule-application structure than the dense ViT baseline, consistent with a shift away from plausible probability blobs toward controller-driven reasoning.
+ oai:arXiv.org:2601.13562v1
+ cs.AI
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zhiguang Liu, Yi Shang
+
+
+ ButterflyMoE: Sub-Linear Ternary Experts via Structured Butterfly Orbits
+ https://arxiv.org/abs/2601.13563
+ arXiv:2601.13563v1 Announce Type: new
+Abstract: Standard Mixture-of-Experts (MoE) layers store $N$ independent expert weight matrices, requiring $\mathcal{O}(N \cdot d^2)$ memory, which exceeds edge devices' memory budgets. Current compression methods like quantization, pruning and low-rank factorization reduce constant factors but leave the scaling bottleneck unresolved. We introduce ButterflyMoE, a method that treats experts not as independent weight matrices but as geometric reorientations of a unified shared quantized substrate. Diversity among experts arises from viewing shared capacity from different angles, not from redundant storage. By applying learned rotations to a shared ternary prototype, the full expert set requires only $\mathcal{O}(d^2 + N \cdot d \log d)$ memory -- sub-linear in the number of experts. The key insight: training these rotations with quantization reduces activation outliers and stabilizes extreme low-bit training, where static methods collapse. Across language modeling benchmarks, ButterflyMoE achieves a 150 times memory reduction at 256 experts with negligible accuracy loss. This allows 64 experts to fit on 4GB devices, compared to standard MoE's 8 experts, showing that geometric parametrization breaks linear scaling.
+ oai:arXiv.org:2601.13563v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Aryan Karmore
+
+
+ Multi-objective fluorescent molecule design with a data-physics dual-driven generative framework
+ https://arxiv.org/abs/2601.13564
+ arXiv:2601.13564v1 Announce Type: new
+Abstract: Designing fluorescent small molecules with tailored optical and physicochemical properties requires navigating vast, underexplored chemical space while satisfying multiple objectives and constraints. Conventional generate-score-screen approaches become impractical under such realistic design specifications, owing to their low search efficiency, unreliable generalizability of machine-learning prediction, and the prohibitive cost of quantum chemical calculation. Here we present LUMOS, a data-and-physics driven framework for inverse design of fluorescent molecules. LUMOS couples generator and predictor within a shared latent representation, enabling direct specification-to-molecule design and efficient exploration. Moreover, LUMOS combines neural networks with a fast time-dependent density functional theory (TD-DFT) calculation workflow to build a suite of complementary predictors spanning different trade-offs in speed, accuracy, and generalizability, enabling reliable property prediction across diverse scenarios. Finally, LUMOS employs a property-guided diffusion model integrated with multi-objective evolutionary algorithms, enabling de novo design and molecular optimization under multiple objectives and constraints. Across comprehensive benchmarks, LUMOS consistently outperforms baseline models in terms of accuracy, generalizability and physical plausibility for fluorescence property prediction, and demonstrates superior performance in multi-objective scaffold- and fragment-level molecular optimization. Further validation using TD-DFT and molecular dynamics (MD) simulations demonstrates that LUMOS can generate valid fluorophores that meet various target specifications. Overall, these results establish LUMOS as a data-physics dual-driven framework for general fluorophore inverse design.
+ oai:arXiv.org:2601.13564v1
+ cs.LG
+ cs.AI
+ physics.chem-ph
+ q-bio.BM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yanheng Li, Zhichen Pu, Lijiang Yang, Zehao Zhou, Yi Qin Gao
+
+
+ Learning Fine-Grained Correspondence with Cross-Perspective Perception for Open-Vocabulary 6D Object Pose Estimation
+ https://arxiv.org/abs/2601.13565
+ arXiv:2601.13565v1 Announce Type: new
+Abstract: Open-vocabulary 6D object pose estimation empowers robots to manipulate arbitrary unseen objects guided solely by natural language. However, a critical limitation of existing approaches is their reliance on unconstrained global matching strategies. In open-world scenarios, trying to match anchor features against the entire query image space introduces excessive ambiguity, as target features are easily confused with background distractors. To resolve this, we propose Fine-grained Correspondence Pose Estimation (FiCoP), a framework that transitions from noise-prone global matching to spatially-constrained patch-level correspondence. Our core innovation lies in leveraging a patch-to-patch correlation matrix as a structural prior to narrow the matching scope, effectively filtering out irrelevant clutter to prevent it from degrading pose estimation. Firstly, we introduce an object-centric disentanglement preprocessing to isolate the semantic target from environmental noise. Secondly, a Cross-Perspective Global Perception (CPGP) module is proposed to fuse dual-view features, establishing structural consensus through explicit context reasoning. Finally, we design a Patch Correlation Predictor (PCP) that generates a precise block-wise association map, acting as a spatial filter to enforce fine-grained, noise-resilient matching. Experiments on the REAL275 and Toyota-Light datasets demonstrate that FiCoP improves Average Recall by 8.0% and 6.1%, respectively, compared to the state-of-the-art method, highlighting its capability to deliver robust and generalized perception for robotic agents operating in complex, unconstrained open-world environments. The source code will be made publicly available at https://github.com/zjjqinyu/FiCoP.
+ oai:arXiv.org:2601.13565v1
+ cs.CV
+ cs.RO
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yu Qin, Shimeng Fan, Fan Yang, Zixuan Xue, Zijie Mai, Wenrui Chen, Kailun Yang, Zhiyong Li
+
+
+ Self-Improvement as Coherence Optimization: A Theoretical Account
+ https://arxiv.org/abs/2601.13566
+ arXiv:2601.13566v1 Announce Type: new
+Abstract: Can language models improve their accuracy without external supervision? Methods such as debate, bootstrap, and internal coherence maximization achieve this surprising feat, even matching golden finetuning performance. Yet why they work remains theoretically unclear. We show that they are all special cases of coherence optimization: finding a context-to-behavior mapping that's most compressible and jointly predictable. We prove that coherence optimization is equivalent to description-length regularization, and that among all such regularization schemes, it is optimal for semi-supervised learning when the regularizer is derived from a pretrained model. Our theory, supported by preliminary experiments, explains why feedback-free self-improvement works and predicts when it should succeed or fail.
+ oai:arXiv.org:2601.13566v1
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tianyi Qiu, Ahmed Hani Ismail, Zhonghao He, Shi Feng
+
+
+ DRGW: Learning Disentangled Representations for Robust Graph Watermarking
+ https://arxiv.org/abs/2601.13569
+ arXiv:2601.13569v1 Announce Type: new
+Abstract: Graph-structured data is foundational to numerous web applications, and watermarking is crucial for protecting their intellectual property and ensuring data provenance. Existing watermarking methods primarily operate on graph structures or entangled graph representations, which compromise the transparency and robustness of watermarks due to the information coupling in representing graphs and uncontrollable discretization in transforming continuous numerical representations into graph structures. This motivates us to propose DRGW, the first graph watermarking framework that addresses these issues through disentangled representation learning. Specifically, we design an adversarially trained encoder that learns an invariant structural representation against diverse perturbations and derives a statistically independent watermark carrier, ensuring both robustness and transparency of watermarks. Meanwhile, we devise a graph-aware invertible neural network to provide a lossless channel for watermark embedding and extraction, guaranteeing high detectability and transparency of watermarks. Additionally, we develop a structure-aware editor that resolves latent modifications into discrete graph edits, ensuring robustness against structural perturbations. Experiments on diverse benchmark datasets demonstrate the superior effectiveness of DRGW.
+ oai:arXiv.org:2601.13569v1
+ cs.LG
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jiasen Li, Yanwei Liu, Zhuoyi Shang, Xiaoyan Gu, Weiping Wang
+
+
+ GeoDynamics: A Geometric State-Space Neural Network for Understanding Brain Dynamics on Riemannian Manifolds
+ https://arxiv.org/abs/2601.13570
+ arXiv:2601.13570v1 Announce Type: new
+Abstract: State-space models (SSMs) have become a cornerstone for unraveling brain dynamics, revealing how latent neural states evolve over time and give rise to observed signals. By combining the flexibility of deep learning with the principled dynamical structure of SSMs, recent studies have achieved powerful fits to functional neuroimaging data. However, most existing approaches still view the brain as a set of loosely connected regions or impose oversimplified network priors, falling short of a truly holistic and self-organized dynamical system perspective. Brain functional connectivity (FC) at each time point naturally forms a symmetric positive definite (SPD) matrix, which resides on a curved Riemannian manifold rather than in Euclidean space. Capturing the trajectories of these SPD matrices is key to understanding how coordinated networks support cognition and behavior. To this end, we introduce GeoDynamics, a geometric state-space neural network that tracks latent brain-state trajectories directly on the high-dimensional SPD manifold. GeoDynamics embeds each connectivity matrix into a manifold-aware recurrent framework, learning smooth and geometry-respecting transitions that reveal task-driven state changes and early markers of Alzheimer's disease, Parkinson's disease, and autism. Beyond neuroscience, we validate GeoDynamics on human action recognition benchmarks (UTKinect, Florence, HDM05), demonstrating its scalability and robustness in modeling complex spatiotemporal dynamics across diverse domains.
+ oai:arXiv.org:2601.13570v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tingting Dan, Jiaqi Ding, Guorong Wu
+
+
+ Stochastic Dynamic Pricing of Electric Vehicle Charging with Heterogeneous User Behavior: A Stackelberg Game Framework
+ https://arxiv.org/abs/2601.13571
+ arXiv:2601.13571v1 Announce Type: new
+Abstract: The rapid adoption of electric vehicles (EVs) introduces complex spatiotemporal demand management challenges for charging station operators (CSOs), exacerbated by demand imbalances, behavioral heterogeneity, and system uncertainty. Traditional dynamic pricing models, often relying on deterministic EV-CS pairings and network equilibrium assumptions, frequently oversimplify user behavior and lack scalability. This study proposes a stochastic, behaviorally heterogeneous dynamic pricing framework formulated as a bi-level Stackelberg game. The upper level optimizes time-varying pricing to maximize system-wide utility, while the lower level models decentralized EV users via a multinomial logit (MNL) choice model incorporating price sensitivity, battery aging, risk attitudes, and network travel costs. Crucially, the model avoids network equilibrium constraints to enhance scalability, with congestion effects represented via queuing-theoretic approximations. To efficiently solve the resulting large-scale optimization problem, a rolling-horizon approach combining the Dynamic Probabilistic Sensitivity Analysis-guided Cross-Entropy Method (PSA-CEM) with the Method of Successive Averages (MSA) is implemented. A real-world case study in Clayton, Melbourne, validates the framework using 22 charging stations. Simulation results demonstrate that the proposed mechanism substantially reduces queuing penalties and improves user utility compared to fixed and time-of-use pricing. The framework provides a robust, scalable tool for strategic EV charging management, balancing realism with computational efficiency.
+ oai:arXiv.org:2601.13571v1
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yongqi Zhang, Dong Ngoduy, Li Duan, Mingchang Zhu, Zhuo Chen
+
+
+ Behavior Knowledge Merge in Reinforced Agentic Models
+ https://arxiv.org/abs/2601.13572
+ arXiv:2601.13572v1 Announce Type: new
+Abstract: Reinforcement learning (RL) is central to post-training, particularly for agentic models that require specialized reasoning behaviors. In this setting, model merging offers a practical mechanism for integrating multiple RL-trained agents from different tasks into a single generalist model. However, existing merging methods are designed for supervised fine-tuning (SFT), and they are suboptimal at preserving task-specific capabilities in RL-trained agentic models. The root cause is a task-vector mismatch between RL and SFT: on-policy RL induces task vectors that are highly sparse and heterogeneous, whereas SFT-style merging implicitly assumes dense and globally comparable task vectors. When standard global averaging is applied under this mismatch, RL's non-overlapping task vectors that encode critical task-specific behaviors are reduced and parameter updates are diluted. To address this issue, we propose Reinforced Agent Merging (RAM), a distribution-aware merging framework explicitly designed for RL-trained agentic models. RAM disentangles shared and task-specific unique parameter updates, averaging shared components while selectively preserving and rescaling unique ones to counteract parameter update dilution. Experiments across multiple agent domains and model architectures demonstrate that RAM not only surpasses merging baselines, but also unlocks synergistic potential among agents to achieve performance superior to that of specialized agents in their domains.
+ oai:arXiv.org:2601.13572v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Xiangchi Yuan, Dachuan Shi, Chunhui Zhang, Zheyuan Liu, Shenglong Yao, Soroush Vosoughi, Wenke Lee
+
+
+ TRGCN: A Hybrid Framework for Social Network Rumor Detection
+ https://arxiv.org/abs/2601.13573
+ arXiv:2601.13573v1 Announce Type: new
+Abstract: Accurate and efficient rumor detection is critical for information governance, particularly in the context of the rapid spread of misinformation on social networks. Traditional rumor detection relied primarily on manual analysis. With the continuous advancement of technology, machine learning and deep learning approaches for rumor identification have gradually emerged and gained prominence. However, previous approaches often struggle to simultaneously capture both the sequential and the global structural relationships among topological nodes within a social network. To tackle this issue, we introduce a hybrid model for detecting rumors that integrates a Graph Convolutional Network (GCN) with a Transformer architecture, aiming to leverage the complementary strengths of structural and semantic feature extraction. Positional encoding helps preserve the sequential order of these nodes within the propagation structure. The use of Multi-head attention mechanisms enables the model to capture features across diverse representational subspaces, thereby enhancing both the richness and depth of text comprehension. This integration allows the framework to concurrently identify the key propagation network of rumors, the textual content, the long-range dependencies, and the sequence among propagation nodes. Experimental evaluations on publicly available datasets, including Twitter 15 and Twitter 16, demonstrate that our proposed fusion model significantly outperforms both standalone models and existing mainstream methods in terms of accuracy. These results validate the effectiveness and superiority of our approach for the rumor detection task.
+ oai:arXiv.org:2601.13573v1
+ cs.SI
+ physics.soc-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yanqin Yan, Suiyu Zhang, Dingguo Yu, Yijie Zhou, Cheng-Jun Wang, Ke-ke Shang
+
+
+ Highly Deformable Proprioceptive Membrane for Real-Time 3D Shape Reconstruction
+ https://arxiv.org/abs/2601.13574
+ arXiv:2601.13574v1 Announce Type: new
+Abstract: Reconstructing the three-dimensional (3D) geometry of object surfaces is essential for robot perception, yet vision-based approaches are generally unreliable under low illumination or occlusion. This limitation motivates the design of a proprioceptive membrane that conforms to the surface of interest and infers 3D geometry by reconstructing its own deformation. Conventional shape-aware membranes typically rely on resistive, capacitive, or magneto-sensitive mechanisms. However, these methods often encounter challenges such as structural complexity, limited compliance during large-scale deformation, and susceptibility to electromagnetic interference. This work presents a soft, flexible, and stretchable proprioceptive silicone membrane based on optical waveguide sensing. The membrane sensor integrates edge-mounted LEDs and centrally distributed photodiodes (PDs), interconnected via liquid-metal traces embedded within a multilayer elastomeric composite. Rich deformation-dependent light intensity signals are decoded by a data-driven model to recover the membrane geometry as a 3D point cloud. On a customized 140 mm square membrane, real-time reconstruction of large-scale out-of-plane deformation is achieved at 90 Hz with an average reconstruction error of 1.3 mm, measured by Chamfer distance, while maintaining accuracy for indentations up to 25 mm. The proposed framework provides a scalable, robust, and low-profile solution for global shape perception in deformable robotic systems.
+ oai:arXiv.org:2601.13574v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Guanyu Xu, Jiaqi Wang, Dezhong Tong, Xiaonan Huang
+
+
+ Comparing Without Saying: A Dataset and Benchmark for Implicit Comparative Opinion Mining from Same-User Reviews
+ https://arxiv.org/abs/2601.13575
+ arXiv:2601.13575v1 Announce Type: new
+Abstract: Existing studies on comparative opinion mining have mainly focused on explicit comparative expressions, which are uncommon in real-world reviews. This leaves implicit comparisons - where users express preferences across separate reviews - largely underexplored. We introduce SUDO, a novel dataset for implicit comparative opinion mining from same-user reviews, allowing reliable inference of user preferences even without explicit comparative cues. SUDO comprises 4,150 annotated review pairs (15,191 sentences) with a bi-level structure capturing aspect-level mentions and review-level preferences. We benchmark this task using two baseline architectures: traditional machine learning- and language model-based baselines. Experimental results show that while the latter outperforms the former, overall performance remains moderate, revealing the inherent difficulty of the task and establishing SUDO as a challenging and valuable benchmark for future research.
+ oai:arXiv.org:2601.13575v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Thanh-Lam T. Nguyen, Ngoc-Quang Le, Quoc-Trung Phu, Thi-Phuong Le, Ngoc-Huyen Pham, Phuong-Nguyen Nguyen, Hoang-Quynh Le
+
+
+ FG-OrIU: Towards Better Forgetting via Feature-Gradient Orthogonality for Incremental Unlearning
+ https://arxiv.org/abs/2601.13578
+ arXiv:2601.13578v1 Announce Type: new
+Abstract: Incremental unlearning (IU) is critical for pre-trained models to comply with sequential data deletion requests, yet existing methods primarily suppress parameters or confuse knowledge without explicit constraints at both the feature and gradient levels, resulting in \textit{superficial forgetting} where residual information remains recoverable. This incomplete forgetting risks security breaches and disrupts retention balance, especially in IU scenarios. We propose FG-OrIU (\textbf{F}eature-\textbf{G}radient \textbf{Or}thogonality for \textbf{I}ncremental \textbf{U}nlearning), the first framework unifying orthogonal constraints at both the feature and gradient levels to achieve deep forgetting, where the forgetting effect is irreversible. FG-OrIU decomposes feature spaces via Singular Value Decomposition (SVD), separating forgetting and remaining class features into distinct subspaces. It then enforces dual constraints: feature orthogonal projection on both forgetting and remaining classes, while gradient orthogonal projection prevents the reintroduction of forgotten knowledge and disruption to remaining classes during updates. Additionally, dynamic subspace adaptation merges newly forgetting subspaces and contracts remaining subspaces, ensuring a stable balance between removal and retention across sequential unlearning tasks. Extensive experiments demonstrate the effectiveness of our method.
+ oai:arXiv.org:2601.13578v1
+ cs.LG
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Qian Feng, JiaHang Tu, Mintong Kang, Hanbin Zhao, Chao Zhang, Hui Qian
+
+
+ A Kubernetes custom scheduler based on reinforcement learning for compute-intensive pods
+ https://arxiv.org/abs/2601.13579
+ arXiv:2601.13579v1 Announce Type: new
+Abstract: With the rise of cloud computing and lightweight containers, Docker has emerged as a leading technology for rapid service deployment, with Kubernetes responsible for pod orchestration. However, for compute-intensive workloads-particularly web services executing containerized machine-learning training-the default Kubernetes scheduler does not always achieve optimal placement. To address this, we propose two custom, reinforcement-learning-based schedulers, SDQN and SDQN-n, both built on the Deep Q-Network (DQN) framework. In compute-intensive scenarios, these models outperform the default Kubernetes scheduler as well as Transformer- and LSTM-based alternatives, reducing average CPU utilization per cluster node by 10%, and by over 20% when using SDQN-n. Moreover, our results show that SDQN-n's approach of consolidating pods onto fewer nodes further amplifies resource savings and helps advance greener, more energy-efficient data centers. Therefore, pod scheduling must employ different strategies tailored to each scenario in order to achieve better performance. Since the reinforcement-learning components of the SDQN and SDQN-n architectures proposed in this paper can be easily tuned by adjusting their parameters, they can accommodate the requirements of various future scenarios.
+ oai:arXiv.org:2601.13579v1
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hanlin Zhou, Huah Yong Chan, Shun Yao Zhang, Meie Lin, Jingfei Ni
+
+
+ Neural Organ Transplantation (NOT): Checkpoint-Based Modular Adaptation for Transformer Models
+ https://arxiv.org/abs/2601.13580
+ arXiv:2601.13580v1 Announce Type: new
+Abstract: We introduce Neural Organ Transplantation (NOT), a modular adaptation framework that enables trained transformer layers to function as reusable transferable checkpoints for domain adaptation. Unlike conventional fine-tuning approaches that tightly couple trained parameters to specific model instances and training data, NOT extracts contiguous layer subsets ("donor organs") from pre-trained models, trains them independently on domain-specific data, and saves them as standalone checkpoint files that can be transplanted into compatible recipient models without access to the original training data. Through experiments on three decoder-only transformer architectures spanning 124M to 20B parameters (GPT-2, TinyLlama, and GPT-OSS), we demonstrate that donor transplantation substantially outperforms existing adaptation methods, achieving an order-of-magnitude improvement in perplexity over LoRA while training significantly faster. The method exhibits position dependence, with early insertion positions yielding optimal results. Cross-domain transfer at billion-parameter scale reveals unexpected regularization benefits. These findings demonstrate that transformer middle layers can support efficient modular transfer for decoder-only architectures, enabling privacy-preserving expertise sharing through checkpoint distribution. We note that this approach is currently limited to decoder-only models; preliminary experiments on encoder-based architectures show reduced effectiveness.
+ oai:arXiv.org:2601.13580v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ahmad Al-Zuraiqi
+
+
+ SCRIPTMIND: Crime Script Inference and Cognitive Evaluation for LLM-based Social Engineering Scam Detection System
+ https://arxiv.org/abs/2601.13581
+ arXiv:2601.13581v1 Announce Type: new
+Abstract: Social engineering scams increasingly employ personalized, multi-turn deception, exposing the limits of traditional detection methods. While Large Language Models (LLMs) show promise in identifying deception, their cognitive assistance potential remains underexplored. We propose ScriptMind, an integrated framework for LLM-based scam detection that bridges automated reasoning and human cognition. It comprises three components: the Crime Script Inference Task (CSIT) for scam reasoning, the Crime Script-Aware Inference Dataset (CSID) for fine-tuning small LLMs, and the Cognitive Simulation-based Evaluation of Social Engineering Defense (CSED) for assessing real-time cognitive impact. Using 571 Korean phone scam cases, we built 22,712 structured scammer-sequence training instances. Experimental results show that the 11B small LLM fine-tuned with ScriptMind outperformed GPT-4o by 13%, achieving superior performance over commercial models in detection accuracy, false-positive reduction, scammer utterance prediction, and rationale quality. Moreover, in phone scam simulation experiments, it significantly enhanced and sustained users' suspicion levels, improving their cognitive awareness of scams. ScriptMind represents a step toward human-centered, cognitively adaptive LLMs for scam defense.
+ oai:arXiv.org:2601.13581v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Heedou Kim, Changsik Kim, Sanghwa Shin, Jaewoo Kang
+
+
+ Nonlinear fractional-periodic boundary value problems with Hilfer fractional derivative: existence and numerical approximations of solutions
+ https://arxiv.org/abs/2601.13584
+ arXiv:2601.13584v1 Announce Type: new
+Abstract: We prove conditions for existence of analytical solutions for boundary value problems with the Hilfer fractional derivative, generalizing the commonly used Riemann-Liouville and Caputo operators. The boundary values, referred to in this paper as fractional-periodic, are fractional integral conditions generalizing recurrent solution values for the non-Caputo case of the Hilfer fractional derivative. Analytical solutions to the studied problem are obtained using a perturbation of the corresponding initial value problem with enforced boundary conditions. In general, solutions to the boundary value problem are singular for $t\downarrow 0$. To overcome this singularity, we construct a sequence of converging solutions in a weighted continuous function space. We present a Bernstein splines-based implementation to numerically approximate solutions. We prove convergence of the numerical method, providing convergence criteria and asymptotic convergence rates. Numerical examples show empirical convergence results corresponding with the theoretical bounds. Moreover, the method is able to approximate the singular behavior of solutions and is demonstrated to converge for nonlinear problems. Finally, we apply a grid search to obtain correspondence to the original, non-perturbed system.
+ oai:arXiv.org:2601.13584v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Niels Goedegebure, Kateryna Marynets
+
+
+ TREX: Tokenizer Regression for Optimal Data Mixture
+ https://arxiv.org/abs/2601.13588
+ arXiv:2601.13588v1 Announce Type: new
+Abstract: Building effective tokenizers for multilingual Large Language Models (LLMs) requires careful control over language-specific data mixtures. While a tokenizer's compression performance critically affects the efficiency of LLM training and inference, existing approaches rely on heuristics or costly large-scale searches to determine optimal language ratios. We introduce Tokenizer Regression for Optimal Data MiXture (TREX), a regression-based framework that efficiently predicts the optimal data mixture for tokenizer training. TREX trains small-scale proxy tokenizers on random mixtures, gathers their compression statistics, and learns to predict compression performance from data mixtures. This learned model enables scalable mixture search before large-scale tokenizer training, mitigating the accuracy-cost trade-off in multilingual tokenizer design. Tokenizers trained with TREX's predicted mixtures outperform mixtures based on LLaMA3 and uniform distributions by up to 12% in both in- and out-of-distribution compression efficiency, demonstrating strong scalability, robustness, and practical effectiveness.
+ oai:arXiv.org:2601.13588v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Inho Won, Hangyeol Yoo, Minkyung Cho, Jungyeul Park, Hoyun Song, KyungTae Lim
+
+
+ Motion-to-Response Content Generation via Multi-Agent AI System with Real-Time Safety Verification
+ https://arxiv.org/abs/2601.13589
+ arXiv:2601.13589v1 Announce Type: new
+Abstract: This paper proposes a multi-agent artificial intelligence system that generates response-oriented media content in real time based on audio-derived emotional signals. Unlike conventional speech emotion recognition studies that focus primarily on classification accuracy, our approach emphasizes the transformation of inferred emotional states into safe, age-appropriate, and controllable response content through a structured pipeline of specialized AI agents. The proposed system comprises four cooperative agents: (1) an Emotion Recognition Agent with CNN-based acoustic feature extraction, (2) a Response Policy Decision Agent for mapping emotions to response modes, (3) a Content Parameter Generation Agent for producing media control parameters, and (4) a Safety Verification Agent enforcing age-appropriateness and stimulation constraints. We introduce an explicit safety verification loop that filters generated content before output, ensuring compliance with predefined rules. Experimental results on public datasets demonstrate that the system achieves 73.2% emotion recognition accuracy, 89.4% response mode consistency, and 100% safety compliance while maintaining sub-100ms inference latency suitable for on-device deployment. The modular architecture enables interpretability and extensibility, making it applicable to child-adjacent media, therapeutic applications, and emotionally responsive smart devices.
+ oai:arXiv.org:2601.13589v1
+ cs.AI
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ HyeYoung Lee
+
+
+ Vulnerability of LLMs' Belief Systems? LLMs Belief Resistance Check Through Strategic Persuasive Conversation Interventions
+ https://arxiv.org/abs/2601.13590
+ arXiv:2601.13590v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are increasingly employed in various question-answering tasks. However, recent studies showcase that LLMs are susceptible to persuasion and could adopt counterfactual beliefs. We present a systematic evaluation of LLM susceptibility to persuasion under the Source--Message--Channel--Receiver (SMCR) communication framework. Across five mainstream Large Language Models (LLMs) and three domains (factual knowledge, medical QA, and social bias), we analyze how different persuasive strategies influence belief stability over multiple interaction turns. We further examine whether meta-cognition prompting (i.e., eliciting self-reported confidence) affects resistance to persuasion. Results show that smaller models exhibit extreme compliance, with over 80% of belief changes occurring at the first persuasive turn (average end turn of 1.1--1.4). Contrary to expectations, meta-cognition prompting increases vulnerability by accelerating belief erosion rather than enhancing robustness. Finally, we evaluate adversarial fine-tuning as a defense. While GPT-4o-mini achieves near-complete robustness (98.6%) and Mistral~7B improves substantially (35.7% $\rightarrow$ 79.3%), Llama models remain highly susceptible (<14%) even when fine-tuned on their own failure cases. Together, these findings highlight substantial model-dependent limits of current robustness interventions and offer guidance for developing more trustworthy LLMs.
+ oai:arXiv.org:2601.13590v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fan Huang, Haewoon Kwak, Jisun An
+
+
+ DSAEval: Evaluating Data Science Agents on a Wide Range of Real-World Data Science Problems
+ https://arxiv.org/abs/2601.13591
+ arXiv:2601.13591v1 Announce Type: new
+Abstract: Recent LLM-based data agents aim to automate data science tasks ranging from data analysis to deep learning. However, the open-ended nature of real-world data science problems, which often span multiple taxonomies and lack standard answers, poses a significant challenge for evaluation. To address this, we introduce DSAEval, a benchmark comprising 641 real-world data science problems grounded in 285 diverse datasets, covering both structured and unstructured data (e.g., vision and text). DSAEval incorporates three distinctive features: (1) Multimodal Environment Perception, which enables agents to interpret observations from multiple modalities including text and vision; (2) Multi-Query Interactions, which mirror the iterative and cumulative nature of real-world data science projects; and (3) Multi-Dimensional Evaluation, which provides a holistic assessment across reasoning, code, and results. We systematically evaluate 11 advanced agentic LLMs using DSAEval. Our results show that Claude-Sonnet-4.5 achieves the strongest overall performance, GPT-5.2 is the most efficient, and MiMo-V2-Flash is the most cost-effective. We further demonstrate that multimodal perception consistently improves performance on vision-related tasks, with gains ranging from 2.04% to 11.30%. Overall, while current data science agents perform well on structured data and routine data analysis workflows, substantial challenges remain in unstructured domains. Finally, we offer critical insights and outline future research directions to advance the development of data science agents.
+ oai:arXiv.org:2601.13591v1
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Maojun Sun, Yifei Xie, Yue Wu, Ruijian Han, Binyan Jiang, Defeng Sun, Yancheng Yuan, Jian Huang
+
+
+ Machine learning based radiative parameterization scheme and its performance in operational reforecast experiments
+ https://arxiv.org/abs/2601.13592
+ arXiv:2601.13592v1 Announce Type: new
+Abstract: Radiation is typically the most time-consuming physical process in numerical models. One solution is to use machine learning methods to simulate the radiation process to improve computational efficiency. From an operational standpoint, this study investigates critical limitations inherent to hybrid forecasting frameworks that embed deep neural networks into numerical prediction models, with a specific focus on two fundamental bottlenecks: coupling compatibility and long-term integration stability. A residual convolutional neural network is employed to approximate the Rapid Radiative Transfer Model for General Circulation Models (RRTMG) within the global operational system of the China Meteorological Administration. We adopted an offline training and online coupling approach. First, a comprehensive dataset is generated through model simulations, encompassing all atmospheric columns both with and without cloud cover. To ensure the stability of the hybrid model, the dataset is enhanced via experience replay, and additional output constraints based on physical significance are imposed. Meanwhile, a LibTorch-based coupling method is utilized, which is more suitable for real-time operational computations. The hybrid model is capable of performing ten-day integrated forecasts as required. A two-month operational reforecast experiment demonstrates that the machine learning emulator achieves accuracy comparable to that of the traditional physical scheme, while accelerating computation approximately eightfold.
+ oai:arXiv.org:2601.13592v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hao Jing, Sa Xiao, Haoyu Li, Huadong Xiao, Wei Xue
+
+
+ AI IDEs or Autonomous Agents? Measuring the Impact of Coding Agents on Software Development
+ https://arxiv.org/abs/2601.13597
+ arXiv:2601.13597v1 Announce Type: new
+Abstract: Large language model (LLM)-based coding agents increasingly act as autonomous contributors that generate and merge pull requests, yet their real-world effects on software projects are unclear, especially relative to widely adopted IDE-based AI assistants. We present a longitudinal causal study of agent adoption in open-source repositories using staggered difference-in-differences with matched controls. Using the AIDev dataset, we define adoption as the first agent-generated pull request and analyze monthly repository-level outcomes spanning development velocity (commits, lines added) and software quality (static-analysis warnings, cognitive complexity, duplication, and comment density). Results show large, front-loaded velocity gains only when agents are the first observable AI tool in a project; repositories with prior AI IDE usage experience minimal or short-lived throughput benefits. In contrast, quality risks are persistent across settings, with static-analysis warnings and cognitive complexity rising roughly 18% and 35%, indicating sustained agent-induced complexity debt even when velocity advantages fade. These heterogeneous effects suggest diminishing returns to AI assistance and highlight the need for quality safeguards, provenance tracking, and selective deployment of autonomous agents. Our findings establish an empirical basis for understanding how agentic and IDE-based tools interact, and motivate research on balancing acceleration with maintainability in AI-integrated development workflows.
+ oai:arXiv.org:2601.13597v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shyam Agarwal, Hao He, Bogdan Vasilescu
+
+
+ Diffusion In Diffusion: Breaking the Autoregressive Bottleneck in Block Diffusion Models
+ https://arxiv.org/abs/2601.13599
+ arXiv:2601.13599v1 Announce Type: new
+Abstract: Block diffusion language models, operating as semi-autoregressive paradigms, combine the strengths of both autoregressive and diffusion paradigms. However, their strict unidirectional block dependencies introduce irreversibility and sacrifice the global planning capabilities for which diffusion models are renowned. In order to address these issues, we propose Diffusion in Diffusion, a draft-then-refine framework designed to overcome the irreversibility and myopia problems inherent in block diffusion models. Our approach first employs block diffusion to generate rapid drafts using small blocks, then refines these drafts through global bidirectional diffusion with a larger bidirectional receptive field. We utilise snapshot confidence remasking to identify the most critical tokens that require modification, and apply mix-scale training to expand the block diffusion model's global capabilities. Empirical results demonstrate that our approach sets a new benchmark for discrete diffusion models on the OpenWebText dataset. Using just 26% of the fine-tuning budget of baseline models, we reduce generative perplexity from 25.7 to 21.9, significantly narrowing the performance gap with autoregressive models.
+ oai:arXiv.org:2601.13599v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Linrui Ma, Yufei Cui, Kai Han, Yunhe Wang
+
+
+ Foundations of Global Consistency Checking with Noisy LLM Oracles
+ https://arxiv.org/abs/2601.13600
+ arXiv:2601.13600v1 Announce Type: new
+Abstract: Ensuring that collections of natural-language facts are globally consistent is essential for tasks such as fact-checking, summarization, and knowledge base construction. While Large Language Models (LLMs) can assess the consistency of small subsets of facts, their judgments are noisy, and pairwise checks are insufficient to guarantee global coherence. We formalize this problem and show that verifying global consistency requires exponentially many oracle queries in the worst case. To make the task practical, we propose an adaptive divide-and-conquer algorithm that identifies minimal inconsistent subsets (MUSes) of facts and optionally computes minimal repairs through hitting-sets. Our approach has low-degree polynomial query complexity. Experiments with both synthetic and real LLM oracles show that our method efficiently detects and localizes inconsistencies, offering a scalable framework for linguistic consistency verification with LLM-based evaluators.
+ oai:arXiv.org:2601.13600v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Paul He, Elke Kirschbaum, Shiva Kasiviswanathan
+
+
+ An Elementary Approach to Scheduling in Generative Diffusion Models
+ https://arxiv.org/abs/2601.13602
+ arXiv:2601.13602v1 Announce Type: new
+Abstract: An elementary approach to characterizing the impact of noise scheduling and time discretization in generative diffusion models is developed. Considering a simplified model where the source distribution is multivariate Gaussian with a given covariance matrix, the explicit closed-form evolution trajectory of the distributions across reverse sampling steps is derived, and consequently, the Kullback-Leibler (KL) divergence between the source distribution and the reverse sampling output is obtained. The effect of the number of time discretization steps on the convergence of this KL divergence is studied via the Euler-Maclaurin expansion. An optimization problem is formulated, and its solution noise schedule is obtained via calculus of variations, shown to follow a tangent law whose coefficient is determined by the eigenvalues of the source covariance matrix. For an alternative scenario, more realistic in practice, where pretrained models have been obtained for some given noise schedules, the KL divergence also provides a measure to compare different time discretization strategies in reverse sampling. Experiments across different datasets and pretrained models demonstrate that the time discretization strategy selected by our approach consistently outperforms baseline and search-based strategies, particularly when the budget on the number of function evaluations is very tight.
+ oai:arXiv.org:2601.13602v1
+ cs.IT
+ cs.LG
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiang Sun, H. Vincent Poor, Wenyi Zhang
+
+
+ DCCVT: Differentiable Clipped Centroidal Voronoi Tessellation
+ https://arxiv.org/abs/2601.13603
+ arXiv:2601.13603v1 Announce Type: new
+Abstract: While Marching Cubes (MC) and Marching Tetrahedra (MTet) are widely adopted in 3D reconstruction pipelines due to their simplicity and efficiency, their differentiable variants remain suboptimal for mesh extraction. This often limits the quality of 3D meshes reconstructed from point clouds or images in learning-based frameworks. In contrast, clipped CVTs offer stronger theoretical guarantees and yield higher-quality meshes. However, the lack of a differentiable formulation has prevented their integration into modern machine learning pipelines. To bridge this gap, we propose DCCVT, a differentiable algorithm that extracts high-quality 3D meshes from noisy signed distance fields (SDFs) using clipped CVTs. We derive a fully differentiable formulation for computing clipped CVTs and demonstrate its integration with deep learning-based SDF estimation to reconstruct accurate 3D meshes from input point clouds. Our experiments with synthetic data demonstrate the superior ability of DCCVT against state-of-the-art methods in mesh quality and reconstruction fidelity. https://wylliamcantincharawi.dev/DCCVT.github.io/
+ oai:arXiv.org:2601.13603v1
+ cs.CG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Wylliam Cantin Charawi, Adrien Gruson, Jane Wu, Christian Desrosiers, Diego Thomas
+
+
+ Optimizing Parallel Schemes with Lyapunov Exponents and kNN-LLE Estimation
+ https://arxiv.org/abs/2601.13604
+ arXiv:2601.13604v1 Announce Type: new
+Abstract: Inverse parallel schemes remain indispensable tools for computing the roots of nonlinear systems, yet their dynamical behavior can be unexpectedly rich, ranging from strong contraction to oscillatory or chaotic transients depending on the choice of algorithmic parameters and initial states. A unified analytical-data-driven methodology for identifying, measuring, and reducing such instabilities in a family of uni-parametric inverse parallel solvers is presented in this study. On the theoretical side, we derive stability and bifurcation characterizations of the underlying iterative maps, identifying parameter regions associated with periodic or chaotic behavior. On the computational side, we introduce a micro-series pipeline based on kNN-driven estimation of the local largest Lyapunov exponent (LLE), applied to scalar time series derived from solver trajectories. The resulting sliding-window Lyapunov profiles provide fine-grained, real-time diagnostics of contractive or unstable phases and reveal transient behaviors not captured by coarse linearized analysis. Leveraging this correspondence, we introduce a Lyapunov-informed parameter selection strategy that identifies solver settings associated with stable behavior, particularly when the estimated LLE indicates persistent instability. Comprehensive experiments on ensembles of perturbed initial guesses demonstrate close agreement between the theoretical stability diagrams and empirical Lyapunov profiles, and show that the proposed adaptive mechanism significantly improves robustness. The study establishes micro-series Lyapunov analysis as a practical, interpretable tool for constructing self-stabilizing root-finding schemes and opens avenues for extending such diagnostics to higher-dimensional or noise-contaminated problems.
+ oai:arXiv.org:2601.13604v1
+ math.NA
+ cs.LG
+ cs.NA
+ math.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mudassir Shams, Andrei Velichko, Bruno Carpentieri
+
+
+ Outage Identification from Electricity Market Data: Quickest Change Detection Approach
+ https://arxiv.org/abs/2601.13605
+ arXiv:2601.13605v1 Announce Type: new
+Abstract: Power system outages expose market participants to significant financial risk unless promptly detected and hedged. We develop an outage identification method from public market signals grounded in the parametric quickest change detection (QCD) theory. Parametric QCD operates on stochastic data streams, distinguishing pre- and post-change regimes using the ratio of their respective probability density functions. To derive the density functions for normal and post-outage market signals, we exploit multi-parametric programming to decompose complex market signals into parametric random variables with a known density. These densities are then used to construct a QCD-based statistic that triggers an alarm as soon as the statistic exceeds an appropriate threshold. Numerical experiments on a stylized PJM testbed demonstrate rapid line outage identification from public streams of electricity demand and price data.
+ oai:arXiv.org:2601.13605v1
+ eess.SY
+ cs.SY
+ stat.AP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Milad Hoseinpour, Shubhanshu Shekhar, Vladimir Dvorkin
+
+
+ ChartVerse: Scaling Chart Reasoning via Reliable Programmatic Synthesis from Scratch
+ https://arxiv.org/abs/2601.13606
+ arXiv:2601.13606v1 Announce Type: new
+Abstract: Chart reasoning is a critical capability for Vision Language Models (VLMs). However, the development of open-source models is severely hindered by the lack of high-quality training data. Existing datasets suffer from a dual challenge: synthetic charts are often simplistic and repetitive, while the associated QA pairs are prone to hallucinations and lack the reasoning depth required for complex tasks. To bridge this gap, we propose ChartVerse, a scalable framework designed to synthesize complex charts and reliable reasoning data from scratch. (1) To address the bottleneck of simple patterns, we first introduce Rollout Posterior Entropy (RPE), a novel metric that quantifies chart complexity. Guided by RPE, we develop complexity-aware chart coder to autonomously synthesize diverse, high-complexity charts via executable programs. (2) To guarantee reasoning rigor, we develop truth-anchored inverse QA synthesis. Diverging from standard generation, we adopt an answer-first paradigm: we extract deterministic answers directly from the source code, generate questions conditional on these anchors, and enforce strict consistency verification. To further elevate difficulty and reasoning depth, we filter samples based on model fail-rate and distill high-quality Chain-of-Thought (CoT) reasoning. We curate ChartVerse-SFT-600K and ChartVerse-RL-40K using Qwen3-VL-30B-A3B-Thinking as the teacher. Experimental results demonstrate that ChartVerse-8B achieves state-of-the-art performance, notably surpassing its teacher and rivaling the stronger Qwen3-VL-32B-Thinking.
+ oai:arXiv.org:2601.13606v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zheng Liu, Honglin Lin, Chonghan Qin, Xiaoyang Wang, Xin Gao, Yu Li, Mengzhang Cai, Yun Zhu, Zhanping Zhong, Qizhi Pei, Zhuoshi Pan, Xiaoran Shang, Bin Cui, Conghui He, Wentao Zhang, Lijun Wu
+
+
+ When Reasoning Leaks Membership: Membership Inference Attack on Black-box Large Reasoning Models
+ https://arxiv.org/abs/2601.13607
+ arXiv:2601.13607v1 Announce Type: new
+Abstract: Large Reasoning Models (LRMs) have rapidly gained prominence for their strong performance in solving complex tasks. Many modern black-box LRMs expose the intermediate reasoning traces through APIs to improve transparency (e.g., Gemini-2.5 and Claude-sonnet). Despite their benefits, we find that these traces can leak membership signals, creating a new privacy threat even without access to token logits used in prior attacks. In this work, we initiate the first systematic exploration of Membership Inference Attacks (MIAs) on black-box LRMs. Our preliminary analysis shows that LRMs produce confident, recall-like reasoning traces on familiar training member samples but more hesitant, inference-like reasoning traces on non-members. The representations of these traces are continuously distributed in the semantic latent space, spanning from familiar to unfamiliar samples. Building on this observation, we propose BlackSpectrum, the first membership inference attack framework targeting the black-box LRMs. The key idea is to construct a recall-inference axis in the semantic latent space, based on representations derived from the exposed traces. By locating where a query sample falls along this axis, the attacker can obtain a membership score and predict how likely it is to be a member of the training data. Additionally, to address the limitations of outdated datasets unsuited to modern LRMs, we provide two new datasets to support future research, arXivReasoning and BookReasoning. Empirically, exposing reasoning traces significantly increases the vulnerability of LRMs to membership inference attacks, leading to large gains in attack performance. Our findings highlight the need for LRM companies to balance transparency in intermediate reasoning traces with privacy preservation.
+ oai:arXiv.org:2601.13607v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ruihan Hu, Yu-Ming Shang, Wei Luo, Ye Tao, Xi Zhang
+
+
+ Fisher-Informed Parameterwise Aggregation for Federated Learning with Heterogeneous Data
+ https://arxiv.org/abs/2601.13608
+ arXiv:2601.13608v1 Announce Type: new
+Abstract: Federated learning aggregates model updates from distributed clients, but standard first order methods such as FedAvg apply the same scalar weight to all parameters from each client. Under non-IID data, these uniformly weighted updates can be strongly misaligned across clients, causing client drift and degrading the global model. Here we propose Fisher-Informed Parameterwise Aggregation (FIPA), a second-order aggregation method that replaces client-level scalar weights with parameter-specific Fisher Information Matrix (FIM) weights, enabling true parameter-level scaling that captures how each client's data uniquely influences different parameters. With low-rank approximation, FIPA remains communication- and computation-efficient. Across nonlinear function regression, PDE learning, and image classification, FIPA consistently improves over averaging-based aggregation, and can be effectively combined with state-of-the-art client-side optimization algorithms to further improve image classification accuracy. These results highlight the benefits of FIPA for federated learning under heterogeneous data distributions.
+ oai:arXiv.org:2601.13608v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhipeng Chang, Ting He, Wenrui Hao
+
+
+ Balancing Fairness and High Match Rates in Reciprocal Recommender Systems: A Nash Social Welfare Approach
+ https://arxiv.org/abs/2601.13609
+ arXiv:2601.13609v1 Announce Type: new
+Abstract: Matching platforms, such as online dating services and job recommendations, have become increasingly prevalent. For the success of these platforms, it is crucial to design reciprocal recommender systems (RRSs) that not only increase the total number of matches but also avoid creating unfairness among users. In this paper, we investigate the fairness of RRSs on matching platforms. From the perspective of fair division, we define the users' opportunities to be recommended and establish the fairness concept of envy-freeness in the allocation of these opportunities. We first introduce the Social Welfare (SW) method, which approximately maximizes the number of matches, and show that it leads to significant unfairness in recommendation opportunities, illustrating the trade-off between fairness and match rates. To address this challenge, we propose the Nash Social Welfare (NSW) method, which alternately optimizes two NSW functions and achieves nearly envy-free recommendations. We further generalize the SW and NSW method to the $\alpha$-SW method, which balances the trade-off between fairness and high match rates. Additionally, we develop a computationally efficient approximation algorithm for the SW/NSW/$\alpha$-SW methods based on the Sinkhorn algorithm. Through extensive experiments on both synthetic datasets and two real-world datasets, we demonstrate the practical effectiveness of our approach.
+ oai:arXiv.org:2601.13609v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yoji Tomita, Tomohiko Yokoyama
+
+
+ Secure Multi-Path Routing with All-or-Nothing Transform for Network-on-Chip Architectures
+ https://arxiv.org/abs/2601.13610
+ arXiv:2601.13610v1 Announce Type: new
+Abstract: Ensuring Network-on-Chip (NoC) security is crucial to design trustworthy NoC-based System-on-Chip (SoC) architectures. While there are various threats that exploit on-chip communication vulnerabilities, eavesdropping attacks via malicious nodes are among the most common and stealthy. Although encryption can secure packets for confidentiality, it may introduce unacceptable overhead for resource-constrained SoCs. In this paper, we propose a lightweight confidentiality-preserving framework that utilizes a quasi-group based All-Or-Nothing Transform (AONT) combined with secure multi-path routing in NoC-based SoCs. By applying AONT to each packet and distributing its transformed blocks across multiple non-overlapping routes, we ensure that no intermediate router can reconstruct the original data without all blocks. Extensive experimental evaluation demonstrates that our method effectively mitigates eavesdropping attacks by malicious routers with negligible area and performance overhead. Our results also reveal that AONT-based multi-path routing can provide 7.3x reduction in overhead compared to traditional encryption for securing against eavesdropping attacks.
+ oai:arXiv.org:2601.13610v1
+ cs.CR
+ cs.AR
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hansika Weerasena, Matthew Randall, Prabhat Mishra
+
+
+ PINA: Prompt Injection Attack against Navigation Agents
+ https://arxiv.org/abs/2601.13612
+ arXiv:2601.13612v1 Announce Type: new
+Abstract: Navigation agents powered by large language models (LLMs) convert natural language instructions into executable plans and actions. Compared to text-based applications, their security is far more critical: a successful prompt injection attack does not just alter outputs but can directly misguide physical navigation, leading to unsafe routes, mission failure, or real-world harm. Despite this high-stakes setting, the vulnerability of navigation agents to prompt injection remains largely unexplored. In this paper, we propose PINA, an adaptive prompt optimization framework tailored to navigation agents under black-box, long-context, and action-executable constraints. Experiments on indoor and outdoor navigation agents show that PINA achieves high attack success rates with an average ASR of 87.5%, surpasses all baselines, and remains robust under ablation and adaptive-attack conditions. This work provides the first systematic investigation of prompt injection attacks in navigation and highlights their urgent security implications for embodied LLM agents.
+ oai:arXiv.org:2601.13612v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiani Liu, Yixin He, Lanlan Fan, Qidi Zhong, Yushi Cheng, Meng Zhang, Yanjiao Chen, Wenyuan Xu
+
+
+ CauScientist: Teaching LLMs to Respect Data for Causal Discovery
+ https://arxiv.org/abs/2601.13614
+ arXiv:2601.13614v1 Announce Type: new
+Abstract: Causal discovery is fundamental to scientific understanding and reliable decision-making. Existing approaches face critical limitations: purely data-driven methods suffer from statistical indistinguishability and modeling assumptions, while recent LLM-based methods either ignore statistical evidence or incorporate unverified priors that can mislead results. To this end, we propose CauScientist, a collaborative framework that synergizes LLMs as hypothesis-generating "data scientists" with probabilistic statistics as rigorous "verifiers". CauScientist employs hybrid initialization to select superior starting graphs, iteratively refines structures through LLM-proposed modifications validated by statistical criteria, and maintains an error memory to guide efficient search. Experiments demonstrate that CauScientist substantially outperforms purely data-driven baselines, achieving up to 53.8% F1 score improvement and enhancing recall from 35.0% to 100.0%. Notably, while standalone LLM performance degrades with graph complexity, CauScientist reduces structural hamming distance (SHD) by 44.0% compared to Qwen3-32B on 37-node graphs. Our project page is at https://github.com/OpenCausaLab/CauScientist.
+ oai:arXiv.org:2601.13614v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bo Peng, Sirui Chen, Lei Xu, Chaochao Lu
+
+
+ Resilient Hierarchical Power Control for Hybrid GFL/GFM Microgrids Under Mixed Cyber-Attacks and Physical Constraints
+ https://arxiv.org/abs/2601.13615
+ arXiv:2601.13615v1 Announce Type: new
+Abstract: Hybrid microgrids integrating Grid-Following (GFL) and Grid-Forming (GFM) inverters present complex control challenges arising from the decoupling between long-term economic dispatch and real-time dynamic regulation, as well as the distinct physical limitations of heterogeneous inverters under cyber uncertainties. This paper proposes a Resilient Hierarchical Power Control (RHPC) strategy to unify these conflicting requirements within a cohesive framework. A standardized power increment mechanism is developed to bridge the tertiary and secondary layers, ensuring that real-time load fluctuations are compensated strictly according to the optimal economic ratios derived from the tertiary layer. To address the strict active power saturation constraints of GFL units, a dynamic activation scheme coupled with projection operators is introduced, which actively isolates saturated nodes from the consensus loop to prevent integrator wind-up and preserve the stability of the GFM backbone. Furthermore, the proposed framework incorporates a multi-scale attention mechanism and LSTM-based predictors into the secondary control protocol, endowing the system with robustness against unbounded False Data Injection (FDI) attacks and packet losses. Rigorous theoretical analysis confirms that the system achieves Uniformly Ultimately Bounded (UUB) convergence, and simulations on a modified IEEE 33-bus system demonstrate that the proposed strategy significantly improves power sharing accuracy and operational resilience in both grid-connected and islanded modes compared to conventional methods.
+ oai:arXiv.org:2601.13615v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Lifu Ding, Chunhui Hou, Yutong Li, Qinmin Yang
+
+
+ Reflections over the Sea: Reconfigurable Intelligent Surface for Maritime Self-Powered Communications
+ https://arxiv.org/abs/2601.13618
+ arXiv:2601.13618v1 Announce Type: new
+Abstract: Maritime communication is becoming a vital component of 6G networks, driven by the rapid expansion of the maritime economy. However, existing technologies face critical challenges in signal coverage, availability, and robustness, especially under harsh sea conditions. This paper proposes a novel framework for the maritime Internet-of-Things (IoT) communications that leverages the reconfigurable intelligent surface (RIS) mounted on offshore infrastructures, such as wind turbines, to enhance coverage and reliability. To capture dynamic maritime environment, a near-ocean-surface channel model is developed considering the impact of sea waves. In addition, a wave energy harvesting (EH) system is designed to self-power IoT sensors for data acquisition, processing, and transmission. To support real-time adaptation, channel state information is continuously measured to optimize RIS reflection parameters and maximize multi-user communication rates. Simulation results show that the proposed system significantly improves IoT communication performance by over 20%, under harsh sea conditions.
+ oai:arXiv.org:2601.13618v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qianqian Zhang, Long Wang, Ben Wu, Jia Mi
+
+
+ CARPE: Context-Aware Image Representation Prioritization via Ensemble for Large Vision-Language Models
+ https://arxiv.org/abs/2601.13622
+ arXiv:2601.13622v1 Announce Type: new
+Abstract: Recent advancements in Large Vision-Language Models (LVLMs) have pushed them closer to becoming general-purpose assistants. Despite their strong performance, LVLMs still struggle with vision-centric tasks such as image classification, underperforming compared to their base vision encoders, which are often CLIP-based models. To address this limitation, we propose Context-Aware Image Representation Prioritization via Ensemble (CARPE), a novel, model-agnostic framework which introduces vision-integration layers and a context-aware ensemble strategy to identify when to prioritize image representations or rely on the reasoning capabilities of the language model. This design enhances the model's ability to adaptively weight visual and textual modalities and enables the model to capture various aspects of image representations, leading to consistent improvements in generalization across classification and vision-language benchmarks. Extensive experiments demonstrate that CARPE not only improves performance on image classification benchmarks but also enhances results across various vision-language benchmarks. Finally, CARPE is designed to be effectively integrated with most open-source LVLMs that consist of a vision encoder and a language model, ensuring its adaptability across diverse architectures.
+ oai:arXiv.org:2601.13622v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Donghee Lee, Rui Cai, Zhe Zhao
+
+
+ PRIMAL: Processing-In-Memory Based Low-Rank Adaptation for LLM Inference Accelerator
+ https://arxiv.org/abs/2601.13628
+ arXiv:2601.13628v1 Announce Type: new
+Abstract: This paper presents PRIMAL, a processing-in-memory (PIM) based large language model (LLM) inference accelerator with low-rank adaptation (LoRA). PRIMAL integrates heterogeneous PIM processing elements (PEs), interconnected by 2D-mesh inter-PE computational network (IPCN). A novel SRAM reprogramming and power gating (SRPG) scheme enables pipelined LoRA updates and sub-linear power scaling by overlapping reconfiguration with computation and gating idle resources. PRIMAL employs optimized spatial mapping and dataflow orchestration to minimize communication overhead, and achieves $1.5\times$ throughput and $25\times$ energy efficiency over NVIDIA H100 with LoRA rank 8 (Q,V) on Llama-13B.
+ oai:arXiv.org:2601.13628v1
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yue Jiet Chong, Yimin Wang, Zhen Wu, Xuanyao Fong
+
+
+ Activation-Space Anchored Access Control for Multi-Class Permission Reasoning in Large Language Models
+ https://arxiv.org/abs/2601.13630
+ arXiv:2601.13630v1 Announce Type: new
+Abstract: Large language models (LLMs) are increasingly deployed over knowledge bases for efficient knowledge retrieval and question answering. However, LLMs can inadvertently answer beyond a user's permission scope, leaking sensitive content, thus making it difficult to deploy knowledge-base QA under fine-grained access control requirements. In this work, we identify a geometric regularity in intermediate activations: for the same query, representations induced by different permission scopes cluster distinctly and are readily separable. Building on this separability, we propose Activation-space Anchored Access Control (AAAC), a training-free framework for multi-class permission control. AAAC constructs an anchor bank, with one permission anchor per class, from a small offline sample set and requires no fine-tuning. At inference time, a multi-anchor steering mechanism redirects each query's activations toward the anchor-defined authorized region associated with the current user, thereby suppressing over-privileged generations by design. Finally, extensive experiments across three LLM families demonstrate that AAAC reduces permission violation rates by up to 86.5% and prompt-based attack success rates by 90.7%, while improving response usability with minor inference overhead compared to baselines.
+ oai:arXiv.org:2601.13630v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhaopeng Zhang, Pengcheng Sun, Lan Zhang, Chen Tang, Jiewei Lai, Yunhao Wang, Hui Jin
+
+
+ ContiguousKV: Accelerating LLM Prefill with Granularity-Aligned KV Cache Management
+ https://arxiv.org/abs/2601.13631
+ arXiv:2601.13631v1 Announce Type: new
+Abstract: Efficiently serving Large Language Models (LLMs) with persistent Prefix Key-Value (KV) Cache is critical for applications like conversational search and multi-turn dialogue. Serving a request requires loading the pre-computed prefix KV cache and generating the first token, defined as the Re-Prefill Phase. Offloading this shared prefix cache to secondary storage is essential for memory scalability. Re-Prefill with offloading suffers from severe I/O bottlenecks in two aspects. First, semantic-aware KV cache pruning algorithms select important tokens in fine granularity, while systems manage I/O in coarse, fixed-size blocks, causing severe read amplification. Second, the sequential dependency between identifying important tokens and loading KV cache creates idle I/O and compute bubbles, under-utilizing system resources.
+ This paper proposes \textit{ContiguousKV}, a high-performance prefix KV cache offloading system that bridges algorithmic semantics with I/O efficiency to accelerate the Re-Prefill phase. We first introduce \textit{ContiguousChunk}, a unified data management granularity that aligns KV cache pruning with I/O operations. All the mechanisms critical for I/O performance are performed at the granularity of ContiguousChunk, thereby eliminating read amplification. By exploiting the high similarity in important ContiguousChunk indices across layers, we propose intra- and inter-period asynchronous prefetching to break the sequential dependency between I/O and compute, effectively eliminating idle bubbles. Finally, we propose attention-guided cache management to retain semantically critical prefix data in memory. Evaluations on Qwen2.5 series models show that ContiguousKV achieves a 3.85x speedup in the Re-Prefill phase over the state-of-the-art offloading system IMPRESS, while maintaining high output quality.
+ oai:arXiv.org:2601.13631v1
+ cs.OS
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Jing Zou, Shangyu Wu, Hancong Duan, Qiao Li, Chun Jason Xue
+
+
+ Resilient Routing: Risk-Aware Dynamic Routing in Smart Logistics via Spatiotemporal Graph Learning
+ https://arxiv.org/abs/2601.13632
+ arXiv:2601.13632v1 Announce Type: new
+Abstract: With the rapid development of the e-commerce industry, the logistics network is experiencing unprecedented pressure. Traditional static routing strategies often cannot cope with traffic congestion and fluctuating retail demand. In this paper, we propose a Risk-Aware Dynamic Routing (RADR) framework which integrates Spatiotemporal Graph Neural Networks (ST-GNN) with combinatorial optimization. We first construct a logistics topology graph from discrete GPS data using spatial clustering methods. Subsequently, a hybrid deep learning model combining a Graph Convolutional Network (GCN) and a Gated Recurrent Unit (GRU) is adopted to extract spatial correlations and temporal dependencies for predicting future congestion risks. These prediction results are then integrated into a dynamic edge weight mechanism to perform path planning. We evaluated the framework on the Smart Logistics Dataset 2024, which contains real-world Internet of Things (IoT) sensor data. The experimental results show that the RADR algorithm significantly enhances the resilience of the supply chain. In particular, in a case study of high-congestion scenarios, our method reduces potential congestion risk exposure by 19.3% while increasing transportation distance by only 2.1%. This empirical evidence confirms that the proposed data-driven approach can effectively balance delivery efficiency and operational safety.
+ oai:arXiv.org:2601.13632v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhiming Xue, Sichen Zhao, Yalun Qi, Xianling Zeng, Zihan Yu
+
+
+ Scaling Test-time Inference for Visual Grounding
+ https://arxiv.org/abs/2601.13633
+ arXiv:2601.13633v1 Announce Type: new
+Abstract: Visual grounding is an essential capability of Visual Language Models (VLMs) for understanding the real physical world. Previous state-of-the-art grounding visual language models usually have large model sizes, making them heavy to deploy and slow at inference. However, we notice that the sizes of visual encoders are nearly the same for small and large VLMs, and the major difference lies in the size of the language model. Small VLMs fall behind larger VLMs in grounding because of the difference in language understanding capability rather than visual information handling. To mitigate this gap, we introduce 'Efficient visual Grounding language Models' (EGM): a method to scale the test-time computation (#generated tokens). Scaling the test-time computation of a small model is deployment-friendly and yields better end-to-end latency, as the cost of each token is much cheaper compared to directly running a large model. On the RefCOCO benchmark, our EGM-Qwen3-VL-8B demonstrates 91.4 IoU with an average latency of 737ms (5.9x faster), while Qwen3-VL-235B demands 4,320ms to achieve 90.5 IoU. To validate our approach's generality, we further set up a new amodal grounding setting that requires the model to predict both the visible and occluded parts of objects. Experiments show our method can consistently and significantly improve the vanilla grounding and amodal grounding capabilities of small models to be on par with or outperform larger models, thereby improving the efficiency of visual grounding.
+ oai:arXiv.org:2601.13633v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Guanqi Zhan, Changye Li, Zhijian Liu, Yao Lu, Yi Wu, Song Han, Ligeng Zhu
+
+
+ Direct Finite-Time Contraction (Step-Log) Profiling--Driven Optimization of Parallel Schemes for Nonlinear Problems on Multicore Architectures
+ https://arxiv.org/abs/2601.13637
+ arXiv:2601.13637v1 Announce Type: new
+Abstract: Efficient computation of all distinct solutions of nonlinear problems is essential in many scientific and engineering applications. Although high-order parallel iterative schemes offer fast convergence, their practical performance is often limited by sensitivity to internal parameters and the lack of reproducible tuning procedures. Classical parameter selection tools based on analytical conditions and dynamical-system diagnostics can be problem-dependent and computationally demanding, which motivates lightweight data-driven alternatives.
+ In this study, we propose a parameterized single-step bi-parametric parallel Weierstrass-type scheme with third-order convergence together with a training-free tuning framework based on Direct finite-time contraction (step-log) profiling. The approach extracts Lyapunov-like finite-time contraction information directly from solver trajectories via step norms and step-log ratios, aggregates the resulting profiles over micro-launch ensembles, and ranks parameter candidates using two compact scores: the stability minimum S_min and the stability moment S_mom. Numerical results demonstrate consistent improvements in convergence rate, stability, and robustness across diverse nonlinear test problems, establishing the proposed profiling-based strategy as an efficient and reproducible alternative to classical parameter tuning methods.
+ oai:arXiv.org:2601.13637v1
+ math.NA
+ cs.NA
+ math.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mudassir Shams, Andrei Velichko, Bruno Carpentieri
+
+
+ A General One-Shot Multimodal Active Perception Framework for Robotic Manipulation: Learning to Predict Optimal Viewpoint
+ https://arxiv.org/abs/2601.13639
+ arXiv:2601.13639v1 Announce Type: new
+Abstract: Active perception in vision-based robotic manipulation aims to move the camera toward more informative observation viewpoints, thereby providing high-quality perceptual inputs for downstream tasks. Most existing active perception methods rely on iterative optimization, leading to high time and motion costs, and are tightly coupled with task-specific objectives, which limits their transferability. In this paper, we propose a general one-shot multimodal active perception framework for robotic manipulation. The framework enables direct inference of optimal viewpoints and comprises a data collection pipeline and an optimal viewpoint prediction network. Specifically, the framework decouples viewpoint quality evaluation from the overall architecture, supporting heterogeneous task requirements. Optimal viewpoints are defined through systematic sampling and evaluation of candidate viewpoints, after which large-scale training datasets are constructed via domain randomization. Moreover, a multimodal optimal viewpoint prediction network is developed, leveraging cross-attention to align and fuse multimodal features and directly predict camera pose adjustments. The proposed framework is instantiated in robotic grasping under viewpoint-constrained environments. Experimental results demonstrate that active perception guided by the framework significantly improves grasp success rates. Notably, real-world evaluations achieve nearly double the grasp success rate and enable seamless sim-to-real transfer without additional fine-tuning, demonstrating the effectiveness of the proposed framework.
+ oai:arXiv.org:2601.13639v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Deyun Qin, Zezhi Liu, Hanqian Luo, Xiao Liang, Yongchun Fang
+
+
+ Towards Token-Level Text Anomaly Detection
+ https://arxiv.org/abs/2601.13644
+ arXiv:2601.13644v1 Announce Type: new
+Abstract: Despite significant progress in text anomaly detection for web applications such as spam filtering and fake news detection, existing methods are fundamentally limited to document-level analysis and unable to identify which specific parts of a text are anomalous. We introduce token-level anomaly detection, a novel paradigm that enables fine-grained localization of anomalies within text. We formally define text anomalies at both the document and token levels, and propose a unified detection framework that operates across multiple levels. To facilitate research in this direction, we collect and annotate three benchmark datasets spanning spam, reviews, and grammar errors with token-level labels. Experimental results demonstrate that our framework outperforms six other baselines, opening new possibilities for precise anomaly localization in text. All code and data are publicly available at https://github.com/charles-cao/TokenCore.
+ oai:arXiv.org:2601.13644v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yang Cao, Bicheng Yu, Sikun Yang, Ming Liu, Yujiu Yang
+
+
+ Quadratic Upper Bound for Boosting Robustness
+ https://arxiv.org/abs/2601.13645
+ arXiv:2601.13645v1 Announce Type: new
+Abstract: Fast adversarial training (FAT) aims to enhance the robustness of models against adversarial attacks with reduced training time; however, FAT often suffers from compromised robustness due to insufficient exploration of the adversarial space. In this paper, we develop a loss function to mitigate the problem of degraded robustness under FAT. Specifically, we derive a quadratic upper bound (QUB) on the adversarial training (AT) loss function and propose to utilize the bound with existing FAT methods. Our experimental results show that applying the QUB loss to existing methods yields significant improvements in robustness. Furthermore, using various metrics, we demonstrate that this improvement is likely to result from the smoothed loss landscape of the resulting model.
+ oai:arXiv.org:2601.13645v1
+ cs.LG
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Proceedings of the 42nd International Conference on Machine Learning (ICML 2025), Proceedings of Machine Learning Research (PMLR), vol. 267, pp. 72656-72676, 2025
+ Euijin You, Hyang-Won Lee
+
+
+ Fusion Segment Transformer: Bi-Directional Attention Guided Fusion Network for AI-Generated Music Detection
+ https://arxiv.org/abs/2601.13647
+ arXiv:2601.13647v1 Announce Type: new
+Abstract: With the rise of generative AI technology, anyone can now easily create and deploy AI-generated music, which has heightened the need for technical solutions to address copyright and ownership issues. While existing work has mainly focused on short audio, the challenge of full-audio detection, which requires modeling long-term structure and context, remains insufficiently explored. To address this, we propose an improved version of the Segment Transformer, termed the Fusion Segment Transformer. As in our previous work, we extract content embeddings from short music segments using diverse feature extractors. Furthermore, we enhance the architecture for full-audio AI-generated music detection by introducing a Gated Fusion Layer that effectively integrates content and structural information, enabling the capture of long-term context. Experiments on the SONICS and AIME datasets show that our approach outperforms the previous model and recent baselines, achieving state-of-the-art results in AI-generated music detection.
+ oai:arXiv.org:2601.13647v1
+ cs.SD
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yumin Kim, Seonghyeon Go
+
+
+ Fairness or Fluency? An Investigation into Language Bias of Pairwise LLM-as-a-Judge
+ https://arxiv.org/abs/2601.13649
+ arXiv:2601.13649v1 Announce Type: new
+Abstract: Recent advances in Large Language Models (LLMs) have incentivized the development of LLM-as-a-judge, an application of LLMs where they are used as judges to decide the quality of a certain piece of text given a certain context. However, previous studies have demonstrated that LLM-as-a-judge can be biased towards different aspects of the judged texts, which often do not align with human preference. One of the identified biases is language bias, which indicates that the decision of LLM-as-a-judge can differ based on the language of the judged texts. In this paper, we study two types of language bias in pairwise LLM-as-a-judge: (1) performance disparity between languages when the judge is prompted to compare options from the same language, and (2) bias towards options written in major languages when the judge is prompted to compare options of two different languages. We find that for same-language judging, there exist significant performance disparities across language families, with European languages consistently outperforming African languages, and this bias is more pronounced in culturally-related subjects. For inter-language judging, we observe that most models favor English answers, and that this preference is influenced more by answer language than question language. Finally, we investigate whether language bias is in fact caused by low-perplexity bias, a previously identified bias of LLM-as-a-judge, and we find that while perplexity is slightly correlated with language bias, language bias cannot be fully explained by perplexity only.
+ oai:arXiv.org:2601.13649v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaolin Zhou, Zheng Luo, Yicheng Gao, Qixuan Chen, Xiyang Hu, Yue Zhao, Ruishan Liu
+
+
+ Face-Voice Association with Inductive Bias for Maximum Class Separation
+ https://arxiv.org/abs/2601.13651
+ arXiv:2601.13651v1 Announce Type: new
+Abstract: Face-voice association is widely studied in multimodal learning and is typically approached by representing faces and voices with embeddings that are close for the same person and well separated from those of others. Previous work achieved this with loss functions. Recent advances in classification have shown that the discriminative ability of embeddings can be strengthened by imposing maximum class separation as an inductive bias. This technique has never been used in the domain of face-voice association, and this work aims to fill that gap. More specifically, we develop a method for face-voice association that imposes maximum class separation among multimodal representations of different speakers as an inductive bias. Through quantitative experiments we demonstrate the effectiveness of our approach, showing that it achieves SOTA performance on two task formulations of face-voice association. Furthermore, we carry out an ablation study to show that imposing the inductive bias is most effective when combined with losses for inter-class orthogonality. To the best of our knowledge, this work is the first that applies and demonstrates the effectiveness of maximum class separation as an inductive bias in multimodal learning; it hence paves the way to establishing a new paradigm.
+ oai:arXiv.org:2601.13651v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Marta Moscati, Oleksandr Kats, Mubashir Noman, Muhammad Zaigham Zaheer, Yufang Hou, Markus Schedl, Shah Nawaz
+
+
+ TimeART: Towards Agentic Time Series Reasoning via Tool-Augmentation
+ https://arxiv.org/abs/2601.13653
+ arXiv:2601.13653v1 Announce Type: new
+Abstract: Time series data are ubiquitous in real-world cyber-physical systems. Though analyzing and interpreting them yields significant value, e.g., for disaster prediction and financial risk control, current workflows mainly rely on human data scientists, which incurs significant labor costs and lacks automation. To tackle this, we introduce TimeART, a framework fusing the analytical capability of strong out-of-the-box tools and the reasoning capability of Large Language Models (LLMs), which serves as a fully agentic data scientist for Time Series Question Answering (TSQA). To teach LLM-based Time Series Reasoning Models (TSRMs) strategic tool use, we also collect a 100k expert trajectory corpus called TimeToolBench. To enhance TSRMs' generalization capability, we then devise a four-stage training strategy, which boosts TSRMs through learning from their own early experiences and self-reflections. Experimentally, we train an 8B TSRM on TimeToolBench and equip it with the TimeART framework; it achieves consistent state-of-the-art performance on multiple TSQA tasks, pioneering a novel approach towards agentic time series reasoning.
+ oai:arXiv.org:2601.13653v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xingjian Wu, Junkai Lu, Zhengyu Li, Xiangfei Qiu, Jilin Hu, Chenjuan Guo, Christian S. Jensen, Bin Yang
+
+
+ Why Does the LLM Stop Computing: An Empirical Study of User-Reported Failures in Open-Source LLMs
+ https://arxiv.org/abs/2601.13655
+ arXiv:2601.13655v1 Announce Type: new
+Abstract: The democratization of open-source Large Language Models (LLMs) allows users to fine-tune and deploy models on local infrastructure but exposes them to a First Mile deployment landscape. Unlike black-box API consumption, the reliability of user-managed orchestration remains a critical blind spot. To bridge this gap, we conduct the first large-scale empirical study of 705 real-world failures from the open-source DeepSeek, Llama, and Qwen ecosystems.
+ Our analysis reveals a paradigm shift: white-box orchestration relocates the reliability bottleneck from model algorithmic defects to the systemic fragility of the deployment stack. We identify three key phenomena: (1) Diagnostic Divergence: runtime crashes distinctively signal infrastructure friction, whereas incorrect functionality serves as a signature for internal tokenizer defects. (2) Systemic Homogeneity: Root causes converge across divergent series, confirming reliability barriers are inherent to the shared ecosystem rather than specific architectures. (3) Lifecycle Escalation: Barriers escalate from intrinsic configuration struggles during fine-tuning to compounded environmental incompatibilities during inference. Supported by our publicly available dataset, these insights provide actionable guidance for enhancing the reliability of the LLM landscape.
+ oai:arXiv.org:2601.13655v1
+ cs.SE
+ cs.AI
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Guangba Yu, Zirui Wang, Yujie Huang, Renyi Zhong, Yuedong Zhong, Yilun Wang, Michael R. Lyu
+
+
+ Communication-Free Collective Navigation for a Swarm of UAVs via LiDAR-Based Deep Reinforcement Learning
+ https://arxiv.org/abs/2601.13657
+ arXiv:2601.13657v1 Announce Type: new
+Abstract: This paper presents a deep reinforcement learning (DRL) based controller for collective navigation of unmanned aerial vehicle (UAV) swarms in communication-denied environments, enabling robust operation in complex, obstacle-rich environments. Inspired by biological swarms where informed individuals guide groups without explicit communication, we employ an implicit leader-follower framework. In this paradigm, only the leader possesses goal information, while follower UAVs learn robust policies using only onboard LiDAR sensing, without requiring any inter-agent communication or leader identification. Our system utilizes LiDAR point clustering and an extended Kalman filter for stable neighbor tracking, providing reliable perception independent of external positioning systems. The core of our approach is a DRL controller, trained in GPU-accelerated Nvidia Isaac Sim, that enables followers to learn complex emergent behaviors - balancing flocking and obstacle avoidance - using only local perception. This allows the swarm to implicitly follow the leader while robustly addressing perceptual challenges such as occlusion and limited field-of-view. The robustness and sim-to-real transfer of our approach are confirmed through extensive simulations and challenging real-world experiments with a swarm of five UAVs, which successfully demonstrated collective navigation across diverse indoor and outdoor environments without any communication or external localization.
+ oai:arXiv.org:2601.13657v1
+ cs.RO
+ cs.AI
+ cs.LG
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Myong-Yol Choi, Hankyoul Ko, Hanse Cho, Changseung Kim, Seunghwan Kim, Jaemin Seo, Hyondong Oh
+
+
+ Beyond Known Facts: Generating Unseen Temporal Knowledge to Address Data Contamination in LLM Evaluation
+ https://arxiv.org/abs/2601.13658
+ arXiv:2601.13658v1 Announce Type: new
+Abstract: The automatic extraction of information is important for populating large web knowledge bases such as Wikidata. The temporal version of that task, temporal knowledge graph extraction (TKGE), involves extracting temporally grounded facts from text, represented as semantic quadruples (subject, relation, object, timestamp). Many recent systems take advantage of large language models (LLMs), which are becoming a new cornerstone of the web due to their performance on many tasks across the natural language processing (NLP) field. Despite the importance of TKGE, existing datasets for training and evaluation remain scarce, and contamination of evaluation data is an unaddressed issue, potentially inflating LLMs' perceived performance due to overlaps between training and evaluation sets. To mitigate these challenges, we propose a novel synthetic evaluation dataset constructed from predicted future, previously unseen temporal facts, thereby eliminating contamination and enabling robust and unbiased benchmarking. Our dataset creation involves a two-step approach: (1) Temporal Knowledge Graph Forecasting (TKGF) generates plausible future quadruples, which are subsequently filtered to adhere to the original knowledge base schema; (2) LLMs perform quadruple-to-text generation, creating semantically aligned textual descriptions. We benchmark Extract, Define and Canonicalize (EDC), a state-of-the-art LLM-based extraction framework, demonstrating that LLM performance decreases when evaluated on our dataset compared to a dataset of known facts. We publicly release our dataset consisting of 4.2K future quadruples and corresponding textual descriptions, along with the generation methodology, enabling continuous creation of unlimited future temporal datasets to serve as long-term, contamination-free benchmarks for TKGE.
+ oai:arXiv.org:2601.13658v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Arthur Amalvy, Hen-Hsen Huang
+
+
+ Temporal-Spatial Decouple before Act: Disentangled Representation Learning for Multimodal Sentiment Analysis
+ https://arxiv.org/abs/2601.13659
+ arXiv:2601.13659v1 Announce Type: new
+Abstract: Multimodal Sentiment Analysis integrates linguistic, visual, and acoustic modalities. Mainstream approaches based on modality-invariant and modality-specific factorization, or on complex fusion, still rely on spatiotemporally mixed modeling. This ignores spatiotemporal heterogeneity, leading to spatiotemporal information asymmetry and thus limited performance. Hence, we propose TSDA, Temporal-Spatial Decouple before Act, which explicitly decouples each modality into temporal dynamics and spatial structural context before any interaction. For every modality, a temporal encoder and a spatial encoder project signals into separate temporal and spatial representations. Factor-Consistent Cross-Modal Alignment then aligns temporal features only with their temporal counterparts across modalities, and spatial features only with their spatial counterparts. Factor-specific supervision and decorrelation regularization reduce cross-factor leakage while preserving complementarity. A Gated Recouple module subsequently recouples the aligned streams for the downstream task. Extensive experiments show that TSDA outperforms baselines. Ablation studies confirm the necessity and interpretability of the design.
+ oai:arXiv.org:2601.13659v1
+ cs.CL
+ cs.AI
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chunlei Meng, Ziyang Zhou, Lucas He, Xiaojing Du, Chun Ouyang, Zhongxue Gan
+
+
+ Reinforcement Learning for Opportunistic Routing in Software-Defined LEO-Terrestrial Systems
+ https://arxiv.org/abs/2601.13662
+ arXiv:2601.13662v1 Announce Type: new
+Abstract: The proliferation of large-scale low Earth orbit (LEO) satellite constellations is driving the need for intelligent routing strategies that can effectively deliver data to terrestrial networks under rapidly time-varying topologies and intermittent gateway visibility. Leveraging the global control capabilities of a geostationary (GEO)-resident software-defined networking (SDN) controller, we introduce opportunistic routing, which aims to minimize delivery delay by forwarding packets to any currently available ground gateways rather than fixed destinations. This makes it a promising approach for achieving low-latency and robust data delivery in highly dynamic LEO networks. Specifically, we formulate a constrained stochastic optimization problem and employ a residual reinforcement learning framework to optimize opportunistic routing for reducing transmission delay. Simulation results over multiple days of orbital data demonstrate that our method achieves significant improvements in queue length reduction compared to classical backpressure and other well-known queueing algorithms.
+ oai:arXiv.org:2601.13662v1
+ cs.NI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sivaram Krishnan, Zhouyou Gu, Jihong Park, Sung-Min Oh, Jinho Choi
+
+
+ On the stability, complexity, and distribution of similarity classes of the longest edge bisection process for triangles
+ https://arxiv.org/abs/2601.13663
+ arXiv:2601.13663v1 Announce Type: new
+Abstract: The Longest Edge Bisection (LEB) of a triangle is performed by joining the midpoint of its longest edge to the opposite vertex. Applying this procedure iteratively produces an infinite family of triangles. Surprisingly, a classical result of Adler (1983) shows that for any initial triangle, this infinite family falls into finitely many similarity classes.
+ While the set of classes is finite, we show that a far smaller, stable subset of ``fat'' triangles, called {\bf terminal quadruples}, effectively dominates the final mesh structure. We prove the following asymptotic area distribution result: for every initial triangle, the portion of area occupied by terminal quadruples tends to one, with the convergence occurring at an exponential rate. In fact, we provide the precise distribution of triangles in every step. We introduce the {\bf bisection graph} and use spectral methods to establish this result.
+ Given this dominance, we provide a complete characterization of triangles possessing a single terminal quadruple, while conversely exhibiting a sequence of triangles with an unbounded number of terminal quadruples. Furthermore, we reveal several fundamental geometric properties of the points of a terminal quadruple, laying the groundwork for studying the geometric distribution of the entire orbit. Our analysis leverages the hyperbolic geometry framework of Perdomo and Plaza (2014) and refines their techniques.
+ oai:arXiv.org:2601.13663v1
+ cs.CG
+ math.CO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Daniel Kalmanovich, Yaar Solomon
+
+
+ VIAFormer: Voxel-Image Alignment Transformer for High-Fidelity Voxel Refinement
+ https://arxiv.org/abs/2601.13664
+ arXiv:2601.13664v1 Announce Type: new
+Abstract: We propose VIAFormer, a Voxel-Image Alignment Transformer model designed for Multi-view Conditioned Voxel Refinement--the task of repairing incomplete, noisy voxels using calibrated multi-view images as guidance. Its effectiveness stems from a synergistic design: an Image Index that provides explicit 3D spatial grounding for 2D image tokens, a Correctional Flow objective that learns a direct voxel-refinement trajectory, and a Hybrid Stream Transformer that enables robust cross-modal fusion. Experiments show that VIAFormer establishes a new state of the art in correcting both severe synthetic corruptions and realistic artifacts in voxel shapes obtained from powerful Vision Foundation Models. Beyond benchmarking, we demonstrate VIAFormer as a practical and reliable bridge in real-world 3D creation pipelines, paving the way for voxel-based methods to thrive in the large-model, big-data era.
+ oai:arXiv.org:2601.13664v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tiancheng Fang, Bowen Pan, Lingxi Chen, Jiangjing Lyu, Chengfei Lyu, Chaoyue Niu, Fan Wu
+
+
+ Transformer based Multi-task Fusion Network for Food Spoilage Detection and Shelf life Forecasting
+ https://arxiv.org/abs/2601.13665
+ arXiv:2601.13665v1 Announce Type: new
+Abstract: Food wastage is one of the critical challenges in the agricultural supply chain, and accurate, effective spoilage detection can help to reduce it. Furthermore, forecasting spoilage supports long-term supply chain management in agriculture. This motivated us to propose fusion-based architectures combining a CNN with LSTM and DeiT transformer components for the following tasks simultaneously: (i) vegetable classification, (ii) food spoilage detection, and (iii) shelf-life forecasting. We developed a dataset by capturing images of vegetables from their fresh state until they were completely spoiled. From the experimental analysis, it is concluded that the proposed fusion architectures CNN+CNN-LSTM and CNN+DeiT Transformer outperformed several deep learning models such as CNN, VGG16, ResNet50, Capsule Networks, and DeiT Transformers. Overall, CNN+DeiT Transformer yielded F1-scores of 0.98 and 0.61 in vegetable classification and spoilage detection respectively, and a mean squared error (MSE) of 3.58 and a symmetric mean absolute percentage error (SMAPE) of 41.66% in spoilage forecasting. Further, the reliability of the fusion models was validated on noisy images and integrated with LIME to visualize the model decisions.
+ oai:arXiv.org:2601.13665v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Mounika Kanulla, Rajasree Dadigi, Sailaja Thota, Vivek Yelleti
+
+
+ CommunityBench: Benchmarking Community-Level Alignment across Diverse Groups and Tasks
+ https://arxiv.org/abs/2601.13669
+ arXiv:2601.13669v1 Announce Type: new
+Abstract: Large language model (LLM) alignment ensures that model behaviors reflect human values. Existing alignment strategies primarily follow two paths: one assumes a universal value set for a unified goal (i.e., one-size-fits-all), while the other treats every individual as unique to customize models (i.e., individual-level). However, assuming a monolithic value space marginalizes minority norms, while tailoring individual models is prohibitively expensive. Recognizing that human society is organized into social clusters with high intra-group value alignment, we propose community-level alignment as a "middle ground". Practically, we introduce CommunityBench, the first large-scale benchmark for community-level alignment evaluation, featuring four tasks grounded in Common Identity and Common Bond theory. With CommunityBench, we conduct a comprehensive evaluation of various foundation models, revealing that current LLMs exhibit limited capacity to model community-specific preferences. Furthermore, we investigate the potential of community-level alignment in facilitating individual modeling, providing a promising direction for scalable and pluralistic alignment.
+ oai:arXiv.org:2601.13669v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jiayu Lin, Zhongyu Wei
+
+
+ The Orchestration of Multi-Agent Systems: Architectures, Protocols, and Enterprise Adoption
+ https://arxiv.org/abs/2601.13671
+ arXiv:2601.13671v1 Announce Type: new
+Abstract: Orchestrated multi-agent systems represent the next stage in the evolution of artificial intelligence, where autonomous agents collaborate through structured coordination and communication to achieve complex, shared objectives. This paper consolidates and formalizes the technical composition of such systems, presenting a unified architectural framework that integrates planning, policy enforcement, state management, and quality operations into a coherent orchestration layer. Another primary contribution of this work is the in-depth technical delineation of two complementary communication protocols - the Model Context Protocol, which standardizes how agents access external tools and contextual data, and the Agent2Agent protocol, which governs peer coordination, negotiation, and delegation. Together, these protocols establish an interoperable communication substrate that enables scalable, auditable, and policy-compliant reasoning across distributed agent collectives. Beyond protocol design, the paper details how orchestration logic, governance frameworks, and observability mechanisms collectively sustain system coherence, transparency, and accountability. By synthesizing these elements into a cohesive technical blueprint, this paper provides a comprehensive treatment of orchestrated multi-agent systems - bridging conceptual architectures with implementation-ready design principles for enterprise-scale AI ecosystems.
+ oai:arXiv.org:2601.13671v1
+ cs.MA
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Apoorva Adimulam, Rajesh Gupta, Sumit Kumar
+
+
+ Autoregressive deep learning for real-time simulation of soft tissue dynamics during virtual neurosurgery
+ https://arxiv.org/abs/2601.13676
+ arXiv:2601.13676v1 Announce Type: new
+Abstract: Accurate simulation of brain deformation is a key component for developing realistic, interactive neurosurgical simulators, as complex nonlinear deformations must be captured to ensure realistic tool-tissue interactions. However, traditional numerical solvers often fall short in meeting real-time performance requirements. To overcome this, we introduce a deep learning-based surrogate model that efficiently simulates transient brain deformation caused by continuous interactions between surgical instruments and the virtual brain geometry. Building on Universal Physics Transformers, our approach operates directly on large-scale mesh data and is trained on an extensive dataset generated from nonlinear finite element simulations, covering a broad spectrum of temporal instrument-tissue interaction scenarios. To reduce the accumulation of errors in autoregressive inference, we propose a stochastic teacher forcing strategy applied during model training. Specifically, training consists of short stochastic rollouts in which the proportion of ground truth inputs is gradually decreased in favor of model-generated predictions. Our results show that the proposed surrogate model achieves accurate and efficient predictions across a range of transient brain deformation scenarios, scaling to meshes with up to 150,000 nodes. The introduced stochastic teacher forcing technique substantially improves long-term rollout stability, reducing the maximum prediction error from 6.7 mm to 3.5 mm. We further integrate the trained surrogate model into an interactive neurosurgical simulation environment, achieving runtimes below 10 ms per simulation step on consumer-grade inference hardware. Our proposed deep learning framework enables rapid, smooth and accurate biomechanical simulations of dynamic brain tissue deformation, laying the foundation for realistic surgical training environments.
+ oai:arXiv.org:2601.13676v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Fabian Greifeneder, Wolfgang Fenz, Benedikt Alkin, Johannes Brandstetter, Michael Giretzlehner, Philipp Moser
+
+
+ Finally Outshining the Random Baseline: A Simple and Effective Solution for Active Learning in 3D Biomedical Imaging
+ https://arxiv.org/abs/2601.13677
+ arXiv:2601.13677v1 Announce Type: new
+Abstract: Active learning (AL) has the potential to drastically reduce annotation costs in 3D biomedical image segmentation, where expert labeling of volumetric data is both time-consuming and expensive. Yet, existing AL methods are unable to consistently outperform improved random sampling baselines adapted to 3D data, leaving the field without a reliable solution. We introduce Class-stratified Scheduled Power Predictive Entropy (ClaSP PE), a simple and effective query strategy that addresses two key limitations of standard uncertainty-based AL methods: class imbalance and redundancy in early selections. ClaSP PE combines class-stratified querying to ensure coverage of underrepresented structures and log-scale power noising with a decaying schedule to enforce query diversity in early-stage AL and encourage exploitation later. In our evaluation on 24 experimental settings using four 3D biomedical datasets within the comprehensive nnActive benchmark, ClaSP PE is the only method that generally outperforms improved random baselines in segmentation quality, with statistically significant gains, while remaining annotation-efficient. Furthermore, we explicitly simulate the real-world application by testing our method on four previously unseen datasets without manual adaptation, where all experiment parameters are set according to predefined guidelines. The results confirm that ClaSP PE robustly generalizes to novel tasks without requiring dataset-specific tuning. Within the nnActive framework, we present compelling evidence that an AL method can consistently outperform random baselines adapted to 3D segmentation, in terms of both performance and annotation efficiency in a realistic, close-to-production scenario. Our open-source implementation and clear deployment guidelines make it readily applicable in practice. Code is at https://github.com/MIC-DKFZ/nnActive.
+ oai:arXiv.org:2601.13677v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Carsten T. L\"uth, Jeremias Traub, Kim-Celine Kahl, Till J. Bungert, Lukas Klein, Lars Kr\"amer, Paul F. J\"ager, Klaus Maier-Hein, Fabian Isensee
+
+
+ Ultra-Lightweight Network for Ship-Radiated Sound Classification on Embedded Deployment
+ https://arxiv.org/abs/2601.13679
+ arXiv:2601.13679v1 Announce Type: new
+Abstract: This letter presents ShuffleFAC, a lightweight acoustic model for ship-radiated sound classification in resource-constrained maritime monitoring systems. ShuffleFAC integrates Frequency-Aware convolution into an efficiency-oriented backbone using separable convolution, point-wise group convolution, and channel shuffle, enabling frequency-sensitive feature extraction with low computational cost. Experiments on the DeepShip dataset show that ShuffleFAC achieves competitive performance with substantially reduced complexity. In particular, ShuffleFAC ($\gamma=16$) attains a macro F1-score of 71.45 $\pm$ 1.18% using 39K parameters and 3.06M MACs, and achieves an inference latency of 6.05 $\pm$ 0.95 ms on a Raspberry Pi. Compared with MicroNet0, it improves macro F1-score by 1.82% while reducing model size by 9.7x and latency by 2.5x. These results indicate that ShuffleFAC is suitable for real-time embedded underwater acoustic target recognition (UATR).
+ oai:arXiv.org:2601.13679v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sangwon Park, Dongjun Kim, Sung-Hoon Byun, Sangwook Park
+
+
+ ORCA -- An Automated Threat Analysis Pipeline for O-RAN Continuous Development
+ https://arxiv.org/abs/2601.13681
+ arXiv:2601.13681v1 Announce Type: new
+Abstract: The Open-Radio Access Network (O-RAN) integrates numerous software components in a cloud-like deployment, opening the radio access network to previously unconsidered security threats. With the ever-evolving threat landscape, integrating security practices through a DevSecOps approach is essential for fast and secure releases. Current vulnerability assessment practices often rely on manual, labor-intensive, and subjective investigations, leading to inconsistencies in the threat analysis. To mitigate these issues, we establish an automated pipeline that leverages Natural Language Processing (NLP) to minimize human intervention and associated biases. By mapping real-world vulnerabilities to predefined threat lists with a standardized input format, our approach is the first to enable iterative, quantitative, and efficient assessments, generating reliable threat scores for both individual vulnerabilities and entire system components within O-RAN. We illustrate the effectiveness of our framework through an example implementation for O-RAN, showcasing how continuous security testing can integrate into automated testing pipelines to address the unique security challenges of this paradigm shift in telecommunications.
+ oai:arXiv.org:2601.13681v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Felix Klement, Alessandro Brighente, Michele Polese, Mauro Conti, Stefan Katzenbeisser
+
+
+ CodeContests-O: Powering LLMs via Feedback-Driven Iterative Test Case Generation
+ https://arxiv.org/abs/2601.13682
+ arXiv:2601.13682v1 Announce Type: new
+Abstract: The rise of reasoning models necessitates large-scale verifiable data, for which programming tasks serve as an ideal source. However, while competitive programming platforms provide abundant problems and solutions, high-quality test cases for verification remain scarce. Existing approaches attempt to synthesize test cases using Large Language Models (LLMs), but rely solely on the model's intrinsic generation capabilities without external feedback, frequently resulting in insufficiently diverse cases. To address this limitation, we propose a $\textbf{Feedback-Driven Iterative Framework}$ for comprehensive test case construction. Specifically, our method leverages the LLM to generate initial test cases, executes them against known correct and incorrect solutions, and utilizes the failed results as feedback to guide the LLM in refining the test cases toward high fidelity and discriminability. We then apply this method to the CodeContests dataset to construct an optimized high-quality derivative, $\textbf{CodeContests-O}$. Evaluating against the entire pool of solutions ($1.1 \times 10^7$ in total), our dataset achieves an average True Positive Rate (TPR) of $89.37\%$ and True Negative Rate (TNR) of $90.89\%$, significantly outperforming CodeContests and CodeContests+ by margins of $4.32\%$ and $9.37\%$, respectively. Furthermore, fine-tuning the Qwen2.5-7B model on CodeContests-O results in a $9.52\%$ improvement on LiveCodeBench (Pass@1). Experiments demonstrate the effectiveness of our framework and the quality of CodeContests-O. To support reproducibility and facilitate future research, we release the $\href{https://github.com/cai-jianfeng/CodeContests-O}{code}$ and $\href{https://huggingface.co/datasets/caijanfeng/CodeContests-O}{dataset}$.
+ oai:arXiv.org:2601.13682v1
+ cs.SE
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianfeng Cai, Jinhua Zhu, Ruopei Sun, Kangwen Zhao, Dongyun Xue, Mingxiao Feng, Wengang Zhou, Houqiang Li
+
+
+ Dynamic Differential Linear Attention: Enhancing Linear Diffusion Transformer for High-Quality Image Generation
+ https://arxiv.org/abs/2601.13683
+ arXiv:2601.13683v1 Announce Type: new
+Abstract: Diffusion transformers (DiTs) have emerged as a powerful architecture for high-fidelity image generation, yet the quadratic cost of self-attention poses a major scalability bottleneck. To address this, linear attention mechanisms have been adopted to reduce computational cost; unfortunately, the resulting linear diffusion transformer (LiT) models often come at the expense of generative performance, frequently producing over-smoothed attention weights that limit expressiveness. In this work, we introduce Dynamic Differential Linear Attention (DyDiLA), a novel linear attention formulation that enhances the effectiveness of LiTs by mitigating the over-smoothing issue and improving generation quality. Specifically, the novelty of DyDiLA lies in three key designs: (i) a dynamic projection module, which facilitates the decoupling of token representations by learning with dynamically assigned knowledge; (ii) a dynamic measure kernel, which provides a better similarity measurement to capture fine-grained semantic distinctions between tokens by dynamically assigning kernel functions for token processing; and (iii) a token differential operator, which enables more robust query-to-key retrieval by calculating the differences between the tokens and their corresponding information redundancy produced by the dynamic measure kernel. To capitalize on DyDiLA, we introduce a refined LiT, termed DyDi-LiT, that systematically incorporates our advancements. Extensive experiments show that DyDi-LiT consistently outperforms current state-of-the-art (SOTA) models across multiple metrics, underscoring its strong practical potential.
+ oai:arXiv.org:2601.13683v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Boyuan Cao, Xingbo Yao, Chenhui Wang, Jiaxin Ye, Yujie Wei, Hongming Shan
+
+
+ HeteroCache: A Dynamic Retrieval Approach to Heterogeneous KV Cache Compression for Long-Context LLM Inference
+ https://arxiv.org/abs/2601.13684
+ arXiv:2601.13684v1 Announce Type: new
+Abstract: The linear memory growth of the KV cache poses a significant bottleneck for LLM inference in long-context tasks. Existing static compression methods often fail to preserve globally important information, principally because they overlook the attention drift phenomenon where token significance evolves dynamically. Although recent dynamic retrieval approaches attempt to address this issue, they typically suffer from coarse-grained caching strategies and incur high I/O overhead due to frequent data transfers. To overcome these limitations, we propose HeteroCache, a training-free dynamic compression framework. Our method is built on two key insights: attention heads exhibit diverse temporal heterogeneity, and there is significant spatial redundancy among heads within the same layer. Guided by these insights, HeteroCache categorizes heads based on stability and redundancy. Consequently, we apply a fine-grained weighting strategy that allocates larger cache budgets to heads with rapidly shifting attention to capture context changes, thereby addressing the inefficiency of coarse-grained strategies. Furthermore, we employ a hierarchical storage mechanism in which a subset of representative heads monitors attention shift and triggers asynchronous, on-demand retrieval of contexts from the CPU, effectively hiding I/O latency. Finally, experiments demonstrate that HeteroCache achieves state-of-the-art performance on multiple long-context benchmarks and accelerates decoding by up to $3\times$ compared to the original model at 224K context length. Our code will be open-source.
+ oai:arXiv.org:2601.13684v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhiyuan Shi, Qibo Qiu, Feng Xue, Zhonglin Jiang, Li Yu, Jian Jiang, Xiaofei He, Wenxiao Wang
+
+
+ Understanding Mental States to Guide Social Influence in Multi-Person Group Dialogue
+ https://arxiv.org/abs/2601.13687
+ arXiv:2601.13687v1 Announce Type: new
+Abstract: Existing dynamic Theory of Mind (ToM) benchmarks mostly place language models in a passive role: the model reads a sequence of connected scenarios and reports what people believe, feel, intend, and do as these states change. In real social interaction, ToM is also used for action: a speaker plans what to say in order to shift another person's mental-state trajectory toward a goal. We introduce SocialMindChange, a benchmark that moves from tracking minds to changing minds in social interaction. Each instance defines a social context with four characters and five connected scenes. The model plays one character and generates dialogue across the five scenes to reach the target while remaining consistent with the evolving states of all participants. SocialMindChange also includes selected higher-order states. Using a structured four-step framework, we construct 1,200 social contexts, covering 6,000 scenarios and over 90,000 questions, each validated for realism and quality. Evaluations on ten state-of-the-art LLMs show that their average performance is 54.2%, below human performance. This gap suggests that current LLMs still struggle to maintain and change mental-state representations across long, linked interactions.
+ oai:arXiv.org:2601.13687v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhichao Liang, Satoshi Nakamura
+
+
+ Criminator: An Easy-to-Use XR "Crime Animator" for Rapid Reconstruction and Analysis of Dynamic Crime Scenes
+ https://arxiv.org/abs/2601.13689
+ arXiv:2601.13689v1 Announce Type: new
+Abstract: Law enforcement authorities are increasingly interested in 3D modelling for virtual crime scene reconstruction, enabling offline analysis without the cost and contamination risk of on-site investigation. Past work has demonstrated spatial relationships through static modelling, but validating the sequence of events in dynamic scenarios is crucial for solving a case. Yet, animation tools are not well suited to crime scene reconstruction and are too complex for non-experts in 3D modelling/animation. Through a co-design process with criminology experts, we designed "Criminator" - a methodological framework and XR tool that simplifies animation authoring. We evaluated this tool with participants trained in criminology (n=6) and untrained individuals (n=12). Both groups were able to successfully complete the character animation tasks and provided high usability ratings for observation tasks. Criminator has potential for hypothesis testing, demonstration, sense-making, and training. Challenges remain in how such a tool fits into the entire judicial process, with open questions about admitting animations as evidence.
+ oai:arXiv.org:2601.13689v1
+ cs.HC
+ cs.GR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3772318.3791210
+ Vahid Pooryousef, Lonni Besan\c{c}on, Maxime Cordeil, Chris Flight, Alastair M Ross AM, Richard Bassed, Tim Dwyer
+
+
+ Dr. Assistant: Enhancing Clinical Diagnostic Inquiry via Structured Diagnostic Reasoning Data and Reinforcement Learning
+ https://arxiv.org/abs/2601.13690
+ arXiv:2601.13690v1 Announce Type: new
+Abstract: Clinical Decision Support Systems (CDSSs) provide reasoning and inquiry guidance for physicians, yet they face notable challenges, including high maintenance costs and low generalization capability. Recently, Large Language Models (LLMs) have been widely adopted in healthcare due to their extensive knowledge reserves, retrieval, and communication capabilities. While LLMs show promise and excel at medical benchmarks, their diagnostic reasoning and inquiry skills are constrained. To mitigate this issue, we propose (1) Clinical Diagnostic Reasoning Data (CDRD) structure to capture abstract clinical reasoning logic, and a pipeline for its construction, and (2) the Dr. Assistant, a clinical diagnostic model equipped with clinical reasoning and inquiry skills. Its training involves a two-stage process: SFT, followed by RL with a tailored reward function. We also introduce a benchmark to evaluate both diagnostic reasoning and inquiry. Our experiments demonstrate that the Dr. Assistant outperforms open-source models and achieves competitive performance to closed-source models, providing an effective solution for clinical diagnostic inquiry guidance.
+ oai:arXiv.org:2601.13690v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yue Guo, Fanfu Wang, Jianwei Lv, Xincheng Shi, Yuchen Li, Youya Wang, Yunsheng Zeng, Yujing Liu, Yunhao Qiao, Gen Li, Junfeng Wang, Bo Yuan
+
+
+ Generative Intent Prediction Agentic AI empowered Edge Service Function Chain Orchestration
+ https://arxiv.org/abs/2601.13694
+ arXiv:2601.13694v1 Announce Type: new
+Abstract: With the development of artificial intelligence (AI), Agentic AI (AAI) based on large language models (LLMs) is gradually being applied to network management. However, in edge network environments, high user mobility and implicit service intents pose significant challenges to the passive and reactive management of traditional AAI. To address the limitations of existing approaches in handling dynamic demands and predicting users' implicit intents, in this paper we propose an edge service function chain (SFC) orchestration framework empowered by a Generative Intent Prediction Agent (GIPA). Our GIPA aims to shift the paradigm from passive execution to proactive prediction and orchestration. First, we construct a multidimensional intent space that includes functional preferences, QoS sensitivity, and resource requirements, enabling the mapping from unstructured natural language to quantifiable physical resource demands. Second, to cope with the complexity and randomness of intent sequences, we design an intent prediction model based on a Generative Diffusion Model (GDM), which reconstructs users' implicit intents from multidimensional context through a reverse denoising process. Finally, the predicted implicit intents are embedded as global prompts into the SFC orchestration model to guide the network in proactively and ahead-of-time optimizing SFC deployment strategies. Experiment results show that GIPA outperforms existing baseline methods in highly concurrent and highly dynamic scenarios.
+ oai:arXiv.org:2601.13694v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yan Sun, Shaoyong Guo, Sai Huang, Zhiyong Feng, Feng Qi, Xuesong Qiu
+
+
+ OptiSQL: Executable SQL Generation from Optical Tokens
+ https://arxiv.org/abs/2601.13695
+ arXiv:2601.13695v1 Announce Type: new
+Abstract: Executable SQL generation is typically studied in text-to-SQL settings, where tables are provided as fully linearized textual schemas and contents. While effective, this formulation assumes access to structured text and incurs substantial token overhead, which is misaligned with many real-world scenarios where tables appear as visual artifacts in documents or webpages. We investigate whether compact optical representations can serve as an efficient interface for executable semantic parsing. We present OptiSQL, a vision-driven framework that generates executable SQL directly from table images and natural language questions using compact optical tokens. OptiSQL leverages an OCR-oriented visual encoder to compress table structure and content into a small set of optical tokens and fine-tunes a pretrained decoder for SQL generation while freezing the encoder to isolate representation sufficiency. Experiments on a visualized version of Spider 2.0-Snow show that OptiSQL retains strong execution accuracy while reducing table input tokens by an order of magnitude. Robustness analyses further demonstrate that optical tokens preserve essential structural information under visual perturbations.
+ oai:arXiv.org:2601.13695v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sifan Li, Hongkai Chen, Yujun Cai, Liyang Chen, Qingwen Ye, Yiwei Wang
+
+
+ Uncertainty-Aware Gradient Signal-to-Noise Data Selection for Instruction Tuning
+ https://arxiv.org/abs/2601.13697
+ arXiv:2601.13697v1 Announce Type: new
+Abstract: Instruction tuning is a standard paradigm for adapting large language models (LLMs), but modern instruction datasets are large, noisy, and redundant, making full-data fine-tuning costly and often unnecessary. Existing data selection methods either build expensive gradient datastores or assign static scores from a weak proxy, largely ignoring evolving uncertainty, and thus missing a key source of LLM interpretability. We propose GRADFILTERING, an objective-agnostic, uncertainty-aware data selection framework that utilizes a small GPT-2 proxy with a LoRA ensemble and aggregates per-example gradients into a Gradient Signal-to-Noise Ratio (G-SNR) utility. Our method matches or surpasses random subsets and strong baselines in most LLM-as-a-judge evaluations as well as in human assessment. Moreover, GRADFILTERING-selected subsets converge faster than competitive filters under the same compute budget, reflecting the benefit of uncertainty-aware scoring.
+ oai:arXiv.org:2601.13697v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhihang Yuan, Chengyu Yue, Long Huang, Litu Ou, Lei Shi
+
+
+ Does Privacy Always Harm Fairness? Data-Dependent Trade-offs via Chernoff Information Neural Estimation
+ https://arxiv.org/abs/2601.13698
+ arXiv:2601.13698v1 Announce Type: new
+Abstract: Fairness and privacy are two vital pillars of trustworthy machine learning. Despite extensive research on these individual topics, the relationship between fairness and privacy has received significantly less attention. In this paper, we utilize the information-theoretic measure Chernoff Information to highlight the data-dependent nature of the relationship among the triad of fairness, privacy, and accuracy. We first define the Noisy Chernoff Difference, a tool that allows us to analyze the relationship among the triad simultaneously. We then show that for synthetic data, this value behaves in three distinct ways, depending on the distribution of the data. We highlight the data distributions involved in these cases and explore their fairness and privacy implications. Additionally, we show that the Noisy Chernoff Difference acts as a proxy for the steepness of the fairness-accuracy curves. Finally, we propose a method for estimating Chernoff Information on data from unknown distributions and utilize this framework to examine the triad dynamic on real datasets. This work builds towards a unified understanding of the fairness-privacy-accuracy relationship and highlights its data-dependent nature.
+ oai:arXiv.org:2601.13698v1
+ cs.LG
+ cs.AI
+ cs.IT
+ math.IT
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Arjun Nichani, Hsiang Hsu, Chun-Fu (Richard) Chen, Haewon Jeong
+
+
+ DistilMOS: Layer-Wise Self-Distillation For Self-Supervised Learning Model-Based MOS Prediction
+ https://arxiv.org/abs/2601.13700
+ arXiv:2601.13700v1 Announce Type: new
+Abstract: With the advancement of self-supervised learning (SSL), fine-tuning pretrained SSL models for mean opinion score (MOS) prediction has achieved state-of-the-art performance. However, during fine-tuning, these SSL-based MOS prediction models often suffer from catastrophic forgetting of the pretrained knowledge and tend to overfit the training set, resulting in poor generalization performance. In this study, we propose DistilMOS, a novel method that learns to predict not only MOS but also token IDs obtained by clustering the hidden representations of each layer in the pretrained SSL model. These layer-wise token targets serve as self-distillation signals that enable the MOS prediction model to extract rich internal knowledge from SSL models, enhancing both prediction accuracy and generalization capability. Experimental evaluations demonstrate that our method significantly outperforms standard SSL-based MOS prediction models on both in-domain and out-of-domain evaluations, verifying the effectiveness and practicality of the proposed method.
+ oai:arXiv.org:2601.13700v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianing Yang, Wataru Nakata, Yuki Saito, Hiroshi Saruwatari
+
+
+ IGAA: Intent-Driven General Agentic AI for Edge Services Scheduling using Generative Meta Learning
+ https://arxiv.org/abs/2601.13702
+ arXiv:2601.13702v1 Announce Type: new
+Abstract: Agentic AI (AAI), which extends Large Language Models with enhanced reasoning capabilities, has emerged as a promising paradigm for autonomous edge service scheduling. However, user mobility creates highly dynamic service demands in edge networks, and existing service scheduling agents often lack generalization capabilities for new scenarios. Therefore, this paper proposes a novel Intent-Driven General Agentic AI (IGAA) framework. Leveraging a meta-learning paradigm, IGAA enables AAI to continuously learn from prior service scheduling experiences to achieve generalized scheduling capabilities. Particularly, IGAA incorporates three core mechanisms. First, we design a Network-Service-Intent matrix mapping method to allow agents to simulate novel scenarios and generate training datasets. Second, we present an easy-to-hard generalization learning scheme with two customized algorithms, namely Resource Causal Effect-aware Transfer Learning (RCETL) and Action Potential Optimality-aware Transfer Learning (APOTL). These algorithms help IGAA adapt to new scenarios. Furthermore, to prevent catastrophic forgetting during continual IGAA learning, we propose a Generative Intent Replay (GIR) mechanism that synthesizes historical service data to consolidate prior capabilities. Finally, to mitigate the effect of LLM hallucinations on scenario simulation, we incorporate a scenario evaluation and correction model to guide agents in generating rational scenarios and datasets. Extensive experiments demonstrate IGAA's strong generalization and scalability. Specifically, IGAA enables rapid adaptation by transferring learned policies to analogous new scenarios, such as applying latency-sensitive patterns from real-time computing to optimize novel Internet of Vehicles (IoV) services. Compared to scenario-specific methods, IGAA keeps the intent-satisfaction rate gap within 3.81%.
+ oai:arXiv.org:2601.13702v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yan Sun, Yinqiu Liu, Shaoyong Guo, Ruichen Zhang, Feng Qi, Xuesong Qiu, Weifeng Gong, Dusit Niyato, Qihui Wu
+
+
+ Performance and Complexity Trade-off Optimization of Speech Models During Training
+ https://arxiv.org/abs/2601.13704
+ arXiv:2601.13704v1 Announce Type: new
+Abstract: In speech machine learning, neural network models are typically designed by choosing an architecture with fixed layer sizes and structure. These models are then trained to maximize performance on metrics aligned with the task's objective. While the overall architecture is usually guided by prior knowledge of the task, the sizes of individual layers are often chosen heuristically. However, this approach does not guarantee an optimal trade-off between performance and computational complexity; consequently, post hoc methods such as weight quantization or model pruning are typically employed to reduce computational cost. This occurs because stochastic gradient descent (SGD) methods can only optimize differentiable functions, while factors influencing computational complexity, such as layer sizes and floating-point operations per second (FLOP/s), are non-differentiable and require modifying the model structure during training. We propose a reparameterization technique based on feature noise injection that enables joint optimization of performance and computational complexity during training using SGD-based methods. Unlike traditional pruning methods, our approach allows the model size to be dynamically optimized for a target performance-complexity trade-off, without relying on heuristic criteria to select which weights or structures to remove. We demonstrate the effectiveness of our method through three case studies, including a synthetic example and two practical real-world applications: voice activity detection and audio anti-spoofing. The code related to our work is publicly available to encourage further research.
+ oai:arXiv.org:2601.13704v1
+ cs.SD
+ cs.AI
+ cs.LG
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Esteban G\'omez, Tom B\"ackstr\"om
+
+
+ Reasoning or Pattern Matching? Probing Large Vision-Language Models with Visual Puzzles
+ https://arxiv.org/abs/2601.13705
+ arXiv:2601.13705v1 Announce Type: new
+Abstract: Puzzles have long served as compact and revealing probes of human cognition, isolating abstraction, rule discovery, and systematic reasoning with minimal reliance on prior knowledge. Leveraging these properties, visual puzzles have recently emerged as a powerful diagnostic tool for evaluating the reasoning abilities of Large Vision-Language Models (LVLMs), offering controlled, verifiable alternatives to open-ended multimodal benchmarks. This survey provides a unified perspective of visual puzzle reasoning in LVLMs. We frame visual puzzles through a common abstraction and organize existing benchmarks by the reasoning mechanisms they target (inductive, analogical, algorithmic, deductive, and geometric/spatial), thereby linking puzzle design to the cognitive operations required for solving. Synthesizing empirical evidence across these categories, we identify consistent limitations in current models, including brittle generalization, tight entanglement between perception and reasoning, and a persistent gap between fluent explanations and faithful execution. By framing visual puzzles as diagnostic instruments rather than task formats, this survey elaborates on the state of LVLM reasoning and outlines key directions for future benchmarks and reasoning-aware multimodal systems.
+ oai:arXiv.org:2601.13705v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Maria Lymperaiou, Vasileios Karampinis, Giorgos Filandrianos, Angelos Vlachos, Chrysoula Zerva, Athanasios Voulodimos
+
+
+ ParkingTwin: Training-Free Streaming 3D Reconstruction for Parking-Lot Digital Twins
+ https://arxiv.org/abs/2601.13706
+ arXiv:2601.13706v1 Announce Type: new
+Abstract: High-fidelity parking-lot digital twins provide essential priors for path planning, collision checking, and perception validation in Automated Valet Parking (AVP). Yet robot-oriented reconstruction faces a trilemma: sparse forward-facing views cause weak parallax and ill-posed geometry; dynamic occlusions and extreme lighting hinder stable texture fusion; and neural rendering typically needs expensive offline optimization, violating edge-side streaming constraints. We propose ParkingTwin, a training-free, lightweight system for online streaming 3D reconstruction. First, OSM-prior-driven geometric construction uses OpenStreetMap semantic topology to directly generate a metric-consistent TSDF, replacing blind geometric search with deterministic mapping and avoiding costly optimization. Second, geometry-aware dynamic filtering employs a quad-modal constraint field (normal/height/depth consistency) to reject moving vehicles and transient occlusions in real time. Third, illumination-robust fusion in CIELAB decouples luminance and chromaticity via adaptive L-channel weighting and depth-gradient suppression, reducing seams under abrupt lighting changes. ParkingTwin runs at 30+ FPS on an entry-level GTX 1660. On a 68,000 m^2 real-world dataset, it achieves SSIM 0.87 (+16.0%), delivers about 15x end-to-end speedup, and reduces GPU memory by 83.3% compared with state-of-the-art 3D Gaussian Splatting (3DGS) that typically requires high-end GPUs (RTX 4090D). The system outputs explicit triangle meshes compatible with Unity/Unreal digital-twin pipelines. Project page: https://mihoutao-liu.github.io/ParkingTwin/
+ oai:arXiv.org:2601.13706v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xinhao Liu, Yu Wang, Xiansheng Guo, Gordon Owusu Boateng, Yu Cao, Haonan Si, Xingchen Guo, Nirwan Ansari
+
+
+ Attention-space Contrastive Guidance for Efficient Hallucination Mitigation in LVLMs
+ https://arxiv.org/abs/2601.13707
+ arXiv:2601.13707v1 Announce Type: new
+Abstract: Hallucinations in large vision-language models (LVLMs) often arise when language priors dominate over visual evidence, causing object misidentification and visually inconsistent descriptions. We address this issue by framing hallucination mitigation as contrastive guidance, steering generation toward visually grounded and semantically faithful text. This approach regulates the model's internal behavior by reducing over-dependence on language priors and contrasting visually grounded with language-only representations. We propose Attention-space Contrastive Guidance (ACG), a single-pass mechanism that operates within self-attention layers to construct both vision-language and language-only attention paths in a single forward computation. This integration enables computationally efficient guidance directly embedded in the model's representation contextualization. To correct approximation bias introduced by the single-pass formulation, we further apply an orthogonalized correction that removes components aligned with the language-only path, selectively amplifying visual contributions. Experiments on the CHAIR and POPE benchmarks show that ACG achieves state-of-the-art faithfulness and caption quality while significantly reducing computational cost. Our method establishes a principled and efficient alternative, reducing latency by up to 2x compared to prior contrastive decoding methods that require multiple forward passes.
+ oai:arXiv.org:2601.13707v1
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yujin Jo, Sangyoon Bae, Taesup Kim
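The orthogonalized correction described in the abstract above can be illustrated numerically. This is a hypothetical plain-Python sketch of removing the component aligned with the language-only path (our reading of the abstract, not the authors' implementation; vectors and names are made up):

```python
# Illustrative sketch of an orthogonalized correction: subtract from the
# vision-language representation v_vl its projection onto the
# language-only representation v_lang, keeping the visually grounded part.

def orthogonalize(v_vl, v_lang):
    """Remove from v_vl the component aligned with v_lang
    (vectors as plain Python lists of floats)."""
    dot = sum(a * b for a, b in zip(v_vl, v_lang))
    norm_sq = sum(b * b for b in v_lang)
    return [a - (dot / norm_sq) * b for a, b in zip(v_vl, v_lang)]

# The component along the language-only direction [1, 0] is removed.
print(orthogonalize([2.0, 1.0], [1.0, 0.0]))  # -> [0.0, 1.0]
```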
+
+
+ Hidden in Plain Text: Measuring LLM Deception Quality Against Human Baselines Using Social Deduction Games
+ https://arxiv.org/abs/2601.13709
+ arXiv:2601.13709v1 Announce Type: new
+Abstract: Large Language Model (LLM) agents are increasingly used in many applications, raising concerns about their safety. While previous work has shown that LLMs can deceive in controlled tasks, less is known about their ability to deceive using natural language in social contexts. In this paper, we study deception in the Social Deduction Game (SDG) Mafia, where success is dependent on deceiving others through conversation. Unlike previous SDG studies, we use an asynchronous multi-agent framework which better simulates realistic social contexts. We simulate 35 Mafia games with GPT-4o LLM agents. We then create a Mafia Detector using GPT-4-Turbo to analyze game transcripts without player role information to predict the mafia players. We use prediction accuracy as a surrogate marker for deception quality. We compare this prediction accuracy to that of 28 human games and a random baseline. Results show that the Mafia Detector's mafia prediction accuracy is lower on LLM games than on human games. The result is consistent regardless of the game days and the number of mafias detected. This indicates that LLMs blend in better and thus deceive more effectively. We also release a dataset of LLM Mafia transcripts to support future research. Our findings underscore both the sophistication and risks of LLM deception in social contexts.
+ oai:arXiv.org:2601.13709v1
+ cs.AI
+ cs.CL
+ cs.CY
+ cs.HC
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Christopher Kao, Vanshika Vats, James Davis
+
+
+ Who Should Have Surgery? A Comparative Study of GenAI vs Supervised ML for CRS Surgical Outcome Prediction
+ https://arxiv.org/abs/2601.13710
+ arXiv:2601.13710v1 Announce Type: new
+Abstract: Artificial intelligence has reshaped medical imaging, yet the use of AI on clinical data for prospective decision support remains limited. We study pre-operative prediction of clinically meaningful improvement in chronic rhinosinusitis (CRS), defining success as a more than 8.9-point reduction in SNOT-22 at 6 months (MCID). In a prospectively collected cohort where all patients underwent surgery, we ask whether models using only pre-operative clinical data could have identified those who would have poor outcomes, i.e., those who should have avoided surgery. We benchmark supervised ML (logistic regression, tree ensembles, and an in-house MLP) against generative AI (ChatGPT, Claude, Gemini, Perplexity), giving each the same structured inputs and constraining outputs to binary recommendations with confidence. Our best ML model (MLP) achieves 85% accuracy with superior calibration and decision-curve net benefit. GenAI models underperform on discrimination and calibration in the zero-shot setting. Notably, GenAI justifications align with clinician heuristics and the MLP's feature importance, repeatedly highlighting baseline SNOT-22, CT/endoscopy severity, polyp phenotype, and psychological/pain comorbidities. We provide a reproducible tabular-to-GenAI evaluation protocol and subgroup analyses. Findings support an ML-first, GenAI-augmented workflow: deploy calibrated ML for primary triage of surgical candidacy, with GenAI as an explainer to enhance transparency and shared decision-making.
+ oai:arXiv.org:2601.13710v1
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sayeed Shafayet Chowdhury, Snehasis Mukhopadhyay, Shiaofen Fang, Vijay R. Ramakrishnan
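The MCID-based outcome label described in the abstract above is simple to state in code. A minimal sketch under our reading of the abstract (the function name and example values are hypothetical, not from the study):

```python
# Hypothetical sketch of the study's outcome definition: surgical
# "success" is a SNOT-22 reduction of more than 8.9 points (the MCID)
# at 6 months post-operation.

MCID = 8.9

def surgery_success(snot22_pre, snot22_post_6mo):
    """Return True when the 6-month SNOT-22 reduction exceeds the MCID."""
    return (snot22_pre - snot22_post_6mo) > MCID

print(surgery_success(45.0, 30.0))  # 15-point reduction -> True
print(surgery_success(45.0, 40.0))  # 5-point reduction  -> False
```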
+
+
+ GerAV: Towards New Heights in German Authorship Verification using Fine-Tuned LLMs on a New Benchmark
+ https://arxiv.org/abs/2601.13711
+ arXiv:2601.13711v1 Announce Type: new
+Abstract: Authorship verification (AV) is the task of determining whether two texts were written by the same author and has been studied extensively, predominantly for English data. In contrast, large-scale benchmarks and systematic evaluations for other languages remain scarce. We address this gap by introducing GerAV, a comprehensive benchmark for German AV comprising over 600k labeled text pairs. GerAV is built from Twitter and Reddit data, with the Reddit part further divided into in-domain and cross-domain message-based subsets, as well as a profile-based subset. This design enables controlled analysis of the effects of data source, topical domain, and text length. Using the provided training splits, we conduct a systematic evaluation of strong baselines and state-of-the-art models and find that our best approach, a fine-tuned large language model, outperforms recent baselines by up to 0.09 absolute F1 score and surpasses GPT-5 in a zero-shot setting by 0.08. We further observe a trade-off between specialization and generalization: models trained on specific data types perform best under matching conditions but generalize less well across data regimes, a limitation that can be mitigated by combining training sources. Overall, GerAV provides a challenging and versatile benchmark for advancing research on German and cross-domain AV.
+ oai:arXiv.org:2601.13711v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Lotta Kiefer, Christoph Leiter, Sotaro Takeshita, Elena Schmidt, Steffen Eger
+
+
+ Nonlinear compressive reduced basis approximation : when Taylor meets Kolmogorov
+ https://arxiv.org/abs/2601.13712
+ arXiv:2601.13712v1 Announce Type: new
+Abstract: This paper investigates model reduction methods for efficiently approximating the solution of parameter-dependent PDEs with a multi-parameter vector $\vec{\mu} \in \mathbb{R}^p$. In cases where the Kolmogorov $N$-width decays fast enough, it is effective to approximate the solution as a sum of $N$ separable terms, each being the product of a parameter-dependent coefficient and a space-dependent function. This leads to reduced-order models with $N$ degrees of freedom and complexity of order ${\mathcal O}(N^3)$.
+ However, when the $N$-width decays slowly, $N$ must be large to achieve acceptable accuracy, making cubic complexity prohibitive. The linear complexity measure in terms of Kolmogorov width must be replaced by the Gelfand width, with its associated sensing number. Recent nonlinear approaches based on this notion decompose the $N$ coordinates into two groups: $n$ free variables and $\overline{n}$ dependent variables, where the latter are nonlinear functions of the former ($N= n+\overline n$). Several works have focused on cases where these $\overline{n}$ functions are homogeneous quadratic forms of the $n$ variables, with optimization strategies for choosing $n$ given a target accuracy.
+ A rigorous analysis of the local sensing number is carried out, showing that $n = p$ is optimal and appropriate, at least locally, around a reference point. In practical scenarios involving wide parameter ranges, the condition $p\le n \le p + k$ (with $k$ small) is valid and more robust from continuity arguments. Additionally, the assumption of a quadratic mapping, while justified in a local sense, becomes insufficient. More expressive nonlinear mappings-including those using machine learning-become necessary. This work contributes a theoretical foundation for such strategies and highlights the need for further investigations to push back the Kolmogorov Barrier.
+ oai:arXiv.org:2601.13712v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Joubine Aghili, Hassan Ballout, Yvon Maday, Christophe Prud'homme
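The separable approximation with $N$ degrees of freedom and the nonlinear split $N = n + \overline{n}$ described in the abstract above can be sketched as follows (a standard reduced-basis form consistent with the abstract's notation, not an excerpt from the paper):

```latex
% Rank-N separable (reduced-basis) approximation: N parameter-dependent
% coefficients c_i paired with N space-dependent modes \phi_i.
u(x,\vec{\mu}) \;\approx\; \sum_{i=1}^{N} c_i(\vec{\mu})\,\phi_i(x),
\qquad \vec{\mu} \in \mathbb{R}^{p}.
% In the nonlinear regime the N coordinates split as N = n + \overline{n},
% with the \overline{n} dependent coefficients given by (e.g. quadratic)
% mappings of the n free ones:
c_{n+j}(\vec{\mu}) = g_j\big(c_1(\vec{\mu}),\dots,c_n(\vec{\mu})\big),
\qquad j = 1,\dots,\overline{n}.
```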
+
+
+ SWE-Tester: Training Open-Source LLMs for Issue Reproduction in Real-World Repositories
+ https://arxiv.org/abs/2601.13713
+ arXiv:2601.13713v1 Announce Type: new
+Abstract: Software testing is crucial for ensuring the correctness and reliability of software systems. Automated generation of issue reproduction tests from natural language issue descriptions enhances developer productivity by simplifying root cause analysis, promotes test-driven development -- "test first, write code later", and can be used for improving the effectiveness of automated issue resolution systems like coding agents. Existing methods proposed for this task predominantly rely on closed-source LLMs, with limited exploration of open models. To address this, we propose SWE-Tester -- a novel pipeline for training open-source LLMs to generate issue reproduction tests. First, we curate a high-quality training dataset of 41K instances from 2.6K open-source GitHub repositories and use it to train LLMs of varying sizes and families. The fine-tuned models achieve absolute improvements of up to 10\% in success rate and 21\% in change coverage on SWT-Bench Verified. Further analysis shows consistent improvements with increased inference-time compute, more data, and larger models. These results highlight the effectiveness of our framework for advancing open-source LLMs in this domain.
+ oai:arXiv.org:2601.13713v1
+ cs.SE
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Aditya Bharat Soni, Rajat Ghosh, Vaishnavi Bhargava, Valerie Chen, Debojyoti Dutta
+
+
+ MVGD-Net: A Novel Motion-aware Video Glass Surface Detection Network
+ https://arxiv.org/abs/2601.13715
+ arXiv:2601.13715v1 Announce Type: new
+Abstract: Glass surfaces, ubiquitous in both daily life and professional environments, present a potential threat to vision-based systems, such as robot and drone navigation. To address this challenge, most recent studies have shown significant interest in Video Glass Surface Detection (VGSD). We observe that objects in the reflection (or transmission) layer appear farther from the glass surfaces. Consequently, in video motion scenarios, the notable reflected (or transmitted) objects on the glass surface move slower than objects in non-glass regions within the same spatial plane, and this motion inconsistency can effectively reveal the presence of glass surfaces. Based on this observation, we propose a novel network, named MVGD-Net, for detecting glass surfaces in videos by leveraging motion inconsistency cues. Our MVGD-Net features three novel modules: the Cross-scale Multimodal Fusion Module (CMFM), which integrates extracted spatial features and estimated optical flow maps, and the History Guided Attention Module (HGAM) and Temporal Cross Attention Module (TCAM), both of which further enhance temporal features. A Temporal-Spatial Decoder (TSD) is also introduced to fuse the spatial and temporal features for generating the glass region mask. Furthermore, for training our network, we also propose a large-scale dataset, which comprises 312 diverse glass scenarios with a total of 19,268 frames. Extensive experiments demonstrate that our MVGD-Net outperforms relevant state-of-the-art methods.
+ oai:arXiv.org:2601.13715v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yiwei Lu, Hao Huang, Tao Yan
+
+
+ Simulated Ignorance Fails: A Systematic Study of LLM Behaviors on Forecasting Problems Before Model Knowledge Cutoff
+ https://arxiv.org/abs/2601.13717
+ arXiv:2601.13717v1 Announce Type: new
+Abstract: Evaluating LLM forecasting capabilities is constrained by a fundamental tension: prospective evaluation offers methodological rigor but prohibitive latency, while retrospective forecasting (RF) -- evaluating on already-resolved events -- faces rapidly shrinking clean evaluation data as SOTA models possess increasingly recent knowledge cutoffs. Simulated Ignorance (SI), prompting models to suppress pre-cutoff knowledge, has emerged as a potential solution. We provide the first systematic test of whether SI can approximate True Ignorance (TI). Across 477 competition-level questions and 9 models, we find that SI fails systematically: (1) cutoff instructions leave a 52% performance gap between SI and TI; (2) chain-of-thought reasoning fails to suppress prior knowledge, even when reasoning traces contain no explicit post-cutoff references; (3) reasoning-optimized models exhibit worse SI fidelity despite superior reasoning trace quality. These findings demonstrate that prompts cannot reliably "rewind" model knowledge. We conclude that RF on pre-cutoff events is methodologically flawed; we recommend against using SI-based retrospective setups to benchmark forecasting capabilities.
+ oai:arXiv.org:2601.13717v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zehan Li, Yuxuan Wang, Ali El Lahib, Ying-Jieh Xia, Xinyu Pi
+
+
+ Hierarchical Long Video Understanding with Audiovisual Entity Cohesion and Agentic Search
+ https://arxiv.org/abs/2601.13719
+ arXiv:2601.13719v1 Announce Type: new
+Abstract: Long video understanding presents significant challenges for vision-language models due to extremely long context windows. Existing solutions relying on naive chunking strategies with retrieval-augmented generation, typically suffer from information fragmentation and a loss of global coherence. We present HAVEN, a unified framework for long-video understanding that enables coherent and comprehensive reasoning by integrating audiovisual entity cohesion and hierarchical video indexing with agentic search. First, we preserve semantic consistency by integrating entity-level representations across visual and auditory streams, while organizing content into a structured hierarchy spanning global summary, scene, segment, and entity levels. Then we employ an agentic search mechanism to enable dynamic retrieval and reasoning across these layers, facilitating coherent narrative reconstruction and fine-grained entity tracking. Extensive experiments demonstrate that our method achieves good temporal coherence, entity consistency, and retrieval efficiency, establishing a new state-of-the-art with an overall accuracy of 84.1% on LVBench. Notably, it achieves outstanding performance in the challenging reasoning category, reaching 80.1%. These results highlight the effectiveness of structured, multimodal reasoning for comprehensive and context-consistent understanding of long-form videos.
+ oai:arXiv.org:2601.13719v1
+ cs.CV
+ cs.AI
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xinlei Yin, Xiulian Peng, Xiao Li, Zhiwei Xiong, Yan Lu
+
+
+ OP-Bench: Benchmarking Over-Personalization for Memory-Augmented Personalized Conversational Agents
+ https://arxiv.org/abs/2601.13722
+ arXiv:2601.13722v1 Announce Type: new
+Abstract: Memory-augmented conversational agents enable personalized interactions using long-term user memory and have gained substantial traction. However, existing benchmarks primarily focus on whether agents can recall and apply user information, while overlooking whether such personalization is used appropriately. In fact, agents may overuse personal information, producing responses that feel forced, intrusive, or socially inappropriate to users. We refer to this issue as \emph{over-personalization}. In this work, we formalize over-personalization into three types: Irrelevance, Repetition, and Sycophancy, and introduce \textbf{OP-Bench}, a benchmark of 1,700 verified instances constructed from long-horizon dialogue histories. Using \textbf{OP-Bench}, we evaluate multiple large language models and memory-augmentation methods, and find that over-personalization is widespread when memory is introduced. Further analysis reveals that agents tend to retrieve and over-attend to user memories even when unnecessary. To address this issue, we propose \textbf{Self-ReCheck}, a lightweight, model-agnostic memory filtering mechanism that mitigates over-personalization while preserving personalization performance. Our work takes an initial step toward more controllable and appropriate personalization in memory-augmented dialogue systems.
+ oai:arXiv.org:2601.13722v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yulin Hu, Zimo Long, Jiahe Guo, Xingyu Sui, Xing Fu, Weixiang Zhao, Yanyan Zhao, Bing Qin
+
+
+ Facial Spatiotemporal Graphs: Leveraging the 3D Facial Surface for Remote Physiological Measurement
+ https://arxiv.org/abs/2601.13724
+ arXiv:2601.13724v1 Announce Type: new
+Abstract: Facial remote photoplethysmography (rPPG) methods estimate physiological signals by modeling subtle color changes on the 3D facial surface over time. However, existing methods fail to explicitly align their receptive fields with the 3D facial surface-the spatial support of the rPPG signal. To address this, we propose the Facial Spatiotemporal Graph (STGraph), a novel representation that encodes facial color and structure using 3D facial mesh sequences-enabling surface-aligned spatiotemporal processing. We introduce MeshPhys, a lightweight spatiotemporal graph convolutional network that operates on the STGraph to estimate physiological signals. Across four benchmark datasets, MeshPhys achieves state-of-the-art or competitive performance in both intra- and cross-dataset settings. Ablation studies show that constraining the model's receptive field to the facial surface acts as a strong structural prior, and that surface-aligned, 3D-aware node features are critical for robustly encoding facial surface color. Together, the STGraph and MeshPhys constitute a novel, principled modeling paradigm for facial rPPG, enabling robust, interpretable, and generalizable estimation. Code is available at https://samcantrill.github.io/facial-stgraph-rppg/ .
+ oai:arXiv.org:2601.13724v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sam Cantrill, David Ahmedt-Aristizabal, Lars Petersson, Hanna Suominen, Mohammad Ali Armin
+
+
+ Foundational VeriFast: Pragmatic Certification of Verification Tool Results through Hinted Mirroring
+ https://arxiv.org/abs/2601.13727
+ arXiv:2601.13727v1 Announce Type: new
+Abstract: VeriFast is a leading tool for the modular formal verification of correctness properties of single-threaded and multi-threaded C and Rust programs. It verifies a program by symbolically executing each function in isolation, exploiting user-annotated preconditions, postconditions, and loop invariants written in a form of separation logic, and using a separation logic-based symbolic representation of memory. However, the tool itself, written in roughly 30K lines of OCaml code, has not been formally verified. Therefore, bugs in the tool could cause it to falsely report the correctness of the input program. We here report on an early result extending VeriFast to emit, upon successful verification of a Rust program, a Rocq proof script that proves correctness of the program with respect to a Rocq-encoded axiomatic semantics of Rust. This significantly enhances VeriFast's applicability in safety-critical domains. We apply hinted mirroring: we record key information from VeriFast's symbolic execution run, and use it to direct a replay of the run in Rocq.
+ oai:arXiv.org:2601.13727v1
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bart Jacobs
+
+
+ On Temperature-Constrained Non-Deterministic Machine Translation: Potential and Evaluation
+ https://arxiv.org/abs/2601.13729
+ arXiv:2601.13729v1 Announce Type: new
+Abstract: In recent years, the non-deterministic properties of language models have garnered considerable attention and have shown a significant influence on real-world applications. However, such properties remain under-explored in machine translation (MT), a complex, non-deterministic NLP task. In this study, we systematically evaluate modern MT systems and identify temperature-constrained Non-Deterministic MT (ND-MT) as a distinct phenomenon. Additionally, we demonstrate that ND-MT exhibits significant potential in addressing the multi-modality issue that has long challenged MT research and provides higher-quality candidates than Deterministic MT (D-MT) under temperature constraints. However, ND-MT introduces new challenges in evaluating system performance. Specifically, the evaluation framework designed for D-MT fails to yield consistent evaluation results when applied to ND-MT. We further investigate this emerging challenge by evaluating five state-of-the-art ND-MT systems across three open datasets using both lexical-based and semantic-based metrics at varying sampling sizes. The results reveal a bucket effect (a weakest-link phenomenon) across these systems: the lowest-quality candidate generated by ND-MT consistently determines the overall system ranking across different sampling sizes for all reasonable metrics. Furthermore, we propose the ExpectoSample strategy to automatically assess the reliability of evaluation metrics for selecting robust ND-MT systems.
+ oai:arXiv.org:2601.13729v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Weichuan Wang, Mingyang Liu, Linqi Song, Chen Ma
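The weakest-link ranking behavior reported in the abstract above can be illustrated with a toy example. This is a hypothetical sketch (not the paper's code; system names and scores are invented):

```python
# Illustrative sketch of the reported bucket effect: under
# non-deterministic sampling, a system's overall ranking tracks the
# quality of its *worst* sampled candidate rather than its best.

def rank_by_worst_candidate(system_scores):
    """Given {system: [candidate quality scores]}, rank systems in
    descending order of their minimum (worst-candidate) score."""
    return sorted(system_scores,
                  key=lambda s: min(system_scores[s]),
                  reverse=True)

scores = {
    "sysA": [0.80, 0.78, 0.75],  # consistently decent candidates
    "sysB": [0.95, 0.90, 0.40],  # great best case, poor worst case
}
print(rank_by_worst_candidate(scores))  # -> ['sysA', 'sysB']
```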
+
+
+ Breaking the Data Barrier in Learning Symbolic Computation: A Case Study on Variable Ordering Suggestion for Cylindrical Algebraic Decomposition
+ https://arxiv.org/abs/2601.13731
+ arXiv:2601.13731v1 Announce Type: new
+Abstract: Symbolic computation, powered by modern computer algebra systems, has important applications in mathematical reasoning through exact deep computations. The efficiency of symbolic computation is largely constrained by such deep computations in high dimension. This creates a fundamental barrier on labelled data acquisition if leveraging supervised deep learning to accelerate symbolic computation. Cylindrical algebraic decomposition (CAD) is a pillar symbolic computation method for reasoning with first-order logic formulas over reals with many applications in formal verification and automatic theorem proving. Variable orderings have a huge impact on its efficiency. Impeded by the difficulty to acquire abundant labelled data, existing learning-based approaches are only competitive with the best expert-based heuristics. In this work, we address this problem by designing a series of intimately connected tasks for which a large amount of annotated data can be easily obtained. We pre-train a Transformer model with these data and then fine-tune it on the datasets for CAD ordering. Experiments on publicly available CAD ordering datasets show that on average the orderings predicted by the new model are significantly better than those suggested by the best heuristic methods.
+ oai:arXiv.org:2601.13731v1
+ cs.SC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Rui-Juan Jing, Yuegang Zhao, Changbo Chen
+
+
+ SUNSET -- A Sensor-fUsioN based semantic SegmEnTation exemplar for ROS-based self-adaptation
+ https://arxiv.org/abs/2601.13732
+ arXiv:2601.13732v1 Announce Type: new
+Abstract: The fact that robots are getting deployed more often in dynamic environments, together with the increasing complexity of their software systems, raises the need for self-adaptive approaches. In these environments, robotic software systems increasingly operate amid (1) uncertainties whose symptoms are easy to observe but whose root causes are ambiguous, or (2) multiple uncertainties that appear concurrently. We present SUNSET, a ROS2-based exemplar that enables rigorous, repeatable evaluation of architecture-based self-adaptation in such conditions. It implements a sensor fusion semantic-segmentation pipeline driven by a trained Machine Learning (ML) model whose input preprocessing can be perturbed to induce realistic performance degradations. The exemplar exposes five observable symptoms, each of which can be caused by different root causes, and supports concurrent uncertainties spanning self-healing and self-optimisation. SUNSET includes the segmentation pipeline, a trained ML model, uncertainty-injection scripts, a baseline controller, and step-by-step integration and evaluation documentation to facilitate reproducible studies and fair comparison.
+ oai:arXiv.org:2601.13732v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Andreas Wiedholz, Rafael Paintner, Julian Glei{\ss}ner, Alwin Hoffmann, Tobias Huber
+
+
+ Towards robust long-context understanding of large language model via active recap learning
+ https://arxiv.org/abs/2601.13734
+ arXiv:2601.13734v1 Announce Type: new
+Abstract: In this paper, we propose active recap learning (ARL), a framework for enhancing large language models (LLMs) in understanding long contexts. ARL enables models to revisit and summarize earlier content through targeted sequence construction during continued pretraining and retrospective summarization at inference. First, we identify key tokens in a prepared long context based on loss gaps between long and short forward contexts, find the most relevant preceding paragraphs, and then summarize them using an LLM. Second, ARL equips models with the ability to autonomously generate and utilize these retrospective summaries during inference, thereby establishing a recursive memory mechanism across paragraphs. Experimental results show substantial gains, with ARL achieving a 26.8% improvement on RULER and a 9.44% improvement on LongBench. Overall, ARL offers a simple yet effective continued-pretraining-based approach to strengthen long-context understanding, advancing scalable memory augmentation in LLMs.
+ oai:arXiv.org:2601.13734v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chenyu Hui
+
+
+ Reasoning or Fluency? Dissecting Probabilistic Confidence in Best-of-N Selection
+ https://arxiv.org/abs/2601.13735
+ arXiv:2601.13735v1 Announce Type: new
+Abstract: Probabilistic confidence metrics are increasingly adopted as proxies for reasoning quality in Best-of-N selection, under the assumption that higher confidence reflects higher reasoning fidelity. In this work, we challenge this assumption by investigating whether these metrics truly capture inter-step causal dependencies necessary for valid reasoning. We introduce three classes of inter-step causality perturbations that systematically disrupt dependencies between reasoning steps while preserving local fluency. Surprisingly, across diverse model families and reasoning benchmarks, we find that selection accuracy degrades only marginally under these disruptions. Even severe interventions, such as applying hard attention masks that directly prevent the model from attending to prior reasoning steps, do not substantially reduce selection performance. These findings provide strong evidence that current probabilistic metrics are largely insensitive to logical structure, and primarily capture surface-level fluency or in-distribution priors instead. Motivated by this gap, we propose a contrastive causality metric that explicitly isolates inter-step causal dependencies, and demonstrate that it yields more faithful output selection than existing probability-based approaches.
+ oai:arXiv.org:2601.13735v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hojin Kim, Jaehyung Kim
+
+
+ RIM Hand : A Robotic Hand with an Accurate Carpometacarpal Joint and Nitinol-Supported Skeletal Structure
+ https://arxiv.org/abs/2601.13737
+ arXiv:2601.13737v1 Announce Type: new
+Abstract: This paper presents the flexible RIM Hand, a biomimetic robotic hand that precisely replicates the carpometacarpal (CMC) joints and employs superelastic Nitinol wires throughout its skeletal framework. By modeling the full carpal-to-metacarpal anatomy, the design enables realistic palm deformation through tendon-driven fingers while enhancing joint restoration and supporting the skeletal structure with Nitinol-based dorsal extensors. A flexible silicone skin further increases contact friction and contact area, enabling stable grasps of diverse objects. Experiments show that the palm can deform up to 28%, matching human hand flexibility, while achieving more than twice the payload capacity and three times the contact area compared to a rigid palm design. The RIM Hand thus offers improved dexterity, compliance, and anthropomorphism, making it promising for prosthetic and service-robot applications.
+ oai:arXiv.org:2601.13737v1
+ cs.RO
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Joon Lee, Jeongyoon Han, Doyoung Kim, Seokhwan Jeong
+
+
+ Dimension-First Evaluation of Speech-to-Speech Models with Structured Acoustic Cues
+ https://arxiv.org/abs/2601.13742
+ arXiv:2601.13742v1 Announce Type: new
+Abstract: Large Language Model (LLM) judges exhibit strong reasoning capabilities but are limited to textual content. This leaves current automatic Speech-to-Speech (S2S) evaluation methods reliant on opaque and expensive Audio Language Models (ALMs). In this work, we propose TRACE (Textual Reasoning over Audio Cues for Evaluation), a novel framework that enables LLM judges to reason over audio cues to achieve cost-efficient and human-aligned S2S evaluation. To demonstrate the strength of the framework, we first introduce a Human Chain-of-Thought (HCoT) annotation protocol to improve the diagnostic capability of existing judge benchmarks by separating evaluation into explicit dimensions: content (C), voice quality (VQ), and paralinguistics (P). Using this data, TRACE constructs a textual blueprint of inexpensive audio signals and prompts an LLM to render dimension-wise judgments, fusing them into an overall rating via a deterministic policy. TRACE achieves higher agreement with human raters than ALMs and transcript-only LLM judges while being significantly more cost-effective. We will release the HCoT annotations and the TRACE framework to enable scalable and human-aligned S2S evaluation.
+ oai:arXiv.org:2601.13742v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Arjun Chandra, Kevin Miller, Venkatesh Ravichandran, Constantinos Papayiannis, Venkatesh Saligrama
+
+
+ Counterexample Classification against Signal Temporal Logic Specifications
+ https://arxiv.org/abs/2601.13743
+ arXiv:2601.13743v1 Announce Type: new
+Abstract: Signal Temporal Logic (STL) has been widely adopted as a specification language for specifying desirable behaviors of hybrid systems. By monitoring a given STL specification, we can detect the executions that violate it, which are often referred to as counterexamples. In practice, these counterexamples may arise from different causes and thus are relevant to different system defects. To effectively address this, we need a proper criterion for classifying these counterexamples, by which we can comprehend the possible violation patterns and the distributions of these counterexamples with respect to the patterns. In this paper, we propose a classification criterion that uses parametric signal temporal logic (PSTL) to represent each class. Under this formalism, identifying the classes of a counterexample requires finding proper parameter values of PSTL that enable a class to include the counterexample. To improve the efficiency of class identification, we further derive an inclusion relation between different classes, and then propose a binary search-like approach over it that significantly prunes the number of classes that must be queried. We implement a prototype tool and experimentally evaluate its effectiveness on two widely-studied systems.
+ oai:arXiv.org:2601.13743v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Zhenya Zhang, Parv Kapoor, Jie An, Eunsuk Kang
+
+
+ Variational Dual-path Attention Network for CSI-Based Gesture Recognition
+ https://arxiv.org/abs/2601.13745
+ arXiv:2601.13745v1 Announce Type: new
+Abstract: Wi-Fi gesture recognition based on Channel State Information (CSI) is challenged by high-dimensional noise and resource constraints on edge devices. Prevailing end-to-end models tightly couple feature extraction with classification, overlooking the inherent time-frequency sparsity of CSI and leading to redundancy and poor generalization. To address this, this paper proposes a lightweight feature preprocessing module--the Variational Dual-path Attention Network (VDAN). It performs structured feature refinement through frequency-domain filtering and temporal detection. Variational inference is introduced to model the uncertainty in attention weights, thereby enhancing robustness to noise. The design principles of the module are explained from the perspectives of the information bottleneck and regularization. Experiments on a public dataset demonstrate that the learned attention weights align with the physical sparse characteristics of CSI, verifying its interpretability. This work provides an efficient and explainable front-end processing solution for resource-constrained wireless sensing systems.
+ oai:arXiv.org:2601.13745v1
+ cs.NI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ N. Zhang
+
+
+ EEG-Titans: Long-Horizon Seizure Forecasting via Dual-Branch Attention and Neural Memory
+ https://arxiv.org/abs/2601.13748
+ arXiv:2601.13748v1 Announce Type: new
+Abstract: Accurate epileptic seizure prediction from electroencephalography (EEG) remains challenging because pre-ictal dynamics may span long time horizons while clinically relevant signatures can be subtle and transient. Many deep learning models face a persistent trade-off between capturing local spatiotemporal patterns and maintaining informative long-range context when operating on ultra-long sequences. We propose EEG-Titans, a dual-branch architecture that incorporates a modern neural memory mechanism for long-context modeling. The model combines sliding-window attention to capture short-term anomalies with a recurrent memory pathway that summarizes slower, progressive trends over time. On the CHB-MIT scalp EEG dataset, evaluated under a chronological holdout protocol, EEG-Titans achieves 99.46% average segment-level sensitivity across 18 subjects. We further analyze safety-first operating points on artifact-prone recordings and show that a hierarchical context strategy extending the receptive field for high-noise subjects can markedly reduce false alarms (down to 0.00 FPR/h in an extreme outlier) without sacrificing sensitivity. These results indicate that memory-augmented long-context modeling can provide robust seizure forecasting under clinically constrained evaluation.
+ oai:arXiv.org:2601.13748v1
+ cs.LG
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Tien-Dat Pham, Xuan-The Tran
+
+
+ Pro-AI Bias in Large Language Models
+ https://arxiv.org/abs/2601.13749
+ arXiv:2601.13749v1 Announce Type: new
+Abstract: Large language models (LLMs) are increasingly employed for decision-support across multiple domains. We investigate whether these models display a systematic preferential bias in favor of artificial intelligence (AI) itself. Across three complementary experiments, we find consistent evidence of pro-AI bias. First, we show that LLMs disproportionately recommend AI-related options in response to diverse advice-seeking queries, with proprietary models doing so almost deterministically. Second, we demonstrate that models systematically overestimate salaries for AI-related jobs relative to closely matched non-AI jobs, with proprietary models overestimating AI salaries by an additional 10 percentage points. Finally, probing internal representations of open-weight models reveals that ``Artificial Intelligence'' exhibits the highest similarity to generic prompts for academic fields under positive, negative, and neutral framings alike, indicating valence-invariant representational centrality. These patterns suggest that LLM-generated advice and valuation can systematically skew choices and perceptions in high-stakes decisions.
+ oai:arXiv.org:2601.13749v1
+ cs.CL
+ cs.AI
+ cs.CY
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Benaya Trabelsi, Jonathan Shaki, Sarit Kraus
+
+
+ HiT: History-Injection Transformers for Onboard Continuous Flood Change Detection
+ https://arxiv.org/abs/2601.13751
+ arXiv:2601.13751v1 Announce Type: new
+Abstract: Natural disaster monitoring through continuous satellite observation requires processing multi-temporal data under strict operational constraints. This paper addresses flood detection, a critical application for hazard management, by developing an onboard change detection system that operates within the memory and computational limits of small satellites. We propose a History Injection mechanism for Transformer models (HiT) that maintains historical context from previous observations while reducing data storage by over 99\% relative to the original image size. Moreover, testing on the STTORM-CD flood dataset confirms that the HiT mechanism within the Prithvi-tiny foundation model maintains detection accuracy compared to the bitemporal baseline. The proposed HiT-Prithvi model achieved 43 FPS on a Jetson Orin Nano, representative onboard hardware used in nanosats. This work establishes a practical framework for satellite-based continuous monitoring of natural disasters, supporting real-time hazard assessment without dependency on ground-based processing infrastructure. The architecture as well as model checkpoints are available at https://github.com/zaitra/HiT-change-detection
+ oai:arXiv.org:2601.13751v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Daniel Kyselica, Jon\'a\v{s} Herec, Oliver Kutis, Rado Pito\v{n}\'ak
+
+
+ Finding RELIEF: Shaping Reasoning Behavior without Reasoning Supervision via Belief Engineering
+ https://arxiv.org/abs/2601.13752
+ arXiv:2601.13752v1 Announce Type: new
+Abstract: Large reasoning models (LRMs) have achieved remarkable success in complex problem-solving, yet they often suffer from computational redundancy or reasoning unfaithfulness. Current methods for shaping LRM behavior typically rely on reinforcement learning or fine-tuning with gold-standard reasoning traces, a paradigm that is both computationally expensive and difficult to scale. In this paper, we reveal that LRMs possess latent \textit{reasoning beliefs} that internally track their own reasoning traits, which can be captured through simple logit probing. Building upon this insight, we propose Reasoning Belief Engineering (RELIEF), a simple yet effective framework that shapes LRM behavior by aligning the model's self-concept with a target belief blueprint. Crucially, RELIEF completely bypasses the need for reasoning-trace supervision. It internalizes desired traits by fine-tuning on synthesized, self-reflective question-answering pairs that affirm the target belief. Extensive experiments on efficiency and faithfulness tasks demonstrate that RELIEF matches or outperforms behavior-supervised and preference-based baselines while requiring lower training costs. Further analysis validates that shifting a model's reasoning belief effectively shapes its actual behavior.
+ oai:arXiv.org:2601.13752v1
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chak Tou Leong, Dingwei Chen, Heming Xia, Qingyu Yin, Sunbowen Lee, Jian Wang, Wenjie Li
+
+
+ Research on Adaptive Inertial Control in Synchronization Systems: Based on Variational Optimization Methods and Their Applications in the Stability of Complex Networks
+ https://arxiv.org/abs/2601.13753
+ arXiv:2601.13753v1 Announce Type: new
+Abstract: To address the core problem that a fixed inertia coefficient cannot balance transient disturbance suppression and long-term stability in complex network synchronization systems, an adaptive inertia control strategy based on variational optimization is proposed. Taking the Kuramoto model with inertia as the research carrier, the analytical expression of the time-varying inertia coefficient M(t) is strictly derived by the functional variational method, and a hierarchical control structure of "benchmark inertia + disturbance feedback" is constructed to unify the minimization of the vulnerability performance function H(T) with stability constraints. A multimodal decoupling control strategy based on Laplacian eigenvector projection is designed to enhance the feedback strength of the dominant mode through eigenvalue weighting, improving control accuracy and dynamic response speed. Simulation verification is carried out on complex network systems, and the control performance of regular networks (RG), random networks (ER), small-world networks (SW), scale-free networks (SF), and spider webs (SP) under three typical disturbances (pulses, monotonic decays, and oscillatory decays) is systematically analyzed. The results show that the proposed strategy reduces H(T) of the five networks by 19%-25%, shortens the relaxation time by 15%-24%, and keeps the real parts of all system eigenvalues below -0.25 s^-1, meeting the asymptotic stability criterion. This study provides a new theoretical framework and engineering implementation scheme for the stability control of complex network synchronization systems, which can be widely applied to fields such as power grids, communication networks, and neural networks.
+ oai:arXiv.org:2601.13753v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yiwei Zhou, Zhongcheng Lei, Xiaoran Dai, Wenshan Hu, Hong Zhou
+
+
+ On Autopilot? An Empirical Study of Human-AI Teaming and Review Practices in Open Source
+ https://arxiv.org/abs/2601.13754
+ arXiv:2601.13754v1 Announce Type: new
+Abstract: Large Language Models (LLMs) increasingly automate software engineering tasks. While recent studies highlight the accelerated adoption of ``AI as a teammate'' in Open Source Software (OSS), developer interaction patterns remain under-explored. In this work, we investigated project-level guidelines and developers' interactions with AI-assisted pull requests (PRs) by expanding the AIDev dataset to include finer-grained contributor code ownership and a comparative baseline of human-created PRs. We found that over 67.5\% of AI-co-authored PRs originate from contributors without prior code ownership. Despite this, the majority of repositories lack guidelines for AI-coding agent usage. Notably, we observed a distinct interaction pattern: AI-co-authored PRs are merged significantly faster with minimal feedback. In contrast to human-created PRs where non-owner developers receive the most feedback, AI-co-authored PRs from non-owners receive the least, with approximately 80\% merged without any explicit review. Finally, we discuss implications for developers and researchers.
+ oai:arXiv.org:2601.13754v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haoyu Gao, Peerachai Banyongrakkul, Hao Guan, Mansooreh Zahedi, Christoph Treude
+
+
+ The Limits of Conditional Volatility: Assessing Cryptocurrency VaR under EWMA and IGARCH Models
+ https://arxiv.org/abs/2601.13757
+ arXiv:2601.13757v1 Announce Type: new
+Abstract: The application of the standard static Geometric Brownian Motion (GBM) model to cryptocurrency risk management resulted in a systemic failure, evidenced by an 80.67% chance of loss at the 5% value-at-risk benchmark. This study addresses a critical literature gap by comparatively testing three conditional volatility models: the EWMA/IGARCH baseline, an IGARCH model augmented with explicit mean reversion (IGARCH + MR), and a modified EGARCH-style asymmetric shock model, within a correlated Monte Carlo VaR framework. Crucially, the analysis is applied specifically to high-beta altcoins (XRP, SOL, ADA), an asset class largely neglected by mainstream GARCH literature. Our results demonstrate that imposing stationarity (IGARCH + MR) drastically underestimates downside risk (5% value-at-risk reduced by 50%), while the asymmetric model (Model 3) leads to severe over-penalization. The EWMA/IGARCH baseline, characterized by infinite volatility persistence (alpha + beta = 1), provided the only robust conditional volatility estimate. This finding constitutes a formal rejection of the conventional financial hypotheses of volatility mean reversion and the asymmetric leverage effect in the altcoin asset class, establishing that non-stationary frameworks are a prerequisite for regulatory-grade risk modeling in this domain.
+ oai:arXiv.org:2601.13757v1
+ cs.CR
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ekleen Kaur
+
+
+ GOMPSNR: Reflourish the Signal-to-Noise Ratio Metric for Audio Generation Tasks
+ https://arxiv.org/abs/2601.13758
+ arXiv:2601.13758v1 Announce Type: new
+Abstract: In the field of audio generation, signal-to-noise ratio (SNR) has long served as an objective metric for evaluating audio quality. Nevertheless, recent studies have shown that SNR and its variants are not always highly correlated with human perception, prompting us to raise the questions: Why does SNR fail in measuring audio quality? And how can its reliability as an objective metric be improved? In this paper, we identify the inadequate measurement of phase distance as a pivotal factor and propose to reformulate SNR with specially designed phase-distance terms, yielding an improved metric named GOMPSNR. We further extend the newly proposed formulation to derive two novel categories of loss function, corresponding to magnitude-guided phase refinement and joint magnitude-phase optimization, respectively. Besides, extensive experiments are conducted to find an optimal combination of different loss functions. Experimental results on advanced neural vocoders demonstrate that our proposed GOMPSNR exhibits more reliable error measurement than SNR. Meanwhile, our proposed loss functions yield substantial improvements in model performance, and our well-chosen combination of different loss functions further optimizes the overall model capability.
+ oai:arXiv.org:2601.13758v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lingling Dai, Andong Li, Cheng Chi, Yifan Liang, Xiaodong Li, Chengshi Zheng
+
+
+ DARC: Decoupled Asymmetric Reasoning Curriculum for LLM Evolution
+ https://arxiv.org/abs/2601.13761
+ arXiv:2601.13761v1 Announce Type: new
+Abstract: Self-play with large language models has emerged as a promising paradigm for achieving self-improving artificial intelligence. However, existing self-play frameworks often suffer from optimization instability, due to (i) non-stationary objectives induced by solver-dependent reward feedback for the Questioner, and (ii) bootstrapping errors from self-generated pseudo-labels used to supervise the Solver. To mitigate these challenges, we introduce DARC (Decoupled Asymmetric Reasoning Curriculum), a two-stage framework that stabilizes the self-evolution process. First, we train the Questioner to synthesize difficulty-calibrated questions, conditioned on explicit difficulty levels and external corpora. Second, we train the Solver with an asymmetric self-distillation mechanism, where a document-augmented teacher generates high-quality pseudo-labels to supervise the student Solver that lacks document access. Empirical results demonstrate that DARC is model-agnostic, yielding an average improvement of 10.9 points across nine reasoning benchmarks and three backbone models. Moreover, DARC consistently outperforms all baselines and approaches the performance of fully supervised models without relying on human annotations. The code is available at https://github.com/RUCBM/DARC.
+ oai:arXiv.org:2601.13761v1
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shengda Fan, Xuyan Ye, Yankai Lin
+
+
+ TransMode-LLM: Feature-Informed Natural Language Modeling with Domain-Enhanced Prompting for Travel Behavior Modeling
+ https://arxiv.org/abs/2601.13763
+ arXiv:2601.13763v1 Announce Type: new
+Abstract: Understanding traveler behavior and accurately predicting travel mode choice are at the heart of transportation planning and policy-making. This study proposes TransMode-LLM, an innovative framework that integrates statistical methods with LLM-based techniques to predict travel modes from travel survey data. The framework operates in three phases: (1) statistical analysis identifies key behavioral features, (2) natural language encoding transforms structured data into contextual descriptions, and (3) LLM adaptation predicts travel mode through multiple learning paradigms, including zero-shot and one-/few-shot learning and domain-enhanced prompting. We evaluate TransMode-LLM using both general-purpose models (GPT-4o, GPT-4o-mini) and reasoning-focused models (o3-mini, o4-mini) with varying sample sizes on real-world travel survey data. Extensive experimental results demonstrate that the LLM-based approach achieves competitive accuracy compared to state-of-the-art baseline classifiers. Moreover, few-shot learning significantly improves prediction accuracy, with models like o3-mini showing consistent improvements of up to 42.9\% with 5 provided examples. However, domain-enhanced prompting shows divergent effects across LLM architectures: it improves performance for general-purpose models, with GPT-4o achieving improvements of 2.27% to 12.50%, but for reasoning-oriented models (o3-mini, o4-mini), domain knowledge enhancement does not universally improve performance. This study advances the application of LLMs in travel behavior modeling, providing promising and valuable insights for both academic research and transportation policy-making in the future.
+ oai:arXiv.org:2601.13763v1
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Meijing Zhang, Ying Xu
+
+
+ vLinear: A Powerful Linear Model for Multivariate Time Series Forecasting
+ https://arxiv.org/abs/2601.13768
+ arXiv:2601.13768v1 Announce Type: new
+Abstract: In this paper, we present \textbf{vLinear}, an effective yet efficient \textbf{linear}-based multivariate time series forecaster featuring two components: the \textbf{v}ecTrans module and the WFMLoss objective. Many state-of-the-art forecasters rely on self-attention or its variants to capture multivariate correlations, typically incurring $\mathcal{O}(N^2)$ computational complexity with respect to the number of variates $N$. To address this, we propose vecTrans, a lightweight module that utilizes a learnable vector to model multivariate correlations, reducing the complexity to $\mathcal{O}(N)$. Notably, vecTrans can be seamlessly integrated into Transformer-based forecasters, delivering up to 5$\times$ inference speedups and consistent performance gains. Furthermore, we introduce WFMLoss (Weighted Flow Matching Loss) as the objective. In contrast to typical \textbf{velocity-oriented} flow matching objectives, we demonstrate that a \textbf{final-series-oriented} formulation yields significantly superior forecasting accuracy. WFMLoss also incorporates path- and horizon-weighted strategies to focus learning on more reliable paths and horizons. Empirically, vLinear achieves state-of-the-art performance across 22 benchmarks and 124 forecasting settings. Moreover, WFMLoss serves as an effective plug-and-play objective, consistently improving existing forecasters. The code is available at https://anonymous.4open.science/r/vLinear.
+ oai:arXiv.org:2601.13768v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Wenzhen Yue, Ruohao Guo, Ji Shi, Zihan Hao, Shiyu Hu, Xianghua Ying
+
+
+ Interoperable rApp/xApp Control over O-RAN for Mobility-aware Dynamic Spectrum Allocation
+ https://arxiv.org/abs/2601.13769
+ arXiv:2601.13769v1 Announce Type: new
+Abstract: Open Radio Access Networks (O-RAN) enable the disaggregation of radio access functions and the deployment of control applications across different timescales. However, designing interoperable control schemes that jointly exploit long-term traffic awareness and near-real-time radio resource optimization remains a challenging problem, particularly under dense multi-cell interference and heterogeneous service demands. This paper proposes an interoperable rApp/xApp-driven dynamic spectrum allocation (DSA) framework for O-RAN, based on a graph-theoretic formulation of physical resource block (PRB) assignment. The proposed architecture leverages a non-real-time radio intelligent controller (Non-RT RIC) rApp to predict aggregated traffic evolution and generate high-level spectrum policies at the minutes timescale, while a near-real-time RIC (Near-RT RIC) xApp constructs a user-centric conflict graph and performs fairness-aware PRB allocation at sub-second timescales. To mitigate persistent user starvation, a conflict-aware modified proportional fair (MPF) scheduling mechanism is applied, enabling controlled interference-free PRB time-sharing. Extensive simulation results demonstrate that the proposed framework significantly improves the PRB assignment success rate (above 90%) and service-share fairness (above 85%) across different channel configurations and user demands, while maintaining architectural separation and rApp/xApp interoperability in accordance with O-RAN principles.
+ oai:arXiv.org:2601.13769v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Anastasios Giannopoulos, Sotirios Spantideas, Maria Lamprini Bartsioka, Panagiotis Trakadas
+
+
+ Look-Ahead-Bench: a Standardized Benchmark of Look-ahead Bias in Point-in-Time LLMs for Finance
+ https://arxiv.org/abs/2601.13770
+ arXiv:2601.13770v1 Announce Type: new
+Abstract: We introduce Look-Ahead-Bench, a standardized benchmark measuring look-ahead bias in Point-in-Time (PiT) Large Language Models (LLMs) within realistic and practical financial workflows. Unlike most existing approaches that primarily test internal look-ahead knowledge via Q\&A, our benchmark evaluates model behavior in practical scenarios. To distinguish genuine predictive capability from memorization-based performance, we analyze performance decay across temporally distinct market regimes, incorporating several quantitative baselines to establish performance thresholds. We evaluate prominent open-source LLMs -- Llama 3.1 (8B and 70B) and DeepSeek 3.2 -- against a family of Point-in-Time LLMs (Pitinf-Small, Pitinf-Medium, and the frontier-level model Pitinf-Large) from PiT-Inference. Results reveal significant look-ahead bias in standard LLMs, as measured with alpha decay, unlike Pitinf models, which demonstrate improved generalization and reasoning abilities as they scale in size. This work establishes a foundation for the standardized evaluation of temporal bias in financial LLMs and provides a practical framework for identifying models suitable for real-world deployment. Code is available on GitHub: https://github.com/benstaf/lookaheadbench
+ oai:arXiv.org:2601.13770v1
+ cs.AI
+ cs.CL
+ cs.LG
+ q-fin.CP
+ q-fin.GN
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mostapha Benhenda (LAGA)
+
+
+ A Blockchain-Oriented Software Engineering Architecture for Carbon Credit Certification Systems
+ https://arxiv.org/abs/2601.13772
+ arXiv:2601.13772v1 Announce Type: new
+Abstract: Carbon credit systems have emerged as a policy tool to incentivize emission reductions and support the transition to clean energy. Reliable carbon-credit certification depends on mechanisms that connect actual, measured renewable-energy production to verifiable emission-reduction records. Although blockchain and IoT technologies have been applied to emission monitoring and trading, existing work offers limited support for certification processes, particularly for small and medium-scale renewable installations. This paper introduces a blockchain-based carbon-credit certification architecture, demonstrated through a 100 kWp photovoltaic case study, that integrates real-time IoT data collection, edge-level aggregation, and secure on-chain storage on a permissioned blockchain with smart contracts. Unlike approaches focused on trading mechanisms, the proposed system aligns with European legislation and voluntary carbon-market standards, clarifying the practical requirements and constraints that apply to photovoltaic operators. The resulting architecture provides a structured pathway for generating verifiable carbon-credit records and supporting third-party verification.
+ oai:arXiv.org:2601.13772v1
+ cs.SE
+ cs.DC
+ cs.SI
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Matteo Vaccargiu, Azmat Ullah, Pierluigi Gallo
+
+
+ Orthogonium: A Unified, Efficient Library of Orthogonal and 1-Lipschitz Building Blocks
+ https://arxiv.org/abs/2601.13776
+ arXiv:2601.13776v1 Announce Type: new
+Abstract: Orthogonal and 1-Lipschitz neural network layers are essential building blocks in robust deep learning architectures, crucial for certified adversarial robustness, stable generative models, and reliable recurrent networks. Despite significant advancements, existing implementations remain fragmented, limited, and computationally demanding. To address these issues, we introduce Orthogonium, a unified, efficient, and comprehensive PyTorch library providing orthogonal and 1-Lipschitz layers. Orthogonium provides access to standard convolution features -- including support for strides, dilation, grouping, and transposed convolutions -- while maintaining strict mathematical guarantees. Its optimized implementations reduce overhead on large-scale benchmarks such as ImageNet. Moreover, rigorous testing within the library has uncovered critical errors in existing implementations, emphasizing the importance of standardized and reliable tools. Orthogonium thus significantly lowers adoption barriers, enabling scalable experimentation and integration across diverse applications requiring orthogonality and robust Lipschitz constraints. Orthogonium is available at https://github.com/deel-ai/orthogonium.
+ oai:arXiv.org:2601.13776v1
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ ICML 2025 Workshop on Championing Open-source Development in Machine Learning (CODEML '25), Jul 2025, Vancouver, Canada
+ Thibaut Boissin (IRIT-MISFIT), Franck Mamalet (ANITI, IMT), Valentin Lafargue (ANITI, IMT), Mathieu Serrurier (IRIT-MISFIT)
+
+
+ Sample Efficient Learning of Body-Environment Interaction of an Under-Actuated System
+ https://arxiv.org/abs/2601.13777
+ arXiv:2601.13777v1 Announce Type: new
+Abstract: Geometric mechanics provides valuable insights into how biological and robotic systems use changes in shape to move by mechanically interacting with their environment. In high-friction environments, it establishes that the entire interaction is captured by the ``motility map''. Here we compare methods for learning the motility map from motion tracking data of a physical robot created specifically to test these methods by having under-actuated degrees of freedom and a hard-to-model interaction with its substrate. We compared four modeling approaches in terms of their ability to predict body velocity from shape change within the same gait, across gaits, and across speeds. Our results show a trade-off between simpler methods, which are superior on small training datasets, and more sophisticated methods, which are superior when more training data is available.
+ oai:arXiv.org:2601.13777v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zvi Chapnik, Yizhar Or, Shai Revzen
+
+
+ Fit Matters: Format-Distance Alignment Improves Conversational Search
+ https://arxiv.org/abs/2601.13778
+ arXiv:2601.13778v1 Announce Type: new
+Abstract: Existing conversational search systems can synthesize information into responses, but they lack principled ways to adapt response formats to users' cognitive states. This paper investigates whether aligning format and distance, which involves matching information granularity and media to users' psychological distance, improves user experience. In a between-subjects experiment (N=464) on travel planning, we crossed two distance dimensions (temporal/spatial x near/far) with four formats varying in granularity (abstract/concrete) and media (text/image-and-text). The experiment established that format--distance alignment reduced users' risk perceptions while increasing decision confidence, perceptions of information usefulness, ease of use, enjoyment, and credibility, and adoption intentions. Concrete formats imposed higher cognitive load, but yielded productive effort when matched to near-distance tasks. Images enhanced concrete but not abstract text, suggesting multimedia benefits depend on complementarity. These findings establish format--distance alignment as a distinctive and important design dimension, enabling systems to tailor response formats to users' psychological distance.
+ oai:arXiv.org:2601.13778v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1145/3772318.3790317
+ Yitian Yang, Yugin Tan, Jung-Tai King, Yang Chen Lin, Yi-Chieh Lee
+
+
+ Principled Latent Diffusion for Graphs via Laplacian Autoencoders
+ https://arxiv.org/abs/2601.13780
+ arXiv:2601.13780v1 Announce Type: new
+Abstract: Graph diffusion models achieve state-of-the-art performance in graph generation but suffer from quadratic complexity in the number of nodes -- and much of their capacity is wasted modeling the absence of edges in sparse graphs. Inspired by latent diffusion in other modalities, a natural idea is to compress graphs into a low-dimensional latent space and perform diffusion there. However, unlike images or text, graph generation requires nearly lossless reconstruction, as even a single error in decoding an adjacency matrix can render the entire sample invalid. This challenge has remained largely unaddressed. We propose LG-Flow, a latent graph diffusion framework that directly overcomes these obstacles. A permutation-equivariant autoencoder maps each node into a fixed-dimensional embedding from which the full adjacency is provably recoverable, enabling near-lossless reconstruction for both undirected graphs and DAGs. The dimensionality of this latent representation scales linearly with the number of nodes, eliminating the quadratic bottleneck and making it feasible to train larger and more expressive models. In this latent space, we train a Diffusion Transformer with flow matching, enabling efficient and expressive graph generation. Our approach achieves competitive results against state-of-the-art graph diffusion models, while achieving up to $1000\times$ speed-up.
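The abstract states that the full adjacency is provably recoverable from fixed-dimensional node embeddings. As a hedged illustration of what such recovery can look like in general, here is an inner-product decoder (an assumption for illustration, not LG-Flow's actual decoder):

```python
import numpy as np

def decode_adjacency(Z, threshold=0.5):
    """Toy decoder: declare an edge (i, j) when sigmoid(z_i . z_j)
    exceeds the threshold. Symmetric by construction, no self-loops."""
    logits = Z @ Z.T
    probs = 1.0 / (1.0 + np.exp(-logits))
    A = (probs > threshold).astype(int)
    np.fill_diagonal(A, 0)
    return A

# three toy node embeddings: nodes 0 and 1 point the same way, node 2 opposes
Z = np.array([[2.0, 0.0], [2.0, 0.1], [-2.0, 0.0]])
A = decode_adjacency(Z)   # edge only between nodes 0 and 1
```

The latent representation scales linearly in the number of nodes, while the quadratic adjacency is only materialized at decode time.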
+ oai:arXiv.org:2601.13780v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Antoine Siraudin, Christopher Morris
+
+
+ Area-universality in Outerplanar Graphs
+ https://arxiv.org/abs/2601.13781
+ arXiv:2601.13781v1 Announce Type: new
+Abstract: A rectangular floorplan is a partition of a rectangle into smaller rectangles such that no four rectangles meet at a single point. Rectangular floorplans arise naturally in a variety of applications, including VLSI design, architectural layout, and cartography, where efficient and flexible spatial subdivisions are required. A central concept in this domain is that of area-universality: a floorplan (or more generally, a rectangular layout) is area-universal if, for any assignment of target areas to its constituent rectangles, there exists a combinatorially equivalent layout that realizes these areas.
+ In this paper, we investigate the structural conditions under which an outerplanar graph admits an area-universal rectangular layout. We establish a necessary and sufficient condition for area-universality in this setting, thereby providing a complete characterization of admissible outerplanar graphs. Furthermore, we present an algorithmic construction that guarantees that the resulting layout is always area-universal.
+ oai:arXiv.org:2601.13781v1
+ cs.CG
+ math.CO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ravi Suthar, Raveena, Krishnendra Shekhawat
+
+
+ Demystifying Starlink Network Performance under Vehicular Mobility with Dynamic Beam Switching
+ https://arxiv.org/abs/2601.13790
+ arXiv:2601.13790v1 Announce Type: new
+Abstract: In the last few years, considerable research efforts have focused on measuring and improving Starlink network performance, especially for user terminals (UTs) in stationary scenarios. However, the performance of Starlink networks in mobility settings, particularly with frequent changes in the UT's orientation, and the impact of environmental factors, such as transient obstructions, have not been thoroughly studied, leaving gaps in understanding the causes of performance degradation. Recently, researchers have started identifying the communicating satellites to evaluate satellite selection strategies and the impact on network performance. However, existing Starlink satellite identification methods only work in stationary, obstruction-free scenarios, as they do not account for UT mobility or obstructions, nor detect dynamic beam switching events. In this paper, we reveal that the UT can perform multiple dynamic beam switching attempts to connect to different satellites when the UT-satellite link is degraded. This degradation can occur either due to the loss of line-of-sight (LoS) from changes in the field of view (FOV) or obstructions, or due to poor signal quality, extending UT-satellite handovers beyond the well-known 15-second regular handover interval. We propose a mobility-aware Starlink satellite identification method that detects dynamic beam switching events and plausibly explains network performance using the UT's diagnostic data and connected satellite information. Our findings demystify mobile Starlink network performance degradation, which is crucial for enhancing the end-to-end performance of transport-layer protocols in diverse application scenarios.
+ oai:arXiv.org:2601.13790v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinwei Zhao, Jack Baude, Ali Ahangarpour, Vaibhava Krishna Devulapalli, Sree Ganesh Lalitaditya Divakarla, Zhi-Li Zhang, Jianping Pan
+
+
+ PAtt: A Pattern Attention Network for ETA Prediction Using Historical Speed Profiles
+ https://arxiv.org/abs/2601.13793
+ arXiv:2601.13793v1 Announce Type: new
+Abstract: In this paper, we propose an ETA (Estimated Time of Arrival) model that leverages an attention mechanism over historical road speed patterns. As autonomous driving and intelligent transportation systems become increasingly prevalent, the need for accurate and reliable ETA estimation has grown, playing a vital role in navigation, mobility planning, and traffic management. However, predicting ETA remains a challenging task due to the dynamic and complex nature of traffic flow. Traditional methods often combine real-time and historical traffic data in simplistic ways, or rely on complex rule-based computations. While recent deep learning models have shown potential, they often incur high computational costs and do not effectively capture the spatio-temporal patterns crucial for ETA prediction. ETA prediction inherently involves spatio-temporal causality, and our proposed model addresses this by leveraging attention mechanisms to extract and utilize temporal features accumulated at each spatio-temporal point along a route. This architecture enables efficient and accurate ETA estimation while keeping the model lightweight and scalable. We validate our approach using real-world driving datasets and demonstrate that it outperforms existing baselines by effectively integrating road characteristics, real-time traffic conditions, and historical speed patterns in a task-aware manner.
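A minimal sketch of attention over historical speed profiles, in the spirit of the mechanism described above (the query/key design and normalization below are illustrative assumptions, not PAtt's actual architecture):

```python
import numpy as np

def attend_speed_profiles(query, hist_profiles):
    """Scaled dot-product attention (toy sketch): weight stored
    historical speed patterns by their similarity to the current
    traffic state, then return the blended profile."""
    d = query.shape[-1]
    scores = hist_profiles @ query / np.sqrt(d)   # similarity per pattern
    weights = np.exp(scores - scores.max())       # numerically stable softmax
    weights /= weights.sum()
    return weights @ hist_profiles                # weighted speed profile

query = np.array([30.0, 25.0, 20.0]) / 50.0       # normalized recent segment speeds
hist = np.array([[0.6, 0.5, 0.4],                 # weekday rush-hour pattern
                 [0.9, 0.9, 0.9]])                # free-flow pattern
blended = attend_speed_profiles(query, hist)      # leans toward the closer pattern
```

The blended profile could then feed a lightweight head that converts per-segment speeds into travel-time estimates.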
+ oai:arXiv.org:2601.13793v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ ByeoungDo Kim, JunYeop Na, Kyungwook Tak, JunTae Kim, DongHyeon Kim, Duckky Kim
+
+
+ A Distributed Spatial Data Warehouse for AIS Data (DIPAAL)
+ https://arxiv.org/abs/2601.13795
+ arXiv:2601.13795v1 Announce Type: new
+Abstract: AIS data from ships is excellent for analyzing single-ship movements and monitoring all ships within a specific area. However, the AIS data needs to be cleaned, processed, and stored before being usable. This paper presents a system consisting of an efficient and modular ETL process for loading AIS data, as well as a distributed spatial data warehouse storing the trajectories of ships. To efficiently analyze a large set of ships, a raster approach to querying the AIS data is proposed. A spatially partitioned data warehouse with a granularized cell representation and heatmap presentation is designed, developed, and evaluated. Currently, the data warehouse stores approximately 312 million kilometers of ship trajectories and more than 8 billion rows in the largest table. It is found that searching the cell representation is faster than searching the trajectory representation. Further, we show that the spatially divided shards enable a consistently good scale-up for both cell and heatmap analytics in large areas, ranging from 354% to 1164% with a 5x increase in workers.
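The raster approach can be sketched as mapping each AIS position into a fixed grid cell, so analytics aggregate per cell instead of scanning raw trajectories. The cell size and id scheme below are illustrative assumptions, not DIPAAL's actual schema:

```python
def to_cell(lon, lat, cell_deg=0.01):
    """Map an AIS position (degrees) to a raster cell (row, col).
    A 0.01-degree cell is roughly 1 km at mid latitudes; the
    granularity here is a toy choice, not the paper's schema."""
    col = int((lon + 180.0) / cell_deg)
    row = int((lat + 90.0) / cell_deg)
    return row, col

# Two nearby position reports fall into the same cell, so a heatmap
# query only counts visits per cell rather than reading trajectories.
cell_a = to_cell(10.0001, 57.0002)
cell_b = to_cell(10.0049, 57.0048)
```

Coarser cell sizes trade spatial resolution for faster wide-area aggregation, which matches the paper's cell-versus-trajectory comparison.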
+ oai:arXiv.org:2601.13795v1
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Alex S. Klitgaard, Lau E. Josefsen, Mikael V. Mikkelsen, Kristian Torp
+
+
+ Zero-free regions and concentration inequalities for hypergraph colorings in the local lemma regime
+ https://arxiv.org/abs/2601.13796
+ arXiv:2601.13796v1 Announce Type: new
+Abstract: We show that for $q$-colorings in $k$-uniform hypergraphs with maximum degree $\Delta$, if $k\ge 50$ and $q\ge 700\Delta^{\frac{5}{k-10}}$, there is a "Lee-Yang" zero-free strip around the interval $[0,1]$ of the partition function, which includes the special case of uniform enumeration of hypergraph colorings. As an immediate consequence, we obtain Berry-Esseen type inequalities for hypergraph $q$-colorings under such conditions, demonstrating the asymptotic normality for the size of any color class in a uniformly random coloring. Our framework also extends to the study of "Fisher zeros", leading to deterministic algorithms for approximating the partition function in the zero-free region.
+ Our approach is based on extending the recent work of [Liu, Wang, Yin, Yu, STOC 2025] to general constraint satisfaction problems (CSP). We focus on partition functions defined for CSPs by introducing external fields to the variables. A key component in our approach is a projection-lifting scheme, which enables us to essentially lift information percolation type analysis for Markov chains from the real line to the complex plane. Last but not least, we also show a Chebyshev-type inequality under the sampling LLL condition for atomic CSPs.
+ oai:arXiv.org:2601.13796v1
+ cs.DS
+ cs.DM
+ math.PR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jingcheng Liu, Yixiao Yu
+
+
+ PREGEN: Uncovering Latent Thoughts in Composed Video Retrieval
+ https://arxiv.org/abs/2601.13797
+ arXiv:2601.13797v1 Announce Type: new
+Abstract: Composed Video Retrieval (CoVR) aims to retrieve a video based on a query video and a modifying text. Current CoVR methods fail to fully exploit modern Vision-Language Models (VLMs), either using outdated architectures or requiring computationally expensive fine-tuning and slow caption generation. We introduce PREGEN (PRE GENeration extraction), an efficient and powerful CoVR framework that overcomes these limitations. Our approach uniquely pairs a frozen, pre-trained VLM with a lightweight encoding model, eliminating the need for any VLM fine-tuning. We feed the query video and modifying text into the VLM and extract the hidden state of the final token from each layer. A simple encoder is then trained on these pooled representations, creating a semantically rich and compact embedding for retrieval. PREGEN significantly advances the state of the art, surpassing all prior methods on standard CoVR benchmarks with substantial gains in Recall@1 of +27.23 and +69.59. Our method demonstrates robustness across different VLM backbones and exhibits strong zero-shot generalization to more complex textual modifications, highlighting its effectiveness and semantic capabilities.
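The core feature-extraction step described above -- taking the final token's hidden state from each layer of a frozen VLM -- can be sketched as follows (the shapes and toy stand-in states are assumptions; the actual VLM and encoder are not specified in the abstract):

```python
import numpy as np

def pregen_embedding(layer_hidden_states):
    """PREGEN-style extraction (sketch): for each layer of a frozen
    VLM, keep only the hidden state of the final token, then stack
    them as input for a lightweight trainable encoder."""
    feats = [h[-1] for h in layer_hidden_states]  # last token per layer
    return np.stack(feats)                         # (n_layers, d_model)

# toy stand-in for a 4-layer VLM producing 8-dim states over 5 tokens
rng = np.random.default_rng(0)
states = [rng.standard_normal((5, 8)) for _ in range(4)]
emb = pregen_embedding(states)                     # compact retrieval feature
```

Because the VLM stays frozen, only the small encoder on top of these stacked states needs training, which is the efficiency argument the abstract makes.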
+ oai:arXiv.org:2601.13797v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Gabriele Serussi, David Vainshtein, Jonathan Kouchly, Dotan Di Castro, Chaim Baskin
+
+
+ Insight: Interpretable Semantic Hierarchies in Vision-Language Encoders
+ https://arxiv.org/abs/2601.13798
+ arXiv:2601.13798v1 Announce Type: new
+Abstract: Language-aligned vision foundation models perform strongly across diverse downstream tasks. Yet, their learned representations remain opaque, making their decision-making hard to interpret. Recent works decompose these representations into human-interpretable concepts, but provide poor spatial grounding and are limited to image classification tasks. In this work, we propose Insight, a language-aligned concept foundation model that provides fine-grained concepts, which are human-interpretable and spatially grounded in the input image. We leverage a hierarchical sparse autoencoder and a foundation model with strong semantic representations to automatically extract concepts at various granularities. Examining local co-occurrence dependencies of concepts allows us to define concept relationships. Through these relations we further improve concept naming and obtain richer explanations. On benchmark data, we show that Insight provides performance on classification and segmentation that is competitive with opaque foundation models while providing fine-grained, high-quality concept-based explanations. Code is available at https://github.com/kawi19/Insight.
+ oai:arXiv.org:2601.13798v1
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kai Wittenmayer, Sukrut Rao, Amin Parchami-Araghi, Bernt Schiele, Jonas Fischer
+
+
+ Linear viscoelastic rheological FrBD models
+ https://arxiv.org/abs/2601.13799
+ arXiv:2601.13799v1 Announce Type: new
+Abstract: In [1], a new modeling paradigm for developing rate-and-state-dependent, control-oriented friction models was introduced. The framework, termed Friction with Bristle Dynamics (FrBD), combines nonlinear analytical expressions for the friction coefficient with constitutive equations for bristle-like elements. Within the FrBD framework, this letter introduces two novel formulations based on the two most general linear viscoelastic models for solids: the Generalized Maxwell (GM) and Generalized Kelvin-Voigt (GKV) elements. Both are analyzed in terms of boundedness and passivity, revealing that these properties are satisfied for any physically meaningful parametrization. An application of passivity for control design is also illustrated, considering an example from robotics. The findings of this letter systematically integrate rate-and-state dynamic friction models with linear viscoelasticity.
+ oai:arXiv.org:2601.13799v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Luigi Romano, Ole Morten Aamo, Jan Åslund, Erik Frisk
+
+
+ A Hybridizable Discontinuous Galerkin Method for the non--local Camassa--Holm--Kadomtsev--Petviashvili equation
+ https://arxiv.org/abs/2601.13800
+ arXiv:2601.13800v1 Announce Type: new
+Abstract: This paper develops a hybridizable discontinuous Galerkin method for the two-dimensional Camassa--Holm--Kadomtsev--Petviashvili equation. The method employs Cartesian meshes with tensor-product polynomial spaces, enabling separate treatment of \(x\) and \(y\) derivatives. The non-local operator \(\partial_{x}^{-1}u_{y}\) is localized through an auxiliary variable \(v\) satisfying \(v_x = u_y\), allowing efficient element-by-element computations. We prove energy stability of the semi-discrete scheme and derive \(\mathcal{O}(h^{k+1/2})\) convergence in space. Numerical experiments validate the theoretical results and demonstrate the method's capability to accurately resolve smooth solutions and peaked solitary waves (peakons).
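The localization step stated in the abstract replaces the non-local operator by a local constraint that an HDG method can discretize element by element; in LaTeX form:

```latex
% Localizing the non-local term \partial_{x}^{-1}u_{y} via the auxiliary
% variable v, as described in the abstract:
\[
  v_x = u_y
  \quad\Longleftrightarrow\quad
  v = \partial_{x}^{-1} u_{y},
\]
% so every occurrence of \partial_{x}^{-1}u_{y} in the CH--KP equation is
% replaced by the local unknown v, and the first-order relation v_x = u_y
% is enforced elementwise alongside the other auxiliary variables of the
% HDG discretization.
```

This is why the method admits efficient element-by-element computations despite the globally coupled antiderivative in the original equation.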
+ oai:arXiv.org:2601.13800v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Mukul Dwivedi, Ruben Gutendorf, Andreas Rupp
+
+
+ HoverAI: An Embodied Aerial Agent for Natural Human-Drone Interaction
+ https://arxiv.org/abs/2601.13801
+ arXiv:2601.13801v1 Announce Type: new
+Abstract: Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction.
+ oai:arXiv.org:2601.13801v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yuhua Jin, Nikita Kuzmin, Georgii Demianchuk, Mariya Lezina, Fawad Mehboob, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Ahsan Mustafa, Dzmitry Tsetserukou
+
+
+ Habibi: Laying the Open-Source Foundation of Unified-Dialectal Arabic Speech Synthesis
+ https://arxiv.org/abs/2601.13802
+ arXiv:2601.13802v1 Announce Type: new
+Abstract: A notable gap persists in speech synthesis research and development for Arabic dialects, particularly from a unified modeling perspective. Despite its high practical value, the inherent linguistic complexity of Arabic dialects, further compounded by a lack of standardized data, benchmarks, and evaluation guidelines, steers researchers toward safer ground. To bridge this divide, we present Habibi, a suite of specialized and unified text-to-speech models that harnesses existing open-source ASR corpora to support a wide range of high- to low-resource Arabic dialects through linguistically-informed curriculum learning. Our approach outperforms the leading commercial service in generation quality, while maintaining extensibility through effective in-context learning, without requiring text diacritization. We are committed to open-sourcing the model, along with creating the first systematic benchmark for multi-dialect Arabic speech synthesis. Furthermore, by identifying the key challenges in and establishing evaluation standards for the process, we aim to provide a solid groundwork for subsequent research. Resources at https://SWivid.github.io/Habibi/ .
+ oai:arXiv.org:2601.13802v1
+ cs.CL
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yushen Chen, Junzhe Liu, Yujie Tu, Zhikang Niu, Yuzhe Liang, Kai Yu, Chunyu Qiang, Chen Zhang, Xie Chen
+
+
+ The Non-Predictability of Mispredicted Branches using Timing Information
+ https://arxiv.org/abs/2601.13804
+ arXiv:2601.13804v1 Announce Type: new
+Abstract: Branch misprediction latency is one of the most important contributors to performance degradation and wasted energy consumption in a modern core. State-of-the-art predictors generally perform very well but occasionally suffer from high Mispredictions Per Kilo Instructions (MPKI) due to hard-to-predict branches. In this work, we investigate if predicting branches using microarchitectural information, in addition to traditional branch history, can improve prediction accuracy. Our approach considers branch timing information (resolution cycle) both for older branches (in the Reorder Buffer (ROB) or recently committed) and for younger branches relative to the branch we re-predict. We propose Speculative Branch Resolution (SBR), in which, N cycles after a branch allocates in the ROB, various timing information is collected and used to re-predict. Using the gem5 simulator, we implement and perform a limit study of SBR using a TAGE-like predictor. Our experiments show that the post-alloc timing information we used was not able to yield performance gains over an unbounded TAGE-SC. However, we find two hard-to-predict branches where timing information did provide an advantage and thoroughly analyse one of them to understand why. This finding suggests that predictors may benefit from specific microarchitectural information to increase accuracy on specific hard-to-predict branches, and that overriding predictions in the backend may yet yield performance benefits, but further research is needed to determine such information vectors.
+ oai:arXiv.org:2601.13804v1
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ioannis Constantinou, Arthur Perais, Yiannakis Sazeides
+
+
+ Knowledge Graph-Assisted LLM Post-Training for Enhanced Legal Reasoning
+ https://arxiv.org/abs/2601.13806
+ arXiv:2601.13806v1 Announce Type: new
+Abstract: LLM post-training has primarily relied on large text corpora and human feedback, without capturing the structure of domain knowledge. This has caused models to struggle with complex reasoning tasks, especially in high-stakes professional domains. In law, reasoning requires a deep understanding of the relations between various legal concepts, a key component missing in current LLM post-training. In this paper, we propose a knowledge graph (KG)-assisted approach for enhancing LLMs' reasoning capability in the legal domain that is generalizable to other high-stakes domains. We model key legal concepts by following the \textbf{IRAC} (Issue, Rule, Analysis and Conclusion) framework, and construct a KG with 12K legal cases. We then produce training data using our IRAC KG, and conduct both Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) with three state-of-the-art (SOTA) LLMs (30B, 49B and 70B), varying architecture and base model family. Our post-trained models obtained better average performance on 4/5 diverse legal benchmarks (14 tasks) than baselines. In particular, our 70B DPO model achieved the best score on 4/6 reasoning tasks, among baselines and a 141B SOTA legal LLM, demonstrating the effectiveness of our KG for enhancing LLMs' legal reasoning capability.
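A toy illustration of IRAC-structured knowledge-graph triples of the kind the abstract describes (the schema, relation names, and case contents are hypothetical; the paper's 12K-case KG format is not shown):

```python
# Hypothetical IRAC-structured triples for a single legal case.
# Subject-predicate-object form; relation names are illustrative only.
case_kg = [
    ("case_001", "has_issue", "breach_of_contract"),
    ("breach_of_contract", "governed_by_rule", "impossibility_doctrine"),
    ("case_001", "has_analysis", "impracticability_defense_rejected"),
    ("case_001", "has_conclusion", "defendant_liable"),
]

def issues_of(kg, case_id):
    """Collect the Issue nodes linked to a case via has_issue edges."""
    return [o for s, p, o in kg if s == case_id and p == "has_issue"]

issues = issues_of(case_kg, "case_001")
```

Training examples for SFT/DPO could then be generated by walking such Issue-Rule-Analysis-Conclusion paths and verbalizing them into reasoning chains.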
+ oai:arXiv.org:2601.13806v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Dezhao Song, Guglielmo Bonifazi, Frank Schilder, Jonathan Richard Schwarz
+
+
+ DroneVLA: VLA based Aerial Manipulation
+ https://arxiv.org/abs/2601.13809
+ arXiv:2601.13809v1 Announce Type: new
+Abstract: As aerial platforms evolve from passive observers to active manipulators, the challenge shifts toward designing intuitive interfaces that allow non-expert users to command these systems naturally. This work introduces a novel concept for an autonomous aerial manipulation system capable of interpreting high-level natural language commands to retrieve objects and deliver them to a human user. The system integrates MediaPipe, Grounding DINO, and a Vision-Language-Action (VLA) model with a custom-built drone equipped with a 1-DOF gripper and an Intel RealSense RGB-D camera. The VLA model performs semantic reasoning to interpret the intent of a user prompt and generates a prioritized task queue for grasping relevant objects in the scene. Grounding DINO and a dynamic A* planning algorithm are used to navigate and safely relocate the object. To ensure safe and natural interaction during the handover phase, the system employs a human-centric controller driven by MediaPipe. This module provides real-time human pose estimation, allowing the drone to employ visual servoing to maintain a stable, distinct position directly in front of the user, facilitating a comfortable handover. We demonstrate the system's efficacy through real-world experiments on localization and navigation, which yielded maximum, mean Euclidean, and root-mean-square errors of 0.164 m, 0.070 m, and 0.084 m, respectively, highlighting the feasibility of VLAs for aerial manipulation operations.
+ oai:arXiv.org:2601.13809v1
+ cs.RO
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Fawad Mehboob, Monijesu James, Amir Habel, Jeffrin Sam, Miguel Altamirano Cabrera, Dzmitry Tsetserukou
+
+
+ Integrated Sensing and Communication for Low-Altitude Security
+ https://arxiv.org/abs/2601.13810
+ arXiv:2601.13810v1 Announce Type: new
+Abstract: The dense concentration of low-altitude, slow-speed, and small-size targets in the complex low-altitude environment poses significant security challenges, including failures in continuous wide-area sensing and ambiguous target intent, which existing regulatory frameworks struggle to address. Integrated sensing and communication (ISAC), a hallmark of next-generation mobile communication, offers a transformative approach to low-altitude security governance. By leveraging existing cellular infrastructure and spectrum resources, ISAC enables the construction of a seamless wide-area sensing network, supports intelligent feature extraction and intent inference, facilitates real-time collaborative decision-making, and establishes a dynamic trust authentication framework. This article systematically reviews the technical system, analyzes the security challenges, forecasts the enabling value of ISAC, and discusses the resulting open problems and challenges, thereby laying a foundation for future research and industrial implementation.
+ oai:arXiv.org:2601.13810v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ruixing Ren
+
+
+ GuideTouch: An Obstacle Avoidance Device for Visually Impaired
+ https://arxiv.org/abs/2601.13813
+ arXiv:2601.13813v1 Announce Type: new
+Abstract: Safe navigation for visually impaired individuals remains a critical challenge, especially concerning head-level obstacles, which traditional mobility aids often fail to detect. We introduce GuideTouch, a compact, affordable, standalone wearable device designed for autonomous obstacle avoidance. The system integrates two vertically aligned Time-of-Flight (ToF) sensors, enabling three-dimensional environmental perception, and four vibrotactile actuators that provide directional haptic feedback. Proximity and direction information is communicated via an intuitive 4-point vibrotactile feedback system located across the user's shoulders and upper chest. For real-world robustness, the device includes a unique centrifugal self-cleaning optical cover mechanism and a sound alarm for locating the device if it is dropped. We evaluated haptic perception accuracy across 22 participants (17 male and 5 female, aged 21-48, mean 25.7, sd 6.1). Statistical analysis confirmed a significant difference in perception accuracy between patterns. The system demonstrated high recognition accuracy, achieving an average of 92.9% for single- and double-motor (primary directional) patterns. Furthermore, preliminary experiments with 14 visually impaired users validated this interface, showing a recognition accuracy of 93.75% for primary directional cues. The results demonstrate that GuideTouch enables intuitive spatial perception and could significantly improve the safety, confidence, and autonomy of users with visual impairments during independent navigation.
+ oai:arXiv.org:2601.13813v1
+ cs.RO
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Timofei Kozlov, Artem Trandofilov, Georgii Gazaryan, Issatay Tokmurziyev, Miguel Altamirano Cabrera, Dzmitry Tsetserukou
+
+
+ From RTL to Prompt Coding: Empowering the Next Generation of Chip Designers through LLMs
+ https://arxiv.org/abs/2601.13815
+ arXiv:2601.13815v1 Announce Type: new
+Abstract: This paper presents an LLM-based learning platform for chip design education, aiming to make chip design accessible to beginners without overwhelming them with technical complexity. It represents the first educational platform that assists learners holistically across both frontend and backend design. The proposed approach integrates an LLM-based chat agent into a browser-based workflow built upon the Tiny Tapeout ecosystem. The workflow guides users from an initial design idea through RTL code generation to a tapeout-ready chip. To evaluate the concept, a case study was conducted with 18 high-school students. Within a 90-minute session they developed eight functional VGA chip designs in a 130 nm technology. Despite having no prior experience in chip design, all groups successfully implemented tapeout-ready projects. The results demonstrate the feasibility and educational impact of LLM-assisted chip design, highlighting its potential to attract and inspire early learners and significantly broaden the target audience for the field.
+ oai:arXiv.org:2601.13815v1
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lukas Krupp, Matthew Venn, Norbert Wehn
+
+
+ Discriminant Learning-based Colorspace for Blade Segmentation
+ https://arxiv.org/abs/2601.13816
+ arXiv:2601.13816v1 Announce Type: new
+Abstract: Suboptimal color representation often hinders accurate image segmentation, yet many modern algorithms neglect this critical preprocessing step. This work presents a novel multidimensional nonlinear discriminant analysis algorithm, Colorspace Discriminant Analysis (CSDA), for improved segmentation. Extending Linear Discriminant Analysis into a deep learning context, CSDA customizes color representation by maximizing multidimensional signed inter-class separability while minimizing intra-class variability through a generalized discriminative loss. To ensure stable training, we introduce three alternative losses that enable end-to-end optimization of both the discriminative colorspace and segmentation process. Experiments on wind turbine blade data demonstrate significant accuracy gains, emphasizing the importance of tailored preprocessing in domain-specific segmentation.
+ oai:arXiv.org:2601.13816v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ra\"ul P\'erez-Gonzalo, Andreas Espersen, Antonio Agudo
+
+
+ Device Association and Resource Allocation for Hierarchical Split Federated Learning in Space-Air-Ground Integrated Network
+ https://arxiv.org/abs/2601.13817
+ arXiv:2601.13817v1 Announce Type: new
+Abstract: 6G facilitates the deployment of Federated Learning (FL) in the Space-Air-Ground Integrated Network (SAGIN), yet FL confronts challenges such as resource constraints and unbalanced data distribution. To address these issues, this paper proposes a Hierarchical Split Federated Learning (HSFL) framework and derives an upper bound on its loss function. To minimize the weighted sum of training loss and latency, we formulate a joint optimization problem that integrates device association, model split layer selection, and resource allocation. We decompose the original problem into several subproblems and propose an iterative optimization algorithm for device association and resource allocation based on brute-force split point search. Simulation results demonstrate that the proposed algorithm can effectively balance training efficiency and model accuracy for FL in SAGIN.
+ oai:arXiv.org:2601.13817v1
+ cs.DC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haitao Zhao, Xiaoyu Tang, Bo Xu, Jinlong Sun, Linghao Zhang
+
+
+ Efficient Parallel $(\Delta+1)$-Edge-Coloring
+ https://arxiv.org/abs/2601.13822
+ arXiv:2601.13822v1 Announce Type: new
+Abstract: We study the $(\Delta+1)$-edge-coloring problem in the parallel $\left(\mathrm{PRAM}\right)$ model of computation. The celebrated Vizing's theorem [Viz64] states that every simple graph $G = (V,E)$ can be properly $(\Delta+1)$-edge-colored. In a seminal paper, Karloff and Shmoys [KS87] devised a parallel algorithm with time $O\left(\Delta^5\cdot\log n\cdot\left(\log^3 n+\Delta^2\right)\right)$ and $O(m\cdot\Delta)$ processors. This result was improved by Liang et al. [LSH96] to time $O\left(\Delta^{4.5}\cdot \log^3\Delta\cdot \log n + \Delta^4 \cdot\log^4 n\right)$ and $O\left(n\cdot\Delta^{3} +n^2\right)$ processors. [LSH96] claimed $O\left(\Delta^{3.5} \cdot\log^3\Delta\cdot \log n + \Delta^3\cdot \log^4 n\right)$ time, but we point out a flaw in their analysis, which, once corrected, results in the above bound. We devise a faster parallel algorithm for this fundamental problem. Specifically, our algorithm uses $O\left(\Delta^4\cdot \log^4 n\right)$ time and $O(m\cdot \Delta)$ processors. Another variant of our algorithm requires $O\left(\Delta^{4+o(1)}\cdot\log^2 n\right)$ time, and $O\left(m\cdot\Delta\cdot\log n\cdot\log^{\delta}\Delta\right)$ processors, for an arbitrarily small $\delta>0$. We also devise a few other tradeoffs between the time and the number of processors, and devise an improved algorithm for graphs with small arboricity. On the way to these results, we also provide a very fast parallel algorithm for updating $(\Delta+1)$-edge-coloring. Our algorithm for this problem is dramatically faster and simpler than the previous state-of-the-art algorithm (due to [LSH96]) for this problem.
+ oai:arXiv.org:2601.13822v1
+ cs.DS
+ cs.DC
+ cs.DM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Michael Elkin, Ariel Khuzman
+
+
+ Multi-Trace M\"uller Boundary Integral Equation for Electromagnetic Scattering by Composite Objects
+ https://arxiv.org/abs/2601.13823
+ arXiv:2601.13823v1 Announce Type: new
+Abstract: This paper introduces a boundary integral equation for time-harmonic electromagnetic scattering by composite dielectric objects. The formulation extends the classical M\"uller equation to composite structures through the global multi-trace method. The key ingredient enabling this extension is the use of the Stratton-Chu representation in the complementary region, also known as the extinction property, which augments the off-diagonal blocks of the interior representation operator. The resulting block system is composed entirely of second-kind operators. A Petrov-Galerkin (mixed) discretization using Rao-Wilton-Glisson trial functions and Buffa-Christiansen test functions is employed, yielding linear systems that remain well conditioned on dense meshes and at low frequencies without the need for additional stabilization. This reduces computational costs associated with matrix-vector multiplications and iterative solving. Numerical experiments demonstrate the accuracy of the method in computing field traces and derived quantities.
+ oai:arXiv.org:2601.13823v1
+ math.NA
+ cs.NA
+ math-ph
+ math.MP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Van Chien Le, Kristof Cools
+
+
+ ELSA: Efficient LLM-Centric Split Aggregation for Privacy-Aware Hierarchical Federated Learning over Resource-Constrained Edge Networks
+ https://arxiv.org/abs/2601.13824
+ arXiv:2601.13824v1 Announce Type: new
+Abstract: Training large language models (LLMs) at the network edge faces fundamental challenges arising from device resource constraints, severe data heterogeneity, and heightened privacy risks. To address these, we propose ELSA (Efficient LLM-centric Split Aggregation), a novel framework that systematically integrates split learning (SL) and hierarchical federated learning (HFL) for distributed LLM fine-tuning over resource-constrained edge networks. ELSA introduces three key innovations. First, it employs a task-agnostic, behavior-aware client clustering mechanism that constructs semantic fingerprints using public probe inputs and symmetric KL divergence, further enhanced by prediction-consistency-based trust scoring and latency-aware edge assignment to jointly address data heterogeneity, client unreliability, and communication constraints. Second, it splits the LLM into three parts across clients and edge servers, with the cloud used only for adapter aggregation, enabling an effective balance between on-device computation cost and global convergence stability. Third, it incorporates a lightweight communication scheme based on computational sketches combined with semantic subspace orthogonal perturbation (SS-OP) to reduce communication overhead while mitigating privacy leakage during model exchanges. Experiments across diverse NLP tasks demonstrate that ELSA consistently outperforms state-of-the-art methods in terms of adaptability, convergence behavior, and robustness, establishing a scalable and privacy-aware solution for edge-side LLM fine-tuning under resource constraints.
+ oai:arXiv.org:2601.13824v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Xiaohong Yang, Tong Xie, Minghui Liwang, Chikai Shang, Yang Lu, Zhenzhen Jiao, Liqun Fu, Seyyedali Hosseinalipour
+
+
+ MirageNet:A Secure, Efficient, and Scalable On-Device Model Protection in Heterogeneous TEE and GPU System
+ https://arxiv.org/abs/2601.13826
+ arXiv:2601.13826v1 Announce Type: new
+Abstract: As edge devices gain stronger computing power, deploying high-performance DNN models on untrusted hardware has become a practical approach to cut inference latency and protect user data privacy. Given high model training costs and user experience requirements, balancing model privacy and low runtime overhead is critical. TEEs offer a viable defense, and prior work has proposed heterogeneous GPU-TEE inference frameworks via parameter obfuscation to balance efficiency and confidentiality. However, recent studies find partial obfuscation defenses ineffective, while robust schemes cause unacceptable latency. To resolve these issues, we propose ConvShatter, a novel obfuscation scheme that achieves low latency and high accuracy while preserving model confidentiality and integrity. It leverages convolution linearity to decompose kernels into critical and common ones, inject confounding decoys, and permute channel/kernel orders. Pre-deployment, it performs kernel decomposition, decoy injection and order obfuscation, storing minimal recovery parameters securely in the TEE. During inference, the TEE reconstructs outputs of obfuscated convolutional layers. Extensive experiments show ConvShatter substantially reduces latency overhead with strong security guarantees; versus comparable schemes, it cuts overhead by 16% relative to GroupCover while maintaining accuracy on par with the original model.
+ oai:arXiv.org:2601.13826v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Huadi Zheng, Li Cheng, Yan Ding
+
+
+ Base Station Sleeping Strategy Based on Load Sharing in Ultra-Dense Networks
+ https://arxiv.org/abs/2601.13832
+ arXiv:2601.13832v1 Announce Type: new
+Abstract: To address the issues of high operational costs and low energy efficiency (EE) caused by the dense deployment of small base stations (s-BSs) in 5G ultra-dense networks (UDNs), this paper first constructs a multi-objective mathematical optimization model targeting maximizing EE and minimizing the number of active BSs. The model incorporates key constraints including BS operational state, user equipment (UE)-BS connection relationship, and load threshold, laying a theoretical foundation for the coordinated optimization of energy conservation and quality of service. Based on this model, an integrated solution combining UE-BS initial connection optimization and load-sharing based BS sleeping is proposed. In the initial connection phase, with communication quality and BS load as dual constraints, efficient matching between UEs and optimal BSs is achieved through three sequential steps: communication feasibility screening, redundant connection removal, and overload load redistribution. This resolves the problems of load imbalance and difficult identification of redundant BSs in UDNs arising from unordered initial connections. In the BS sleeping phase, a BS sleeping index, comprehensively considering UE transferability and backup BS resources, is innovatively introduced to quantify BS dormancy priority. Through a closed-loop process involving low-load BS screening, adjacent BS load evaluation, and load sharing by two takeover BSs based on their capacity, accurate dormancy of redundant BSs and collaborative load migration are realized. Simulation results in a typical UDNs scenario demonstrate that, compared with the traditional baseline scheme, the proposed solution exhibits significant advantages in convergence speed, optimization of the number of active BSs, and EE improvement.
+ oai:arXiv.org:2601.13832v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ruixing Ren, Shan Chen, Xuehan Bao, Pingzheng Ge, Dongming Wang, Junhui Zhao
+
+
+ The Role of Prosodic and Lexical Cues in Turn-Taking with Self-Supervised Speech Representations
+ https://arxiv.org/abs/2601.13835
+ arXiv:2601.13835v1 Announce Type: new
+Abstract: Fluid turn-taking remains a key challenge in human-robot interaction. Self-supervised speech representations (S3Rs) have driven many advances, but it remains unclear whether S3R-based turn-taking models rely on prosodic cues, lexical cues or both. We introduce a vocoder-based approach to control prosody and lexical cues in speech more cleanly than prior work. This allows us to probe the voice-activity projection model, an S3R-based turn-taking model. We find that prediction accuracy on prosody-matched, unintelligible noise is similar to that on clean speech. This reveals that both prosodic and lexical cues support turn-taking, but either can be used in isolation. Hence, future models may only require prosody, providing privacy and potential performance benefits. When either prosodic or lexical information is disrupted, the model exploits the other without further training, indicating they are encoded in S3Rs with limited interdependence. Results are consistent in CPC-based and wav2vec2.0 S3Rs. We discuss our findings and highlight a number of directions for future work. All code is available to support future research.
+ oai:arXiv.org:2601.13835v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sam OConnor Russell, Delphine Charuau, Naomi Harte
+
+
+ FutureOmni: Evaluating Future Forecasting from Omni-Modal Context for Multimodal LLMs
+ https://arxiv.org/abs/2601.13836
+ arXiv:2601.13836v1 Announce Type: new
+Abstract: Although Multimodal Large Language Models (MLLMs) demonstrate strong omni-modal perception, their ability to forecast future events from audio-visual cues remains largely unexplored, as existing benchmarks focus mainly on retrospective understanding. To bridge this gap, we introduce FutureOmni, the first benchmark designed to evaluate omni-modal future forecasting from audio-visual environments. The evaluated models are required to perform cross-modal causal and temporal reasoning, as well as effectively leverage internal knowledge to predict future events. FutureOmni is constructed via a scalable LLM-assisted, human-in-the-loop pipeline and contains 919 videos and 1,034 multiple-choice QA pairs across 8 primary domains. Evaluations on 13 omni-modal and 7 video-only models show that current systems struggle with audio-visual future prediction, particularly in speech-heavy scenarios, with the best accuracy of 64.8% achieved by Gemini 3 Flash. To mitigate this limitation, we curate a 7K-sample instruction-tuning dataset and propose an Omni-Modal Future Forecasting (OFF) training strategy. Evaluations on FutureOmni and popular audio-visual and video-only benchmarks demonstrate that OFF enhances future forecasting and generalization. We publicly release all code (https://github.com/OpenMOSS/FutureOmni) and datasets (https://huggingface.co/datasets/OpenMOSS-Team/FutureOmni).
+ oai:arXiv.org:2601.13836v1
+ cs.CL
+ cs.CV
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qian Chen, Jinlan Fu, Changsong Li, See-Kiong Ng, Xipeng Qiu
+
+
+ FastGHA: Generalized Few-Shot 3D Gaussian Head Avatars with Real-Time Animation
+ https://arxiv.org/abs/2601.13837
+ arXiv:2601.13837v1 Announce Type: new
+Abstract: Despite recent progress in 3D Gaussian-based head avatar modeling, efficiently generating high fidelity avatars remains a challenge. Current methods typically rely on extensive multi-view capture setups or monocular videos with per-identity optimization during inference, limiting their scalability and ease of use on unseen subjects. To overcome these efficiency drawbacks, we propose FastGHA, a feed-forward method to generate high-quality Gaussian head avatars from only a few input images while supporting real-time animation. Our approach directly learns a per-pixel Gaussian representation from the input images, and aggregates multi-view information using a transformer-based encoder that fuses image features from both DINOv3 and Stable Diffusion VAE. For real-time animation, we extend the explicit Gaussian representations with per-Gaussian features and introduce a lightweight MLP-based dynamic network to predict 3D Gaussian deformations from expression codes. Furthermore, to enhance geometric smoothness of the 3D head, we employ point maps from a pre-trained large reconstruction model as geometry supervision. Experiments show that our approach significantly outperforms existing methods in both rendering quality and inference efficiency, while supporting real-time dynamic avatar animation.
+ oai:arXiv.org:2601.13837v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xinya Ji, Sebastian Weiss, Manuel Kansy, Jacek Naruniec, Xun Cao, Barbara Solenthaler, Derek Bradley
+
+
+ A Predictive and Preventive Digital Twin Framework for Indoor Wireless Networks
+ https://arxiv.org/abs/2601.13838
+ arXiv:2601.13838v1 Announce Type: new
+Abstract: Wi-Fi networks increasingly suffer from performance degradation caused by contention-based channel access, dense deployments, and largely self-managed operation among mutually interfering access points (APs). In this paper, we propose a Digital Twin (DT) framework that captures the essential spatial and temporal characteristics of wireless channels and traffic patterns, enabling the prediction of likely future network scenarios while respecting physical constraints. Leveraging this predictive capability, we introduce two analytically derived performance upper bounds-one based on Shannon capacity and the other on latency behavior under CSMA-CA (Carrier Sense Multiple Access with Collision Avoidance)-that can be evaluated efficiently without time-consuming network simulations. By applying importance sampling to DT-generated scenarios, potentially risky network conditions can be identified within large stochastic scenario spaces. These same performance bounds are then used to proactively guide a gradient-based search for improved network configurations, with the objective of avoiding imminent performance degradation rather than pursuing globally optimal but fragile solutions. Simulation results demonstrate that the proposed approach can successfully predict time-dependent network congestion and mitigate it in advance, highlighting its potential for predictive and preventive Wi-Fi network management.
+ oai:arXiv.org:2601.13838v1
+ cs.NI
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiunn-Tsair Chen
+
+
+ DisasterVQA: A Visual Question Answering Benchmark Dataset for Disaster Scenes
+ https://arxiv.org/abs/2601.13839
+ arXiv:2601.13839v1 Announce Type: new
+Abstract: Social media imagery provides a low-latency source of situational information during natural and human-induced disasters, enabling rapid damage assessment and response. While Visual Question Answering (VQA) has shown strong performance in general-purpose domains, its suitability for the complex and safety-critical reasoning required in disaster response remains unclear. We introduce DisasterVQA, a benchmark dataset designed for perception and reasoning in crisis contexts. DisasterVQA consists of 1,395 real-world images and 4,405 expert-curated question-answer pairs spanning diverse events such as floods, wildfires, and earthquakes. Grounded in humanitarian frameworks including FEMA ESF and OCHA MIRA, the dataset includes binary, multiple-choice, and open-ended questions covering situational awareness and operational decision-making tasks. We benchmark seven state-of-the-art vision-language models and find performance variability across question types, disaster categories, regions, and humanitarian tasks. Although models achieve high accuracy on binary questions, they struggle with fine-grained quantitative reasoning, object counting, and context-sensitive interpretation, particularly for underrepresented disaster scenarios. DisasterVQA provides a challenging and practical benchmark to guide the development of more robust and operationally meaningful vision-language models for disaster response. The dataset is publicly available at https://zenodo.org/records/18267770.
+ oai:arXiv.org:2601.13839v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Aisha Al-Mohannadi, Ayisha Firoz, Yin Yang, Muhammad Imran, Ferda Ofli
+
+
+ Robust Reversible Watermarking in Encrypted Images Based on Dual-MSBs Spiral Embedding
+ https://arxiv.org/abs/2601.13840
+ arXiv:2601.13840v1 Announce Type: new
+Abstract: Robust reversible watermarking in encrypted images (RRWEI) faces an inherent challenge in simultaneously achieving robustness, reversibility, and content privacy under severely constrained embedding capacity. Existing RRWEI schemes often exhibit limited robustness against noise, lossy compression, and cropping attacks due to insufficient redundancy in the encrypted domain. To address this challenge, this paper proposes a novel RRWEI framework that couples dual most significant bit-plane (dual-MSBs) embedding with spatial redundancy and error-correcting coding. By compressing prediction-error bit-planes, sufficient embedding space and auxiliary information for lossless reconstruction are reserved. The dual-MSBs are further reorganized using a spiral embedding strategy to distribute multiple redundant watermark copies across spatially dispersed regions, enhancing robustness against both noise and spatial loss. Experimental results on standard test images demonstrate that the proposed method consistently outperforms existing schemes under the evaluated settings in robustness against Gaussian noise, JPEG compression, and diverse cropping attacks, while maintaining perfect reversibility and high embedding capacity. Compared with state-of-the-art RRWEI schemes, the proposed framework achieves substantially lower bit-error rates and more stable performance under a wide range of attack scenarios.
+ oai:arXiv.org:2601.13840v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haoyu Shen, Wen Yin, Zhaoxia Yin, Wan-Li Lyu, Xinpeng Zhang
+
+
+ Nemesis, an Escape Game in Graphs
+ https://arxiv.org/abs/2601.13841
+ arXiv:2601.13841v1 Announce Type: new
+Abstract: We define a new escape game in graphs that we call Nemesis. The game is played on a graph having a subset of vertices labeled as exits, and the goal of one of the two players, called the fugitive, is to reach one of these exit vertices. The second player, i.e. the fugitive's adversary, is called the Nemesis. Her goal is to trap the fugitive in a connected component which does not contain any exit. At each round of the game, the fugitive moves from one vertex to an adjacent vertex. Then the Nemesis deletes one edge anywhere in the graph. The game ends either when the fugitive reaches an exit or when he is in a connected component that does not contain any exit. In trees and graphs of maximum degree bounded by 3, Nemesis can be solved in linear time. We also show that a variant of the game called Blizzard, where only edges adjacent to the position of the fugitive can be deleted, also admits a linear time solution. For arbitrary graphs, we show that Nemesis is PSPACE-complete, and that it is NP-hard on planar multigraphs. We extend our results to the related Cat Herding problem, proving its PSPACE-completeness. We also prove that finding a strategy based on a full binary escape tree whose leaves are exits is NP-complete.
+ oai:arXiv.org:2601.13841v1
+ cs.DS
+ cs.CC
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Pierre Berg\'e, Antoine Dailly, Yan Gerard
+
+
+ Small Models, Big Impact: Tool-Augmented AI Agents for Wireless Network Planning
+ https://arxiv.org/abs/2601.13843
+ arXiv:2601.13843v1 Announce Type: new
+Abstract: Large Language Models (LLMs) such as ChatGPT promise revolutionary capabilities for Sixth-Generation (6G) wireless networks, but their massive computational requirements and tendency to generate technically incorrect information create deployment barriers. In this work, we introduce MAINTAINED: an autonomous artificial intelligence agent for wireless network deployment. Instead of encoding domain knowledge within model parameters, our approach orchestrates specialized computational tools for geographic analysis, signal propagation modeling, and network optimization. In a real-world case study, MAINTAINED outperforms state-of-the-art LLMs including ChatGPT-4o, Claude Sonnet 4, and DeepSeek-R1 by up to 100-fold in verified performance metrics while requiring fewer computational resources. This paradigm shift, moving from relying on parametric knowledge towards externalizing domain knowledge into verifiable computational tools, eliminates hallucination in technical specifications and enables edge-deployable Artificial Intelligence (AI) for wireless communications.
+ oai:arXiv.org:2601.13843v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yongqiang Zhang, Mustafa A. Kishk, Mohamed-Slim Alouini
+
+
+ Optimal L2 Regularization in High-dimensional Continual Linear Regression
+ https://arxiv.org/abs/2601.13844
+ arXiv:2601.13844v1 Announce Type: new
+Abstract: We study generalization in an overparameterized continual linear regression setting, where a model is trained with L2 (isotropic) regularization across a sequence of tasks. We derive a closed-form expression for the expected generalization loss in the high-dimensional regime that holds for arbitrary linear teachers. We demonstrate that isotropic regularization mitigates label noise under both single-teacher and multiple i.i.d. teacher settings, whereas prior work accommodating multiple teachers either did not employ regularization or used memory-demanding methods. Furthermore, we prove that the optimal fixed regularization strength scales nearly linearly with the number of tasks $T$, specifically as $T/\ln T$. To our knowledge, this is the first such result in theoretical continual learning. Finally, we validate our theoretical findings through experiments on linear regression and neural networks, illustrating how this scaling law affects generalization and offering a practical recipe for the design of continual learning systems.
+ oai:arXiv.org:2601.13844v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Gilad Karpel, Edward Moroshko, Ran Levinstein, Ron Meir, Daniel Soudry, Itay Evron
+
+
+ Virtual Urbanism: An AI-Driven Framework for Quantifying Urban Identity. A Tokyo-Based Pilot Study Using Diffusion-Generated Synthetic Environments
+ https://arxiv.org/abs/2601.13846
+ arXiv:2601.13846v1 Announce Type: new
+Abstract: This paper introduces Virtual Urbanism (VU), a multimodal AI-driven analytical framework for quantifying urban identity through the medium of synthetic urban replicas. The framework aims to advance computationally tractable urban identity metrics. To demonstrate feasibility, the pilot study Virtual Urbanism and Tokyo Microcosms is presented. A pipeline integrating Stable Diffusion and LoRA models was used to produce synthetic replicas of nine Tokyo areas rendered as dynamic synthetic urban sequences, excluding existing orientation markers to elicit core identity-forming elements. Human-evaluation experiments (I) assessed perceptual legitimacy of replicas; (II) quantified area-level identity; (III) derived core identity-forming elements. Results showed a mean identification accuracy of ~81%, confirming the validity of the replicas. Urban Identity Level (UIL) metric enabled assessment of identity levels across areas, while semantic analysis revealed culturally embedded typologies as core identity-forming elements, positioning VU as a viable framework for AI-augmented urban analysis, outlining a path toward automated, multi-parameter identity metrics.
+ oai:arXiv.org:2601.13846v1
+ cs.AI
+ cs.CY
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Glinskaya Maria
+
+
+ Emotion and Acoustics Should Agree: Cross-Level Inconsistency Analysis for Audio Deepfake Detection
+ https://arxiv.org/abs/2601.13847
+ arXiv:2601.13847v1 Announce Type: new
+Abstract: Audio Deepfake Detection (ADD) aims to detect spoof speech from bonafide speech. Most prior studies assume that stronger correlations within or across acoustic and emotional features imply authenticity, and thus focus on enhancing or measuring such correlations. However, existing methods often treat acoustic and emotional features in isolation or rely on correlation metrics, which overlook subtle desynchronization between them and smooth out abrupt discontinuities. To address these issues, we propose EAI-ADD, which treats cross-level emotion-acoustic inconsistency as the primary detection signal. We first project emotional and acoustic representations into a comparable space. Then we progressively integrate frame-level and utterance-level emotion features with acoustic features to capture cross-level emotion-acoustic inconsistencies across different temporal granularities. Experimental results on the ASVspoof 2019LA and 2021LA datasets demonstrate that the proposed EAI-ADD outperforms baselines, providing a more effective solution for audio anti-spoofing detection.
+ oai:arXiv.org:2601.13847v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinhua Zhang, Zhenqi Jia, Rui Liu
+
+
+ Inverting Self-Organizing Maps: A Unified Activation-Based Framework
+ https://arxiv.org/abs/2601.13851
+ arXiv:2601.13851v1 Announce Type: new
+Abstract: Self-Organizing Maps provide topology-preserving projections of high-dimensional data and have been widely used for visualization, clustering, and vector quantization. In this work, we show that the activation pattern of a SOM - the squared distances to its prototypes - can be inverted to recover the exact input under mild geometric conditions. This follows from a classical fact in Euclidean distance geometry: a point in $D$ dimensions is uniquely determined by its distances to $D{+}1$ affinely independent references. We derive the corresponding linear system and characterize the conditions under which the inversion is well-posed. Building upon this mechanism, we introduce the Manifold-Aware Unified SOM Inversion and Control (MUSIC) update rule, which enables controlled, semantically meaningful trajectories in latent space. MUSIC modifies squared distances to selected prototypes while preserving others, resulting in a deterministic geometric flow aligned with the SOM's piecewise-linear structure. Tikhonov regularization stabilizes the update rule and ensures smooth motion on high-dimensional datasets. Unlike variational or probabilistic generative models, MUSIC does not rely on sampling, latent priors, or encoder-decoder architectures. If no perturbation is applied, inversion recovers the exact input; when a target cluster or prototype is specified, MUSIC produces coherent semantic variations while remaining on the data manifold. This leads to a new perspective on data augmentation and controllable latent exploration based solely on prototype geometry. We validate the approach using synthetic Gaussian mixtures, the MNIST and the Faces in the Wild dataset. Across all settings, MUSIC produces smooth, interpretable trajectories that reveal the underlying geometry of the learned manifold, illustrating the advantages of SOM-based inversion over unsupervised clustering.
+ oai:arXiv.org:2601.13851v1
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Alessandro Londei, Matteo Benati, Denise Lanzieri, Vittorio Loreto
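The distance-geometry fact this abstract builds on (a point in $D$ dimensions is fixed by its squared distances to $D{+}1$ affinely independent references) can be sketched in a few lines of NumPy. This is an illustrative reconstruction with random prototypes, not the authors' MUSIC implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 5                                  # input dimensionality
W = rng.normal(size=(D + 1, D))        # D+1 prototypes, affinely independent w.p. 1

x = rng.normal(size=D)                 # hidden input
d = ((W - x) ** 2).sum(axis=1)        # SOM "activation": squared distances

# Each d_i = ||x||^2 - 2 w_i.x + ||w_i||^2.  Subtracting the i = 0 equation
# cancels the ||x||^2 term and leaves a linear system in x:
#   2 (w_i - w_0) . x = ||w_i||^2 - ||w_0||^2 - (d_i - d_0)
A = 2.0 * (W[1:] - W[0])
b = (W[1:] ** 2).sum(axis=1) - (W[0] ** 2).sum() - (d[1:] - d[0])
x_rec = np.linalg.solve(A, b)

assert np.allclose(x_rec, x)           # exact inversion from distances alone
```

The system is well-posed exactly when the difference vectors $w_i - w_0$ span the input space, which is the affine-independence condition the abstract states.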
+
+
+ Probabilistic Deep Discriminant Analysis for Wind Blade Segmentation
+ https://arxiv.org/abs/2601.13852
+ arXiv:2601.13852v1 Announce Type: new
+Abstract: Linear discriminant analysis improves class separability but struggles with non-linearly separable data. To overcome this, we introduce Deep Discriminant Analysis (DDA), which directly optimizes the Fisher criterion utilizing deep networks. To ensure stable training and avoid computational instabilities, we incorporate signed between-class variance, bound outputs with a sigmoid function, and convert multiplicative relationships into additive ones. We present two stable DDA loss functions and augment them with a probability loss, resulting in Probabilistic DDA (PDDA). PDDA effectively minimizes class overlap in output distributions, producing highly confident predictions with reduced within-class variance. When applied to wind blade segmentation, PDDA showcases notable advances in performance and consistency, critical for wind energy maintenance. To our knowledge, this is the first application of DDA to image segmentation.
+ oai:arXiv.org:2601.13852v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ra\"ul P\'erez-Gonzalo, Andreas Espersen, Antonio Agudo
+
+
+ Question-Focused Filtering for Knowledge-based VQA
+ https://arxiv.org/abs/2601.13856
+ arXiv:2601.13856v1 Announce Type: new
+Abstract: Knowledge-based Visual Question Answering (KB-VQA) aims to answer questions by integrating images with external knowledge. Effective knowledge filtering is crucial for improving accuracy. Typical filtering methods use similarity metrics to locate relevant article sections from one article, leading to information selection errors at the article and intra-article levels. Although recent explorations of Multimodal Large Language Model (MLLM)-based filtering methods demonstrate superior semantic understanding and cross-article filtering capabilities, their high computational cost limits practical application. To address these issues, this paper proposes a question-focused filtering method. This approach can perform question-focused, cross-article filtering, efficiently obtaining high-quality filtered knowledge while keeping computational costs comparable to typical methods. Specifically, we design a trainable Question-Focused Filter (QFF) and a Chunk-based Dynamic Multi-Article Selection (CDA) module, which collectively alleviate information selection errors at both the article and intra-article levels. Experiments show that our method outperforms current state-of-the-art models by 4.9% on E-VQA and 3.8% on InfoSeek, validating its effectiveness. The code is publicly available at: https://github.com/leaffeall/QKVQA.
+ oai:arXiv.org:2601.13856v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wei Ye, Yixin Su, Yueguo Chen, Longxiang Gao, Jianjun Li, Ruixuan Li, Rui Zhang
+
+
+ Designing Drone Interfaces to Assist Pedestrians Crossing Non-Signalised Roads
+ https://arxiv.org/abs/2601.13858
+ arXiv:2601.13858v1 Announce Type: new
+Abstract: Recent research highlights the potential of drones to enhance pedestrian experiences, such as aiding navigation and supporting street-level activities. This paper explores the design of drone interfaces to assist pedestrians crossing dangerous roads without designated crosswalks or traffic lights, leveraging drones' ability to monitor and analyse real-time traffic data. Inspired by existing traffic signal systems, the interface communicates safety information through permissive alerts, prohibitive warnings, directional warnings, and collision emergency warnings. These safety cues were integrated into drone interfaces using in-situ projections and drone-equipped screens through an iterative design process. A mixed-methods, within-subjects VR evaluation (n=18) revealed that drone-assisted systems significantly improved pedestrian safety experiences and reduced mental workload compared to a baseline without any crossing aid, with projections outperforming screens. The findings suggest the potential for drone interfaces to be integrated into connected traffic systems. We also offer design recommendations for developing drone interfaces that support safe pedestrian crossings.
+ oai:arXiv.org:2601.13858v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3764687.376470
+ OZCHI '25: Proceedings of the 37th Australian Conference on Human-Computer Interaction (2025)
+ Guixiang Zhang, Yiyuan Wang, Marius Hoggenmueller
+
+
+ HardSecBench: Benchmarking the Security Awareness of LLMs for Hardware Code Generation
+ https://arxiv.org/abs/2601.13864
+ arXiv:2601.13864v1 Announce Type: new
+Abstract: Large language models (LLMs) are being increasingly integrated into practical hardware and firmware development pipelines for code generation. Existing studies have primarily focused on evaluating the functional correctness of LLM-generated code, yet paid limited attention to its security issues. However, LLM-generated code that appears functionally sound may embed security flaws which could induce catastrophic damage after deployment. This critical research gap motivates us to design a benchmark for assessing security awareness under realistic specifications. In this work, we introduce HardSecBench, a benchmark with 924 tasks spanning Verilog Register Transfer Level (RTL) and firmware-level C, covering 76 hardware-relevant Common Weakness Enumeration (CWE) entries. Each task includes a structured specification, a secure reference implementation, and executable tests. To automate artifact synthesis, we propose a multi-agent pipeline that decouples synthesis from verification and grounds evaluation in execution evidence, enabling reliable evaluation. Using HardSecBench, we evaluate a range of LLMs on hardware and firmware code generation and find that models often satisfy functional requirements while still leaving security risks. We also find that security results vary with prompting. These findings highlight pressing challenges and offer actionable insights for future advancements in LLM-assisted hardware design. Our data and code will be released soon.
+ oai:arXiv.org:2601.13864v1
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qirui Chen, Jingxian Shuai, Shuangwu Chen, Shenghao Ye, Zijian Wen, Xufei Su, Jie Jin, Jiangming Li, Jun Chen, Xiaobin Tan, Jian Yang
+
+
+ Understanding Human-Multi-Agent Team Formation for Creative Work
+ https://arxiv.org/abs/2601.13865
+ arXiv:2601.13865v1 Announce Type: new
+Abstract: Team-based collaboration is a cornerstone of modern creative work. Recent advances in generative AI open possibilities for humans to collaborate with multiple AI agents in distinct roles to address complex creative workflows. Yet, how to form Human-Multi-Agent Teams (HMATs) is underexplored, especially given that inter-agent interactions increase complexity and the risk of unexpected behaviors. In this exploratory study, we aim to understand how to form HMATs for creative work using CrafTeam, a technology probe that allows users to form and collaborate with their teams. We conducted a study with 12 design practitioners, in which participants iterated through a three-step cycle: forming HMATs, ideating with their teams, and reflecting on their teams' ideation. Our findings reveal that while participants initially attempted autonomous team operations, they ultimately adopted team formations in which they directly orchestrated agents. We discuss design considerations for HMAT formation so that humans can effectively orchestrate multiple agents.
+ oai:arXiv.org:2601.13865v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ 10.1145/3772318.3791166
+ Hyunseung Lim, Dasom Choi, Sooyohn Nam, Bogoan Kim, Hwajung Hong
+
+
+ OCCAM: Class-Agnostic, Training-Free, Prior-Free and Multi-Class Object Counting
+ https://arxiv.org/abs/2601.13871
+ arXiv:2601.13871v1 Announce Type: new
+Abstract: Class-Agnostic object Counting (CAC) involves counting instances of objects from arbitrary classes within an image. Due to its practical importance, CAC has received increasing attention in recent years. Most existing methods assume a single object class per image, rely on extensive training of large deep learning models and address the problem by incorporating additional information, such as visual exemplars or text prompts. In this paper, we present OCCAM, the first training-free approach to CAC that operates without the need for any supplementary information. Moreover, our approach addresses the multi-class variant of the problem, as it is capable of counting the object instances in each and every class among arbitrary object classes within an image. We leverage Segment Anything Model 2 (SAM2), a foundation model, and a custom threshold-based variant of the First Integer Neighbor Clustering Hierarchy (FINCH) algorithm to achieve competitive performance on the widely used benchmark datasets FSC-147 and CARPK. We propose a synthetic multi-class dataset and the F1 score as a more suitable evaluation metric. The code for our method and the proposed synthetic dataset will be made publicly available at https://mikespanak.github.io/OCCAM_counter.
+ oai:arXiv.org:2601.13871v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Michail Spanakis, Iason Oikonomidis, Antonis Argyros
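The core grouping step of the FINCH algorithm this abstract builds on is simple: link every point to its first (nearest) neighbor and take connected components as clusters. A minimal NumPy sketch of that step, without the paper's custom threshold-based variant:

```python
import numpy as np

def first_neighbor_clusters(X):
    """Link each point to its nearest neighbor; connected components = clusters
    (the first-neighbor grouping step underlying FINCH)."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)        # exclude self-matches
    nn = d.argmin(axis=1)              # index of each point's first neighbor

    # union-find over the first-neighbor links
    parent = list(range(len(X)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in enumerate(nn):
        parent[find(i)] = find(j)
    roots = [find(i) for i in range(len(X))]
    _, labels = np.unique(roots, return_inverse=True)
    return labels

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = first_neighbor_clusters(X)
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```

FINCH proper recurses this step on cluster centroids to build a hierarchy; the thresholded variant used by OCCAM is the authors' contribution and not shown here.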
+
+
+ Enhanced Cyber Threat Intelligence by Network Forensic Analysis for Ransomware as a Service(RaaS) Malwares
+ https://arxiv.org/abs/2601.13873
+ arXiv:2601.13873v1 Announce Type: new
+Abstract: In today's interconnected cyberspace, ransomware adversely affects individuals, startups, and large companies alike. Cybercriminals hold digital assets hostage until a payment demand is met. The success of ransomware surged with the introduction of the Ransomware as a Service (RaaS) franchise model in darknet markets. The obfuscated and polymorphic nature of such malware makes it harder for antivirus systems to identify, and signature-based intrusion detection still suffers from a scarcity of RaaS packet signatures. We analyse RaaS samples using a network forensic approach, investigating packet captures of benign and malicious network traffic. Behavioural analysis of the RaaS-family ransomwares Ryuk and GandCrab is used to classify packets as suspicious, malicious, or non-malicious, which in turn aids in generating RaaS packet signatures for the early detection and mitigation of ransomware belonging to the RaaS family. More than 40\% of packets are found to be malicious in this experiment. The proposed method is further verified via the VirusTotal API. We also recommend integrating the approach into honeypots to combat the scarcity of RaaS malware samples; the resulting data can support AI-based threat intelligence mechanisms, in turn enhancing threat detection and prevention, incident response, and risk assessment.
+ oai:arXiv.org:2601.13873v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sharmila S P
+
+
+ Pedagogical Alignment for Vision-Language-Action Models: A Comprehensive Framework for Data, Architecture, and Evaluation in Education
+ https://arxiv.org/abs/2601.13876
+ arXiv:2601.13876v1 Announce Type: new
+Abstract: Science demonstrations are important for effective STEM education, yet teachers face challenges in conducting them safely and consistently across multiple occasions, where robotics can be helpful. However, current Vision-Language-Action (VLA) models require substantial computational resources and sacrifice language generation capabilities to maximize efficiency, making them unsuitable for resource-constrained educational settings that require interpretable, explanation-generating systems. We present \textit{Pedagogical VLA Framework}, a framework that applies pedagogical alignment to lightweight VLA models through four components: text healing to restore language generation capabilities, large language model (LLM) distillation to transfer pedagogical knowledge, safety training for educational environments, and pedagogical evaluation adjusted to science education contexts. We evaluate Pedagogical VLA Framework across five science demonstrations spanning physics, chemistry, biology, and earth science, using an evaluation framework developed in collaboration with science education experts. Our evaluation assesses both task performance (success rate, protocol compliance, efficiency, safety) and pedagogical quality through teacher surveys and LLM-as-Judge assessment. We additionally provide qualitative analysis of generated texts. Experimental results demonstrate that Pedagogical VLA Framework achieves comparable task performance to baseline models while producing contextually appropriate educational explanations.
+ oai:arXiv.org:2601.13876v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Unggi Lee, Jahyun Jeong, Sunyoung Shin, Haeun Park, Jeongsu Moon, Youngchang Song, Jaechang Shim, JaeHwan Lee, Yunju Noh, Seungwon Choi, Ahhyun Kim, TaeHyeon Kim, Kyungtae Joo, Taeyeong Kim, Gyeonggeon Lee
+
+
+ Chain-of-Thought Compression Should Not Be Blind: V-Skip for Efficient Multimodal Reasoning via Dual-Path Anchoring
+ https://arxiv.org/abs/2601.13879
+ arXiv:2601.13879v1 Announce Type: new
+Abstract: While Chain-of-Thought (CoT) reasoning significantly enhances the performance of Multimodal Large Language Models (MLLMs), its autoregressive nature incurs prohibitive latency constraints. Current efforts to mitigate this via token compression often fail by blindly applying text-centric metrics to multimodal contexts. We identify a critical failure mode termed Visual Amnesia, where linguistically redundant tokens are erroneously pruned, leading to hallucinations. To address this, we introduce V-Skip that reformulates token pruning as a Visual-Anchored Information Bottleneck (VA-IB) optimization problem. V-Skip employs a dual-path gating mechanism that weighs token importance through both linguistic surprisal and cross-modal attention flow, effectively rescuing visually salient anchors. Extensive experiments on Qwen2-VL and Llama-3.2 families demonstrate that V-Skip achieves a $2.9\times$ speedup with negligible accuracy loss. Specifically, it preserves fine-grained visual details, outperforming other baselines over 30\% on the DocVQA.
+ oai:arXiv.org:2601.13879v1
+ cs.MM
+ cs.CL
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Dongxu Zhang, Yiding Sun, Cheng Tan, Wenbiao Yan, Ning Yang, Jihua Zhu, Hiajun Zhang
+
+
+ LifeAgentBench: A Multi-dimensional Benchmark and Agent for Personal Health Assistants in Digital Health
+ https://arxiv.org/abs/2601.13880
+ arXiv:2601.13880v1 Announce Type: new
+Abstract: Personalized digital health support requires long-horizon, cross-dimensional reasoning over heterogeneous lifestyle signals, and recent advances in mobile sensing and large language models (LLMs) make such support increasingly feasible. However, the capabilities of current LLMs in this setting remain unclear due to the lack of systematic benchmarks. In this paper, we introduce LifeAgentBench, a large-scale QA benchmark for long-horizon, cross-dimensional, and multi-user lifestyle health reasoning, containing 22,573 questions spanning from basic retrieval to complex reasoning. We release an extensible benchmark construction pipeline and a standardized evaluation protocol to enable reliable and scalable assessment of LLM-based health assistants. We then systematically evaluate 11 leading LLMs on LifeAgentBench and identify key bottlenecks in long-horizon aggregation and cross-dimensional reasoning. Motivated by these findings, we propose LifeAgent, a strong baseline agent for health assistance that integrates multi-step evidence retrieval with deterministic aggregation, achieving significant improvements compared with two widely used baselines. Case studies further demonstrate its potential in realistic daily-life scenarios. The benchmark is publicly available at https://anonymous.4open.science/r/LifeAgentBench-CE7B.
+ oai:arXiv.org:2601.13880v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ye Tian, Zihao Wang, Onat Gungor, Xiaoran Fan, Tajana Rosing
+
+
+ OpenLearnLM Benchmark: A Unified Framework for Evaluating Knowledge, Skill, and Attitude in Educational Large Language Models
+ https://arxiv.org/abs/2601.13882
+ arXiv:2601.13882v1 Announce Type: new
+Abstract: Large Language Models are increasingly deployed as educational tools, yet existing benchmarks focus on narrow skills and lack grounding in learning sciences. We introduce OpenLearnLM Benchmark, a theory-grounded framework evaluating LLMs across three dimensions derived from educational assessment theory: Knowledge (curriculum-aligned content and pedagogical understanding), Skills (scenario-based competencies organized through a four-level center-role-scenario-subscenario hierarchy), and Attitude (alignment consistency and deception resistance). Our benchmark comprises 124K+ items spanning multiple subjects, educational roles, and difficulty levels based on Bloom's taxonomy. The Knowledge domain prioritizes authentic assessment items from established benchmarks, while the Attitude domain adapts Anthropic's Alignment Faking methodology to detect behavioral inconsistency under varying monitoring conditions. Evaluation of seven frontier models reveals distinct capability profiles: Claude-Opus-4.5 excels in practical skills despite lower content knowledge, while Grok-4.1-fast leads in knowledge but shows alignment concerns. Notably, no single model dominates all dimensions, validating the necessity of multi-axis evaluation. OpenLearnLM provides an open, comprehensive framework for advancing LLM readiness in authentic educational contexts.
+ oai:arXiv.org:2601.13882v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Unggi Lee, Sookbun Lee, Heungsoo Choi, Jinseo Lee, Haeun Park, Younghoon Jeon, Sungmin Cho, Minju Kang, Junbo Koh, Jiyeong Bae, Minwoo Nam, Juyeon Eun, Yeonji Jung, Yeil Jeong
+
+
+ Constrained MARL for Coexisting TN-NTN Resource Allocation: Scalability and Flexibility
+ https://arxiv.org/abs/2601.13883
+ arXiv:2601.13883v1 Announce Type: new
+Abstract: This paper considers the joint TN-NTN constrained resource allocation, where terrestrial base stations and non-terrestrial base stations coexist in the spectrum. We focus on large-scale and practical scenarios characterized by large numbers of transmission channels and users, alongside highly dynamic user behaviors. As common learning solutions fail to address these challenges, we propose a decomposition solution based on the special properties of the cross-segment interference, and then tackle the original problem via solving subproblems in a sequential learning manner. Furthermore, to enhance the flexibility of the learned policies, we design a stochastic training environment that captures the key characteristics of real-world systems. Simulation results tested on the full 20MHz bandwidth with various numerologies show that our solution significantly improves scalability compared to existing solutions and remains robust in highly dynamic scenarios.
+ oai:arXiv.org:2601.13883v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Cuong Le, Thang X. Vu, Stefano Andrenacci, Symeon Chatzinotas
+
+
+ Confident Rankings with Fewer Items: Adaptive LLM Evaluation with Continuous Scores
+ https://arxiv.org/abs/2601.13885
+ arXiv:2601.13885v1 Announce Type: new
+Abstract: Computerized Adaptive Testing (CAT) has proven effective for efficient LLM evaluation on multiple-choice benchmarks, but modern LLM evaluation increasingly relies on generation tasks where outputs are scored continuously rather than marked correct/incorrect. We present a principled extension of IRT-based adaptive testing to continuous bounded scores (ROUGE, BLEU, LLM-as-a-Judge) by replacing the Bernoulli response distribution with a heteroskedastic normal distribution. Building on this, we introduce an uncertainty-aware ranker with adaptive stopping criteria that achieves reliable model ranking while testing as few items as possible, as cheaply as possible. We validate our method on five benchmarks spanning n-gram-based, embedding-based, and LLM-as-judge metrics. Our method uses 2% of the items while improving ranking correlation by 0.12 $\tau$ over random sampling, with 95% accuracy on confident predictions.
+ oai:arXiv.org:2601.13885v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Esma Balkır, Alice Pernthaller, Marco Basaldella, José Hernández-Orallo, Nigel Collier
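The core modelling move in this abstract, replacing the Bernoulli response distribution of IRT with a normal distribution over continuous scores, can be sketched with a grid-search MLE for ability. This is a simplified illustration (homoskedastic noise, a plain sigmoid expected-score curve, item difficulties of our own invention), not the paper's heteroskedastic model:

```python
import numpy as np

def expected_score(theta, b):
    """IRT-style expected continuous score in [0, 1] for ability theta, difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def log_lik(theta, scores, b, sigma=0.1):
    """Per-item normal log-likelihood replacing the usual Bernoulli likelihood."""
    mu = expected_score(theta, b)
    return -0.5 * ((scores - mu) / sigma) ** 2 - np.log(sigma)

# Simulate 50 items for a model with true ability 1.5 (hypothetical setup).
rng = np.random.default_rng(2)
b = rng.uniform(-2, 2, size=50)
scores = np.clip(expected_score(1.5, b) + rng.normal(0, 0.05, size=50), 0, 1)

# Maximum-likelihood ability estimate by grid search.
grid = np.linspace(-3, 3, 601)
theta_hat = grid[np.array([log_lik(t, scores, b).sum() for t in grid]).argmax()]
assert abs(theta_hat - 1.5) < 0.3
```

Adaptive testing would then pick the next item to maximally shrink the posterior over theta and stop once the ranking among models is confident, as the abstract describes.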
+
+
+ Revisiting Multi-Task Visual Representation Learning
+ https://arxiv.org/abs/2601.13886
+ arXiv:2601.13886v1 Announce Type: new
+Abstract: Current visual representation learning remains bifurcated: vision-language models (e.g., CLIP) excel at global semantic alignment but lack spatial precision, while self-supervised methods (e.g., MAE, DINO) capture intricate local structures yet struggle with high-level semantic context. We argue that these paradigms are fundamentally complementary and can be integrated into a principled multi-task framework, further enhanced by dense spatial supervision. We introduce MTV, a multi-task visual pretraining framework that jointly optimizes a shared backbone across vision-language contrastive, self-supervised, and dense spatial objectives. To mitigate the need for manual annotations, we leverage high-capacity "expert" models -- such as Depth Anything V2 and OWLv2 -- to synthesize dense, structured pseudo-labels at scale. Beyond the framework, we provide a systematic investigation into the mechanics of multi-task visual learning, analyzing: (i) the marginal gain of each objective, (ii) task synergies versus interference, and (iii) scaling behavior across varying data and model scales. Our results demonstrate that MTV achieves "best-of-both-worlds" performance, significantly enhancing fine-grained spatial reasoning without compromising global semantic understanding. Our findings suggest that multi-task learning, fueled by high-quality pseudo-supervision, is a scalable path toward more general visual encoders.
+ oai:arXiv.org:2601.13886v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shangzhe Di, Zhonghua Zhai, Weidi Xie
+
+
+ Human Simulation Computation: A Human-Inspired Framework for Adaptive AI Systems
+ https://arxiv.org/abs/2601.13887
+ arXiv:2601.13887v1 Announce Type: new
+Abstract: Large language models (LLMs) have demonstrated strong capabilities in knowledge representation and reasoning based on textual data. However, their reliance on language material alone limits their ability to adapt, verify reasoning outcomes, and operate effectively in open and dynamic real-world environments. In this paper, we propose Human Simulation Computation (HSC), a human-inspired computational framework that models intelligence as a continuous, closed-loop process involving thinking, action, learning, reflection, and activity scheduling, collectively referred to as the internal reasoning process. HSC emphasizes active participation both within the internal reasoning process and in interactions with the environment, where actions are used not only to achieve goals but also to automatically refine and improve internal reasoning mechanisms without external intervention. Furthermore, HSC incorporates commonly used human thinking strategies across all stages of the internal reasoning process, such as main-feature-oriented reasoning, scope expansion through action, and on-time learning driven by environmental feedback. Through theoretical analysis, we argue that human simulation strategies cannot be fully learned from language material alone, and that human-like reasoning processes and action-grounded reasoning methods are essential for robust adaptation and effective interaction with real-world environments.
+ oai:arXiv.org:2601.13887v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Hong Su
+
+
+ Towards Inclusive External Human-Machine Interface: Exploring the Effects of Visual and Auditory eHMI for Deaf and Hard-of-Hearing People
+ https://arxiv.org/abs/2601.13889
+ arXiv:2601.13889v1 Announce Type: new
+Abstract: External Human-Machine Interfaces (eHMIs) have been proposed to facilitate communication between Automated Vehicles (AVs) and pedestrians. However, no attention has been given to Deaf and Hard-of-Hearing (DHH) people. We conducted a formative study through focus groups with 6 DHH people and 6 key stakeholders (including researchers, assistive technologists, and automotive interface designers) to compare proposed eHMIs and extract key design requirements. Subsequently, we investigated the effects of visual and auditory eHMI in a virtual reality user study with 32 participants (16 DHH). Results from our scenario suggest that (1) DHH participants spent more time looking at the AV; (2) both visual and auditory eHMIs enhanced trust, usefulness, and perceived safety; and (3) only visual eHMIs reduced the time to step into the road, time looking at the AV, gaze time, and percentage looking at active visual eHMI components. Lastly, we provided five practical implications for making eHMI inclusive of DHH people.
+ oai:arXiv.org:2601.13889v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3772318.3790738
+ Wenge Xu, Foroogh Hajiseyedjavadi, Kurtis Weir, Chukwuemeka Eze, Mark Colley
+
+
+ Multi-Objective Hierarchical Optimization with Large Language Models
+ https://arxiv.org/abs/2601.13892
+ arXiv:2601.13892v1 Announce Type: new
+Abstract: Despite their widespread adoption in various domains, especially due to their powerful reasoning capabilities, Large Language Models (LLMs) are not yet the off-the-shelf choice to drive multi-objective optimization. Conventional strategies rank high in benchmarks due to their intrinsic capabilities to handle numerical inputs and careful modelling choices that balance exploration and Pareto-front exploitation, as well as handle multiple (conflicting) objectives. In this paper, we close this gap by leveraging LLMs as surrogate models and candidate samplers inside a structured hierarchical search strategy. By adaptively partitioning the input space into disjoint hyperrectangular regions and ranking them with a composite score function, we restrict the generative process of the LLM to specific, high-potential sub-spaces, making the problem easier to solve: the LLM does not have to reason about the global structure of the problem, but only locally. We show that under standard regularity assumptions, our algorithm generates candidate solutions that converge to the true Pareto set in Hausdorff distance. Empirically, it consistently outperforms the global LLM-based multi-objective optimizer and is on par with standard evolutionary and Bayesian optimization algorithms on synthetic and real-world benchmarks.
+ oai:arXiv.org:2601.13892v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Andrej Schwanke, Lyubomir Ivanov, David Salinas, Frank Hutter, Arber Zela
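The Pareto set this abstract's convergence result refers to is the set of non-dominated points: no other candidate is at least as good on every objective and strictly better on one. A minimal NumPy dominance filter (example values are hypothetical, not from the paper's benchmarks):

```python
import numpy as np

def pareto_set(F):
    """Indices of non-dominated rows of F (n x m objective values, minimization)."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # row j dominates row i if F[j] <= F[i] everywhere and < somewhere;
        # a row never satisfies the strict condition against itself
        dominated = (F <= F[i]).all(axis=1) & (F < F[i]).any(axis=1)
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

F = np.array([[1.0, 4.0],   # non-dominated
              [2.0, 2.0],   # non-dominated
              [4.0, 1.0],   # non-dominated
              [3.0, 3.0]])  # dominated by (2, 2)
assert list(pareto_set(F)) == [0, 1, 2]
```

Convergence in Hausdorff distance then means the gap between the generated non-dominated candidates and the true Pareto set shrinks to zero as the hierarchical search refines its hyperrectangles.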
+
+
+ Multi-Location Software Model Completion
+ https://arxiv.org/abs/2601.13894
+ arXiv:2601.13894v1 Announce Type: new
+Abstract: In model-driven engineering and beyond, software models are key development artifacts. In practice, they often grow to substantial size and complexity, undergoing thousands of modifications over time due to evolution, refactoring, and maintenance. The rise of AI has sparked interest in how software modeling activities can be automated. Recently, LLM-based approaches for software model completion have been proposed, however, the state of the art supports only single-location model completion by predicting changes at a specific location. Going beyond, we aim to bridge the gap toward handling coordinated changes that span multiple locations across large, complex models. Specifically, we propose a novel global embedding-based next focus predictor, NextFocus, which is capable of multi-location model completion for the first time. The predictor consists of a neural network with an attention mechanism that is trained on historical software model evolution data. Starting from an existing change, it predicts further model elements to change, potentially spanning multiple parts of the model. We evaluate our approach on multi-location model changes that have actually been performed by developers in real-world projects. NextFocus achieves promising results for multi-location model completion, even when changes are heavily spread across the model. It achieves an average Precision@k score of 0.98 for $k \leq 10$, significantly outperforming the three baseline approaches.
+ oai:arXiv.org:2601.13894v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alisa Welter, Christof Tinnes, Sven Apel
+
+
+ OmniOVCD: Streamlining Open-Vocabulary Change Detection with SAM 3
+ https://arxiv.org/abs/2601.13895
+ arXiv:2601.13895v1 Announce Type: new
+Abstract: Change Detection (CD) is a fundamental task in remote sensing. It monitors the evolution of land cover over time. Based on this, Open-Vocabulary Change Detection (OVCD) introduces a new requirement: it aims to reduce the reliance on predefined categories. Existing training-free OVCD methods mostly use CLIP to identify categories. These methods also need extra models like DINO to extract features. However, combining different models often causes problems in matching features and makes the system unstable. Recently, the Segment Anything Model 3 (SAM 3) was introduced. It integrates segmentation and identification capabilities within one promptable model, which offers new possibilities for the OVCD task. In this paper, we propose OmniOVCD, a standalone framework designed for OVCD. By leveraging the decoupled output heads of SAM 3, we propose a Synergistic Fusion to Instance Decoupling (SFID) strategy. SFID first fuses the semantic, instance, and presence outputs of SAM 3 to construct land-cover masks, and then decomposes them into individual instance masks for change comparison. This design preserves high accuracy in category recognition and maintains instance-level consistency across images. As a result, the model can generate accurate change masks. Experiments on four public benchmarks (LEVIR-CD, WHU-CD, S2Looking, and SECOND) demonstrate SOTA performance, achieving IoU scores of 67.2, 66.5, 24.5, and 27.1 (class-average), respectively, surpassing all previous methods.
+ oai:arXiv.org:2601.13895v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xu Zhang, Danyang Li, Yingjie Xia, Xiaohang Dong, Hualong Yu, Jianye Wang, Qicheng Li
+
+
+ TractRLFusion: A GPT-Based Multi-Critic Policy Fusion Framework for Fiber Tractography
+ https://arxiv.org/abs/2601.13897
+ arXiv:2601.13897v1 Announce Type: new
+Abstract: Tractography plays a pivotal role in the non-invasive reconstruction of white matter fiber pathways, providing vital information on brain connectivity and supporting precise neurosurgical planning. Although traditional methods relied mainly on classical deterministic and probabilistic approaches, recent progress has benefited from supervised deep learning (DL) and deep reinforcement learning (DRL) to improve tract reconstruction. A persistent challenge in tractography is accurately reconstructing white matter tracts while minimizing spurious connections. To address this, we propose TractRLFusion, a novel GPT-based policy fusion framework that integrates multiple RL policies through a data-driven fusion strategy. Our method employs a two-stage training data selection process for effective policy fusion, followed by a multi-critic fine-tuning phase to enhance robustness and generalization. Experiments on HCP, ISMRM, and TractoInferno datasets demonstrate that TractRLFusion outperforms individual RL policies as well as state-of-the-art classical and DRL methods in accuracy and anatomical reliability.
+ oai:arXiv.org:2601.13897v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ankita Joshi, Ashutosh Sharma, Anoushkrit Goel, Ranjeet Ranjan Jha, Chirag Ahuja, Arnav Bhavsar, Aditya Nigam
+
+
+ Towards Visually Explaining Statistical Tests with Applications in Biomedical Imaging
+ https://arxiv.org/abs/2601.13899
+ arXiv:2601.13899v1 Announce Type: new
+Abstract: Deep neural two-sample tests have recently shown strong power for detecting distributional differences between groups, yet their black-box nature limits interpretability and practical adoption in biomedical analysis. Moreover, most existing post-hoc explainability methods rely on class labels, making them unsuitable for label-free statistical testing settings. We propose an explainable deep statistical testing framework that augments deep two-sample tests with sample-level and feature-level explanations, revealing which individual samples and which input features drive statistically significant group differences. Our method highlights which image regions and which individual samples contribute most to the detected group difference, providing spatial and instance-wise insight into the test's decision. Applied to biomedical imaging data, the proposed framework identifies influential samples and highlights anatomically meaningful regions associated with disease-related variation. This work bridges statistical inference and explainable AI, enabling interpretable, label-free population analysis in medical imaging.
+ oai:arXiv.org:2601.13899v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Masoumeh Javanbakhat, Piotr Komorowski, Dilyara Bareeva, Wei-Chang Lai, Wojciech Samek, Christoph Lippert
+
+
+ Mathematical and computational perspectives on the Boolean and binary rank and their relation to the real rank
+ https://arxiv.org/abs/2601.13900
+ arXiv:2601.13900v1 Announce Type: new
+Abstract: This survey provides a comprehensive overview of the study of the binary and Boolean rank from both a mathematical and a computational perspective, with particular emphasis on their relationship to the real rank. We review the basic definitions of these rank functions and present the main alternative formulations of the binary and Boolean rank, together with their computational complexity and their deep connection to the field of communication complexity. We summarize key techniques used to establish lower and upper bounds on the binary and Boolean rank, including methods from linear algebra, combinatorics and graph theory, isolation sets, the probabilistic method, kernelization, communication protocols and the query-to-communication lifting technique. Furthermore, we highlight the main mathematical properties of these ranks in comparison with those of the real rank, and discuss several non-trivial bounds on the rank of specific families of matrices. Finally, we present algorithmic approaches for computing and approximating these rank functions, such as parameterized algorithms, approximation algorithms, property testing and approximate Boolean matrix factorization (BMF). Together, the results presented outline the current theoretical knowledge in this area and suggest directions for further research.
+ oai:arXiv.org:2601.13900v1
+ cs.DM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Michal Parnas
+
+
+ Know Your Contract: Extending eIDAS Trust into Public Blockchains
+ https://arxiv.org/abs/2601.13903
+ arXiv:2601.13903v1 Announce Type: new
+Abstract: Public blockchains lack native mechanisms to attribute on-chain actions to legally accountable entities, creating a fundamental barrier to institutional adoption and regulatory compliance. This paper presents an architecture that extends the European Union eIDAS trust framework into public blockchain ecosystems by cryptographically binding smart contracts to qualified electronic seals issued by Qualified Trust Service Providers. The mechanism establishes a verifiable chain of trust from the European Commission List of Trusted Lists to individual on-chain addresses, enabling machine-verifiable proofs for automated regulatory validation, such as Know Your Contract, Counterparty, and Business checks, without introducing new trusted intermediaries. Regulatory requirements arising from eIDAS, MiCA, PSD2, PSR, and the proposed European Business Wallet are analyzed, and a cryptographic suite meeting both eIDAS implementing regulations and EVM execution constraints following the Ethereum Fusaka upgrade is identified, namely ECDSA with P-256 and CAdES formatting. Two complementary trust validation models are presented: an off-chain workflow for agent-to-agent payment protocols and a fully on-chain workflow enabling regulatory-compliant DeFi operations between legal entities. The on-chain model converts regulatory compliance from a per-counterparty administrative burden into an automated, standardized process, enabling mutual validation at first interaction without prior business relationships. As eIDAS wallets become mandatory across EU member states, the proposed architecture provides a pathway for integrating European digital trust infrastructure into blockchain-based systems, enabling institutional DeFi participation, real-world asset tokenization, and agentic commerce within a trusted, regulatory-compliant framework.
+ oai:arXiv.org:2601.13903v1
+ cs.CR
+ cs.CY
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Awid Vaziry, Christoph Wronka, Sandro Rodriguez Garzon, Axel Küpper
+
+
+ PREFAB: PREFerence-based Affective Modeling for Low-Budget Self-Annotation
+ https://arxiv.org/abs/2601.13904
+ arXiv:2601.13904v1 Announce Type: new
+Abstract: Self-annotation is the gold standard for collecting affective state labels in affective computing. Existing methods typically rely on full annotation, requiring users to continuously label affective states across entire sessions. While this process yields fine-grained data, it is time-consuming, cognitively demanding, and prone to fatigue and errors. To address these issues, we present PREFAB, a low-budget retrospective self-annotation method that targets affective inflection regions rather than full annotation. Grounded in the peak-end rule and ordinal representations of emotion, PREFAB employs a preference-learning model to detect relative affective changes, directing annotators to label only selected segments while interpolating the remainder of the stimulus. We further introduce a preview mechanism that provides brief contextual cues to assist annotation. We evaluate PREFAB through a technical performance study and a 25-participant user study. Results show that PREFAB outperforms baselines in modeling affective inflections while mitigating workload (and conditionally mitigating temporal burden). Importantly, PREFAB improves annotator confidence without degrading annotation quality.
+ oai:arXiv.org:2601.13904v1
+ cs.AI
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jaeyoung Moon, Youjin Choi, Yucheon Park, David Melhart, Georgios N. Yannakakis, Kyung-Joong Kim
+
+
+ Decentralized Infrastructure for Digital Notarizing, Signing and Sharing Files using Blockchain
+ https://arxiv.org/abs/2601.13907
+ arXiv:2601.13907v1 Announce Type: new
+Abstract: Traditional paper-based document management has long posed challenges related to security, authenticity, and efficiency. Despite advances in digitalization, official documents remain vulnerable to forgery, loss, and unauthorized access. This thesis proposes a decentralized infrastructure for digital notarization, signing, and sharing of documents using blockchain technology. The research addresses key issues of transparency, immutability, and feasibility by defining system requirements, evaluating existing solutions, and proposing a novel architecture based on distributed systems.
+ By combining cryptographic techniques with decentralized storage, this research contributes to the development of a more secure and efficient framework for managing official documents. The findings highlight the potential of blockchain-based digital notarization to streamline bureaucratic processes, mitigate security risks, and enhance user trust in digital document management.
+ oai:arXiv.org:2601.13907v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Cosmin-Iulian Irimia
+
+
+ Improving the local solution of the DG predictor of the ADER-DG method for solving systems of ordinary differential equations and its applicability to systems of differential-algebraic equations
+ https://arxiv.org/abs/2601.13908
+ arXiv:2601.13908v1 Announce Type: new
+Abstract: An improved local numerical solution for the ADER-DG numerical method with a local DG predictor for solving the initial value problem for a first-order ODE system is proposed. The improved local numerical solution demonstrates a convergence order one higher than that of the local numerical solution of the original ADER-DG numerical method and has the property of continuity at grid nodes. Rigorous proofs of the approximation orders of the local numerical solution and the improved local numerical solution are presented. Obtaining the proposed improved local numerical solution does not require significant changes to the structure of the ADER-DG numerical method. Therefore, all conclusions regarding the convergence orders of the numerical solution at grid nodes, the resulting superconvergence, and the high stability of the ADER-DG numerical method remain unchanged. A wide range of applications of the ADER-DG numerical method is presented for solving specific initial value problems for ODE systems over a wide range of polynomial degrees. The obtained results provide strong confirmation for the developed rigorous theory. The improved local numerical solution is shown to exhibit both higher accuracy and improved smoothness and point-wise comparability. Empirical convergence orders of all individual numerical solutions were calculated for a wide range of error norms and agree well with the expected convergence orders. A rigorous proof, based on the $\epsilon$-embedding method, of the applicability of the ADER-DG numerical method with a local DG predictor to solving DAE systems is presented.
+ oai:arXiv.org:2601.13908v1
+ math.NA
+ cs.NA
+ math.FA
+ physics.app-ph
+ physics.comp-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ I. S. Popov
+
+
+ On the Role of Rotation Equivariance in Monocular 3D Human Pose Estimation
+ https://arxiv.org/abs/2601.13913
+ arXiv:2601.13913v1 Announce Type: new
+Abstract: Estimating 3D from 2D is one of the central tasks in computer vision. In this work, we consider the monocular setting, i.e. single-view input, for 3D human pose estimation (HPE). Here, the task is to predict a 3D point set of human skeletal joints from a single 2D input image. While by definition this is an ill-posed problem, recent work has presented methods that solve it with up to several-centimetre error. Typically, these methods employ a two-step approach, where the first step is to detect the 2D skeletal joints in the input image, followed by the step of 2D-to-3D lifting. We find that common lifting models fail when encountering a rotated input. We argue that learning a single human pose along with its in-plane rotations is considerably easier and more geometrically grounded than directly learning a point-to-point mapping. Furthermore, our intuition is that endowing the model with the notion of rotation equivariance without explicitly constraining its parameter space should lead to a more straightforward learning process than one with equivariance by design. Utilising the common HPE benchmarks, we confirm that the 2D rotation equivariance per se improves the model performance on human poses akin to rotations in the image plane, and can be efficiently and straightforwardly learned by augmentation, outperforming state-of-the-art equivariant-by-design methods.
+ oai:arXiv.org:2601.13913v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pavlo Melnyk, Cuong Le, Urs Waldmann, Per-Erik Forssén, Bastian Wandt
+
+
+ AgentEHR: Advancing Autonomous Clinical Decision-Making via Retrospective Summarization
+ https://arxiv.org/abs/2601.13918
+ arXiv:2601.13918v1 Announce Type: new
+Abstract: Large Language Models have demonstrated profound utility in the medical domain. However, their application to autonomous Electronic Health Records (EHRs) navigation remains constrained by a reliance on curated inputs and simplified retrieval tasks. To bridge the gap between idealized experimental settings and realistic clinical environments, we present AgentEHR. This benchmark challenges agents to execute complex decision-making tasks, such as diagnosis and treatment planning, requiring long-range interactive reasoning directly within raw and high-noise databases. In tackling these tasks, we identify that existing summarization methods inevitably suffer from critical information loss and fractured reasoning continuity. To address this, we propose RetroSum, a novel framework that unifies a retrospective summarization mechanism with an evolving experience strategy. By dynamically re-evaluating interaction history, the retrospective mechanism prevents long-context information loss and ensures unbroken logical coherence. Additionally, the evolving strategy bridges the domain gap by retrieving accumulated experience from a memory bank. Extensive empirical evaluations demonstrate that RetroSum achieves performance gains of up to 29.16% over competitive baselines, while significantly decreasing total interaction errors by up to 92.3%.
+ oai:arXiv.org:2601.13918v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yusheng Liao, Chuan Xuan, Yutong Cai, Lina Yang, Zhe Chen, Yanfeng Wang, Yu Wang
+
+
+ HyperWalker: Dynamic Hypergraph-Based Deep Diagnosis for Multi-Hop Clinical Modeling across EHR and X-Ray in Medical VLMs
+ https://arxiv.org/abs/2601.13919
+ arXiv:2601.13919v1 Announce Type: new
+Abstract: Automated clinical diagnosis remains a core challenge in medical AI, requiring models to integrate multi-modal data and reason across complex, case-specific contexts. Although recent methods have advanced medical report generation (MRG) and visual question answering (VQA) with medical vision-language models (VLMs), they predominantly operate under a sample-isolated inference paradigm, processing cases independently without access to longitudinal electronic health records (EHRs) or structurally related patient examples. This paradigm limits reasoning to image-derived information alone and ignores external complementary medical evidence that could support more accurate diagnosis. To overcome this limitation, we propose \textbf{HyperWalker}, a \textit{Deep Diagnosis} framework that reformulates clinical reasoning via dynamic hypergraphs and test-time training. First, we construct a dynamic hypergraph, termed \textbf{iBrochure}, to model the structural heterogeneity of EHR data and the implicit high-order associations among multimodal clinical information. Within this hypergraph, a reinforcement learning agent, \textbf{Walker}, navigates to and identifies optimal diagnostic paths. To ensure comprehensive coverage of the diverse clinical characteristics of test samples, we incorporate a \textit{linger mechanism}, a multi-hop orthogonal retrieval strategy that iteratively selects clinically complementary neighborhood cases reflecting distinct clinical attributes. Experiments on MRG with MIMIC and medical VQA on EHRXQA demonstrate that HyperWalker achieves state-of-the-art performance. Code is available at: https://github.com/Bean-Young/HyperWalker
+ oai:arXiv.org:2601.13919v1
+ cs.CL
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuezhe Yang, Hao Wang, Yige Peng, Jinman Kim, Lei Bi
+
+
+ Asymmetric regularization mechanism for GAN training with Variational Inequalities
+ https://arxiv.org/abs/2601.13920
+ arXiv:2601.13920v1 Announce Type: new
+Abstract: We formulate the training of generative adversarial networks (GANs) as a Nash equilibrium seeking problem. To stabilize the training process and find a Nash equilibrium, we propose an asymmetric regularization mechanism based on the classic Tikhonov step and on a novel zero-centered gradient penalty. Under smoothness and a local identifiability condition induced by a Gauss-Newton Gramian, we obtain explicit Lipschitz and (strong)-monotonicity constants for the regularized operator. These constants ensure last-iterate linear convergence of a single-call Extrapolation-from-the-Past (EFTP) method. Empirical simulations on an academic example show that, even when strong monotonicity cannot be achieved, the asymmetric regularization is enough to converge to an equilibrium and stabilize the trajectory.
+ oai:arXiv.org:2601.13920v1
+ cs.GT
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Spyridon C. Giagtzoglou, Mark H. M. Winands, Barbara Franci
+
+
+ Automatic Prompt Optimization for Dataset-Level Feature Discovery
+ https://arxiv.org/abs/2601.13922
+ arXiv:2601.13922v1 Announce Type: new
+Abstract: Feature extraction from unstructured text is a critical step in many downstream classification pipelines, yet current approaches largely rely on hand-crafted prompts or fixed feature schemas. We formulate feature discovery as a dataset-level prompt optimization problem: given a labelled text corpus, the goal is to induce a global set of interpretable and discriminative feature definitions whose realizations optimize a downstream supervised learning objective. To this end, we propose a multi-agent prompt optimization framework in which language-model agents jointly propose feature definitions, extract feature values, and evaluate feature quality using dataset-level performance and interpretability feedback. Instruction prompts are iteratively refined based on this structured feedback, enabling optimization over prompts that induce shared feature sets rather than per-example predictions. This formulation departs from prior prompt optimization methods that rely on per-sample supervision and provides a principled mechanism for automatic feature discovery from unstructured text.
+ oai:arXiv.org:2601.13922v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Adrian Cosma, Oleg Szehr, David Kletz, Alessandro Antonucci, Olivier Pelletier
+
+
+ Proactive Coded Caching Scheme for D2D Networks
+ https://arxiv.org/abs/2601.13929
+ arXiv:2601.13929v1 Announce Type: new
+Abstract: Coded caching and device-to-device (D2D) communication are two effective techniques for alleviating network traffic. Secure transmission and file privacy have also become critical concerns in these domains. However, prevailing coded caching schemes typically assume that a user's cached content is inaccessible to others, overlooking the risk of file privacy leakage due to attacks targeting the cache itself. In this paper, we propose a secure coded caching scheme for D2D networks that guarantees both file privacy and secure delivery. We demonstrate that the proposed scheme achieves order-optimal performance when the file size is sufficiently large and the cache memory is ample.
+ oai:arXiv.org:2601.13929v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiaoling Zhang, Changlu Lin, Minquan Cheng
+
+
+ Towards Effective Negation Modeling in Joint Audio-Text Models for Music
+ https://arxiv.org/abs/2601.13931
+ arXiv:2601.13931v1 Announce Type: new
+Abstract: Joint audio-text models are widely used for music retrieval, yet they struggle with semantic phenomena such as negation. Negation is fundamental for distinguishing the absence (or presence) of musical elements (e.g., "with vocals" vs. "without vocals"), but current systems fail to represent this reliably. In this work, we investigate and mitigate this limitation by training CLAP models from scratch on the Million Song Dataset with LP-MusicCaps-MSD captions. We introduce negation through text augmentation and a dissimilarity-based contrastive loss, designed to explicitly separate original and negated captions in the joint embedding space. To evaluate progress, we propose two protocols that frame negation modeling as retrieval and binary classification tasks. Experiments demonstrate that both methods, individually and combined, improve negation handling while largely preserving retrieval performance.
+ oai:arXiv.org:2601.13931v1
+ cs.SD
+ cs.IR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yannis Vasilakis, Rachel Bittner, Johan Pauwels
+
+
+ VulnResolver: A Hybrid Agent Framework for LLM-Based Automated Vulnerability Issue Resolution
+ https://arxiv.org/abs/2601.13933
+ arXiv:2601.13933v1 Announce Type: new
+Abstract: As software systems grow in complexity, security vulnerabilities have become increasingly prevalent, posing serious risks and economic costs. Although automated detection tools such as fuzzers have advanced considerably, effective resolution still often depends on human expertise. Existing automated vulnerability repair (AVR) methods rely heavily on manually provided annotations (e.g., fault locations or CWE labels), which are often difficult and time-consuming to obtain, while overlooking the rich, naturally embedded semantic context found in issue reports from developers.
+ In this paper, we present VulnResolver, the first LLM-based hybrid agent framework for automated vulnerability issue resolution. VulnResolver unites the adaptability of autonomous agents with the stability of workflow-guided repair through two specialized agents. The Context Pre-Collection Agent (CPCAgent) adaptively explores the repository to gather dependency and contextual information, while the Safety Property Analysis Agent (SPAAgent) generates and validates the safety properties violated by vulnerabilities. Together, these agents produce structured analyses that enrich the original issue reports, enabling more accurate vulnerability localization and patch generation.
+ Evaluations on the SEC-bench benchmark show that VulnResolver resolves 75% of issues on SEC-bench Lite, achieving the best resolution performance. On SEC-bench Full, VulnResolver also significantly outperforms the strongest baseline, the agent-based OpenHands, confirming its effectiveness. Overall, VulnResolver delivers an adaptive and security-aware framework that advances end-to-end automated vulnerability issue resolution through workflow stability and the specialized agents' capabilities in contextual reasoning and property-based analysis.
+ oai:arXiv.org:2601.13933v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mingming Zhang, Xu Wang, Jian Zhang, Xiangxin Meng, Jiayi Zhang, Chunming Hu
+
+
+ TrackletGPT: A Language-like GPT Framework for White Matter Tract Segmentation
+ https://arxiv.org/abs/2601.13935
+ arXiv:2601.13935v1 Announce Type: new
+Abstract: White Matter Tract Segmentation is imperative for studying brain structural connectivity, neurological disorders and neurosurgery. The task remains complex, as tracts differ from one another and across subjects and conditions, yet share a similar 3D structure across hemispheres and subjects. To address these challenges, we propose TrackletGPT, a language-like GPT framework which reintroduces sequential information into tokens using tracklets. TrackletGPT generalises seamlessly across datasets, is fully automatic, and encodes granular sub-streamline segments (tracklets), scaling and refining GPT models in tractography segmentation. In our experiments, TrackletGPT outperforms state-of-the-art methods on average DICE, Overlap and Overreach scores on the TractoInferno and HCP datasets, even in inter-dataset experiments.
+ oai:arXiv.org:2601.13935v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Anoushkrit Goel, Simroop Singh, Ankita Joshi, Ranjeet Ranjan Jha, Chirag Ahuja, Aditya Nigam, Arnav Bhavsar
+
+
+ Impact Matters! An Audit Method to Evaluate AI Projects and their Impact for Sustainability and Public Interest
+ https://arxiv.org/abs/2601.13936
+ arXiv:2601.13936v1 Announce Type: new
+Abstract: The overall rapid increase of artificial intelligence (AI) use is linked to various initiatives that propose AI 'for good'. However, there is a lack of transparency in the goals of such projects, as well as a missing evaluation of their actual impacts on society and the planet. We close this gap by proposing public interest and sustainability as a regulatory dual-concept, together creating the necessary framework for a just and sustainable development that can be operationalized and utilized for the assessment of AI systems. Based on this framework, and building on existing work in auditing, we introduce the Impact-AI-method, a qualitative audit method to evaluate concrete AI projects with respect to public interest and sustainability. The interview-based method captures a project's governance structure, its theory of change, AI model and data characteristics, and social, environmental, and economic impacts. We also propose a catalog of assessment criteria to rate the outcome of the audit as well as to create an accessible output that can be debated broadly by civil society. The Impact-AI-method, developed in a transdisciplinary research setting together with NGOs and a multi-stakeholder research council, is intended as a reusable blueprint that both informs public debate about AI 'for good' claims and supports the creation of transparency of AI systems that purport to contribute to a just and sustainable development.
+ oai:arXiv.org:2601.13936v1
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Theresa Züger, Laura State, Lena Winter
+
+
+ IF-GEO: Conflict-Aware Instruction Fusion for Multi-Query Generative Engine Optimization
+ https://arxiv.org/abs/2601.13938
+ arXiv:2601.13938v1 Announce Type: new
+Abstract: As Generative Engines revolutionize information retrieval by synthesizing direct answers from retrieved sources, ensuring source visibility becomes a significant challenge. Improving it through targeted content revisions is a practical strategy termed Generative Engine Optimization (GEO). However, optimizing a document for diverse queries presents a constrained optimization challenge where heterogeneous queries often impose conflicting and competing revision requirements under a limited content budget. To address this challenge, we propose IF-GEO, a "diverge-then-converge" framework comprising two phases: (i) mining distinct optimization preferences from representative latent queries; (ii) synthesizing a Global Revision Blueprint for guided editing by coordinating preferences via conflict-aware instruction fusion. To explicitly quantify IF-GEO's objective of cross-query stability, we introduce risk-aware stability metrics. Experiments on multi-query benchmarks demonstrate that IF-GEO achieves substantial performance gains while maintaining robustness across diverse retrieval scenarios.
+ oai:arXiv.org:2601.13938v1
+ cs.IR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Heyang Zhou (School of Cyber Science and Technology, University of Science and Technology of China), JiaJia Chen (Institute of Dataspace, Hefei Comprehensive National Science Center), Xiaolu Chen (School of Cyber Science and Technology, University of Science and Technology of China), Jie Bao (School of Cyber Science and Technology, University of Science and Technology of China), Zhen Chen (School of Cyber Science and Technology, University of Science and Technology of China), Yong Liao (School of Cyber Science and Technology, University of Science and Technology of China)
+
+
+ Glance-or-Gaze: Incentivizing LMMs to Adaptively Focus Search via Reinforcement Learning
+ https://arxiv.org/abs/2601.13942
+ arXiv:2601.13942v1 Announce Type: new
+Abstract: Large Multimodal Models (LMMs) have achieved remarkable success in visual understanding, yet they struggle with knowledge-intensive queries involving long-tail entities or evolving information due to static parametric knowledge. Recent search-augmented approaches attempt to address this limitation, but existing methods rely on indiscriminate whole-image retrieval that introduces substantial visual redundancy and noise, and lack deep iterative reflection, limiting their effectiveness on complex visual queries. To overcome these challenges, we propose Glance-or-Gaze (GoG), a fully autonomous framework that shifts from passive perception to active visual planning. GoG introduces a Selective Gaze mechanism that dynamically chooses whether to glance at global context or gaze into high-value regions, filtering irrelevant information before retrieval. We design a dual-stage training strategy: Reflective GoG Behavior Alignment via supervised fine-tuning instills the fundamental GoG paradigm, while Complexity-Adaptive Reinforcement Learning further enhances the model's capability to handle complex queries through iterative reasoning. Experiments across six benchmarks demonstrate state-of-the-art performance. Ablation studies confirm that both Selective Gaze and complexity-adaptive RL are essential for effective visual search. We will release our data and models for further exploration soon.
+ oai:arXiv.org:2601.13942v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hongbo Bai, Yujin Zhou, Yile Wu, Chi-Min Chan, Pengcheng Wen, Kunhao Pan, Sirui Han, Yike Guo
+
+
+ RepoGenesis: Benchmarking End-to-End Microservice Generation from Readme to Repository
+ https://arxiv.org/abs/2601.13943
+ arXiv:2601.13943v1 Announce Type: new
+Abstract: Large language models and agents have achieved remarkable progress in code generation. However, existing benchmarks focus on isolated function/class-level generation (e.g., ClassEval) or modifications to existing codebases (e.g., SWE-Bench), neglecting complete microservice repository generation that reflects real-world 0-to-1 development workflows. To bridge this gap, we introduce RepoGenesis, the first multilingual benchmark for repository-level end-to-end web microservice generation, comprising 106 repositories (60 Python, 46 Java) across 18 domains and 11 frameworks, with 1,258 API endpoints and 2,335 test cases verified through a "review-rebuttal" quality assurance process. We evaluate open-source agents (e.g., DeepCode) and commercial IDEs (e.g., Cursor) using Pass@1, API Coverage (AC), and Deployment Success Rate (DSR). Results reveal that despite high AC (up to 73.91%) and DSR (up to 100%), the best-performing system achieves only 23.67% Pass@1 on Python and 21.45% on Java, exposing deficiencies in architectural coherence, dependency management, and cross-file consistency. Notably, GenesisAgent-8B, fine-tuned on RepoGenesis (train), achieves performance comparable to GPT-5 mini, demonstrating the quality of RepoGenesis for advancing microservice generation. We release our benchmark at https://github.com/pzy2000/RepoGenesis.
+ oai:arXiv.org:2601.13943v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zhiyuan Peng, Xin Yin, Pu Zhao, Fangkai Yang, Lu Wang, Ran Jia, Xu Chen, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang
+
+
+ Efficient Coordination with the System-Level Shared State: An Embodied-AI Native Modular Framework
+ https://arxiv.org/abs/2601.13945
+ arXiv:2601.13945v1 Announce Type: new
+Abstract: As Embodied AI systems move from research prototypes to real-world deployments, they must evolve rapidly while remaining reliable under workload changes and partial failures. In practice, many deployments are only partially decoupled: middleware moves messages, but shared context and feedback semantics are implicit, causing interface drift, cross-module interference, and brittle recovery at scale. We present ANCHOR, a modular framework that makes decoupling and robustness explicit system-level primitives. ANCHOR separates (i) Canonical Records, an evolvable contract for the standardized shared state, from (ii) a communication bus for many-to-many dissemination and feedback-oriented coordination, forming an inspectable end-to-end loop. We validate closed-loop feasibility on a de-identified workflow instantiation, characterize latency distributions under varying payload sizes and publish rates, and demonstrate automatic stream resumption after hard crashes and restarts even with shared-memory loss. Overall, ANCHOR turns ad-hoc integration glue into explicit contracts, enabling controlled degradation under load and self-healing recovery for scalable deployment of closed-loop AI systems.
+ oai:arXiv.org:2601.13945v1
+ cs.RO
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yixuan Deng, Tongrun Wu, Donghao Wu, Zeyu Wei, Jiayuan Wang, Zhenglong Sun, Yuqing Tang, Xiaoqiang Ji
+
+
+ VTONGuard: Automatic Detection and Authentication of AI-Generated Virtual Try-On Content
+ https://arxiv.org/abs/2601.13951
+ arXiv:2601.13951v1 Announce Type: new
+Abstract: With the rapid advancement of generative AI, virtual try-on (VTON) systems are becoming increasingly common in e-commerce and digital entertainment. However, the growing realism of AI-generated try-on content raises pressing concerns about authenticity and responsible use. To address this, we present VTONGuard, a large-scale benchmark dataset containing over 775,000 real and synthetic try-on images. The dataset covers diverse real-world conditions, including variations in pose, background, and garment styles, and provides both authentic and manipulated examples. Based on this benchmark, we conduct a systematic evaluation of multiple detection paradigms under unified training and testing protocols. Our results reveal each method's strengths and weaknesses and highlight the persistent challenge of cross-paradigm generalization. To further advance detection, we design a multi-task framework that integrates auxiliary segmentation to enhance boundary-aware feature learning, achieving the best overall performance on VTONGuard. We expect this benchmark to enable fair comparisons, facilitate the development of more robust detection models, and promote the safe and responsible deployment of VTON technologies in practice.
+ oai:arXiv.org:2601.13951v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shengyi Wu, Yan Hong, Shengyao Chen, Zheng Wang, Xianbing Sun, Jiahui Zhan, Jun Lan, Jianfu Zhang
+
+
+ Differentiable Logic Synthesis: Spectral Coefficient Selection via Sinkhorn-Constrained Composition
+ https://arxiv.org/abs/2601.13953
+ arXiv:2601.13953v1 Announce Type: new
+Abstract: Learning precise Boolean logic via gradient descent remains challenging: neural networks typically converge to "fuzzy" approximations that degrade under quantization. We introduce Hierarchical Spectral Composition, a differentiable architecture that selects spectral coefficients from a frozen Boolean Fourier basis and composes them via Sinkhorn-constrained routing with column-sign modulation. Our approach draws on recent insights from Manifold-Constrained Hyper-Connections (mHC), which demonstrated that projecting routing matrices onto the Birkhoff polytope preserves identity mappings and stabilizes large-scale training. We adapt this framework to logic synthesis, adding column-sign modulation to enable Boolean negation -- a capability absent in standard doubly stochastic routing.
+ We validate our approach across four phases of increasing complexity: (1) For n=2 (16 Boolean operations over 4-dim basis), gradient descent achieves 100% accuracy with zero routing drift and zero-loss quantization to ternary masks. (2) For n=3 (10 three-variable operations), gradient descent achieves 76% accuracy, but exhaustive enumeration over 3^8 = 6561 configurations proves that optimal ternary masks exist for all operations (100% accuracy, 39% sparsity). (3) For n=4 (10 four-variable operations over 16-dim basis), spectral synthesis -- combining exact Walsh-Hadamard coefficients, ternary quantization, and MCMC refinement with parallel tempering -- achieves 100% accuracy on all operations. This progression establishes (a) that ternary polynomial threshold representations exist for all tested functions, and (b) that finding them requires methods beyond pure gradient descent as dimensionality grows. All operations enable single-cycle combinational logic inference at 10,959 MOps/s on GPU, demonstrating viability for hardware-efficient neuro-symbolic logic synthesis.
+ oai:arXiv.org:2601.13953v1
+ cs.LG
+ cs.AR
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Gorgi Pavlov
+
+
+ DExTeR: Weakly Semi-Supervised Object Detection with Class and Instance Experts for Medical Imaging
+ https://arxiv.org/abs/2601.13954
+ arXiv:2601.13954v1 Announce Type: new
+Abstract: Detecting anatomical landmarks in medical imaging is essential for diagnosis and intervention guidance. However, object detection models rely on costly bounding box annotations, limiting scalability. Weakly Semi-Supervised Object Detection (WSSOD) with point annotations proposes annotating each instance with a single point, minimizing annotation time while preserving localization signals. A Point-to-Box teacher model, trained on a small box-labeled subset, converts these point annotations into pseudo-box labels to train a student detector. Yet, medical imagery presents unique challenges, including overlapping anatomy, variable object sizes, and elusive structures, which hinder accurate bounding box inference. To overcome these challenges, we introduce DExTeR (DETR with Experts), a transformer-based Point-to-Box regressor tailored for medical imaging. Built upon Point-DETR, DExTeR encodes single-point annotations as object queries, refining feature extraction with the proposed class-guided deformable attention, which guides attention sampling using point coordinates and class labels to capture class-specific characteristics. To improve discrimination in complex structures, it introduces CLICK-MoE (CLass, Instance, and Common Knowledge Mixture of Experts), decoupling class and instance representations to reduce confusion among adjacent or overlapping instances. Finally, we implement a multi-point training strategy which promotes prediction consistency across different point placements, improving robustness to annotation variability. DExTeR achieves state-of-the-art performance across three datasets spanning different medical domains (endoscopy, chest X-rays, and endoscopic ultrasound) highlighting its potential to reduce annotation costs while maintaining high detection accuracy.
+ oai:arXiv.org:2601.13954v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Adrien Meyer, Didier Mutter, Nicolas Padoy
+
+
+ Where to Place a Heavy Payload on a Multirotor UAV for Best Control Performance
+ https://arxiv.org/abs/2601.13958
+ arXiv:2601.13958v1 Announce Type: new
+Abstract: This paper studies the impact of rigidly attached heavy payload placement - where the payload mass significantly influences the UAV's dynamics - on the stability and control performance of a multirotor unmanned aerial vehicle (UAV). In particular, we focus on how the position of such a payload relative to the vehicle's Center of Gravity (CoG) affects the stability and control performance at an arbitrary point of interest on the UAV, such as the payload position, and on how this position can be optimized. Our conclusions are based on two key contributions. First, we analyze the stability of the zero dynamics of a complete nonlinear model of the UAV with payload. We demonstrate that the stability of the zero dynamics depends on the vertical signed distance in the body-fixed frame between the controlled output position and the combined CoG of the UAV with payload. Specifically, positioning the output below the CoG yields unstable zero dynamics, while the linearized zero dynamics are marginally stable when placing it above, indicating reduced sensitivity to input disturbances. Second, we analyze the performance of the linearized UAV model with payload by providing an analytical expression for the H2-norm, from which we can quantify the system's attenuation of white-noise input disturbances. We conclude that less control authority leads to a higher optimal position of the controlled output with respect to the CoG for closed-loop white-noise disturbance rejection, including when the heavy payload itself is the controlled output. The results are illustrated through numerical examples.
+ oai:arXiv.org:2601.13958v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Sander Doodeman, Paula Chanfreut Palacio, Elena Torta, Duarte Antunes
+
+
+ RL-BioAug: Label-Efficient Reinforcement Learning for Self-Supervised EEG Representation Learning
+ https://arxiv.org/abs/2601.13964
+ arXiv:2601.13964v1 Announce Type: new
+Abstract: The quality of data augmentation serves as a critical determinant for the performance of contrastive learning in EEG tasks. Although this paradigm is promising for utilizing unlabeled data, static or random augmentation strategies often fail to preserve intrinsic information due to the non-stationarity of EEG signals, where statistical properties change over time. To address this, we propose RL-BioAug, a framework that leverages a label-efficient reinforcement learning (RL) agent to autonomously determine optimal augmentation policies. While utilizing only a minimal fraction (10%) of labeled data to guide the agent's policy, our method enables the encoder to learn robust representations in a strictly self-supervised manner. Experimental results demonstrate that RL-BioAug significantly outperforms the random selection strategy, achieving substantial improvements of 9.69% and 8.80% in Macro-F1 score on the Sleep-EDFX and CHB-MIT datasets, respectively. Notably, the agent mainly chose strategies suited to each task -- for example, Time Masking with a 62% probability for sleep stage classification and Crop & Resize with a 77% probability for seizure detection. Our framework suggests its potential to replace conventional heuristic-based augmentations and establish a new autonomous paradigm for data augmentation. The source code is available at https://github.com/dlcjfgmlnasa/RL-BioAug.
+ oai:arXiv.org:2601.13964v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Cheol-Hui Lee, Hwa-Yeon Lee, Dong-Joo Kim
+
+
+ Autonomous Knowledge Graph Exploration with Adaptive Breadth-Depth Retrieval
+ https://arxiv.org/abs/2601.13969
+ arXiv:2601.13969v1 Announce Type: new
+Abstract: Retrieving evidence for language model queries from knowledge graphs requires balancing broad search across the graph with multi-hop traversal to follow relational links. Similarity-based retrievers provide coverage but remain shallow, whereas traversal-based methods rely on selecting seed nodes to start exploration, which can fail when queries span multiple entities and relations. We introduce ARK: Adaptive Retriever of Knowledge, an agentic KG retriever that gives a language model control over this breadth-depth tradeoff using a two-operation toolset: global lexical search over node descriptors and one-hop neighborhood exploration that composes into multi-hop traversal. ARK alternates between breadth-oriented discovery and depth-oriented expansion without depending on a fragile seed selection, a pre-set hop depth, or requiring retrieval training. ARK adapts tool use to queries, using global search for language-heavy queries and neighborhood exploration for relation-heavy queries. On STaRK, ARK reaches 59.1% average Hit@1 and 67.4 average MRR, improving average Hit@1 by up to 31.4% and average MRR by up to 28.0% over retrieval-based and agentic training-free methods. Finally, we distill ARK's tool-use trajectories from a large teacher into an 8B model via label-free imitation, improving Hit@1 by +7.0, +26.6, and +13.5 absolute points over the base 8B model on AMAZON, MAG, and PRIME datasets, respectively, while retaining up to 98.5% of the teacher's Hit@1 rate.
+ oai:arXiv.org:2601.13969v1
+ cs.AI
+ cs.IR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Joaquín Polonuer (Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA, Departamento de Computación, FCEyN, Universidad de Buenos Aires, Buenos Aires, Argentina), Lucas Vittor (Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA), Iñaki Arango (Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA), Ayush Noori (Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA, Department of Engineering Science, University of Oxford, Oxford, UK), David A. Clifton (Department of Engineering Science, University of Oxford, Oxford, UK, Oxford Suzhou Centre for Advanced Research, University of Oxford, Suzhou, Jiangsu, China), Luciano Del Corro (ELIAS Lab, Departamento de Ingeniería, Universidad de San Andrés, Victoria, Argentina, Lumina Labs, Buenos Aires, Argentina), Marinka Zitnik (Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA, Kempner Institute for the Study of Natural and Artificial Intelligence, Allston, MA, USA, Broad Institute of MIT and Harvard, Cambridge, MA, USA, Harvard Data Science Initiative, Cambridge, MA, USA)
+
+
+ The Transparency Paradox in Explainable AI: A Theory of Autonomy Depletion Through Cognitive Load
+ https://arxiv.org/abs/2601.13973
+ arXiv:2601.13973v1 Announce Type: new
+Abstract: Objective: This paper develops a theoretical framework explaining when and why AI explanations enhance versus impair human decision-making.
+ Background: Transparency is advocated as universally beneficial for human-AI interaction, yet identical AI explanations improve decision quality in some contexts but impair it in others. Current theories--trust calibration, cognitive load, and self-determination--cannot fully account for this paradox.
+ Method: The framework models autonomy as a continuous stochastic process influenced by information-induced cognitive load. Using stochastic control theory, autonomy evolution is formalized as geometric Brownian motion with information-dependent drift, and optimal transparency is derived via Hamilton-Jacobi-Bellman equations. Monte Carlo simulations validate theoretical predictions.
+ Results: Mathematical analysis generates five testable predictions about disengagement timing, working memory moderation, autonomy trajectory shapes, and optimal information levels. Computational solutions demonstrate that dynamic transparency policies outperform both maximum and minimum transparency by adapting to real-time cognitive state. The optimal policy exhibits threshold structure: provide information when autonomy is high and accumulated load is low; withhold when resources are depleted.
+ Conclusion: Transparency effects depend on dynamic cognitive resource depletion rather than static design choices. Information provision triggers metacognitive processing that reduces perceived control when cognitive load exceeds working memory capacity.
+ Application: The framework provides design principles for adaptive AI systems: adjust transparency based on real-time cognitive state, implement information budgets respecting capacity limits, and personalize thresholds based on individual working memory capacity.
+ oai:arXiv.org:2601.13973v1
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ancuta Margondai, Mustapha Mouloua
+
+
+ STEC: A Reference-Free Spatio-Temporal Entropy Coverage Metric for Evaluating Sampled Video Frames
+ https://arxiv.org/abs/2601.13974
+ arXiv:2601.13974v1 Announce Type: new
+Abstract: Frame sampling is a fundamental component in video understanding and video--language model pipelines, yet evaluating the quality of sampled frames remains challenging. Existing evaluation metrics primarily focus on perceptual quality or reconstruction fidelity, and are not designed to assess whether a set of sampled frames adequately captures informative and representative video content.
+ We propose Spatio-Temporal Entropy Coverage (STEC), a simple, reference-free metric for evaluating the effectiveness of video frame sampling. STEC builds upon Spatio-Temporal Frame Entropy (STFE), which measures per-frame spatial information via entropy-based structural complexity, and evaluates sampled frames based on their temporal coverage and redundancy. By jointly modeling spatial information strength, temporal dispersion, and non-redundancy, STEC provides a principled and lightweight measure of sampling quality.
+ Experiments on the MSR-VTT test-1k benchmark demonstrate that STEC clearly differentiates common sampling strategies, including random, uniform, and content-aware methods. We further show that STEC reveals robustness patterns across individual videos that are not captured by average performance alone, highlighting its practical value as a general-purpose evaluation tool for efficient video understanding.
+ We emphasize that STEC is not designed to predict downstream task accuracy, but to provide a task-agnostic diagnostic signal for analyzing frame sampling behavior under constrained budgets.
+ oai:arXiv.org:2601.13974v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Shih-Yao Lin
+
+
+ Harmonizing the Deep: A Unified Information Pipeline for Robust Marine Biodiversity Assessment Across Heterogeneous Domains
+ https://arxiv.org/abs/2601.13975
+ arXiv:2601.13975v1 Announce Type: new
+Abstract: Marine biodiversity monitoring requires scalability and reliability across complex underwater environments to support conservation and invasive-species management. Yet existing detection solutions often exhibit a pronounced deployment gap, with performance degrading sharply when transferred to new sites. This work establishes the foundational detection layer for a multi-year invasive species monitoring initiative targeting Arctic and Atlantic marine ecosystems. We address this challenge by developing a Unified Information Pipeline that standardises heterogeneous datasets into a comparable information flow and evaluates a fixed, deployment-relevant detector under controlled cross-domain protocols. Across multiple domains, we find that structural factors, such as scene composition, object density, and contextual redundancy, explain cross-domain performance loss more strongly than visual degradation such as turbidity, with sparse scenes inducing a characteristic "Context Collapse" failure mode. We further validate operational feasibility by benchmarking inference on low-cost edge hardware, showing that runtime optimisation enables practical sampling rates for remote monitoring. The results shift emphasis from image enhancement toward structure-aware reliability, providing a democratised tool for consistent marine ecosystem assessment.
+ oai:arXiv.org:2601.13975v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Marco Piccolo, Qiwei Han, Astrid van Toor, Joachim Vanneste
+
+
+ FantasyVLN: Unified Multimodal Chain-of-Thought Reasoning for Vision-Language Navigation
+ https://arxiv.org/abs/2601.13976
+ arXiv:2601.13976v1 Announce Type: new
+Abstract: Achieving human-level performance in Vision-and-Language Navigation (VLN) requires an embodied agent to jointly understand multimodal instructions and visual-spatial context while reasoning over long action sequences. Recent works, such as NavCoT and NavGPT-2, demonstrate the potential of Chain-of-Thought (CoT) reasoning for improving interpretability and long-horizon planning. Moreover, multimodal extensions like OctoNav-R1 and CoT-VLA further validate CoT as a promising pathway toward human-like navigation reasoning. However, existing approaches face critical drawbacks: purely textual CoTs lack spatial grounding and easily overfit to sparse annotated reasoning steps, while multimodal CoTs incur severe token inflation by generating imagined visual observations, making real-time navigation impractical. In this work, we propose FantasyVLN, a unified implicit reasoning framework that preserves the benefits of CoT reasoning without explicit token overhead. Specifically, imagined visual tokens are encoded into a compact latent space using a pretrained Visual AutoRegressor (VAR) during CoT reasoning training, and the model jointly learns from textual, visual, and multimodal CoT modes under a unified multi-CoT strategy. At inference, our model performs direct instruction-to-action mapping while still enjoying reasoning-aware representations. Extensive experiments on LH-VLN show that our approach achieves reasoning-aware yet real-time navigation, improving success rates and efficiency while reducing inference latency by an order of magnitude compared to explicit CoT methods.
+ oai:arXiv.org:2601.13976v1
+ cs.CV
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jing Zuo, Lingzhou Mu, Fan Jiang, Chengcheng Ma, Mu Xu, Yonggang Qi
+
+
+ Active Cross-Modal Visuo-Tactile Perception of Deformable Linear Objects
+ https://arxiv.org/abs/2601.13979
+ arXiv:2601.13979v1 Announce Type: new
+Abstract: This paper presents a novel cross-modal visuo-tactile perception framework for the 3D shape reconstruction of deformable linear objects (DLOs), with a specific focus on cables subject to severe visual occlusions. Unlike existing methods relying predominantly on vision, whose performance degrades under varying illumination, background clutter, or partial visibility, the proposed approach integrates foundation-model-based visual perception with adaptive tactile exploration. The visual pipeline exploits SAM for instance segmentation and Florence for semantic refinement, followed by skeletonization, endpoint detection, and point-cloud extraction. Occluded cable segments are autonomously identified and explored with a tactile sensor, which provides local point clouds that are merged with the visual data through Euclidean clustering and topology-preserving fusion. A B-spline interpolation driven by endpoint-guided point sorting yields a smooth and complete reconstruction of the cable shape. Experimental validation using a robotic manipulator equipped with an RGB-D camera and a tactile pad demonstrates that the proposed framework accurately reconstructs both simple and highly curved single or multiple cable configurations, even when large portions are occluded. These results highlight the potential of foundation-model-enhanced cross-modal perception for advancing robotic manipulation of deformable objects.
+ oai:arXiv.org:2601.13979v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Raffaele Mazza, Ciro Natale, Pietro Falco
+
+
+ VirtualCrime: Evaluating Criminal Potential of Large Language Models via Sandbox Simulation
+ https://arxiv.org/abs/2601.13981
+ arXiv:2601.13981v1 Announce Type: new
+Abstract: Large language models (LLMs) have shown strong capabilities in multi-step decision-making, planning, and action, and are increasingly integrated into various real-world applications. A pressing concern is whether their strong problem-solving abilities can be misused for crime. To address this gap, we propose VirtualCrime, a sandbox simulation framework based on a three-agent system to evaluate the criminal capabilities of models. Specifically, this framework consists of an attacker agent acting as the leader of a criminal team, a judge agent determining the outcome of each action, and a world manager agent updating the environment state and entities. Furthermore, we design 40 diverse crime tasks within this framework, covering 11 maps and 13 crime objectives such as theft, robbery, kidnapping, and riot. We also introduce a human player baseline for reference to better interpret the performance of LLM agents. We evaluate 8 strong LLMs and find that (1) all agents in the simulation environment compliantly generate detailed plans and execute intelligent crime processes, with some achieving relatively high success rates; and (2) in some cases, agents take severe actions that inflict harm on NPCs to achieve their goals. Our work highlights the need for safety alignment when deploying agentic AI in real-world settings.
+ oai:arXiv.org:2601.13981v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yilin Tang, Yu Wang, Lanlan Qiu, Wenchang Gao, Yunfei Ma, Baicheng Chen, Tianxing He
+
+
+ Equivariant Learning for Unsupervised Image Dehazing
+ https://arxiv.org/abs/2601.13986
+ arXiv:2601.13986v1 Announce Type: new
+Abstract: Image Dehazing (ID) aims to produce a clear image from an observation contaminated by haze. Current ID methods typically rely on carefully crafted priors or extensive haze-free ground truth, both of which are expensive or impractical to acquire, particularly in the context of scientific imaging. We propose a new unsupervised learning framework called Equivariant Image Dehazing (EID) that exploits the symmetry of image signals to restore clarity to hazy observations. By enforcing haze consistency and systematic equivariance, EID can recover clear patterns directly from raw, hazy images. Additionally, we propose an adversarial learning strategy to model unknown haze physics and facilitate EID learning. Experiments on two scientific image dehazing benchmarks (including cell microscopy and medical endoscopy) and on natural image dehazing have demonstrated that EID significantly outperforms state-of-the-art approaches. By unifying equivariant learning with modelling haze physics, we hope that EID will enable more versatile and effective haze removal in scientific imaging. Code and datasets will be published.
+ oai:arXiv.org:2601.13986v1
+ cs.CV
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhang Wen, Jiangwei Xie, Dongdong Chen
+
+
+ A universal linearized subspace refinement framework for neural networks
+ https://arxiv.org/abs/2601.13989
+ arXiv:2601.13989v1 Announce Type: new
+Abstract: Neural networks are predominantly trained using gradient-based methods, yet in many applications their final predictions remain far from the accuracy attainable within the model's expressive capacity. We introduce Linearized Subspace Refinement (LSR), a general and architecture-agnostic framework that exploits the Jacobian-induced linear residual model at a fixed trained network state. By solving a reduced direct least-squares problem within this subspace, LSR computes a subspace-optimal solution of the linearized residual model, yielding a refined linear predictor with substantially improved accuracy over standard gradient-trained solutions, without modifying network architectures, loss formulations, or training procedures. Across supervised function approximation, data-driven operator learning, and physics-informed operator fine-tuning, we show that gradient-based training often fails to access this attainable accuracy, even when local linearization yields a convex problem. This observation indicates that loss-induced numerical ill-conditioning, rather than nonconvexity or model expressivity, can constitute a dominant practical bottleneck. In contrast, one-shot LSR systematically exposes accuracy levels not fully exploited by gradient-based training, frequently achieving order-of-magnitude error reductions. For operator-constrained problems with composite loss structures, we further introduce Iterative LSR, which alternates one-shot LSR with supervised nonlinear alignment, transforming ill-conditioned residual minimization into numerically benign fitting steps and yielding accelerated convergence and improved accuracy. By bridging nonlinear neural representations with reduced-order linear solvers at fixed linearization points, LSR provides a numerically grounded and broadly applicable refinement framework for supervised learning, operator learning, and scientific computing.
+ oai:arXiv.org:2601.13989v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wenbo Cao, Weiwei Zhang
+
+
+ Generating Functions Meet Occupation Measures: Invariant Synthesis for Probabilistic Loops (Extended Version)
+ https://arxiv.org/abs/2601.13991
+ arXiv:2601.13991v1 Announce Type: new
+Abstract: A fundamental computational task in probabilistic programming is to infer a program's output (posterior) distribution from a given initial (prior) distribution. This problem is challenging, especially for expressive languages that feature loops or unbounded recursion. While most of the existing literature focuses on statistical approximation, in this paper we address the problem of mathematically exact inference.
+ To achieve this for programs with loops, we rely on a relatively underexplored type of probabilistic loop invariant, which is linked to a loop's so-called occupation measure. The occupation measure associates program states with their expected number of visits, given the initial distribution. Based on this, we derive the notion of an occupation invariant. Such invariants are essentially dual to probabilistic martingales, the predominant technique for formal probabilistic loop analysis in the literature. A key feature of occupation invariants is that they can take the initial distribution into account and often yield a proof of positive almost sure termination as a by-product.
+ Finally, we present an automatic, template-based invariant synthesis approach for occupation invariants by encoding them as generating functions. The approach is implemented and evaluated on a set of benchmarks.
+ oai:arXiv.org:2601.13991v1
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Darion Haase, Kevin Batz, Adrian Gallus, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Lutz Klinkenberg, Tobias Winkler
+
+
+ "The Whole Is Greater Than the Sum of Its Parts": A Compatibility-Aware Multi-Teacher CoT Distillation Framework
+ https://arxiv.org/abs/2601.13992
+ arXiv:2601.13992v1 Announce Type: new
+Abstract: Chain-of-Thought (CoT) reasoning empowers Large Language Models (LLMs) with remarkable capabilities but typically requires prohibitive parameter scales. CoT distillation has emerged as a promising paradigm to transfer reasoning prowess into compact Student Models (SLMs), but existing approaches often rely on a solitary teacher, capping the student's potential since individual LLMs often exhibit distinct capability biases and may suffer from catastrophic forgetting. While leveraging diverse teachers seems appealing, effectively fusing their supervisions remains challenging: teacher-student incompatibility risks amplifying hallucinations, and passive supervision fails to ensure genuine logic internalization. To address this, we introduce COMPACT, a framework that adaptively fuses supervisions from different teachers by dynamically weighting teacher gradients based on the student's real-time compatibility evaluated by a multi-dimensional metric: (1) Graph-based Consensus to filter misleading rationales by identifying mainstream reasoning paths; (2) Mutual-Information-based Adaptability to detect "epiphany moments" for genuinely understanding the reasoning process rather than merely imitating; and (3) Loss-based Difficulty to assess student receptivity to the teacher's guidance and prevent negative transfer. Extensive experiments and latent space analysis demonstrate that COMPACT effectively integrates diverse reasoning capabilities without damaging the model's original knowledge structure, achieving state-of-the-art performance on various benchmarks while mitigating catastrophic forgetting.
+ oai:arXiv.org:2601.13992v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jin Cui, Jiaqi Guo, Jiepeng Zhou, Ruixuan Yang, Jiayi Lu, Jiajun Xu, Jiangcheng Song, Boran Zhao, Pengju Ren
+
+
+ Capacity and Energy Trade-Offs in FR3 6G Networks Using Real Deployment Data
+ https://arxiv.org/abs/2601.13993
+ arXiv:2601.13993v1 Announce Type: new
+Abstract: This article presents a data-driven system-level analysis of multi-layer 6G networks operating in the upper mid-band (FR3: 7-24 GHz). Unlike most prior studies based on 3rd Generation Partnership Project (3GPP) templates, we leverage real-world deployment and traffic data from a commercial 4G/5G network in China to evaluate practical 6G strategies. Using Giulia, a deployment-informed system-level heterogeneous network model, we show that 6G can boost median throughput by up to 9.5x over heterogeneous 4G+5G deployments, but also increases power usage by up to 59%. Critically, co-locating 6G with existing sites delivers limited gains while incurring high energy cost. In contrast, non-co-located, traffic-aware deployments achieve superior throughput-to-watt efficiency, highlighting the need for strategic, user equipment (UE) hotspot-focused 6G planning.
+ oai:arXiv.org:2601.13993v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ David López-Pérez, Nicola Piovesan, Matteo Bernabè
+
+
+ torch-sla: Differentiable Sparse Linear Algebra with Adjoint Solvers and Sparse Tensor Parallelism for PyTorch
+ https://arxiv.org/abs/2601.13994
+ arXiv:2601.13994v1 Announce Type: new
+Abstract: Industrial scientific computing predominantly uses sparse matrices to represent unstructured data -- finite element meshes, graphs, point clouds. We present \torchsla{}, an open-source PyTorch library that enables GPU-accelerated, scalable, and differentiable sparse linear algebra. The library addresses three fundamental challenges: (1) GPU acceleration for sparse linear solves, nonlinear solves (Newton, Picard, Anderson), and eigenvalue computation; (2) Multi-GPU scaling via domain decomposition with halo exchange, reaching \textbf{400 million DOF linear solve on 3 GPUs}; and (3) Adjoint-based differentiation achieving $\mathcal{O}(1)$ computational graph nodes (for autograd) and $\mathcal{O}(\text{nnz})$ memory -- independent of solver iterations. \torchsla{} supports multiple backends (SciPy, cuDSS, PyTorch-native) and seamlessly integrates with PyTorch autograd for end-to-end differentiable simulations. Code is available at https://github.com/walkerchi/torch-sla.
+ oai:arXiv.org:2601.13994v1
+ cs.DC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mingyuan Chi
+
+
+ From Tags to Trees: Structuring Fine-Grained Knowledge for Controllable Data Selection in LLM Instruction Tuning
+ https://arxiv.org/abs/2601.13995
+ arXiv:2601.13995v1 Announce Type: new
+Abstract: Effective and controllable data selection is critical for LLM instruction tuning, especially with massive open-source datasets. Existing approaches primarily rely on instance-level quality scores, or diversity metrics based on embedding clusters or semantic tags. However, constrained by the flatness of embedding spaces or the coarseness of tags, these approaches overlook fine-grained knowledge and its intrinsic hierarchical dependencies, consequently hindering precise data valuation and knowledge-aligned sampling. To address this challenge, we propose Tree-aware Aligned Global Sampling (TAGS), a unified framework that leverages a knowledge tree built from fine-grained tags, thereby enabling joint control of global quality, diversity, and target alignment. Using an LLM-based tagger, we extract atomic knowledge concepts, which are organized into a global tree through bottom-up hierarchical clustering. By grounding data instances onto this tree, a tree-aware metric then quantifies data quality and diversity, facilitating effective sampling. Our controllable sampling strategy maximizes tree-level information gain and enforces leaf-level alignment via KL-divergence for specific domains. Extensive experiments demonstrate that TAGS significantly outperforms state-of-the-art baselines. Notably, it surpasses the full-dataset model by \textbf{+5.84\%} using only \textbf{5\%} of the data, while our aligned sampling strategy further boosts average performance by \textbf{+4.24\%}.
+ oai:arXiv.org:2601.13995v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zihan Niu, Wenping Hu, Junmin Chen, Xiyue Wang, Tong Xu, Ruiming Tang
+
+
+ Software Testing in the Quantum World
+ https://arxiv.org/abs/2601.13996
+ arXiv:2601.13996v1 Announce Type: new
+Abstract: Quantum computing offers significant speedups for simulating physical, chemical, and biological systems, and for optimization and machine learning. As quantum software grows in complexity, the classical simulation of quantum computers, which has long been essential for quality assurance, becomes infeasible. This shift requires new quality-assurance methods that operate directly on real quantum computers. This paper presents the key challenges in testing large-scale quantum software and offers software engineering perspectives for addressing them.
+ oai:arXiv.org:2601.13996v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Rui Abreu, Shaukat Ali, Paolo Arcaini, Jose Campos, Michael Felderer, Claude Gravel, Fuyuki Ishikawa, Stefan Klikovits, Andriy Miranskyy, Mohammad Mousavi, Masaomi Yamaguchi, Lei Zhang, Jianjun Zhao, Anila Mjeda
+
+
+ Group-Invariant Unsupervised Skill Discovery: Symmetry-aware Skill Representations for Generalizable Behavior
+ https://arxiv.org/abs/2601.14000
+ arXiv:2601.14000v1 Announce Type: new
+Abstract: Unsupervised skill discovery aims to acquire behavior primitives that improve exploration and accelerate downstream task learning. However, existing approaches often ignore the geometric symmetries of physical environments, leading to redundant behaviors and sample inefficiency. To address this, we introduce Group-Invariant Skill Discovery (GISD), a framework that explicitly embeds group structure into the skill discovery objective. Our approach is grounded in a theoretical guarantee: we prove that in group-symmetric environments, the standard Wasserstein dependency measure admits a globally optimal solution comprised of an equivariant policy and a group-invariant scoring function. Motivated by this, we formulate the Group-Invariant Wasserstein dependency measure, which restricts the optimization to this symmetry-aware subspace without loss of optimality. Practically, we parameterize the scoring function using a group Fourier representation and define the intrinsic reward via the alignment of equivariant latent features, ensuring that the discovered skills generalize systematically under group transformations. Experiments on state-based and pixel-based locomotion benchmarks demonstrate that GISD achieves broader state-space coverage and improved efficiency in downstream task learning compared to a strong baseline.
+ oai:arXiv.org:2601.14000v1
+ cs.RO
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junwoo Chang, Joseph Park, Roberto Horowitz, Jongmin Lee, Jongeun Choi
+
+
+ Auditory Brain Passage Retrieval: Cross-Sensory EEG Training for Neural Information Retrieval
+ https://arxiv.org/abs/2601.14001
+ arXiv:2601.14001v1 Announce Type: new
+Abstract: Query formulation from internal information needs remains fundamentally challenging across all Information Retrieval paradigms due to cognitive complexity and physical impairments. Brain Passage Retrieval (BPR) addresses this by directly mapping EEG signals to passage representations without intermediate text translation. However, existing BPR research exclusively uses visual stimuli, leaving critical questions unanswered: Can auditory EEG enable effective retrieval for voice-based interfaces and visually impaired users? Can training on combined EEG datasets from different sensory modalities improve performance despite severe data scarcity? We present the first systematic investigation of auditory EEG for BPR and evaluate cross-sensory training benefits. Using dual encoder architectures with four pooling strategies (CLS, mean, max, multi-vector), we conduct controlled experiments comparing auditory-only, visual-only, and combined training on the Alice (auditory) and Nieuwland (visual) datasets. Results demonstrate that auditory EEG consistently outperforms visual EEG, and cross-sensory training with CLS pooling achieves substantial improvements over individual training: 31% in MRR (0.474), 43% in Hit@1 (0.314), and 28% in Hit@10 (0.858). Critically, combined auditory EEG models surpass BM25 text baselines (MRR: 0.474 vs 0.428), establishing neural queries as competitive with traditional retrieval whilst enabling accessible interfaces. These findings validate auditory neural interfaces for IR tasks and demonstrate that cross-sensory training addresses data scarcity whilst outperforming single-modality approaches. Code: https://github.com/NiallMcguire/Audio_BPR
+ oai:arXiv.org:2601.14001v1
+ cs.IR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Niall McGuire, Yashar Moshfeghi
+
+
+ Consensus Stability of Community Notes on X
+ https://arxiv.org/abs/2601.14002
+ arXiv:2601.14002v1 Announce Type: new
+Abstract: Community-based fact-checking systems, such as Community Notes on X (formerly Twitter), aim to mitigate online misinformation by surfacing annotations judged helpful by contributors with diverse viewpoints. While prior work has shown that the platform's bridging-based algorithm effectively selects helpful notes at the time of display, little is known about how evaluations change after notes become visible. Using a large-scale dataset of 437,396 community notes and 35 million ratings from over 580,000 contributors, we examine the stability of helpful notes and the rating dynamics that follow their initial display. We find that 30.2% of displayed notes later lose their helpful status and disappear. Using interrupted time series models, we further show that note display triggers a sharp increase in rating volume and a significant shift in rating leaning, but these effects differ across rater groups. Contributors with viewpoints similar to note authors tend to increase supportive ratings, while dissimilar contributors increase negative ratings, producing systematic post-display polarization. Counterfactual analyses suggest that this post-display polarization, particularly from dissimilar raters, plays a substantial role in note disappearance. These findings highlight the vulnerability of consensus-based fact-checking systems to polarized rating behavior and suggest pathways for improving their resilience.
+ oai:arXiv.org:2601.14002v1
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1145/3774904.3792987
+ Yuwei Chuai, Gabriele Lenzini, Nicolas Pröllochs
+
+
+ Locate, Steer, and Improve: A Practical Survey of Actionable Mechanistic Interpretability in Large Language Models
+ https://arxiv.org/abs/2601.14004
+ arXiv:2601.14004v1 Announce Type: new
+Abstract: Mechanistic Interpretability (MI) has emerged as a vital approach to demystify the opaque decision-making of Large Language Models (LLMs). However, existing reviews primarily treat MI as an observational science, summarizing analytical insights while lacking a systematic framework for actionable intervention. To bridge this gap, we present a practical survey structured around the pipeline: "Locate, Steer, and Improve." We formally categorize Localizing (diagnosis) and Steering (intervention) methods based on specific Interpretable Objects to establish a rigorous intervention protocol. Furthermore, we demonstrate how this framework enables tangible improvements in Alignment, Capability, and Efficiency, effectively operationalizing MI as an actionable methodology for model optimization. The curated paper list of this work is available at https://github.com/rattlesnakey/Awesome-Actionable-MI-Survey.
+ oai:arXiv.org:2601.14004v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hengyuan Zhang, Zhihao Zhang, Mingyang Wang, Zunhai Su, Yiwei Wang, Qianli Wang, Shuzhou Yuan, Ercong Nie, Xufeng Duan, Qibo Xue, Zeping Yu, Chenming Shang, Xiao Liang, Jing Xiong, Hui Shen, Chaofan Tao, Zhengwu Liu, Senjie Jin, Zhiheng Xi, Dongdong Zhang, Sophia Ananiadou, Tao Gui, Ruobing Xie, Hayden Kwok-Hay So, Hinrich Schütze, Xuanjing Huang, Qi Zhang, Ngai Wong
+
+
+ BACH-V: Bridging Abstract and Concrete Human-Values in Large Language Models
+ https://arxiv.org/abs/2601.14007
+ arXiv:2601.14007v1 Announce Type: new
+Abstract: Do large language models (LLMs) genuinely understand abstract concepts, or merely manipulate them as statistical patterns? We introduce an abstraction-grounding framework that decomposes conceptual understanding into three capacities: interpretation of abstract concepts (Abstract-Abstract, A-A), grounding of abstractions in concrete events (Abstract-Concrete, A-C), and application of abstract principles to regulate concrete decisions (Concrete-Concrete, C-C). Using human values as a testbed - given their semantic richness and centrality to alignment - we employ probing (detecting value traces in internal activations) and steering (modifying representations to shift behavior). Across six open-source LLMs and ten value dimensions, probing shows that diagnostic probes trained solely on abstract value descriptions reliably detect the same values in concrete event narratives and decision reasoning, demonstrating cross-level transfer. Steering reveals an asymmetry: intervening on value representations causally shifts concrete judgments and decisions (A-C, C-C), yet leaves abstract interpretations unchanged (A-A), suggesting that encoded abstract values function as stable anchors rather than malleable activations. These findings indicate LLMs maintain structured value representations that bridge abstraction and action, providing a mechanistic and operational foundation for building value-driven autonomous AI systems with more transparent, generalizable alignment and control.
+ oai:arXiv.org:2601.14007v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Junyu Zhang, Yipeng Kang, Jiong Guo, Jiayu Zhan, Junqi Wang
+
+
+ MANATEE: A DevOps Platform for xApp Lifecycle Management and Testing in Open RAN
+ https://arxiv.org/abs/2601.14009
+ arXiv:2601.14009v1 Announce Type: new
+Abstract: The shift to disaggregated 5G architectures introduces unprecedented flexibility but also significant complexity in Beyond 5G Radio Access Networks (RANs). Open RAN enables programmability through xApps, yet deploying and validating these applications is critical given the nature of the systems they aim to control. Current Open RAN ecosystems lack robust lifecycle management of xApps that enable automated testing, seamless migration, and production-grade observability, resulting in slow, error-prone xApp delivery. To address these issues, DevOps practices can streamline the xApp lifecycle by integrating Continuous Integration/Continuous Deployment (CI/CD) pipelines with advanced traffic management and monitoring, such as leveraging service mesh technologies to enable progressive deployment strategies (e.g., canary releases and A/B testing) to ensure fine-grained observability and resilience. The solution presented in this article, MANATEE (Mesh Architecture for Radio Access Network Automation and TEsting Ecosystems), is the first platform that combines these principles to simplify xApp delivery into production, accelerate innovation, and guarantee performance across heterogeneous O-RAN environments. We prototyped MANATEE on a Kubernetes cluster integrated with the O-RAN Software Community Near-Real Time RAN Intelligent Controller (RIC), as well as with service mesh technologies, to facilitate testing of xApps across simulated, emulated, and real testbed environments. Our experimental results demonstrate that service mesh integration introduces minimal overhead (below 1 ms latency), while enabling reliable canary deployments with fine-grained traffic control and conflict-free A/B testing through circuit-breaking mechanisms.
+ oai:arXiv.org:2601.14009v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sofia Montebugnoli, Leonardo Bonati, Andrea Sabbioni, Luca Foschini, Paolo Bellavista, Salvatore D'Oro, Michele Polese, Tommaso Melodia
+
+
+ Numerical solution of Smoluchowski coagulation equation combined with Ostwald ripening
+ https://arxiv.org/abs/2601.14011
+ arXiv:2601.14011v1 Announce Type: new
+Abstract: The processes of simultaneous coagulation and Ostwald ripening of particles in the concluding stage of phase transformation are considered. We solve the integro-differential system of Smoluchowski-type kinetic and mass balance equations using a computationally efficient numerical algorithm based on low-rank matrices. We compare our numerical solutions for different initial particle-volume distributions with the universal distribution function for combined coagulation and Ostwald ripening. Our calculations confirm the tendency of a particulate ensemble to the universal particle-volume distribution to be approached asymptotically after a sufficiently long time, no matter what the initial particle-volume distribution might be.
+ oai:arXiv.org:2601.14011v1
+ math.NA
+ cond-mat.soft
+ cond-mat.stat-mech
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Robert T. Zaks, Sergey A. Matveev, Margarita A. Nikishina, Dmitri V. Alexandrov
+
+
+ BallotRank: A Condorcet Completion Method for Graphs
+ https://arxiv.org/abs/2601.14015
+ arXiv:2601.14015v1 Announce Type: new
+Abstract: We introduce BallotRank, a ranked preference aggregation method derived from a modified PageRank algorithm. It is a Condorcet-consistent method without damping, and empirical examination of nearly 2,000 ranked choice elections and over 20,000 internet polls confirms that BallotRank always identifies the Condorcet winner at conventional values of the damping parameter. We also prove that the method satisfies many of the same social choice criteria as other well-known Condorcet completion methods, but it has the advantage of being a natural social welfare function that provides a full ranking of the candidates.
+ oai:arXiv.org:2601.14015v1
+ cs.GT
+ econ.GN
+ q-fin.EC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Ismar Volic, Jason Douglas Todd
+
+
+ A Security Framework for Chemical Functions
+ https://arxiv.org/abs/2601.14019
+ arXiv:2601.14019v1 Announce Type: new
+Abstract: In this paper, we introduce chemical functions, a unified framework that models chemical systems as noisy challenge--response primitives, and formalize the associated chemical function infrastructure. Building on the theory of physical functions, we rigorously define robustness, unclonability, and unpredictability for chemical functions in both finite and asymptotic regimes, and specify security games that capture the adversary's power and the security goals. We instantiate the framework with two existing DNA-based constructions (operable random DNA and Genomic Sequence Encryption) and derive quantitative bounds for robustness, unclonability, and unpredictability. Our analysis develops maximum-likelihood verification rules under sequencing noise and partial-edit models, and provides high-precision estimates based on binomial distributions to guide parameter selection. The framework, definitions, and analyses yield a reproducible methodology for designing chemically unclonable authentication mechanisms. We demonstrate applications to in-product authentication and to shared key generation using standard extraction techniques.
+ oai:arXiv.org:2601.14019v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Frederik Walter, Hrishi Narayanan, Jessica Bariffi, Anne Lüscher, Rawad Bitar, Robert Grass, Antonia Wachter-Zeh, Zohar Yakhini
+
+
+ OAMAC: Origin-Aware Mandatory Access Control for Practical Post-Compromise Attack Surface Reduction
+ https://arxiv.org/abs/2601.14021
+ arXiv:2601.14021v1 Announce Type: new
+Abstract: Modern operating systems provide powerful mandatory access control mechanisms, yet they largely reason about who executes code rather than how execution originates. As a result, processes launched remotely, locally, or by background services are often treated equivalently once privileges are obtained, complicating security reasoning and enabling post-compromise abuse of sensitive system interfaces. We introduce origin-aware mandatory access control (OAMAC), a kernel-level enforcement model that treats execution origin -- such as physical user presence, remote access, or service execution -- as a first-class security attribute. OAMAC mediates access to security-critical subsystems based on execution provenance rather than identity alone, enabling centralized governance over multiple attack surfaces while significantly reducing policy complexity. We present a deployable prototype implemented entirely using the Linux eBPF LSM framework, requiring no kernel modifications. OAMAC classifies execution origin using kernel-visible metadata, propagates origin across process creation, and enforces origin-aware policies on both sensitive filesystem interfaces and the kernel BPF control plane. Policies are maintained in kernel-resident eBPF maps and can be reconfigured at runtime via a minimal userspace tool. Our evaluation demonstrates that OAMAC effectively restricts common post-compromise actions available to remote attackers while preserving normal local administration and system stability. We argue that execution origin represents a missing abstraction in contemporary operating system security models, and that elevating it to a first-class concept enables practical attack surface reduction without requiring subsystem-specific expertise or heavyweight security frameworks.
+ oai:arXiv.org:2601.14021v1
+ cs.CR
+ cs.OS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Omer Abdelmajeed Idris Mohammed, Ilhami M. Orak
+
+
+ Credible CO2 Comparisons: A Machine Learning Approach to Vehicle Powertrain Assessment
+ https://arxiv.org/abs/2601.14022
+ arXiv:2601.14022v1 Announce Type: new
+Abstract: Decarbonizing road transport requires consistent and transparent methods for comparing CO2 emissions across vehicle technologies. This paper proposes a machine learning-based framework for like-for-like operational assessment of internal combustion engine vehicles (ICEVs) and electric vehicles (EVs) under identical, real-world driving conditions. The approach isolates technology-specific effects by holding the observed speed profile and environmental context fixed, enabling direct comparison of powertrain performance. Recurrent neural network models are trained independently for each domain to learn the mapping from contextual driving variables (speed, acceleration, temperature) to internal actuation variables (torque, throttle) and instantaneous CO2-equivalent emission rates. This structure allows the construction of counterfactual scenarios that answer: What emissions would an EV have generated if it had followed the same driving profile as an ICEV? By aligning both vehicle types on a unified instantaneous emissions metric, the framework enables fair and reproducible evaluation of powertrain technologies. It offers a scalable foundation for credible, data-driven assessments of vehicle carbon performance under real-world operating conditions.
+ oai:arXiv.org:2601.14022v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rodrigo Pereira David, Luciano Araujo Dourado Filho, Daniel Marques da Silva, João Alfredo Cal-Braz
+
+
+ Universal Approximation Theorem for Input-Connected Multilayer Perceptrons
+ https://arxiv.org/abs/2601.14026
+ arXiv:2601.14026v1 Announce Type: new
+Abstract: We introduce the Input-Connected Multilayer Perceptron (IC-MLP), a feedforward neural network architecture in which each hidden neuron receives, in addition to the outputs of the preceding layer, a direct affine connection from the raw input. We first study this architecture in the univariate setting and give an explicit and systematic description of IC-MLPs with an arbitrary finite number of hidden layers, including iterated formulas for the network functions. In this setting, we prove a universal approximation theorem showing that deep IC-MLPs can approximate any continuous function on a closed interval of the real line if and only if the activation function is nonlinear. We then extend the analysis to vector-valued inputs and establish a corresponding universal approximation theorem for continuous functions on compact subsets of $\mathbb{R}^n$.
+ oai:arXiv.org:2601.14026v1
+ cs.LG
+ cs.NE
+ math.FA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Vugar Ismailov
+
+
+ Numina-Lean-Agent: An Open and General Agentic Reasoning System for Formal Mathematics
+ https://arxiv.org/abs/2601.14027
+ arXiv:2601.14027v1 Announce Type: new
+Abstract: Agentic systems have recently become the dominant paradigm for formal theorem proving, achieving strong performance by coordinating multiple models and tools. However, existing approaches often rely on task-specific pipelines and trained formal provers, limiting their flexibility and reproducibility. In this paper, we propose the paradigm that directly uses a general coding agent as a formal math reasoner. This paradigm is motivated by (1) A general coding agent provides a natural interface for diverse reasoning tasks beyond proving, (2) Performance can be improved by simply replacing the underlying base model, without training, and (3) MCP enables flexible extension and autonomous calling of specialized tools, avoiding complex design. Based on this paradigm, we introduce Numina-Lean-Agent, which combines Claude Code with Numina-Lean-MCP to enable autonomous interaction with Lean, retrieval of relevant theorems, informal proving and auxiliary reasoning tools. Using Claude Opus 4.5 as the base model, Numina-Lean-Agent solves all problems in Putnam 2025 (12 / 12), matching the best closed-source system. Beyond benchmark evaluation, we further demonstrate its generality by interacting with mathematicians to successfully formalize the Brascamp-Lieb theorem. We release Numina-Lean-Agent and all solutions at https://github.com/project-numina/numina-lean-agent.
+ oai:arXiv.org:2601.14027v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Junqi Liu, Zihao Zhou, Zekai Zhu, Marco Dos Santos, Weikun He, Jiawei Liu, Ran Wang, Yunzhou Xie, Junqiao Zhao, Qiufeng Wang, Lihong Zhi, Jia Li, Wenda Li
+
+
+ Likelihood-Separable Diffusion Inference for Multi-Image MRI Super-Resolution
+ https://arxiv.org/abs/2601.14030
+ arXiv:2601.14030v1 Announce Type: new
+Abstract: Diffusion models are the current state-of-the-art for solving inverse problems in imaging. Their impressive generative capability allows them to approximate sampling from a prior distribution, which alongside a known likelihood function permits posterior sampling without retraining the model. While recent methods have made strides in advancing the accuracy of posterior sampling, the majority focuses on single-image inverse problems. However, for modalities such as magnetic resonance imaging (MRI), it is common to acquire multiple complementary measurements, each low-resolution along a different axis. In this work, we generalize common diffusion-based inverse single-image problem solvers for multi-image super-resolution (MISR) MRI. We show that the DPS likelihood correction allows an exactly-separable gradient decomposition across independently acquired measurements, enabling MISR without constructing a joint operator, modifying the diffusion model, or increasing network function evaluations. We derive MISR versions of DPS, DMAP, DPPS, and diffusion-based PnP/ADMM, and demonstrate substantial gains over SISR across $4\times/8\times/16\times$ anisotropic degradations. Our results achieve state-of-the-art super-resolution of anisotropic MRI volumes and, critically, enable reconstruction of near-isotropic anatomy from routine 2D multi-slice acquisitions, which are otherwise highly degraded in orthogonal views.
+ oai:arXiv.org:2601.14030v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Samuel W. Remedios, Zhangxing Bian, Shuwen Wei, Aaron Carass, Jerry L. Prince, Blake E. Dewey
+
+
+ RM-Distiller: Exploiting Generative LLM for Reward Model Distillation
+ https://arxiv.org/abs/2601.14032
+ arXiv:2601.14032v1 Announce Type: new
+Abstract: Reward models (RMs) play a pivotal role in aligning large language models (LLMs) with human preferences. Due to the difficulty of obtaining high-quality human preference annotations, distilling preferences from generative LLMs has emerged as a standard practice. However, existing approaches predominantly treat teacher models as simple binary annotators, failing to fully exploit the rich knowledge and capabilities for RM distillation. To address this, we propose RM-Distiller, a framework designed to systematically exploit the multifaceted capabilities of teacher LLMs: (1) Refinement capability, which synthesizes highly correlated response pairs to create fine-grained and contrastive signals. (2) Scoring capability, which guides the RM in capturing precise preference strength via a margin-aware optimization objective. (3) Generation capability, which incorporates the teacher's generative distribution to regularize the RM to preserve its fundamental linguistic knowledge. Extensive experiments demonstrate that RM-Distiller significantly outperforms traditional distillation methods both on RM benchmarks and reinforcement learning-based alignment, proving that exploiting multifaceted teacher capabilities is critical for effective reward modeling. To the best of our knowledge, this is the first systematic research on RM distillation from generative LLMs.
+ oai:arXiv.org:2601.14032v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hongli Zhou, Hui Huang, Wei Liu, Chenglong Wang, Xingyuan Bu, Lvyuan Han, Fuhai Song, Muyun Yang, Wenhao Jiang, Hailong Cao, Tiejun Zhao
+
+
+ PAC-Private Responses with Adversarial Composition
+ https://arxiv.org/abs/2601.14033
+ arXiv:2601.14033v1 Announce Type: new
+Abstract: Modern machine learning models are increasingly deployed behind APIs. This renders standard weight-privatization methods (e.g. DP-SGD) unnecessarily noisy at the cost of utility. While model weights may vary significantly across training datasets, model responses to specific inputs are much lower dimensional and more stable. This motivates enforcing privacy guarantees directly on model outputs.
+ We approach this under PAC privacy, which provides instance-based privacy guarantees for arbitrary black-box functions by controlling mutual information (MI). Importantly, PAC privacy explicitly rewards output stability with reduced noise levels. However, a central challenge remains: response privacy requires composing a large number of adaptively chosen, potentially adversarial queries issued by untrusted users, where existing composition results on PAC privacy are inadequate. We introduce a new algorithm that achieves adversarial composition via adaptive noise calibration and prove that mutual information guarantees accumulate linearly under adaptive and adversarial querying.
+ Experiments across tabular, vision, and NLP tasks show that our method achieves high utility at extremely small per-query privacy budgets. On CIFAR-10, we achieve 87.79% accuracy with a per-step MI budget of $2^{-32}$. This enables serving one million queries while provably bounding membership inference attack (MIA) success rates to 51.08% -- the same guarantee of $(0.04, 10^{-5})$-DP. Furthermore, we show that private responses can be used to label public data to distill a publishable privacy-preserving model; using an ImageNet subset as a public dataset, our model distilled from 210,000 responses achieves 91.86% accuracy on CIFAR-10 with MIA success upper-bounded by 50.49%, which is comparable to $(0.02,10^{-5})$-DP.
+ oai:arXiv.org:2601.14033v1
+ cs.LG
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaochen Zhu, Mayuri Sridhar, Srinivas Devadas
+
+
+ Analyzing the Availability of E-Mail Addresses for PyPI Libraries
+ https://arxiv.org/abs/2601.14034
+ arXiv:2601.14034v1 Announce Type: new
+Abstract: Open Source Software (OSS) libraries form the backbone of modern software systems, yet their long-term sustainability often depends on maintainers being reachable for support, coordination, and security reporting. In this paper, we empirically analyze the availability of contact information - specifically e-mail addresses - across 686,034 Python libraries on the Python Package Index (PyPI) and their associated GitHub repositories. We examine how and where maintainers provide this information, assess its validity, and explore coverage across individual libraries and their dependency chains. Our findings show that 81.6% of libraries include at least one valid e-mail address, with PyPI serving as the primary source (79.5%). When analyzing dependency chains, we observe that up to 97.8% of direct and 97.7% of transitive dependencies provide valid contact information. At the same time, we identify over 698,000 invalid entries, primarily due to missing fields. These results demonstrate strong maintainer reachability across the ecosystem, while highlighting opportunities for improvement - such as offering clearer guidance to maintainers during the packaging process and introducing opt-in validation mechanisms for existing e-mail addresses.
+ oai:arXiv.org:2601.14034v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alexandros Tsakpinis, Alexander Pretschner
+
+
+ Human detectors are surprisingly powerful reward models
+ https://arxiv.org/abs/2601.14037
+ arXiv:2601.14037v1 Announce Type: new
+Abstract: Video generation models have recently achieved impressive visual fidelity and temporal coherence. Yet, they continue to struggle with complex, non-rigid motions, especially when synthesizing humans performing dynamic actions such as sports, dance, etc. Generated videos often exhibit missing or extra limbs, distorted poses, or physically implausible actions. In this work, we propose a remarkably simple reward model, HuDA, to quantify and improve the human motion in generated videos. HuDA integrates human detection confidence for appearance quality, and a temporal prompt alignment score to capture motion realism. We show that this simple reward function, which leverages off-the-shelf models without any additional training, outperforms specialized models finetuned with manually annotated data. Using HuDA for Group Reward Policy Optimization (GRPO) post-training of video models, we significantly enhance video generation, especially when generating complex human motions, outperforming state-of-the-art models like Wan 2.1, with a win rate of 73%. Finally, we demonstrate that HuDA improves generation quality beyond just humans, for instance, significantly improving generation of animal videos and human-object interactions.
+ oai:arXiv.org:2601.14037v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kumar Ashutosh, XuDong Wang, Xi Yin, Kristen Grauman, Adam Polyak, Ishan Misra, Rohit Girdhar
+
+
+ Correcting and Quantifying Systematic Errors in 3D Box Annotations for Autonomous Driving
+ https://arxiv.org/abs/2601.14038
+ arXiv:2601.14038v1 Announce Type: new
+Abstract: Accurate ground truth annotations are critical to supervised learning and evaluating the performance of autonomous vehicle systems. These vehicles are typically equipped with active sensors, such as LiDAR, which scan the environment in predefined patterns. 3D box annotation based on data from such sensors is challenging in dynamic scenarios, where objects are observed at different timestamps, hence different positions. Without proper handling of this phenomenon, systematic errors are prone to being introduced in the box annotations. Our work is the first to discover such annotation errors in widely used, publicly available datasets. Through our novel offline estimation method, we correct the annotations so that they follow physically feasible trajectories and achieve spatial and temporal consistency with the sensor data. For the first time, we define metrics for this problem, and we evaluate our method on the Argoverse 2, MAN TruckScenes, and our proprietary datasets. Our approach increases the quality of box annotations by more than 17% in these datasets. Furthermore, we quantify the annotation errors in them and find that the original annotations are misplaced by up to 2.5 m, with highly dynamic objects being the most affected. Finally, we test the impact of the errors in benchmarking and find that the impact is larger than the improvements that state-of-the-art methods typically achieve over the previous state of the art, showing that accurate annotations are essential for correct interpretation of performance. Our code is available at https://github.com/alexandre-justo-miro/annotation-correction-3D-boxes.
+ oai:arXiv.org:2601.14038v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Alexandre Justo Miro (Traton Group R&D, M\"alardalen University), Ludvig af Klinteberg (M\"alardalen University), Bogdan Timus (Traton Group R&D), Aron Asefaw (KTH Royal Institute of Technology), Ajinkya Khoche (Traton Group R&D, KTH Royal Institute of Technology), Thomas Gustafsson (Traton Group R&D), Sina Sharif Mansouri (Traton Group R&D), Masoud Daneshtalab (M\"alardalen University)
+
+
+ Generalizing Abstention for Noise-Robust Learning in Medical Image Segmentation
+ https://arxiv.org/abs/2601.14039
+ arXiv:2601.14039v1 Announce Type: new
+Abstract: Label noise is a critical problem in medical image segmentation, often arising from the inherent difficulty of manual annotation. Models trained on noisy data are prone to overfitting, which degrades their generalization performance. While a number of methods and strategies have been proposed to mitigate noisy labels in the segmentation domain, this area remains largely under-explored. The abstention mechanism has proven effective in classification tasks by enhancing the capabilities of Cross Entropy, yet its potential in segmentation remains unverified. In this paper, we address this gap by introducing a universal and modular abstention framework capable of enhancing the noise-robustness of a diverse range of loss functions. Our framework improves upon prior work with two key components: an informed regularization term to guide abstention behaviour, and a more flexible power-law-based auto-tuning algorithm for the abstention penalty. We demonstrate the framework's versatility by systematically integrating it with three distinct loss functions to create three novel, noise-robust variants: GAC, SAC, and ADS. Experiments on the CaDIS and DSAD medical datasets show our methods consistently and significantly outperform their non-abstaining baselines, especially under high noise levels. This work establishes that enabling models to selectively ignore corrupted samples is a powerful and generalizable strategy for building more reliable segmentation models. Our code is publicly available at https://github.com/wemous/abstention-for-segmentation.
+ oai:arXiv.org:2601.14039v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Wesam Moustafa, Hossam Elsafty, Helen Schneider, Lorenz Sparrenberg, Rafet Sifa
+
+
+ Top 10 Open Challenges Steering the Future of Diffusion Language Model and Its Variants
+ https://arxiv.org/abs/2601.14041
+ arXiv:2601.14041v1 Announce Type: new
+Abstract: The paradigm of Large Language Models (LLMs) is currently defined by auto-regressive (AR) architectures, which generate text through a sequential ``brick-by-brick'' process. Despite their success, AR models are inherently constrained by a causal bottleneck that limits global structural foresight and iterative refinement. Diffusion Language Models (DLMs) offer a transformative alternative, conceptualizing text generation as a holistic, bidirectional denoising process akin to a sculptor refining a masterpiece. However, the potential of DLMs remains largely untapped as they are frequently confined within AR-legacy infrastructures and optimization frameworks. In this Perspective, we identify ten fundamental challenges ranging from architectural inertia and gradient sparsity to the limitations of linear reasoning that prevent DLMs from reaching their ``GPT-4 moment''. We propose a strategic roadmap organized into four pillars: foundational infrastructure, algorithmic optimization, cognitive reasoning, and unified multimodal intelligence. By shifting toward a diffusion-native ecosystem characterized by multi-scale tokenization, active remasking, and latent thinking, we can move beyond the constraints of the causal horizon. We argue that this transition is essential for developing next-generation AI capable of complex structural reasoning, dynamic self-correction, and seamless multimodal integration.
+ oai:arXiv.org:2601.14041v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yunhe Wang, Kai Han, Huiling Zhen, Yuchuan Tian, Hanting Chen, Yongbing Huang, Yufei Cui, Yingte Shu, Shan Gao, Ismail Elezi, Roy Vaughan Miles, Songcen Xu, Feng Wen, Chao Xu, Sinan Zeng, Dacheng Tao
+
+
+ Federated Balanced Learning
+ https://arxiv.org/abs/2601.14042
+ arXiv:2601.14042v1 Announce Type: new
+Abstract: Federated learning is a paradigm of joint learning in which clients collaborate by sharing model parameters instead of data. However, in the non-iid setting, the global model experiences client drift, which can seriously affect the final performance of the model. Previous methods tend to correct the global model that has already deviated based on the loss function or gradient, overlooking the impact of the client samples. In this paper, we rethink the role of the client side and propose Federated Balanced Learning, i.e., FBL, to prevent this issue from the beginning through sample balance on the client side. Technically, FBL allows unbalanced data on the client side to achieve sample balance through knowledge filling and knowledge sampling using edge-side generation models, under the limitation of a fixed number of data samples on clients. Furthermore, we design a Knowledge Alignment Strategy to bridge the gap between synthetic and real data, and a Knowledge Drop Strategy to regularize our method. Meanwhile, we scale our method to real and complex scenarios, allowing different clients to adopt various methods, and extend our framework to further improve performance. Numerous experiments show that our method outperforms state-of-the-art baselines. The code is released upon acceptance.
+ oai:arXiv.org:2601.14042v1
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Jiaze Li, Haoran Xu, Wanyi Wu, Changwei Wang, Shuaiguang Li, Jianzhong Ju, Zhenbo Luo, Jian Luan, Youyang Qu, Longxiang Gao, Xudong Yang, Lumin Xing
+
+
+ Weather-R1: Logically Consistent Reinforcement Fine-Tuning for Multimodal Reasoning in Meteorology
+ https://arxiv.org/abs/2601.14044
+ arXiv:2601.14044v1 Announce Type: new
+Abstract: While Vision Language Models (VLMs) show advancing reasoning capabilities, their application in meteorology is constrained by a domain gap and a reasoning faithfulness gap. Specifically, mainstream Reinforcement Fine-Tuning (RFT) can induce Self-Contradictory Reasoning (Self-Contra), where the model's reasoning contradicts its final answer, which is unacceptable in such a high-stakes domain. To address these challenges, we construct WeatherQA, a novel multimodal reasoning benchmark in meteorology. We also propose Logically Consistent Reinforcement Fine-Tuning (LoCo-RFT), which resolves Self-Contra by introducing a logical consistency reward. Furthermore, we introduce Weather-R1, the first reasoning VLM with logical faithfulness in meteorology, to the best of our knowledge. Experiments demonstrate that Weather-R1 improves performance on WeatherQA by 9.8 percentage points over the baseline, outperforming Supervised Fine-Tuning and RFT, and even surpassing the original Qwen2.5-VL-32B. These results highlight the effectiveness of our LoCo-RFT and the superiority of Weather-R1. Our benchmark and code are available at https://github.com/Marcowky/Weather-R1.
+ oai:arXiv.org:2601.14044v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kaiyu Wu, Pucheng Han, Hualong Zhang, Naigeng Wu, Keze Wang
+
+
+ PRiSM: Benchmarking Phone Realization in Speech Models
+ https://arxiv.org/abs/2601.14046
+ arXiv:2601.14046v1 Announce Type: new
+Abstract: Phone recognition (PR) serves as the atomic interface for language-agnostic modeling for cross-lingual speech processing and phonetic analysis. Despite prolonged efforts in developing PR systems, current evaluations only measure surface-level transcription accuracy. We introduce PRiSM, the first open-source benchmark designed to expose blind spots in phonetic perception through intrinsic and extrinsic evaluation of PR systems. PRiSM standardizes transcription-based evaluation and assesses downstream utility in clinical, educational, and multilingual settings with transcription and representation probes. We find that diverse language exposure during training is key to PR performance, encoder-CTC models are the most stable, and specialized PR models still outperform Large Audio Language Models. PRiSM releases code, recipes, and datasets to move the field toward multilingual speech models with robust phonetic ability: https://github.com/changelinglab/prism.
+ oai:arXiv.org:2601.14046v1
+ cs.CL
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shikhar Bharadwaj, Chin-Jou Li, Yoonjae Kim, Kwanghee Choi, Eunjung Yeo, Ryan Soh-Eun Shim, Hanyu Zhou, Brendon Boldt, Karen Rosero Jacome, Kalvin Chang, Darsh Agrawal, Keer Xu, Chao-Han Huck Yang, Jian Zhu, Shinji Watanabe, David R. Mortensen
+
+
+ Collective intelligence in science: direct elicitation of diverse information from experts with unknown information structure
+ https://arxiv.org/abs/2601.14047
+ arXiv:2601.14047v1 Announce Type: new
+Abstract: Suppose we need a deep collective analysis of an open scientific problem: there is a complex scientific hypothesis and a large online group of mutually unrelated experts with relevant private information of a diverse and unpredictable nature. This information may be results of experts' individual experiments, original reasoning of some of them, results of AI systems they use, etc. We propose a simple mechanism based on a self-resolving play-money prediction market entangled with a chat. We show that such a system can easily be brought to an equilibrium where participants directly share their private information on the hypothesis through the chat and trade as if the market were resolved in accordance with the truth of the hypothesis. This approach will lead to efficient aggregation of relevant information in a completely interpretable form even if the ground truth cannot be established and experts initially know nothing about each other and cannot perform complex Bayesian calculations. Finally, by rewarding the experts with some real assets proportionally to the play money they end up with, we can get an innovative way to fund large-scale collaborative studies of any type.
+ oai:arXiv.org:2601.14047v1
+ cs.GT
+ cs.AI
+ cs.MA
+ cs.SI
+ econ.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alexey V. Osipov, Nikolay N. Osipov
+
+
+ Understanding Multilingualism in Mixture-of-Experts LLMs: Routing Mechanism, Expert Specialization, and Layerwise Steering
+ https://arxiv.org/abs/2601.14050
+ arXiv:2601.14050v1 Announce Type: new
+Abstract: Mixture-of-Experts (MoE) architectures have shown strong multilingual capabilities, yet the internal mechanisms underlying performance gains and cross-language differences remain insufficiently understood. In this work, we conduct a systematic analysis of MoE models, examining routing behavior and expert specialization across languages and network depth. Our analysis reveals that multilingual processing in MoE models is highly structured: routing aligns with linguistic families, expert utilization follows a clear layerwise pattern, and high-resource languages rely on shared experts while low-resource languages depend more on language-exclusive experts despite weaker performance. Layerwise interventions further show that early and late MoE layers support language-specific processing, whereas middle layers serve as language-agnostic capacity hubs. Building on these insights, we propose a routing-guided steering method that adaptively guides routing behavior in middle layers toward shared experts associated with dominant languages at inference time, leading to consistent multilingual performance improvements, particularly for linguistically related language pairs. Our code is available at https://github.com/conctsai/Multilingualism-in-Mixture-of-Experts-LLMs.
+ oai:arXiv.org:2601.14050v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yuxin Chen, Zhengzhou Cai, Xiangtian Ji, Weixiang Zhao, An Zhang, Xiang Wang, Tat-Seng Chua
+
+
+ Kakugo: Distillation of Low-Resource Languages into Small Language Models
+ https://arxiv.org/abs/2601.14051
+ arXiv:2601.14051v1 Announce Type: new
+Abstract: We present Kakugo, a novel and cost-effective pipeline designed to train general-purpose Small Language Models (SLMs) for low-resource languages using only the language name as input. By using a large teacher model to generate synthetic prompts and translate instruction datasets, we produced training data and SLMs for 54 low-resource languages. Evaluations across a diverse set of general natural language processing tasks, including translation, classification, and question answering, demonstrate that our pipeline consistently improves performance over base models. With a total generation and training cost of under $50 per language, Kakugo offers an accessible method for communities to develop language-specific AI.
+ oai:arXiv.org:2601.14051v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Peter Devine, Mardhiyah Sanni, Farid Adilazuarda, Julieta Gil Loizaga, Barry Haddow
+
+
+ Vision Also You Need: Navigating Out-of-Distribution Detection with Multimodal Large Language Model
+ https://arxiv.org/abs/2601.14052
+ arXiv:2601.14052v1 Announce Type: new
+Abstract: Out-of-Distribution (OOD) detection is a critical task that has garnered significant attention. The emergence of CLIP has spurred extensive research into zero-shot OOD detection, often employing a training-free approach. Current methods leverage expert knowledge from large language models (LLMs) to identify potential outliers. However, these approaches tend to over-rely on knowledge in the text space, neglecting the inherent challenges involved in detecting out-of-distribution samples in the image space. In this paper, we propose a novel pipeline, MM-OOD, which leverages the multimodal reasoning capabilities of MLLMs and their ability to conduct multi-round conversations for enhanced outlier detection. Our method is designed to improve performance in both near OOD and far OOD tasks. Specifically, (1) for near OOD tasks, we directly feed ID images and corresponding text prompts into MLLMs to identify potential outliers; and (2) for far OOD tasks, we introduce the sketch-generate-elaborate framework: first, we sketch outlier exposure using text prompts, then generate corresponding visual OOD samples, and finally elaborate by using multimodal prompts. Experiments demonstrate that our method achieves significant improvements on widely used multimodal datasets such as Food-101, while also validating its scalability on ImageNet-1K.
+ oai:arXiv.org:2601.14052v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Haoran Xu, Yanlin Liu, Zizhao Tong, Jiaze Li, Kexue Fu, Yuyang Zhang, Longxiang Gao, Shuaiguang Li, Xingyu Li, Yanran Xu, Changwei Wang
+
+
+ LLMOrbit: A Circular Taxonomy of Large Language Models - From Scaling Walls to Agentic AI Systems
+ https://arxiv.org/abs/2601.14053
+ arXiv:2601.14053v1 Announce Type: new
+Abstract: The field of artificial intelligence has undergone a revolution from foundational Transformer architectures to reasoning-capable systems approaching human-level performance. We present LLMOrbit, a comprehensive circular taxonomy navigating the landscape of large language models spanning 2019-2025. This survey examines over 50 models across 15 organizations through eight interconnected orbital dimensions, documenting architectural innovations, training methodologies, and efficiency patterns defining modern LLMs, generative AI, and agentic systems. We identify three critical crises: (1) data scarcity (9-27T tokens depleted by 2026-2028), (2) exponential cost growth ($3M to $300M+ in 5 years), and (3) unsustainable energy consumption (22x increase), establishing the scaling wall limiting brute-force approaches. Our analysis reveals six paradigms breaking this wall: (1) test-time compute (o1, DeepSeek-R1 achieve GPT-4 performance with 10x inference compute), (2) quantization (4-8x compression), (3) distributed edge computing (10x cost reduction), (4) model merging, (5) efficient training (ORPO reduces memory 50%), and (6) small specialized models (Phi-4 14B matches larger models). Three paradigm shifts emerge: (1) post-training gains (RLHF, GRPO, pure RL contribute substantially, DeepSeek-R1 achieving 79.8% MATH), (2) efficiency revolution (MoE routing 18x efficiency, Multi-head Latent Attention 8x KV cache compression enables GPT-4-level performance at <$0.30/M tokens), and (3) democratization (open-source Llama 3 88.6% MMLU surpasses GPT-4 86.4%). We provide insights into techniques (RLHF, PPO, DPO, GRPO, ORPO), trace evolution from passive generation to tool-using agents (ReAct, RAG, multi-agent systems), and analyze post-training innovations.
+ oai:arXiv.org:2601.14053v1
+ cs.LG
+ cs.AI
+ cs.CV
+ cs.MA
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Badri N. Patro, Vijay S. Agneeswaran
+
+
+ SecureSplit: Mitigating Backdoor Attacks in Split Learning
+ https://arxiv.org/abs/2601.14054
+ arXiv:2601.14054v1 Announce Type: new
+Abstract: Split Learning (SL) offers a framework for collaborative model training that respects data privacy by allowing participants to share the same dataset while maintaining distinct feature sets. However, SL is susceptible to backdoor attacks, in which malicious clients subtly alter their embeddings to insert hidden triggers that compromise the final trained model. To address this vulnerability, we introduce SecureSplit, a defense mechanism tailored to SL. SecureSplit applies a dimensionality transformation strategy to accentuate subtle differences between benign and poisoned embeddings, facilitating their separation. With this enhanced distinction, we develop an adaptive filtering approach that uses a majority-based voting scheme to remove contaminated embeddings while preserving clean ones. Rigorous experiments across four datasets (CIFAR-10, MNIST, CINIC-10, and ImageNette), five backdoor attack scenarios, and seven alternative defenses confirm the effectiveness of SecureSplit under various challenging conditions.
+ oai:arXiv.org:2601.14054v1
+ cs.CR
+ cs.DC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhihao Dou, Dongfei Cui, Weida Wang, Anjun Gao, Yueyang Quan, Mengyao Ma, Viet Vo, Guangdong Bai, Zhuqing Liu, Minghong Fang
+
+
+ Decoder-Free Supervoxel GNN for Accurate Brain-Tumor Localization in Multi-Modal MRI
+ https://arxiv.org/abs/2601.14055
+ arXiv:2601.14055v1 Announce Type: new
+Abstract: Modern vision backbones for 3D medical imaging typically process dense voxel grids through parameter-heavy encoder-decoder structures, a design that allocates a significant portion of its parameters to spatial reconstruction rather than feature learning. Our approach introduces SVGFormer, a decoder-free pipeline built upon a content-aware grouping stage that partitions the volume into a semantic graph of supervoxels. Its hierarchical encoder learns rich node representations by combining a patch-level Transformer with a supervoxel-level Graph Attention Network, jointly modeling fine-grained intra-region features and broader inter-regional dependencies. This design concentrates all learnable capacity on feature encoding and provides inherent, dual-scale explainability from the patch to the region level. To validate the framework's flexibility, we trained two specialized models on the BraTS dataset: one for node-level classification and one for tumor proportion regression. Both models achieved strong performance, with the classification model achieving an F1-score of 0.875 and the regression model an MAE of 0.028, confirming the encoder's ability to learn discriminative and localized features. Our results establish that a graph-based, encoder-only paradigm offers an accurate and inherently interpretable alternative for 3D medical image representation.
+ oai:arXiv.org:2601.14055v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Andrea Protani, Marc Molina Van Den Bosch, Lorenzo Giusti, Heloisa Barbosa Da Silva, Paolo Cacace, Albert Sund Aillet, Miguel Angel Gonzalez Ballester, Friedhelm Hummel, Luigi Serio
+
+
+ POCI-Diff: Position Objects Consistently and Interactively with 3D-Layout Guided Diffusion
+ https://arxiv.org/abs/2601.14056
+ arXiv:2601.14056v1 Announce Type: new
+Abstract: We propose a diffusion-based approach for Text-to-Image (T2I) generation with consistent and interactive 3D layout control and editing. While prior methods improve spatial adherence using 2D cues or iterative copy-warp-paste strategies, they often distort object geometry and fail to preserve consistency across edits. To address these limitations, we introduce a framework for Positioning Objects Consistently and Interactively (POCI-Diff), a novel formulation for jointly enforcing 3D geometric constraints and instance-level semantic binding within a unified diffusion process. Our method enables explicit per-object semantic control by binding individual text descriptions to specific 3D bounding boxes through Blended Latent Diffusion, allowing one-shot synthesis of complex multi-object scenes. We further propose a warping-free generative editing pipeline that supports object insertion, removal, and transformation via regeneration rather than pixel deformation. To preserve object identity and consistency across edits, we condition the diffusion process on reference images using IP-Adapter, enabling coherent object appearance throughout interactive 3D editing while maintaining global scene coherence. Experimental results demonstrate that POCI-Diff produces high-quality images consistent with the specified 3D layouts and edits, outperforming state-of-the-art methods in both visual fidelity and layout adherence while eliminating warping-induced geometric artifacts.
+ oai:arXiv.org:2601.14056v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Andrea Rigo, Luca Stornaiuolo, Weijie Wang, Mauro Martino, Bruno Lepri, Nicu Sebe
+
+
+ Verifying Floating-Point Programs in Stainless
+ https://arxiv.org/abs/2601.14059
+ arXiv:2601.14059v1 Announce Type: new
+Abstract: We extend the Stainless deductive verifier with floating-point support, providing the first automated verification support for floating-point numbers for a subset of Scala that includes polymorphism, recursion and higher-order functions. We follow the recent approach in the KeY verifier to axiomatise reasoning about mathematical functions, but go further by supporting all functions from Scala's math API, and by verifying the correctness of the axioms against the actual implementation in Stainless itself. We validate Stainless' floating-point support on a new set of benchmarks sampled from real-world code from GitHub, showing that it can verify specifications about, e.g., ranges of output or absence of special values for most supported functions, or produce counter-examples when the specifications do not hold.
+ oai:arXiv.org:2601.14059v1
+ cs.PL
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Andrea Gilot, Axel Bergstr\"om, Eva Darulova
+
+
+ Fine-Grained Zero-Shot Composed Image Retrieval with Complementary Visual-Semantic Integration
+ https://arxiv.org/abs/2601.14060
+ arXiv:2601.14060v1 Announce Type: new
+Abstract: Zero-shot composed image retrieval (ZS-CIR) is a rapidly growing area with significant practical applications, allowing users to retrieve a target image by providing a reference image and a relative caption describing the desired modifications. Existing ZS-CIR methods often struggle to capture fine-grained changes and integrate visual and semantic information effectively. They primarily rely on either transforming the multimodal query into a single text using image-to-text models or employing large language models for target image description generation, approaches that often fail to capture complementary visual information and complete semantic context. To address these limitations, we propose a novel Fine-Grained Zero-Shot Composed Image Retrieval method with Complementary Visual-Semantic Integration (CVSI). Specifically, CVSI leverages three key components: (1) Visual Information Extraction, which not only extracts global image features but also uses a pre-trained mapping network to convert the image into a pseudo token, combining it with the modification text and the objects most likely to be added. (2) Semantic Information Extraction, which involves using a pre-trained captioning model to generate multiple captions for the reference image, followed by leveraging an LLM to generate the modified captions and the objects most likely to be added. (3) Complementary Information Retrieval, which integrates information extracted from both the query and database images to retrieve the target image, enabling the system to efficiently handle retrieval queries in a variety of situations. Extensive experiments on three public datasets (e.g., CIRR, CIRCO, and FashionIQ) demonstrate that CVSI significantly outperforms existing state-of-the-art methods. Our code is available at https://github.com/yyc6631/CVSI.
+ oai:arXiv.org:2601.14060v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yongcong Ye, Kai Zhang, Yanghai Zhang, Enhong Chen, Longfei Li, Jun Zhou
+
+
+ XCR-Bench: A Multi-Task Benchmark for Evaluating Cultural Reasoning in LLMs
+ https://arxiv.org/abs/2601.14063
+ arXiv:2601.14063v1 Announce Type: new
+Abstract: Cross-cultural competence in large language models (LLMs) requires the ability to identify Culture-Specific Items (CSIs) and to adapt them appropriately across cultural contexts. Progress in evaluating this capability has been constrained by the scarcity of high-quality CSI-annotated corpora with parallel cross-cultural sentence pairs. To address this limitation, we introduce XCR-Bench, a Cross(X)-Cultural Reasoning Benchmark consisting of 4.9k parallel sentences and 1,098 unique CSIs, spanning three distinct reasoning tasks with corresponding evaluation metrics. Our corpus integrates Newmark's CSI framework with Hall's Triad of Culture, enabling systematic analysis of cultural reasoning beyond surface-level artifacts and into semi-visible and invisible cultural elements such as social norms, beliefs, and values. Our findings show that state-of-the-art LLMs exhibit consistent weaknesses in identifying and adapting CSIs related to social etiquette and cultural reference. Additionally, we find evidence that LLMs encode regional and ethno-religious biases even within a single linguistic setting during cultural adaptation. We release our corpus and code to facilitate future research on cross-cultural NLP.
+ oai:arXiv.org:2601.14063v1
+ cs.CL
+ cs.AI
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Mohsinul Kabir, Tasnim Ahmed, Md Mezbaur Rahman, Shaoxiong Ji, Hassan Alhuzali, Sophia Ananiadou
+
+
+ VERIDAH: Solving Enumeration Anomaly Aware Vertebra Labeling across Imaging Sequences
+ https://arxiv.org/abs/2601.14066
+ arXiv:2601.14066v1 Announce Type: new
+Abstract: The human spine commonly consists of seven cervical, twelve thoracic, and five lumbar vertebrae. However, enumeration anomalies may result in individuals having eleven or thirteen thoracic vertebrae and four or six lumbar vertebrae. Although the identification of enumeration anomalies has potential clinical implications for chronic back pain and operation planning, the thoracolumbar junction is often poorly assessed and rarely described in clinical reports. Additionally, even though multiple deep-learning-based vertebra labeling algorithms exist, there is a lack of methods to automatically label enumeration anomalies. Our work closes that gap by introducing "Vertebra Identification with Anomaly Handling" (VERIDAH), a novel vertebra labeling algorithm based on multiple classification heads combined with a weighted vertebra sequence prediction algorithm. We show that our approach surpasses existing models on T2w TSE sagittal (98.30% vs. 94.24% of subjects with all vertebrae correctly labeled, p < 0.001) and CT imaging (99.18% vs. 77.26% of subjects with all vertebrae correctly labeled, p < 0.001) and works in arbitrary field-of-view images. VERIDAH correctly labeled the presence of thoracic enumeration anomalies in 87.80% and 96.30% of T2w and CT images, respectively, and lumbar enumeration anomalies in 94.48% and 97.22% for T2w and CT, respectively. Our code and models are available at: https://github.com/Hendrik-code/spineps.
+ oai:arXiv.org:2601.14066v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hendrik M\"oller, Hanna Schoen, Robert Graf, Matan Atad, Nathan Molinier, Anjany Sekuboyina, Bettina K. Budai, Fabian Bamberg, Steffen Ringhof, Christopher Schlett, Tobias Pischon, Thoralf Niendorf, Josua A. Decker, Marc-Andr\'e Weber, Bjoern Menze, Daniel Rueckert, Jan S. Kirschke
+
+
+ Modular Attractor Acceleration in Infinite-State Games (Full Version)
+ https://arxiv.org/abs/2601.14068
+ arXiv:2601.14068v1 Announce Type: new
+Abstract: Infinite-state games provide a framework for the synthesis of reactive systems with unbounded data domains. Solving such games typically relies on computing symbolic fixpoints, particularly symbolic attractors. However, these computations may not terminate, and while recent acceleration techniques have been proposed to address this issue, they often rely on acceleration arguments of limited expressiveness. In this work, we propose an approach for the modular computation of acceleration arguments. It enables the construction of complex acceleration arguments by composing simpler ones, thereby improving both scalability and flexibility. In addition, we introduce a summarization technique that generalizes discovered acceleration arguments, allowing them to be efficiently reused across multiple contexts. Together, these contributions improve the efficiency of solving infinite-state games in reactive synthesis, as demonstrated by our experimental evaluation.
+ oai:arXiv.org:2601.14068v1
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Philippe Heim, Rayna Dimitrova
+
+
+ Unsupervised Video Class-Incremental Learning via Deep Embedded Clustering Management
+ https://arxiv.org/abs/2601.14069
+ arXiv:2601.14069v1 Announce Type: new
+Abstract: Unsupervised video class incremental learning (uVCIL) represents an important learning paradigm for learning video information without forgetting, and without considering any data labels. Prior approaches have focused on supervised class-incremental learning, relying on using the knowledge of labels and task boundaries, which is costly, requires human annotation, or is simply not a realistic option. In this paper, we propose a simple yet effective approach to address uVCIL. We first consider a deep feature extractor network, providing a set of representative video features during each task without assuming any class or task information. We then progressively build a series of deep clusters from the extracted features. During the successive task learning, the model updated from the previous task is used as an initial state in order to transfer knowledge to the current learning task. We perform in-depth evaluations on three standard video action recognition datasets, including UCF101, HMDB51, and Something-Something V2, by ignoring the labels from the supervised setting. Our approach significantly outperforms other baselines on all datasets.
+ oai:arXiv.org:2601.14069v1
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nattapong Kurpukdee, Adrian G. Bors
+
+
+ On the optimal shape parameter for kernel methods: Sharp direct and inverse statements
+ https://arxiv.org/abs/2601.14070
+ arXiv:2601.14070v1 Announce Type: new
+Abstract: The search for the optimal shape parameter for Radial Basis Function (RBF) kernel approximation has been an outstanding research problem for decades. In this work, we establish a theoretical framework for this problem by leveraging a recently established theory on sharp direct, inverse and saturation statements for kernel based approximation. In particular, we link the search for the optimal shape parameter to superconvergence phenomena. Our analysis is carried out for finitely smooth Sobolev kernels, thereby covering large classes of radial kernels used in practice, including those emerging from current machine-learning methodologies. Our results elucidate how approximation regimes, kernel regularity, and parameter choices interact, thereby clarifying a question that has remained unresolved for decades.
+ oai:arXiv.org:2601.14070v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tizian Wenzel, Gabriele Santin
+
+
+ Utilizing the Perceived Age to Maximize Freshness in Query-Based Update Systems
+ https://arxiv.org/abs/2601.14075
+ arXiv:2601.14075v1 Announce Type: new
+Abstract: Query-based sampling has become an increasingly popular technique for monitoring Markov sources in pull-based update systems. However, most of the contemporary literature on this topic assumes an exponential distribution for query delay and often relies on the assumption that the feedback or replies to the queries are instantaneous. In this work, we relax both of these assumptions and find optimal sampling policies for monitoring continuous-time Markov chains (CTMC) under generic delay distributions. In particular, we show that one can obtain significant gains in terms of mean binary freshness (MBF) by employing a waiting-based strategy for query-based sampling.
+ oai:arXiv.org:2601.14075v1
+ cs.IT
+ cs.SY
+ eess.SY
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sahan Liyanaarachchi, Sennur Ulukus, Nail Akar
+
+
+ From Trees to Tree-Like: Distribution and Synthesis for Asynchronous Automata
+ https://arxiv.org/abs/2601.14078
+ arXiv:2601.14078v1 Announce Type: new
+Abstract: We revisit constructions for distribution and synthesis of Zielonka's asynchronous automata in restricted settings. We show first a simple, quadratic, distribution construction for asynchronous automata, where the process architecture is tree-like. An architecture is tree-like if there is an underlying spanning tree of the architecture and communications are local on the tree. This quadratic distribution result generalizes the known construction for tree architectures and improves on an older, exponential construction for triangulated dependence alphabets. Lastly we consider the problem of distributed controller synthesis and show that it is decidable for tree-like architectures. This extends the decidability boundary from tree architectures to tree-like keeping the same $\text{Tower}_d(n)$ complexity bound, where $n$ is the size of the system and $d \ge 0$ the depth of the process tree.
+ oai:arXiv.org:2601.14078v1
+ cs.FL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mathieu Lehaut, Anca Muscholl, Nir Piterman
+
+
+ VENI: Variational Encoder for Natural Illumination
+ https://arxiv.org/abs/2601.14079
+ arXiv:2601.14079v1 Announce Type: new
+Abstract: Inverse rendering is an ill-posed problem, but priors, such as illumination priors, can simplify it. Existing work either disregards the spherical and rotation-equivariant nature of illumination environments or does not provide a well-behaved latent space. We propose a rotation-equivariant variational autoencoder that models natural illumination on the sphere without relying on 2D projections. To preserve the SO(2)-equivariance of environment maps, we use a novel Vector Neuron Vision Transformer (VN-ViT) as encoder and a rotation-equivariant conditional neural field as decoder. In the encoder, we reduce the equivariance from SO(3) to SO(2) using a novel SO(2)-equivariant fully connected layer, an extension of Vector Neurons. We show that our SO(2)-equivariant fully connected layer outperforms standard Vector Neurons when used in our SO(2)-equivariant model. Compared to previous methods, our variational autoencoder enables smoother interpolation in latent space and offers a more well-behaved latent space.
+ oai:arXiv.org:2601.14079v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Paul Walker, James A. D. Gardner, Andreea Ardelean, William A. P. Smith, Bernhard Egger
+
+
+ Feature-Aware Test Generation for Deep Learning Models
+ https://arxiv.org/abs/2601.14081
+ arXiv:2601.14081v1 Announce Type: new
+Abstract: As deep learning models are widely used in software systems, test generation plays a crucial role in assessing the quality of such models before deployment. To date, the most advanced test generators rely on generative AI to synthesize inputs; however, these approaches remain limited in providing semantic insight into the causes of misbehaviours and in offering fine-grained semantic controllability over the generated inputs. In this paper, we introduce Detect, a feature-aware test generation framework for vision-based deep learning (DL) models that systematically generates inputs by perturbing disentangled semantic attributes within the latent space. Detect perturbs individual latent features in a controlled way and observes how these changes affect the model's output. Through this process, it identifies which features lead to behavior shifts and uses a vision-language model for semantic attribution. By distinguishing between task-relevant and irrelevant features, Detect applies feature-aware perturbations targeted for both generalization and robustness. Empirical results across image classification and detection tasks show that Detect generates high-quality test cases with fine-grained control, reveals distinct shortcut behaviors across model architectures (convolutional and transformer-based), and bugs that are not captured by accuracy metrics. Specifically, Detect outperforms a state-of-the-art test generator in decision boundary discovery and a leading spurious feature localization method in identifying robustness failures. Our findings show that fully fine-tuned convolutional models are prone to overfitting on localized cues, such as co-occurring visual traits, while weakly supervised transformers tend to rely on global features, such as environmental variances. These findings highlight the value of interpretable and feature-aware testing in improving DL model reliability.
+ oai:arXiv.org:2601.14081v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Xingcheng Chen, Oliver Weissl, Andrea Stocco
+
+
+ DermaBench: A Clinician-Annotated Benchmark Dataset for Dermatology Visual Question Answering and Reasoning
+ https://arxiv.org/abs/2601.14084
+ arXiv:2601.14084v1 Announce Type: new
+Abstract: Vision-language models (VLMs) are increasingly important in medical applications; however, their evaluation in dermatology remains limited by datasets that focus primarily on image-level classification tasks such as lesion recognition. While valuable for recognition, such datasets cannot assess the full visual understanding, language grounding, and clinical reasoning capabilities of multimodal models. Visual question answering (VQA) benchmarks are required to evaluate how models interpret dermatological images, reason over fine-grained morphology, and generate clinically meaningful descriptions. We introduce DermaBench, a clinician-annotated dermatology VQA benchmark built on the Diverse Dermatology Images (DDI) dataset. DermaBench comprises 656 clinical images from 570 unique patients spanning Fitzpatrick skin types I-VI. Using a hierarchical annotation schema with 22 main questions (single-choice, multi-choice, and open-ended), expert dermatologists annotated each image for diagnosis, anatomic site, lesion morphology, distribution, surface features, color, and image quality, together with open-ended narrative descriptions and summaries, yielding approximately 14,474 VQA-style annotations. DermaBench is released as a metadata-only dataset to respect upstream licensing and is publicly available at Harvard Dataverse.
+ oai:arXiv.org:2601.14084v1
+ cs.CV
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Abdurrahim Yilmaz, Ozan Erdem, Ece Gokyayla, Ayda Acar, Burc Bugra Dagtas, Dilara Ilhan Erdil, Gulsum Gencoglan, Burak Temelkuran
+
+
+ Two-Stream temporal transformer for video action classification
+ https://arxiv.org/abs/2601.14086
+ arXiv:2601.14086v1 Announce Type: new
+Abstract: Motion representation plays an important role in video understanding and has many applications, including action recognition, robot and autonomous-vehicle guidance, and others. Lately, transformer networks, through their self-attention mechanism capabilities, have proved their efficiency in many applications. In this study, we introduce a new two-stream transformer video classifier, which extracts spatio-temporal information from content and optical flow representing movement information. The proposed model identifies self-attention features across the joint optical flow and temporal frame domain and represents their relationships within the transformer encoder mechanism. The experimental results show that our proposed methodology provides excellent classification results on three well-known video datasets of human activities.
+ oai:arXiv.org:2601.14086v1
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nattapong Kurpukdee, Adrian G. Bors
+
+
+ '1'-bit Count-based Sorting Unit to Reduce Link Power in DNN Accelerators
+ https://arxiv.org/abs/2601.14087
+ arXiv:2601.14087v1 Announce Type: new
+Abstract: Interconnect power consumption remains a bottleneck in Deep Neural Network (DNN) accelerators. While ordering data based on '1'-bit counts can mitigate this via reduced switching activity, practical hardware sorting implementations remain underexplored. This work proposes the hardware implementation of a comparison-free sorting unit optimized for Convolutional Neural Networks (CNN). By leveraging approximate computing to group population counts into coarse-grained buckets, our design achieves hardware area reductions while preserving the link power benefits of data reordering. Our approximate sorting unit achieves up to 35.4% area reduction while maintaining a 19.50% BT reduction, compared to the 20.42% of the precise implementation.
+ oai:arXiv.org:2601.14087v1
+ cs.AR
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ruichi Han (Department of Electronics,Embedded Systems, KTH Royal Institute of Technology, Stockholm, Sweden), Yizhi Chen (Department of Electronics,Embedded Systems, KTH Royal Institute of Technology, Stockholm, Sweden), Tong Lei (Department of Electronics,Embedded Systems, KTH Royal Institute of Technology, Stockholm, Sweden), Jordi Altayo Gonzalez (Department of Electronics,Embedded Systems, KTH Royal Institute of Technology, Stockholm, Sweden), Ahmed Hemani (Department of Electronics,Embedded Systems, KTH Royal Institute of Technology, Stockholm, Sweden)
+
+
+ Near Optimal Code Construction for the Adversarial Torn Paper Channel With Edit Errors
+ https://arxiv.org/abs/2601.14088
+ arXiv:2601.14088v1 Announce Type: new
+Abstract: Motivated by DNA storage systems and 3D fingerprinting, this work studies the adversarial torn paper channel with edit errors. This channel first applies at most $t_e$ edit errors (i.e., insertions, deletions, and substitutions) to the transmitted word and then breaks it into $t+1$ fragments at arbitrary positions. In this paper, we construct a near optimal error correcting code for this channel, which will be referred to as a $t$-breaks $t_e$-edit-errors resilient code. This code enables reconstructing the transmitted codeword from the $t+1$ noisy fragments. Moreover, we study list decoding of the torn paper channel by deriving bounds on the size of the list (of codewords) obtained from cutting a codeword of a $t$-breaks resilient code $t'$ times, where $t' > t$.
+ oai:arXiv.org:2601.14088v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Maria Abu-Sini, Reinhard Heckel
+
+
+ Data-Driven Safe Output Regulation of Strict-Feedback Linear Systems with Input Delay
+ https://arxiv.org/abs/2601.14089
+ arXiv:2601.14089v1 Announce Type: new
+Abstract: This paper develops a data-driven safe control framework for linear systems possessing a known strict-feedback structure, but with most plant parameters, external disturbances, and input delay being unknown. By leveraging Koopman operator theory, we utilize Krylov dynamic mode decomposition (DMD) to extract the system dynamics from measured data, enabling the reconstruction of the system and disturbance matrices. Concurrently, the batch least-squares identification (BaLSI) method is employed to identify other unknown parameters in the input channel. Using control barrier functions (CBFs) and backstepping, we first develop a full-state safe controller. Based on this, we build an output-feedback controller by performing system identification using only the output data and actuation signals as well as constructing an observer to estimate the unmeasured plant states. The proposed approach achieves: 1) finite-time identification of a substantial set of unknown system quantities, and 2) exponential convergence of the output state (the state furthest from the control input) to a reference trajectory while rigorously ensuring safety constraints. The effectiveness of the proposed method is demonstrated through a safe vehicle platooning application.
+ oai:arXiv.org:2601.14089v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Zhenxu Zhao, Ji Wang, Weiyao Lan
+
+
+ Zero-shot adaptable task planning for autonomous construction robots: a comparative study of lightweight single and multi-AI agent systems
+ https://arxiv.org/abs/2601.14091
+ arXiv:2601.14091v1 Announce Type: new
+Abstract: Robots are expected to play a major role in the future construction industry but face challenges due to high costs and difficulty adapting to dynamic tasks. This study explores the potential of foundation models to enhance the adaptability and generalizability of task planning in construction robots. Four models are proposed and implemented using lightweight, open-source large language models (LLMs) and vision language models (VLMs). These models include a single agent and three multi-agent teams that collaborate to create robot action plans. The models are evaluated across three construction roles: Painter, Safety Inspector, and Floor Tiling. Results show that the four-agent team outperforms the state-of-the-art GPT-4o in most metrics while being ten times more cost-effective. Additionally, teams with three and four agents demonstrate improved generalizability. By discussing how agent behaviors influence outputs, this study enhances the understanding of AI teams and supports future research in diverse unstructured environments beyond construction.
+ oai:arXiv.org:2601.14091v1
+ cs.RO
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Hossein Naderi, Alireza Shojaei, Lifu Huang, Philip Agee, Kereshmeh Afsari, Abiola Akanmu
+
+
+ Optimizing Energy and Data Collection in UAV-aided IoT Networks using Attention-based Multi-Objective Reinforcement Learning
+ https://arxiv.org/abs/2601.14092
+ arXiv:2601.14092v1 Announce Type: new
+Abstract: Due to their adaptability and mobility, Unmanned Aerial Vehicles (UAVs) are becoming increasingly essential for wireless network services, particularly for data harvesting tasks. In this context, Artificial Intelligence (AI)-based approaches have gained significant attention for addressing UAV path planning tasks in large and complex environments, bridging the gap with real-world deployments. However, many existing algorithms suffer from limited training data, which hampers their performance in highly dynamic environments. Moreover, they often overlook the inherently multi-objective nature of the task, treating it in an overly simplistic manner. To address these limitations, we propose an attention-based Multi-Objective Reinforcement Learning (MORL) architecture that explicitly handles the trade-off between data collection and energy consumption in urban environments, even without prior knowledge of wireless channel conditions. Our method develops a single model capable of adapting to varying trade-off preferences and dynamic scenario parameters without the need for fine-tuning or retraining. Extensive simulations show that our approach achieves substantial improvements in performance, model compactness, sample efficiency, and most importantly, generalization to previously unseen scenarios, outperforming existing RL solutions.
+ oai:arXiv.org:2601.14092v1
+ cs.LG
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Babacar Toure, Dimitrios Tsilimantos, Omid Esrafilian, Marios Kountouris
+
+
+ Remapping and navigation of an embedding space via error minimization: a fundamental organizational principle of cognition in natural and artificial systems
+ https://arxiv.org/abs/2601.14096
+ arXiv:2601.14096v1 Announce Type: new
+Abstract: The emerging field of diverse intelligence seeks an integrated view of problem-solving in agents of very different provenance, composition, and substrates. From subcellular chemical networks to swarms of organisms, and across evolved, engineered, and chimeric systems, it is hypothesized that scale-invariant principles of decision-making can be discovered. We propose that cognition in both natural and synthetic systems can be characterized and understood by the interplay between two equally important invariants: (1) the remapping of embedding spaces, and (2) the navigation within these spaces. Biological collectives, from single cells to entire organisms (and beyond), remap transcriptional, morphological, physiological, or 3D spaces to maintain homeostasis and regenerate structure, while navigating these spaces through distributed error correction. Modern Artificial Intelligence (AI) systems, including transformers, diffusion models, and neural cellular automata enact analogous processes by remapping data into latent embeddings and refining them iteratively through contextualization. We argue that this dual principle - remapping and navigation of embedding spaces via iterative error minimization - constitutes a substrate-independent invariant of cognition. Recognizing this shared mechanism not only illuminates deep parallels between living systems and artificial models, but also provides a unifying framework for engineering adaptive intelligence across scales.
+ oai:arXiv.org:2601.14096v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Benedikt Hartl, L\'eo Pio-Lopez, Chris Fields, Michael Levin
+
+
+ A flexible language model-assisted electronic design automation framework
+ https://arxiv.org/abs/2601.14098
+ arXiv:2601.14098v1 Announce Type: new
+Abstract: Large language models (LLMs) are transforming electronic design automation (EDA) by enhancing design stages such as schematic design, simulation, netlist synthesis, and place-and-route. Existing methods primarily confine these optimisations to isolated open-source EDA tools and often lack the flexibility to handle multiple domains, such as analogue, digital, and radio-frequency design. In contrast, modern systems require interfacing with commercial EDA environments, adhering to tool-specific operation rules, and incorporating feedback from design outcomes while supporting diverse design flows. We propose a versatile framework that uses LLMs to generate files compatible with commercial EDA tools and optimise designs using power-performance-area reports. This is accomplished by guiding the LLMs with tool constraints and feedback from design outputs to meet tool requirements and user specifications. Case studies on operational transconductance amplifiers, microstrip patch antennas, and FPGA circuits show that the framework is effective as an EDA-aware assistant, handling diverse design challenges reliably.
+ oai:arXiv.org:2601.14098v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Cristian Sestito, Panagiota Kontou, Pratibha Verma, Atish Dixit, Alexandros D. Keros, Michael O'Boyle, Christos-Savvas Bouganis, Themis Prodromakis
+
+
+ Causal feature selection framework for stable soft sensor modeling based on time-delayed cross mapping
+ https://arxiv.org/abs/2601.14099
+ arXiv:2601.14099v1 Announce Type: new
+Abstract: Soft sensor modeling plays a crucial role in process monitoring. Causal feature selection can enhance the performance of soft sensor models in industrial applications. However, existing methods ignore two critical characteristics of industrial processes. Firstly, causal relationships between variables always involve time delays, whereas most causal feature selection methods investigate causal relationships in the same time dimension. Secondly, variables in industrial processes are often interdependent, which contradicts the decorrelation assumption of traditional causal inference methods. Consequently, soft sensor models based on existing causal feature selection approaches often lack sufficient accuracy and stability. To overcome these challenges, this paper proposes a causal feature selection framework based on time-delayed cross mapping. Time-delayed cross mapping employs state space reconstruction to effectively handle interdependent variables in causality analysis, and considers varying causal strength across time delays. Time-delayed convergent cross mapping (TDCCM) is introduced for total causal inference, and time-delayed partial cross mapping (TDPCM) is developed for direct causal inference. Then, in order to achieve automatic feature selection, an objective feature selection strategy is presented. The causal threshold is automatically determined based on the model performance on the validation set, and the causal features are then selected. Two real-world case studies show that TDCCM achieves the highest average performance, while TDPCM improves soft sensor stability and performance in the worst-case scenario. The code is publicly available at https://github.com/dirge1/TDPCM.
+ oai:arXiv.org:2601.14099v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1016/j.aei.2026.104337
+ Advanced Engineering Informatics 2026, 71, 104337
+ Shi-Shun Chen, Xiao-Yang Li, Enrico Zio
+
+
+ Curriculum-Based Strategies for Efficient Cross-Domain Action Recognition
+ https://arxiv.org/abs/2601.14101
+ arXiv:2601.14101v1 Announce Type: new
+Abstract: Despite significant progress in human action recognition, generalizing to diverse viewpoints remains a challenge. Most existing datasets are captured from ground-level perspectives, and models trained on them often struggle to transfer to drastically different domains such as aerial views. This paper examines how curriculum-based training strategies can improve generalization to unseen real aerial-view data without using any real aerial data during training.
+ We explore curriculum learning for cross-view action recognition using two out-of-domain sources: synthetic aerial-view data and real ground-view data. Our evaluation of training order (fine-tuning on synthetic aerial data vs. real ground data) shows that fine-tuning on real ground data performs best. Both proposed strategies fine-tune on real ground data but differ in how they transition from synthetic to real. The first uses a two-stage curriculum with direct fine-tuning, while the second applies a progressive curriculum that expands the dataset in multiple stages before fine-tuning. We evaluate both methods on the REMAG dataset using SlowFast (CNN-based) and MViTv2 (Transformer-based) architectures.
+ Results show that combining the two out-of-domain datasets clearly outperforms training on a single domain, whether real ground-view or synthetic aerial-view. Both curriculum strategies match the top-1 accuracy of simple dataset combination while offering efficiency gains. With the two-step fine-tuning method, SlowFast achieves up to a 37% reduction in iterations and MViTv2 up to a 30% reduction compared to simple combination. The multi-step progressive approach further reduces iterations, by up to 9% for SlowFast and 30% for MViTv2, relative to the two-step method. These findings demonstrate that curriculum-based training can maintain comparable performance (top-1 accuracy within 3% range) while improving training efficiency in cross-view action recognition.
+ oai:arXiv.org:2601.14101v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Emily Kim, Allen Wu, Jessica Hodgins
+
+
+ Interp3D: Correspondence-aware Interpolation for Generative Textured 3D Morphing
+ https://arxiv.org/abs/2601.14103
+ arXiv:2601.14103v1 Announce Type: new
+Abstract: Textured 3D morphing seeks to generate smooth and plausible transitions between two 3D assets, preserving both structural coherence and fine-grained appearance. This ability is crucial not only for advancing 3D generation research but also for practical applications in animation, editing, and digital content creation. Existing approaches either operate directly on geometry, limiting them to shape-only morphing while neglecting textures, or extend 2D interpolation strategies into 3D, which often causes semantic ambiguity, structural misalignment, and texture blurring. These challenges underscore the necessity to jointly preserve geometric consistency, texture alignment, and robustness throughout the transition process. To address this, we propose Interp3D, a novel training-free framework for textured 3D morphing. It harnesses generative priors and adopts a progressive alignment principle to ensure both geometric fidelity and texture coherence. Starting from semantically aligned interpolation in condition space, Interp3D enforces structural consistency via SLAT (Structured Latent)-guided structure interpolation, and finally transfers appearance details through fine-grained texture fusion. For comprehensive evaluations, we construct a dedicated dataset, Interp3DData, with graded difficulty levels and assess generation results from fidelity, transition smoothness, and plausibility. Both quantitative metrics and human studies demonstrate the significant advantages of our proposed approach over previous methods. Source code is available at https://github.com/xiaolul2/Interp3D.
+ oai:arXiv.org:2601.14103v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaolu Liu, Yicong Li, Qiyuan He, Jiayin Zhu, Wei Ji, Angela Yao, Jianke Zhu
+
+
+ Diffusion-Guided Backdoor Attacks in Real-World Reinforcement Learning
+ https://arxiv.org/abs/2601.14104
+ arXiv:2601.14104v1 Announce Type: new
+Abstract: Backdoor attacks embed hidden malicious behaviors in reinforcement learning (RL) policies and activate them using triggers at test time. Most existing attacks are validated only in simulation, while their effectiveness in real-world robotic systems remains unclear. In physical deployment, safety-constrained control pipelines such as velocity limiting, action smoothing, and collision avoidance suppress abnormal actions, causing strong attenuation of conventional backdoor attacks. We study this previously overlooked problem and propose a diffusion-guided backdoor attack framework (DGBA) for real-world RL. We design small printable visual patch triggers placed on the floor and generate them using a conditional diffusion model that produces diverse patch appearances under real-world visual variations. We treat the robot control stack as a black-box system. We further introduce an advantage-based poisoning strategy that injects triggers only at decision-critical training states. We evaluate our method on a TurtleBot3 mobile robot and demonstrate reliable activation of targeted attacks while preserving normal task performance. Demo videos and code are available in the supplementary material.
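As a rough illustration of the advantage-based poisoning idea (injecting triggers only at decision-critical training states), here is a minimal sketch. Selecting the top-k states by absolute advantage, and the poisoning fraction, are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

# Illustrative sketch: "decision-critical" states are proxied here by the
# largest |advantage| values; only those states would receive the trigger.
def select_poison_states(advantages, poison_frac=0.1):
    """Return indices of the states with the largest absolute advantage."""
    adv = np.asarray(advantages, dtype=float)
    k = max(1, int(poison_frac * len(adv)))
    return np.argsort(-np.abs(adv))[:k]

# Toy advantage estimates for six training states.
advantages = [0.1, -2.5, 0.3, 1.8, -0.05, 0.9]
print(select_poison_states(advantages, poison_frac=0.34))  # indices of the 2 most critical states
```

Concentrating the poison budget on high-|advantage| states targets exactly the decisions where a flipped action changes the outcome most, which is why sparse poisoning can still yield reliable activation.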
+ oai:arXiv.org:2601.14104v1
+ cs.RO
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tairan Huang, Qingqing Ye, Yulin Jin, Jiawei Lian, Yi Wang, Haibo Hu
+
+
+ Truth with a Twist: The Rhetoric of Persuasion in Professional vs. Community-Authored Fact-Checks
+ https://arxiv.org/abs/2601.14105
+ arXiv:2601.14105v1 Announce Type: new
+Abstract: This study presents the first large-scale comparison of persuasion techniques present in crowd- versus professionally-written debunks. Using extensive datasets from Community Notes (CNs), EUvsDisinfo, and the Database of Known Fakes (DBKF), we quantify the prevalence and types of persuasion techniques across these fact-checking ecosystems. Contrary to the prior hypothesis that community-produced debunks rely more heavily on subjective or persuasive wording, we find no evidence that CNs contain a higher average number of persuasion techniques than professional fact-checks. We additionally identify systematic rhetorical differences between CNs and professional debunking efforts, reflecting differences in institutional norms and topical coverage. Finally, we examine how the crowd evaluates persuasive language in CNs and show that, although notes with more persuasive elements receive slightly higher overall helpfulness ratings, crowd raters are effective at penalising the use of particular problematic rhetorical means.
+ oai:arXiv.org:2601.14105v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3774904.3792938
+ Olesya Razuvayevskaya, Kalina Bontcheva
+
+
+ Communication Technologies for Intelligent Transportation Systems: From Railways to UAVs and Beyond
+ https://arxiv.org/abs/2601.14106
+ arXiv:2601.14106v1 Announce Type: new
+Abstract: This white paper aims to comprehensively analyze and consolidate the state of the art in communication technologies supporting modern and future Intelligent Transportation Systems (ITS). Its primary objective is to establish a common understanding of how communication solutions enable automation, safety, and efficiency across multiple transport domains, including railways, road vehicles, aircraft, and unmanned aerial vehicles. The document seeks to identify key communication requirements and technological enablers necessary for interoperable and reliable ITS operation. It also assesses the limitations of current systems and proposes pathways for integrating emerging technologies such as 5G, Sixth Generation (6G), and Artificial Intelligence (AI)-driven network control. The white paper also intends to support harmonization between different transport modes through a unified framework for communication modeling, testing, and standardization. It highlights the importance of accurate channel modeling and empirical validation to design efficient, robust, and scalable systems. Another objective is to explore the use of reconfigurable intelligent surfaces, integrated sensing and communication, and digital twin concepts within ITS. The document emphasizes the role of spectrum management and standardization efforts in ensuring interoperability among diverse communication systems. Finally, the paper seeks to stimulate collaboration among academia, industry, and standardization bodies to advance the design of resilient and adaptive communication infrastructures for future transportation systems.
+ oai:arXiv.org:2601.14106v1
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.26636/jtit.2025.COST-CA20120-VT2.2385
+ White Paper: Communication Technologies for Intelligent Transportation Systems: From Railways to UAVs and Beyond, JTIT, pp. 1 to 108, Dec. 2025
+ Shrief Rizkalla, Adrian Kliks, Nila Bagheri, Miguel A. Bellido-Manganell, Aniruddha Chandra, Anja Dakic, Laura Finarelli, Davy Gaillot, Matti Hamalainen, Ruisi He, Markus Hofer, Sandaruwan Jayaweera, Francesco Linsalata, Konstantin Mikhaylov, Jon M. Peha, Ibrahim Rashdan, Gianluca Rizzo, Abdul Saboor, Martin Schmidhammer, Michal Sybis, Fredrik Tufvesson, Paul Unterhuber, Fernando J. Velez, Evgenii Vinogradov, Michael Walter, Thomas Zemen, Haibin Zhang, Zhengyu Zhang
+
+
+ AttackMate: Realistic Emulation and Automation of Cyber Attack Scenarios Across the Kill Chain
+ https://arxiv.org/abs/2601.14108
+ arXiv:2601.14108v1 Announce Type: new
+Abstract: Adversary emulation tools facilitate scripting and automated execution of cyber attack chains, thereby reducing costs and manual expert effort required for security testing, cyber exercises, and intrusion detection research. However, due to the fact that existing tools typically rely on agents installed on target systems, they leave suspicious traces that make it easy to distinguish their activities from those of real human attackers. Moreover, these tools often lack relevant capabilities, such as handling of interactive prompts, and are unsuitable for emulating specific stages of the kill chain, such as initial access. This paper thus introduces AttackMate, an open-source attack scripting language and execution engine designed to mimic behavior patterns of actual attackers. We validate the tool in a case study covering common attack steps including privilege escalation, information gathering, and lateral movement. Our results indicate that log artifacts resulting from AttackMate's activities resemble those produced by human attackers more closely than those generated by standard adversary emulation tools.
+ oai:arXiv.org:2601.14108v1
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Max Landauer, Wolfgang Hotwagner, Thorina Boenke, Florian Skopik, Markus Wurzenberger
+
+
+ TLSQL: Table Learning Structured Query Language
+ https://arxiv.org/abs/2601.14109
+ arXiv:2601.14109v1 Announce Type: new
+Abstract: Table learning, which lies at the intersection of machine learning and modern database systems, has recently attracted growing attention. However, existing frameworks typically require explicit data export and extensive feature engineering, creating a high barrier for database practitioners. We present TLSQL (Table Learning Structured Query Language), a system that enables table learning directly over relational databases via SQL-like declarative specifications. TLSQL is implemented as a lightweight Python library that translates these specifications into standard SQL queries and structured learning task descriptions. The generated SQL queries are executed natively by the database engine, while the task descriptions are consumed by downstream table learning frameworks. This design allows users to focus on modeling and analysis rather than low-level data preparation and pipeline orchestration. Experiments on real-world datasets demonstrate that TLSQL effectively lowers the barrier to integrating machine learning into database-centric workflows. Our code is available at https://github.com/rllmproject/tlsql/.
+ oai:arXiv.org:2601.14109v1
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Feiyang Chen, Ken Zhong, Aoqian Zhang, Zheng Wang, Li Pan, Jianhua Li
+
+
+ PMCE: Probabilistic Multi-Granularity Semantics with Caption-Guided Enhancement for Few-Shot Learning
+ https://arxiv.org/abs/2601.14111
+ arXiv:2601.14111v1 Announce Type: new
+Abstract: Few-shot learning aims to identify novel categories from only a handful of labeled samples, where prototypes estimated from scarce data are often biased and generalize poorly. Semantic-based methods alleviate this by introducing coarse class-level information, but they are mostly applied on the support side, leaving query representations unchanged. In this paper, we present PMCE, a Probabilistic few-shot framework that leverages Multi-granularity semantics with Caption-guided Enhancement. PMCE constructs a nonparametric knowledge bank that stores visual statistics for each category as well as CLIP-encoded class name embeddings of the base classes. At meta-test time, the most relevant base classes are retrieved based on the similarities of class name embeddings for each novel category. These statistics are then aggregated into category-specific prior information and fused with the support set prototypes via a simple MAP update. Simultaneously, a frozen BLIP captioner provides label-free instance-level image descriptions, and a lightweight enhancer trained on base classes optimizes both support prototypes and query features under an inductive protocol with a consistency regularization to stabilize noisy captions. Experiments on four benchmarks show that PMCE consistently improves over strong baselines, achieving up to 7.71% absolute gain over the strongest semantic competitor on MiniImageNet in the 1-shot setting. Our code is available at https://anonymous.4open.science/r/PMCE-275D
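The "fused with the support set prototypes via a simple MAP update" step can be illustrated with a Gaussian-prior shrinkage estimate: the scarcer the support set, the more the prototype is pulled toward the retrieved prior. The weighting scheme below is a generic MAP sketch under assumed Gaussian statistics, not PMCE's exact aggregation over retrieved base classes.

```python
import numpy as np

# Minimal MAP-style fusion of a few-shot prototype with a retrieved prior mean.
# With a Gaussian prior of "strength" tau on the class mean, the posterior mean
# is a convex combination of prior mean and support mean, weighted by tau and n.
def map_prototype(support_feats, prior_mean, prior_strength=5.0):
    support_feats = np.asarray(support_feats, dtype=float)
    n = support_feats.shape[0]                 # number of support shots
    support_mean = support_feats.mean(axis=0)  # empirical (biased) prototype
    return (prior_strength * prior_mean + n * support_mean) / (prior_strength + n)

prior = np.array([1.0, 0.0])      # prior aggregated from similar base classes
support = [[0.0, 1.0]]            # 1-shot: a single, possibly atypical, support feature
proto = map_prototype(support, prior, prior_strength=4.0)
print(proto)  # [0.8 0.2]: pulled strongly toward the prior in the 1-shot regime
```

As the shot count grows, `n` dominates `prior_strength` and the estimate converges back to the empirical prototype, so the prior mainly corrects the extreme low-shot settings where bias is worst.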
+ oai:arXiv.org:2601.14111v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiaying Wu, Can Gao, Jinglu Hu, Hui Li, Xiaofeng Cao, Jingcai Guo
+
+
+ Learning to Explain: Supervised Token Attribution from Transformer Attention Patterns
+ https://arxiv.org/abs/2601.14112
+ arXiv:2601.14112v1 Announce Type: new
+Abstract: Explainable AI (XAI) has become critical as transformer-based models are deployed in high-stakes applications including healthcare, legal systems, and financial services, where opacity hinders trust and accountability. Transformers' self-attention mechanisms have proven valuable for model interpretability, with attention weights successfully used to understand model focus and behavior (Xu et al., 2015; Wiegreffe and Pinter, 2019). However, existing attention-based explanation methods rely on manually defined aggregation strategies and fixed attribution rules (Abnar and Zuidema, 2020a; Chefer et al., 2021), while model-agnostic approaches (LIME, SHAP) treat the model as a black box and incur significant computational costs through input perturbation. We introduce Explanation Network (ExpNet), a lightweight neural network that learns an explicit mapping from transformer attention patterns to token-level importance scores. Unlike prior methods, ExpNet discovers optimal attention feature combinations automatically rather than relying on predetermined rules. We evaluate ExpNet in a challenging cross-task setting and benchmark it against a broad spectrum of model-agnostic methods and attention-based techniques spanning four methodological families.
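The core idea, a learned map from attention patterns to token importance instead of a fixed aggregation rule, can be sketched as follows. The feature set (mean attention received per token, per layer and head) and the linear stand-in for the learned network are illustrative assumptions; ExpNet's actual architecture and features may differ.

```python
import numpy as np

# Sketch: extract per-token attention features across layers/heads, then apply a
# (here linear) learned map to produce normalized token-importance scores.
def attention_features(attn):
    """attn: (layers, heads, tokens, tokens) -> (tokens, layers*heads) of attention received."""
    L, H, T, _ = attn.shape
    received = attn.mean(axis=2)               # (L, H, T): avg attention each token receives
    return received.transpose(2, 0, 1).reshape(T, L * H)

def token_importance(attn, w, b=0.0):
    """Linear stand-in for the learned explainer: softmax(features @ w + b)."""
    scores = attention_features(attn) @ w + b
    e = np.exp(scores - scores.max())
    return e / e.sum()

rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(4), size=(2, 3, 4))  # 2 layers, 3 heads, 4 tokens; rows sum to 1
w = np.ones(2 * 3) / 6.0                          # trained weights would go here
print(token_importance(attn, w))                  # one importance score per token
```

Training `w` (or a small MLP in its place) against supervision signals is what replaces the hand-designed rollout or relevance rules of prior attention-based methods.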
+ oai:arXiv.org:2601.14112v1
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ George Mihaila
+
+
+ Partial Reductions for Kleene Algebra with Linear Hypotheses
+ https://arxiv.org/abs/2601.14114
+ arXiv:2601.14114v1 Announce Type: new
+Abstract: Kleene algebra (KA) is an important tool for reasoning about general program equivalences, with a decidable and complete equational theory. However, KA cannot always prove equivalences between specific programs. For this purpose, one adds hypotheses to KA that encode program-specific knowledge. Traditionally, a map on regular expressions called a reduction then lets us lift decidability and completeness to these more expressive systems. Explicitly constructing such a reduction requires significant labour. Moreover, due to regularity constraints, a reduction may not exist for all combinations of expression and hypothesis.
+ We describe an automaton-based construction to mechanically derive reductions for a wide class of hypotheses. These reductions can be partial, in which case they yield partial completeness: completeness for expressions in their domain. This allows us to automatically establish the provability of more equivalences than what is covered in existing work.
+ oai:arXiv.org:2601.14114v1
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Liam Chung, Tobias Kapp\'e
+
+
+ Riemannian Liquid Spatio-Temporal Graph Network
+ https://arxiv.org/abs/2601.14115
+ arXiv:2601.14115v1 Announce Type: new
+Abstract: Liquid Time-Constant networks (LTCs), a type of continuous-time graph neural network, excel at modeling irregularly-sampled dynamics but are fundamentally confined to Euclidean space. This limitation introduces significant geometric distortion when representing real-world graphs with inherent non-Euclidean structures (e.g., hierarchies and cycles), degrading representation quality. To overcome this limitation, we introduce the Riemannian Liquid Spatio-Temporal Graph Network (RLSTG), a framework that unifies continuous-time liquid dynamics with the geometric inductive biases of Riemannian manifolds. RLSTG models graph evolution through an Ordinary Differential Equation (ODE) formulated directly on a curved manifold, enabling it to faithfully capture the intrinsic geometry of both structurally static and dynamic spatio-temporal graphs. Moreover, we provide rigorous theoretical guarantees for RLSTG, extending stability theorems of LTCs to the Riemannian domain and quantifying its expressive power via state trajectory analysis. Extensive experiments on real-world benchmarks demonstrate that, by combining advanced temporal dynamics with a Riemannian spatial representation, RLSTG achieves superior performance on graphs with complex structures. Project Page: https://rlstg.github.io
+ oai:arXiv.org:2601.14115v1
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Liangsi Lu, Jingchao Wang, Zhaorong Dai, Hanqian Liu, Yang Shi
+
+
+ NewsRECON: News article REtrieval for image CONtextualization
+ https://arxiv.org/abs/2601.14121
+ arXiv:2601.14121v1 Announce Type: new
+Abstract: Identifying when and where a news image was taken is crucial for journalists and forensic experts to produce credible stories and debunk misinformation. While many existing methods rely on reverse image search (RIS) engines, these tools often fail to return results, thereby limiting their practical applicability. In this work, we address the challenging scenario where RIS evidence is unavailable. We introduce NewsRECON, a method that links images to relevant news articles to infer their date and location from article metadata. NewsRECON leverages a corpus of over 90,000 articles and integrates: (1) a bi-encoder for retrieving event-relevant articles; (2) two cross-encoders for reranking articles by location and event consistency. Experiments on the TARA and 5Pils-OOC datasets show that NewsRECON outperforms prior work and can be combined with a multimodal large language model to achieve new SOTA results in the absence of RIS evidence. We make our code available.
+ oai:arXiv.org:2601.14121v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Jonathan Tonglet, Iryna Gurevych, Tinne Tuytelaars, Marie-Francine Moens
+
+
+ A Systematic Analysis of Chunking Strategies for Reliable Question Answering
+ https://arxiv.org/abs/2601.14123
+ arXiv:2601.14123v1 Announce Type: new
+Abstract: We study how document chunking choices impact the reliability of Retrieval-Augmented Generation (RAG) systems in industry. While practice often relies on heuristics, our end-to-end evaluation on Natural Questions systematically varies chunking method (token, sentence, semantic, code), chunk size, overlap, and context length. We use a standard industrial setup: SPLADE retrieval and a Mistral-8B generator. We derive actionable lessons for cost-efficient deployment: (i) overlap provides no measurable benefit and increases indexing cost; (ii) sentence chunking is the most cost-effective method, matching semantic chunking up to ~5k tokens; (iii) a "context cliff" reduces quality beyond ~2.5k tokens; and (iv) optimal context depends on the goal (semantic quality peaks at small contexts; exact match at larger ones).
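Lesson (ii) above, sentence chunking with no overlap, is simple to implement: pack whole sentences into chunks up to a token budget. The sketch below is a minimal stand-in (whitespace tokenization, regex sentence splitting), not the paper's exact pipeline.

```python
import re

# Sentence chunking without overlap: whole sentences are packed greedily into
# chunks of at most max_tokens "tokens" (here: whitespace-separated words).
def sentence_chunks(text, max_tokens=50):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and count + n > max_tokens:  # budget exceeded: start a new chunk
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = "RAG splits documents into chunks. Chunk size matters. Overlap adds cost."
print(sentence_chunks(doc, max_tokens=8))
```

Because chunks never share sentences, the index stores each token once, which is exactly why the study finds overlap adds indexing cost without a retrieval benefit.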
+ oai:arXiv.org:2601.14123v1
+ cs.CL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sofia Bennani, Charles Moslonka
+
+
+ Style Transfer as Bias Mitigation: Diffusion Models for Synthetic Mental Health Text for Arabic
+ https://arxiv.org/abs/2601.14124
+ arXiv:2601.14124v1 Announce Type: new
+Abstract: Synthetic data offers a promising solution for mitigating data scarcity and demographic bias in mental health analysis, yet existing approaches largely rely on pretrained large language models (LLMs), which may suffer from limited output diversity and propagate biases inherited from their training data. In this work, we propose a pretraining-free diffusion-based approach for synthetic text generation that frames bias mitigation as a style transfer problem. Using the CARMA Arabic mental health corpus, which exhibits a substantial gender imbalance, we focus on male-to-female style transfer to augment underrepresented female-authored content. We construct five datasets capturing varying linguistic and semantic aspects of gender expression in Arabic and train separate diffusion models for each setting. Quantitative evaluations demonstrate consistently high semantic fidelity between source and generated text, alongside meaningful surface-level stylistic divergence, while qualitative analysis confirms linguistically plausible gender transformations. Our results show that diffusion-based style transfer can generate high-entropy, semantically faithful synthetic data without reliance on pretrained LLMs, providing an effective and flexible framework for mitigating gender bias in sensitive, low-resource mental health domains.
+ oai:arXiv.org:2601.14124v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Saad Mankarious, Aya Zirikly
+
+
+ The Side Effects of Being Smart: Safety Risks in MLLMs' Multi-Image Reasoning
+ https://arxiv.org/abs/2601.14127
+ arXiv:2601.14127v1 Announce Type: new
+Abstract: As Multimodal Large Language Models (MLLMs) acquire stronger reasoning capabilities to handle complex, multi-image instructions, this advancement may pose new safety risks. We study this problem by introducing MIR-SafetyBench, the first benchmark focused on multi-image reasoning safety, which consists of 2,676 instances across a taxonomy of 9 multi-image relations. Our extensive evaluations on 19 MLLMs reveal a troubling trend: models with more advanced multi-image reasoning can be more vulnerable on MIR-SafetyBench. Beyond attack success rates, we find that many responses labeled as safe are superficial, often driven by misunderstanding or evasive, non-committal replies. We further observe that unsafe generations exhibit lower attention entropy than safe ones on average. This internal signature suggests a possible risk that models may over-focus on task solving while neglecting safety constraints. Our code and data are available at https://github.com/thu-coai/MIR-SafetyBench.
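The attention-entropy signal described above (unsafe generations showing lower entropy, i.e. attention concentrated on fewer tokens) can be computed directly from attention rows. The shapes below are illustrative; the paper's averaging over layers, heads, and steps may differ.

```python
import numpy as np

# Shannon entropy of attention distributions: low entropy = mass concentrated
# on few tokens (the signature the paper observes for unsafe generations).
def mean_attention_entropy(attn, eps=1e-12):
    """attn: (..., tokens) rows summing to 1; returns mean entropy in nats."""
    p = np.asarray(attn) + eps          # eps guards log(0)
    ent = -(p * np.log(p)).sum(axis=-1)
    return float(ent.mean())

uniform = np.full((1, 4), 0.25)                  # spread-out attention
peaked = np.array([[0.97, 0.01, 0.01, 0.01]])    # over-focused attention
print(mean_attention_entropy(uniform) > mean_attention_entropy(peaked))  # True
```

Under this measure, "over-focusing on task solving" shows up as peaked rows and hence lower mean entropy, which is what makes it usable as an internal risk signature.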
+ oai:arXiv.org:2601.14127v1
+ cs.CV
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renmiao Chen, Yida Lu, Shiyao Cui, Xuan Ouyang, Victor Shea-Jay Huang, Shumin Zhang, Chengwei Pan, Han Qiu, Minlie Huang
+
+
+ SandWorm: Event-based Visuotactile Perception with Active Vibration for Screw-Actuated Robot in Granular Media
+ https://arxiv.org/abs/2601.14128
+ arXiv:2601.14128v1 Announce Type: new
+Abstract: Perception in granular media remains challenging due to unpredictable particle dynamics. To address this challenge, we present SandWorm, a biomimetic screw-actuated robot augmented by peristaltic motion to enhance locomotion, and SWTac, a novel event-based visuotactile sensor with an actively vibrated elastomer. The event camera is mechanically decoupled from vibrations by a spring isolation mechanism, enabling high-quality tactile imaging of both dynamic and stationary objects. For algorithm design, we propose an IMU-guided temporal filter to enhance imaging consistency, improving MSNR by 24%. Moreover, we systematically optimize SWTac with vibration parameters, event camera settings and elastomer properties. Motivated by asymmetric edge features, we also implement contact surface estimation by U-Net. Experimental validation demonstrates SWTac's 0.2 mm texture resolution, 98% stone classification accuracy, and 0.15 N force estimation error, while SandWorm demonstrates versatile locomotion (up to 12.5 mm/s) in challenging terrains, successfully executes pipeline dredging and subsurface exploration in complex granular media (observed 90% success rate). Field experiments further confirm the system's practical performance.
+ oai:arXiv.org:2601.14128v1
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shoujie Li, Changqing Guo, Junhao Gong, Chenxin Liang, Wenhua Ding, Wenbo Ding
+
+
+ "Range as a Key" is the Key! Fast and Compact Cloud Block Store Index with RASK
+ https://arxiv.org/abs/2601.14129
+ arXiv:2601.14129v1 Announce Type: new
+Abstract: In cloud block stores, indexing is on the critical path of I/O operations and typically resides in memory. With the scaling of users and the emergence of denser storage media, the index has become a primary memory consumer, causing memory strain. Our extensive analysis of production traces reveals that write requests exhibit a strong tendency to target continuous block ranges in cloud storage systems. Thus, compared to current per-block indexing, our insight is that we should directly index block ranges (i.e., range-as-a-key) to save memory.
+ In this paper, we propose RASK, a memory-efficient and high-performance tree-structured index that natively indexes ranges. While range-as-a-key offers the potential to save memory and improve performance, realizing this idea is challenging due to the range overlap and range fragmentation issues. To handle range overlap efficiently, RASK introduces the log-structured leaf, combined with range-tailored search and garbage collection. To reduce range fragmentation, RASK employs range-aware split and merge mechanisms. Our evaluations on four production traces show that RASK reduces memory footprint by up to 98.9% and increases throughput by up to 31.0x compared to ten state-of-the-art indexes.
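The range-as-a-key idea, and the range-overlap problem it raises, can be illustrated with a toy index that stores `[start, end)` ranges and trims or splits existing entries on overlapping inserts. This is a conceptual sketch only; RASK's log-structured leaves, range-tailored search, garbage collection, and split/merge mechanisms are far more involved.

```python
# Toy "range-as-a-key" index: one entry per contiguous block range instead of
# one entry per block. Overlapping inserts trim/split the older ranges.
class RangeIndex:
    def __init__(self):
        self.ranges = []  # sorted list of (start, end, value), end exclusive

    def insert(self, start, end, value):
        kept = []
        for s, e, v in self.ranges:
            if e <= start or s >= end:       # disjoint: keep as-is
                kept.append((s, e, v))
            else:                            # overlap: keep only the remainders
                if s < start:
                    kept.append((s, start, v))
                if e > end:
                    kept.append((end, e, v))
        kept.append((start, end, value))
        self.ranges = sorted(kept)

    def lookup(self, block):
        for s, e, v in self.ranges:
            if s <= block < e:
                return v
        return None

idx = RangeIndex()
idx.insert(0, 100, "A")   # one entry covers 100 blocks (vs. 100 per-block entries)
idx.insert(40, 60, "B")   # overwrites the middle, splitting the old range
print(idx.lookup(10), idx.lookup(50), len(idx.ranges))  # A B 3
```

The example also shows the fragmentation problem: one overlapping write turned one entry into three, which is why RASK needs merge mechanisms to keep the range count low.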
+ oai:arXiv.org:2601.14129v1
+ cs.OS
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haoru Zhao, Mingkai Dong, Erci Xu, Zhongyu Wang, Haibo Chen
+
+
+ GIC-DLC: Differentiable Logic Circuits for Hardware-Friendly Grayscale Image Compression
+ https://arxiv.org/abs/2601.14130
+ arXiv:2601.14130v1 Announce Type: new
+Abstract: Neural image codecs achieve higher compression ratios than traditional hand-crafted methods such as PNG or JPEG-XL, but often incur substantial computational overhead, limiting their deployment on energy-constrained devices such as smartphones, cameras, and drones. We propose Grayscale Image Compression with Differentiable Logic Circuits (GIC-DLC), a hardware-aware codec where we train lookup tables to combine the flexibility of neural networks with the efficiency of Boolean operations. Experiments on grayscale benchmark datasets show that GIC-DLC outperforms traditional codecs in compression efficiency while allowing substantial reductions in energy consumption and latency. These results demonstrate that learned compression can be hardware-friendly, offering a promising direction for low-power image compression on edge devices.
+ oai:arXiv.org:2601.14130v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Till Aczel, David F. Jenny, Simon B\"uhrer, Andreas Plesner, Antonio Di Maio, Roger Wattenhofer
+
+
+ Practitioner Views on Mobile App Accessibility: Practices and Challenges
+ https://arxiv.org/abs/2601.14131
+ arXiv:2601.14131v1 Announce Type: new
+Abstract: As mobile applications (apps) become ubiquitous in everyday life, it is crucial for developers to prioritize accessibility for users with diverse abilities. While previous research has identified widespread accessibility issues and raised awareness of developer challenges, there remains a lack of cross-platform, globally representative insights into how practitioners approach accessibility in practice. This paper presents findings from a mixed-methods survey of 110 mobile app developers across 43 countries, examining how platform ecosystems (iOS vs. Android) and developer experience shape accessibility practices. Results show that while developers recognize the importance of accessibility, they often rely on platform-specific guidelines and typically perform compliance testing late in the development process. Developers primarily implement text-focused features while struggling with API limitations and organizational constraints. Through systematic cross-platform comparison, we identify novel platform-specific barriers and demonstrate how accessibility practices differ across developer experience levels. Our findings offer new insights into the challenges of achieving accessibility in practice and provide actionable steps for various stakeholders to promote more consistent and inclusive app development.
+ oai:arXiv.org:2601.14131v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3744916.3787791
+ Amila Indika, Rick Kazman, Anthony Peruma
+
+
+ Toward self-coding information systems
+ https://arxiv.org/abs/2601.14132
+ arXiv:2601.14132v1 Announce Type: new
+Abstract: In this extended abstract, we propose a novel research topic in the field of agentic AI, which we refer to as self-coding information systems. These systems will be able to dynamically adapt their structure or behavior by evaluating potential adaptation decisions, generate source code, test, and (re)deploy their source code autonomously, at runtime, reducing the time to market of new features. Here we motivate the topic, provide a formal definition of self-coding information systems, discuss some expected impacts of the new technology, and indicate potential research directions.
+ oai:arXiv.org:2601.14132v1
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Rodrigo Falc\~ao, Frank Elberzhager, Karthik Vaidhyanathan
+
+
+ TwinBrainVLA: Unleashing the Potential of Generalist VLMs for Embodied Tasks via Asymmetric Mixture-of-Transformers
+ https://arxiv.org/abs/2601.14133
+ arXiv:2601.14133v1 Announce Type: new
+Abstract: Standard Vision-Language-Action (VLA) models typically fine-tune a monolithic Vision-Language Model (VLM) backbone explicitly for robotic control. However, this approach creates a critical tension between maintaining high-level general semantic understanding and learning low-level, fine-grained sensorimotor skills, often leading to "catastrophic forgetting" of the model's open-world capabilities. To resolve this conflict, we introduce TwinBrainVLA, a novel architecture that coordinates a generalist VLM retaining universal semantic understanding and a specialist VLM dedicated to embodied proprioception for joint robotic control. TwinBrainVLA synergizes a frozen "Left Brain", which retains robust general visual reasoning, with a trainable "Right Brain", specialized for embodied perception, via a novel Asymmetric Mixture-of-Transformers (AsyMoT) mechanism. This design allows the Right Brain to dynamically query semantic knowledge from the frozen Left Brain and fuse it with proprioceptive states, providing rich conditioning for a Flow-Matching Action Expert to generate precise continuous controls. Extensive experiments on SimplerEnv and RoboCasa benchmarks demonstrate that TwinBrainVLA achieves superior manipulation performance compared to state-of-the-art baselines while explicitly preserving the comprehensive visual understanding capabilities of the pre-trained VLM, offering a promising direction for building general-purpose robots that simultaneously achieve high-level semantic understanding and low-level physical dexterity.
+ oai:arXiv.org:2601.14133v1
+ cs.RO
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bin Yu, Shijie Lian, Xiaopeng Lin, Yuliang Wei, Zhaolong Shen, Changti Wu, Yuzhuo Miao, Xinming Wang, Bailing Wang, Cong Huang, Kai Chen
+
+
+ CREATE: Cross-Layer Resilience Characterization and Optimization for Efficient yet Reliable Embodied AI Systems
+ https://arxiv.org/abs/2601.14140
+ arXiv:2601.14140v1 Announce Type: new
+Abstract: Embodied Artificial Intelligence (AI) has recently attracted significant attention as it bridges AI with the physical world. Modern embodied AI systems often combine a Large Language Model (LLM)-based planner for high-level task planning and a reinforcement learning (RL)-based controller for low-level action generation, enabling embodied agents to tackle complex tasks in real-world environments. However, deploying embodied agents remains challenging due to their high computation requirements, especially for battery-powered local devices. Although techniques like lowering operating voltage can improve energy efficiency, they can introduce bit errors and result in task failures. In this work, we propose CREATE, a general design principle that leverages heterogeneous resilience at different layers for synergistic energy-reliability co-optimization. For the first time, we conduct a comprehensive error injection study on modern embodied AI systems and observe an inherent but heterogeneous fault tolerance. Building upon these insights, we develop an anomaly detection and clearance mechanism at the circuit level to eliminate outlier errors. At the model level, we propose a weight-rotation-enhanced planning algorithm to improve the fault tolerance of the LLM-based planner. Furthermore, we introduce an application-level technique, autonomy-adaptive voltage scaling, to dynamically adjust the operating voltage of the controllers. The voltage scaling circuit is co-designed to enable online voltage adjustment. Extensive experiments demonstrate that without compromising task quality, CREATE achieves 40.6% computational energy savings on average over nominal-voltage baselines and 35.0% over prior-art techniques. This further leads to 29.5% to 37.3% chip-level energy savings and approximately a 15% to 30% improvement in battery life.
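The error-injection methodology described above rests on emulating low-voltage bit errors in model state. A common way to do this, sketched below under a simplified single-bit-flip fault model that is an assumption rather than CREATE's exact campaign, is to flip a random bit in the IEEE-754 encoding of a weight.

```python
import struct
import random

# Emulate a low-voltage bit error: flip one random bit in the float32
# encoding of a value. Real campaigns vary the fault rate, location, and timing.
def flip_random_bit(x, rng):
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # float32 -> uint32 bits
    bits ^= 1 << rng.randrange(32)                       # flip one random bit
    return struct.unpack("<f", struct.pack("<I", bits))[0]

rng = random.Random(0)
weights = [0.5, -1.25, 2.0]
faulty = [flip_random_bit(w, rng) for w in weights]
print(faulty)  # magnitude of corruption depends heavily on which bit flipped
```

Flips in high exponent bits produce the large outliers that CREATE's circuit-level anomaly detection targets, while low mantissa flips are the benign errors the models' inherent fault tolerance absorbs.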
+ oai:arXiv.org:2601.14140v1
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3779212.3790147
+ Tong Xie, Yijiahao Qi, Jinqi Wen, Zishen Wan, Yanchi Dong, Zihao Wang, Shaofei Cai, Yitao Liang, Tianyu Jia, Yuan Wang, Runsheng Wang, Meng Li
+
+
+ Vector Coded Caching Multiplicatively Boosts MU-MIMO Systems Under Practical Considerations
+ https://arxiv.org/abs/2601.14142
+ arXiv:2601.14142v1 Announce Type: new
+Abstract: This work presents the first comprehensive analysis of the impact of vector coded caching (VCC) in multi-user multiple-input multiple-output (MU-MIMO) systems with multiple receive antennas and variable pathloss -- two key factors that critically influence systems with inherent MU unicasting behavior. We investigate two widely adopted precoding strategies: (i) block diagonalization (BD) at the transmitter combined with maximal ratio combining (MRC) at the receivers, and (ii) zero-forcing (ZF) precoding. Our analysis explicitly accounts for practical considerations such as channel fading, channel state information (CSI) acquisition overhead, and fairness-oriented power allocation.
+ Our contributions span both analytical and simulation-based fronts. On the analytical side, we derive expressions for the achievable throughput under BD-MRC and ZF, highlighting the performance benefits of equipping multi-antenna users with cache-aided interference management. Specifically, we develop a low-complexity BD-MRC optimization method that leverages matrix structure to significantly reduce the dimensionality involved in precoding computation, followed by solving the associated max-min fairness problem through an efficient one-dimensional search. In the massive MIMO regime, an asymptotic expression for the achievable throughput over Rayleigh fading channels is also derived. Simulations validate our theoretical results, confirming that VCC delivers substantial performance gains over optimized cacheless MU-MIMO systems. For example, with 32 transmit antennas and 2 receive antennas per user, VCC yields throughput improvements exceeding 300%. These gains are further amplified under imperfect CSI at the transmitter, where VCC's ability to offload interference mitigation to the receivers ensures robust performance even in the face of degraded CSI quality and elevated acquisition costs.
+ oai:arXiv.org:2601.14142v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hui Zhao, Petros Elia
+
+
+ The Quest for Reliable AI Accelerators: Cross-Layer Evaluation and Design Optimization
+ https://arxiv.org/abs/2601.14148
+ arXiv:2601.14148v1 Announce Type: new
+Abstract: As the CMOS technology pushes to the nanoscale, aging effects and process variations have become increasingly pronounced, posing significant reliability challenges for AI accelerators. Traditional guardband-based design approaches, which rely on pessimistic timing margin, sacrifice significant performance and computational efficiency, rendering them inadequate for high-performance AI computing demands. Current reliability-aware AI accelerator design faces two core challenges: (1) the lack of systematic cross-layer analysis tools to capture coupling reliability effects across device, circuit, architecture, and application layers; and (2) the fundamental trade-off between conventional reliability optimization and computational efficiency. To address these challenges, this paper systematically presents a series of reliability-aware accelerator designs, encompassing (1) aging and variation-aware dynamic timing analyzer, (2) accelerator dataflow optimization using critical input pattern reduction, and (3) resilience characterization and novel architecture design for large language models (LLMs). By tightly integrating cross-layer reliability modeling and AI workload characteristics, these co-optimization approaches effectively achieve reliable and efficient AI acceleration.
+ oai:arXiv.org:2601.14148v1
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ 10.1109/ASICON66040.2025.11326137
+ Meng Li, Tong Xie, Zuodong Zhang, Runsheng Wang
+
+
+ Lost in the Prompt Order: Revealing the Limitations of Causal Attention in Language Models
+ https://arxiv.org/abs/2601.14152
+ arXiv:2601.14152v1 Announce Type: new
+Abstract: Large language models exhibit surprising sensitivity to the structure of the prompt, but the mechanisms underlying this sensitivity remain poorly understood. In this work, we conduct an in-depth investigation on a striking case: in multiple-choice question answering, placing context before the questions and options (CQO) outperforms the reverse order (QOC) by over 14%p, consistently over a wide range of models and datasets. Through systematic architectural analysis, we identify causal attention as the core mechanism: in QOC prompts, the causal mask prevents option tokens from attending to context, creating an information bottleneck where context becomes invisible to options.
+ oai:arXiv.org:2601.14152v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hyunjong Ok, Jaeho Lee
+
+
+ LLM Augmented Intervenable Multimodal Adaptor for Post-operative Complication Prediction in Lung Cancer Surgery
+ https://arxiv.org/abs/2601.14154
+ arXiv:2601.14154v1 Announce Type: new
+Abstract: Postoperative complications remain a critical concern in clinical practice, adversely affecting patient outcomes and contributing to rising healthcare costs. We present MIRACLE, a deep learning architecture for predicting the risk of postoperative complications in lung cancer surgery by integrating preoperative clinical and radiological data. MIRACLE employs a hyperspherical embedding space fusion of heterogeneous inputs, enabling the extraction of robust, discriminative features from both structured clinical records and high-dimensional radiological images. To enhance transparency of prediction and clinical utility, we incorporate an interventional deep learning module in MIRACLE that not only refines predictions but also provides interpretable and actionable insights, allowing domain experts to interactively adjust recommendations based on clinical expertise. We validate our approach on POC-L, a real-world dataset comprising 3,094 lung cancer patients who underwent surgery at Roswell Park Comprehensive Cancer Center. Our results demonstrate that MIRACLE outperforms various traditional machine learning models and contemporary large language model (LLM) variants alone, for personalized and explainable postoperative risk management.
+ oai:arXiv.org:2601.14154v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shubham Pandey, Bhavin Jawade, Srirangaraj Setlur, Venu Govindaraju, Kenneth Seastedt
+
+
+ ConceptCaps -- a Distilled Concept Dataset for Interpretability in Music Models
+ https://arxiv.org/abs/2601.14157
+ arXiv:2601.14157v1 Announce Type: new
+Abstract: Concept-based interpretability methods like TCAV require clean, well-separated positive and negative examples for each concept. Existing music datasets lack this structure: tags are sparse, noisy, or ill-defined. We introduce ConceptCaps, a dataset of 23k music-caption-audio triplets with explicit labels from a 200-attribute taxonomy. Our pipeline separates semantic modeling from text generation: a VAE learns plausible attribute co-occurrence patterns, a fine-tuned LLM converts attribute lists into professional descriptions, and MusicGen synthesizes corresponding audio. This separation improves coherence and controllability over end-to-end approaches. We validate the dataset through audio-text alignment (CLAP), linguistic quality metrics (BERTScore, MAUVE), and TCAV analysis confirming that concept probes recover musically meaningful patterns. Dataset and code are available online.
+ oai:arXiv.org:2601.14157v1
+ cs.SD
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Bruno Sienkiewicz, Łukasz Neumann, Mateusz Modrzejewski
+
+
+ Multi-Partner Project: Multi-GPU Performance Portability Analysis for CFD Simulations at Scale
+ https://arxiv.org/abs/2601.14159
+ arXiv:2601.14159v1 Announce Type: new
+Abstract: As heterogeneous supercomputing architectures leveraging GPUs become increasingly central to high-performance computing (HPC), it is crucial for computational fluid dynamics (CFD) simulations, a de-facto HPC workload, to efficiently utilize such hardware. One of the key challenges of HPC codes is performance portability, i.e. the ability to maintain near-optimal performance across different accelerators. In the context of the \textbf{REFMAP} project, which targets scalable, GPU-enabled multi-fidelity CFD for urban airflow prediction, this paper analyzes the performance portability of SOD2D, a state-of-the-art Spectral Elements simulation framework, across AMD and NVIDIA GPU architectures. We first discuss the physical and numerical models underlying SOD2D, highlighting its computational hotspots. Then, we examine its performance and scalability in a multi-level manner, i.e. defining and characterizing an extensive full-stack design space spanning application, software, and hardware infrastructure related parameters. Single-GPU performance characterization across server-grade NVIDIA and AMD GPU architectures and vendor-specific compiler stacks shows the potential as well as the diverse effect of memory access optimizations, i.e. 0.69$\times$ - 3.91$\times$ deviations in acceleration speedup. Performance variability of SOD2D at scale is further examined on the LUMI multi-GPU cluster, where profiling reveals similar throughput variations, highlighting the limits of performance projections and the need for multi-level, informed tuning.
+ oai:arXiv.org:2601.14159v1
+ cs.DC
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Panagiotis-Eleftherios Eleftherakis (National Technical University of Athens, Greece), George Anagnostopoulos (National Technical University of Athens, Greece), Anastassis Kapetanakis (National Technical University of Athens, Greece), Mohammad Umair (KTH Royal Institute of Technology, Sweden), Jean-Yves Vet (Hewlett Packard Enterprise), Konstantinos Iliakis (National Technical University of Athens, Greece), Jonathan Vincent (KTH Royal Institute of Technology, Sweden), Jing Gong (KTH Royal Institute of Technology, Sweden), Akshay Patil (Technical University of Delft, Netherlands), Clara García-Sánchez (Technical University of Delft, Netherlands), Gerardo Zampino (KTH Royal Institute of Technology, Sweden), Ricardo Vinuesa (University of Michigan, USA), Sotirios Xydis (National Technical University of Athens, Greece)
+
+
+ Domain-Adaptation through Synthetic Data: Fine-Tuning Large Language Models for German Law
+ https://arxiv.org/abs/2601.14160
+ arXiv:2601.14160v1 Announce Type: new
+Abstract: Large language models (LLMs) often struggle in specialized domains such as legal reasoning due to limited expert knowledge, resulting in factually incorrect outputs or hallucinations. This paper presents an effective method for adapting advanced LLMs to German legal question answering through a novel synthetic data generation approach. In contrast to costly human-annotated resources or unreliable synthetic alternatives, our approach systematically produces high-quality, diverse, and legally accurate question-answer pairs directly from authoritative German statutes. Using rigorous automated filtering methods and parameter-efficient fine-tuning techniques, we demonstrate that LLMs adapted with our synthetic dataset significantly outperform their baseline counterparts on German legal question answering tasks. Our results highlight the feasibility of using carefully designed synthetic data as a robust alternative to manual annotation in high-stakes, knowledge-intensive domains.
+ oai:arXiv.org:2601.14160v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Ali Hamza Bashir, Muhammad Rehan Khalid, Kostadin Cvejoski, Jana Birr, Jule Berghaus, Armin Berger, Sandra Halscheidt, Christian Temath, Rafet Sifa, David Berghaus
+
+
+ One-Shot Refiner: Boosting Feed-forward Novel View Synthesis via One-Step Diffusion
+ https://arxiv.org/abs/2601.14161
+ arXiv:2601.14161v1 Announce Type: new
+Abstract: We present a novel framework for high-fidelity novel view synthesis (NVS) from sparse images, addressing key limitations in recent feed-forward 3D Gaussian Splatting (3DGS) methods built on Vision Transformer (ViT) backbones. While ViT-based pipelines offer strong geometric priors, they are often constrained by low-resolution inputs due to computational costs. Moreover, existing generative enhancement methods tend to be 3D-agnostic, resulting in inconsistent structures across views, especially in unseen regions. To overcome these challenges, we design a Dual-Domain Detail Perception Module, which enables handling high-resolution images without being limited by the ViT backbone, and endows Gaussians with additional features to store high-frequency details. We develop a feature-guided diffusion network, which can preserve high-frequency details during the restoration process. We introduce a unified training strategy that enables joint optimization of the ViT-based geometric backbone and the diffusion-based refinement module. Experiments demonstrate that our method can maintain superior generation quality across multiple datasets.
+ oai:arXiv.org:2601.14161v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yitong Dong, Qi Zhang, Minchao Jiang, Zhiqiang Wu, Qingnan Fan, Ying Feng, Huaqi Zhang, Hujun Bao, Guofeng Zhang
+
+
+ An Empirical Study on Remote Code Execution in Machine Learning Model Hosting Ecosystems
+ https://arxiv.org/abs/2601.14163
+ arXiv:2601.14163v1 Announce Type: new
+Abstract: Model-sharing platforms, such as Hugging Face, ModelScope, and OpenCSG, have become central to modern machine learning development, enabling developers to share, load, and fine-tune pre-trained models with minimal effort. However, the flexibility of these ecosystems introduces a critical security concern: the execution of untrusted code during model loading (i.e., via trust_remote_code or trust_repo). In this work, we conduct the first large-scale empirical study of custom model loading practices across five major model-sharing platforms to assess their prevalence, associated risks, and developer perceptions. We first quantify the frequency with which models require custom code to function and identify those that execute arbitrary Python files during loading. We then apply three complementary static analysis tools: Bandit, CodeQL, and Semgrep, to detect security smells and potential vulnerabilities, categorizing our findings by CWE identifiers to provide a standardized risk taxonomy. We also use YARA to identify malicious patterns and payload signatures. In parallel, we systematically analyze the documentation, API design, and safety mechanisms of each platform to understand their mitigation strategies and enforcement levels. Finally, we conduct a qualitative analysis of over 600 developer discussions from GitHub, Hugging Face, and PyTorch Hub forums, as well as Stack Overflow, to capture community concerns and misconceptions regarding security and usability. Our findings reveal widespread reliance on unsafe defaults, uneven security enforcement across platforms, and persistent confusion among developers about the implications of executing remote code. We conclude with actionable recommendations for designing safer model-sharing infrastructures and striking a balance between usability and security in future AI ecosystems.
+ oai:arXiv.org:2601.14163v1
+ cs.SE
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mohammed Latif Siddiq, Tanzim Hossain Romel, Natalie Sekerak, Beatrice Casey, Joanna C. S. Santos
+
+
+ The Impact of Interference Cognition on the Reliability and Capacity of Industrial Wireless Communications
+ https://arxiv.org/abs/2601.14164
+ arXiv:2601.14164v1 Announce Type: new
+Abstract: Interference significantly impacts the performance of industrial wireless networks, particularly in severe interference environments with dense networks reusing spectrum resources intensively. Although detailed interference information is often unavailable in conventional networks, emerging interference cognition techniques can compensate for this critical problem with varying levels of precision. This paper investigates the relationship between the precision of interference cognition and system performance. We propose a novel performance analysis framework that quantifies the impact of varying interference information precision on achievable rate.
+ Specifically, leveraging the Nakagami-$\mathbf{m}$ fading channel model, we analytically and asymptotically analyze the average achievable rate in the finite blocklength regime for different precision levels of signal and interference information. Our findings reveal the critical importance of identifying per-link interference information for achieving optimal performance. Additionally, obtaining instantaneous information is more beneficial for signal links.
+ oai:arXiv.org:2601.14164v1
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yichen Guo, Tao Peng, Yujie Zhao, Yijing Niu, Wenbo Wang
+
+
+ ASBA: A-line State Space Model and B-line Attention for Sparse Optical Doppler Tomography Reconstruction
+ https://arxiv.org/abs/2601.14165
+ arXiv:2601.14165v1 Announce Type: new
+Abstract: Optical Doppler Tomography (ODT) is an emerging blood flow analysis technique. A 2D ODT image (B-scan) is generated by sequentially acquiring 1D depth-resolved raw A-scans (A-line) along the lateral axis (B-line), followed by Doppler phase-subtraction analysis. To ensure high-fidelity B-scan images, current practices rely on dense sampling, which prolongs scanning time, increases storage demands, and limits the capture of rapid blood flow dynamics. Recent studies have explored sparse sampling of raw A-scans to alleviate these limitations, but their effectiveness is hindered by the conservative sampling rates and the uniform modeling of flow and background signals. In this study, we introduce a novel blood flow-aware network, named ASBA (A-line ROI State space model and B-line phase Attention), to reconstruct ODT images from highly sparsely sampled raw A-scans. Specifically, we propose an A-line ROI state space model to extract sparsely distributed flow features along the A-line, and a B-line phase attention to capture long-range flow signals along each B-line based on phase difference. Moreover, we introduce a flow-aware weighted loss function that encourages the network to prioritize the accurate reconstruction of flow signals. Extensive experiments on real animal data demonstrate that the proposed approach clearly outperforms existing state-of-the-art reconstruction methods.
+ oai:arXiv.org:2601.14165v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhenghong Li, Wensheng Cheng, Congwu Du, Yingtian Pan, Zhaozheng Yin, Haibin Ling
+
+
+ Paper2Rebuttal: A Multi-Agent Framework for Transparent Author Response Assistance
+ https://arxiv.org/abs/2601.14171
+ arXiv:2601.14171v1 Announce Type: new
+Abstract: Writing effective rebuttals is a high-stakes task that demands more than linguistic fluency, as it requires precise alignment between reviewer intent and manuscript details. Current solutions typically treat this as a direct-to-text generation problem, suffering from hallucination, overlooked critiques, and a lack of verifiable grounding. To address these limitations, we introduce $\textbf{RebuttalAgent}$, the first multi-agent framework that reframes rebuttal generation as an evidence-centric planning task. Our system decomposes complex feedback into atomic concerns and dynamically constructs hybrid contexts by synthesizing compressed summaries with high-fidelity text while integrating an autonomous and on-demand external search module to resolve concerns requiring outside literature. By generating an inspectable response plan before drafting, $\textbf{RebuttalAgent}$ ensures that every argument is explicitly anchored in internal or external evidence. We validate our approach on the proposed $\textbf{RebuttalBench}$ and demonstrate that our pipeline outperforms strong baselines in coverage, faithfulness, and strategic coherence, offering a transparent and controllable assistant for the peer review process. Code will be released.
+ oai:arXiv.org:2601.14171v1
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qianli Ma, Chang Guo, Zhiheng Tian, Siyu Wang, Jipeng Xiao, Yuanhao Yue, Zhipeng Zhang
+
+
+ Human Values in a Single Sentence: Moral Presence, Hierarchies, and Transformer Ensembles on the Schwartz Continuum
+ https://arxiv.org/abs/2601.14172
+ arXiv:2601.14172v1 Announce Type: new
+Abstract: We study sentence-level identification of the 19 values in the Schwartz motivational continuum as a concrete formulation of human value detection in text. The setting - out-of-context sentences from news and political manifestos - features sparse moral cues and severe class imbalance. This combination makes fine-grained sentence-level value detection intrinsically difficult, even for strong modern neural models. We first operationalize a binary moral presence task ("does any value appear?") and show that it is learnable from single sentences (positive-class F1 $\approx$ 0.74 with calibrated thresholds). We then compare a presence-gated hierarchy to a direct multi-label classifier under matched compute, both based on DeBERTa-base and augmented with lightweight signals (prior-sentence context, LIWC-22/eMFD/MJD lexica, and topic features). The hierarchy does not outperform direct prediction, indicating that gate recall limits downstream gains. We also benchmark instruction-tuned LLMs - Gemma 2 9B, Llama 3.1 8B, Mistral 8B, and Qwen 2.5 7B - in zero-/few-shot and QLoRA setups and build simple ensembles; a soft-vote supervised ensemble reaches macro-F1 0.332, significantly surpassing the best single supervised model and exceeding prior English-only baselines. Overall, in this scenario, lightweight signals and small ensembles yield the most reliable improvements, while hierarchical gating offers limited benefit. We argue that, under an 8 GB single-GPU constraint and at the 7-9B scale, carefully tuned supervised encoders remain a strong and compute-efficient baseline for structured human value detection, and we outline how richer value structure and sentence-in-document context could further improve performance.
+ oai:arXiv.org:2601.14172v1
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Víctor Yeste, Paolo Rosso
+
+
+ Penalizing Localized Dirichlet Energies in Low Rank Tensor Products
+ https://arxiv.org/abs/2601.14173
+ arXiv:2601.14173v1 Announce Type: new
+Abstract: We study low-rank tensor-product B-spline (TPBS) models for regression tasks and investigate Dirichlet energy as a measure of smoothness. We show that TPBS models admit a closed-form expression for the Dirichlet energy, and reveal scenarios where perfect interpolation is possible with exponentially small Dirichlet energy. This renders global Dirichlet energy-based regularization ineffective. To address this limitation, we propose a novel regularization strategy based on local Dirichlet energies defined on small hypercubes centered at the training points. Leveraging pretrained TPBS models, we also introduce two estimators for inference from incomplete samples. Comparative experiments with neural networks demonstrate that TPBS models outperform neural networks in the overfitting regime for most datasets, and maintain competitive performance otherwise. Overall, TPBS models exhibit greater robustness to overfitting and consistently benefit from regularization, while neural networks are more sensitive to overfitting and less effective in leveraging regularization.
+ oai:arXiv.org:2601.14173v1
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Paris A. Karakasis, Nicholas D. Sidiropoulos
+
+
+ A model of errors in transformers
+ https://arxiv.org/abs/2601.14175
+ arXiv:2601.14175v1 Announce Type: new
+Abstract: We study the error rate of LLMs on tasks like arithmetic that require a deterministic output, and repetitive processing of tokens drawn from a small set of alternatives. We argue that incorrect predictions arise when small errors in the attention mechanism accumulate to cross a threshold, and use this insight to derive a quantitative two-parameter relationship between the accuracy and the complexity of the task. The two parameters vary with the prompt and the model; they can be interpreted in terms of an elementary noise rate, and the number of plausible erroneous tokens that can be predicted. Our analysis is inspired by an ``effective field theory'' perspective: the LLM's many raw parameters can be reorganized into just two parameters that govern the error rate. We perform extensive empirical tests, using Gemini 2.5 Flash, Gemini 2.5 Pro and DeepSeek R1, and find excellent agreement between the predicted and observed accuracy for a variety of tasks, although we also identify deviations in some cases. Our model provides an alternative to suggestions that errors made by LLMs on long repetitive tasks indicate the ``collapse of reasoning'', or an inability to express ``compositional'' functions. Finally, we show how to construct prompts to reduce the error rate.
+ oai:arXiv.org:2601.14175v1
+ cs.LG
+ cs.AI
+ cs.CL
+ hep-th
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Suvrat Raju, Praneeth Netrapalli
+
+
+ ReSearch: A Multi-Stage Machine Learning Framework for Earth Science Data Discovery
+ https://arxiv.org/abs/2601.14176
+ arXiv:2601.14176v1 Announce Type: new
+Abstract: The rapid expansion of Earth Science data from satellite observations, reanalysis products, and numerical simulations has created a critical bottleneck in scientific discovery, namely identifying relevant datasets for a given research objective.
+ Existing discovery systems are primarily retrieval-centric and struggle to bridge the gap between high-level scientific intent and heterogeneous metadata at scale.
+ We introduce \textbf{ReSearch}, a multi-stage, reasoning-enhanced search framework that formulates Earth Science data discovery as an iterative process of intent interpretation, high-recall retrieval, and context-aware ranking.
+ ReSearch integrates lexical search, semantic embeddings, abbreviation expansion, and large language model reranking within a unified architecture that explicitly separates recall and precision objectives.
+ To enable realistic evaluation, we construct a literature-grounded benchmark by aligning natural language intent with datasets cited in peer-reviewed Earth Science studies.
+ Experiments demonstrate that ReSearch consistently improves recall and ranking performance over baseline methods, particularly for task-based queries expressing abstract scientific goals.
+ These results underscore the importance of intent-aware, multi-stage search as a foundational capability for reproducible and scalable Earth Science research.
+ oai:arXiv.org:2601.14176v1
+ cs.DB
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Youran Sun, Yixin Wen, Haizhao Yang
+
+
+ Progressive self-supervised blind-spot denoising method for LDCT denoising
+ https://arxiv.org/abs/2601.14180
+ arXiv:2601.14180v1 Announce Type: new
+Abstract: Self-supervised learning is increasingly investigated for low-dose computed tomography (LDCT) image denoising, as it alleviates the dependence on paired normal-dose CT (NDCT) data, which are often difficult to acquire in clinical practice. In this paper, we propose a novel self-supervised training strategy that relies exclusively on LDCT images. We introduce a step-wise blind-spot denoising mechanism that enforces conditional independence in a progressive manner, enabling more fine-grained denoising learning. In addition, we add Gaussian noise to LDCT images, which acts as a regularization and mitigates overfitting. Extensive experiments on the Mayo LDCT dataset demonstrate that the proposed method consistently outperforms existing self-supervised approaches and achieves performance comparable to, or better than, several representative supervised denoising methods.
+ oai:arXiv.org:2601.14180v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yichao Liu, Yueyang Teng, Junwen Guo
+
+
+ IIR-VLM: In-Context Instance-level Recognition for Large Vision-Language Models
+ https://arxiv.org/abs/2601.14188
+ arXiv:2601.14188v1 Announce Type: new
+Abstract: Instance-level recognition (ILR) concerns distinguishing individual instances from one another, with person re-identification as a prominent example. Despite the impressive visual perception capabilities of modern VLMs, we find their performance on ILR unsatisfactory, often dramatically underperforming domain-specific ILR models. This limitation hinders many practical applications of VLMs, e.g. where recognizing familiar people and objects is crucial for effective visual understanding. Existing solutions typically learn to recognize instances one at a time using instance-specific datasets, which not only incur substantial data collection and training costs but also struggle with fine-grained discrimination. In this work, we propose IIR-VLM, a VLM enhanced for In-context Instance-level Recognition. We integrate pre-trained ILR expert models as auxiliary visual encoders to provide specialized features for learning diverse instances, which enables VLMs to learn new instances in-context in a one-shot manner. Further, IIR-VLM leverages this knowledge for instance-aware visual understanding. We validate IIR-VLM's efficacy on existing instance personalization benchmarks. Finally, we demonstrate its superior ILR performance on a challenging new benchmark, which assesses ILR capabilities across varying difficulty and diverse categories, with person, face, pet, and general objects as the task instances.
+ oai:arXiv.org:2601.14188v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Liang Shi, Wei Li, Kevin M Beussman, Lin Chen, Yun Fu
+
+
+ From big q-Jacobi and Chebyshev polynomials to exponential-reproducing subdivision: new identities
+ https://arxiv.org/abs/2601.14189
+ arXiv:2601.14189v1 Announce Type: new
+Abstract: In this paper we derive new identities satisfied by Chebyshev polynomials of the first kind and big q-Jacobi polynomials. An immediate benefit of the derived identities is the achievement of closed-form expressions for the Laurent polynomials that identify minimum-support interpolating subdivision schemes reproducing finite sets of integer powers of exponentials.
+ oai:arXiv.org:2601.14189v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Leonard Peter Bos, Lucia Romani, Alberto Viscardi
+
+
+ Analyzing Far-Right Telegram Channels as Constituents of Information Autocracy in Russia
+ https://arxiv.org/abs/2601.14190
+ arXiv:2601.14190v1 Announce Type: new
+Abstract: This study examines how Russian far-right communities on Telegram shape perceptions of political figures through memes and visual narratives. Far from passive spectators, these actors co-produce propaganda, blending state-aligned messages with their own extremist framings. In Russia, such groups are central because they articulate the ideological foundations of the war against Ukraine and reflect the regime's gradual drift toward ultranationalist rhetoric. Drawing on a dataset of 200,000 images from expert-selected far-right Telegram channels, the study employs computer vision and unsupervised clustering to identify memes featuring Russian (Putin, Shoigu) and foreign politicians (Zelensky, Biden, Trump) and to reveal recurrent visual patterns in their representation. By leveraging the large-scale and temporal depth of this dataset, the analysis uncovers differential patterns of legitimation and delegitimation across actors and over time. These insights are not attainable in smaller-scale studies. Preliminary findings show that far-right memes function as instruments of propaganda co-production. These communities do not simply echo official messages but generate bottom-up narratives of legitimation and delegitimation that align with state ideology. By framing leaders as heroic and opponents as corrupt or weak, far-right actors act as informal co-creators of authoritarian legitimacy within Russia's informational autocracy.
+ oai:arXiv.org:2601.14190v1
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Polina Smirnova, Mykola Makhortykh
+
+
+ Toward Efficient Agents: Memory, Tool learning, and Planning
+ https://arxiv.org/abs/2601.14192
+ arXiv:2601.14192v1 Announce Type: new
+Abstract: Recent years have witnessed increasing interest in extending large language models into agentic systems. While the effectiveness of agents has continued to improve, efficiency, which is crucial for real-world deployment, has often been overlooked. This paper therefore investigates efficiency from three core components of agents: memory, tool learning, and planning, considering costs such as latency, tokens, and steps. To comprehensively address the efficiency of the agentic system itself, we review a broad range of recent approaches that differ in implementation yet frequently converge on shared high-level principles, including but not limited to bounding context via compression and management, designing reinforcement learning rewards to minimize tool invocation, and employing controlled search mechanisms to enhance efficiency; we discuss these in detail. Accordingly, we characterize efficiency in two complementary ways: comparing effectiveness under a fixed cost budget, and comparing cost at a comparable level of effectiveness. This trade-off can also be viewed through the Pareto frontier between effectiveness and cost. From this perspective, we also examine efficiency-oriented benchmarks by summarizing evaluation protocols for these components and consolidating commonly reported efficiency metrics from both benchmark and methodological studies. Moreover, we discuss the key challenges and future directions, with the goal of providing promising insights.
+ oai:arXiv.org:2601.14192v1
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaofang Yang, Lijun Li, Heng Zhou, Tong Zhu, Xiaoye Qu, Yuchen Fan, Qianshan Wei, Rui Ye, Li Kang, Yiran Qin, Zhiqiang Kou, Daizong Liu, Qi Li, Ning Ding, Siheng Chen, Jing Shao
+
+
+ A Minimax Perspective on Almost-Stable Matchings
+ https://arxiv.org/abs/2601.14195
+ arXiv:2601.14195v1 Announce Type: new
+Abstract: Stability is crucial in matching markets, yet in many real-world settings - from hospital residency allocations to roommate assignments - full stability is either impossible to achieve or can come at the cost of leaving many agents unmatched. When stability cannot be achieved, algorithmicists and market designers face a critical question: how should instability be measured and distributed among participants? Existing approaches to "almost-stable" matchings focus on aggregate measures, minimising either the total number of blocking pairs or the count of agents involved in blocking pairs. However, such aggregate objectives can result in concentrated instability on a few individual agents, raising concerns about fairness and incentives to deviate. We introduce a fairness-oriented approach to approximate stability based on the minimax principle: we seek matchings that minimise the maximum number of blocking pairs any agent is in. Equivalently, we minimise the maximum number of agents that anyone has justified envy towards. This distributional objective protects the worst-off agents from a disproportionate amount of instability. We characterise the computational complexity of this notion across fundamental matching settings. Surprisingly, even very modest guarantees prove computationally intractable: we show that it is NP-complete to decide whether a matching exists in which no agent is in more than one blocking pair, even when preference lists have constant-bounded length. This hardness applies to both Stable Roommates and maximum-cardinality Stable Marriage. On the positive side, we provide polynomial-time algorithms when agents rank at most two others, and present approximation algorithms and integer programs. Our results map the algorithmic landscape and reveal fundamental trade-offs between distributional guarantees and computational feasibility.
+ oai:arXiv.org:2601.14195v1
+ cs.GT
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Frederik Glitzner, David Manlove
+
+
+ Differentiated Pickup Point Offering for Emission Reduction in Last-Mile Delivery
+ https://arxiv.org/abs/2601.14196
+ arXiv:2601.14196v1 Announce Type: new
+Abstract: Pickup points are widely recognized as a sustainable alternative to home delivery, as consolidating orders at pickup locations can shorten delivery routes and improve first-attempt success rates. However, these benefits may be negated when customers drive to pick up their orders. This study proposes a Differentiated Pickup Point Offering (DPO) policy that aims to jointly reduce emissions from delivery truck routes and customer travel. Under DPO, each arriving customer is offered a single recommended pickup point, rather than an unrestricted choice among all locations, while retaining the option of home delivery. We study this problem in a dynamic and stochastic setting, where the pickup point offered to each customer depends on previously realized customer locations and delivery choices. To design effective DPO policies, we adopt a reinforcement learning-based approach that accounts for spatial relationships between customers and pickup points and their implications for future route consolidation. Computational experiments show that differentiated pickup point offerings can substantially reduce total carbon emissions. The proposed policies reduce total emissions by up to 9% relative to home-only delivery and by 2% on average compared with alternative policies, including unrestricted pickup point choice and nearest pickup point assignment. Differentiated offerings are particularly effective in dense urban settings with many pickup points and short inter-location distances. Moreover, explicitly accounting for the dynamic nature of customer arrivals and choices is especially important when customers are less inclined to choose pickup point delivery over home delivery.
+ oai:arXiv.org:2601.14196v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Albina Galiullina, Wouter van Heeswijk, Tom van Woensel
+
+
+ Local electrical impedance tomography via projections
+ https://arxiv.org/abs/2601.14198
+ arXiv:2601.14198v1 Announce Type: new
+Abstract: This paper introduces a method for approximately eliminating the effect that conductivity changes outside the region of interest have in electrical impedance tomography, allowing a local reconstruction to be formed in the region of interest only. The method considers the Jacobian matrix of the forward map, i.e., of the map that sends the discretized conductivity to the electrode measurements, at an initial guess for the conductivity. The Jacobian matrix is divided columnwise into two parts: one corresponding to the region of interest and a nuisance Jacobian corresponding to the rest of the domain. The leading idea is to project both the electrode measurements and the forward map onto the orthogonal complement of the span of a number of left-hand singular vectors for a suitably weighted nuisance Jacobian. The weighting can, e.g., account for the element sizes in a finite element discretization or for prior information on the conductivity outside the region of interest. The inverse problem is then solved by considering the projected relation between the measurements and the forward map, only reconstructing the conductivity in the region of interest. The functionality of the method is demonstrated by applying a reconstruction algorithm that combines lagged diffusivity iteration and total variation regularization to experimental data. In particular, data from a head-shaped water tank is considered, with the conductivity change in the region of interest mimicking growth of a hemorrhagic stroke and the changes outside the region of interest imitating physiological variations in the conductivity of the scalp.
+ oai:arXiv.org:2601.14198v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ A. Jääskeläinen, A. Vavilov, J. Toivanen, A. Hänninen, V. Kolehmainen, N. Hyvönen
+
+
+ Convergence analysis and a novel Lagrange multiplier partitioned method for fluid-poroelastic interaction
+ https://arxiv.org/abs/2601.14201
+ arXiv:2601.14201v1 Announce Type: new
+Abstract: We propose a partitioned method for the monolithic formulation of the Stokes-Biot system that incorporates Lagrange multipliers enforcing the interface conditions. The monolithic system is discretized using finite elements, and we establish convergence of the resulting approximation. A Schur complement based algorithm is developed together with an efficient preconditioner, enabling the fluid and poroelastic structure subproblems to be decoupled and solved independently at each time step. The Lagrange multipliers approximate the interface fluxes and act as Neumann boundary conditions for the subproblems, yielding parallel solution of the Stokes and Biot equations. Numerical experiments demonstrate the effectiveness of the proposed algorithm and validate the theoretical error estimate.
+ oai:arXiv.org:2601.14201v1
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Amy de Castro, Hyesuk Lee
+
+
+ Storage-Rate Trade-off in A-XPIR
+ https://arxiv.org/abs/2601.14202
+ arXiv:2601.14202v1 Announce Type: new
+Abstract: We consider the storage problem in an asymmetric $X$-secure private information retrieval (A-XPIR) setting. The A-XPIR setting considers the $X$-secure PIR problem (XPIR) when a given arbitrary set of servers is communicating. We focus on the trade-off region between the average storage at the servers and the average download cost. In the case of $N=4$ servers and two non-overlapping sets of communicating servers with $K=2$ messages, we characterize the achievable region and show that the three main inequalities compared to the no-security case collapse to two inequalities in the asymmetric security case. In the general case, we derive bounds that need to be satisfied for the general achievable region for an arbitrary number of servers and messages. In addition, we provide the storage and retrieval scheme for the case of $N=4$ servers with $K=2$ messages and two non-overlapping sets of communicating servers, such that the messages are not replicated (in the sense of a coded version of each symbol) and at the same time achieve the optimal achievable rate for the case of replication. Finally, we derive the exact capacity for the case of asymmetric security and asymmetric collusion for $N=4$ servers, with the communication links $\{1,2\}$ and $\{3,4\}$, which splits the servers into two groups, i.e., $g=2$, and with the collusion links $\{1,3\}$, $\{2,4\}$, as $C=\frac{1}{3}$. More generally, we derive a capacity result for a certain family of asymmetric collusion and asymmetric security cases.
+ oai:arXiv.org:2601.14202v1
+ cs.IT
+ cs.CR
+ cs.NI
+ eess.SP
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohamed Nomeir, Sennur Ulukus
+
+
+ Copy-Transform-Paste: Zero-Shot Object-Object Alignment Guided by Vision-Language and Geometric Constraints
+ https://arxiv.org/abs/2601.14207
+ arXiv:2601.14207v1 Announce Type: new
+Abstract: We study zero-shot 3D alignment of two given meshes, using a text prompt describing their spatial relation -- an essential capability for content creation and scene assembly. Earlier approaches primarily rely on geometric alignment procedures, while recent work leverages pretrained 2D diffusion models to model language-conditioned object-object spatial relationships. In contrast, we directly optimize the relative pose at test time, updating translation, rotation, and isotropic scale with CLIP-driven gradients via a differentiable renderer, without training a new model. Our framework augments language supervision with geometry-aware objectives: a variant of soft-Iterative Closest Point (ICP) term to encourage surface attachment and a penetration loss to discourage interpenetration. A phased schedule strengthens contact constraints over time, and camera control concentrates the optimization on the interaction region. To enable evaluation, we curate a benchmark containing diverse categories and relations, and compare against baselines. Our method outperforms all alternatives, yielding semantically faithful and physically plausible alignments.
+ oai:arXiv.org:2601.14207v1
+ cs.GR
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Rotem Gatenyo, Ohad Fried
+
+
+ Rig-Aware 3D Reconstruction of Vehicle Undercarriages using Gaussian Splatting
+ https://arxiv.org/abs/2601.14208
+ arXiv:2601.14208v1 Announce Type: new
+Abstract: Inspecting the undercarriage of used vehicles is a labor-intensive task that requires inspectors to crouch or crawl underneath each vehicle to thoroughly examine it. Additionally, online buyers rarely see undercarriage photos. We present an end-to-end pipeline that utilizes a three-camera rig to capture videos of the undercarriage as the vehicle drives over it, and produces an interactive 3D model of the undercarriage. The 3D model enables inspectors and customers to rotate, zoom, and slice through the undercarriage, allowing them to detect rust, leaks, or impact damage in seconds, thereby improving both workplace safety and buyer confidence. Our primary contribution is a rig-aware Structure-from-Motion (SfM) pipeline that overcomes the challenges of wide-angle lens distortion and low-parallax scenes by integrating precise camera calibration, synchronized video streams, and strong geometric priors from the camera rig. We use a constrained matching strategy with learned components, the DISK feature extractor, and the attention-based LightGlue matcher to generate high-quality sparse point clouds that are often unattainable with standard SfM pipelines. These point clouds seed the Gaussian splatting process to generate photorealistic undercarriage models that render in real-time. Our experiments and ablation studies demonstrate that our design choices are essential to achieve state-of-the-art quality.
+ oai:arXiv.org:2601.14208v1
+ cs.CV
+ cs.GR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nitin Kulkarni, Akhil Devarashetti, Charlie Cluss, Livio Forte, Dan Buckmaster, Philip Schneider, Chunming Qiao, Alina Vereshchaka
+
+
+ InT: Self-Proposed Interventions Enable Credit Assignment in LLM Reasoning
+ https://arxiv.org/abs/2601.14209
+ arXiv:2601.14209v1 Announce Type: new
+Abstract: Outcome-reward reinforcement learning (RL) has proven effective at improving the reasoning capabilities of large language models (LLMs). However, standard RL assigns credit only at the level of the final answer, penalizing entire reasoning traces when the outcome is incorrect and uniformly reinforcing all steps when it is correct. As a result, correct intermediate steps may be discouraged in failed traces, while spurious steps may be reinforced in successful ones. We refer to this failure mode as the problem of credit assignment. While a natural remedy is to train a process reward model, accurately optimizing such models to identify corrective reasoning steps remains challenging. We introduce Intervention Training (InT), a training paradigm in which the model performs fine-grained credit assignment on its own reasoning traces by proposing short, targeted corrections that steer trajectories toward higher reward. Using reference solutions commonly available in mathematical reasoning datasets and exploiting the fact that verifying a model-generated solution is easier than generating a correct one from scratch, the model identifies the first error in its reasoning and proposes a single-step intervention to redirect the trajectory toward the correct solution. We then apply supervised fine-tuning (SFT) to the on-policy rollout up to the point of error concatenated with the intervention, localizing error to the specific step that caused failure. We show that the resulting model serves as a far better initialization for RL training. After running InT and subsequent fine-tuning with RL, we improve accuracy by nearly 14% over a 4B-parameter base model on IMO-AnswerBench, outperforming larger open-source models such as gpt-oss-20b.
+ oai:arXiv.org:2601.14209v1
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Matthew Y. R. Yang, Hao Bai, Ian Wu, Gene Yang, Amrith Setlur, Aviral Kumar
+
+
+ HALT: Hallucination Assessment via Latent Testing
+ https://arxiv.org/abs/2601.14210
+ arXiv:2601.14210v1 Announce Type: new
+Abstract: Hallucination in large language models (LLMs) can be understood as a failure of faithful readout: although internal representations may encode uncertainty about a query, decoding pressures still yield a fluent answer. We propose lightweight residual probes that read hallucination risk directly from intermediate hidden states of question tokens, motivated by the hypothesis that these layers retain epistemic signals that are attenuated in the final decoding stage. The probe is a small auxiliary network whose computation is orders of magnitude cheaper than token generation and can be evaluated fully in parallel with inference, enabling near-instantaneous hallucination risk estimation with effectively zero added latency in low-risk cases. We deploy the probe as an agentic critic for fast selective generation and routing, allowing LLMs to immediately answer confident queries while delegating uncertain ones to stronger verification pipelines. Across four QA benchmarks and multiple LLM families, the method achieves strong AUROC and AURAC, generalizes under dataset shift, and reveals interpretable structure in intermediate representations, positioning fast internal uncertainty readout as a principled foundation for reliable agentic AI.
+ oai:arXiv.org:2601.14210v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rohan Bhatnagar, Youran Sun, Chi Andrew Zhang, Yixin Wen, Haizhao Yang
+
+
+ Unification of Deterministic Higher-Order Patterns
+ https://arxiv.org/abs/2601.14211
+ arXiv:2601.14211v1 Announce Type: new
+Abstract: We present a sound and complete unification procedure for deterministic higher-order patterns, a class of simply-typed lambda terms introduced by Yokoyama et al. which comes with a deterministic matching problem. Our unification procedure can be seen as a special case of full higher-order unification where flex-flex pairs can be solved in a most general way. Moreover, our method generalizes Libal and Miller's recent functions-as-constructors higher-order unification by dropping their global condition on variable arguments, thereby losing the property that every solvable problem has a most general unifier. In fact, minimal complete sets of unifiers of deterministic higher-order patterns may be infinite, so decidability of the unification problem remains an open question.
+ oai:arXiv.org:2601.14211v1
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Johannes Niederhauser, Aart Middeldorp
+
+
+ Generalization and Completeness of Stochastic Local Search Algorithms
+ https://arxiv.org/abs/2601.14212
+ arXiv:2601.14212v1 Announce Type: new
+Abstract: We generalize Stochastic Local Search (SLS) heuristics into a single formal model. This model has two key components: a common structure designed to be as large as possible and a parametric structure intended to be as small as possible. Each heuristic is obtained by instantiating the parametric part in a different way. Particular instances for Genetic Algorithms (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO) are presented. Then, we use our model to prove the Turing-completeness of SLS algorithms in general. The proof uses our framework to construct a GA able to simulate any Turing machine. This Turing-completeness implies that determining any non-trivial property concerning the relationship between the inputs and the computed outputs is undecidable for GA and, by extension, for the general set of SLS methods (although not necessarily for each particular method). Similar proofs are more informally presented for PSO and ACO.
+ oai:arXiv.org:2601.14212v1
+ cs.NE
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1016/j.swevo.2021.100982
+ Daniel Loscos, Narciso Marti-Oliet, Ismael Rodriguez
+
+
+ Beyond Polarization: Opinion Mixing and Social Influence in Deliberation
+ https://arxiv.org/abs/2601.14221
+ arXiv:2601.14221v1 Announce Type: new
+Abstract: Deliberative processes are often discussed as increasing or decreasing polarization. This approach misses a different, and arguably more diagnostic, dimension of opinion change: whether deliberation reshuffles who agrees with whom, or simply moves everyone in parallel while preserving the pre-deliberation rank ordering. We introduce opinion mixing, measured by Kendall's rank correlation ($\tau$) between pre- and post-deliberation responses, as a complement to variance-based polarization metrics. Across two large online deliberative polls spanning 32 countries (MCF-2022: $n=6{,}342$; MCF-2023: $n=1{,}529$), deliberation increases opinion mixing relative to survey-only controls: treatment groups exhibit lower rank correlation on 97% and 93% of opinion questions, respectively. Polarization measures based on variance tell a more heterogeneous story: controls consistently converge, while treated groups sometimes converge and sometimes diverge depending on the issue.
+ To probe mechanisms, we link transcripts and surveys in a third event (SOF: $n=617$, 116 groups) and use LLM-assisted coding of 6,232 discussion statements. Expressed support in discussion statements strongly predicts subsequent group-level opinion shifts; this correlation is amplified by justification quality in the statements but not by argument novelty. To our knowledge, we are the first to observe how different notions of argument quality have different associations with the outcome of deliberation. This suggests that opinion change after deliberation is related to selective uptake of well-reasoned arguments, producing complex patterns of opinion reorganization that standard polarization metrics may miss.
+ oai:arXiv.org:2601.14221v1
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Mohak Goyal, Lodewijk Gelauff, Naman Gupta, Ashish Goel, Kamesh Munagala
+
+
+ Rerank Before You Reason: Analyzing Reranking Tradeoffs through Effective Token Cost in Deep Search Agents
+ https://arxiv.org/abs/2601.14224
+ arXiv:2601.14224v1 Announce Type: new
+Abstract: Deep research agents rely on iterative retrieval and reasoning to answer complex queries, but scaling test-time computation raises significant efficiency concerns. We study how to allocate reasoning budget in deep search pipelines, focusing on the role of listwise reranking. Using the BrowseComp-Plus benchmark, we analyze tradeoffs between model scale, reasoning effort, reranking depth, and total token cost via a novel effective token cost (ETC) metric. Our results show that reranking consistently improves retrieval and end-to-end accuracy, and that moderate reranking often yields larger gains than increasing search-time reasoning, achieving comparable accuracy at substantially lower cost. All our code is available at https://github.com/texttron/BrowseComp-Plus.git
+ oai:arXiv.org:2601.14224v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sahel Sharifymoghaddam, Jimmy Lin
+
+
+ Transformer Architectures for Respiratory Sound Analysis and Multimodal Diagnosis
+ https://arxiv.org/abs/2601.14227
+ arXiv:2601.14227v1 Announce Type: new
+Abstract: Respiratory sound analysis is a crucial tool for screening asthma and other pulmonary pathologies, yet traditional auscultation remains subjective and experience-dependent. Our prior research established a CNN baseline using DenseNet201, which demonstrated high sensitivity in classifying respiratory sounds. In this work, we (i) adapt the Audio Spectrogram Transformer (AST) for respiratory sound analysis and (ii) evaluate a multimodal Vision-Language Model (VLM) that integrates spectrograms with structured patient metadata.
+ AST is initialized from publicly available weights and fine-tuned on a medical dataset containing hundreds of recordings per diagnosis. The VLM experiment uses a compact Moondream-type model that processes spectrogram images alongside a structured text prompt (sex, age, recording site) to output a JSON-formatted diagnosis. Results indicate that AST achieves approximately 97% accuracy with an F1-score around 97% and ROC AUC of 0.98 for asthma detection, significantly outperforming both the internal CNN baseline and typical external benchmarks. The VLM reaches 86-87% accuracy, performing comparably to the CNN baseline while demonstrating the capability to integrate clinical context into the inference process. These results confirm the effectiveness of self-attention for acoustic screening and highlight the potential of multimodal architectures for holistic diagnostic tools.
+ oai:arXiv.org:2601.14227v1
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Theodore Aptekarev, Vladimir Sokolovsky, Gregory Furman
+
+
+ Attention-Based Offline Reinforcement Learning and Clustering for Interpretable Sepsis Treatment
+ https://arxiv.org/abs/2601.14228
+ arXiv:2601.14228v1 Announce Type: new
+Abstract: Sepsis remains one of the leading causes of mortality in intensive care units, where timely and accurate treatment decisions can significantly impact patient outcomes. In this work, we propose an interpretable decision support framework. Our system integrates four core components: (1) a clustering-based stratification module that categorizes patients into low, intermediate, and high-risk groups upon ICU admission, using clustering with statistical validation; (2) a synthetic data augmentation pipeline leveraging variational autoencoders (VAE) and diffusion models to enrich underrepresented trajectories such as fluid or vasopressor administration; (3) an offline reinforcement learning (RL) agent trained using Advantage Weighted Regression (AWR) with a lightweight attention encoder and supported by an ensemble of models for conservative, safety-aware treatment recommendations; and (4) a rationale generation module powered by a multi-modal large language model (LLM), which produces natural-language justifications grounded in clinical context and retrieved expert knowledge. Evaluated on the MIMIC-III and eICU datasets, our approach achieves high treatment accuracy while providing clinicians with interpretable and robust policy recommendations.
+ oai:arXiv.org:2601.14228v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Punit Kumar, Vaibhav Saran, Divyesh Patel, Nitin Kulkarni, Alina Vereshchaka
+
+
+ MASCOT: Towards Multi-Agent Socio-Collaborative Companion Systems
+ https://arxiv.org/abs/2601.14230
+ arXiv:2601.14230v1 Announce Type: new
+Abstract: Multi-agent systems (MAS) have recently emerged as promising socio-collaborative companions for emotional and cognitive support. However, these systems frequently suffer from persona collapse--where agents revert to generic, homogenized assistant behaviors--and social sycophancy, which produces redundant, non-constructive dialogue. We propose MASCOT, a generalizable framework for multi-perspective socio-collaborative companions. MASCOT introduces a novel bi-level optimization strategy to harmonize individual and collective behaviors: 1) Persona-Aware Behavioral Alignment, an RLAIF-driven pipeline that finetunes individual agents for strict persona fidelity to prevent identity loss; and 2) Collaborative Dialogue Optimization, a meta-policy guided by group-level rewards to ensure diverse and productive discourse. Extensive evaluations across psychological support and workplace domains demonstrate that MASCOT significantly outperforms state-of-the-art baselines, achieving improvements of up to +14.1 in Persona Consistency and +10.6 in Social Contribution. Our framework provides a practical roadmap for engineering the next generation of socially intelligent multi-agent systems.
+ oai:arXiv.org:2601.14230v1
+ cs.CL
+ cs.AI
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Yiyang Wang, Yiqiao Jin, Alex Cabral, Josiah Hester
+
+
+ KAGE-Bench: Fast Known-Axis Visual Generalization Evaluation for Reinforcement Learning
+ https://arxiv.org/abs/2601.14232
+ arXiv:2601.14232v1 Announce Type: new
+Abstract: Pixel-based reinforcement learning agents often fail under purely visual distribution shift even when latent dynamics and rewards are unchanged, but existing benchmarks entangle multiple sources of shift and hinder systematic analysis. We introduce KAGE-Env, a JAX-native 2D platformer that factorizes the observation process into independently controllable visual axes while keeping the underlying control problem fixed. By construction, varying a visual axis affects performance only through the induced state-conditional action distribution of a pixel policy, providing a clean abstraction for visual generalization. Building on this environment, we define KAGE-Bench, a benchmark of six known-axis suites comprising 34 train-evaluation configuration pairs that isolate individual visual shifts. Using a standard PPO-CNN baseline, we observe strong axis-dependent failures, with background and photometric shifts often collapsing success, while agent-appearance shifts are comparatively benign. Several shifts preserve forward motion while breaking task completion, showing that return alone can obscure generalization failures. Finally, the fully vectorized JAX implementation enables up to 33M environment steps per second on a single GPU, enabling fast and reproducible sweeps over visual factors. Code: https://avanturist322.github.io/KAGEBench/.
+ oai:arXiv.org:2601.14232v1
+ cs.LG
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Egor Cherepanov, Daniil Zelezetsky, Alexey K. Kovalev, Aleksandr I. Panov
+
+
+ Q-learning with Adjoint Matching
+ https://arxiv.org/abs/2601.14234
+ arXiv:2601.14234v1 Announce Type: new
+Abstract: We propose Q-learning with Adjoint Matching (QAM), a novel TD-based reinforcement learning (RL) algorithm that tackles a long-standing challenge in continuous-action RL: efficient optimization of an expressive diffusion or flow-matching policy with respect to a parameterized Q-function. Effective optimization requires exploiting the first-order information of the critic, but it is challenging to do so for flow or diffusion policies because direct gradient-based optimization via backpropagation through their multi-step denoising process is numerically unstable. Existing methods work around this either by only using the value and discarding the gradient information, or by relying on approximations that sacrifice policy expressivity or bias the learned policy. QAM sidesteps both of these challenges by leveraging adjoint matching, a recently proposed technique in generative modeling, which transforms the critic's action gradient to form a step-wise objective function that is free from unstable backpropagation, while providing an unbiased, expressive policy at the optimum. Combined with temporal-difference backup for critic learning, QAM consistently outperforms prior approaches on hard, sparse reward tasks in both offline and offline-to-online RL.
+ oai:arXiv.org:2601.14234v1
+ cs.LG
+ cs.AI
+ cs.RO
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiyang Li, Sergey Levine
+
+
+ Stabilizer-Assisted Inactivation Decoding of Quantum Error-Correcting Codes with Erasures
+ https://arxiv.org/abs/2601.14236
+ arXiv:2601.14236v1 Announce Type: new
+Abstract: In this work, we develop a reduced complexity maximum likelihood (ML) decoder for quantum low-density parity-check (QLDPC) codes over erasures. Our decoder combines classical inactivation decoding, which integrates peeling with symbolic guessing, with a new dual peeling procedure. In the dual peeling stage, we perform row operations on the stabilizer matrix to efficiently reveal stabilizer generators and their linear combinations whose support lies entirely on the erased set. Each such stabilizer identified allows us to freely fix a bit in its support without affecting the logical state of the decoded result. This removes one degree of freedom that would otherwise require a symbolic guess, reducing the number of inactivated variables and decreasing the size of the final linear system that must be solved. We further show that dual peeling combined with standard peeling alone, without inactivation, is sufficient to achieve ML for erasure decoding of surface codes. Simulations across several QLDPC code families confirm that our decoder matches ML logical failure performance while significantly reducing the complexity of inactivation decoding, including more than a 20% reduction in symbolic guesses for the B1 lifted product code at high erasure rates.
+ oai:arXiv.org:2601.14236v1
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Giulio Pech, Mert G\"okduman, Hanwen Yao, Henry D. Pfister
+
+
+ Spatiotemporal Wildfire Prediction and Reinforcement Learning for Helitack Suppression
+ https://arxiv.org/abs/2601.14238
+ arXiv:2601.14238v1 Announce Type: new
+Abstract: Wildfires are growing in frequency and intensity, devastating ecosystems and communities while causing billions of dollars in suppression costs and economic damage annually in the U.S. Traditional wildfire management is mostly reactive, addressing fires only after they are detected. We introduce \textit{FireCastRL}, a proactive artificial intelligence (AI) framework that combines wildfire forecasting with intelligent suppression strategies. Our framework first uses a deep spatiotemporal model to predict wildfire ignition. For high-risk predictions, we deploy a pre-trained reinforcement learning (RL) agent to execute real-time suppression tactics with helitack units inside a physics-informed 3D simulation. The framework generates a threat assessment report to help emergency responders optimize resource allocation and planning. In addition, we are publicly releasing a large-scale, spatiotemporal dataset containing $\mathbf{9.5}$ million samples of environmental variables for wildfire prediction. Our work demonstrates how deep learning and RL can be combined to support both forecasting and tactical wildfire response. More details can be found at https://sites.google.com/view/firecastrl.
+ oai:arXiv.org:2601.14238v1
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shaurya Mathur, Shreyas Bellary Manjunath, Nitin Kulkarni, Alina Vereshchaka
+
+
+ APEX-Agents
+ https://arxiv.org/abs/2601.14242
+ arXiv:2601.14242v1 Announce Type: new
+Abstract: We introduce the AI Productivity Index for Agents (APEX-Agents), a benchmark for assessing whether AI agents can execute long-horizon, cross-application tasks created by investment banking analysts, management consultants, and corporate lawyers. APEX-Agents requires agents to navigate realistic work environments with files and tools. We test eight agents for the leaderboard using Pass@1. Gemini 3 Flash (Thinking=High) achieves the highest score of 24.0%, followed by GPT-5.2 (Thinking=High), Claude Opus 4.5 (Thinking=High), and Gemini 3 Pro (Thinking=High). We open-source the APEX-Agents benchmark (n=480) with all prompts, rubrics, gold outputs, files, and metadata. We also open-source Archipelago, our infrastructure for agent execution and evaluation.
+ oai:arXiv.org:2601.14242v1
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Bertie Vidgen, Austin Mann, Abby Fennelly, John Wright Stanly, Lucas Rothman, Marco Burstein, Julien Benchek, David Ostrofsky, Anirudh Ravichandran, Debnil Sur, Neel Venugopal, Alannah Hsia, Isaac Robinson, Calix Huang, Olivia Varones, Daniyal Khan, Michael Haines, Zach Richards, Chirag Mahapatra, Brendan Foody, Osvald Nitski
+
+
+ Jet-RL: Enabling On-Policy FP8 Reinforcement Learning with Unified Training and Rollout Precision Flow
+ https://arxiv.org/abs/2601.14243
+ arXiv:2601.14243v1 Announce Type: new
+Abstract: Reinforcement learning (RL) is essential for enhancing the complex reasoning capabilities of large language models (LLMs). However, existing RL training pipelines are computationally inefficient and resource-intensive, with the rollout phase accounting for over 70% of total training time. Quantized RL training, particularly using FP8 precision, offers a promising approach to mitigating this bottleneck. A commonly adopted strategy applies FP8 precision during rollout while retaining BF16 precision for training. In this work, we present the first comprehensive study of FP8 RL training and demonstrate that the widely used BF16-training + FP8-rollout strategy suffers from severe training instability and catastrophic accuracy collapse under long-horizon rollouts and challenging tasks. Our analysis shows that these failures stem from the off-policy nature of the approach, which introduces substantial numerical mismatch between training and inference. Motivated by these observations, we propose Jet-RL, an FP8 RL training framework that enables robust and stable RL optimization. The key idea is to adopt a unified FP8 precision flow for both training and rollout, thereby minimizing numerical discrepancies and eliminating the need for inefficient inter-step calibration. Extensive experiments validate the effectiveness of Jet-RL: our method achieves up to 33% speedup in the rollout phase, up to 41% speedup in the training phase, and a 16% end-to-end speedup over BF16 training, while maintaining stable convergence across all settings and incurring negligible accuracy degradation.
+ oai:arXiv.org:2601.14243v1
+ cs.LG
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Haocheng Xi, Charlie Ruan, Peiyuan Liao, Yujun Lin, Han Cai, Yilong Zhao, Shuo Yang, Kurt Keutzer, Song Han, Ligeng Zhu
+
+
+ XR: Cross-Modal Agents for Composed Image Retrieval
+ https://arxiv.org/abs/2601.14245
+ arXiv:2601.14245v1 Announce Type: new
+Abstract: Retrieval is being redefined by agentic AI, demanding multimodal reasoning beyond conventional similarity-based paradigms. Composed Image Retrieval (CIR) exemplifies this shift as each query combines a reference image with textual modifications, requiring compositional understanding across modalities. While embedding-based CIR methods have achieved progress, they remain narrow in perspective, capturing limited cross-modal cues and lacking semantic reasoning. To address these limitations, we introduce XR, a training-free multi-agent framework that reframes retrieval as a progressively coordinated reasoning process. It orchestrates three specialized types of agents: imagination agents synthesize target representations through cross-modal generation, similarity agents perform coarse filtering via hybrid matching, and question agents verify factual consistency through targeted reasoning for fine filtering. Through progressive multi-agent coordination, XR iteratively refines retrieval to meet both semantic and visual query constraints, achieving up to a 38% gain over strong training-free and training-based baselines on FashionIQ, CIRR, and CIRCO, while ablations show each agent is essential. Code is available: https://01yzzyu.github.io/xr.github.io/.
+ oai:arXiv.org:2601.14245v1
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3774904.3792276
+ Zhongyu Yang, Wei Pang, Yingfang Yuan
+
+
+ Soft Tail-dropping for Adaptive Visual Tokenization
+ https://arxiv.org/abs/2601.14246
+ arXiv:2601.14246v1 Announce Type: new
+Abstract: We present Soft Tail-dropping Adaptive Tokenizer (STAT), a 1D discrete visual tokenizer that adaptively chooses the number of output tokens per image according to its structural complexity and level of detail. STAT encodes an image into a sequence of discrete codes together with per-token keep probabilities. Beyond standard autoencoder objectives, we regularize these keep probabilities to be monotonically decreasing along the sequence and explicitly align their distribution with an image-level complexity measure. As a result, STAT produces length-adaptive 1D visual tokens that are naturally compatible with causal 1D autoregressive (AR) visual generative models. On ImageNet-1k, equipping vanilla causal AR models with STAT yields competitive or superior visual generation quality compared to other probabilistic model families, while also exhibiting favorable scaling behavior that has been elusive in prior vanilla AR visual generation attempts.
+ oai:arXiv.org:2601.14246v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zeyuan Chen, Kai Zhang, Zhuowen Tu, Yuanjun Xiong
+
+
+ Which Reasoning Trajectories Teach Students to Reason Better? A Simple Metric of Informative Alignment
+ https://arxiv.org/abs/2601.14249
+ arXiv:2601.14249v1 Announce Type: new
+Abstract: Long chain-of-thought (CoT) trajectories provide rich supervision signals for distilling reasoning from teacher to student LLMs. However, both prior work and our experiments show that trajectories from stronger teachers do not necessarily yield better students, highlighting the importance of data-student suitability in distillation. Existing methods assess suitability primarily through student likelihood, favoring trajectories that closely align with the model's current behavior but overlooking more informative ones. Addressing this, we propose Rank-Surprisal Ratio (RSR), a simple metric that captures both alignment and informativeness to assess the suitability of a reasoning trajectory. RSR is motivated by the observation that effective trajectories typically combine low absolute probability with relatively high-ranked tokens under the student model, balancing learning signal strength and behavioral alignment. Concretely, RSR is defined as the ratio of a trajectory's average token-wise rank to its average negative log-likelihood, and is straightforward to compute and interpret. Across five student models and reasoning trajectories from 11 diverse teachers, RSR strongly correlates with post-training performance (average Spearman 0.86), outperforming existing metrics. We further demonstrate its practical utility in both trajectory selection and teacher selection.
+ oai:arXiv.org:2601.14249v1
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuming Yang, Mingyoung Lai, Wanxu Zhao, Xiaoran Fan, Zhiheng Xi, Mingqi Wu, Chiyue Huang, Jun Zhao, Haijun Lv, Jian Tong, Yunhua Zhou, Yicheng Zou, Qipeng Guo, Tao Gui, Qi Zhang, Xuanjing Huang
+
+
+ OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer
+ https://arxiv.org/abs/2601.14250
+ arXiv:2601.14250v1 Announce Type: new
+Abstract: Videos convey richer information than images or text, capturing both spatial and temporal dynamics. However, most existing video customization methods rely on reference images or task-specific temporal priors, failing to fully exploit the rich spatio-temporal information inherent in videos, thereby limiting flexibility and generalization in video generation. To address these limitations, we propose OmniTransfer, a unified framework for spatio-temporal video transfer. It leverages multi-view information across frames to enhance appearance consistency and exploits temporal cues to enable fine-grained temporal control. To unify various video transfer tasks, OmniTransfer incorporates three key designs: Task-aware Positional Bias that adaptively leverages reference video information to improve temporal alignment or appearance consistency; Reference-decoupled Causal Learning separating reference and target branches to enable precise reference transfer while improving efficiency; and Task-adaptive Multimodal Alignment using multimodal semantic guidance to dynamically distinguish and tackle different tasks. Extensive experiments show that OmniTransfer outperforms existing methods in appearance (ID and style) and temporal transfer (camera movement and video effects), while matching pose-guided methods in motion transfer without using pose, establishing a new paradigm for flexible, high-fidelity video generation.
+ oai:arXiv.org:2601.14250v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pengze Zhang, Yanze Wu, Mengtian Li, Xu Bai, Songtao Zhao, Fulong Ye, Chong Mou, Xinghui Li, Zhuowei Chen, Qian He, Mingyuan Gao
+
+
+ LightOnOCR: A 1B End-to-End Multilingual Vision-Language Model for State-of-the-Art OCR
+ https://arxiv.org/abs/2601.14251
+ arXiv:2601.14251v1 Announce Type: new
+Abstract: We present \textbf{LightOnOCR-2-1B}, a 1B-parameter end-to-end multilingual vision--language model that converts document images (e.g., PDFs) into clean, naturally ordered text without brittle OCR pipelines. Trained on a large-scale, high-quality distillation mix with strong coverage of scans, French documents, and scientific PDFs, LightOnOCR-2 achieves state-of-the-art results on OlmOCR-Bench while being 9$\times$ smaller and substantially faster than prior best-performing models. We further extend the output format to predict normalized bounding boxes for embedded images, introducing localization during pretraining via a resume strategy and refining it with RLVR using IoU-based rewards. Finally, we improve robustness with checkpoint averaging and task-arithmetic merging. We release model checkpoints under Apache 2.0, and publicly release the dataset and \textbf{LightOnOCR-bbox-bench} evaluation under their respective licenses.
+ oai:arXiv.org:2601.14251v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Said Taghadouini, Adrien Cavaill\`es, Baptiste Aubertin
+
+
+ Identification capacity and rate-query tradeoffs in classification systems
+ https://arxiv.org/abs/2601.14252
+ arXiv:2601.14252v1 Announce Type: new
+Abstract: We study a one-shot identification analogue of rate-distortion for discrete classification under three resources: tag rate L (bits of side information stored per entity), identification cost W (attribute-membership queries per identification, excluding global preprocessing and amortized caching), and distortion D (misclassification probability). The question is to characterize achievable triples (L,W,D) when a decoder must recover an entity's class from limited observations. Zero-error barrier. If two distinct classes induce the same attribute profile, then the observation pi(V) is identical for both and no decoder can identify the class from attribute queries alone. Thus, if the profile map pi is not injective on classes, zero-error identification without tags is impossible (a zero-error feasibility threshold). Achievability and converse at D=0. With k classes, nominal tags of L = ceil(log2 k) bits enable O(1) identification cost with D=0. Conversely, any scheme with D=0 must satisfy L >= log2 k bits (tight). Without tags (L=0), identification requires Omega(n) queries in the worst case and may incur D>0. Combinatorial structure. Minimal sufficient query families form the bases of a matroid; the induced distinguishing dimension is well-defined and links to zero-error source coding via graph entropy. We illustrate implications for type systems, databases, and biological taxonomy. All results are mechanized in Lean4 (6000+ lines, 0 sorry).
+ oai:arXiv.org:2601.14252v1
+ cs.IT
+ cs.PL
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Tristan Simas
+
+
+ Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis
+ https://arxiv.org/abs/2601.14253
+ arXiv:2601.14253v1 Announce Type: new
+Abstract: We present Motion 3-to-4, a feed-forward framework for synthesising high-quality 4D dynamic objects from a single monocular video and an optional 3D reference mesh. While recent advances have significantly improved 2D, video, and 3D content generation, 4D synthesis remains difficult due to limited training data and the inherent ambiguity of recovering geometry and motion from a monocular viewpoint. Motion 3-to-4 addresses these challenges by decomposing 4D synthesis into static 3D shape generation and motion reconstruction. Using a canonical reference mesh, our model learns a compact motion latent representation and predicts per-frame vertex trajectories to recover complete, temporally coherent geometry. A scalable frame-wise transformer further enables robustness to varying sequence lengths. Evaluations on both standard benchmarks and a new dataset with accurate ground-truth geometry show that Motion 3-to-4 delivers superior fidelity and spatial consistency compared to prior work. Project page is available at https://motion3-to-4.github.io/.
+ oai:arXiv.org:2601.14253v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hongyuan Chen, Xingyu Chen, Youjia Zhang, Zexiang Xu, Anpei Chen
+
+
+ VideoMaMa: Mask-Guided Video Matting via Generative Prior
+ https://arxiv.org/abs/2601.14255
+ arXiv:2601.14255v1 Announce Type: new
+Abstract: Generalizing video matting models to real-world videos remains a significant challenge due to the scarcity of labeled data. To address this, we present Video Mask-to-Matte Model (VideoMaMa) that converts coarse segmentation masks into pixel-accurate alpha mattes, by leveraging pretrained video diffusion models. VideoMaMa demonstrates strong zero-shot generalization to real-world footage, even though it is trained solely on synthetic data. Building on this capability, we develop a scalable pseudo-labeling pipeline for large-scale video matting and construct the Matting Anything in Video (MA-V) dataset, which offers high-quality matting annotations for more than 50K real-world videos spanning diverse scenes and motions. To validate the effectiveness of this dataset, we fine-tune the SAM2 model on MA-V to obtain SAM2-Matte, which outperforms the same model trained on existing matting datasets in terms of robustness on in-the-wild videos. These findings emphasize the importance of large-scale pseudo-labeled video matting and showcase how generative priors and accessible segmentation cues can drive scalable progress in video matting research.
+ oai:arXiv.org:2601.14255v1
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sangbeom Lim, Seoung Wug Oh, Jiahui Huang, Heeji Yoon, Seungryong Kim, Joon-Young Lee
+
+
+ Implicit Neural Representation Facilitates Unified Universal Vision Encoding
+ https://arxiv.org/abs/2601.14256
+ arXiv:2601.14256v1 Announce Type: new
+Abstract: Models for image representation learning are typically designed for either recognition or generation. Various forms of contrastive learning help models learn to convert images to embeddings that are useful for classification, detection, and segmentation. On the other hand, models can be trained to reconstruct images with pixel-wise, perceptual, and adversarial losses in order to learn a latent space that is useful for image generation. We seek to unify these two directions with a first-of-its-kind model that learns representations which are simultaneously useful for recognition and generation. We train our model as a hyper-network for implicit neural representation, which learns to map images to model weights for fast, accurate reconstruction. We further integrate our INR hyper-network with knowledge distillation to improve its generalization and performance. Beyond the novel training design, the model also learns an unprecedented compressed embedding space with outstanding performance for various visual tasks. The complete model competes with state-of-the-art results for image representation learning, while also enabling generative capabilities with its high-quality tiny embeddings. The code is available at https://github.com/tiktok/huvr.
+ oai:arXiv.org:2601.14256v1
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Matthew Gwilliam, Xiao Wang, Xuefeng Hu, Zhenheng Yang
+
+
+ Lightweight Prompt Biasing for Contextualized End-to-End ASR Systems
+ https://arxiv.org/abs/2506.06252
+ arXiv:2506.06252v2 Announce Type: cross
+Abstract: End-to-End Automatic Speech Recognition (ASR) has advanced significantly yet still struggles with rare and domain-specific entities. This paper introduces a simple yet efficient prompt-based biasing technique for contextualized ASR, enhancing recognition accuracy by leveraging a unified multitask learning framework. The approach comprises two key components: a prompt biasing model, which is trained to determine when to focus on entities in the prompt, and an entity filtering mechanism, which efficiently filters out irrelevant entities. Our method significantly enhances ASR accuracy on entities, achieving relative 30.7% and 18.0% reductions in Entity Word Error Rate compared to the baseline model with shallow fusion on an in-house domain dataset with small and large entity lists, respectively. The primary advantage of this method lies in its simplicity: it requires no structural change, making it lightweight and highly efficient.
+ oai:arXiv.org:2506.06252v2
+ eess.AS
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Bo Ren, Yu Shi, Jinyu Li
+
+
+ Advancing Minority Stress Detection with Transformers: Insights from the Social Media Datasets
+ https://arxiv.org/abs/2509.02908
+ arXiv:2509.02908v1 Announce Type: cross
+Abstract: Individuals from sexual and gender minority groups experience disproportionately high rates of poor health outcomes and mental disorders compared to their heterosexual and cisgender counterparts, largely as a consequence of minority stress as described by Meyer's (2003) model. This study presents the first comprehensive evaluation of transformer-based architectures for detecting minority stress in online discourse. We benchmark multiple transformer models including ELECTRA, BERT, RoBERTa, and BART against traditional machine learning baselines and graph-augmented variants. We further evaluate zero-shot and few-shot learning paradigms to assess their applicability to underrepresented datasets. Experiments are conducted on the two largest publicly available Reddit corpora for minority stress detection, comprising 12,645 and 5,789 posts, and are repeated over five random seeds to ensure robustness. Our results demonstrate that integrating graph structure consistently improves detection performance across transformer-only models and that supervised fine-tuning with relational context outperforms zero- and few-shot approaches. Theoretical analysis reveals that modeling social connectivity and conversational context via graph augmentation sharpens the models' ability to identify key linguistic markers such as identity concealment, internalized stigma, and calls for support, suggesting that graph-enhanced transformers offer the most reliable foundation for digital health interventions and public health policy.
+ oai:arXiv.org:2509.02908v1
+ cs.CL
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1007/s13278-025-01521-z
+ Santosh Chapagain, Cory J Cascalheira, Shah Muhammad Hamdi, Soukaina Filali Boubrahimi, Jillian R. Scheer
+
+
+ Multi-Scale Negative Coupled Information Systems (MNCIS): A Unified Spectral Topology Framework for Stability in Turbulence, AI, and Biology
+ https://arxiv.org/abs/2601.11594
+ arXiv:2601.11594v1 Announce Type: cross
+Abstract: Complex dynamical systems frequently encounter a recurrent structural instability: the collapse of the spectral gap, driving the system toward a low-dimensional "Zero-Mode Attractor" (e.g., spectral pile-up or over-smoothing). Building upon recent global well-posedness estimates [Hou, arXiv:2601.00638], this work generalizes the Multi-Scale Negative Coupled Information System (MNCIS) framework. We postulate that global stability requires an active topological operator -- Adaptive Spectral Negative Coupling (ASNC) -- functioning as a state-dependent high-pass filter that penalizes entropy accumulation at spectral boundaries. We validate this unified framework via three implementations: (1) Hydrodynamics: In 3D Navier-Stokes turbulence ($N=256^3$), ASNC acts as a global-enstrophy adaptive sub-grid scale (SGS) model, stabilizing the inviscid limit and preserving the Kolmogorov $-5/3$ inertial range without artificial hyper-viscosity. (2) Artificial Intelligence: Addressing over-smoothing in Graph Neural Networks (GNNs), we implement ASNC as a parameter-free topological constraint. Unlike baselines (e.g., DeepGCNs) relying on dense residual connections to bypass signal decay, our framework enables the training of ultra-deep 64-layer networks without residual connections, maintaining perfectly stationary feature variance ($\sigma^2 \equiv 1.0$) on the ogbn-arxiv benchmark. (3) Biological Physics: In reaction-diffusion morphogenesis, it stabilizes Turing patterns against diffusive washout in high-entropy regimes. Our results suggest that the MNCIS framework provides a base-independent topological condition for distinguishing viable complex systems from those collapsing into thermal equilibrium, bridging physical stability and information persistence.
+ oai:arXiv.org:2601.11594v1
+ physics.comp-ph
+ cs.LG
+ nlin.AO
+ physics.bio-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pengyue Hou
+
+
+ Level set-based topology optimization of micropolar solids under thermo-mechanical loading
+ https://arxiv.org/abs/2601.11607
+ arXiv:2601.11607v1 Announce Type: cross
+Abstract: We propose a novel level set-based topology optimization for micropolar solids subjected to thermo-mechanical loading. To capture the size effects, we have incorporated the microstructural length-scale information into the level set-based topology optimization method by adopting a micropolar theory. The proposed non-local topology optimization method can provide accurate topology optimization for size-dependent solids under thermo-mechanical loading. We have demonstrated the effectiveness of the proposed method through a few representative two-dimensional benchmark problems. The numerical results reveal the substantial influence of underlying micro-structures, incorporated in the model through micropolar parameters, and temperature on topology optimization, highlighting the necessity of the proposed thermo-mechanical micropolar formulation for materials with pronounced non-local effects. For the numerical implementation of the proposed model, we have used open-source finite element libraries, \texttt{Gridap.jl}, and \texttt{GridapTopOpt.jl}, available in Julia, to ensure transparency and reproducibility of the reported computational results.
+ oai:arXiv.org:2601.11607v1
+ physics.comp-ph
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mayank Shekhar, Ayyappan Unnikrishna Pillai, Subhayan De, Mohammad Masiur Rahaman
+
+
+ Qualitative analysis and numerical investigations of time-fractional Zika virus model arising in population dynamics
+ https://arxiv.org/abs/2601.11636
+ arXiv:2601.11636v1 Announce Type: cross
+Abstract: Epidemic models play a crucial role in population dynamics, offering valuable insights into disease transmission while aiding in epidemic prediction and control. In this paper, we analyze the mathematical model of the time-fractional Zika virus transmission for human and mosquito populations. The fractional derivative is considered in the Caputo sense of order $\alpha\in(0,1).$ We begin by conducting a qualitative analysis using the stability theory of differential equations. The existence and uniqueness of the solution are established, and the model's stability is examined through Hyers-Ulam stability analysis. Furthermore, an efficient difference scheme utilizing the standard L1 technique is developed to simulate the model and analyze the solution's behavior under key parameters. The resulting nonlinear algebraic system is solved using the Newton-Raphson method. Finally, illustrative examples are presented to validate the theoretical findings. Graphical results indicate that the fractional model provides deeper insights and a better understanding of disease dynamics. These findings aid in controlling the virus through contact precautions and recommended therapies while also helping to predict its future spread.
+ oai:arXiv.org:2601.11636v1
+ math.DS
+ cs.NA
+ math.NA
+ q-bio.PE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Gaurav Saini, Bappa Ghosh, Sunita Chand
+
+
+ Large Language Model Agent for User-friendly Chemical Process Simulations
+ https://arxiv.org/abs/2601.11650
+ arXiv:2601.11650v1 Announce Type: cross
+Abstract: Modern process simulators enable detailed process design, simulation, and optimization; however, constructing and interpreting simulations is time-consuming and requires expert knowledge. This limits early exploration by inexperienced users. To address this, a large language model (LLM) agent is integrated with AVEVA Process Simulation (APS) via Model Context Protocol (MCP), allowing natural language interaction with rigorous process simulations. An MCP server toolset enables the LLM to communicate programmatically with APS using Python, allowing it to execute complex simulation tasks from plain-language instructions. Two water-methanol separation case studies assess the framework across different task complexities and interaction modes. The first shows the agent autonomously analyzing flowsheets, finding improvement opportunities, and iteratively optimizing, extracting data, and presenting results clearly. The framework benefits both educational purposes, by translating technical concepts and demonstrating workflows, and experienced practitioners by automating data extraction, speeding routine tasks, and supporting brainstorming. The second case study assesses autonomous flowsheet synthesis through both a step-by-step dialogue and a single prompt, demonstrating its potential for novices and experts alike. The step-by-step mode gives reliable, guided construction suitable for educational contexts; the single-prompt mode constructs fast baseline flowsheets for later refinement. While current limitations such as oversimplification, calculation errors, and technical hiccups mean expert oversight is still needed, the framework's capabilities in analysis, optimization, and guided construction suggest LLM-based agents can become valuable collaborators.
+ oai:arXiv.org:2601.11650v1
+ physics.chem-ph
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Jingkang Liang, Niklas Groll, G\"urkan Sin
+
+
+ AI Agents Need Memory Control Over More Context
+ https://arxiv.org/abs/2601.11653
+ arXiv:2601.11653v1 Announce Type: cross
+Abstract: AI agents are increasingly used in long, multi-turn workflows in both research and enterprise settings. As interactions grow, agent behavior often degrades due to loss of constraint focus, error accumulation, and memory-induced drift. This problem is especially visible in real-world deployments where context evolves, distractions are introduced, and decisions must remain consistent over time. A common practice is to equip agents with persistent memory through transcript replay or retrieval-based mechanisms. While convenient, these approaches introduce unbounded context growth and are vulnerable to noisy recall and memory poisoning, leading to unstable behavior and increased drift. In this work, we introduce the Agent Cognitive Compressor (ACC), a bio-inspired memory controller that replaces transcript replay with a bounded internal state updated online at each turn. ACC separates artifact recall from state commitment, enabling stable conditioning while preventing unverified content from becoming persistent memory. We evaluate ACC using an agent-judge-driven live evaluation framework that measures both task outcomes and memory-driven anomalies across extended interactions. Across scenarios spanning IT operations, cybersecurity response, and healthcare workflows, ACC consistently maintains bounded memory and exhibits more stable multi-turn behavior, with significantly lower hallucination and drift than transcript replay and retrieval-based agents. These results show that cognitive compression provides a practical and effective foundation for reliable memory control in long-horizon AI agents.
+ oai:arXiv.org:2601.11653v1
+ q-bio.NC
+ cs.LG
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Fouad Bousetouane
+
+
+ Pigment Network Detection and Classification in Dermoscopic Images Using Directional Imaging Algorithms and Convolutional Neural Networks
+ https://arxiv.org/abs/2601.11674
+ arXiv:2601.11674v1 Announce Type: cross
+Abstract: Early diagnosis of melanoma, which can save thousands of lives, relies heavily on the analysis of dermoscopic images. One crucial diagnostic criterion is the identification of an unusual pigment network (PN). However, distinguishing between regular (typical) and irregular (atypical) PN is challenging. This study aims to automate the PN detection process using a directional imaging algorithm and classify PN types using machine learning classifiers. The directional imaging algorithm incorporates Principal Component Analysis (PCA), contrast enhancement, filtering, and noise reduction. Applied to the PH2 dataset, this algorithm achieved a 96% success rate, which increased to 100% after pixel intensity adjustments. We created a new dataset containing only PN images from these results. We then employed two classifiers, Convolutional Neural Network (CNN) and Bag of Features (BoF), to categorize PN into atypical and typical classes. Given the limited dataset of 200 images, a simple and effective CNN was designed, featuring two convolutional layers and two batch normalization layers. The proposed CNN achieved 90% accuracy, 90% sensitivity, and 89% specificity. When compared to state-of-the-art methods, our CNN demonstrated superior performance. Our study highlights the potential of the proposed CNN model for effective PN classification, suggesting future research should focus on expanding datasets and incorporating additional dermatological features to further enhance melanoma diagnosis.
+ oai:arXiv.org:2601.11674v1
+ eess.IV
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1016/j.bspc.2024.106883
+ Biomedical Signal Processing and Control (2024), 106883
+ M. A. Rasel, Sameem Abdul Kareem, Unaizah Obaidellah
+
+
+ FourierPET: Deep Fourier-based Unrolled Network for Low-count PET Reconstruction
+ https://arxiv.org/abs/2601.11680
+ arXiv:2601.11680v1 Announce Type: cross
+Abstract: Low-count positron emission tomography (PET) reconstruction is a challenging inverse problem due to severe degradations arising from Poisson noise, photon scarcity, and attenuation correction errors. Existing deep learning methods typically address these in the spatial domain with an undifferentiated optimization objective, making it difficult to disentangle overlapping artifacts and limiting correction effectiveness. In this work, we perform a Fourier-domain analysis and reveal that these degradations are spectrally separable: Poisson noise and photon scarcity cause high-frequency phase perturbations, while attenuation errors suppress low-frequency amplitude components. Leveraging this insight, we propose FourierPET, a Fourier-based unrolled reconstruction framework grounded in the Alternating Direction Method of Multipliers. It consists of three tailored modules: a spectral consistency module that enforces global frequency alignment to maintain data fidelity, an amplitude-phase correction module that decouples and compensates for high-frequency phase distortions and low-frequency amplitude suppression, and a dual adjustment module that accelerates convergence during iterative reconstruction. Extensive experiments demonstrate that FourierPET achieves state-of-the-art performance with significantly fewer parameters, while offering enhanced interpretability through frequency-aware correction.
+ oai:arXiv.org:2601.11680v1
+ eess.IV
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zheng Zhang, Hao Tang, Yingying Hu, Zhanli Hu, Jing Qin
+
+
+ Mobile-friendly Image de-noising: Hardware Conscious Optimization for Edge Application
+ https://arxiv.org/abs/2601.11684
+ arXiv:2601.11684v1 Announce Type: cross
+Abstract: Image enhancement is a critical task in computer vision and photography that is often entangled with noise. This renders the traditional Image Signal Processing (ISP) ineffective compared to the advances in deep learning. However, the success of such methods is increasingly associated with the ease of their deployment on edge devices, such as smartphones. This work presents a novel mobile-friendly network for image de-noising obtained with Entropy-Regularized differentiable Neural Architecture Search (NAS) on a hardware-aware search space for a U-Net architecture, which is first-of-its-kind. The designed model has 12% fewer parameters, with ~2-fold improvement in on-device latency and 1.5-fold improvement in the memory footprint for a 0.7% drop in PSNR, when deployed and profiled on Samsung Galaxy S24 Ultra. Compared to the SOTA Swin-Transformer for Image Restoration, the proposed network had competitive accuracy with ~18-fold reduction in GMACs. Further, the network was tested successfully for Gaussian de-noising with 3 intensities on 4 benchmarks and real-world de-noising on 1 benchmark, demonstrating its generalization ability.
+ oai:arXiv.org:2601.11684v1
+ eess.IV
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Srinivas Miriyala, Sowmya Vajrala, Hitesh Kumar, Sravanth Kodavanti, Vikram Rajendiran
+
+
+ Towards Efficient Image Deblurring for Edge Deployment
+ https://arxiv.org/abs/2601.11685
+ arXiv:2601.11685v1 Announce Type: cross
+Abstract: Image deblurring is a critical stage in mobile image signal processing pipelines, where the ability to restore fine structures and textures must be balanced with real-time constraints on edge devices. While recent deep networks such as transformers and activation-free architectures achieve state-of-the-art (SOTA) accuracy, their efficiency is typically measured in FLOPs or parameters, which do not correlate with latency on embedded hardware. We propose a hardware-aware adaptation framework that restructures existing models through sensitivity-guided block substitution, surrogate distillation, and training-free multi-objective search driven by device profiling. Applied to the 36-block NAFNet baseline, the optimized variants achieve up to 55% reduction in GMACs compared to the recent transformer-based SOTA while maintaining competitive accuracy. Most importantly, on-device deployment yields a 1.25X latency improvement over the baseline. Experiments on motion deblurring (GoPro), defocus deblurring (DPDD), and auxiliary benchmarks (RealBlur-J/R, HIDE) demonstrate the generality of the approach, while comparisons with prior efficient baselines confirm its accuracy-efficiency trade-off. These results establish feedback-driven adaptation as a principled strategy for bridging the gap between algorithmic design and deployment-ready deblurring models.
+ oai:arXiv.org:2601.11685v1
+ eess.IV
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Srinivas Miriyala, Sowmya Vajrala, Sravanth Kodavanti
+
+
+ Bridging Modalities: Joint Synthesis and Registration Framework for Aligning Diffusion MRI with T1-Weighted Images
+ https://arxiv.org/abs/2601.11689
+ arXiv:2601.11689v1 Announce Type: cross
+Abstract: Multimodal image registration between diffusion MRI (dMRI) and T1-weighted (T1w) MRI images is a critical step for aligning diffusion-weighted imaging (DWI) data with structural anatomical space. Traditional registration methods often struggle to ensure accuracy due to the large intensity differences between diffusion data and high-resolution anatomical structures. This paper proposes an unsupervised registration framework based on a generative registration network, which transforms the original multimodal registration problem between b0 and T1w images into a unimodal registration task between a generated image and the real T1w image. This effectively reduces the complexity of cross-modal registration. The framework first employs an image synthesis model to generate images with T1w-like contrast, and then learns a deformation field from the generated image to the fixed T1w image. The registration network jointly optimizes local structural similarity and cross-modal statistical dependency to improve deformation estimation accuracy. Experiments conducted on two independent datasets demonstrate that the proposed method outperforms several state-of-the-art approaches in multimodal registration tasks.
+ oai:arXiv.org:2601.11689v1
+ eess.IV
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaofan Wang, Junyi Wang, Yuqian Chen, Lauren J. O'Donnell, Fan Zhang
+
+
+ Explainable histomorphology-based survival prediction of glioblastoma, IDH-wildtype
+ https://arxiv.org/abs/2601.11691
+ arXiv:2601.11691v1 Announce Type: cross
+Abstract: Glioblastoma, IDH-wildtype (GBM-IDHwt) is the most common malignant brain tumor. Histomorphology is a crucial component of the integrated diagnosis of GBM-IDHwt. Artificial intelligence (AI) methods have shown promise to extract additional prognostic information from histological whole-slide images (WSI) of hematoxylin and eosin-stained glioblastoma tissue. Here, we present an explainable AI-based method to support systematic interpretation of histomorphological features associated with survival. It combines an explainable multiple instance learning (MIL) architecture with a sparse autoencoder (SAE) to relate human-interpretable visual patterns of tissue to survival. The MIL architecture directly identifies prognosis-relevant image tiles and the SAE maps these tiles post-hoc to visual patterns. The MIL method was trained and evaluated using a new real-world dataset that comprised 720 GBM-IDHwt cases from three hospitals and four cancer registries in Germany. The SAE was trained using 1878 WSIs of glioblastoma from five independent public data collections. Despite the many factors influencing survival time, our method showed some ability to discriminate between patients living less than 180 days or more than 360 days solely based on histomorphology (AUC: 0.67; 95% CI: 0.63-0.72). Cox proportional hazards regression confirmed a significant difference in survival time between the predicted groups after adjustment for established prognostic factors (hazard ratio: 1.47; 95% CI: 1.26-1.72). Our method identified multiple interpretable visual patterns associated with survival. Three neuropathologists separately found that 21 of the 24 most strongly associated patterns could be clearly attributed to seven histomorphological categories. Necrosis and hemorrhage appeared to be associated with shorter survival while highly cellular tumor areas were associated with longer survival.
+ oai:arXiv.org:2601.11691v1
+ eess.IV
+ cs.LG
+ q-bio.QM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Jan-Philipp Redlich, Friedrich Feuerhake, Stefan Nikolin, Nadine Sarah Schaadt, Sarah Teuber-Hanselmann, Joachim Weis, Sabine Luttmann, Andrea Eberle, Christoph Buck, Timm Intemann, Pascal Birnstill, Klaus Kraywinkel, Jonas Ort, Peter Boor, André Homeyer
+
+
+ Anisotropic Tensor Deconvolution of Hyperspectral Images
+ https://arxiv.org/abs/2601.11694
+ arXiv:2601.11694v1 Announce Type: cross
+Abstract: Hyperspectral image (HSI) deconvolution is a challenging ill-posed inverse problem, made difficult by the data's high dimensionality. We propose a parameter-parsimonious framework based on a low-rank Canonical Polyadic Decomposition (CPD) of the entire latent HSI $\mathbf{\mathcal{X}} \in \mathbb{R}^{P\times Q \times N}$. This approach recasts the problem from recovering a large-scale image with $PQN$ variables to estimating the CPD factors with $(P+Q+N)R$ variables. This model also enables a structure-aware, anisotropic Total Variation (TV) regularization applied only to the spatial factors, preserving the smooth spectral signatures. An efficient algorithm based on the Proximal Alternating Linearized Minimization (PALM) framework is developed to solve the resulting non-convex optimization problem. Experiments confirm the model's efficiency, showing a parameter reduction of over two orders of magnitude and a compelling trade-off between model compactness and reconstruction accuracy.
+ oai:arXiv.org:2601.11694v1
+ eess.IV
+ cs.CV
+ cs.LG
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Xinjue Wang, Xiuheng Wang, Esa Ollila, Sergiy A. Vorobyov
+
+
+ Inter-Cell Interference Rejection Based on Ultrawideband Walsh-Domain Wireless Autoencoding
+ https://arxiv.org/abs/2601.11713
+ arXiv:2601.11713v1 Announce Type: cross
+Abstract: This paper proposes a novel technique for rejecting partial-in-band inter-cell interference (ICI) in ultrawideband communication systems. We present the design of an end-to-end wireless autoencoder architecture that jointly optimizes the transmitter and receiver encoding/decoding in the Walsh domain to mitigate interference from coexisting narrower-band 5G base stations. By exploiting the orthogonality and self-inverse properties of Walsh functions, the system distributes and learns to encode bit-words across parallel Walsh branches. Through analytical modeling and simulation, we characterize how 5G CP-OFDM interference maps into the Walsh domain and identify optimal ratios of transmission frequencies and sampling rate where the end-to-end autoencoder achieves the highest rejection. Experimental results show that the proposed autoencoder achieves up to 12 dB of ICI rejection while maintaining a low block error rate (BLER) for the same baseline channel noise, i.e., baseline Signal-to-Noise-Ratio (SNR) without the interference.
+ oai:arXiv.org:2601.11713v1
+ eess.SP
+ cs.AI
+ cs.LG
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Rodney Martinez Alonso, Cel Thys, Cedric Dehos, Yuneisy Esthela Garcia Guzman, Sofie Pollin
+
+
+ AllShowers: One model for all calorimeter showers
+ https://arxiv.org/abs/2601.11716
+ arXiv:2601.11716v1 Announce Type: cross
+Abstract: Accurate and efficient detector simulation is essential for modern collider experiments. To reduce the high computational cost, various fast machine learning surrogate models have been proposed. Traditional surrogate models for calorimeter shower modeling train separate networks for each particle species, limiting scalability and reuse. We introduce AllShowers, a unified generative model that simulates calorimeter showers across multiple particle types using a single generative model. AllShowers is a continuous normalizing flow model with a Transformer architecture, enabling it to generate complex spatial and energy correlations in variable-length point cloud representations of showers. Trained on a diverse dataset of simulated showers in the highly granular ILD detector, the model demonstrates the ability to generate realistic showers for electrons, photons, and charged and neutral hadrons across a wide range of incident energies and angles without retraining. In addition to unifying shower generation for multiple particle types, AllShowers surpasses the fidelity of previous single-particle-type models for hadronic showers. Key innovations include the use of a layer embedding, allowing the model to learn all relevant calorimeter layer properties; a custom attention masking scheme to reduce computational demands and introduce a helpful inductive bias; and a shower- and layer-wise optimal transport mapping to improve training convergence and sample quality. AllShowers marks a significant step towards a universal model for calorimeter shower simulations in collider experiments.
+ oai:arXiv.org:2601.11716v1
+ physics.ins-det
+ cs.LG
+ hep-ex
+ hep-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Thorsten Buss, Henry Day-Hall, Frank Gaede, Gregor Kasieczka, Katja Krüger
+
+
+ Lightweight Self-Supervised Detection of Fundamental Frequency and Accurate Probability of Voicing in Monophonic Music
+ https://arxiv.org/abs/2601.11768
+ arXiv:2601.11768v1 Announce Type: cross
+Abstract: Reliable fundamental frequency (F0) and voicing estimation is essential for neural synthesis, yet many pitch extractors depend on large labeled corpora and degrade under realistic recording artifacts. We propose a lightweight, fully self-supervised framework for joint F0 estimation and voicing inference, designed for rapid single-instrument training from limited audio. Using transposition-equivariant learning on CQT features, we introduce an EM-style iterative reweighting scheme that uses Shift Cross-Entropy (SCE) consistency as a reliability signal to suppress uninformative noisy/unvoiced frames. The resulting weights provide confidence scores that enable pseudo-labeling for a separate lightweight voicing classifier without manual annotations. Trained on MedleyDB and evaluated on MDB-stem-synth ground truth, our method achieves competitive cross-corpus performance (RPA 95.84, RCA 96.24) and demonstrates cross-instrument generalization.
+ oai:arXiv.org:2601.11768v1
+ eess.AS
+ cs.AI
+ cs.LG
+ cs.SD
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Venkat Suprabath Bitra, Homayoon Beigi
+
+
+ Quantum Kernel Machine Learning for Autonomous Materials Science
+ https://arxiv.org/abs/2601.11775
+ arXiv:2601.11775v1 Announce Type: cross
+Abstract: Autonomous materials science, where active learning is used to navigate large compositional phase space, has emerged as a powerful vehicle to rapidly explore new materials. A crucial aspect of autonomous materials science is exploring new materials using as little data as possible. Gaussian process-based active learning allows effective charting of multi-dimensional parameter space with a limited number of training data, and thus is a common algorithmic choice for autonomous materials science. An integral part of the autonomous workflow is the application of kernel functions for quantifying similarities among measured data points. A recent theoretical breakthrough has shown that quantum kernel models can achieve similar performance with less training data than classical models. This signals the possible advantage of applying quantum kernel machine learning to autonomous materials discovery. In this work, we compare quantum and classical kernels for their utility in sequential phase space navigation for autonomous materials science. Specifically, we compute a quantum kernel and several classical kernels for x-ray diffraction patterns taken from an Fe-Ga-Pd ternary composition spread library. We conduct our study on both IonQ's Aria trapped ion quantum computer hardware and the corresponding classical noisy simulator. We experimentally verify that a quantum kernel model can outperform some classical kernel models. The results highlight the potential of quantum kernel machine learning methods for accelerating materials discovery and suggest complex x-ray diffraction data is a candidate for robust quantum kernel model advantage.
+ oai:arXiv.org:2601.11775v1
+ cond-mat.mtrl-sci
+ cs.LG
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Felix Adams (University of Maryland College Park), Daiwei Zhu (IonQ), David W. Steuerman (IonQ), A. Gilad Kusne (University of Maryland College Park, National Institute of Standards and Technology), Ichiro Takeuchi (University of Maryland College Park, University of Maryland Quantum Materials Center)
+
+
+ Gradient-based Active Learning with Gaussian Processes for Global Sensitivity Analysis
+ https://arxiv.org/abs/2601.11790
+ arXiv:2601.11790v1 Announce Type: cross
+Abstract: Global sensitivity analysis of complex numerical simulators is often limited by the small number of model evaluations that can be afforded. In such settings, surrogate models built from a limited set of simulations can substantially reduce the computational burden, provided that the design of computer experiments is enriched efficiently. In this context, we propose an active learning approach that, for a fixed evaluation budget, targets the most informative regions of the input space to improve sensitivity analysis accuracy. More specifically, our method builds on recent advances in active learning for sensitivity analysis (Sobol' indices and derivative-based global sensitivity measures, DGSM) that exploit derivatives obtained from a Gaussian process (GP) surrogate. By leveraging the joint posterior distribution of the GP gradient, we develop acquisition functions that better account for correlations between partial derivatives and their impact on the response surface, leading to a more comprehensive and robust methodology than existing DGSM-oriented criteria. The proposed approach is first compared to state-of-the-art methods on standard benchmark functions, and is then applied to a real environmental model of pesticide transfers.
+ oai:arXiv.org:2601.11790v1
+ stat.ML
+ cs.LG
+ stat.ME
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Guerlain Lambert, Céline Helbert, Claire Lauvernet
+
+
+ Karhunen-Loève Expansion-Based Residual Anomaly Map for Resource-Efficient Glioma MRI Segmentation
+ https://arxiv.org/abs/2601.11833
+ arXiv:2601.11833v1 Announce Type: cross
+Abstract: Accurate segmentation of brain tumors is essential for clinical diagnosis and treatment planning. Deep learning is currently the state-of-the-art for brain tumor segmentation, yet it requires either large datasets or extensive computational resources that are inaccessible in most areas. This makes the problem increasingly difficult: state-of-the-art models use thousands of training cases and vast computational power, where performance drops sharply when either is limited. The top performer in the BraTS GLI 2023 competition relied on supercomputers, training on over 92,000 augmented MRI scans using an AMD EPYC 7402 CPU, six NVIDIA RTX 6000 GPUs (48GB VRAM each), and 1024GB of RAM over multiple weeks. To address this, the Karhunen-Loève Expansion (KLE) was implemented as a feature extraction step on downsampled, z-score normalized MRI volumes. Each 240$\times$240$\times$155 multi-modal scan is reduced to four $48^3$ channels and compressed into 32 KL coefficients. The resulting approximate reconstruction enables a residual-based anomaly map, which is upsampled and added as a fifth channel to a compact 3D U-Net. All experiments were run on a consumer workstation (AMD Ryzen 5 7600X CPU, RTX 4060Ti (8GB VRAM), and 64GB of RAM) while using far fewer training cases. This model achieves post-processed Dice scores of 0.929 (WT), 0.856 (TC), and 0.821 (ET), with HD95 distances of 2.93, 6.78, and 10.35 voxels. These results are significantly better than the winning BraTS 2023 methodology for HD95 distances and WT Dice scores. This demonstrates that a KLE-based residual anomaly map can dramatically reduce computational cost and data requirements while retaining state-of-the-art performance.
+ oai:arXiv.org:2601.11833v1
+ q-bio.QM
+ cs.CV
+ cs.LG
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Anthony Hur
+
+
+ Necessity of Cooperative Transmissions for Wireless MapReduce
+ https://arxiv.org/abs/2601.11844
+ arXiv:2601.11844v1 Announce Type: cross
+Abstract: The paper presents an improved upper bound (achievability result) on the optimal tradeoff between Normalized Delivery Time (NDT) and computation load for distributed computing MapReduce systems in certain ranges of the parameters. The upper bound is based on interference alignment combined with zero-forcing. The paper further provides a lower bound (converse) on the optimal NDT-computation tradeoff that can be achieved when IVAs are partitioned into sub-IVAs, and these sub-IVAs are then transmitted (in an arbitrary form) by a single node, without cooperation among nodes. For appropriate linear functions (e.g., XORs), such non-cooperative schemes can achieve some of the best NDT-computation tradeoff points so far obtained in the literature. However, as our lower bound shows, any non-cooperative scheme achieves a worse NDT-computation tradeoff than our new proposed scheme for certain parameters, thus proving the necessity of cooperative schemes like zero-forcing to attain the optimal NDT-computation tradeoff.
+ oai:arXiv.org:2601.11844v1
+ eess.SP
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Yue Bi, Michèle Wigger
+
+
+ Adversarial Drift-Aware Predictive Transfer: Toward Durable Clinical AI
+ https://arxiv.org/abs/2601.11860
+ arXiv:2601.11860v1 Announce Type: cross
+Abstract: Clinical AI systems frequently suffer performance decay post-deployment due to temporal data shifts, such as evolving populations, diagnostic coding updates (e.g., ICD-9 to ICD-10), and systemic shocks like the COVID-19 pandemic. Addressing this ``aging'' effect via frequent retraining is often impractical due to computational costs and privacy constraints. To overcome these hurdles, we introduce Adversarial Drift-Aware Predictive Transfer (ADAPT), a novel framework designed to confer durability against temporal drift with minimal retraining. ADAPT innovatively constructs an uncertainty set of plausible future models by combining historical source models and limited current data. By optimizing worst-case performance over this set, it balances current accuracy with robustness against degradation due to future drifts. Crucially, ADAPT requires only summary-level model estimators from historical periods, preserving data privacy and ensuring operational simplicity. Validated on longitudinal suicide risk prediction using electronic health records from Mass General Brigham (2005--2021) and Duke University Health Systems, ADAPT demonstrated superior stability across coding transitions and pandemic-induced shifts. By minimizing annual performance decay without labeling or retraining future data, ADAPT offers a scalable pathway for sustaining reliable AI in high-stakes healthcare environments.
+ oai:arXiv.org:2601.11860v1
+ stat.AP
+ cs.LG
+ stat.ME
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Xin Xiong, Zijian Guo, Haobo Zhu, Chuan Hong, Jordan W Smoller, Tianxi Cai, Molei Liu
+
+
+ Accelerated MR Elastography Using Learned Neural Network Representation
+ https://arxiv.org/abs/2601.11878
+ arXiv:2601.11878v1 Announce Type: cross
+Abstract: To develop a deep-learning method for achieving fast high-resolution MR elastography from highly undersampled data without the need of high-quality training dataset. We first framed the deep neural network representation as a nonlinear extension of the linear subspace model, then used it to represent and reconstruct MRE image repetitions from undersampled k-space data. The network weights were learned using a multi-level k-space consistent loss in a self-supervised manner. To further enhance reconstruction quality, phase-contrast specific magnitude and phase priors were incorporated, including the similarity of anatomical structures and smoothness of wave-induced harmonic displacement. Experiments were conducted using both 3D gradient-echo spiral and multi-slice spin-echo spiral MRE datasets. Compared to the conventional linear subspace-based approaches, the nonlinear network representation method was able to produce superior image reconstruction with suppressed noise and artifacts from a single in-plane spiral arm per MRE repetition (e.g., total R=10), yielding comparable stiffness estimation to the fully sampled data. This work demonstrated the feasibility of using deep network representations to model and reconstruct MRE images from highly-undersampled data, a nonlinear extension of the subspace-based approaches.
+ oai:arXiv.org:2601.11878v1
+ eess.SP
+ cs.CV
+ cs.LG
+ q-bio.QM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xi Peng
+
+
+ Impact of Circuit Depth versus Qubit Count on Variational Quantum Classifiers for Higgs Boson Signal Detection
+ https://arxiv.org/abs/2601.11937
+ arXiv:2601.11937v1 Announce Type: cross
+Abstract: High-Energy Physics (HEP) experiments, such as those at the Large Hadron Collider (LHC), generate massive datasets that challenge classical computational limits. Quantum Machine Learning (QML) offers a potential advantage in processing high-dimensional data; however, finding the optimal architecture for current Noisy Intermediate-Scale Quantum (NISQ) devices remains an open challenge. This study investigates the performance of Variational Quantum Classifiers (VQC) in detecting Higgs Boson signals using the ATLAS Higgs Boson Machine Learning Challenge 2014 experiment dataset. We implemented a dimensionality reduction pipeline using Principal Component Analysis (PCA) to map 30 physical features into 4-qubit and 8-qubit latent spaces. We benchmarked three configurations: (A) a shallow 4-qubit circuit, (B) a deep 4-qubit circuit with increased entanglement layers, and (C) an expanded 8-qubit circuit. Experimental results demonstrate that increasing circuit depth significantly improves performance, yielding the highest accuracy of 56.2% (Configuration B), compared to a baseline of 51.9%. Conversely, simply scaling to 8 qubits resulted in a performance degradation to 50.6% due to optimization challenges associated with Barren Plateaus in the larger Hilbert space. These findings suggest that for near-term quantum hardware, prioritizing circuit depth and entanglement capability is more critical than increasing qubit count for effective anomaly detection in HEP data.
+ oai:arXiv.org:2601.11937v1
+ quant-ph
+ cs.LG
+ hep-ex
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.5281/zenodo.18096724
+ Fatih Maulana
+
+
+ NiMark: A Non-intrusive Watermarking Framework against Screen-shooting Attacks
+ https://arxiv.org/abs/2601.11978
+ arXiv:2601.11978v1 Announce Type: cross
+Abstract: Unauthorized screen-shooting poses a critical data leakage risk. Resisting screen-shooting attacks typically requires high-strength watermark embedding, inevitably degrading the cover image. To resolve the robustness-fidelity conflict, non-intrusive watermarking has emerged as a solution by constructing logical verification keys without altering the original content. However, existing non-intrusive schemes lack the capacity to withstand screen-shooting noise. While deep learning offers a potential remedy, we observe that directly applying it leads to a previously underexplored failure mode, the Structural Shortcut: networks tend to learn trivial identity mappings and neglect the image-watermark binding. Furthermore, even when logical binding is enforced, standard training strategies cannot fully bridge the noise gap, yielding suboptimal robustness against physical distortions. In this paper, we propose NiMark, an end-to-end framework addressing these challenges. First, to eliminate the structural shortcut, we introduce the Sigmoid-Gated XOR (SG-XOR) estimator to enable gradient propagation for the logical operation, effectively enforcing rigid image-watermark binding. Second, to overcome the robustness bottleneck, we devise a two-stage training strategy integrating a restorer to bridge the domain gap caused by screen-shooting noise. Experiments demonstrate that NiMark consistently outperforms representative state-of-the-art methods against both digital attacks and screen-shooting noise, while maintaining zero visual distortion.
+ oai:arXiv.org:2601.11978v1
+ eess.IV
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yufeng Wu, Xin Liao, Baowei Wang, Han Fang, Xiaoshuai Wu, Guiling Wang
+
+
+ A Kernel Approach for Semi-implicit Variational Inference
+ https://arxiv.org/abs/2601.12023
+ arXiv:2601.12023v1 Announce Type: cross
+Abstract: Semi-implicit variational inference (SIVI) enhances the expressiveness of variational families through hierarchical semi-implicit distributions, but the intractability of their densities makes standard ELBO-based optimization biased. Recent score-matching approaches to SIVI (SIVI-SM) address this issue via a minimax formulation, at the expense of an additional lower-level optimization problem. In this paper, we propose kernel semi-implicit variational inference (KSIVI), a principled and tractable alternative that eliminates the lower-level optimization by leveraging kernel methods. We show that when optimizing over a reproducing kernel Hilbert space, the lower-level problem admits an explicit solution, reducing the objective to the kernel Stein discrepancy (KSD). Exploiting the hierarchical structure of semi-implicit distributions, the resulting KSD objective can be efficiently optimized using stochastic gradient methods. We establish optimization guarantees via variance bounds on Monte Carlo gradient estimators and derive statistical generalization bounds of order $\tilde{\mathcal{O}}(1/\sqrt{n})$. We further introduce a multi-layer hierarchical extension that improves expressiveness while preserving tractability. Empirical results on synthetic and real-world Bayesian inference tasks demonstrate the effectiveness of KSIVI.
+ oai:arXiv.org:2601.12023v1
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Longlin Yu, Ziheng Cheng, Shiyue Zhang, Cheng Zhang
+
+
+ sangkuriang: A pseudo-spectral Python library for Korteweg-de Vries soliton simulation
+ https://arxiv.org/abs/2601.12029
+ arXiv:2601.12029v1 Announce Type: cross
+Abstract: The Korteweg-de Vries (KdV) equation serves as a foundational model in nonlinear wave physics, describing the balance between dispersive spreading and nonlinear steepening that gives rise to solitons. This article introduces sangkuriang, an open-source Python library for solving this equation using Fourier pseudo-spectral spatial discretization coupled with adaptive high-order time integration. The implementation leverages just-in-time (JIT) compilation for computational efficiency while maintaining accessibility for instructional purposes. Validation encompasses progressively complex scenarios including isolated soliton propagation, symmetric two-wave configurations, overtaking collisions between waves of differing amplitudes, and three-body interactions. Conservation of the classical invariants is monitored throughout, with deviations remaining small across all test cases. Measured soliton velocities conform closely to theoretical predictions based on the amplitude-velocity relationship characteristic of integrable systems. Complementary diagnostics drawn from information theory and recurrence analysis confirm that computed solutions preserve the regular phase-space structure expected for completely integrable dynamics. The solver outputs data in standard scientific formats compatible with common analysis tools and generates visualizations of spatiotemporal wave evolution. By combining numerical accuracy with practical accessibility on modest computational resources, sangkuriang offers a platform suitable for both classroom demonstrations of nonlinear wave phenomena and exploratory research into soliton dynamics.
+ oai:arXiv.org:2601.12029v1
+ nlin.PS
+ cs.NA
+ math.NA
+ physics.ao-ph
+ physics.comp-ph
+ physics.ed-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sandy H. S. Herho, Faruq Khadami, Iwan P. Anwar, Dasapta E. Irawan
+
+
+ Nonlinear Dynamic Factor Analysis With a Transformer Network
+ https://arxiv.org/abs/2601.12039
+ arXiv:2601.12039v1 Announce Type: cross
+Abstract: The paper develops a Transformer architecture for estimating dynamic factors from multivariate time series data under flexible identification assumptions. Performance on small datasets is improved substantially by using a conventional factor model as prior information via a regularization term in the training objective. The results are interpreted with Attention matrices that quantify the relative importance of variables and their lags for the factor estimate. Time variation in Attention patterns can help detect regime switches and evaluate narratives. Monte Carlo experiments suggest that the Transformer is more accurate than the linear factor model, when the data deviate from linear-Gaussian assumptions. An empirical application uses the Transformer to construct a coincident index of U.S. real economic activity.
+ oai:arXiv.org:2601.12039v1
+ econ.EM
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Oliver Snellman
+
+
+ Koopman Spectral Computation Beyond The Reflexive Regime: Endpoint Solvability Complexity Index And Type-2 Links
+ https://arxiv.org/abs/2601.12044
+ arXiv:2601.12044v1 Announce Type: cross
+Abstract: We study the Solvability Complexity Index (SCI) of Koopman operator spectral computation in the information-based framework of towers of algorithms. Given a compact metric space $(\mathcal{X},d)$ with a finite Borel measure $\omega$ on $\mathcal{X}$ and a continuous nonsingular map $F:\mathcal{X}\to \mathcal{X}$, our focus is the Koopman operator $\mathcal{K}_F$ acting on $L^p(\mathcal{X},\omega)$ for $p\in\{1,\infty\}$ and the computational problem \[ \Xi_{\sigma_{\mathrm{ap}}}(F) :=\sigma_{\mathrm{ap}}\!\bigl(\mathcal{K}_F\bigr), \] with input access given by point evaluations $x\mapsto F(x)$ of $F$ (and fixed quadrature access to $\omega$).
+ We clarify how the $L^1$ case can be brought into the same oracle model as the reflexive regime $1<p<\infty$ by proving a uniform finite-dimensional quadrature compatibility, while highlighting the fundamentally different role played by non-separability at $p=\infty$.
+Beyond Koopman operators, we also construct a prototype family of decision problems $(\Xi_m)_{m\in\mathbb N}$ realizing prescribed finite tower heights, providing a reusable reduction source for future SCI lower bounds. Finally, we situate these results in the broader computational landscape of Type-2/Weihrauch theory.
+ oai:arXiv.org:2601.12044v1
+ math.LO
+ cs.NA
+ math.DS
+ math.NA
+ math.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Christopher Sorg
+
+
+ Irreversible Failure Reverses the Value of Information
+ https://arxiv.org/abs/2601.12046
+ arXiv:2601.12046v1 Announce Type: cross
+Abstract: We study dynamic games with hidden states and absorbing failure, where belief-driven actions can trigger irreversible collapse. In such environments, equilibria that sustain activity generically operate at the boundary of viability. We show that this geometry endogenously reverses the value of information: greater informational precision increases the probability of collapse on every finite horizon. We formalize this mechanism through a limit-viability criterion, and model opacity as a strategic choice of the information structure via Blackwell garbling. When failure is absorbing, survival values become locally concave in beliefs, implying that transparency destroys equilibrium viability while sufficient opacity restores it. In an extended game where agents choose the information structure ex ante, strictly positive opacity is necessary for equilibrium survival. The results identify irreversible failure--not coordination, misspecification, or ambiguity--as a primitive force generating an endogenous demand for opacity in dynamic games.
+ oai:arXiv.org:2601.12046v1
+ econ.TH
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nicholas H. Kirk
+
+
+ A New Strategy for Artificial Intelligence: Training Foundation Models Directly on Human Brain Data
+ https://arxiv.org/abs/2601.12053
+ arXiv:2601.12053v1 Announce Type: cross
+Abstract: While foundation models have achieved remarkable results across a diversity of domains, they still rely on human-generated data, such as text, as a fundamental source of knowledge. However, this data is ultimately the product of human brains, the filtered projection of a deeper neural complexity. In this paper, we explore a new strategy for artificial intelligence: moving beyond surface-level statistical regularities by training foundation models directly on human brain data. We hypothesize that neuroimaging data could open a window into elements of human cognition that are not accessible through observable actions, and argue that this additional knowledge could be used, alongside classical training data, to overcome some of the current limitations of foundation models. While previous research has demonstrated the possibility of training classical machine learning or deep learning models on neural patterns, this path remains largely unexplored for high-level cognitive functions. Here, we classify the current limitations of foundation models, as well as the promising brain regions and cognitive processes that could be leveraged to address them, along four levels: perception, valuation, execution, and integration. Then, we propose two methods that could be implemented to prioritize the use of limited neuroimaging data for strategically chosen, high-value steps in foundation model training: reinforcement learning from human brain (RLHB) and chain of thought from human brain (CoTHB). We also discuss the potential implications for agents, artificial general intelligence, and artificial superintelligence, as well as the ethical, social, and technical challenges and opportunities. We argue that brain-trained foundation models could represent a realistic and effective middle ground between continuing to scale current architectures and exploring alternative, neuroscience-inspired solutions.
+ oai:arXiv.org:2601.12053v1
+ q-bio.NC
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Maël Donoso
+
+
+ Offline Policy Learning with Weight Clipping and Heaviside Composite Optimization
+ https://arxiv.org/abs/2601.12117
+ arXiv:2601.12117v1 Announce Type: cross
+Abstract: Offline policy learning aims to use historical data to learn an optimal personalized decision rule. In the standard estimate-then-optimize framework, reweighting-based methods (e.g., inverse propensity weighting or doubly robust estimators) are widely used to produce unbiased estimates of policy values. However, when the propensity scores of some treatments are small, these reweighting-based methods suffer from high variance in policy value estimation, which may mislead the downstream policy optimization and yield a learned policy with inferior value. In this paper, we systematically develop an offline policy learning algorithm based on a weight-clipping estimator that truncates small propensity scores via a clipping threshold chosen to minimize the mean squared error (MSE) in policy value estimation. Focusing on linear policies, we address the bilevel and discontinuous objective induced by weight-clipping-based policy optimization by reformulating the problem as a Heaviside composite optimization problem, which provides a rigorous computational framework. The reformulated policy optimization problem is then solved efficiently using the progressive integer programming method, making practical policy learning tractable. We establish an upper bound for the suboptimality of the proposed algorithm, which reveals how the reduction in MSE of policy value estimation, enabled by our proposed weight-clipping estimator, leads to improved policy learning performance.
+ oai:arXiv.org:2601.12117v1
+ math.OC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jingren Liu, Hanzhang Qin, Junyi Liu, Mabel C. Chou, Jong-Shi Pang
+
+
+ Listen, Look, Drive: Coupling Audio Instructions for User-aware VLA-based Autonomous Driving
+ https://arxiv.org/abs/2601.12142
+ arXiv:2601.12142v1 Announce Type: cross
+Abstract: Vision Language Action (VLA) models promise an open-vocabulary interface that can translate perceptual ambiguity into semantically grounded driving decisions, yet they still treat language as a static prior fixed at inference time. As a result, the model must infer continuously shifting objectives from pixels alone, yielding delayed or overly conservative maneuvers. We argue that effective VLAs for autonomous driving need an online channel in which users can influence driving with specific intentions. To this end, we present EchoVLA, a user-aware VLA that couples camera streams with in situ audio instructions. We augment the nuScenes dataset with temporally aligned, intent-specific speech commands generated by converting ego-motion descriptions into synthetic audio. Further, we compose emotional speech-trajectory pairs into a multimodal Chain-of-Thought (CoT) for fine-tuning a Multimodal Large Model (MLM) based on Qwen2.5-Omni. Specifically, we synthesize the audio-augmented dataset with different emotion types paired with corresponding driving behaviors, leveraging the emotional cues embedded in tone, pitch, and speech tempo to reflect varying user states, such as urgent or hesitant intentions. This enables EchoVLA to interpret not only the semantic content but also the emotional context of audio commands, yielding more nuanced and emotionally adaptive driving behavior. In open-loop benchmarks, our approach reduces the average L2 error by $59.4\%$ and the collision rate by $74.4\%$ compared to the baseline of vision-only perception. Further experiments on the nuScenes dataset validate that EchoVLA not only steers the trajectory through audio instructions, but also modulates driving behavior in response to the emotions detected in the user's speech.
+ oai:arXiv.org:2601.12142v1
+ eess.AS
+ cs.MM
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ziang Guo, Feng Yang, Xuefeng Zhang, Jiaqi Guo, Kun Zhao, Peng Lu, Zufeng Zhang, Sifa Zheng
+
+
+ A Survey on 30+ Years of Automatic Singing Assessment and Singing Information Processing
+ https://arxiv.org/abs/2601.12153
+ arXiv:2601.12153v1 Announce Type: cross
+Abstract: Automatic Singing Assessment and Singing Information Processing have evolved over the past three decades to support singing pedagogy, performance analysis, and vocal training. While the former objectively evaluates a singer's performance through computational metrics ranging from real-time visual feedback and acoustical biofeedback to sophisticated pitch tracking and spectral analysis, the latter compares a predictor vocal signal with a target reference to capture nuanced data embedded in the singing voice. Notable advancements include the development of interactive systems that have significantly improved real-time visual feedback, and the integration of machine learning and deep neural network architectures that enhance the precision of vocal signal processing. This survey critically examines the literature to map the historical evolution of these technologies, while identifying and discussing key gaps. The analysis reveals persistent challenges, such as the lack of standardized evaluation frameworks, difficulties in reliably separating vocal signals from various noise sources, and the underutilization of advanced digital signal processing and artificial intelligence methodologies for capturing artistic expressivity. By detailing these limitations and the corresponding technological advances, this review demonstrates how addressing these issues can bridge the gap between objective computational assessments and subjective human-like evaluations of singing performance, ultimately enhancing both the technical accuracy and pedagogical relevance of automated singing evaluation systems.
+ oai:arXiv.org:2601.12153v1
+ eess.AS
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Arthur N. dos Santos, Bruno S. Masiero
+
+
+ Persistent Sheaf Laplacian Analysis of Protein Stability and Solubility Changes upon Mutation
+ https://arxiv.org/abs/2601.12219
+ arXiv:2601.12219v1 Announce Type: cross
+Abstract: Genetic mutations frequently disrupt protein structure, stability, and solubility, acting as primary drivers for a wide spectrum of diseases. Despite the critical importance of these molecular alterations, existing computational models often lack interpretability and fail to integrate essential physicochemical interactions. To overcome these limitations, we propose SheafLapNet, a unified predictive framework grounded in the mathematical theory of Topological Deep Learning (TDL) and Persistent Sheaf Laplacian (PSL). Unlike standard Topological Data Analysis (TDA) tools such as persistent homology, which are often insensitive to heterogeneous information, PSL explicitly encodes specific physical and chemical information such as partial charges directly into the topological analysis. SheafLapNet synergizes these sheaf-theoretic invariants with advanced protein transformer features and auxiliary physical descriptors to capture intrinsic molecular interactions in a multiscale and mechanistic manner. To validate our framework, we employ rigorous benchmarks for both regression and classification tasks. For stability prediction, we utilize the comprehensive S2648 and S350 datasets. For solubility prediction, we employ the PON-Sol2 dataset, which provides annotations for increased, decreased, or neutral solubility changes. By integrating these multi-perspective features, SheafLapNet achieves state-of-the-art performance across these diverse benchmarks, demonstrating that sheaf-theoretic modeling significantly enhances both interpretability and generalizability in predicting mutation-induced structural and functional changes.
+ oai:arXiv.org:2601.12219v1
+ math.SP
+ cs.LG
+ q-bio.QM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yiming Ren, Junjie Wee, Xi Chen, Grace Qian, Guo-Wei Wei
+
+
+ On the Provable Suboptimality of Momentum SGD in Nonstationary Stochastic Optimization
+ https://arxiv.org/abs/2601.12238
+ arXiv:2601.12238v1 Announce Type: cross
+Abstract: While momentum-based acceleration has been studied extensively in deterministic optimization problems, its behavior in nonstationary environments -- where the data distribution and optimal parameters drift over time -- remains underexplored. We analyze the tracking performance of Stochastic Gradient Descent (SGD) and its momentum variants (Polyak heavy-ball and Nesterov) under uniform strong convexity and smoothness in varying stepsize regimes. We derive finite-time bounds in expectation and with high probability for the tracking error, establishing a sharp decomposition into three components: a transient initialization term, a noise-induced variance term, and a drift-induced tracking lag. Crucially, our analysis uncovers a fundamental trade-off: while momentum can suppress gradient noise, it incurs an explicit penalty on the tracking capability. We show that momentum can substantially amplify drift-induced tracking error, with amplification that becomes unbounded as the momentum parameter approaches one, formalizing the intuition that using 'stale' gradients hinders adaptation to rapid regime shifts. Complementing these upper bounds, we establish minimax lower bounds for dynamic regret under gradient-variation constraints. These lower bounds prove that the inertia-induced penalty is not an artifact of analysis but an information-theoretic barrier: in drift-dominated regimes, momentum creates an unavoidable 'inertia window' that fundamentally degrades performance. Collectively, these results provide a definitive theoretical grounding for the empirical instability of momentum in dynamic environments and delineate the precise regime boundaries where SGD provably outperforms its accelerated counterparts.
+ oai:arXiv.org:2601.12238v1
+ stat.ML
+ cs.LG
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Sharan Sahu, Cameron J. Hogan, Martin T. Wells
+
+
+ AQUA-Bench: Beyond Finding Answers to Knowing When There Are None in Audio Question Answering
+ https://arxiv.org/abs/2601.12248
+ arXiv:2601.12248v1 Announce Type: cross
+Abstract: Recent advances in audio-aware large language models have shown strong performance on audio question answering. However, existing benchmarks mainly cover answerable questions and overlook the challenge of unanswerable ones, where no reliable answer can be inferred from the audio. Such cases are common in real-world settings, where questions may be misleading, ill-posed, or incompatible with the information. To address this gap, we present AQUA-Bench, a benchmark for Audio Question Unanswerability Assessment. It systematically evaluates three scenarios: Absent Answer Detection (the correct option is missing), Incompatible Answer Set Detection (choices are categorically mismatched with the question), and Incompatible Audio Question Detection (the question is irrelevant or lacks sufficient grounding in the audio). By assessing these cases, AQUA-Bench offers a rigorous measure of model reliability and promotes the development of audio-language systems that are more robust and trustworthy. Our experiments suggest that while models excel on standard answerable tasks, they often face notable challenges with unanswerable ones, pointing to a blind spot in current audio-language understanding.
+ oai:arXiv.org:2601.12248v1
+ eess.AS
+ cs.AI
+ cs.CL
+ cs.LG
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chun-Yi Kuan, Hung-yi Lee
+
+
+ DeepRAHT: Learning Predictive RAHT for Point Cloud Attribute Compression
+ https://arxiv.org/abs/2601.12255
+ arXiv:2601.12255v1 Announce Type: cross
+Abstract: Regional Adaptive Hierarchical Transform (RAHT) is an effective point cloud attribute compression (PCAC) method. However, its application in deep learning remains underexplored. In this paper, we propose an end-to-end RAHT framework for lossy PCAC based on sparse tensors, called DeepRAHT. The RAHT transform is performed within the learning reconstruction process, without requiring manual RAHT for preprocessing. We also introduce predictive RAHT to reduce bitrates and design a learning-based prediction model to enhance performance. Moreover, we devise a bitrate proxy that applies run-length coding to the entropy model, achieving seamless variable-rate coding and improving robustness. DeepRAHT is a reversible and distortion-controllable framework, ensuring its lower-bound performance and offering significant application potential. The experiments demonstrate that DeepRAHT is a high-performance, faster, and more robust solution than the baseline methods. Project Page: https://github.com/zb12138/DeepRAHT.
+ oai:arXiv.org:2601.12255v1
+ eess.IV
+ cs.CV
+ cs.IT
+ cs.MM
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chunyang Fu, Tai Qin, Shiqi Wang, Zhu Li
+
+
+ DALD-PCAC: Density-Adaptive Learning Descriptor for Point Cloud Lossless Attribute Compression
+ https://arxiv.org/abs/2601.12261
+ arXiv:2601.12261v1 Announce Type: cross
+Abstract: Recently, deep learning has significantly advanced the performance of point cloud geometry compression. However, the learning-based lossless attribute compression of point clouds with varying densities is under-explored. In this paper, we develop a learning-based framework, namely DALD-PCAC, which leverages Levels of Detail (LoD) and is tailored for lossless point cloud attribute compression. We develop a point-wise attention model using a permutation-invariant Transformer to tackle the challenges of sparsity and irregularity of point clouds during context modeling. We also propose a Density-Adaptive Learning Descriptor (DALD) capable of capturing structure and correlations among points across a large range of neighbors. In addition, we develop a prior-guided block partitioning to reduce the attribute variance within blocks and enhance performance. Experiments on LiDAR and object point clouds show that DALD-PCAC achieves state-of-the-art performance on most data. Our method boosts compression performance and is robust to the varying densities of point clouds. Moreover, it guarantees a good trade-off between performance and complexity, exhibiting great potential in real-world applications. The source code is available at https://github.com/zb12138/DALD_PCAC.
+ oai:arXiv.org:2601.12261v1
+ eess.IV
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chunyang Fu, Ge Li, Wei Gao, Shiqi Wang, Zhu Li, Shan Liu
+
+
+ Logarithmic scaling and stochastic criticality in collective attention
+ https://arxiv.org/abs/2601.12306
+ arXiv:2601.12306v1 Announce Type: cross
+Abstract: We uncover a universal scaling law governing the dispersion of collective attention and identify its underlying stochastic criticality. By analysing large-scale ensembles of Wikipedia page views, we find that the variance of logarithmic attention grows ultraslowly, $\operatorname{Var}[\ln{X(t)}]\propto\ln{t}$, in sharp contrast to the power-law scaling typically expected for diffusive processes. We show that this behaviour is captured by a minimal stochastic differential equation driven by fractional Brownian motion, in which long-range memory ($H$) and temporal decay of volatility ($\eta$) enter through the single exponent $\xi\equiv H-\eta$. At marginality, $\xi=0$, the variance grows logarithmically, marking the critical boundary between power-law growth ($\xi>0$) and saturation ($\xi<0$). By incorporating article-level heterogeneity through a Gaussian mixture model, we further reconstruct the empirical distribution of cumulative attention within the same framework. Our results place collective attention in a distinct class of non-Markovian stochastic processes, with close affinity to ageing-like and ultraslow dynamics in glassy systems.
+ oai:arXiv.org:2601.12306v1
+ physics.soc-ph
+ cond-mat.stat-mech
+ cs.DL
+ cs.SI
+ physics.data-an
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keisuke Okamura
+
+
+ How Well Do LLMs Predict Human Behavior? A Measure of their Pretrained Knowledge
+ https://arxiv.org/abs/2601.12343
+ arXiv:2601.12343v1 Announce Type: cross
+Abstract: Large language models (LLMs) are increasingly used to predict human behavior. We propose a measure for evaluating how much knowledge a pretrained LLM brings to such a prediction: its equivalent sample size, defined as the amount of task-specific data needed to match the predictive accuracy of the LLM. We estimate this measure by comparing the prediction error of a fixed LLM in a given domain to that of flexible machine learning models trained on increasing samples of domain-specific data. We further provide a statistical inference procedure by developing a new asymptotic theory for cross-validated prediction error. Finally, we apply this method to the Panel Study of Income Dynamics. We find that LLMs encode considerable predictive information for some economic variables but much less for others, suggesting that their value as substitutes for domain-specific data differs markedly across settings.
+ oai:arXiv.org:2601.12343v1
+ econ.EM
+ cs.AI
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wayne Gao, Sukjin Han, Annie Liang
+
+
+ Adaptive Rotary Steering with Joint Autoregression for Robust Extraction of Closely Moving Speakers in Dynamic Scenarios
+ https://arxiv.org/abs/2601.12345
+ arXiv:2601.12345v1 Announce Type: cross
+Abstract: Latest advances in deep spatial filtering for Ambisonics demonstrate strong performance in stationary multi-speaker scenarios by rotating the sound field toward a target speaker prior to multi-channel enhancement. For applicability in dynamic acoustic conditions with moving speakers, we propose to automate this rotary steering using an interleaved tracking algorithm conditioned on the target's initial direction. However, for nearby or crossing speakers, robust tracking becomes difficult and spatial cues less effective for enhancement. By incorporating the processed recording as additional guide into both algorithms, our novel joint autoregressive framework leverages temporal-spectral correlations of speech to resolve spatially challenging speaker constellations. Consequently, our proposed method significantly improves tracking and enhancement of closely spaced speakers, consistently outperforming comparable non-autoregressive methods on a synthetic dataset. Real-world recordings complement these findings in complex scenarios with multiple speaker crossings and varying speaker-to-array distances.
+ oai:arXiv.org:2601.12345v1
+ eess.AS
+ cs.LG
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jakob Kienegger, Timo Gerkmann
+
+
+ Bone-conduction Guided Multimodal Speech Enhancement with Conditional Diffusion Models
+ https://arxiv.org/abs/2601.12354
+ arXiv:2601.12354v1 Announce Type: cross
+Abstract: Single-channel speech enhancement models face significant performance degradation in extremely noisy environments. While prior work has shown that complementary bone-conducted speech can guide enhancement, effective integration of this noise-immune modality remains a challenge. This paper introduces a novel multimodal speech enhancement framework that integrates bone-conduction sensors with air-conducted microphones using a conditional diffusion model. Our proposed model significantly outperforms previously established multimodal techniques and a powerful diffusion-based single-modal baseline across a wide range of acoustic conditions.
+ oai:arXiv.org:2601.12354v1
+ eess.AS
+ cs.LG
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sina Khanagha, Bunlong Lay, Timo Gerkmann
+
+
+ BiCoLoR: Communication-Efficient Optimization with Bidirectional Compression and Local Training
+ https://arxiv.org/abs/2601.12400
+ arXiv:2601.12400v1 Announce Type: cross
+Abstract: Slow and costly communication is often the main bottleneck in distributed optimization, especially in federated learning where it occurs over wireless networks. We introduce BiCoLoR, a communication-efficient optimization algorithm that combines two widely used and effective strategies: local training, which increases computation between communication rounds, and compression, which encodes high-dimensional vectors into short bitstreams. While these mechanisms have been combined before, compression has typically been applied only to uplink (client-to-server) communication, leaving the downlink (server-to-client) side unaddressed. In practice, however, both directions are costly. BiCoLoR is the first algorithm to combine local training with bidirectional compression using arbitrary unbiased compressors. This joint design achieves accelerated complexity guarantees in both convex and strongly convex heterogeneous settings. Empirically, BiCoLoR outperforms existing algorithms and establishes a new standard in communication efficiency.
+ oai:arXiv.org:2601.12400v1
+ math.OC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Laurent Condat, Artavazd Maranjyan, Peter Richt\'arik
+
+
+ Temporal Data and Short-Time Averages Improve Multiphase Mass Flow Metering
+ https://arxiv.org/abs/2601.12433
+ arXiv:2601.12433v1 Announce Type: cross
+Abstract: Reliable flow measurements are essential in many industries, but current instruments often fail to accurately estimate multiphase flows, which are frequently encountered in real-world operations. Combining machine learning (ML) algorithms with accurate single-phase flowmeters has therefore received extensive research attention in recent years. The Coriolis mass flowmeter is a widely used single-phase meter that provides direct mass flow measurements, which ML models can be trained to correct, thereby reducing measurement errors in multiphase conditions. This paper demonstrates that preserving temporal information significantly improves model performance in such scenarios. We compare a multilayer perceptron, a windowed multilayer perceptron, and a convolutional neural network (CNN) on three-phase air-water-oil flow data from 342 experiments. Whereas prior work typically compresses each experiment into a single averaged sample, we instead compute short-time averages from within each experiment and train models that preserve temporal information at several downsampling intervals. The CNN performed best at 0.25 Hz with approximately 95 % of relative errors below 13 %, a normalized root mean squared error of 0.03, and a mean absolute percentage error of approximately 4.3 %, clearly outperforming the best single-averaged model and demonstrating that short-time averaging within individual experiments is preferable. Results are consistent across multiple data splits and random seeds, demonstrating robustness.
+ oai:arXiv.org:2601.12433v1
+ eess.SP
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Amanda Nyholm, Yessica Arellano, Jinyu Liu, Damian Krakowiak, Pierluigi Salvo Rossi
+
+
+ Purification Before Fusion: Toward Mask-Free Speech Enhancement for Robust Audio-Visual Speech Recognition
+ https://arxiv.org/abs/2601.12436
+ arXiv:2601.12436v1 Announce Type: cross
+Abstract: Audio-visual speech recognition (AVSR) typically improves recognition accuracy in noisy environments by integrating noise-immune visual cues with audio signals. Nevertheless, high-noise audio inputs are prone to introducing adverse interference into the feature fusion process. To mitigate this, recent AVSR methods often adopt mask-based strategies to filter audio noise during feature interaction and fusion, yet such methods risk discarding semantically relevant information alongside noise. In this work, we propose an end-to-end noise-robust AVSR framework coupled with speech enhancement, eliminating the need for explicit noise mask generation. This framework leverages a Conformer-based bottleneck fusion module to implicitly refine noisy audio features with video assistance. By reducing modality redundancy and enhancing inter-modal interactions, our method preserves speech semantic integrity to achieve robust recognition performance. Experimental evaluations on the public LRS3 benchmark suggest that our method outperforms prior advanced mask-based baselines under noisy conditions.
+ oai:arXiv.org:2601.12436v1
+ eess.AS
+ cs.AI
+ cs.LG
+ cs.MM
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Linzhi Wu, Xingyu Zhang, Hao Yuan, Yakun Zhang, Changyan Zheng, Liang Xie, Tiejun Liu, Erwei Yin
+
+
+ A Mixture of Experts Vision Transformer for High-Fidelity Surface Code Decoding
+ https://arxiv.org/abs/2601.12483
+ arXiv:2601.12483v1 Announce Type: cross
+Abstract: Quantum error correction is a key ingredient for large scale quantum computation, protecting logical information from physical noise by encoding it into many physical qubits. Topological stabilizer codes are particularly appealing due to their geometric locality and practical relevance. In these codes, stabilizer measurements yield a syndrome that must be decoded into a recovery operation, making decoding a central bottleneck for scalable real time operation. Existing decoders are commonly classified into two categories. Classical algorithmic decoders provide strong and well established baselines, but may incur substantial computational overhead at large code distances or under stringent latency constraints. Machine learning based decoders offer fast GPU inference and flexible function approximation, yet many approaches do not explicitly exploit the lattice geometry and local structure of topological codes, which can limit performance. In this work, we propose QuantumSMoE, a quantum vision transformer based decoder that incorporates code structure through plus shaped embeddings and adaptive masking to capture local interactions and lattice connectivity, and improves scalability via a mixture of experts layer with a novel auxiliary loss. Experiments on the toric code demonstrate that QuantumSMoE outperforms state-of-the-art machine learning decoders as well as widely used classical baselines.
+ oai:arXiv.org:2601.12483v1
+ quant-ph
+ cs.IT
+ cs.LG
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hoang Viet Nguyen, Manh Hung Nguyen, Hoang Ta, Van Khu Vu, Yeow Meng Chee
+
+
+ Robust Online Overdetermined Independent Vector Analysis Based on Bilinear Decomposition
+ https://arxiv.org/abs/2601.12485
+ arXiv:2601.12485v1 Announce Type: cross
+Abstract: Online blind source separation is essential for both speech communication and human-machine interaction. Among existing approaches, overdetermined independent vector analysis (OverIVA) delivers strong performance by exploiting the statistical independence of source signals and the orthogonality between source and noise subspaces. However, when applied to large microphone arrays, the number of parameters grows rapidly, which can degrade online estimation accuracy. To overcome this challenge, we propose decomposing each long separation filter into a bilinear form of two shorter filters, thereby reducing the number of parameters. Because the two filters are closely coupled, we design an alternating iterative projection algorithm to update them in turn. Simulation results show that, with far fewer parameters, the proposed method achieves improved performance and robustness.
+ oai:arXiv.org:2601.12485v1
+ eess.AS
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kang Chen, Xianrui Wang, Yichen Yang, Andreas Brendel, Gongping Huang, Zbyn\v{e}k Koldovsk\'y, Jingdong Chen, Jacob Benesty, Shoji Makino
+
+
+ Generative AI as a Non-Convex Supply Shock: Market Bifurcation and Welfare Analysis
+ https://arxiv.org/abs/2601.12488
+ arXiv:2601.12488v1 Announce Type: cross
+Abstract: The diffusion of Generative AI (GenAI) constitutes a supply shock of a fundamentally different nature: while marginal production costs approach zero, content generation creates congestion externalities through information pollution. We develop a three-layer general equilibrium framework to study how this non-convex technology reshapes market structure, transition dynamics, and social welfare. In a static vertical differentiation model, we show that the GenAI cost shock induces a kinked production frontier that bifurcates the market into exit, AI, and human segments, generating a ``middle-class hollow'' in the quality distribution. To analyze adjustment paths, we embed this structure in a mean-field evolutionary system and a calibrated agent-based model with bounded rationality. The transition to the AI-integrated equilibrium is non-monotonic: rather than smooth diffusion, the economy experiences a temporary ecological collapse driven by search frictions and delayed skill adaptation, followed by selective recovery. Survival depends on asymmetric skill reconfiguration, whereby humans retreat from technical execution toward semantic creativity. Finally, we show that the welfare impact of AI adoption is highly sensitive to pollution intensity: low congestion yields monotonic welfare gains, whereas high pollution produces an inverted-U relationship in which further AI expansion reduces total welfare. These results imply that laissez-faire adoption can be inefficient and that optimal governance must shift from input regulation toward output-side congestion management.
+ oai:arXiv.org:2601.12488v1
+ econ.GN
+ cs.CY
+ q-fin.EC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yukun Zhang, Tianyang Zhang
+
+
+ Examples and counterexamples of injective types
+ https://arxiv.org/abs/2601.12536
+ arXiv:2601.12536v1 Announce Type: cross
+Abstract: It is known that, in univalent mathematics, type universes, the type of $n$-types in a universe, reflective subuniverses, and the underlying type of any algebra of the lifting monad are all (algebraically) injective. Here, we further show that the type of ordinals, the type of iterative (multi)sets, the underlying type of any pointed directed complete poset, as well as the types of (small) $\infty$-magmas, monoids, and groups are all injective, among other examples. Not all types of mathematical structures are injective in general. For example, the type of inhabited types is injective if and only if all propositions are projective. In contrast, the type of pointed types and the type of non-empty types are always injective. The injectivity of the type of two-element types implies Fourman and \v{S}\v{c}edrov's world's simplest axiom of choice. We also show that there are no nontrivial small injective types unless a weak propositional resizing principle holds. Other counterexamples include the type of booleans, the simple types, the type of Dedekind reals, and the type of conatural numbers, whose injectivity implies weak excluded middle. More generally, any type with an apartness relation and two points apart cannot be injective unless weak excluded middle holds. Finally, we show that injective types have no non-trivial decidable properties, unless weak excluded middle holds, which amounts to a Rice-like theorem for injective types.
+ oai:arXiv.org:2601.12536v1
+ math.LO
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Tom de Jong, Mart\'in H\"otzel Escard\'o
+
+
+ Artificial Intelligence in Materials Science and Engineering: Current Landscape, Key Challenges, and Future Trajectories
+ https://arxiv.org/abs/2601.12554
+ arXiv:2601.12554v1 Announce Type: cross
+Abstract: Artificial Intelligence is rapidly transforming materials science and engineering, offering powerful tools to navigate complexity, accelerate discovery, and optimize material design in ways previously unattainable. Driven by the accelerating pace of algorithmic advancements and increasing data availability, AI is becoming an essential competency for materials researchers. This review provides a comprehensive and structured overview of the current landscape, synthesizing recent advancements and methodologies for materials scientists seeking to effectively leverage these data-driven techniques. We survey the spectrum of machine learning approaches, from traditional algorithms to advanced deep learning architectures, including CNNs, GNNs, and Transformers, alongside emerging generative AI and probabilistic models such as Gaussian Processes for uncertainty quantification. The review also examines the pivotal role of data in this field, emphasizing how effective representation and featurization strategies, spanning compositional, structural, image-based, and language-inspired approaches, combined with appropriate preprocessing, fundamentally underpin the performance of machine learning models in materials research. Persistent challenges related to data quality, quantity, and standardization, which critically impact model development and application in materials science and engineering, are also addressed.
+ oai:arXiv.org:2601.12554v1
+ cond-mat.mtrl-sci
+ cs.AI
+ physics.comp-ph
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-sa/4.0/
+ 10.1016/j.compstruct.2025.119419
+ Iman Peivaste, Salim Belouettar, Francesco Mercuri, Nicholas Fantuzzi, Hamidreza Dehghani, Razieh Izadi, Halliru Ibrahim, Jakub Lengiewicz, Ma\"el Belouettar-Mathis, Kouider Bendine, Ahmed Makradi, Martin H\"orsch, Peter Klein, Mohamed El Hachemi, Heinz A. Preisig, Yacine Rezgui, Natalia Konchakova, Ali Daouadji
+
+
+ Automated Angular Received-Power Characterization of Embedded mmWave Transmitters Using Geometry-Calibrated Spatial Sampling
+ https://arxiv.org/abs/2601.12562
+ arXiv:2601.12562v1 Announce Type: cross
+Abstract: This paper presents an automated measurement methodology for angular received-power characterization of embedded millimeter-wave transmitters using geometry-calibrated spatial sampling. Characterization of integrated mmWave transmitters remains challenging due to limited angular coverage and alignment variability in conventional probe-station techniques, as well as the impracticality of anechoic-chamber testing for platform-mounted active modules. To address these challenges, we introduce RAPTAR, an autonomous measurement system for angular received-power acquisition under realistic installation constraints. A collaborative robot executes geometry-calibrated, collision-aware hemispherical trajectories while carrying a calibrated receive probe, enabling controlled and repeatable spatial positioning around a fixed device under test. A spectrum-analyzer-based receiver chain acquires amplitude-only received power as a function of angle and distance following quasi-static pose stabilization. The proposed framework enables repeatable angular received-power mapping and power-domain comparison against idealized free-space references derived from full-wave simulation. Experimental results for a 60-GHz radar module demonstrate a mean absolute received-power error below 2 dB relative to simulation-derived references and a 36.5 % reduction in error compared to manual probe-station measurements, attributed primarily to reduced alignment variability and consistent spatial sampling. The proposed method eliminates the need for coherent field measurements and near-field transformations, enabling practical power-domain characterization of embedded mmWave modules. It is well suited for angular validation in real-world platforms where conventional anechoic measurements are impractical.
+ oai:arXiv.org:2601.12562v1
+ eess.SP
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Maaz Qureshi, Mohammad Omid Bagheri, Abdelrahman Elbadrawy, William Melek, George Shaker
+
+
+ A Functorial Approach to Multi-Space Interpolation with Function Parameters
+ https://arxiv.org/abs/2601.12572
+ arXiv:2601.12572v1 Announce Type: cross
+Abstract: We introduce an extension of interpolation theory to more than two spaces by employing a functional parameter, while retaining a fully functorial and systematic framework. This approach allows for the construction of generalized intermediate spaces and ensures stability under natural operations such as powers and convex combinations. As a significant application, we demonstrate that the interpolation of multiple generalized Sobolev spaces yields a generalized Besov space. Our framework provides explicit tools for handling multi-parameter interpolation, highlighting both its theoretical robustness and practical relevance.
+ oai:arXiv.org:2601.12572v1
+ math.FA
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Thomas Lamby, Samuel Nicolay
+
+
+ Primate-like perceptual decision making emerges through deep recurrent reinforcement learning
+ https://arxiv.org/abs/2601.12577
+ arXiv:2601.12577v1 Announce Type: cross
+Abstract: Progress has led to a detailed understanding of the neural mechanisms that underlie decision making in primates. However, less is known about why such mechanisms are present in the first place. Theory suggests that primate decision making mechanisms, and their resultant behavioral abilities, emerged to maximize reward in the face of noisy, temporally evolving information. To test this theory, we trained an end-to-end deep recurrent neural network using reinforcement learning on a noisy perceptual discrimination task. Networks learned several key abilities of primate-like decision making including trading off speed for accuracy, and flexibly changing their mind in the face of new information. Internal dynamics of these networks suggest that these abilities were supported by similar decision mechanisms as those observed in primate neurophysiological studies. These results provide experimental support for key pressures that gave rise to the primate ability to make flexible decisions.
+ oai:arXiv.org:2601.12577v1
+ q-bio.NC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Nathan J. Wispinski, Scott A. Stone, Anthony Singhal, Patrick M. Pilarski, Craig S. Chapman
+
+
+ Ontology-aligned structuring and reuse of multimodal materials data and workflows towards automatic reproduction
+ https://arxiv.org/abs/2601.12582
+ arXiv:2601.12582v1 Announce Type: cross
+Abstract: Reproducibility of computational results remains a challenge in materials science, as simulation workflows and parameters are often reported only in unstructured text and tables. While literature data are valuable for validation and reuse, the lack of machine-readable workflow descriptions prevents large-scale curation and systematic comparison. Existing text-mining approaches are insufficient to extract complete computational workflows with their associated parameters. An ontology-driven, large language model (LLM)-assisted framework is introduced for the automated extraction and structuring of computational workflows from the literature. The approach focuses on density functional theory-based stacking fault energy (SFE) calculations in hexagonal close-packed magnesium and its binary alloys, and uses a multi-stage filtering strategy together with prompt-engineered LLM extraction applied to method sections and tables. Extracted information is unified into a canonical schema and aligned with established materials ontologies (CMSO, ASMO, and PLDO), enabling the construction of a knowledge graph using atomRDF. The resulting knowledge graph enables systematic comparison of reported SFE values and supports the structured reuse of computational protocols. While full computational reproducibility is still constrained by missing or implicit metadata, the framework provides a foundation for organizing and contextualizing published results in a semantically interoperable form, thereby improving transparency and reusability of computational materials data.
+ oai:arXiv.org:2601.12582v1
+ cond-mat.mtrl-sci
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Sepideh Baghaee Ravari, Abril Azocar Guzman, Sarath Menon, Stefan Sandfeld, Tilmann Hickel, Markus Stricker
+
+
+ A Theory of Diversity for Random Matrices with Applications to In-Context Learning of Schr\"odinger Equations
+ https://arxiv.org/abs/2601.12587
+ arXiv:2601.12587v1 Announce Type: cross
+Abstract: We address the following question: given a collection $\{\mathbf{A}^{(1)}, \dots, \mathbf{A}^{(N)}\}$ of independent $d \times d$ random matrices drawn from a common distribution $\mathbb{P}$, what is the probability that the centralizer of $\{\mathbf{A}^{(1)}, \dots, \mathbf{A}^{(N)}\}$ is trivial? We provide lower bounds on this probability in terms of the sample size $N$ and the dimension $d$ for several families of random matrices which arise from the discretization of linear Schr\"odinger operators with random potentials. When combined with recent work on machine learning theory, our results provide guarantees on the generalization ability of transformer-based neural networks for in-context learning of Schr\"odinger equations.
+ oai:arXiv.org:2601.12587v1
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Frank Cole, Yulong Lu, Shaurya Sehgal
+
+
+ SLAP: Scalable Language-Audio Pretraining with Variable-Duration Audio and Multi-Objective Training
+ https://arxiv.org/abs/2601.12594
+ arXiv:2601.12594v1 Announce Type: cross
+Abstract: Contrastive language-audio pretraining (CLAP) has achieved notable success in learning semantically rich audio representations and is widely adopted for various audio-related tasks. However, current CLAP models face several key limitations. First, they are typically trained on relatively small datasets, often comprising a few million audio samples. Second, existing CLAP models are restricted to short and fixed duration, which constrains their usage in real-world scenarios with variable-duration audio. Third, the standard contrastive training objective operates on global representations, which may hinder the learning of dense, fine-grained audio features. To address these challenges, we introduce Scalable Language-Audio Pretraining (SLAP), which scales language-audio pretraining to 109 million audio-text pairs with variable audio durations and incorporates multiple training objectives. SLAP unifies contrastive loss with additional self-supervised and captioning losses in a single-stage training, facilitating the learning of richer dense audio representations. The proposed SLAP model achieves new state-of-the-art performance on audio-text retrieval and zero-shot audio classification tasks, demonstrating its effectiveness across diverse benchmarks.
+ oai:arXiv.org:2601.12594v1
+ eess.AS
+ cs.AI
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Xinhao Mei, Gael Le Lan, Haohe Liu, Zhaoheng Ni, Varun Nagaraja, Yang Liu, Yangyang Shi, Vikas Chandra
+
+
+ onepot CORE -- an enumerated chemical space to streamline drug discovery, enabled by automated small molecule synthesis and AI
+ https://arxiv.org/abs/2601.12603
+ arXiv:2601.12603v1 Announce Type: cross
+Abstract: The design-make-test-analyze cycle in early-stage drug discovery remains constrained primarily by the "make" step: small-molecule synthesis is slow, costly, and difficult to scale or automate across diverse chemotypes. Enumerated chemical spaces aim to reduce this bottleneck by predefining synthesizable regions of chemical space from available building blocks and reliable reactions, yet existing commercial spaces are still limited by long turnaround times, narrow reaction scope, and substantial manual decision-making in route selection and execution.
+ Here we present the first version of onepot CORE, an enumerated chemical space containing 3.4B molecules and corresponding on-demand synthesis product enabled by an automated synthesis platform and an AI chemist, Phil, that designs, executes, and analyzes experiments. onepot CORE is constructed by (i) selecting a reaction set commonly used in medicinal chemistry, (ii) sourcing and curating building blocks from supplier catalogs, (iii) enumerating candidate products, and (iv) applying ML-based feasibility assessment to prioritize compounds for robust execution. In the current release, the space is supported by seven reactions.
+ We describe an end-to-end workflow - from route selection and automated liquid handling through workup and purification. We further report validation across operational metrics (success rate, timelines, purity, and identity), including NMR confirmation for a representative set of synthesized compounds and assay suitability demonstrated using a series of DPP4 inhibitors. Collectively, onepot CORE illustrates a path toward faster, more reliable access to diverse small molecules, supporting accelerated discovery in pharmaceuticals and beyond.
+ oai:arXiv.org:2601.12603v1
+ physics.chem-ph
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Andrei S. Tyrin, Brandon Wang, Manuel Mu\~noz, Samuel H. Foxman, Daniil A. Boiko
+
+
+ An Eventown Result for Permutations
+ https://arxiv.org/abs/2601.12613
+ arXiv:2601.12613v1 Announce Type: cross
+Abstract: A family of permutations $\mathcal{F} \subseteq S_n$ is even-cycle-intersecting if $\sigma \pi^{-1}$ has an even cycle for all $\sigma,\pi \in \mathcal{F}$. We show that if $\mathcal{F} \subseteq S_n$ is an even-cycle-intersecting family of permutations, then $|\mathcal{F}| \leq 2^{n-1}$, and that equality holds when $n$ is a power of 2 and $\mathcal{F}$ is a double-translate of a Sylow 2-subgroup of $S_n$. This result can be seen as an analogue of the classical eventown problem for subsets and it confirms a conjecture of J\'anos K\"orner on maximum reversing families of the symmetric group. Along the way, we show that the canonically intersecting families of $S_n$ are also the extremal odd-cycle-intersecting families of $S_n$ for all even $n$. While the latter result has less combinatorial significance, its proof uses an interesting new character-theoretic identity that might be of independent interest in algebraic combinatorics.
+ oai:arXiv.org:2601.12613v1
+ math.CO
+ cs.DM
+ math.GR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Nathan Lindzey
+
+
+ Deterministic and probabilistic neural surrogates of global hybrid-Vlasov simulations
+ https://arxiv.org/abs/2601.12614
+ arXiv:2601.12614v1 Announce Type: cross
+Abstract: Hybrid-Vlasov simulations resolve ion-kinetic effects for modeling the solar wind-magnetosphere interaction, but even 5D (2D + 3V) simulations are computationally expensive. We show that graph-based machine learning emulators can learn the spatiotemporal evolution of electromagnetic fields and lower order moments of ion velocity distribution in the near-Earth space environment from four 5D Vlasiator runs performed with identical steady solar wind conditions. The initial ion number density is systematically varied, while the grid spacing is held constant, to scan the ratio of the characteristic ion skin depth to the numerical grid size. Using a graph neural network architecture operating on the 2D spatial simulation grid comprising 670k cells, we demonstrate that both a deterministic forecasting model (Graph-FM) and a probabilistic ensemble forecasting model (Graph-EFM) based on a latent variable formulation are capable of producing accurate predictions of future plasma states. A divergence penalty is incorporated during training to encourage divergence-freeness in the magnetic fields and improve physical consistency. For the probabilistic model, a continuous ranked probability score objective is added to improve the calibration of the ensemble forecasts. When trained, the emulators achieve more than two orders of magnitude speedup in generating the next time step relative to the original simulation on a single GPU compared to 100 CPUs for the Vlasiator runs, while closely matching physical magnetospheric response of the different runs. These results demonstrate that machine learning offers a way to make hybrid-Vlasov simulation tractable for real-time use while providing forecast uncertainty.
+ oai:arXiv.org:2601.12614v1
+ physics.space-ph
+ cs.LG
+ physics.plasm-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Daniel Holmberg, Ivan Zaitsev, Markku Alho, Ioanna Bouri, Fanni Franssila, Haewon Jeong, Minna Palmroth, Teemu Roos
+
+
+ Reorienting off-path Nudged Elastic Bands (RONEB) via Minimum Mode Following
+ https://arxiv.org/abs/2601.12630
+ arXiv:2601.12630v1 Announce Type: cross
+Abstract: Accurate determination of transition states remains central to understanding reaction kinetics. Double-ended methods like the Nudged Elastic Band (NEB) ensure relevant transition states and paths, but incur high computational costs and suffer stagnation on flat or rough potential energy surfaces. Conversely, single-ended eigenmode-following techniques offer efficiency but often cannot be constrained between specific states. Here, we present the Reorienting Off-path Nudged Elastic Bands (RONEB), an adaptive hybrid algorithm that integrates the double-ended nature of the NEB with the acceleration of single-ended Min-Mode Following methods. RONEB provides stability based on the history of the path optimization, relative force triggering, and an alignment-based back-off penalty to dynamically decouple the climbing image from the elastic band constraints. We benchmark the method against the standard Climbing Image NEB (CI-NEB) across the Baker-Chan transition state test set using the PET-MAD machine-learned potential and the OptBench Pt(111) heptamer island surface diffusion set. A Bayesian analysis of the performance data quantifies a median reduction in gradient calls of 46.3% [95% CrI: -54.7%, -36.9%] relative to the baseline, while surface diffusion tests reveal a 28% reduction across 59 metallic rearrangement mechanisms. These results establish RONEB as a highly effective tool for high-throughput automated chemical discovery.
+ oai:arXiv.org:2601.12630v1
+ physics.chem-ph
+ cond-mat.mtrl-sci
+ cs.LG
+ physics.comp-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rohit Goswami (Institute IMX and Lab-COSMO, \'Ecole polytechnique f\'ed\'erale de Lausanne, Science Institute, University of Iceland, Reykjavik, Iceland), Miha Gunde (Science Institute, University of Iceland, Reykjavik, Iceland, Institute Ru{\dj}er Bo\v{s}kovi\'c, Bijeni\v{c}ka 54, 10000 Zagreb, Croatia), Hannes J\'onsson
+
+
+ New Trends in the Stability of Sinkhorn Semigroups
+ https://arxiv.org/abs/2601.12633
+ arXiv:2601.12633v1 Announce Type: cross
+Abstract: Entropic optimal transport problems play an increasingly important role in machine learning and generative modelling. In contrast with optimal transport maps which often have limited applicability in high dimensions, Schr\"odinger bridges can be solved using the celebrated Sinkhorn's algorithm, a.k.a. the iterative proportional fitting procedure. The stability properties of Sinkhorn bridges when the number of iterations tends to infinity are a very active research area in applied probability and machine learning. Traditional proofs of convergence are mainly based on nonlinear versions of Perron-Frobenius theory and related Hilbert projective metric techniques, gradient descent, Bregman divergence techniques and Hamilton-Jacobi-Bellman equations, including propagation of convexity profiles based on coupling diffusions by reflection methods. The objective of this review article is to present, in a self-contained manner, recently developed Sinkhorn/Gibbs-type semigroup analysis based upon contraction coefficients and Lyapunov-type operator-theoretic techniques. These powerful, off-the-shelf semigroup methods are based upon transportation cost inequalities (e.g. log-Sobolev, Talagrand quadratic inequality, curvature estimates), $\phi$-divergences, Kantorovich-type criteria and Dobrushin contraction-type coefficients on weighted Banach spaces as well as Wasserstein distances. This novel semigroup analysis allows one to unify and simplify many arguments in the stability of the Sinkhorn algorithm. It also yields new contraction estimates w.r.t. generalized $\phi$-entropies, as well as weighted total variation norms, Kantorovich criteria and Wasserstein distances.
+ oai:arXiv.org:2601.12633v1
+ math.PR
+ cs.NA
+ math.NA
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pierre Del Moral, Ajay Jasra
+
+
+ Energy-Efficient Prediction in Textile Manufacturing: Enhancing Accuracy and Data Efficiency With Ensemble Deep Transfer Learning
+ https://arxiv.org/abs/2601.12663
+ arXiv:2601.12663v1 Announce Type: cross
+Abstract: Traditional textile factories consume substantial energy, making energy-efficient production optimization crucial for sustainability and cost reduction. Meanwhile, deep neural networks (DNNs), which are effective for factory output prediction and operational optimization, require extensive historical data, posing challenges due to high sensor deployment and data collection costs. To address this, we propose Ensemble Deep Transfer Learning (EDTL), a novel framework that enhances prediction accuracy and data efficiency by integrating transfer learning with an ensemble strategy and a feature alignment layer. EDTL pretrains DNN models on data-rich production lines (source domain) and adapts them to data-limited lines (target domain), reducing dependency on large datasets. Experiments on real-world textile factory datasets show that EDTL improves prediction accuracy by 5.66% and enhances model robustness by 3.96% compared to conventional DNNs, particularly in data-limited scenarios (20%-40% data availability). This research contributes to energy-efficient textile manufacturing by enabling accurate predictions with fewer data requirements, providing a scalable and cost-effective solution for smart production systems.
+ oai:arXiv.org:2601.12663v1
+ eess.SP
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/ACCESS.2025.3551798
+ IEEE Access, Vol. 13, 2025
+ Yan-Chen Chen, Wei-Yu Chiu, Qun-Yu Wang, Jing-Wei Chen, Hao-Ting Zhao
+
+
+ Emergence of Structural Disparities in the Web of Scientific Citations
+ https://arxiv.org/abs/2601.12665
+ arXiv:2601.12665v1 Announce Type: cross
+Abstract: Scientific attention is unevenly distributed, creating inequities in recognition and distorting access to opportunities. Using citations as a proxy, we quantify disparities in attention by gender and institutional prestige. We find that women receive systematically fewer citations than men, and that attention is increasingly concentrated among authors from elite institutions -- patterns not fully explained by underrepresentation alone. To explain these dynamics, we introduce a model of citation network growth that incorporates homophily (tendency to cite similar authors), preferential attachment (favoring highly cited authors) and group size (underrepresentation). The model shows that disparities arise not only from group size imbalances but also from cumulative advantage amplifying biased citation preferences. Importantly, increasing representation alone is often insufficient to reduce disparities. Effective strategies should also include reducing homophily, amplifying the visibility of underrepresented groups, and supporting equitable integration of newcomers. Our findings highlight the challenges of mitigating inequities in asymmetric networks like citations, where recognition flows in one direction. By making visible the mechanisms through which attention is distributed, we contribute to efforts toward a more responsible web of science that is fairer, more transparent, and more inclusive, and that better sustains innovation and knowledge production.
+ oai:arXiv.org:2601.12665v1
+ physics.soc-ph
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Buddhika Nettasinghe, Nazanin Alipourfard, Vikram Krishnamurthy, Kristina Lerman
+
+
+ An efficient numerical method for simulating two-dimensional non-periodic metasurfaces
+ https://arxiv.org/abs/2601.12674
+ arXiv:2601.12674v1 Announce Type: cross
+Abstract: Metasurfaces are extremely useful for controlling and manipulating electromagnetic waves. Full-wave numerical simulation is highly desired for their design and optimization, but it is notoriously difficult, even for two-dimensional metasurfaces, when they comprise a huge number of subwavelength elements. This paper focuses on two-dimensional non-periodic metasurfaces that contain only a relatively small number of distinct subwavelength elements. We develop an efficient numerical method based on Neumann-to-Dirichlet operators, the finite element method and local function expansions. Our method drastically reduces the total number of unknowns and is capable of simulating two-dimensional metasurfaces with $10^{5}$ subwavelength elements on a personal computer. Numerical examples demonstrate that the method maintains high accuracy while offering significant advantages in both computational time and memory usage compared to the classical full-domain finite element method, making it particularly suited for the analysis of large metasurfaces.
+ oai:arXiv.org:2601.12674v1
+ physics.optics
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fuhao Liu, Ya Yan Lu
+
+
+ Improving Audio Question Answering with Variational Inference
+ https://arxiv.org/abs/2601.12700
+ arXiv:2601.12700v1 Announce Type: cross
+Abstract: Variational inference (VI) provides a principled framework for estimating posterior distributions over model parameters, enabling explicit modeling of weight uncertainty during optimization. By capturing this uncertainty, VI improves the reliability of predictions, yielding better calibrated outputs. In this work, we investigate the benefits of VI for challenging multimodal understanding and reasoning by applying the Improved Variational Online Newton (IVON), a recent VI optimizer, to fine-tuning a multimodal large language model on audio question answering tasks. Our results show that VI not only enhances predictive accuracy but also significantly improves calibration, reducing the model's overconfidence. These advances further support risk-sensitive applications such as selective prediction, where reliable confidence estimates are crucial.
+ oai:arXiv.org:2601.12700v1
+ eess.AS
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Haolin Chen
+
+
+ Relativistic Hamiltonian as an emergent structure from information geometry
+ https://arxiv.org/abs/2601.12764
+ arXiv:2601.12764v1 Announce Type: cross
+Abstract: We show that the relativistic energy-momentum relation can emerge as an effective ensemble-averaged structure from a multiplicative Hamiltonian when fluctuations of an auxiliary parameter are treated using maximum entropy inference. The resulting probability distribution is uniquely fixed by scale-invariant constraints, which are shown to arise naturally from the Fisher-Rao geometry of the associated statistical manifold. Within this information-geometric framework, the relativistic dispersion relation appears without initially imposing Lorentz symmetry, but as a consequence of statistical averaging and geometric invariance.
+ oai:arXiv.org:2601.12764v1
+ math-ph
+ cs.IT
+ math.IT
+ math.MP
+ physics.class-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Sikarin Yoo-Kong
+
+
+ SciHorizon-GENE: Benchmarking LLM for Life Sciences Inference from Gene Knowledge to Functional Understanding
+ https://arxiv.org/abs/2601.12805
+ arXiv:2601.12805v1 Announce Type: cross
+Abstract: Large language models (LLMs) have shown growing promise in biomedical research, particularly for knowledge-driven interpretation tasks. However, their ability to reliably reason from gene-level knowledge to functional understanding, a core requirement for knowledge-enhanced cell atlas interpretation, remains largely underexplored. To address this gap, we introduce SciHorizon-GENE, a large-scale gene-centric benchmark constructed from authoritative biological databases. The benchmark integrates curated knowledge for over 190K human genes and comprises more than 540K questions covering diverse gene-to-function reasoning scenarios relevant to cell type annotation, functional interpretation, and mechanism-oriented analysis. Motivated by behavioral patterns observed in preliminary examinations, SciHorizon-GENE evaluates LLMs along four biologically critical perspectives: research attention sensitivity, hallucination tendency, answer completeness, and literature influence, explicitly targeting failure modes that limit the safe adoption of LLMs in biological interpretation pipelines. We systematically evaluate a wide range of state-of-the-art general-purpose and biomedical LLMs, revealing substantial heterogeneity in gene-level reasoning capabilities and persistent challenges in generating faithful, complete, and literature-grounded functional interpretations. Our benchmark establishes a systematic foundation for analyzing LLM behavior at the gene scale and offers insights for model selection and development, with direct relevance to knowledge-enhanced biological interpretation.
+ oai:arXiv.org:2601.12805v1
+ q-bio.GN
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaohan Huang, Meng Xiao, Chuan Qin, Qingqing Long, Jinmiao Chen, Yuanchun Zhou, Hengshu Zhu
+
+
+ Perfect codes in weakly metric association schemes
+ https://arxiv.org/abs/2601.12818
+ arXiv:2601.12818v1 Announce Type: cross
+Abstract: The Lloyd Theorem of (Sol\'e, 1989) is combined with the Schwartz-Zippel Lemma of theoretical computer science to derive non-existence results for perfect codes in the Lee metric, NRT metric, mixed Hamming metric, and for the sum-rank distance. The proofs are based on asymptotic enumeration of integer partitions. The framework is the new concept of {\em polynomial} weakly metric association schemes.
+ A connection between this notion and the recent theory of multivariate P-polynomial schemes of (Bannai et al., 2025) and of $m$-distance regular graphs (Bernard et al., 2025) is pointed out.
+ oai:arXiv.org:2601.12818v1
+ math.CO
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Minjia Shi, Jing Wang, Patrick Sol\'e
+
+
+ Cognition spaces: natural, artificial, and hybrid
+ https://arxiv.org/abs/2601.12837
+ arXiv:2601.12837v1 Announce Type: cross
+Abstract: Cognitive processes are realized across an extraordinary range of natural, artificial, and hybrid systems, yet there is no unified framework for comparing their forms, limits, and unrealized possibilities. Here, we propose a cognition space approach that replaces narrow, substrate-dependent definitions with a comparative representation based on organizational and informational dimensions. Within this framework, cognition is treated as a graded capacity to sense, process, and act upon information, allowing systems as diverse as cells, brains, artificial agents, and human-AI collectives to be analyzed within a common conceptual landscape. We introduce and examine three cognition spaces -- basal aneural, neural, and human-AI hybrid -- and show that their occupation is highly uneven, with clusters of realized systems separated by large unoccupied regions. We argue that these voids are not accidental but reflect evolutionary contingencies, physical constraints, and design limitations. By focusing on the structure of cognition spaces rather than on categorical definitions, this approach clarifies the diversity of existing cognitive systems and highlights hybrid cognition as a promising frontier for exploring novel forms of complexity beyond those produced by biological evolution.
+ oai:arXiv.org:2601.12837v1
+ q-bio.NC
+ cs.AI
+ cs.HC
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ricard Sol\'e, Luis F Seoane, Jordi Pla-Mauri, Michael Timothy Bennett, Michael E. Hochberg, Michael Levin
+
+
+ Accurate Simulation Pipeline for Passive Single-Photon Imaging
+ https://arxiv.org/abs/2601.12850
+ arXiv:2601.12850v1 Announce Type: cross
+Abstract: Single-Photon Avalanche Diodes (SPADs) are new and promising imaging sensors. These sensors are sensitive enough to detect individual photons hitting each pixel, with extreme temporal resolution and without readout noise. Thus, SPADs stand out as an optimal choice for low-light imaging. Due to the high price and limited availability of SPAD sensors, the demand for an accurate data simulation pipeline is substantial. Indeed, the scarcity of SPAD datasets hinders the development of SPAD-specific processing algorithms and impedes the training of learning-based solutions.
+ In this paper, we present a comprehensive SPAD simulation pipeline and validate it with multiple experiments using two recent commercial SPAD sensors. Our simulator is used to generate the SPAD-MNIST, a single-photon version of the seminal MNIST dataset, to investigate the effectiveness of convolutional neural network (CNN) classifiers on reconstructed fluxes, even at extremely low light conditions, e.g., 5 mlux. We also assess the performance of classifiers exclusively trained on simulated data on real images acquired from SPAD sensors at different light conditions. The synthetic dataset encompasses different SPAD imaging modalities and is made available for download. Project page: https://boracchi.faculty.polimi.it/Projects/SPAD-MNIST.html.
+ oai:arXiv.org:2601.12850v1
+ physics.ins-det
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/JSEN.2025.3645459
+ Aleksi Suonsivu, Lauri Salmela, Leevi Uosukainen, Edoardo Peretti, Radu Ciprian Bilcu, Giacomo Boracchi
+
+
+ Angular Sensing by Highly Reconfigurable Pixel Antennas with Joint Radiating Aperture and Feeding Ports Reconfiguration
+ https://arxiv.org/abs/2601.12867
+ arXiv:2601.12867v1 Announce Type: cross
+Abstract: Angular sensing capability is realized using highly reconfigurable pixel antenna (HRPA) with joint radiating aperture and feeding ports reconfiguration. Pixel antennas represent a general class of reconfigurable antenna designs in which the radiating surface, regardless of its shape or size, is divided into sub-wavelength elements called pixels. Each pixel is connected to its neighboring elements through radio frequency switches. By controlling pixel connections, the pixel antenna topology can be flexibly adjusted so that the resulting radiation pattern can be reconfigured. However, conventional pixel antennas have only a single, fixed-position feeding port, which is not efficient for angular sensing. Therefore, in this work, we further extend the reconfigurability of pixel antennas by introducing the HRPA, which enables both geometry control of the pixel antenna and switching of its feeding ports. The model of the proposed HRPA, including both circuit and radiation parameters, is derived. A codebook is then defined, consisting of pixel connection states and feeding port positions for each sensing area. Based on this codebook, an efficient optimization approach is developed to minimize the Cram\'er-Rao lower bound (CRLB) and obtain the optimal HRPA geometries for angular sensing within a given area. Numerical results show that the HRPA reduces the angle estimation error by more than 50% across the full three-dimensional sphere when compared with a conventional uniform planar array of the same size. This demonstrates the effectiveness of the proposed approach and highlights the potential of HRPA for integrated sensing and communication systems.
+ oai:arXiv.org:2601.12867v1
+ eess.SP
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zixiang Han, Hanning Wang, Shiwen Tang, Yujie Zhang
+
+
+ Physics-Aware RIS Codebook Compilation for Near-Field Beam Focusing under Mutual Coupling and Specular Reflections
+ https://arxiv.org/abs/2601.12982
+ arXiv:2601.12982v1 Announce Type: cross
+Abstract: Next-generation wireless networks are envisioned to achieve reliable, low-latency connectivity within environments characterized by strong multipath and severe channel variability. Programmable wireless environments (PWEs) address this challenge by enabling deterministic control of electromagnetic (EM) propagation through software-defined reconfigurable intelligent surfaces (RISs). However, effectively configuring RISs in real time remains a major bottleneck, particularly under near-field conditions where mutual coupling and specular reflections alter the intended response. To overcome this limitation, this paper introduces MATCH, a physics-based codebook compilation algorithm that explicitly accounts for the EM coupling among RIS unit cells and the reflective interactions with surrounding structures, ensuring that the resulting codebooks remain consistent with the physical characteristics of the environment. Finally, MATCH is evaluated under a full-wave simulation framework incorporating mutual coupling and secondary reflections, demonstrating its ability to concentrate scattered energy within the focal region, confirming that physics-consistent, codebook-based optimization constitutes an effective approach for practical and efficient RIS configuration.
+ oai:arXiv.org:2601.12982v1
+ eess.SP
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Alexandros I. Papadopoulos, Maria Anna Pistela, Dimitrios Tyrovolas, Antonios Lalas, Konstantinos Votis, Sotiris Ioannidis, George K. Karagiannidis, Christos Liaskos
+
+
+ Beyond Visual Realism: Toward Reliable Financial Time Series Generation
+ https://arxiv.org/abs/2601.12990
+ arXiv:2601.12990v1 Announce Type: cross
+Abstract: Generative models for financial time series often create data that look realistic and even reproduce stylized facts such as fat tails or volatility clustering. However, these apparent successes break down under trading backtests: models like GANs or WGAN-GP frequently collapse, yielding extreme and unrealistic results that make the synthetic data unusable in practice. We identify the root cause in the neglect of financial asymmetry and rare tail events, which strongly affect market risk but are often overlooked by objectives focusing on distribution matching. To address this, we introduce the Stylized Facts Alignment GAN (SFAG), which converts key stylized facts into differentiable structural constraints and jointly optimizes them with adversarial loss. This multi-constraint design ensures that generated series remain aligned with market dynamics not only in plots but also in backtesting. Experiments on the Shanghai Composite Index (2004--2024) show that while baseline GANs produce unstable and implausible trading outcomes, SFAG generates synthetic data that preserve stylized facts and support robust momentum strategy performance. Our results highlight that structure-preserving objectives are essential to bridge the gap between superficial realism and practical usability in financial generative modeling.
+ oai:arXiv.org:2601.12990v1
+ q-fin.ST
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fan Zhang, Jiabin Luo, Zheng Zhang, Shuanghong Huang, Zhipeng Liu, Yu Chen
+
+
+ When Is Distributed Nonlinear Aggregation Private? Optimality and Information-Theoretical Bounds
+ https://arxiv.org/abs/2601.13001
+ arXiv:2601.13001v1 Announce Type: cross
+Abstract: Nonlinear aggregation is central to modern distributed systems, yet its privacy behavior is far less understood than that of linear aggregation. Unlike linear aggregation where mature mechanisms can often suppress information leakage, nonlinear operators impose inherent structural limits on what privacy guarantees are theoretically achievable when the aggregate must be computed exactly. This paper develops a unified information-theoretic framework to characterize privacy leakage in distributed nonlinear aggregation under a joint adversary that combines passive (honest-but-curious) corruption and eavesdropping over communication channels.
+ We cover two broad classes of nonlinear aggregates: order-based operators (maximum/minimum and top-$K$) and robust aggregation (median/quantiles and trimmed mean). We first derive fundamental lower bounds on leakage that hold without sacrificing accuracy, thereby identifying the minimum unavoidable information revealed by the computation and the transcript. We then propose simple yet effective privacy-preserving distributed algorithms, and show that with appropriate randomized initialization and parameter choices, our proposed approaches can attain the derived optimal bounds for the considered operators. Extensive experiments validate the tightness of the bounds and demonstrate that network topology and key algorithmic parameters (including the stepsize) govern the observed leakage in line with the theoretical analysis, yielding actionable guidelines for privacy-preserving nonlinear aggregation.
+ oai:arXiv.org:2601.13001v1
+ eess.SP
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Wenrui Yu, Jaron Skovsted Gundersen, Richard Heusdens, Qiongxiu Li
+
+
+ Faster 3-colouring algorithm for graphs of diameter 3
+ https://arxiv.org/abs/2601.13072
+ arXiv:2601.13072v1 Announce Type: cross
+Abstract: We show that given an $n$-vertex graph $G$ of diameter 3 we can decide if $G$ is $3$-colourable in time $2^{O(n^{2/3-\varepsilon})}$ for any $\varepsilon < 1/33$. This improves on the previous best algorithm of $2^{O((n\log n)^{2/3})}$ from D\k{e}bski, Piecyk and Rz\k{a}\.zewski [Faster 3-coloring of small-diameter graphs, ESA 2021].
+ oai:arXiv.org:2601.13072v1
+ math.CO
+ cs.DM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Carla Groenland, Hidde Koerts, Sophie Spirkl
+
+
+ Polychronous Wave Computing: Timing-Native Address Selection in Spiking Networks
+ https://arxiv.org/abs/2601.13079
+ arXiv:2601.13079v1 Announce Type: cross
+Abstract: Spike timing offers a combinatorial address space, suggesting that timing-based spiking inference can be executed as lookup and routing rather than as dense multiply--accumulate. Yet most neuromorphic and photonic systems still digitize events into timestamps, bins, or rates and then perform selection in clocked logic. We introduce Polychronous Wave Computing (PWC), a timing-native address-selection primitive that maps relative spike latencies directly to a discrete output route in the wave domain. Spike times are phase-encoded in a rotating frame and processed by a programmable multiport interferometer that evaluates K template correlations in parallel; a driven--dissipative winner-take-all stage then performs a physical argmax, emitting a one-hot output port. We derive the operating envelope imposed by phase wrapping and mutual coherence, and collapse timing jitter, static phase mismatch, and dephasing into a single effective phase-noise budget whose induced winner--runner-up margin predicts boundary-first failures and provides an intensity-only calibration target. Simulations show that nonlinear competition improves routing fidelity compared with noisy linear intensity readout, and that hardware-in-the-loop phase tuning rescues a temporal-order gate from 55.9% to 97.2% accuracy under strong static mismatch. PWC provides a fast routing coprocessor for LUT-style spiking networks and sparse top-1 gates (e.g., mixture-of-experts routing) across polaritonic, photonic, and oscillator platforms.
+ oai:arXiv.org:2601.13079v1
+ cond-mat.dis-nn
+ cs.LG
+ cs.NE
+ physics.optics
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Natalia G. Berloff
+
+
+ Approximate full conformal prediction in RKHS
+ https://arxiv.org/abs/2601.13102
+ arXiv:2601.13102v1 Announce Type: cross
+Abstract: Full conformal prediction is a framework that implicitly formulates distribution-free confidence prediction regions for a wide range of estimators. However, a classical limitation of the full conformal framework is the computation of the confidence prediction regions, which is usually impossible since it requires training infinitely many estimators (for real-valued prediction for instance). The main purpose of the present work is to describe a generic strategy for designing a tight approximation to the full conformal prediction region that can be efficiently computed. Along with this approximate confidence region, a theoretical quantification of the tightness of this approximation is developed, depending on the smoothness assumptions on the loss and score functions. The new notion of thickness is introduced for quantifying the discrepancy between the approximate confidence region and the full conformal one.
+ oai:arXiv.org:2601.13102v1
+ stat.ML
+ cs.LG
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Davidson Lova Razafindrakoto, Alain Celisse, J\'er\^ome Lacaille
+
+
+ Content Leakage in LibriSpeech and Its Impact on the Privacy Evaluation of Speaker Anonymization
+ https://arxiv.org/abs/2601.13107
+ arXiv:2601.13107v1 Announce Type: cross
+Abstract: Speaker anonymization aims to conceal a speaker's identity, without considering the linguistic content. In this study, we reveal a weakness of LibriSpeech, the dataset that is commonly used to evaluate anonymizers: the books read by the LibriSpeech speakers are so distinct that speakers can be identified by their vocabularies. Even perfect anonymizers cannot prevent this identity leakage. The EdAcc dataset is better in this regard: only a few speakers can be identified through their vocabularies, encouraging the attacker to look elsewhere for the identities of the anonymized speakers. EdAcc also comprises spontaneous speech and more diverse speakers, complementing LibriSpeech and giving more insights into how anonymizers work.
+ oai:arXiv.org:2601.13107v1
+ eess.AS
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Carlos Franzreb, Arnab Das, Tim Polzehl, Sebastian M\"oller
+
+
+ Forecasting Continuum Intensity for Solar Active Region Emergence Prediction using Transformers
+ https://arxiv.org/abs/2601.13144
+ arXiv:2601.13144v1 Announce Type: cross
+Abstract: Early and accurate prediction of solar active region (AR) emergence is crucial for space weather forecasting. Building on established Long Short-Term Memory (LSTM) based approaches for forecasting the continuum intensity decrease associated with AR emergence, this work expands the modeling with new architectures and targets. We investigate a sliding-window Transformer architecture to forecast continuum intensity evolution up to 12 hours ahead using data from 46 ARs observed by SDO/HMI. We conduct a systematic ablation study to evaluate two key components: (1) the inclusion of a temporal 1D convolutional (Conv1D) front-end and (2) a novel `Early Detection' architecture featuring attention biases and a timing-aware loss function. Our best-performing model, combining the Early Detection architecture without the Conv1D layer, achieved a Root Mean Square Error (RMSE) of 0.1189 (representing a 10.6% improvement over the LSTM baseline) and an average advance warning time of 4.73 hours (timing difference of -4.73h), even under a stricter emergence criterion than previous studies. While the Transformer demonstrates superior aggregate timing and accuracy, we note that this high-sensitivity detection comes with increased variance compared to smoother baseline models. However, this volatility is a necessary trade-off for operational warning systems: the model's ability to detect micro-changes in precursor signals enables significantly earlier detection, outweighing the cost of increased noise. Our results demonstrate that Transformer architectures modified with early detection biases, when used without temporal smoothing layers, provide a high-sensitivity alternative for forecasting AR emergence that prioritizes advance warning over statistical smoothness.
+ oai:arXiv.org:2601.13144v1
+ astro-ph.SR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Jonas Tirona, Sarang Patil, Spiridon Kasapis, Eren Dogan, John Stefan, Irina N. Kitiashvili, Alexander G. Kosovichev, Mengjia Xu
+
+
+ SolARED: Solar Active Region Emergence Dataset for Machine Learning Aided Predictions
+ https://arxiv.org/abs/2601.13145
+ arXiv:2601.13145v1 Announce Type: cross
+Abstract: The development of accurate forecasts of solar eruptive activity has become increasingly important for preventing potential impacts on space technologies and exploration. Therefore, it is crucial to detect Active Regions (ARs) before they start forming on the solar surface. This will enable the development of early-warning capabilities for upcoming space weather disturbances. For this reason, we prepared the Solar Active Region Emergence Dataset (SolARED). The dataset is derived from full-disk maps of the Doppler velocity, magnetic field, and continuum intensity, obtained by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). SolARED includes time series of remapped, tracked, and binned data that characterize the evolution of acoustic power of solar oscillations, unsigned magnetic flux, and continuum intensity for 50 large ARs before, during, and after their emergence on the solar surface, as well as surrounding areas observed on the solar disc between 2010 and 2023. The resulting ML-ready SolARED dataset is designed to support enhancements of predictive capabilities, enabling the development of operational forecasts for the emergence of active regions. The SolARED dataset is available at https://sun.njit.edu/sarportal/, through an interactive visualization web application.
+ oai:arXiv.org:2601.13145v1
+ astro-ph.SR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Spiridon Kasapis, Eren Dogan, Irina N. Kitiashvili, Alexander G. Kosovichev, John T. Stefan, Jake D. Butler, Jonas Tirona, Sarang Patil, Mengjia Xu
+
+
+ Global stability of a Hebbian/anti-Hebbian network for principal subspace learning
+ https://arxiv.org/abs/2601.13170
+ arXiv:2601.13170v1 Announce Type: cross
+Abstract: Biological neural networks self-organize according to local synaptic modifications to produce stable computations. How modifications at the synaptic level give rise to such computations at the network level remains an open question. Pehlevan et al. [Neur. Comp. 27 (2015), 1461--1495] proposed a model of a self-organizing neural network with Hebbian and anti-Hebbian synaptic updates that implements an algorithm for principal subspace analysis; however, global stability of the nonlinear synaptic dynamics has not been established. Here, for the case that the feedforward and recurrent weights evolve at the same timescale, we prove global stability of the continuum limit of the synaptic dynamics and show that the dynamics evolve in two phases. In the first phase, the synaptic weights converge to an invariant manifold where the `neural filters' are orthonormal. In the second phase, the synaptic dynamics follow the gradient flow of a non-convex potential function whose minima correspond to neural filters that span the principal subspace of the input data.
+ oai:arXiv.org:2601.13170v1
+ q-bio.NC
+ cs.NE
+ math.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ David Lipshutz, Robert J. Lipshutz
+
+
+ Empirical Risk Minimization with $f$-Divergence Regularization
+ https://arxiv.org/abs/2601.13191
+ arXiv:2601.13191v1 Announce Type: cross
+Abstract: In this paper, the solution to the empirical risk minimization problem with $f$-divergence regularization (ERM-$f$DR) is presented and conditions under which the solution also serves as the solution to the minimization of the expected empirical risk subject to an $f$-divergence constraint are established. The proposed approach extends applicability to a broader class of $f$-divergences than previously reported and yields theoretical results that recover previously known results. Additionally, the difference between the expected empirical risk of the ERM-$f$DR solution and that of its reference measure is characterized, providing insights into previously studied cases of $f$-divergences. A central contribution is the introduction of the normalization function, a mathematical object that is critical in both the dual formulation and practical computation of the ERM-$f$DR solution. This work presents an implicit characterization of the normalization function as a nonlinear ordinary differential equation (ODE), establishes its key properties, and subsequently leverages them to construct a numerical algorithm for approximating the normalization factor under mild assumptions. Further analysis demonstrates structural equivalences between ERM-$f$DR problems with different $f$-divergences via transformations of the empirical risk. Finally, the proposed algorithm is used to compute the training and test risks of ERM-$f$DR solutions under different $f$-divergence regularizers. This numerical example highlights the practical implications of choosing different functions $f$ in ERM-$f$DR problems.
+ oai:arXiv.org:2601.13191v1
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Francisco Daunas, I\~naki Esnaola, Samir M. Perlaza, H. Vincent Poor
+
+
+ Quantum Data Structure for Range Minimum Query
+ https://arxiv.org/abs/2601.13195
+ arXiv:2601.13195v1 Announce Type: cross
+Abstract: Given an array $a[1..n]$, the Range Minimum Query (RMQ) problem is to maintain a data structure that supports RMQ queries: given a range $[l, r]$, find the index of the minimum element among $a[l..r]$, i.e., $\operatorname{argmin}_{i \in [l, r]} a[i]$. In this paper, we propose a quantum data structure that supports RMQ queries and range updates, with an optimal time complexity $\widetilde \Theta(\sqrt{nq})$ for performing $q = O(n)$ operations without preprocessing, compared to the classical $\widetilde\Theta(n+q)$. As an application, we obtain a time-efficient quantum algorithm for $k$-minimum finding without the use of quantum random access memory.
+ oai:arXiv.org:2601.13195v1
+ quant-ph
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.jcss.2026.103756
+ Journal of Computer and System Sciences, 103756, 2026
+ Qisheng Wang, Zhean Xu, Zhicheng Zhang
+
+
+ Decentralized Cooperative Beamforming for BDRIS-Assisted Cell-Free MIMO OFDM Systems
+ https://arxiv.org/abs/2601.13201
+ arXiv:2601.13201v1 Announce Type: cross
+Abstract: In this paper, a wideband cell-free multi-stream multi-user Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) system is considered operating within a smart wireless environment enabled by multiple Beyond Diagonal Reconfigurable Intelligent Surfaces (BDRISs). A novel decentralized active and passive beamforming framework, robust to imperfect channel state availability and with minimal cooperation among the system's multiple Base Stations (BSs) for deciding the final configurations of the shared BDRISs, is proposed, which aims to substantially reduce the overhead inherent in centralized solutions necessitating a central processing unit of high computational power. By considering a Dynamic Group-Connected (DGC) BDRIS architecture with frequency-selective responses per unit element, we formulate the system's sum-rate maximization problem with respect to the tunable capacitances and permutation matrices of the BDRISs as well as the precoding matrices of the BSs, which is solved via successive concave approximation and alternating projections as well as consensus-based updates for the BDRISs' design. Through extensive simulation results, it is showcased that the proposed robust decentralized cooperative approach with diverse BDRIS architectures outperforms non-cooperation benchmarks. It is also demonstrated that the considered DGC BDRIS architecture is able to provide sum-rate performance gains sufficiently close to the more complex fully-connected BDRIS structure.
+ oai:arXiv.org:2601.13201v1
+ eess.SP
+ cs.ET
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Konstantinos D. Katsanos, George C. Alexandropoulos
+
+
+ Hierarchical Sparse Vector Transmission for Ultra Reliable and Low Latency Communications
+ https://arxiv.org/abs/2601.13204
+ arXiv:2601.13204v1 Announce Type: cross
+Abstract: Sparse vector transmission (SVT) is a promising candidate technology for achieving ultra-reliable low-latency communication (URLLC). In this paper, a hierarchical SVT scheme is proposed for multi-user URLLC scenarios. The hierarchical SVT scheme partitions the transmitted bits into common and private parts. The common information is conveyed by the indices of non-zero sections in a sparse vector, while each user's private information is embedded into non-zero blocks with specific block lengths. At the receiver, the common bits are first recovered from the detected non-zero sections, followed by user-specific private bits decoding based on the corresponding non-zero block indices. Simulation results show the proposed scheme outperforms state-of-the-art SVT schemes in terms of block error rate.
+ oai:arXiv.org:2601.13204v1
+ eess.SP
+ cs.IT
+ eess.IV
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yanfeng Zhang, Xi'an Fan, Jinkai Zheng, Xiaoye Jing, Weiwei Yang, Xu Zhu
+
+
+ Modelling viable supply networks with cooperative adaptive financing
+ https://arxiv.org/abs/2601.13210
+ arXiv:2601.13210v1 Announce Type: cross
+Abstract: We propose a financial liquidity policy sharing method for firm-to-firm supply networks, introducing a scalable autonomous control function for viable complex adaptive supply networks. Cooperation and competition in supply chains are reconciled through overlapping collaborative sets, making firms interdependent and enabling distributed risk governance. How cooperative range - visibility - affects viability is studied using dynamic complex adaptive systems modelling. We find that viability requires cooperation; visibility and viability grow together in scale-free supply networks; and distributed control, where firms only have limited partner information, outperforms centralised control. This suggests that policy toward network viability should implement distributed supply chain financial governance, supporting interfirm collaboration, to enable autonomous control.
+ oai:arXiv.org:2601.13210v1
+ physics.soc-ph
+ cs.SI
+ cs.SY
+ econ.TH
+ eess.SY
+ nlin.AO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yaniv Proselkov, Liming Xu, Alexandra Brintrup
+
+
+ A functionally reversible probabilistic computing architecture enabled by interactions of current-controlled magnetic devices
+ https://arxiv.org/abs/2601.13229
+ arXiv:2601.13229v1 Announce Type: cross
+Abstract: Probabilistic computers replace logic gates with networks of interacting random variables, creating bidirectional systems that can back-derive inputs from outputs. Such architectures enable efficient generation of random samples, implementations of novel algorithms, and natural solutions to classically hard problems such as prime factorization. We present a new physical implementation for these networks: ferromagnetic disks whose magnetization switching process is triggered by current pulses, skewed by external magnetic fields, and randomized by ambient thermal noise. We show that geometry-dependent magnetostatic interactions between these magnetic cells lead to system behavior that emulates deterministic logic gates. Furthermore, by chaining multiple "gates," we achieve a highly accurate bidirectional one-bit full-adder, a proof of concept for complex multi-gate logic functions with reversible information flow. This analog magnetic probabilistic computer methodology improves on other implementations in speed, tunability, and energy efficiency, thereby enabling a powerful new pathway towards practical solution of classically hard problems.
+ oai:arXiv.org:2601.13229v1
+ cond-mat.mes-hall
+ cs.ET
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Shreyes Nallan, Jian-Gang Zhu
+
+
+ Pixelwise Uncertainty Quantification of Accelerated MRI Reconstruction
+ https://arxiv.org/abs/2601.13236
+ arXiv:2601.13236v1 Announce Type: cross
+Abstract: Parallel imaging techniques reduce magnetic resonance imaging (MRI) scan time but image quality degrades as the acceleration factor increases. In clinical practice, conservative acceleration factors are chosen because no mechanism exists to automatically assess the diagnostic quality of undersampled reconstructions. This work introduces a general framework for pixel-wise uncertainty quantification in parallel MRI reconstructions, enabling automatic identification of unreliable regions without access to any ground-truth reference image. Our method integrates conformal quantile regression with image reconstruction methods to estimate statistically rigorous pixel-wise uncertainty intervals. We trained and evaluated our model on Cartesian undersampled brain and knee data obtained from the fastMRI dataset using acceleration factors ranging from 2 to 10. An end-to-end Variational Network was used for image reconstruction. Quantitative experiments demonstrate strong agreement between predicted uncertainty maps and true reconstruction error. Using our method, the corresponding Pearson correlation coefficient was higher than 90% at acceleration levels at and above four-fold; whereas it dropped to less than 70% when the uncertainty was computed using a simpler heuristic notion (magnitude of the residual). Qualitative examples further show that the uncertainty maps based on quantile regression capture the magnitude and spatial distribution of reconstruction errors across acceleration factors, with regions of elevated uncertainty aligning with pathologies and artifacts. The proposed framework enables evaluation of reconstruction quality without access to fully-sampled ground-truth reference images. It represents a step toward adaptive MRI acquisition protocols that may be able to dynamically balance scan time and diagnostic reliability.
+ oai:arXiv.org:2601.13236v1
+ eess.IV
+ cs.AI
+ physics.med-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Ilias I. Giannakopoulos, Lokesh B Gautham Muthukumar, Yvonne W. Lui, Riccardo Lattanzi
+
+
+ Towards Simple and Useful One-Time Programs in the Quantum Random Oracle Model
+ https://arxiv.org/abs/2601.13258
+ arXiv:2601.13258v1 Announce Type: cross
+Abstract: We construct simulation-secure one-time memories (OTM) in the random oracle model, and present a plausible argument for their security against quantum adversaries with bounded and adaptive depth. Our contributions include: (1) A simple scheme where we use only single-qubit Wiesner states and conjunction obfuscation (constructible from LPN): no complex entanglement or quantum cryptography is required. (2) A new POVM bound where we prove that any measurement achieving $(1 - \epsilon)$ success on one basis has conjugate-basis guessing probability at most $\frac{1}{2m} + O(\epsilon^\frac{1}{4})$. (3) Simulation-secure OTMs in the quantum random oracle model where an adversary can only query the random oracle classically. (4) Adaptive depth security where, via an informal application of a lifting theorem from Arora et al., we conjecture security against adversaries with polynomial quantum circuit depth between random oracle queries.
+ Security against adaptive, depth-bounded, quantum adversaries captures many realistic attacks on OTMs built from single-qubit states; our work thus paves the way for practical and truly secure one-time programs. Moreover, depth bounded adaptive adversarial models may allow for encoding one-time memories into error corrected memory states, opening the door to implementations of one-time programs which persist for long periods of time.
+ oai:arXiv.org:2601.13258v1
+ quant-ph
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Lev Stambler
+
+
+ AI Skills Improve Job Prospects: Causal Evidence from a Hiring Experiment
+ https://arxiv.org/abs/2601.13286
+ arXiv:2601.13286v1 Announce Type: cross
+Abstract: The growing adoption of artificial intelligence (AI) technologies has heightened interest in the labour market value of AI-related skills, yet causal evidence on their role in hiring decisions remains scarce. This study examines whether AI skills serve as a positive hiring signal and whether they can offset conventional disadvantages such as older age or lower formal education. We conduct an experimental survey with 1,700 recruiters from the United Kingdom and the United States. Using a paired conjoint design, recruiters evaluated hypothetical candidates represented by synthetically designed resumes. Across three occupations - graphic designer, office assistant, and software engineer - AI skills significantly increase interview invitation probabilities by approximately 8 to 15 percentage points. AI skills also partially or fully offset disadvantages related to age and lower education, with effects strongest for office assistants, where formal AI certification plays an additional compensatory role. Effects are weaker for graphic designers, consistent with more skeptical recruiter attitudes toward AI in creative work. Finally, recruiters' own background and AI usage significantly moderate these effects. Overall, the findings demonstrate that AI skills function as a powerful hiring signal and can mitigate traditional labour market disadvantages, with implications for workers' skill acquisition strategies and firms' recruitment practices.
+ oai:arXiv.org:2601.13286v1
+ econ.GN
+ cs.AI
+ q-fin.EC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Fabian Stephany, Ole Teutloff, Angelo Leone
+
+
+ The table maker's quantum search
+ https://arxiv.org/abs/2601.13306
+ arXiv:2601.13306v1 Announce Type: cross
+Abstract: We show that quantum search can be used to compute the hardness to round an elementary function, that is, to determine the minimum working precision required to compute the values of an elementary function correctly rounded to a target precision of $n$ digits for all possible precision-$n$ floating-point inputs in a given interval. For elementary functions $f$ related to the exponential function, quantum search takes time $\tilde O(2^{n/2} \log (1/\delta))$ to return, with probability $1-\delta$, the hardness to round $f$ over all $n$-bit floating-point inputs in a given binade. For periodic elementary functions in large binades, standalone quantum search yields an asymptotic speedup over the best known classical algorithms and heuristics.
+ oai:arXiv.org:2601.13306v1
+ quant-ph
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Stefanos Kourtis
+
+
+ Scaling laws for amplitude surrogates
+ https://arxiv.org/abs/2601.13308
+ arXiv:2601.13308v1 Announce Type: cross
+Abstract: Scaling laws describing the dependence of neural network performance on the amount of training data, the compute spent, and the network size have emerged across a huge variety of machine learning tasks and datasets. In this work, we systematically investigate these scaling laws in the context of amplitude surrogates for particle physics. We show that the scaling coefficients are connected to the number of external particles of the process. Our results demonstrate that scaling laws are a useful tool to achieve desired precision targets.
+ oai:arXiv.org:2601.13308v1
+ hep-ph
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Henning Bahl, Victor Bres\'o-Pla, Anja Butter, Joaqu\'in Iturriza Ramirez
+
+
+ Improving Geopolitical Forecasts with Bayesian Networks
+ https://arxiv.org/abs/2601.13362
+ arXiv:2601.13362v1 Announce Type: cross
+Abstract: This study explores how Bayesian networks (BNs) can improve forecast accuracy compared to logistic regression and recalibration and aggregation methods, using data from the Good Judgment Project. Regularized logistic regression models and a baseline recalibrated aggregate were compared to two types of BNs: structure-learned BNs with arcs between predictors, and naive BNs. Four predictor variables were examined: absolute difference from the aggregate, forecast value, days prior to question close, and mean standardized Brier score. Results indicated the recalibrated aggregate achieved the highest accuracy (AUC = 0.985), followed by both types of BNs, then the logistic regression models. Performance of the BNs was likely harmed by information loss from the discretization process, while violation of the linearity assumption likely harmed the logistic regression models. Future research should explore hybrid approaches combining BNs with logistic regression, examine additional predictor variables, and account for hierarchical data dependencies.
+ oai:arXiv.org:2601.13362v1
+ stat.AP
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Matthew Martin
+
+
+ A uniformity principle for spatial matching
+ https://arxiv.org/abs/2601.13426
+ arXiv:2601.13426v1 Announce Type: cross
+Abstract: Platforms matching spatially distributed supply to demand face a fundamental design choice: given a fixed total budget of service range, how should it be allocated across supply nodes ex ante, i.e. before supply and demand locations are realized, to maximize fulfilled demand? We model this problem using bipartite random geometric graphs where $n$ supply and $m$ demand nodes are uniformly distributed on $[0,1]^k$ ($k \ge 1$), and edges form when demand falls within a supply node's service region, the volume of which is determined by its service range. Since each supply node serves at most one demand, platform performance is determined by the expected size of a maximum matching. We establish a uniformity principle: whenever one service range allocation is more uniform than the other, the more uniform allocation yields a larger expected matching. This principle emerges from diminishing marginal returns to expanding service range, and limited interference between supply nodes due to bounded ranges naturally fragmenting the graph. For $k=1$, we further characterize the expected matching size through a Markov chain embedding and derive closed-form expressions for special cases. Our results provide theoretical guidance for optimizing service range allocation and designing incentive structures in ride-hailing, on-demand labor markets, and drone delivery networks.
+ oai:arXiv.org:2601.13426v1
+ math.PR
+ cs.DS
+ econ.GN
+ math.OC
+ q-fin.EC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Taha Ameen, Flore Sentenac, Sophie H. Yu
+
+
+ Distribution-Free Confidence Ellipsoids for Ridge Regression with PAC Bounds
+ https://arxiv.org/abs/2601.13436
+ arXiv:2601.13436v1 Announce Type: cross
+Abstract: Linearly parametrized models are widely used in control and signal processing, with the least-squares (LS) estimate being the archetypical solution. When the input is insufficiently exciting, the LS problem may be unsolvable or numerically unstable. This issue can be resolved through regularization, typically with ridge regression. Although regularized estimators reduce the variance error, it remains important to quantify their estimation uncertainty. A possible approach for linear regression is to construct confidence ellipsoids with the Sign-Perturbed Sums (SPS) ellipsoidal outer approximation (EOA) algorithm. The SPS EOA builds non-asymptotic confidence ellipsoids under the assumption that the noises are independent and symmetric about zero. This paper introduces an extension of the SPS EOA algorithm to ridge regression, and derives probably approximately correct (PAC) upper bounds for the resulting region sizes. Compared with previous analyses, our results explicitly show how the regularization parameter affects the region sizes, and provide tighter bounds under weaker excitation assumptions. Finally, the practical effect of regularization is also demonstrated via simulation experiments.
+ oai:arXiv.org:2601.13436v1
+ stat.ML
+ cs.LG
+ cs.SY
+ eess.SP
+ eess.SY
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Szabolcs Szentp\'eteri, Bal\'azs Csan\'ad Cs\'aji
+
+
+ Labels or Preferences? Budget-Constrained Learning with Human Judgments over AI-Generated Outputs
+ https://arxiv.org/abs/2601.13458
+ arXiv:2601.13458v1 Announce Type: cross
+Abstract: The increasing reliance on human preference feedback to judge AI-generated pseudo labels has created a pressing need for principled, budget-conscious data acquisition strategies. We address the crucial question of how to optimally allocate a fixed annotation budget between ground-truth labels and pairwise preferences in AI. Our solution, grounded in semi-parametric inference, casts the budget allocation problem as a monotone missing data framework. Building on this formulation, we introduce Preference-Calibrated Active Learning (PCAL), a novel method that learns the optimal data acquisition strategy and develops a statistically efficient estimator for functionals of the data distribution. Theoretically, we prove the asymptotic optimality of our PCAL estimator and establish a key robustness guarantee that ensures robust performance even with poorly estimated nuisance models. Our flexible framework applies to a general class of problems, by directly optimizing the estimator's variance instead of requiring a closed-form solution. This work provides a principled and statistically efficient approach for budget-constrained learning in modern AI. Simulations and real-data analysis demonstrate the practical benefits and superior performance of our proposed method.
+ oai:arXiv.org:2601.13458v1
+ stat.ML
+ cs.AI
+ cs.LG
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zihan Dong, Ruijia Wu, Linjun Zhang
+
+
+ CatMaster: An Agentic Autonomous System for Computational Heterogeneous Catalysis Research
+ https://arxiv.org/abs/2601.13508
+ arXiv:2601.13508v1 Announce Type: cross
+Abstract: Density functional theory (DFT) is widely used to connect atomic structure with catalytic behavior, but computational heterogeneous catalysis studies often require long workflows that are costly, iterative, and sensitive to setup choices. Besides the intrinsic cost and accuracy limits of first-principles calculations, practical workflow issues such as keeping references consistent, preparing many related inputs, recovering from failed runs on computing clusters, and maintaining a complete record of what was done, can slow down projects and make results difficult to reproduce or extend.
+ Here we present CatMaster, a large-language-model (LLM)-driven agent system that turns natural language requests into complete calculation workspaces, including structures, inputs, outputs, logs, and a concise run record. CatMaster maintains a persistent project record of key facts, constraints, and file pointers to support inspection and restartability. It is paired with a multi-fidelity tool library that covers rapid surrogate relaxations and high-fidelity DFT calculations for validation when needed. We demonstrate CatMaster on four demonstrations of increasing complexity: an O2 spin-state check with remote execution, BCC Fe surface energies with a protocol-sensitivity study and CO adsorption site ranking, high-throughput Pt--Ni--Cu alloy screening for hydrogen evolution reaction (HER) descriptors with surrogate-to-DFT validation, and a demonstration beyond the predefined tool set, including equation-of-state fitting for BCC Fe and CO-FeN4-graphene single-atom catalyst geometry preparation. By reducing manual scripting and bookkeeping while keeping the full evidence trail, CatMaster aims to help catalysis researchers focus on modeling choices and chemical interpretation rather than workflow management.
+ oai:arXiv.org:2601.13508v1
+ cond-mat.mtrl-sci
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Honghao Chen, Jiangjie Qiu, Yi Shen Tew, Xiaonan Wang
+
+
+ Small Gradient Norm Regret for Online Convex Optimization
+ https://arxiv.org/abs/2601.13519
+ arXiv:2601.13519v1 Announce Type: cross
+Abstract: This paper introduces a new problem-dependent regret measure for online convex optimization with smooth losses. The notion, which we call the $G^\star$ regret, depends on the cumulative squared gradient norm evaluated at the decision in hindsight $\sum_{t=1}^T \|\nabla \ell(x^\star)\|^2$. We show that the $G^\star$ regret strictly refines the existing $L^\star$ (small loss) regret, and that it can be arbitrarily sharper when the losses have vanishing curvature around the hindsight decision. We establish upper and lower bounds on the $G^\star$ regret and extend our results to dynamic regret and bandit settings. As a byproduct, we refine the existing convergence analysis of stochastic optimization algorithms in the interpolation regime. Some experiments validate our theoretical findings.
+ oai:arXiv.org:2601.13519v1
+ stat.ML
+ cs.LG
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Wenzhi Gao, Chang He, Madeleine Udell
+
+
+ ICASSP 2026 URGENT Speech Enhancement Challenge
+ https://arxiv.org/abs/2601.13531
+ arXiv:2601.13531v1 Announce Type: cross
+Abstract: The ICASSP 2026 URGENT Challenge advances the series by focusing on universal speech enhancement (SE) systems that handle diverse distortions, domains, and input conditions. This overview paper details the challenge's motivation, task definitions, datasets, baseline systems, evaluation protocols, and results. The challenge is divided into two complementary tracks. Track 1 focuses on universal speech enhancement, while Track 2 introduces speech quality assessment for enhanced speech. The challenge attracted over 80 team registrations, with 29 submitting valid entries, demonstrating significant community interest in robust SE technologies.
+ oai:arXiv.org:2601.13531v1
+ eess.AS
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chenda Li, Wei Wang, Marvin Sach, Wangyou Zhang, Kohei Saijo, Samuele Cornell, Yihui Fu, Zhaoheng Ni, Tim Fingscheidt, Shinji Watanabe, Yanmin Qian
+
+
+ Refined Gradient-Based Temperature Optimization for the Replica-Exchange Monte-Carlo Method
+ https://arxiv.org/abs/2601.13542
+ arXiv:2601.13542v1 Announce Type: cross
+Abstract: The replica-exchange Monte-Carlo (RXMC) method is a powerful Markov-chain Monte-Carlo algorithm for sampling from multi-modal distributions, which are challenging for conventional methods. The sampling efficiency of the RXMC method depends highly on the selection of the temperatures, and finding optimal temperatures remains a challenge. In this study, we propose a refined online temperature selection method by extending the gradient-based optimization framework proposed previously. Building upon the existing temperature update approach, we introduce a reparameterization technique to strictly enforce physical constraints, such as the monotonic ordering of inverse temperatures, which were not explicitly addressed in the original formulation. The proposed method defines the variance of acceptance rates between adjacent replicas as a loss function, estimates its gradient using differential information from the sampling process, and optimizes the temperatures via gradient descent. We demonstrate the effectiveness of our method through experiments on benchmark spin systems, including the two-dimensional ferromagnetic Ising model, the two-dimensional ferromagnetic XY model, and the three-dimensional Edwards-Anderson model. Our results show that the method successfully achieves uniform acceptance rates and reduces round-trip times across the temperature space. Furthermore, our proposed method offers a significant advantage over recently proposed policy gradient methods that require careful hyperparameter tuning, while simultaneously preventing the constraint violations that destabilize optimization.
+ oai:arXiv.org:2601.13542v1
+ physics.comp-ph
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tatsuya Miyata, Shunta Arai, Satoshi Takabe
+
+
+ Near-field Physical Layer Security: Robust Beamforming under Location Uncertainty
+ https://arxiv.org/abs/2601.13549
+ arXiv:2601.13549v1 Announce Type: cross
+Abstract: In this paper, we study robust beamforming design for near-field physical-layer-security (PLS) systems, where a base station (BS) equipped with an extremely large-scale array (XL-array) serves multiple near-field legitimate users (Bobs) in the presence of multiple near-field eavesdroppers (Eves). Unlike existing works that mostly assume perfect channel state information (CSI) or location information of Eves, we consider a more practical and challenging scenario, where the locations of Bobs are perfectly known, while only imperfect location information of Eves is available at the BS. We first formulate a robust optimization problem to maximize the sum-rate of Bobs while guaranteeing a worst-case limit on the eavesdropping rate under location uncertainty. By transforming Cartesian position errors into the polar domain, we reveal an important near-field angular-error amplification effect: for the same location error, the closer the Eve, the larger the angle error, severely degrading the performance of conventional robust beamforming methods based on imperfect channel state information. To address this issue, we first establish the conditions for which the first-order Taylor approximation of the near-field channel steering vector under location uncertainty is largely accurate. Then, we propose a two-stage robust beamforming method, which first partitions the uncertainty region into multiple fan-shaped sub-regions, followed by the second stage to formulate and solve a refined linear-matrix-inequality (LMI)-based robust beamforming optimization problem. In addition, the proposed method is further extended to scenarios with multiple Bobs and multiple Eves. Finally, numerical results validate that the proposed method achieves a superior trade-off between rate performance and secrecy robustness, hence significantly outperforming existing benchmarks under Eve location uncertainty.
+ oai:arXiv.org:2601.13549v1
+ eess.SP
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chao Zhou, Changsheng You, Cong Zhou, Chengwen Xing, Jianhua Zhang
+
+
+ Control policies for a two-stage queueing system with parallel and single server options
+ https://arxiv.org/abs/2601.13576
+ arXiv:2601.13576v1 Announce Type: cross
+Abstract: We study a two-stage tandem service queue attended by two servers. Each job-server pair must complete both service phases together, with the server unable to begin a new job until the current one is fully processed after two stages. Immediately after the first phase of service, the server decides whether to send the job/customer to a downstream station that allows parallel processing or to a single-service facility that offers faster or higher-quality service but handles only one job at a time. This choice determines whether the second phase commences immediately or (potentially) after waiting in a queue for the single-service facility to become available.
+ The decision-making scenario is modeled via a Markov decision process formulation of a clearing system with holding costs at each station. We fully characterize the structural properties of an optimal control policy based on the relationship between the service rates at the downstream stations. A numerical study highlights the significance of optimal control by comparing its performance against several natural heuristic policies.
+ oai:arXiv.org:2601.13576v1
+ math.OC
+ cs.SY
+ eess.SY
+ math.PR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Shuwen Lu, Jamol Pender, Mark E. Lewis
+
+
+ Balancing Independent and Collaborative Service
+ https://arxiv.org/abs/2601.13586
+ arXiv:2601.13586v1 Announce Type: cross
+Abstract: We study a two-type server queueing system where flexible Type-I servers, upon their initial interaction with jobs, decide in real time whether to process them independently or in collaboration with dedicated Type-II servers. Independent processing begins immediately, as does collaborative service if a Type-II server is available. Otherwise, the job and its paired Type-I server wait in queue for collaboration. Type-I servers are non-preemptive and cannot engage with new jobs until their current job is completed.
+ We provide a complete characterization of the structural properties of the optimal policy for the clearing system. In particular, an optimal control is shown to follow a threshold structure based on the number of jobs in the queue before a Type-I server's first interaction and on the number of jobs in either independent or collaborative service.
+ We propose simple threshold heuristics, based on linear approximations, for real-time decision-making. In much of the parameter and state spaces, we establish theoretical bounds that compare the thresholds proposed by our heuristics to those of optimal policies and identify parameter configurations where these bounds are attained. Outside of these regions, the optimal thresholds are infinite. Numerical experiments further demonstrate the accuracy and robustness of our heuristics, particularly when the initial queue length is high. Our proposed heuristics achieve costs within 0.5% of the optimal policy on average and significantly outperform benchmark policies that exhibit extreme sensitivity to system parameters, sometimes incurring costs exceeding 100% of the optimal.
+ oai:arXiv.org:2601.13586v1
+ math.OC
+ cs.SY
+ eess.SY
+ math.PR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Shuwen Lu, Mark E. Lewis, Jamol Pender
+
+
+ Sample Complexity of Average-Reward Q-Learning: From Single-agent to Federated Reinforcement Learning
+ https://arxiv.org/abs/2601.13642
+ arXiv:2601.13642v1 Announce Type: cross
+Abstract: Average-reward reinforcement learning offers a principled framework for long-term decision-making by maximizing the mean reward per time step. Although Q-learning is a widely used model-free algorithm with established sample complexity in discounted and finite-horizon Markov decision processes (MDPs), its theoretical guarantees for average-reward settings remain limited. This work studies a simple but effective Q-learning algorithm for average-reward MDPs with finite state and action spaces under the weakly communicating assumption, covering both single-agent and federated scenarios. For the single-agent case, we show that Q-learning with carefully chosen parameters achieves sample complexity $\widetilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\|h^{\star}\|_{\mathsf{sp}}^3}{\varepsilon^3}\right)$, where $\|h^{\star}\|_{\mathsf{sp}}$ is the span norm of the bias function, improving previous results by at least a factor of $\frac{\|h^{\star}\|_{\mathsf{sp}}^2}{\varepsilon^2}$. In the federated setting with $M$ agents, we prove that collaboration reduces the per-agent sample complexity to $\widetilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\|h^{\star}\|_{\mathsf{sp}}^3}{M\varepsilon^3}\right)$, with only $\widetilde{O}\left(\frac{\|h^{\star}\|_{\mathsf{sp}}}{\varepsilon}\right)$ communication rounds required. These results establish the first federated Q-learning algorithm for average-reward MDPs, with provable efficiency in both sample and communication complexity.
+ oai:arXiv.org:2601.13642v1
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuchen Jiao, Jiin Woo, Gen Li, Gauri Joshi, Yuejie Chi
+
+
+ Distributed Coverage Control on Poriferous Surface via Poly-Annulus Conformal Mapping
+ https://arxiv.org/abs/2601.13688
+ arXiv:2601.13688v1 Announce Type: cross
+Abstract: The inherent non-convexity of poriferous surfaces typically entraps agents in local minima and complicates workload distribution. To resolve this, we propose a distributed diffeomorphic coverage control framework for the multi-agent system (MAS) on such surfaces. First, we establish a distributed poly-annulus conformal mapping that transforms arbitrary poriferous surfaces into a multi-hole disk. Leveraging this topological equivalence, a collision-free sectorial partition mechanism is designed in the multi-hole disk, which rigorously induces strictly connected subregions and workload balance on the poriferous surfaces. This mechanism utilizes a buffer-based sequence mechanism to ensure strict topological safety when bypassing obstacles. Furthermore, a pull-back Riemannian metric is constructed to define the length metric that encodes safety constraints. Based on this metric, a distributed gradient-based control law is synthesized to drive agents toward optimal configurations, ensuring simultaneous obstacle avoidance and coverage optimization. Theoretical analyses guarantee the Input-to-State Stability (ISS) of the partition dynamics and the asymptotic convergence of the closed-loop system. Numerical simulations confirm the reachability and robustness of the proposed coverage algorithm, offering a scalable solution for distributed coverage on poriferous surfaces.
+ oai:arXiv.org:2601.13688v1
+ math.OC
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xun Feng, Chao Zhai
+
+
+ End-to-End Reverse Screening Identifies Protein Targets of Small Molecules Using HelixFold3
+ https://arxiv.org/abs/2601.13693
+ arXiv:2601.13693v1 Announce Type: cross
+Abstract: Identifying protein targets for small molecules, or reverse screening, is essential for understanding drug action, guiding compound repurposing, predicting off-target effects, and elucidating the molecular mechanisms of bioactive compounds. Despite its critical role, reverse screening remains challenging because accurately capturing interactions between a small molecule and structurally diverse proteins is inherently complex, and conventional step-wise workflows often propagate errors across decoupled steps such as target structure modeling, pocket identification, docking, and scoring. Here, we present an end-to-end reverse screening strategy leveraging HelixFold3, a high-accuracy biomolecular structure prediction model akin to AlphaFold3, which simultaneously models the folding of proteins from a protein library and the docking of small-molecule ligands within a unified framework. We validate this approach on a diverse and representative set of approximately one hundred small molecules. Compared with conventional reverse docking, our method improves screening accuracy and demonstrates enhanced structural fidelity, binding-site precision, and target prioritization. By systematically linking small molecules to their protein targets, this framework establishes a scalable and straightforward platform for dissecting molecular mechanisms, exploring off-target interactions, and supporting rational drug discovery.
+ oai:arXiv.org:2601.13693v1
+ q-bio.BM
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Shengjie Xu, Xianbin Ye, Mengran Zhu, Xiaonan Zhang, Shanzhuo Zhang, Xiaomin Fang
+
+
+ Generative Adversarial Networks for Resource State Generation
+ https://arxiv.org/abs/2601.13708
+ arXiv:2601.13708v1 Announce Type: cross
+Abstract: We introduce a physics-informed Generative Adversarial Network framework that recasts quantum resource-state generation as an inverse-design task. By embedding task-specific utility functions into training, the model learns to generate valid two-qubit states optimized for teleportation and entanglement broadcasting. Comparing decomposition-based and direct-generation architectures reveals that structural enforcement of Hermiticity, trace-one, and positivity yields higher fidelity and training stability than loss-only approaches. The framework reproduces theoretical resource boundaries for Werner-like and Bell-diagonal states with fidelities exceeding ~98%, establishing adversarial learning as a lightweight yet effective method for constraint-driven quantum-state discovery. This approach provides a scalable foundation for automated design of tailored quantum resources for information-processing applications, exemplified with teleportation and broadcasting of entanglement, and it opens up the possibility of using such states in efficient quantum network design.
+ oai:arXiv.org:2601.13708v1
+ quant-ph
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shahbaz Shaik, Sourav Chatterjee, Sayantan Pramanik, Indranil Chakrabarty
+
+
+ Moving Least Squares without Quasi-Uniformity: A Stochastic Approach
+ https://arxiv.org/abs/2601.13782
+ arXiv:2601.13782v1 Announce Type: cross
+Abstract: Local Polynomial Regression (LPR) and Moving Least Squares (MLS) are closely related nonparametric estimation methods, developed independently in statistics and approximation theory. While statistical LPR analysis focuses on overcoming sampling noise under probabilistic assumptions, the deterministic MLS theory studies smoothness properties and convergence rates with respect to the \textit{fill-distance} (a resolution parameter). Despite this similarity, the deterministic assumptions underlying MLS fail to hold under random sampling. We begin by quantifying the probabilistic behavior of the fill-distance $h_n$ and \textit{separation} $\delta_n$ of an i.i.d. random sample. That is, for a distribution satisfying a mild regularity condition, $h_n\propto n^{-1/d}\log^{1/d} (n)$ and $\delta_n \propto n^{-1/d}$. We then prove that, for MLS of degree $k\!-\!1$, the approximation error associated with a differential operator $Q$ of order $|m|\le k-1$ decays as $h_n^{\,k-|m|}$ up to logarithmic factors, establishing stochastic analogues of the classical MLS estimates. Additionally, we show that the MLS approximant is smooth with high probability. Finally, we apply the stochastic MLS theory to manifold estimation. Assuming that the sampled manifold is $k$-times smooth, we show that the Hausdorff distance between the true manifold and its MLS reconstruction decays as $h_n^k$, extending the deterministic Manifold-MLS guarantees to random samples. This work provides the first unified stochastic analysis of MLS, demonstrating that -- despite the failure of deterministic sampling assumptions -- the classical convergence and smoothness properties persist under natural probabilistic models.
+ oai:arXiv.org:2601.13782v1
+ math.ST
+ cs.NA
+ math.NA
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Shir Tapiro-Moshe, Yariv Aizenbud, Barak Sober
+
+
+ Two-dimensional FrBD friction models for rolling contact: extension to linear viscoelasticity
+ https://arxiv.org/abs/2601.13818
+ arXiv:2601.13818v1 Announce Type: cross
+Abstract: This paper extends the distributed rolling contact FrBD framework to linear viscoelasticity by considering classic derivative Generalised Maxwell and Kelvin-Voigt rheological representations of the bristle element. With this modelling approach, the dynamics of the bristle, generated friction forces, and internal deformation states are described by a system of 2(n+1) hyperbolic partial differential equations (PDEs), which can capture complex relaxation phenomena originating from viscoelastic behaviours. By appropriately specifying the analytical expressions for the transport and rigid relative velocity, three distributed formulations of increasing complexity are introduced, which account for different levels of spin excitation. For the linear variants, well-posedness and passivity are analysed rigorously, showing that these properties hold for any physically meaningful parametrisation. Numerical experiments complement the theoretical results by illustrating steady-state characteristics and transient relaxation effects. The findings of this paper substantially advance the FrBD paradigm by enabling a unified and systematic treatment of linear viscoelasticity.
+ oai:arXiv.org:2601.13818v1
+ physics.app-ph
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Luigi Romano
+
+
+ Co-Initialization of Control Filter and Secondary Path via Meta-Learning for Active Noise Control
+ https://arxiv.org/abs/2601.13849
+ arXiv:2601.13849v1 Announce Type: cross
+Abstract: Active noise control (ANC) must adapt quickly when the acoustic environment changes, yet early performance is largely dictated by initialization. We address this with a Model-Agnostic Meta-Learning (MAML) co-initialization that jointly sets the control filter and the secondary-path model for FxLMS-based ANC while keeping the runtime algorithm unchanged. The initializer is pre-trained on a small set of measured paths using short two-phase inner loops that mimic identification followed by residual-noise reduction, and is applied by simply setting the learned initial coefficients. In an online secondary path modeling FxLMS testbed, it yields lower early-stage error, shorter time-to-target, reduced auxiliary-noise energy, and faster recovery after path changes than a baseline without re-initialization. The method provides a simple fast start for feedforward ANC under environment changes, requiring a small set of paths to pre-train.
+ oai:arXiv.org:2601.13849v1
+ eess.AS
+ cs.LG
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ziyi Yang, Li Rao, Zhengding Luo, Dongyuan Shi, Qirui Huang, Woon-Seng Gan
+
+
+ Block-Fitness Modeling of the Global Air Mobility Network
+ https://arxiv.org/abs/2601.13867
+ arXiv:2601.13867v1 Announce Type: cross
+Abstract: Accurate representations of the World Air Transportation Network (WAN) are fundamental inputs to models of global mobility, epidemic risk, and infrastructure planning. However, high-resolution, real-time data on the WAN are largely commercial and proprietary, therefore often inaccessible to the research community. Here we introduce a generative model of the WAN that treats air travel as a stochastic process within a maximum-entropy framework. The model uses airport-level passenger flows to probabilistically generate connections while preserving traffic volumes across geographic regions. The resulting reconstructed networks reproduce key structural properties of the WAN and enable simulations of dynamic spreading that closely match those obtained using the real network. Our approach provides a scalable, interpretable, and computationally efficient framework for forecasting and policy design in global mobility systems.
+ oai:arXiv.org:2601.13867v1
+ physics.soc-ph
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Giulia Fischetti, Anna Mancini, Giulio Cimini, Jessica T. Davis, Abby Leung, Alessandro Vespignani, Guido Caldarelli
+
+
+ Unified Unbiased Variance Estimation for MMD: Robust Finite-Sample Performance with Imbalanced Data and Exact Acceleration under Null and Alternative Hypotheses
+ https://arxiv.org/abs/2601.13874
+ arXiv:2601.13874v1 Announce Type: cross
+Abstract: The maximum mean discrepancy (MMD) is a kernel-based nonparametric statistic for two-sample testing, whose inferential accuracy depends critically on variance characterization. Existing work provides various finite-sample estimators of the MMD variance, often differing under the null and alternative hypotheses and across balanced or imbalanced sampling schemes. In this paper, we study the variance of the MMD statistic through its U-statistic representation and Hoeffding decomposition, and establish a unified finite-sample characterization covering different hypotheses and sample configurations. Building on this analysis, we propose an exact acceleration method for the univariate case under the Laplacian kernel, which reduces the overall computational complexity from $\mathcal O(n^2)$ to $\mathcal O(n \log n)$.
+ oai:arXiv.org:2601.13874v1
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Shijie Zhong, Jiangfeng Fu, Yikun Yang
+
+
+ SCG With Your Phone: Diagnosis of Rhythmic Spectrum Disorders in Field Conditions
+ https://arxiv.org/abs/2601.13926
+ arXiv:2601.13926v1 Announce Type: cross
+Abstract: Aortic valve opening (AO) events are crucial for detecting frequency and rhythm disorders, especially in real-world settings where seismocardiography (SCG) signals collected via consumer smartphones are subject to noise, motion artifacts, and variability caused by device heterogeneity. In this work, we present a robust deep-learning framework for SCG segmentation and rhythm analysis using accelerometer recordings obtained with consumer smartphones. We develop an enhanced U-Net v3 architecture that integrates multi-scale convolutions, residual connections, and attention gates, enabling reliable segmentation of noisy SCG signals. A dedicated post-processing pipeline converts probability masks into precise AO timestamps, whereas a novel adaptive 3D-to-1D projection method ensures robustness to arbitrary smartphone orientation. Experimental results demonstrate that the proposed method achieves consistently high accuracy and robustness across various device types and unsupervised data-collection conditions. Our approach enables practical, low-cost, and automated cardiac-rhythm monitoring using everyday mobile devices, paving the way for scalable, field-deployable cardiovascular assessment and future multimodal diagnostic systems.
+ oai:arXiv.org:2601.13926v1
+ q-bio.QM
+ cs.LG
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Peter Golenderov, Yaroslav Matushenko, Anastasia Tushina, Michal Barodkin
+
+
+ Stream-Voice-Anon: Enhancing Utility of Real-Time Speaker Anonymization via Neural Audio Codec and Language Models
+ https://arxiv.org/abs/2601.13948
+ arXiv:2601.13948v1 Announce Type: cross
+Abstract: Protecting speaker identity is crucial for online voice applications, yet streaming speaker anonymization (SA) remains underexplored. Recent research has demonstrated that neural audio codec (NAC) provides superior speaker feature disentanglement and linguistic fidelity. NAC can also be used with causal language models (LM) to enhance linguistic fidelity and prompt control for streaming tasks. However, existing NAC-based online LM systems are designed for voice conversion (VC) rather than anonymization, lacking the techniques required for privacy protection. Building on these advances, we present Stream-Voice-Anon, which adapts modern causal LM-based NAC architectures specifically for streaming SA by integrating anonymization techniques. Our anonymization approach incorporates pseudo-speaker representation sampling, speaker embedding mixing, and diverse prompt selection strategies for LM conditioning that leverage the disentanglement properties of quantized content codes to prevent speaker information leakage. Additionally, we compare dynamic and fixed delay configurations to explore latency-privacy trade-offs in real-time scenarios. Under the VoicePrivacy 2024 Challenge protocol, Stream-Voice-Anon achieves substantial improvements in intelligibility (up to 46% relative WER reduction) and emotion preservation (up to 28% UAR relative) compared to the previous state-of-the-art streaming method DarkStream while maintaining comparable latency (180ms vs 200ms) and privacy protection against lazy-informed attackers, though showing 15% relative degradation against semi-informed attackers.
+ oai:arXiv.org:2601.13948v1
+ eess.AS
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Nikita Kuzmin, Songting Liu, Kong Aik Lee, Eng Siong Chng
+
+
+ Optimal Calibration of the endpoint-corrected Hilbert Transform
+ https://arxiv.org/abs/2601.13962
+ arXiv:2601.13962v1 Announce Type: cross
+Abstract: Accurate, low-latency estimates of the instantaneous phase of oscillations are essential for closed-loop sensing and actuation, including (but not limited to) phase-locked neurostimulation and other real-time applications. The endpoint-corrected Hilbert transform (ecHT) reduces boundary artefacts of the Hilbert transform by applying a causal narrow-band filter to the analytic spectrum. This improves the phase estimate at the most recent sample. Despite its widespread empirical use, the systematic endpoint distortions of ecHT have lacked a principled, closed-form analysis. In this study, we derive the ecHT endpoint operator analytically and demonstrate that its output can be decomposed into a desired positive-frequency term (a deterministic complex gain that induces a calibratable amplitude/phase bias) and a residual leakage term setting an irreducible variance floor. This yields (i) an explicit characterisation and bounds for endpoint phase/amplitude error, (ii) a mean-squared-error-optimal scalar calibration (c-ecHT), and (iii) practical design rules relating window length, bandwidth/order, and centre-frequency mismatch to residual bias via an endpoint group delay. The resulting calibrated ecHT achieves near-zero mean phase error and remains computationally compatible with real-time pipelines. Code and analyses are provided at https://github.com/eosmers/cecHT.
+ oai:arXiv.org:2601.13962v1
+ eess.SP
+ cs.SY
+ eess.SY
+ q-bio.NC
+ stat.ME
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Eike Osmers, Dorothea Kolossa
+
+
+ Rigid Body Dynamics in Ambient Fluids
+ https://arxiv.org/abs/2601.13971
+ arXiv:2601.13971v1 Announce Type: cross
+Abstract: We present a novel framework for rigid body dynamics in ambient media, such as air or water, enabling accurate motion prediction of objects without requiring computational fluid dynamics simulations. Our method computes the added mass of the fluid and replaces heuristic models for shape-dependent lift and drag with a generalized estimate of flow separation and dynamic pressure. Our method is the first within the rigid body dynamics context to reproduce the full range of falling plate behaviors: fluttering, tumbling, chaotic and steady modes, as well as phenomena such as the Magnus effect and the flight dynamics of an American football (tight spiral pass paradox). The resulting algorithm is simple to implement, robust, does not rely on specialized integrators and incorporates seamlessly into existing physics engines for real-time simulation.
+ oai:arXiv.org:2601.13971v1
+ physics.flu-dyn
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Marcel Padilla, Aviv Segall, Olga Sorkine-Hornung
+
+
+ SHARE: A Fully Unsupervised Framework for Single Hyperspectral Image Restoration
+ https://arxiv.org/abs/2601.13987
+ arXiv:2601.13987v1 Announce Type: cross
+Abstract: Hyperspectral image (HSI) restoration is a fundamental challenge in computational imaging and computer vision. It involves ill-posed inverse problems, such as inpainting and super-resolution. Although deep learning methods have transformed the field through data-driven learning, their effectiveness hinges on access to meticulously curated ground-truth datasets. This fundamentally restricts their applicability in real-world scenarios where such data is unavailable. This paper presents SHARE (Single Hyperspectral Image Restoration with Equivariance), a fully unsupervised framework that unifies geometric equivariance principles with low-rank spectral modelling to eliminate the need for ground truth. SHARE's core concept is to exploit the intrinsic invariance of hyperspectral structures under differentiable geometric transformations (e.g. rotations and scaling) to derive self-supervision signals through equivariance consistency constraints. Our novel Dynamic Adaptive Spectral Attention (DASA) module further enhances this paradigm shift by explicitly encoding the global low-rank property of HSI and adaptively refining local spectral-spatial correlations through learnable attention mechanisms. Extensive experiments on HSI inpainting and super-resolution tasks demonstrate the effectiveness of SHARE. Our method outperforms many state-of-the-art unsupervised approaches and achieves performance comparable to that of supervised methods. We hope that our approach will shed new light on HSI restoration and broader scientific imaging scenarios. The code will be released at https://github.com/xuwayyy/SHARE.
+ oai:arXiv.org:2601.13987v1
+ eess.IV
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiangwei Xie, Zhang Wen, Mike Davies, Dongdong Chen
+
+
+ Achieving Full Multipath Diversity by Random Constellation Rotation: a Theoretical Perspective
+ https://arxiv.org/abs/2601.13997
+ arXiv:2601.13997v1 Announce Type: cross
+Abstract: Diversity is an essential concept associated with communication reliability in multipath channels since it determines the slope of bit error rate performance in the medium to high signal-to-noise ratio regions. However, most of the existing analytical frameworks were developed for specific modulation schemes while the efficient validation of full multipath diversity for general modulation schemes remains an open problem. To fill this research gap, we propose to utilize random constellation rotation to ease the conditions for full-diversity modulation designs. For linearly precoded cyclic-prefix orthogonal frequency division multiplexing (OFDM) systems, we prove that maximum multipath diversity can be attained as long as the spread matrix does not have zero entries, which is a sufficient but easily satisfied condition. Furthermore, we derive the sufficient and necessary condition for general modulation schemes, whose verification can be divided into validation tasks for each column of the modulation matrix. Based on the proposed conditions, maximum diversity order can be attained with probability 1 by enabling a randomly generated rotation pattern for both time and doubly dispersive channels. The theoretical analysis in this paper also demonstrates that the diversity evaluation can be concentrated on the pairwise error probability when the number of error symbols is one, which reduces the complexity of diversity-driven design and performance analysis for novel modulation schemes significantly in both time and doubly dispersive channels. Finally, numerical results for various modulation schemes confirm that the theoretical analysis holds in both time and doubly dispersive channels. Furthermore, when employing practical detectors, the random constellation rotation technique consistently enhances the transmission reliability for both coded and uncoded systems.
+ oai:arXiv.org:2601.13997v1
+ eess.SP
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xuehan Wang, Jinhong Yuan, Jintao Wang, Kehan Huang
+
+
+ DAME: Duration-Aware Matryoshka Embedding for Duration-Robust Speaker Verification
+ https://arxiv.org/abs/2601.13999
+ arXiv:2601.13999v1 Announce Type: cross
+Abstract: Short-utterance speaker verification remains challenging due to limited speaker-discriminative cues in short speech segments. While existing methods focus on enhancing speaker encoders, the embedding learning strategy still forces a single fixed-dimensional representation reused for utterances of any length, leaving capacity misaligned with the information available at different durations. We propose Duration-Aware Matryoshka Embedding (DAME), a model-agnostic framework that builds a nested hierarchy of sub-embeddings aligned to utterance durations: lower-dimensional representations capture compact speaker traits from short utterances, while higher dimensions encode richer details from longer speech. DAME supports both training from scratch and fine-tuning, and serves as a direct alternative to conventional large-margin fine-tuning, consistently improving performance across durations. On the VoxCeleb1-O/E/H and VOiCES evaluation sets, DAME consistently reduces the equal error rate on 1-s and other short-duration trials, while maintaining full-length performance with no additional inference cost. These gains generalize across various speaker encoder architectures under both general training and fine-tuning setups.
+ oai:arXiv.org:2601.13999v1
+ eess.AS
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Youngmoon Jung, Joon-Young Yang, Ju-ho Kim, Jaeyoung Roh, Chang Woo Han, Hoon-Young Cho
+
+
+ MATE: Matryoshka Audio-Text Embeddings for Open-Vocabulary Keyword Spotting
+ https://arxiv.org/abs/2601.14012
+ arXiv:2601.14012v1 Announce Type: cross
+Abstract: Open-vocabulary keyword spotting (KWS) with text-based enrollment has emerged as a flexible alternative to fixed-phrase triggers. Prior utterance-level matching methods, from an embedding-learning standpoint, learn embeddings at a single fixed dimensionality. We depart from this design and propose Matryoshka Audio-Text Embeddings (MATE), a dual-encoder framework that encodes multiple embedding granularities within a single vector via nested sub-embeddings ("prefixes"). Specifically, we introduce a PCA-guided prefix alignment: PCA-compressed versions of the full text embedding for each prefix size serve as teacher targets to align both audio and text prefixes. This alignment concentrates salient keyword cues in lower-dimensional prefixes, while higher dimensions add detail. MATE is trained with standard deep metric learning objectives for audio-text KWS, and is loss-agnostic. To our knowledge, this is the first application of matryoshka-style embeddings to KWS, achieving state-of-the-art results on WSJ and LibriPhrase without any inference overhead.
+ oai:arXiv.org:2601.14012v1
+ eess.AS
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Youngmoon Jung, Myunghun Jung, Joon-Young Yang, Yong-Hyeok Lee, Jaeyoung Roh, Hoon-Young Cho
+
+
+ Intermittent time series forecasting: local vs global models
+ https://arxiv.org/abs/2601.14031
+ arXiv:2601.14031v1 Announce Type: cross
+Abstract: Intermittent time series, characterised by the presence of a significant amount of zeros, constitute a large percentage of inventory items in supply chains. Probabilistic forecasts are needed to plan the inventory levels; the predictive distribution should cover non-negative values, have a mass in zero and a long upper tail. Intermittent time series are commonly forecast using local models, which are trained individually on each time series. In recent years, global models, which are trained on a large collection of time series, have become popular for time series forecasting. Global models are often based on neural networks. However, they have not yet been exhaustively tested on intermittent time series. We carry out the first study comparing state-of-the-art local (iETS, TweedieGP) and global models (D-Linear, DeepAR, Transformers) on intermittent time series. For neural network models, we consider three different distribution heads suitable for intermittent time series: negative binomial, hurdle-shifted negative binomial and Tweedie. We use, for the first time, the last two distribution heads with neural networks. We perform experiments on five large datasets comprising more than 40,000 real-world time series. Among neural networks, D-Linear provides the best accuracy; it also consistently outperforms the local models. Moreover, it has low computational requirements. Transformer-based architectures are instead much more computationally demanding and less accurate. Among the distribution heads, the Tweedie provides the best estimates of the highest quantiles, while the negative binomial offers overall the best performance.
+ oai:arXiv.org:2601.14031v1
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Stefano Damato, Nicol\`o Rubattu, Dario Azzimonti, Giorgio Corani
+
+
+ MooneyMaker: A Python package to create ambiguous two-tone images
+ https://arxiv.org/abs/2601.14077
+ arXiv:2601.14077v1 Announce Type: cross
+Abstract: Mooney images are high-contrast, two-tone visual stimuli, created by thresholding photographic images. They allow researchers to separate image content from image understanding, making them valuable for studying visual perception. An ideal Mooney image for this purpose achieves a specific balance: it initially appears unrecognizable but becomes fully interpretable to the observer after seeing the original template. Researchers traditionally created these stimuli manually using subjective criteria, which is labor-intensive and can introduce inconsistencies across studies. Automated generation techniques now offer an alternative to this manual approach. Here, we present MooneyMaker, an open-source Python package that automates the generation of ambiguous Mooney images using several complementary approaches. Users can choose between various generation techniques that range from approaches based on image statistics to deep learning models. These models strategically alter edge information to increase initial ambiguity. The package lets users create two-tone images with multiple methods and directly compare the results visually. In an experiment, we validate MooneyMaker by generating Mooney images using different techniques and assess their recognizability for human observers before and after disambiguating them by presenting the template images. Our results reveal that techniques with lower initial recognizability are associated with higher post-template recognition (i.e. a larger disambiguation effect). To help vision scientists build effective databases of Mooney stimuli, we provide practical guidelines for technique selection. By standardizing the generation process, MooneyMaker supports more consistent and reproducible visual perception research.
+ oai:arXiv.org:2601.14077v1
+ q-bio.NC
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Lars C. Reining, Thabo Matthies, Luisa Haussner, Rabea Turon, Thomas S. A. Wallis
+
+
+ Basis Number and Pathwidth
+ https://arxiv.org/abs/2601.14095
+ arXiv:2601.14095v1 Announce Type: cross
+Abstract: We prove two results relating the basis number of a graph $G$ to path decompositions of $G$. Our first result shows that the basis number of a graph is at most four times its pathwidth. Our second result shows that, if a graph $G$ has a path decomposition with adhesions of size at most $k$ in which the graph induced by each bag has basis number at most $b$, then $G$ has basis number at most $b+O(k\log^2 k)$. The first result, combined with recent work of Geniet and Giocanti shows that the basis number of a graph is bounded by a polynomial function of its treewidth. The second result (also combined with the work of Geniet and Giocanti) shows that every $K_t$-minor-free graph has a basis number bounded by a polynomial function of $t$.
+ oai:arXiv.org:2601.14095v1
+ math.CO
+ cs.DM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Babak Miraftab, Pat Morin, Yelena Yuditsky
+
+
+ Achievable Burning Densities of Growing Grids
+ https://arxiv.org/abs/2601.14151
+ arXiv:2601.14151v1 Announce Type: cross
+Abstract: Graph burning is a discrete-time process on graphs where vertices are sequentially activated and burning vertices cause their neighbours to burn over time. In this work, we focus on a dynamic setting in which the graph grows over time, and at each step we burn vertices in the growing grid $G_n = [-f(n),f(n)]^2$. We investigate the set of achievable burning densities for functions of the form $f(n)=\lceil cn^\alpha\rceil$, where $\alpha \ge 1$ and $c>0$. We show that for $\alpha=1$, the set of achievable densities is $[1/(2c^2),1]$, for $1<\alpha<3/2$, every density in $[0,1]$ is achievable, and for $\alpha=3/2$, the set of achievable densities is $[0,(1+\sqrt{6}c)^{-2}]$.
+ oai:arXiv.org:2601.14151v1
+ math.CO
+ cs.DM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Jordan Barrett, Karen Gunderson, JD Nir, Pawel Pralat
+
+
+ Wasserstein distances between ERGMs and Erd\H{o}s-R\'enyi models
+ https://arxiv.org/abs/2601.14170
+ arXiv:2601.14170v1 Announce Type: cross
+Abstract: Ferromagnetic exponential random graph models (ERGMs) are random graph models under which the presence of certain small structures (such as triangles) is encouraged; they can be constructed by tilting an Erd\H{o}s--R\'enyi model by the exponential of a particular nonlinear Hamiltonian. These models are mixtures of metastable wells which each behave macroscopically like an Erd\H{o}s--R\'enyi model, exhibiting the same laws of large numbers for subgraph counts [CD13]. However, on the microscopic scale these metastable wells are very different from Erd\H{o}s--R\'enyi models, with the total variation distance between the two measures tending to 1 [MX23]. In this article we clarify this situation by providing a sharp (up to constants) bound on the Hamming-Wasserstein distance between the two models, which is the average number of edges at which they differ, under the coupling which minimizes this average. In particular, we show that this distance is $\Theta(n^{3/2})$, quantifying exactly how these models differ.
+ An upper bound of this form has appeared in the past [RR19], but this was restricted to the subcritical (high-temperature) regime of parameters. We extend this bound, using a new proof technique, to the supercritical (low-temperature) regime, and prove a matching lower bound which has only previously appeared in the subcritical regime of special cases of ERGMs satisfying a "triangle-free" condition [DF25]. To prove the lower bound in the presence of triangles, we introduce an approximation of the discrete derivative of the Hamiltonian, which controls the dynamical properties of the ERGM, in terms of local counts of triangles and wedges (two-stars) near an edge. This approximation is the main technical and conceptual contribution of the article, and we expect it will be useful in a variety of other contexts as well. Along the way, we also prove a bound on the marginal edge probability under the ERGM via a new bootstrapping argument. Such a bound has already appeared [FLSW25], but again only in the subcritical regime and using a different proof strategy.
+ oai:arXiv.org:2601.14170v1
+ math.PR
+ cond-mat.stat-mech
+ cs.DM
+ math.CO
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Vilas Winstein
+
+
+ Deep Learning Approaches to Quantum Error Mitigation
+ https://arxiv.org/abs/2601.14226
+ arXiv:2601.14226v1 Announce Type: cross
+Abstract: We present a systematic investigation of deep learning methods applied to quantum error mitigation of noisy output probability distributions from measured quantum circuits. We compare different architectures, from fully connected neural networks to transformers, and we test different design/training modalities, identifying sequence-to-sequence, attention-based models as the most effective on our datasets. These models consistently produce mitigated distributions that are closer to the ideal outputs when tested on both simulated and real device data obtained from IBM superconducting quantum processing units (QPU) up to five qubits. Across several different circuit depths, our approach outperforms other baseline error mitigation techniques. We perform a series of ablation studies to examine: how different input features (circuit, device properties, noisy output statistics) affect performance; cross-dataset generalization across circuit families; and transfer learning to a different IBM QPU. We observe that generalization performance across similar devices with the same architecture works effectively, without needing to fully retrain models.
+ oai:arXiv.org:2601.14226v1
+ quant-ph
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Leonardo Placidi, Ifan Williams, Enrico Rinaldi, Daniel Mills, Cristina C\^irstoiu, Vanya Eccles, Ross Duncan
+
+
+ Opportunities in AI/ML for the Rubin LSST Dark Energy Science Collaboration
+ https://arxiv.org/abs/2601.14235
+ arXiv:2601.14235v1 Announce Type: cross
+Abstract: The Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) will produce unprecedented volumes of heterogeneous astronomical data (images, catalogs, and alerts) that challenge traditional analysis pipelines. The LSST Dark Energy Science Collaboration (DESC) aims to derive robust constraints on dark energy and dark matter from these data, requiring methods that are statistically powerful, scalable, and operationally reliable. Artificial intelligence and machine learning (AI/ML) are already embedded across DESC science workflows, from photometric redshifts and transient classification to weak lensing inference and cosmological simulations. Yet their utility for precision cosmology hinges on trustworthy uncertainty quantification, robustness to covariate shift and model misspecification, and reproducible integration within scientific pipelines. This white paper surveys the current landscape of AI/ML across DESC's primary cosmological probes and cross-cutting analyses, revealing that the same core methodologies and fundamental challenges recur across disparate science cases. Since progress on these cross-cutting challenges would benefit multiple probes simultaneously, we identify key methodological research priorities, including Bayesian inference at scale, physics-informed methods, validation frameworks, and active learning for discovery. With an eye on emerging techniques, we also explore the potential of the latest foundation model methodologies and LLM-driven agentic AI systems to reshape DESC workflows, provided their deployment is coupled with rigorous evaluation and governance. Finally, we discuss critical software, computing, data infrastructure, and human capital requirements for the successful deployment of these new methodologies, and consider associated risks and opportunities for broader coordination with external actors.
+ oai:arXiv.org:2601.14235v1
+ astro-ph.IM
+ astro-ph.CO
+ cs.AI
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ LSST Dark Energy Science Collaboration, Eric Aubourg, Camille Avestruz, Matthew R. Becker, Biswajit Biswas, Rahul Biswas, Boris Bolliet, Adam S. Bolton, Clecio R. Bom, Rapha\"el Bonnet-Guerrini, Alexandre Boucaud, Jean-Eric Campagne, Chihway Chang, Aleksandra \'Ciprijanovi\'c, Johann Cohen-Tanugi, Michael W. Coughlin, John Franklin Crenshaw, Juan C. Cuevas-Tello, Juan de Vicente, Seth W. Digel, Steven Dillmann, Mariano Javier de Le\'on Dominguez Romero, Alex Drlica-Wagner, Sydney Erickson, Alexander T. Gagliano, Christos Georgiou, Aritra Ghosh, Matthew Grayling, Kirill A. Grishin, Alan Heavens, Lindsay R. House, Mustapha Ishak, Wassim Kabalan, Arun Kannawadi, Fran\c{c}ois Lanusse, C. Danielle Leonard, Pierre-Fran\c{c}ois L\'eget, Michelle Lochner, Yao-Yuan Mao, Peter Melchior, Grant Merz, Martin Millon, Anais M\"oller, Gautham Narayan, Yuuki Omori, Hiranya Peiris, Laurence Perreault-Levasseur, Andr\'es A. Plazas Malag\'on, Nesar Ramachandra, Benjamin Remy, C\'ecile Roucelle, Jaime Ruiz-Zapatero, Stefan Schuldt, Ignacio Sevilla-Noarbe, Ved G. Shah, Tjitske Starkenburg, Stephen Thorp, Laura Toribio San Cipriano, Tilman Tr\"oster, Roberto Trotta, Padma Venkatraman, Amanda Wasserman, Tim White, Justine Zeghal, Tianqing Zhang, Yuanyuan Zhang
+
+
+ A New Generation of Brain-Computer Interface Based on Riemannian Geometry
+ https://arxiv.org/abs/1310.8115
+ arXiv:1310.8115v2 Announce Type: replace
+Abstract: Based on the accumulated experience over the past 25 years in the field of Brain-Computer Interface (BCI) we can now envision a new generation of BCI. Such BCIs will not require training; instead they will be smartly initialized using remote massive databases and will adapt to the user fast and effectively in the first minute of use. They will be reliable, robust and will maintain good performances within and across sessions. A general classification framework based on recent advances in Riemannian geometry and possessing these characteristics is presented. It applies equally well to BCI based on event-related potentials (ERP), sensorimotor (mu) rhythms and steady-state evoked potential (SSEP). The framework is very simple, both algorithmically and computationally. Due to its simplicity, its ability to learn rapidly (with little training data) and its good across-subject and across-session generalization, this strategy is a very good candidate for building a new generation of BCIs, thus we hereby propose it as a benchmark method for the field.
+ oai:arXiv.org:1310.8115v2
+ cs.HC
+ math.DG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Marco Congedo, Alexandre Barachant, Anton Andreev
+
+
+ Boosted optimal weighted least-squares
+ https://arxiv.org/abs/1912.07075
+ arXiv:1912.07075v3 Announce Type: replace
+Abstract: This paper is concerned with the approximation of a function $u$ in a given approximation space $V_m$ of dimension $m$ from evaluations of the function at $n$ suitably chosen points. The aim is to construct an approximation of $u$ in $V_m$ which yields an error close to the best approximation error in $V_m$ and using as few evaluations as possible. Classical least-squares regression, which defines a projection in $V_m$ from $n$ random points, usually requires a large $n$ to guarantee a stable approximation and an error close to the best approximation error. This is a major drawback for applications where $u$ is expensive to evaluate. One remedy is to use a weighted least squares projection using $n$ samples drawn from a properly selected distribution. In this paper, we introduce a boosted weighted least-squares method which allows to ensure almost surely the stability of the weighted least squares projection with a sample size close to the interpolation regime $n=m$. It consists in sampling according to a measure associated with the optimization of a stability criterion over a collection of independent $n$-samples, and resampling according to this measure until a stability condition is satisfied. A greedy method is then proposed to remove points from the obtained sample. Quasi-optimality properties are obtained for the weighted least-squares projection, with or without the greedy procedure. The proposed method is validated on numerical examples and compared to state-of-the-art interpolation and weighted least squares methods.
+ oai:arXiv.org:1912.07075v3
+ math.NA
+ cs.NA
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1090/mcom/3710
+ Math. Comp. (2022)
+ C\'ecile Haberstich, Anthony Nouy, Guillaume Perrin
+
+
+ UVIP: Model-Free Approach to Evaluate Reinforcement Learning Algorithms
+ https://arxiv.org/abs/2105.02135
+ arXiv:2105.02135v5 Announce Type: replace
+Abstract: Policy evaluation is an important instrument for the comparison of different algorithms in Reinforcement Learning (RL). However, even a precise knowledge of the value function $V^{\pi}$ corresponding to a policy $\pi$ does not provide reliable information on how far the policy $\pi$ is from the optimal one. We present a novel model-free upper value iteration procedure ({\sf UVIP}) that allows us to estimate the suboptimality gap $V^{\star}(x) - V^{\pi}(x)$ from above and to construct confidence intervals for \(V^\star\). Our approach relies on upper bounds to the solution of the Bellman optimality equation via the martingale approach. We provide theoretical guarantees for {\sf UVIP} under general assumptions and illustrate its performance on a number of benchmark RL problems.
+ oai:arXiv.org:2105.02135v5
+ cs.LG
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Denis Belomestny, Ilya Levin, Alexey Naumov, Sergey Samsonov
+
+
+ A closer look at TDFA
+ https://arxiv.org/abs/2206.01398
+ arXiv:2206.01398v3 Announce Type: replace
+Abstract: We present an algorithm for regular expression parsing and submatch extraction based on tagged deterministic finite automata. The algorithm works with different disambiguation policies. We give detailed pseudocode for the algorithm, covering important practical optimizations. All transformations from a regular expression to an optimized automaton are explained on a step-by-step example. We consider both ahead-of-time and just-in-time determinization and describe variants of the algorithm suited to each setting. We provide benchmarks showing that the algorithm is very fast in practice. Our research is based on two independent implementations: an open-source lexer generator RE2C and an experimental Java library.
+ oai:arXiv.org:2206.01398v3
+ cs.FL
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Angelo Borsotti, Ulya Trafimovich
+
+
+ A Note on Comparator-Overdrive-Delay Conditioning for Current-Mode Control
+ https://arxiv.org/abs/2206.09340
+ arXiv:2206.09340v3 Announce Type: replace
+Abstract: Comparator-overdrive-delay conditioning is a new control conditioning approach for high-frequency current-mode control. No existing literature rigorously studies the effect of the comparator overdrive delay on the current-mode control. The results in this paper provide insights into the mechanism of comparator-overdrive-delay conditioning.
+ oai:arXiv.org:2206.09340v3
+ eess.SY
+ cs.SY
+ math.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaofan Cui, Guanyu Qian, Al-Thaddeus Avestruz
+
+
+ Functional Rule Extraction Method for Artificial Neural Networks
+ https://arxiv.org/abs/2208.00335
+ arXiv:2208.00335v2 Announce Type: replace
+Abstract: In this paper, I propose a method based on comprehensive functions for directed and undirected rule extraction from artificial neural network operations. First, I define comprehensive functions, then construct a comprehensive multilayer network (denoted as N). Each activation function of N is parametrized by a comprehensive function.
+ oai:arXiv.org:2208.00335v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Caleb Princewill Nwokocha
+
+
+ Machine Learning Decoder for 5G NR PUCCH Format 0
+ https://arxiv.org/abs/2209.07861
+ arXiv:2209.07861v2 Announce Type: replace
+Abstract: 5G cellular systems depend on the timely exchange of feedback control information between the user equipment and the base station. Proper decoding of this control information is necessary to set up and sustain high throughput radio links. This paper makes the first attempt at using Machine Learning techniques to improve the decoding performance of the Physical Uplink Control Channel Format 0. We use fully connected neural networks to classify the received samples based on the uplink control information content embedded within them. The trained neural network, tested on real-time wireless captures, shows significant improvement in accuracy over conventional DFT-based decoders, even at low SNR. The obtained accuracy results also demonstrate conformance with 3GPP requirements.
+ oai:arXiv.org:2209.07861v2
+ cs.NI
+ cs.IT
+ cs.LG
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ 10.1109/NCC56989.2023.10067950
+ Anil Kumar Yerrapragada, Jeeva Keshav S, Ankit Gautam, Radha Krishna Ganti
+
+
+ Turing meets Moore-Penrose: Computing the Pseudoinverse on Turing Machines
+ https://arxiv.org/abs/2212.02940
+ arXiv:2212.02940v2 Announce Type: replace
+Abstract: The pseudoinverse of a matrix, a generalized notion of the inverse, is of fundamental importance in linear algebra and, thereby, in many different fields. Despite its proven existence, an algorithmic approach is typically necessary to obtain the pseudoinverse in practical applications. Therefore, we analyze if and to what degree the pseudoinverse can be computed on perfect digital hardware platforms modeled as Turing machines. For this, we utilize the notion of an effective algorithm that describes a provably correct computation: upon an input of any error parameter, the algorithm provides an approximation within the given error bound with respect to the unknown solution. We prove that a universal effective algorithm for computing the pseudoinverse of any matrix with a finite error bound does not exist on Turing machines. However, for specific classes of matrices, we show that provably correct algorithms exist and obtain a characterization of the properties of the input set, leading to the effective computability breakdown.
+ oai:arXiv.org:2212.02940v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Holger Boche, Adalbert Fono, Gitta Kutyniok
+
+
+ Provably Fast and Space-Efficient Parallel Biconnectivity
+ https://arxiv.org/abs/2301.01356
+ arXiv:2301.01356v2 Announce Type: replace
+Abstract: Biconnectivity is one of the most fundamental graph problems. The canonical parallel biconnectivity algorithm is the Tarjan-Vishkin algorithm, which has $O(n+m)$ optimal work (number of operations) and polylogarithmic span (longest dependent operations) on a graph with $n$ vertices and $m$ edges. However, Tarjan-Vishkin is not widely used in practice. We believe the reason is its space inefficiency (it generates an auxiliary graph with $O(m)$ edges). In practice, existing parallel implementations are based on breadth-first search (BFS). Since BFS has span proportional to the diameter of the graph, existing parallel BCC implementations suffer from poor performance on large-diameter graphs and can even be slower than the sequential algorithm on many real-world graphs.
+ We propose the first parallel biconnectivity algorithm (FAST-BCC) that has optimal work, polylogarithmic span, and is space-efficient. Our algorithm first generates a skeleton graph based on any spanning tree of the input graph. Then we use the connectivity information of the skeleton to compute the biconnectivity of the original input. All the steps in our algorithm are highly-parallel. We carefully analyze the correctness of our algorithm, which is highly non-trivial.
+ We implemented FAST-BCC and compared it with existing implementations, including GBBS, Slota and Madduri's algorithm, and the sequential Hopcroft-Tarjan algorithm. We ran them on a 96-core machine on 27 graphs, including social, web, road, $k$-NN, and synthetic graphs, with significantly varying sizes and edge distributions. FAST-BCC is the fastest on all 27 graphs. On average (geometric means), FAST-BCC is 5.1$\times$ faster than GBBS, and 3.1$\times$ faster than the best existing baseline on each graph.
+ oai:arXiv.org:2301.01356v2
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3572848.3577483
+ Xiaojun Dong, Letong Wang, Yan Gu, Yihan Sun
+
+
+ Lessons from Formally Verified Deployed Software Systems (Extended version)
+ https://arxiv.org/abs/2301.02206
+ arXiv:2301.02206v4 Announce Type: replace
+Abstract: The technology of formal software verification has made spectacular advances, but how much does it actually benefit the development of practical software? Considerable disagreement remains about the practicality of building systems with mechanically-checked proofs of correctness. Is this prospect confined to a few expensive, life-critical projects, or can the idea be applied to a wide segment of the software industry? To help answer this question, the present survey examines a range of projects, in various application areas, that have produced formally verified systems and deployed them for actual use. It considers the technologies used, the form of verification applied, the results obtained, and the lessons that the software industry should draw regarding its ability to benefit from formal verification techniques and tools.
+ Note: this version is the extended article, covering all the systems identified as relevant. A shorter version, covering only a selection, is also available (see https://doi.org/10.1145/3785652).
+ oai:arXiv.org:2301.02206v4
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ 10.1145/3785652
+ Li Huang, Sophie Ebersold, Alexander Kogtenkov, Bertrand Meyer, Yinling Liu
+
+
+ On the Global Convergence of Risk-Averse Natural Policy Gradient Methods with Expected Conditional Risk Measures
+ https://arxiv.org/abs/2301.10932
+ arXiv:2301.10932v5 Announce Type: replace
+Abstract: Risk-sensitive reinforcement learning (RL) has become a popular tool for controlling the risk of uncertain outcomes and ensuring reliable performance in highly stochastic sequential decision-making problems. While it has been shown that policy gradient methods can find globally optimal policies in the risk-neutral setting, it remains unclear if the risk-averse variants enjoy the same global convergence guarantees. In this paper, we consider a class of dynamic time-consistent risk measures, named Expected Conditional Risk Measures (ECRMs), and derive natural policy gradient (NPG) updates for ECRMs-based RL problems. We provide global optimality and iteration complexity of the proposed risk-averse NPG algorithm with softmax parameterization and entropy regularization under both exact and inexact policy evaluation. Furthermore, we test our risk-averse NPG algorithm on a stochastic Cliffwalk environment to demonstrate the efficacy of our method.
+ oai:arXiv.org:2301.10932v5
+ cs.LG
+ math.OC
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xian Yu, Lei Ying
+
+
+ Multiperiodic Processes: Ergodic Sources with a Sublinear Entropy
+ https://arxiv.org/abs/2302.09049
+ arXiv:2302.09049v5 Announce Type: replace
+Abstract: We construct multiperiodic processes -- a simple example of stationary ergodic (but not mixing) processes over natural numbers that enjoy the vanishing entropy rate under a mild condition. Multiperiodic processes are supported on randomly shifted deterministic sequences called multiperiodic sequences, which can be efficiently generated using an algorithm called the Infinite Clock. Under a suitable parameterization, multiperiodic sequences exhibit relative frequencies of particular numbers given by Zipf's law. Exactly in the same setting, the respective multiperiodic processes satisfy an asymptotic power-law growth of block entropy, called Hilberg's law. Hilberg's law is deemed to hold for statistical language models, in particular.
+ oai:arXiv.org:2302.09049v5
+ cs.IT
+ cs.LG
+ math.IT
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ {\L}ukasz D\k{e}bowski
+
+
+ 3D UAV Trajectory Design for Fair and Energy-Efficient Communication: A Deep Reinforcement Learning Technique
+ https://arxiv.org/abs/2303.05465
+ arXiv:2303.05465v2 Announce Type: replace
+Abstract: In different situations, like disaster communication and network connectivity for rural locations, unmanned aerial vehicles (UAVs) can be utilized as airborne base stations to improve both the functionality and coverage of communication networks. Ground users can employ mobile UAVs to establish communication channels and deliver packages. UAVs, on the other hand, have restricted transmission capabilities and fuel supplies. They cannot always cover the full region or continue to fly for a long time, especially over a huge territory. Controlling a swarm of UAVs to yield relatively long communication coverage while maintaining connectivity and limiting energy usage is therefore difficult. We use modern deep reinforcement learning (DRL) for UAV connectivity to provide an innovative and extremely energy-efficient DRL-based algorithm. The proposed method: 1) enhances novel energy efficiency while taking into account communications throughput, energy consumption, fairness, and connectivity; 2) evaluates the environment and its dynamics; and 3) makes judgments using strong deep neural networks. For performance evaluation, we have performed comprehensive simulations. In terms of energy consumption and fairness, simulation results show that the DRL-based algorithm consistently outperforms two commonly used baseline techniques.
+ oai:arXiv.org:2303.05465v2
+ cs.NI
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shahid Rasool, Irfan Ullah, Abid Ali, Ishtiaq Ahmad
+
+
+ MMT: A Multilingual and Multi-Topic Indian Social Media Dataset
+ https://arxiv.org/abs/2304.00634
+ arXiv:2304.00634v2 Announce Type: replace
+Abstract: Social media plays a significant role in cross-cultural communication. A vast amount of this occurs in code-mixed and multilingual form, posing a significant challenge to Natural Language Processing (NLP) tools for processing such information, like language identification, topic modeling, and named-entity recognition. To address this, we introduce a large-scale multilingual and multi-topic dataset (MMT) collected from Twitter (1.7 million Tweets), encompassing 13 coarse-grained and 63 fine-grained topics in the Indian context. We further annotate a subset of 5,346 tweets from the MMT dataset with various Indian languages and their code-mixed counterparts. Also, we demonstrate that the currently existing tools fail to capture the linguistic diversity in MMT on two downstream tasks, i.e., topic modeling and language identification. To facilitate future research, we have made the anonymized and annotated dataset available at https://huggingface.co/datasets/LingoIITGN/MMT.
+ oai:arXiv.org:2304.00634v2
+ cs.CL
+ cs.LG
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ EACL Workshop C3NLP 2023
+ Dwip Dalal, Vivek Srivastava, Mayank Singh
+
+
+ Eye-tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges
+ https://arxiv.org/abs/2305.14080
+ arXiv:2305.14080v3 Announce Type: replace
+Abstract: The latest developments in computer hardware, sensor technologies, and artificial intelligence can make virtual reality (VR) and virtual spaces an important part of human everyday life. Eye tracking offers not only a hands-free way of interaction but also the possibility of a deeper understanding of human visual attention and cognitive processes in VR. Despite these possibilities, eye-tracking data also reveals users' privacy-sensitive attributes when combined with the information about the presented stimulus. To address these aspects, this survey first covers major works in the eye tracking, VR, and privacy areas between 2012 and 2022. While the eye tracking in VR part covers the computational eye-tracking pipeline from pupil detection and gaze estimation to offline data analysis, for privacy and security we focus on eye-based authentication as well as computational methods to preserve the privacy of individuals and their eye-tracking data in VR. Later, we outline three main directions with a focus on privacy. In summary, this survey presents an extensive literature review of the utmost possibilities of eye tracking in VR and their privacy implications.
+ oai:arXiv.org:2305.14080v3
+ cs.HC
+ cs.AI
+ cs.CR
+ cs.GR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/JPROC.2026.3653661
+ Efe Bozkir, S\"uleyman \"Ozdel, Mengdi Wang, Brendan David-John, Hong Gao, Kevin Butler, Eakta Jain, Enkelejda Kasneci
+
+
+ A Deep Probabilistic Flow-Based Framework for Unsupervised Cross-Domain Soft Sensing
+ https://arxiv.org/abs/2306.04919
+ arXiv:2306.04919v5 Announce Type: replace
+Abstract: Industrial soft sensing is crucial for accurate process monitoring through reliable inference of dominant sensor variables. However, developing effective data-driven soft sensor models presents challenges, such as achieving domain adaptability, addressing incomplete sensor labels, and learning stochastic data variability. To overcome these challenges, we propose a Deep Variational Potential Flow (DVPF) framework for cross-domain soft sensor modeling, taking into account the lack of sensor labels in the target domain. Our framework introduces sequential variational Bayes with recurrent neural network (RNN) parameterization to address the maximum likelihood estimation problem that characterizes cross-domain soft sensing. Central to the framework is a potential flow that performs unsupervised Bayesian inference on the RNN-extracted features to obtain an exact representation of the intractable posterior distribution. Together, these DVPF components learn domain-adaptable features that effectively capture complex cross-domain process dynamics and data variability. We validate the proposed DVPF on a real industrial multiphase flow process across varying operating modes. The results show that the DVPF demonstrates superior performance in cross-domain soft sensing compared to existing deep feature-based domain adaptation methods.
+ oai:arXiv.org:2306.04919v5
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Junn Yong Loo, Hwa Hui Tew, Fang Yu Leong, Ze Yang Ding, Vishnu Monn Baskaran, Chee-Ming Ting, Chee Pin Tan
+
+
+ LinkDID: A Privacy-Preserving, Sybil-Resistant and Key-Recoverable Decentralized Identity Scheme
+ https://arxiv.org/abs/2307.14679
+ arXiv:2307.14679v3 Announce Type: replace
+Abstract: Decentralized identity frameworks grant users full sovereignty over their digital assets in the Web3 ecosystem. However, allowing arbitrary creation of identifiers makes the system susceptible to Sybil attacks and puts assets at risk when keys are lost or compromised. Moreover, the lack of identification prevents anonymous credential schemes from deterring malicious transfers. While existing solutions attempt to address these issues by linking identifiers to entities through trusted intermediaries, these entities are not always accessible and require costly offline interactions.
+ In this work, we introduce LinkDID, a decentralized identity scheme offering Sybil resistance, trustless key recovery, and nontransferable anonymous credentials. LinkDID creates blockchain-based bindings between identifiers and gradually combines identifiers belonging to the same holder into a unified associated identifier. As all identifiers within an association are presumed to belong to one individual, any fraudulent activity can be detected. The association grows larger as interactions increase, substantially reducing the likelihood of successful Sybil attacks. This mechanism allows holders to recover identifiers with lost or stolen keys by proving knowledge of specific association structures. Additionally, LinkDID prevents unauthorized transfers through blockchain-based identifier-key bindings and proofs of ownership for credentials.
+ The evaluation shows that LinkDID effectively achieves progressive Sybil resistance while surpassing state-of-the-art anonymous credential schemes, achieving identifier association and credential presentation times of 2.41s and 3.31s on consumer-grade devices.
+ oai:arXiv.org:2307.14679v3
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rui Song
+
+
+ Shape Completion with Prediction of Uncertain Regions
+ https://arxiv.org/abs/2308.00377
+ arXiv:2308.00377v2 Announce Type: replace
+Abstract: Shape completion, i.e., predicting the complete geometry of an object from a partial observation, is highly relevant for several downstream tasks, most notably robotic manipulation. When basing planning or prediction of real grasps on object shape reconstruction, an indication of severe geometric uncertainty is indispensable. In particular, there can be an irreducible uncertainty in extended regions about the presence of entire object parts when given ambiguous object views. To treat this important case, we propose two novel methods for predicting such uncertain regions as straightforward extensions of any method for predicting local spatial occupancy, one through postprocessing occupancy scores, the other through direct prediction of an uncertainty indicator. We compare these methods together with two known approaches to probabilistic shape completion. Moreover, we generate a dataset, derived from ShapeNet, of realistically rendered depth images of object views with ground-truth annotations for the uncertain regions. We train on this dataset and test each method in shape completion and prediction of uncertain regions for known and novel object instances and on synthetic and real data. While direct uncertainty prediction is by far the most accurate in the segmentation of uncertain regions, both novel methods outperform the two baselines in shape completion and uncertain region prediction, and avoiding the predicted uncertain regions increases the quality of grasps for all tested methods.
+ oai:arXiv.org:2308.00377v2
+ cs.CV
+ cs.AI
+ cs.LG
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/IROS55552.2023.10342487
+ in Proc. 2023 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Detroit, MI, USA, Oct. 2023, pp. 1215-1221
+ Matthias Humt, Dominik Winkelbauer, Ulrich Hillenbrand
+
+
+ Compositional Feature Augmentation for Unbiased Scene Graph Generation
+ https://arxiv.org/abs/2308.06712
+ arXiv:2308.06712v3 Announce Type: replace
+Abstract: Scene Graph Generation (SGG) aims to detect all the visual relation triplets $<$\texttt{sub}, \texttt{pred}, \texttt{obj}$>$ in a given image. With the emergence of various advanced techniques for better utilizing both the intrinsic and extrinsic information in each relation triplet, SGG has achieved great progress over the recent years. However, due to the ubiquitous long-tailed predicate distributions, today's SGG models are still easily biased to the head predicates. Currently, the most prevalent debiasing solutions for SGG are re-balancing methods, \eg, changing the distributions of original training samples. In this paper, we argue that all existing re-balancing strategies fail to increase the diversity of the relation triplet features of each predicate, which is critical for robust SGG. To this end, we propose a novel Compositional Feature Augmentation (\textbf{CFA}) strategy, which is the first unbiased SGG work to mitigate the bias issue from the perspective of increasing the diversity of triplet features. Specifically, we first decompose each relation triplet feature into two components: intrinsic feature and extrinsic feature, which correspond to the intrinsic characteristics and extrinsic contexts of a relation triplet, respectively. Then, we design two different feature augmentation modules to enrich the feature diversity of original relation triplets by replacing or mixing up either their intrinsic or extrinsic features from other samples. Due to its model-agnostic nature, CFA can be seamlessly incorporated into various SGG frameworks. Extensive ablations have shown that CFA achieves a new state-of-the-art performance on the trade-off between different metrics.
+ oai:arXiv.org:2308.06712v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lin Li, Guikun Chen, Jun Xiao, Yi Yang, Chunping Wang, Long Chen
+
+
+ Contextualising Levels of Language Resourcedness that affect NLP tasks
+ https://arxiv.org/abs/2309.17035
+ arXiv:2309.17035v2 Announce Type: replace
+Abstract: Several widely used software applications involve some form of processing of natural language, with tasks ranging from digitising hardcopies and text processing to speech generation. Varied language resources are used to develop software systems to accomplish a wide range of natural language processing (NLP) tasks, such as the ubiquitous spellcheckers and chatbots. Languages are typically characterised as either low-resourced (LRL) or high-resourced languages (HRL), with African languages having been characterised as resource-scarce and English being by far the most well-resourced language. But what lies in between? We argue that the dichotomous typology of LRL and HRL for all languages is problematic. Through a clear understanding of language resources situated in a society, a matrix is developed that characterises languages as Very LRL, LRL, RL, HRL and Very HRL. The characterisation is based on the typology of contextual features for each category, rather than counting tools. Motivation is provided for each feature and each characterisation. The contextualisation of resourcedness, with a focus on African languages in this paper, and an increased understanding of where on the scale the language used in a project is, may assist in, among others, better planning of research and implementation projects. We thus argue in this paper that the characterisation of language resources within a given scale in a project is an indispensable component, particularly for those in the lower half of the scale.
+ oai:arXiv.org:2309.17035v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ C. Maria Keet, Langa Khumalo
+
+
+ Fast and Inverse-Free Algorithms for Deflating Subspaces
+ https://arxiv.org/abs/2310.00193
+ arXiv:2310.00193v4 Announce Type: replace
+Abstract: This paper explores a key question in numerical linear algebra: how can we compute projectors onto the deflating subspaces of a regular matrix pencil $(A,B)$, in particular without using matrix inversion or defaulting to an expensive Schur decomposition? We focus specifically on spectral projectors, whose associated deflating subspaces correspond to sets of eigenvalues/eigenvectors. In this work, we present a high-level approach to computing these projectors, which combines rational function approximation with an inverse-free arithmetic of Benner and Byers [Numerische Mathematik 2006]. The result is a numerical framework that captures existing inverse-free methods, generates an array of new options, and provides straightforward tools for pursuing efficiency on structured problems (e.g., definite pencils). To exhibit the efficacy of this framework, we consider a handful of methods in detail, including Implicit Repeated Squaring and iterations based on the matrix sign function. In an appendix, we demonstrate that recent, randomized divide-and-conquer eigensolvers -- which are built on fast methods for individual projectors -- can be adapted to produce the generalized Schur form of any matrix pencil in nearly matrix multiplication time.
+ oai:arXiv.org:2310.00193v4
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1016/j.laa.2026.01.014
+ Linear Algebra and its Applications (2026)
+ James Demmel, Ioana Dumitriu, Ryan Schneider
+
+
+ MCPNS: A Macropixel Collocated Position and Its Neighbors Search for Plenoptic 2.0 Video Coding
+ https://arxiv.org/abs/2310.08006
+ arXiv:2310.08006v4 Announce Type: replace
+Abstract: Plenoptic 2.0 cameras enable high-resolution light field capture by incorporating focused optical designs that differ fundamentally from traditional plenoptic 1.0 systems. These structural differences produce distinct motion characteristics that challenge existing motion estimation (ME) algorithms. In this paper, we first conduct a comprehensive statistical analysis on real captured datasets to identify the primary differences in motion vector distributions among conventional, plenoptic 1.0, and plenoptic 2.0 videos. Building on these observations, we propose a novel fast ME algorithm specifically designed for plenoptic 2.0 video coding. The proposed method performs a joint search over macropixel collocated positions (MCPs) and their neighboring regions to effectively handle the large motion deviations typically observed in plenoptic 2.0 sequences. To further improve efficiency, we introduce a macropixel-level diamond search pattern (MLDSP) that follows the center-biased motion-vector distribution at the macropixel resolution, along with a fast MCP neighbor search restricted to the top K number of MCPs with the lowest distortion costs. Experimental results demonstrate that the proposed algorithm achieves better bitrate savings and computational complexity reductions compared to existing ME methods.
+ oai:arXiv.org:2310.08006v4
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Vinh Van Duong, Thuc Nguyen Huu, Jonghoon Yim, Byeungwoo Jeon
+
+
+ Combining Shape Completion and Grasp Prediction for Fast and Versatile Grasping with a Multi-Fingered Hand
+ https://arxiv.org/abs/2310.20350
+ arXiv:2310.20350v2 Announce Type: replace
+Abstract: Grasping objects with limited or no prior knowledge about them is a highly relevant skill in assistive robotics. Still, in this general setting, it has remained an open problem, especially when it comes to only partial observability and versatile grasping with multi-fingered hands. We present a novel, fast, and high fidelity deep learning pipeline consisting of a shape completion module that is based on a single depth image, and followed by a grasp predictor that is based on the predicted object shape. The shape completion network is based on VQDIF and predicts spatial occupancy values at arbitrary query points. As grasp predictor, we use our two-stage architecture that first generates hand poses using an autoregressive model and then regresses finger joint configurations per pose. Critical factors turn out to be sufficient data realism and augmentation, as well as special attention to difficult cases during training. Experiments on a physical robot platform demonstrate successful grasping of a wide range of household objects based on a depth image from a single viewpoint. The whole pipeline is fast, taking only about 1 s for completing the object's shape (0.7 s) and generating 1000 grasps (0.3 s).
+ oai:arXiv.org:2310.20350v2
+ cs.RO
+ cs.AI
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/HUMANOIDS57100.2023.10375210
+ 2023 IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids), pp. 1-8, 2023
+ Matthias Humt, Dominik Winkelbauer, Ulrich Hillenbrand, Berthold B\"auml
+
+
+ ConstMig: Enabling Secure Live Migration of Large Intel SGX-based applications
+ https://arxiv.org/abs/2311.06991
+ arXiv:2311.06991v5 Announce Type: replace
+Abstract: Cloud service providers are adopting Trusted Execution Environments (TEEs) to provide hardware-guaranteed security to applications running on remote, untrusted data centers. However, migrating such applications still relies on the decade-old stop-and-copy method, which introduces large downtimes. Modern live-migration approaches such as pre-copy and post-copy do not work for TEE-based applications due to hardware-enforced restrictions.
+ We propose ConstMig, a near-zero-downtime live-migration mechanism for large memory-footprint TEE-based applications. ConstMig is fully compatible with containers, virtual machines (VMs), and microVMs. Our prototype, built on Intel SGX, achieves near-zero downtime irrespective of enclave size and requires no additional hardware support. ConstMig reduces total downtime by 77-96% for a suite of SGX applications with multi-gigabyte memory footprints compared to state-of-the-art TEE-based migration solutions such as MigSGX.
+ oai:arXiv.org:2311.06991v5
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Sandeep Kumar, Abhisek Panda, Smruti R. Sarangi
+
+
+ Shadow loss: Memory-linear deep metric learning for efficient training
+ https://arxiv.org/abs/2311.14012
+ arXiv:2311.14012v2 Announce Type: replace
+Abstract: Deep metric learning objectives (e.g., triplet loss) require storing and comparing high-dimensional embeddings, making the per-batch loss buffer scale as $O(S\cdot D)$, where $S$ is the number of samples in a batch and $D$ is the feature dimension, thus limiting training on memory-constrained hardware. We propose Shadow Loss, a proxy-free, parameter-free objective that measures similarity via scalar projections onto the anchor direction, reducing the loss-specific buffer from $O(S\cdot D)$ to $O(S)$ while preserving the triplet structure. We analyze gradients, provide a Lipschitz continuity bound, and show that Shadow Loss penalizes trivial collapse for stable optimization. Across fine-grained retrieval (CUB-200, CARS196), large-scale product retrieval (Stanford Online Products, In-Shop Clothes), and standard/medical benchmarks (CIFAR-10/100, Tiny-ImageNet, HAM-10K, ODIR-5K), Shadow Loss consistently outperforms recent objectives (Triplet, Soft-Margin Triplet, Angular Triplet, SoftTriple, Multi-Similarity). It also converges in $\approx 1.5\text{-}2\times$ fewer epochs under identical backbones and mining. Furthermore, it improves representation separability as measured by higher silhouette scores. The design is architecture-agnostic and vectorized for efficient implementation. By decoupling discriminative power from embedding dimensionality and reusing batch dot-products, Shadow Loss enables memory-linear training and faster convergence, making deep metric learning practical on both edge and large-scale systems.
+ oai:arXiv.org:2311.14012v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Alif Elham Khan, Mohammad Junayed Hasan, Humayra Anjum, Nabeel Mohammed
+
+
+ Learning to Simulate: Generative Metamodeling via Quantile Regression
+ https://arxiv.org/abs/2311.17797
+ arXiv:2311.17797v4 Announce Type: replace
+Abstract: Stochastic simulation models effectively capture complex system dynamics but are often too slow for real-time decision-making. Traditional metamodeling techniques learn relationships between simulator inputs and a single output summary statistic, such as the mean or median. These techniques enable real-time predictions without additional simulations. However, they require prior selection of one appropriate output summary statistic, limiting their flexibility in practical applications. We propose a new concept: generative metamodeling. It aims to construct a "fast simulator of the simulator," generating random outputs significantly faster than the original simulator while preserving approximately equal conditional distributions. Generative metamodels enable rapid generation of numerous random outputs upon input specification, facilitating immediate computation of any summary statistic for real-time decision-making. We introduce a new algorithm, quantile-regression-based generative metamodeling (QRGMM), and establish its distributional convergence and convergence rate. Extensive numerical experiments demonstrate QRGMM's efficacy compared to other state-of-the-art generative algorithms in practical real-time decision-making scenarios.
+ oai:arXiv.org:2311.17797v4
+ cs.LG
+ stat.ME
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ L. Jeff Hong, Yanxi Hou, Qingkai Zhang, Xiaowei Zhang
+
+
+ GNN2R: Weakly-Supervised Rationale-Providing Question Answering over Knowledge Graphs
+ https://arxiv.org/abs/2312.02317
+ arXiv:2312.02317v4 Announce Type: replace
+Abstract: Despite the rapid progress of large language models (LLMs), knowledge graph-based question answering (KGQA) remains essential for producing verifiable and hallucination-resistant answers in many real-world settings where answer trustworthiness and computational efficiency are highly valued. However, most existing KGQA methods provide only final answers in the form of KG entities. Without explicit explanations -- ideally in the form of an intermediate reasoning process over relevant KG triples -- the QA results are difficult to inspect and interpret. Moreover, this limitation prevents the rich and verifiable knowledge encoded in KGs, which is a key advantage of KGQA over LLMs, from being fully leveraged. Addressing this issue remains highly challenging due to the lack of annotated intermediate reasoning processes and the requirement of high efficiency in KGQA. In this paper, we propose a novel Graph Neural Network-based Two-Step Reasoning method (GNN2R) that can efficiently retrieve both final answers and corresponding reasoning subgraphs as verifiable rationales, using only weak supervision from widely-available final answer annotations. We extensively evaluated GNN2R and demonstrated that it substantially outperforms existing state-of-the-art KGQA methods in terms of effectiveness, efficiency, and the quality of generated explanations. The complete code and pre-trained models are available at https://github.com/ruijie-wang-uzh/GNN2R.
+ oai:arXiv.org:2312.02317v4
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ruijie Wang, Luca Rossetto, Michael Cochez, Abraham Bernstein
+
+
+ Multi-class Support Vector Machine with Maximizing Minimum Margin
+ https://arxiv.org/abs/2312.06578
+ arXiv:2312.06578v4 Announce Type: replace
+Abstract: Support Vector Machine (SVM) stands out as a prominent machine learning technique widely applied in practical pattern recognition tasks. It achieves binary classification by maximizing the "margin", which represents the minimum distance between instances and the decision boundary. Although many efforts have been dedicated to extending SVM to the multi-class case through strategies such as one-versus-one and one-versus-the-rest, satisfactory solutions remain to be developed. In this paper, we propose a novel method for multi-class SVM that incorporates pairwise class loss considerations and maximizes the minimum margin. Adhering to this concept, we embrace a new formulation that imparts heightened flexibility to multi-class SVM. Furthermore, the correlations between the proposed method and multiple forms of multi-class SVM are analyzed. The proposed regularizer, akin to the concept of "margin", can serve as a seamless enhancement over the softmax in deep learning, providing guidance for network parameter learning. Empirical evaluations demonstrate the effectiveness and superiority of our proposed method over existing multi-classification methods.
+ oai:arXiv.org:2312.06578v4
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zhezheng Hao, Feiping Nie, Rong Wang
+
+
+ ComplicaCode: Enhancing Disease Complication Detection in Electronic Health Records through ICD Path Generation
+ https://arxiv.org/abs/2312.10259
+ arXiv:2312.10259v2 Announce Type: replace
+Abstract: The target of Electronic Health Record (EHR) coding is to find the diagnostic codes according to the EHRs. In previous research, researchers have preferred to treat EHR coding as a multi-classification task; most of them encode the EHR first and then process it to obtain the probability of each code based on the EHR representation. However, the question of complicating diseases is neglected in all these methods. In this paper, we propose a novel EHR coding framework, called ComplicaCode, which is the first attempt at detecting complicating diseases. The method draws on the idea of adversarial learning: a Path Generator and a Path Discriminator are designed to accomplish the task of EHR coding more efficiently. We propose a copy module to detect complicating diseases; through the proposed copy module and the adversarial learning strategy, we identify complicating diseases efficiently. Extensive experiments show that our method achieves a 57.30\% ratio of complicating diseases in predictions and achieves state-of-the-art performance among CNN-based baselines; it also surpasses transformer methods in the complication detection task, demonstrating the effectiveness of our proposed model. According to the ablation study, the proposed copy mechanism plays a crucial role in detecting complicating diseases.
+ oai:arXiv.org:2312.10259v2
+ cs.LG
+ cs.CL
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaofan Zhou
+
+
+ Quantum Approximate Optimization Algorithm for Test Case Optimization
+ https://arxiv.org/abs/2312.15547
+ arXiv:2312.15547v2 Announce Type: replace
+Abstract: Test case optimization (TCO) reduces software testing cost while preserving its effectiveness, but solving TCO problems for large-scale and complex systems requires substantial computational resources. Quantum approximate optimization algorithms (QAOAs) are promising combinatorial optimization algorithms that rely on quantum computational resources, with the potential efficiency advantages over classical approaches. Several proof-of-concept applications of QAOAs for solving combinatorial problems, such as portfolio optimization, energy systems, and job scheduling, have been proposed. Given the lack of investigation into QAOA's application to TCO problems, and motivated by the computational challenges of TCO problems and the potential of QAOAs, we present IGDec-QAOA to formulate a TCO problem as a QAOA problem and solve it on both ideal and noisy quantum computer simulators, as well as on a real quantum computer. To solve bigger TCO problems that require many qubits, which are unavailable currently, we integrate a problem decomposition strategy with the QAOA. We performed an empirical evaluation with five TCO problems and four publicly available industrial datasets from ABB, Google, and Orona to compare various configurations of IGDec-QAOA, assess its decomposition strategy of handling large datasets, and compare its performance with classical algorithms (i.e., GA and Random Search). Based on the evaluation results achieved on an ideal simulator, we recommend the best configuration of our approach for TCO problems. We also demonstrate that it can reach the same effectiveness as GA and outperform GA in two out of five test case optimization problems. In addition, we observe that, on a noisy simulator, IGDec-QAOA achieved similar performance to that from an ideal simulator. Finally, we demonstrate the feasibility of IGDec-QAOA on a real quantum computer in the presence of noise.
+ oai:arXiv.org:2312.15547v2
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/TSE.2024.3479421
+ in IEEE Transactions on Software Engineering, vol. 50, no. 12, pp. 3249-3264, Dec. 2024
+ Xinyi Wang, Shaukat Ali, Tao Yue, Paolo Arcaini
+
+
+ Hidden Minima in Two-Layer ReLU Networks
+ https://arxiv.org/abs/2312.16819
+ arXiv:2312.16819v4 Announce Type: replace
+Abstract: We consider the optimization problem associated with training two-layer ReLU networks with \(d\) inputs under the squared loss, where the labels are generated by a target network. Recent work has identified two distinct classes of infinite families of minima: one whose training loss vanishes in the high-dimensional limit, and another whose loss remains bounded away from zero. The latter family is empirically avoided by stochastic gradient descent, hence \emph{hidden}, motivating the search for analytic criteria that distinguish hidden from non-hidden minima. A key challenge is that prior analyses have shown the Hessian spectra at hidden and non-hidden minima to coincide up to terms of order \(O(d^{-1/2})\), seemingly limiting the discriminative power of spectral methods. We therefore take a different route, studying instead certain curves along which the loss is locally minimized. Our main result shows that arcs emanating from hidden minima exhibit distinctive structural and symmetry properties, arising precisely from \(\Omega(d^{-1/2})\) eigenvalue contributions that are absent from earlier analyses.
+ oai:arXiv.org:2312.16819v4
+ cs.LG
+ math.OC
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yossi Arjevani
+
+
+ Fusion of Quadratic Time-Frequency Analysis and Convolutional Neural Networks to Diagnose Bearing Faults Under Time-Varying Speeds
+ https://arxiv.org/abs/2401.01172
+ arXiv:2401.01172v3 Announce Type: replace
+Abstract: Diagnosis of bearing faults is paramount to reducing maintenance costs and operational breakdowns. Bearing faults are primary contributors to machine vibrations, and analyzing their signal morphology offers insights into their health status. Unfortunately, existing approaches are optimized for controlled environments, neglecting realistic conditions such as time-varying rotational speeds and the vibration's non-stationary nature. This paper presents a fusion of time-frequency analysis and deep learning techniques to diagnose bearing faults under time-varying speeds and varying noise levels. First, we formulate the bearing fault-induced vibrations and discuss the link between their non-stationarity and the bearing's inherent and operational parameters. We also elucidate quadratic time-frequency distributions and validate their effectiveness in resolving distinctive dynamic patterns associated with different bearing faults. Based on this, we design a time-frequency convolutional neural network (TF-CNN) to diagnose various faults in rolling-element bearings. Our experimental findings undeniably demonstrate the superior performance of TF-CNN in comparison to recently developed techniques. They also assert its versatility in capturing fault-relevant non-stationary features that couple with speed changes and show its exceptional resilience to noise, consistently surpassing competing methods across various signal-to-noise ratios and performance metrics. Altogether, the TF-CNN achieves substantial accuracy improvements of up to 15% in severe noise conditions.
+ oai:arXiv.org:2401.01172v3
+ cs.LG
+ cs.AI
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Mohammad Al-Sa'd, Tuomas Jalonen, Serkan Kiranyaz, Moncef Gabbouj
+
+
+ Manipulating Feature Visualizations with Gradient Slingshots
+ https://arxiv.org/abs/2401.06122
+ arXiv:2401.06122v4 Announce Type: replace
+Abstract: Feature Visualization (FV) is a widely used technique for interpreting concepts learned by Deep Neural Networks (DNNs), which synthesizes input patterns that maximally activate a given feature. Despite its popularity, the trustworthiness of FV explanations has received limited attention. We introduce Gradient Slingshots, a novel method that enables FV manipulation without modifying model architecture or significantly degrading performance. By shaping new trajectories in off-distribution regions of a feature's activation landscape, we coerce the optimization process to converge to a predefined visualization. We evaluate our approach on several DNN architectures, demonstrating its ability to replace faithful FVs with arbitrary targets. These results expose a critical vulnerability: auditors relying solely on FV may accept entirely fabricated explanations. To mitigate this risk, we propose a straightforward defense and quantitatively demonstrate its effectiveness.
+ oai:arXiv.org:2401.06122v4
+ cs.LG
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Dilyara Bareeva, Marina M. -C. H\"ohne, Alexander Warnecke, Lukas Pirch, Klaus-Robert M\"uller, Konrad Rieck, Sebastian Lapuschkin, Kirill Bykov
+
+
+ Advanced safety filter based on SOS Control Barrier and Lyapunov Functions
+ https://arxiv.org/abs/2401.06901
+ arXiv:2401.06901v3 Announce Type: replace
+Abstract: This paper presents a novel safety filter framework that ensures both safety and the preservation of the legacy control action within a nominal region. This modular design allows the safety filter to be integrated into the control hierarchy without compromising the performance of the existing legacy controller during nominal operation. For a control-affine system, this is accomplished by formulating multiple Control Barrier Function (CBF) and Control Lyapunov-like Function (CLF) conditions, alongside a forward invariance condition for the legacy controller, as sum-of-squares constraints. Additionally, the state-dependent inequality constraints of the resulting Quadratic Program (QP) -- encoding the CBF and CLF conditions -- are designed to remain inactive within the nominal region, ensuring preservation of the legacy control action and performance. Our safety filter design is also the first to include quadratic input constraints, and does not need an explicit specification of the attractor, as it is implicitly defined by the legacy controller. To avoid the chattering effect and guarantee the uniqueness and Lipschitz continuity of solutions, the state-dependent inequality constraints of the Quadratic Program are selected to be regular. Finally, we demonstrate the method in a detailed case study involving the control of a three-phase ac/dc power converter.
+ oai:arXiv.org:2401.06901v3
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Michael Schneeberger, Silvia Mastellone, Florian D\"orfler
+
+
+ DiffusionAgent: Navigating Expert Models for Agentic Image Generation
+ https://arxiv.org/abs/2401.10061
+ arXiv:2401.10061v2 Announce Type: replace
+Abstract: In the accelerating era of human-instructed visual content creation, diffusion models have demonstrated remarkable generative potential. Yet their deployment is constrained by a dual bottleneck: semantic ambiguity in diverse prompts and the narrow specialization of individual models. A single diffusion architecture struggles to maintain optimal performance across heterogeneous prompts, while conventional "parse-then-call" pipelines artificially separate semantic understanding from generative execution. To bridge this gap, we introduce DiffusionAgent, a unified, language-model-driven agent that casts the entire "prompt comprehension-expert routing-image synthesis" loop into an agentic framework. Our contributions are three-fold: (1) a tree-of-thought-powered expert navigator that performs fine-grained semantic parsing and zero-shot matching to the most suitable diffusion model via an extensible prior-knowledge tree; (2) an advantage database updated with human-in-the-loop feedback, continually aligning model-selection policy with human aesthetic and semantic preferences; and (3) a fully decoupled agent architecture that activates the optimal generative path for open-domain prompts without retraining or fine-tuning any expert. Extensive experiments show that DiffusionAgent retains high generation quality while significantly broadening prompt coverage, establishing a new performance and generality benchmark for multi-domain image synthesis. The code is available at https://github.com/DiffusionAgent/DiffusionAgent
+ oai:arXiv.org:2401.10061v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jie Qin, Jie Wu, Weifeng Chen, Yueming Lyu
+
+
+ AlphaMapleSAT: An MCTS-based Cube-and-Conquer SAT Solver for Hard Combinatorial Problems
+ https://arxiv.org/abs/2401.13770
+ arXiv:2401.13770v2 Announce Type: replace
+Abstract: This paper introduces AlphaMapleSAT, a Cube-and-Conquer (CnC) parallel SAT solver that integrates Monte Carlo Tree Search (MCTS) with deductive feedback to efficiently solve challenging combinatorial SAT problems. Traditional lookahead cubing methods, used by solvers such as March, limit their search depth to reduce overhead often resulting in suboptimal partitions. By contrast, AlphaMapleSAT performs a deeper MCTS search guided by deductive rewards from SAT solvers. This approach enables informed exploration of the cubing space while keeping cubing costs low. We demonstrate the efficacy of our technique via extensive evaluations against the widely used and established March cubing solver on three well-known challenging combinatorial benchmarks, including the minimum Kochen-Specker (KS) problem from quantum mechanics, the Murty-Simon Conjecture, and the Ramsey problems from extremal graph theory. We compare AlphaMapleSAT against March using different types of conquering solvers such as SAT Modulo Symmetries (SMS) and SAT+CAS, both built on top of the CaDiCaL SAT solver. We show that in all cases, there is a speedup in elapsed real time (wall clock time) ranging from 1.61x to 7.57x on a 128 core machine for the above-mentioned problems. We also perform cube-level and parallel scaling analysis over 32, 64, and 128 cores, which shows that AlphaMapleSAT outperforms March on all these settings. Our results show that deductively-guided MCTS search technique for cubing in CnC solvers can significantly outperform March on hard combinatorial problems.
+ oai:arXiv.org:2401.13770v2
+ cs.AI
+ math.CO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Piyush Jha, Zhengyu Li, Zhengyang Lu, Raymond Zeng, Curtis Bright, Vijay Ganesh
+
+
+ Integrating Large Language Models into Recommendation via Mutual Augmentation and Adaptive Aggregation
+ https://arxiv.org/abs/2401.13870
+ arXiv:2401.13870v2 Announce Type: replace
+Abstract: Conventional recommendation methods have achieved notable advancements by harnessing collaborative or sequential information from user behavior. Recently, large language models (LLMs) have gained prominence for their capabilities in understanding and reasoning over textual semantics, and have found utility in various domains, including recommendation. Conventional recommendation methods and LLMs each have their strengths and weaknesses. While conventional methods excel at mining collaborative information and modeling sequential behavior, they struggle with data sparsity and the long-tail problem. LLMs, on the other hand, are proficient at utilizing rich textual contexts but face challenges in mining collaborative or sequential information. Despite their individual successes, there is a significant gap in leveraging their combined potential to enhance recommendation performance.
+ In this paper, we introduce a general and model-agnostic framework known as \textbf{L}arge \textbf{la}nguage model with \textbf{m}utual augmentation and \textbf{a}daptive aggregation for \textbf{Rec}ommendation (\textbf{Llama4Rec}). Llama4Rec synergistically combines conventional and LLM-based recommendation models. Llama4Rec proposes data augmentation and prompt augmentation strategies tailored to enhance the conventional model and LLM respectively. An adaptive aggregation module is adopted to combine the predictions of both kinds of models to refine the final recommendation results. Empirical studies on three real-world datasets validate the superiority of Llama4Rec, demonstrating its consistent outperformance of baseline methods and significant improvements in recommendation performance.
+ oai:arXiv.org:2401.13870v2
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sichun Luo, Yuxuan Yao, Bowei He, Wei Shao, Jian Xu, Yinya Huang, Aojun Zhou, Xinyi Zhang, Yuanzhang Xiao, Hanxu Hou, Mingjie Zhan, Linqi Song
+
+
+ Deeper or Wider: A Perspective from Optimal Generalization Error with Sobolev Loss
+ https://arxiv.org/abs/2402.00152
+ arXiv:2402.00152v4 Announce Type: replace
+Abstract: Constructing the architecture of a neural network is a challenging pursuit for the machine learning community, and the dilemma of whether to go deeper or wider remains a persistent question. This paper explores a comparison between deeper neural networks (DeNNs) with a flexible number of layers and wider neural networks (WeNNs) with limited hidden layers, focusing on their optimal generalization error in Sobolev losses. Analytical investigations reveal that the architecture of a neural network can be significantly influenced by various factors, including the number of sample points, parameters within the neural networks, and the regularity of the loss function. Specifically, a higher number of parameters tends to favor WeNNs, while an increased number of sample points and greater regularity in the loss function lean towards the adoption of DeNNs. We ultimately apply this theory to address partial differential equations using deep Ritz and physics-informed neural network (PINN) methods, guiding the design of neural networks.
+ oai:arXiv.org:2402.00152v4
+ cs.LG
+ cs.NA
+ math.NA
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yahong Yang, Juncai He
+
+
+ Trading off Consistency and Dimensionality of Convex Surrogates for the Mode
+ https://arxiv.org/abs/2402.10818
+ arXiv:2402.10818v3 Announce Type: replace
+Abstract: In multiclass classification over $n$ outcomes, the outcomes must be embedded into the reals with dimension at least $n-1$ in order to design a consistent surrogate loss that leads to the "correct" classification, regardless of the data distribution. For large $n$, such as in information retrieval and structured prediction tasks, optimizing a surrogate in $n-1$ dimensions is often intractable. We investigate ways to trade off surrogate loss dimension, the number of problem instances, and restricting the region of consistency in the simplex for multiclass classification. Following past work, we examine an intuitive embedding procedure that maps outcomes into the vertices of convex polytopes in a low-dimensional surrogate space. We show that full-dimensional subsets of the simplex exist around each point mass distribution for which consistency holds, but also, with less than $n-1$ dimensions, there exist distributions for which a phenomenon called hallucination occurs, which is when the optimal report under the surrogate loss is an outcome with zero probability. Looking towards application, we derive a result to check if consistency holds under a given polytope embedding and low-noise assumption, providing insight into when to use a particular embedding. We provide examples of embedding $n = 2^{d}$ outcomes into the $d$-dimensional unit cube and $n = d!$ outcomes into the $d$-dimensional permutahedron under low-noise assumptions. Finally, we demonstrate that with multiple problem instances, we can learn the mode with $\frac{n}{2}$ dimensions over the whole simplex.
+ oai:arXiv.org:2402.10818v3
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Enrique Nueve, Bo Waggoner, Dhamma Kimpara, Jessie Finocchiaro
+
+
+ The Design and Implementation of a High-Performance Log-Structured RAID System for ZNS SSDs
+ https://arxiv.org/abs/2402.17963
+ arXiv:2402.17963v3 Announce Type: replace
+Abstract: Zoned Namespace (ZNS) defines a new abstraction for host software to flexibly manage storage in flash-based SSDs as append-only zones. It also provides a Zone Append primitive to further boost the write performance of ZNS SSDs by exploiting intra-zone parallelism. However, making Zone Append effective for reliable and scalable storage, in the form of a RAID array of multiple ZNS SSDs, is non-trivial, since Zone Append offloads address management to ZNS SSDs and requires hosts to specifically manage RAID stripes across multiple drives. We propose ZapRAID, a high-performance log-structured RAID system for ZNS SSDs by carefully exploiting Zone Append to achieve high write parallelism and lightweight stripe management. ZapRAID adopts a group-based data layout with a coarse-grained ordering across multiple groups of stripes, such that it can use small-size metadata for stripe management on a per-group basis under Zone Append. It further adopts hybrid data management to simultaneously achieve intra-zone and inter-zone parallelism through a careful combination of both Zone Write and Zone Append primitives. We implement ZapRAID as a user-space block device, and evaluate ZapRAID using microbenchmarks, trace-driven experiments, and real-application experiments. Our evaluation results show that ZapRAID achieves high write throughput and maintains high performance in normal reads, degraded reads, crash recovery, and full-drive recovery.
+ oai:arXiv.org:2402.17963v3
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinhong Li, Yiyang Geng, Qiuping Wang, Shujie Han, Patrick P. C. Lee
+
+
+ Unified Source-Free Domain Adaptation
+ https://arxiv.org/abs/2403.07601
+ arXiv:2403.07601v4 Announce Type: replace
+Abstract: In the pursuit of transferring a source model to a target domain without access to the source training data, Source-Free Domain Adaptation (SFDA) has been extensively explored across various scenarios, including Closed-set, Open-set, Partial-set, and Generalized settings. Existing methods, focusing on specific scenarios, not only address a limited subset of challenges but also necessitate prior knowledge of the target domain, significantly limiting their practical utility and deployability. In light of these considerations, we introduce a more practical yet challenging problem, termed unified SFDA, which comprehensively incorporates all specific scenarios in a unified manner. In this paper, we propose a novel approach latent Causal factors discovery for unified SFDA (CausalDA). In contrast to previous alternatives that emphasize learning the statistical description of reality, we formulate CausalDA from a causality perspective. The objective is to uncover potential causality between latent variables and model decisions, enhancing the reliability and robustness of the learned model against domain shifts. To integrate extensive world knowledge, we leverage a pre-trained vision-language model such as CLIP. This aids in the formation and discovery of latent causal factors in the absence of supervision in the variation of distribution and semantics, coupled with a newly designed information bottleneck with theoretical guarantees. Extensive experiments demonstrate that CausalDA can achieve new state-of-the-art results in distinct SFDA settings, as well as source-free out-of-distribution generalization. Our code and data are available at https://github.com/tntek/CausalDA.
+ oai:arXiv.org:2403.07601v4
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Song Tang, Wenxin Su, Mao Ye, Boyu Wang, Xiatian Zhu
+
+
+ A restricted additive smoother for finite cell flow problems
+ https://arxiv.org/abs/2403.11636
+ arXiv:2403.11636v2 Announce Type: replace
+Abstract: In this work, we propose an adaptive geometric multigrid method for the solution of large-scale finite cell flow problems. The finite cell method seeks to circumvent the need for a boundary-conforming mesh through the embedding of the physical domain in a regular background mesh. As a result of the intersection between the physical domain and the background computational mesh, the resultant systems of equations are typically numerically ill-conditioned, rendering the appropriate treatment of cutcells a crucial aspect of the solver. To this end, we propose a smoother operator with favorable parallel properties and discuss its memory footprint and parallelization aspects. We propose three cache policies that offer a balance between cached and on-the-fly computation and discuss the optimization opportunities offered by the smoother operator. It is shown that the smoother operator, on account of its additive nature, can be replicated in parallel exactly with little communication overhead, which offers a major advantage in parallel settings as the geometric multigrid solver is consequently independent of the number of processes. The convergence and scalability of the geometric multigrid method is studied using numerical examples. It is shown that the iteration count of the solver remains bounded independent of the problem size and depth of the grid hierarchy. The solver is shown to obtain excellent weak and strong scaling using numerical benchmarks with more than 665 million degrees of freedom. The presented geometric multigrid solver is, therefore, an attractive option for the solution of large-scale finite cell problems in massively parallel high-performance computing environments.
+ oai:arXiv.org:2403.11636v2
+ math.NA
+ cs.NA
+ math-ph
+ math.MP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ M. Saberi, A. Vogel
+
+
+ Towards automated formal security analysis of SAML V2.0 Web Browser SSO standard -- the POST/Artifact use case
+ https://arxiv.org/abs/2403.11859
+ arXiv:2403.11859v2 Announce Type: replace
+Abstract: Single Sign-On (SSO) protocols streamline user authentication with a unified login for multiple online services, improving usability and security. One of the most common SSO protocol frameworks - the Security Assertion Markup Language V2.0 (SAML) Web SSO Profile - has been in use for more than two decades, primarily in government, education and enterprise environments. Despite its mission-critical nature, only certain deployments and configurations of the Web SSO Profile have been formally analyzed. This paper attempts to bridge this gap by performing a comprehensive formal security analysis of the SAML V2.0 SP-initiated SSO with POST/Artifact Bindings use case. Rather than focusing on a specific deployment and configuration, we closely follow the specification with the goal of capturing many different deployments allowed by the standard. Modeling and analysis are performed using the Tamarin prover, a state-of-the-art tool for automated verification of security protocols in the symbolic model of cryptography. Technically, we build a meta-model of the use case that we instantiate to eight different protocol variants. Using the Tamarin prover, we formally verify a number of critical security properties for those protocol variants, while identifying certain drawbacks and potential vulnerabilities.
+ oai:arXiv.org:2403.11859v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1109/ACCESS.2025.3622379
+ IEEE Access, vol. 13, pp. 180126-180144, 2025
+ Zvonimir Hartl, Ante {\DJ}erek
+
+
+ Reflexive graph lenses in univalent foundations
+ https://arxiv.org/abs/2404.07854
+ arXiv:2404.07854v2 Announce Type: replace
+Abstract: Martin-L\"of's identity types provide a generic (albeit opaque) notion of identification or "equality" between any two elements of the same type, embodied in a canonical reflexive graph structure $(=_A, \mathbf{refl})$ on any type $A$. The miracle of Voevodsky's univalence principle is that it ensures, for essentially any naturally occurring structure in mathematics, that the resultant notion of identification is equivalent to the type of isomorphisms in the category of such structures. Characterisations of this kind are not automatic and must be established one-by-one; to this end, several authors have employed reflexive graphs and displayed reflexive graphs to organise the characterisation of identity types. We contribute reflexive graph lenses, a new family of intermediate abstractions lying between families of reflexive graphs and displayed reflexive graphs that simplifies the characterisation of identity types for complex structures. Every reflexive graph lens gives rise to a (more complicated) displayed reflexive graph, and our experience suggests that many naturally occurring displayed reflexive graphs arise in this way. Evidence for the utility of reflexive graph lenses is given by means of several case studies, including the theory of reflexive graphs itself as well as that of polynomial type operators. Finally, we exhibit an equivalence between the type of reflexive graph fibrations and the type of univalent reflexive graph lenses.
+ oai:arXiv.org:2404.07854v2
+ cs.LO
+ math.CT
+ math.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jonathan Sterling
+
+
+ Predictive Handover Strategy in 6G and Beyond: A Deep and Transfer Learning Approach
+ https://arxiv.org/abs/2404.08113
+ arXiv:2404.08113v2 Announce Type: replace
+Abstract: Next-generation cellular networks will evolve into more complex and virtualized systems, employing machine learning for enhanced optimization and leveraging higher frequency bands and denser deployments to meet varied service demands. This evolution, while bringing numerous advantages, will also pose challenges, especially in mobility management, as it will increase the overall number of handovers due to smaller coverage areas and the higher signal attenuation. To address these challenges, we propose a deep learning based algorithm for predicting the future serving cell utilizing sequential user equipment measurements to minimize the handover failures and interruption time. Our algorithm enables network operators to dynamically adjust handover triggering events or incorporate UAV base stations for enhanced coverage and capacity, optimizing network objectives like load balancing and energy efficiency through transfer learning techniques. Our framework complies with the O-RAN specifications and can be deployed in a Near-Real-Time RAN Intelligent Controller as an xApp leveraging the E2SM-KPM service model. The evaluation results demonstrate that our algorithm achieves a 92% accuracy in predicting future serving cells with high probability. Finally, by utilizing transfer learning, our algorithm significantly reduces the retraining time by 91% and 77% when new handover trigger decisions or UAV base stations are introduced to the network dynamically.
+ oai:arXiv.org:2404.08113v2
+ cs.NI
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ioannis Panitsas, Akrit Mudvari, Ali Maatouk, Leandros Tassiulas
+
+
+ Parameterized Algorithms for Coordinated Motion Planning: Minimizing Energy
+ https://arxiv.org/abs/2404.15950
+ arXiv:2404.15950v2 Announce Type: replace
+Abstract: We study the parameterized complexity of a generalization of the coordinated motion planning problem on graphs, where the goal is to route a specified subset of a given set of $k$ robots to their destinations with the aim of minimizing the total energy (i.e., the total length traveled). We develop novel techniques to push beyond previously-established results that were restricted to solid grids.
+ We design a fixed-parameter additive approximation algorithm for this problem parameterized by $k$ alone. This result, which is of independent interest, allows us to prove the following two results pertaining to well-studied coordinated motion planning problems: (1) A fixed-parameter algorithm, parameterized by $k$, for routing a single robot to its destination while avoiding the other robots, which is related to the famous Rush-Hour Puzzle; and (2) a fixed-parameter algorithm, parameterized by $k$ plus the treewidth of the input graph, for the standard \textsc{Coordinated Motion Planning} (CMP) problem in which we need to route all the $k$ robots to their destinations. The latter of these results implies, among others, the fixed-parameter tractability of CMP parameterized by $k$ on graphs of bounded outerplanarity, which include bounded-height subgrids.
+ We complement the above results with a lower bound which rules out the fixed-parameter tractability for CMP when parameterized by the total energy. This contrasts the recently-obtained tractability of the problem on solid grids under the same parameterization. As our final result, we strengthen the aforementioned fixed-parameter tractability to hold not only on solid grids but all graphs of bounded local treewidth -- a class including, among others, all graphs of bounded genus.
+ oai:arXiv.org:2404.15950v2
+ cs.DM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Argyrios Deligkas, Eduard Eiben, Robert Ganian, Iyad Kanj, M. S. Ramanujan
+
+
+ Improved All-Pairs Approximate Shortest Paths in Congested Clique
+ https://arxiv.org/abs/2405.02695
+ arXiv:2405.02695v2 Announce Type: replace
+Abstract: In this paper, we present a new randomized $O(1)$-approximation algorithm for the All-Pairs Shortest Paths (APSP) problem in weighted undirected graphs that runs in just $O(\log \log \log n)$ rounds in the Congested-Clique model.
+ Before our work, the fastest algorithms achieving an $O(1)$-approximation for APSP in weighted undirected graphs required $\operatorname{poly}(\log n)$ rounds, as shown by Censor-Hillel, Dory, Korhonen, and Leitersdorf (PODC 2019 & Distributed Computing 2021). In the unweighted undirected setting, Dory and Parter (PODC 2020 & Journal of the ACM 2022) obtained $O(1)$-approximation in $\operatorname{poly}(\log \log n)$ rounds.
+ By terminating our algorithm early, for any given parameter $t \geq 1$, we obtain an $O(t)$-round algorithm that guarantees an $O\left(\log^{1/2^t} n\right)$ approximation in weighted undirected graphs. This tradeoff between round complexity and approximation factor offers flexibility, allowing the algorithm to adapt to different requirements. In particular, for any constant $\varepsilon > 0$, an $O\left(\log^\varepsilon n\right)$-approximation can be obtained in $O(1)$ rounds. Previously, $O(1)$-round algorithms were only known for $O(\log n)$-approximation, as shown by Chechik and Zhang (PODC 2022).
+ A key ingredient in our algorithm is a lemma that, under certain conditions, allows us to improve an $a$-approximation for APSP to an $O(\sqrt{a})$-approximation in $O(1)$ rounds. To prove this lemma, we develop several new techniques, including an $O(1)$-round algorithm for computing the $k$-nearest nodes, as well as new types of hopsets and skeleton graphs based on the notion of $k$-nearest nodes.
+ oai:arXiv.org:2405.02695v2
+ cs.DS
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hong Duc Bui, Shashwat Chandra, Yi-Jun Chang, Michal Dory, Dean Leitersdorf
+
+
+ An Improved Reversible Data Hiding Algorithm Based on Reconstructed Mapping for PVO-k
+ https://arxiv.org/abs/2405.04068
+ arXiv:2405.04068v2 Announce Type: replace
+Abstract: Reversible Data Hiding (RDH) is a practical and efficient technique for information encryption. Among its methods, the Pixel-Value Ordering (PVO) algorithm and its variants primarily modify prediction errors to embed information. However, both the classic PVO and its improved versions, such as IPVO and PVO-k, share a common limitation: their maximum data embedding capacity for a given grayscale image is relatively low. This poses a challenge when large amounts of data need to be embedded into an image. In response to these issues, this paper proposes an improved design targeting the PVO-k algorithm. We have reconstructed the mapping scheme of the PVO-k algorithm to maximize the number of pixels that can embed encrypted information. Experimental validations show that our proposed scheme significantly surpasses previous algorithms in terms of the maximum data embedding capacity. For instance, when embedding information into a grayscale image of an airplane, our method's capacity exceeds that of PVO-k by 11,207 bits, PVO by 8,004 bits, and IPVO by 4,562 bits. The results demonstrate that our algorithm holds substantial advantages over existing methods and introduces innovative mapping ideas, laying a foundation for future research in reversible data hiding in images.
+ oai:arXiv.org:2405.04068v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yusen Zhang, Haoyun Xu, Jingwen Li
+
+
+ The computational power of discrete chemical reaction networks with bounded executions
+ https://arxiv.org/abs/2405.08649
+ arXiv:2405.08649v4 Announce Type: replace
+Abstract: Chemical reaction networks (CRNs) model systems where molecules interact according to a finite set of reactions such as $A + B \to C$, representing that if a molecule of $A$ and $B$ collide, they disappear and a molecule of $C$ is produced. CRNs can compute Boolean-valued predicates $\phi:\mathbb{N}^d \to \{0,1\}$ and integer-valued functions $f:\mathbb{N}^d \to \mathbb{N}$; for instance $X_1 + X_2 \to Y$ computes the function $\min(x_1,x_2)$.
+ We study the computational power of execution bounded CRNs, in which only a finite number of reactions can occur from the initial configuration (e.g., ruling out reversible reactions such as $A \rightleftharpoons B$). The power and composability of such CRNs depend crucially on some other modeling choices that do not affect the computational power of CRNs with unbounded executions, namely whether an initial leader is present, and whether (for predicates) all species are required to "vote" for the Boolean output. If the CRN starts with an initial leader, and can allow only the leader to vote, then all semilinear predicates and functions can be stably computed in $O(n \log n)$ parallel time by execution bounded CRNs.
+ However, if no initial leader is allowed, all species vote, and the CRN is "noncollapsing" (does not shrink from initially large to final $O(1)$ size configurations), then execution bounded CRNs are severely limited, able to compute only eventually constant predicates. A key tool is to characterize execution bounded CRNs as precisely those with a nonnegative linear potential function that is strictly decreased by every reaction, a result that may be of independent interest.
+ oai:arXiv.org:2405.08649v4
+ cs.CC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ David Doty, Ben Heckmann
+
+
+ On the Identification of Temporally Causal Representation with Instantaneous Dependence
+ https://arxiv.org/abs/2405.15325
+ arXiv:2405.15325v4 Announce Type: replace
+Abstract: Temporally causal representation learning aims to identify the latent causal process from time series observations, but most methods require the assumption that the latent causal processes do not have instantaneous relations. Although some recent methods achieve identifiability in the instantaneous causality case, they require either interventions on the latent variables or grouping of the observations, which are in general difficult to obtain in real-world scenarios. To fill this gap, we propose an \textbf{ID}entification framework for instantane\textbf{O}us \textbf{L}atent dynamics (\textbf{IDOL}) by imposing a sparse influence constraint that the latent causal processes have sparse time-delayed and instantaneous relations. Specifically, we establish identifiability results of the latent causal process based on sufficient variability and the sparse influence constraint by employing contextual information of time series data. Based on these theories, we incorporate a temporally variational inference architecture to estimate the latent variables and a gradient-based sparsity regularization to identify the latent causal process. Experimental results on simulation datasets illustrate that our method can identify the latent causal process. Furthermore, evaluations on multiple human motion forecasting benchmarks with instantaneous dependencies indicate the effectiveness of our method in real-world settings.
+ oai:arXiv.org:2405.15325v4
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zijian Li, Yifan Shen, Kaitao Zheng, Ruichu Cai, Xiangchen Song, Mingming Gong, Guangyi Chen, Kun Zhang
+
+
+ Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization
+ https://arxiv.org/abs/2405.19650
+ arXiv:2405.19650v3 Announce Type: replace
+Abstract: Multi-objective optimization can be found in many real-world applications where some conflicting objectives cannot be optimized by a single solution. Existing optimization methods often focus on finding a set of Pareto solutions with different optimal trade-offs among the objectives. However, the number of solutions required to approximate the whole Pareto optimal set well could be exponentially large with respect to the number of objectives, which makes these methods unsuitable for handling many optimization objectives. In this work, instead of finding a dense set of Pareto solutions, we propose a novel Tchebycheff set scalarization method to find a few representative solutions (e.g., 5) to cover a large number of objectives (e.g., $>100$) in a collaborative and complementary manner. In this way, each objective can be well addressed by at least one solution in the small solution set. In addition, we further develop a smooth Tchebycheff set scalarization approach for efficient optimization with good theoretical guarantees. Experimental studies on different problems with many optimization objectives demonstrate the effectiveness of our proposed method.
+ oai:arXiv.org:2405.19650v3
+ cs.LG
+ cs.AI
+ cs.NE
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xi Lin, Yilu Liu, Xiaoyuan Zhang, Fei Liu, Zhenkun Wang, Qingfu Zhang
+
+
+ Generator-Based Fuzzers with Type-Based Targeted Mutation
+ https://arxiv.org/abs/2406.02034
+ arXiv:2406.02034v4 Announce Type: replace
+Abstract: As with any fuzzer, directing Generator-Based Fuzzers (GBF) to reach particular code targets can increase the fuzzer's effectiveness. In previous work, coverage-guided fuzzers used a mix of static analysis, taint analysis, and constraint-solving approaches to address this problem. However, none of these techniques were particularly crafted for GBF where input generators are used to construct program inputs. The observation is that input generators carry information about the input structure that is naturally present through the typing composition of the program input.
+ In this paper, we introduce a type-based mutation heuristic, along with constant string lookup, for Java GBF. Our key intuition is that if one can identify which sub-part (types) of the input will likely influence the branching decision, then focusing on mutating the choices of the generators constructing these types is likely to achieve the desired coverages. We used our technique to fuzz AWSLambda applications. Results compared to a baseline GBF tool show an almost 20\% average improvement in application coverage, and larger improvements when third-party code is included.
+ oai:arXiv.org:2406.02034v4
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Soha Hussein, Stephen McCamant, Mike Whalen
+
+
+ Hi5: Synthetic Data for Inclusive, Robust, Hand Pose Estimation
+ https://arxiv.org/abs/2406.03599
+ arXiv:2406.03599v2 Announce Type: replace
+Abstract: Hand pose estimation plays a vital role in capturing subtle nonverbal cues essential for understanding human affect. However, collecting diverse, expressive real-world data remains challenging due to labor-intensive manual annotation that often underrepresents demographic diversity and natural expressions. To address this issue, we introduce a cost-effective approach to generating synthetic data using high-fidelity 3D hand models and a wide range of affective hand poses. Our method includes varied skin tones, genders, dynamic environments, realistic lighting conditions, and diverse naturally occurring gesture animations. The resulting dataset, Hi5, contains 583,000 pose-annotated images, carefully balanced to reflect natural diversity and emotional expressiveness. Models trained exclusively on Hi5 achieve performance comparable to human-annotated datasets, exhibiting superior robustness to occlusions and consistent accuracy across diverse skin tones -- which is crucial for reliably recognizing expressive gestures in affective computing applications. Our results demonstrate that synthetic data effectively addresses critical limitations of existing datasets, enabling more inclusive, expressive, and reliable gesture recognition systems while achieving competitive performance in pose estimation benchmarks. The Hi5 dataset, data synthesis pipeline, source code, and game engine project are publicly released to support further research in synthetic hand-gesture applications.
+ oai:arXiv.org:2406.03599v2
+ cs.CV
+ cs.GR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Masum Hasan, Cengiz Ozel, Nina Long, Alexander Martin, Samuel Potter, Tariq Adnan, Sangwu Lee, Ehsan Hoque
+
+
+ Unleashing the Potential of Tracklets for Unsupervised Video Person Re-Identification
+ https://arxiv.org/abs/2406.14261
+ arXiv:2406.14261v2 Announce Type: replace
+Abstract: With rich temporal-spatial information, video-based person re-identification methods have shown broad prospects. Although tracklets can be easily obtained with ready-made tracking models, annotating identities is still expensive and impractical. Therefore, some video-based methods propose using only a few identity annotations or camera labels to facilitate feature learning. They also simply average the frame features of each tracklet, overlooking unexpected variations and inherent identity consistency within tracklets. In this paper, we propose the Self-Supervised Refined Clustering (SSR-C) framework without relying on any annotation or auxiliary information to promote unsupervised video person re-identification. Specifically, we first propose the Noise-Filtered Tracklet Partition (NFTP) module to reduce the feature bias of tracklets caused by noisy tracking results, and sequentially partition the noise-filtered tracklets into "sub-tracklets". Then, we cluster and further merge sub-tracklets using the self-supervised signal from the tracklet partition, which is enhanced through a progressive strategy to generate reliable pseudo labels, facilitating intra-class cross-tracklet aggregation. Moreover, we propose the Class Smoothing Classification (CSC) loss to efficiently promote model learning. Extensive experiments on the MARS and DukeMTMC-VideoReID datasets demonstrate that our proposed SSR-C for unsupervised video person re-identification achieves state-of-the-art results and is comparable to advanced supervised methods. The code is available at https://github.com/Darylmeng/SSRC-Reid.
+ oai:arXiv.org:2406.14261v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/TIFS.2025.3648202
+ Nanxing Meng, Qizao Wang, Bin Li, Xiangyang Xue
+
+
+ Rethinking and Red-Teaming Protective Perturbation in Personalized Diffusion Models
+ https://arxiv.org/abs/2406.18944
+ arXiv:2406.18944v5 Announce Type: replace
+Abstract: Personalized diffusion models (PDMs) have become prominent for adapting pre-trained text-to-image models to generate images of specific subjects using minimal training data. However, PDMs are susceptible to minor adversarial perturbations, leading to significant degradation when fine-tuned on corrupted datasets. These vulnerabilities are exploited to create protective perturbations that prevent unauthorized image generation. Existing purification methods attempt to red-team the protective perturbation to break the protection but often over-purify images, resulting in information loss. In this work, we conduct an in-depth analysis of the fine-tuning process of PDMs through the lens of shortcut learning. We hypothesize and empirically demonstrate that adversarial perturbations induce a latent-space misalignment between images and their text prompts in the CLIP embedding space. This misalignment causes the model to erroneously associate noisy patterns with unique identifiers during fine-tuning, resulting in poor generalization. Based on these insights, we propose a systematic red-teaming framework that includes data purification and contrastive decoupling learning. We first employ off-the-shelf image restoration techniques to realign images with their original semantic content in latent space. Then, we introduce contrastive decoupling learning with noise tokens to decouple the learning of personalized concepts from spurious noise patterns. Our study not only uncovers shortcut learning vulnerabilities in PDMs but also provides a thorough evaluation framework for developing stronger protection. Our extensive evaluation demonstrates its advantages over existing purification methods and its robustness against adaptive perturbations.
+ oai:arXiv.org:2406.18944v5
+ cs.CV
+ cs.AI
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yixin Liu, Ruoxi Chen, Xun Chen, Lichao Sun
+
+
+ Fully tensorial approach to hypercomplex neural networks
+ https://arxiv.org/abs/2407.00449
+ arXiv:2407.00449v4 Announce Type: replace
+Abstract: A fully tensorial theory of hypercomplex neural networks is given. It allows neural networks to use arithmetic based on arbitrary algebras. The key point is to observe that algebra multiplication can be represented as a rank-three tensor and to use this tensor in every algebraic operation. This approach is attractive for neural network libraries that support efficient tensorial operations. It agrees with previous implementations for four-dimensional algebras. A proof of the Universal Approximation Theorem for the tensor formalism is given.
+ oai:arXiv.org:2407.00449v4
+ cs.LG
+ cs.AI
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Agnieszka Niemczynowicz, Rados{\l}aw Antoni Kycia
+
+
+ VRP-UDF: Towards Unbiased Learning of Unsigned Distance Functions from Multi-view Images with Volume Rendering Priors
+ https://arxiv.org/abs/2407.16396
+ arXiv:2407.16396v2 Announce Type: replace
+Abstract: Unsigned distance functions (UDFs) have been a vital representation for open surfaces. With different differentiable renderers, current methods are able to train neural networks to infer a UDF by minimizing the rendering errors with the UDF to the multi-view ground truth. However, these differentiable renderers are mainly handcrafted, which makes them either biased on ray-surface intersections, or sensitive to unsigned distance outliers, or not scalable to large scenes. To resolve these issues, we present a novel differentiable renderer to infer UDFs more accurately. Instead of using handcrafted equations, our differentiable renderer is a neural network which is pre-trained in a data-driven manner. It learns how to render unsigned distances into depth images, leading to a prior knowledge, dubbed volume rendering priors. To infer a UDF for an unseen scene from multiple RGB images, we generalize the learned volume rendering priors to map inferred unsigned distances in alpha blending for RGB image rendering. To reduce the bias of sampling in UDF inference, we utilize an auxiliary point sampling prior as an indicator of ray-surface intersection, and propose novel schemes towards more accurate and uniform sampling near the zero-level sets. We also propose a new strategy that leverages our pretrained volume rendering prior to serve as a general surface refiner, which can be integrated with various Gaussian reconstruction methods to optimize the Gaussian distributions and refine geometric details. Our results show that the learned volume rendering prior is unbiased, robust, scalable, 3D aware, and more importantly, easy to learn. Further experiments show that the volume rendering prior is also a general strategy to enhance other neural implicit representations such as signed distance function and occupancy.
+ oai:arXiv.org:2407.16396v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Wenyuan Zhang, Chunsheng Wang, Kanle Shi, Yu-Shen Liu, Zhizhong Han
+
+
+ Multimodal Emotion Recognition using Audio-Video Transformer Fusion with Cross Attention
+ https://arxiv.org/abs/2407.18552
+ arXiv:2407.18552v4 Announce Type: replace
+Abstract: Multimodal emotion recognition (MER) aims to infer human affect by jointly modeling audio and visual cues; however, existing approaches often struggle with temporal misalignment, weakly discriminative feature representations, and suboptimal fusion of heterogeneous modalities. To address these challenges, we propose AVT-CA, an Audio-Video Transformer architecture with cross attention for robust emotion recognition. The proposed model introduces a hierarchical video feature representation that combines channel attention, spatial attention, and local feature extraction to emphasize emotionally salient regions while suppressing irrelevant information. These refined visual features are integrated with audio representations through an intermediate transformer-based fusion mechanism that captures interlinked temporal dependencies across modalities. Furthermore, a cross-attention module selectively reinforces mutually consistent audio-visual cues, enabling effective feature selection and noise-aware fusion. Extensive experiments on three benchmark datasets, CMU-MOSEI, RAVDESS, and CREMA-D, demonstrate that AVT-CA consistently outperforms state-of-the-art baselines, achieving significant improvements in both accuracy and F1-score. Our source code is publicly available at https://github.com/shravan-18/AVTCA.
+ oai:arXiv.org:2407.18552v4
+ cs.MM
+ cs.CL
+ cs.CV
+ cs.LG
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Joe Dhanith P R, Shravan Venkatraman, Vigya Sharma, Santhosh Malarvannan
+
+
+ Astra: Efficient Transformer Architecture and Contrastive Dynamics Learning for Embodied Instruction Following
+ https://arxiv.org/abs/2408.01147
+ arXiv:2408.01147v2 Announce Type: replace
+Abstract: Vision-language-action models have gained significant attention for their ability to model multimodal sequences in embodied instruction following tasks. However, most existing models rely on causal attention, which we find suboptimal for processing sequences composed of interleaved segments from different modalities. In this paper, we introduce Astra, a novel Transformer architecture featuring trajectory attention and learnable action queries, designed to efficiently process segmented multimodal trajectories and predict actions for imitation learning. Furthermore, we propose a contrastive dynamics learning objective to enhance the model's understanding of environment dynamics and multimodal alignment, complementing the primary behavior cloning objective. Through extensive experiments on three large-scale robot manipulation benchmarks, Astra demonstrates substantial performance improvements over previous models.
+ oai:arXiv.org:2408.01147v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.18653/v1/2025.emnlp-main.688
+ Yueen Ma, Dafeng Chi, Shiguang Wu, Yuecheng Liu, Yuzheng Zhuang, Irwin King
+
+
+ An Evaluation of Explanation Methods for Black-Box Detectors of Machine-Generated Text
+ https://arxiv.org/abs/2408.14252
+ arXiv:2408.14252v2 Announce Type: replace
+Abstract: The increasing difficulty of distinguishing language-model-generated from human-written text has led to the development of detectors of machine-generated text (MGT). However, in many contexts, a black-box prediction is not sufficient; it is equally important to know on what grounds a detector made that prediction. Explanation methods that estimate feature importance promise to provide indications of which parts of an input are used by classifiers for prediction. However, these are typically evaluated with simple classifiers and tasks that are intuitive to humans. To assess their suitability beyond these contexts, this study conducts the first systematic evaluation of explanation quality for detectors of MGT. The dimensions of faithfulness and stability are evaluated with five automated experiments, and usefulness is assessed in a user study. We use a dataset of ChatGPT-generated and human-written documents, and pair predictions of three existing language-model-based detectors with the corresponding SHAP, LIME, and Anchor explanations. We find that SHAP performs best in terms of faithfulness, stability, and in helping users to predict the detector's behavior. In contrast, LIME, perceived as most useful by users, scores the worst in terms of user performance at predicting detector behavior.
+ oai:arXiv.org:2408.14252v2
+ cs.LG
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Loris Schoenegger, Yuxi Xia, Benjamin Roth
+
+
+ On Expressive Power of Quantized Neural Networks under Fixed-Point Arithmetic
+ https://arxiv.org/abs/2409.00297
+ arXiv:2409.00297v2 Announce Type: replace
+Abstract: Existing works on the expressive power of neural networks typically assume real parameters and exact operations. In this work, we study the expressive power of quantized networks under discrete fixed-point parameters and inexact fixed-point operations with round-off errors. We first provide a necessary condition and a sufficient condition on fixed-point arithmetic and activation functions for quantized networks to represent all fixed-point functions from fixed-point vectors to fixed-point numbers. Then, we show that various popular activation functions satisfy our sufficient condition, e.g., Sigmoid, ReLU, ELU, SoftPlus, SiLU, Mish, and GELU. In other words, networks using those activation functions are capable of representing all fixed-point functions. We further show that our necessary condition and sufficient condition coincide under a mild condition on activation functions: e.g., for an activation function $\sigma$, there exists a fixed-point number $x$ such that $\sigma(x)=0$. Namely, we find a necessary and sufficient condition for a large class of activation functions. We lastly show that even quantized networks using binary weights in $\{-1,1\}$ can also represent all fixed-point functions for practical activation functions.
+ oai:arXiv.org:2409.00297v2
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Yeachan Park, Sejun Park, Geonho Hwang
+
+
+ Barrier Integral Control for Global Asymptotic Tracking of Uncertain Nonlinear Systems under State and Input Constraints
+ https://arxiv.org/abs/2409.04767
+ arXiv:2409.04767v3 Announce Type: replace
+Abstract: This paper addresses the problem of asymptotic tracking for high-order control-affine MIMO nonlinear systems with unknown dynamic terms subject to input and transient state constraints. We introduce Barrier Integral Control (BRIC), a novel algorithm designed to confine the system's state within a predefined funnel, ensuring adherence to the transient state constraints, and asymptotically drive it to a given reference trajectory from any initial condition. The algorithm leverages the innovative integration of a reciprocal barrier function and error-integral terms, featuring smooth feedback control. We further develop an extension of the algorithm, entailing continuous feedback, that uses a reference-modification technique to account for the input-saturation constraints. Notably, BRIC operates without relying on any information or approximation schemes for the (unknown) dynamic terms, which, unlike a large class of previous works, are not assumed to be bounded or to comply with globally Lipschitz/growth conditions. Additionally, the system's trajectory and asymptotic performance are decoupled from the uncertain model, control-gain selection, and initial conditions. Finally, comparative simulation studies validate the effectiveness of the proposed algorithm.
+ oai:arXiv.org:2409.04767v3
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Christos K. Verginis
+
+
+ Matrix perturbation analysis of methods for extracting singular values from approximate singular subspaces
+ https://arxiv.org/abs/2409.09187
+ arXiv:2409.09187v2 Announce Type: replace
+Abstract: Given (orthonormal) approximations $\tilde{U}$ and $\tilde{V}$ to the left and right subspaces spanned by the leading singular vectors of a matrix $A$, we discuss methods to approximate the leading singular values of $A$ and study their accuracy. In particular, we focus our analysis on the generalized Nystr\"om approximation, as surprisingly, it is able to obtain significantly better accuracy than classical methods, namely Rayleigh-Ritz and (one-sided) projected SVD.
+ A key idea of the analysis is to view the methods as finding the exact singular values of a perturbation of $A$. In this context, we derive a matrix perturbation result that exploits the structure of such a $2\times2$ block matrix perturbation. Furthermore, we extend it to block tridiagonal matrices. We then obtain bounds on the accuracy of the extracted singular values. This leads to sharp bounds that predict well the approximation error trends and explain the difference in the behavior of these methods. Finally, we present an approach to derive an a posteriori version of those bounds, which are more amenable to computation in practice.
+ oai:arXiv.org:2409.09187v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Lorenzo Lazzarino, Hussam Al Daas, Yuji Nakatsukasa
+
+
+ Unveiling and Mitigating Bias in Large Language Model Recommendations: A Path to Fairness
+ https://arxiv.org/abs/2409.10825
+ arXiv:2409.10825v4 Announce Type: replace
+Abstract: Large Language Model (LLM)-based recommendation systems excel in delivering comprehensive suggestions by deeply analyzing content and user behavior. However, they often inherit biases from skewed training data, favoring mainstream content while underrepresenting diverse or non-traditional options. This study explores the interplay between bias and LLM-based recommendation systems, focusing on music, song, and book recommendations across diverse demographic and cultural groups. This paper analyzes bias in LLM-based recommendation systems across multiple models (GPT, LLaMA, and Gemini), revealing its deep and pervasive impact on outcomes. Intersecting identities and contextual factors, like socioeconomic status, further amplify biases, complicating fair recommendations across diverse groups. Our findings reveal that bias in these systems is deeply ingrained, yet even simple interventions like prompt engineering can significantly reduce it. We further propose a retrieval-augmented generation strategy to mitigate bias more effectively. Numerical experiments validate these strategies, demonstrating both the pervasive nature of bias and the impact of the proposed solutions.
+ oai:arXiv.org:2409.10825v4
+ cs.IR
+ cs.AI
+ cs.ET
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Anindya Bijoy Das, Shahnewaz Karim Sakib
+
+
+ QMC integration based on arbitrary (t,m,s)-nets yields optimal convergence rates on several scales of function spaces
+ https://arxiv.org/abs/2409.12879
+ arXiv:2409.12879v2 Announce Type: replace
+Abstract: We study the integration problem over the $s$-dimensional unit cube on four types of Banach spaces of integrands. First we consider Haar wavelet spaces, consisting of functions whose Haar wavelet coefficients exhibit a certain decay behavior measured by a parameter $\alpha >0$. We study the worst case error of integration over the norm unit ball and provide upper error bounds for quasi-Monte Carlo (QMC) cubature rules based on arbitrary $(t,m,s)$-nets as well as matching lower error bounds for arbitrary cubature rules. These results show that using arbitrary $(t,m,s)$-nets as sample points yields the best possible rate of convergence. Afterwards we study spaces of integrands of fractional smoothness $\alpha \in (0,1)$ and state a sharp Koksma-Hlawka-type inequality. More precisely, we show that on those spaces the worst case error of integration is equal to the corresponding fractional discrepancy. Those spaces can be continuously embedded into tensor product Bessel potential spaces, also known as Sobolev spaces of dominating mixed smoothness, with the same set of parameters. The latter spaces can be embedded into suitable Besov spaces of dominating mixed smoothness $\alpha$, which in turn can be embedded into the Haar wavelet spaces with the same set of parameters. Therefore our upper error bounds on Haar wavelet spaces for QMC cubatures based on $(t,m,s)$-nets transfer (with possibly different constants) to the corresponding spaces of integrands of fractional smoothness and to Sobolev and Besov spaces of dominating mixed smoothness. Moreover, known lower error bounds for periodic Sobolev and Besov spaces of dominating mixed smoothness show that QMC integration based on arbitrary $(t,m,s)$-nets yields the best possible convergence rate on periodic as well as on non-periodic Sobolev and Besov spaces of dominating mixed smoothness.
+ oai:arXiv.org:2409.12879v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Michael Gnewuch, Josef Dick, Lev Markhasin, Winfried Sickel
+
+
+ D2D Coded Caching from Two Classes of Optimal DPDAs using Cross Resolvable Designs
+ https://arxiv.org/abs/2409.14350
+ arXiv:2409.14350v2 Announce Type: replace
+Abstract: Device to device (D2D) communication is one of the most promising techniques for fifth-generation and beyond wireless communication systems. This paper considers coded caching in a wireless D2D network, in which a central server initially places the data in the user cache memories, and all user demands are served through inter-user coded multicast transmissions. D2D placement delivery array (DPDA) was proposed as a tool for designing coded caching schemes with reduced subpacketization levels in a D2D network. In this paper, we first constructed three classes of DPDAs using a cross resolvable design, a group divisible design, and a newly developed block design. The resulting D2D schemes achieve low subpacketization levels while meeting the known lower bound on the transmission load of a DPDA. These classes of constructed DPDAs either simplify or generalize all existing DPDA constructions that achieve the known lower bound and have low subpacketization levels. Furthermore, a new lower bound on the transmission load of a DPDA is proposed. Two new classes of DPDAs are then constructed using a cross resolvable design and a newly developed block design, respectively. These constructions yield low-subpacketization D2D schemes and achieve the proposed lower bound on the transmission load. Compared to existing schemes with the same system parameters as those obtained from the proposed DPDAs, the proposed schemes have an advantage in either transmission load or subpacketization level or both.
+ oai:arXiv.org:2409.14350v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rashid Ummer N. T., B. Sundar Rajan
+
+
+ A new baseline for edge detection: Make Encoder-Decoder great again
+ https://arxiv.org/abs/2409.14976
+ arXiv:2409.14976v3 Announce Type: replace
+Abstract: The performance of deep learning based edge detectors has far exceeded that of humans, but their huge computational cost and complex training strategies hinder further development and application. In this paper, we eliminate these complexities with a vanilla encoder-decoder based detector. Firstly, we design a bilateral encoder to decouple the extraction of location features and semantic features. Since the location branch no longer provides cues for the semantic branch, the richness of features can be further compressed, which is the key to making our model more compact. We propose a cascaded feature fusion decoder, where the location features are progressively refined by semantic features. The refined location features are the only basis for generating the edge map; the coarse original location features and semantic features never directly influence the final result. Thus, noise in the location features and location errors in the semantic features are suppressed in the generated edge map. The proposed New Baseline for Edge Detection (NBED) achieves superior performance consistently across multiple edge detection benchmarks, even compared with methods that incur huge computational cost and complex training strategies. The ODS of NBED on BSDS500 is 0.838, achieving state-of-the-art performance. Our study shows that what really matters in current edge detection is high-quality features, and we can make the encoder-decoder based detector great again even without complex training strategies and huge computational cost. The code is available at https://github.com/Li-yachuan/NBED.
+ oai:arXiv.org:2409.14976v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.image.2026.117485
+ Yachuan Li, Xavier Soria Pomab, Yongke Xi, Guanlin Li, Chaozhi Yang, Qian Xiao, Yun Bai, Zongmin LI
+
+
+ Towards Accessible Robot Control: Comparing Kinesthetic Teaching, SpaceMouse Teleoperation, and a Mixed Reality Interface
+ https://arxiv.org/abs/2409.18394
+ arXiv:2409.18394v3 Announce Type: replace
+Abstract: Teleoperation interfaces are essential tools for enabling human control of robotic systems. Although a wide range of interfaces has been developed, a persistent gap remains between the level of performance humans can achieve through these interfaces and the capabilities afforded by direct human-guided robot control. This gap is further exacerbated when users are inexperienced or unfamiliar with the robotic platform or control interface. In this work, we aim to better characterize this performance gap for non-expert users by comparing two teleoperation approaches, SpaceMouse teleoperation and a Mixed Reality (MR) interface, against kinesthetic teaching as a non-teleoperation baseline. All three approaches were evaluated in a comprehensive user study involving two robotic platforms and six complex manipulation tasks. Quantitative results show that the SpaceMouse and MR interfaces performed comparably, with significant differences in task completion observed for only two tasks, and success rates declining as task complexity increased. Qualitative analysis reflected these trends, highlighting differences in Physical Demand and identifying interface attributes that influence users' ability to perform, learn, and understand. This study quantifies the limitations of current teleoperation methods and incorporates subjective feedback from 25 participants. The results highlight the critical need to design and rigorously evaluate teleoperation systems for non-expert users, particularly in contexts where autonomous robots are deployed in personal or everyday environments, to ensure usability, efficiency, and accessibility.
+ oai:arXiv.org:2409.18394v3
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Aliyah Smith, Monroe Kennedy III
+
+
+ Probabilistic Analysis of Copyright Disputes and Generative AI Safety
+ https://arxiv.org/abs/2410.00475
+ arXiv:2410.00475v5 Announce Type: replace
+Abstract: This paper presents a probabilistic approach to analyzing copyright infringement disputes. Evidentiary principles shaped by case law are formalized in probabilistic terms, and the ``inverse ratio rule'' -- a controversial legal doctrine adopted by some courts -- is examined. Although this rule has faced significant criticism, a formal proof demonstrates its validity, provided it is properly defined. The probabilistic approach is further employed to study the copyright safety of generative AI. Specifically, the Near Access-Free (NAF) condition, previously proposed as a strategy for mitigating the heightened copyright infringement risks of generative AI, is evaluated. The analysis reveals limitations in its justifiability and efficacy.
+ oai:arXiv.org:2410.00475v5
+ cs.CY
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Proc. 20th Int. Conf. on Artificial Intelligence and Law (ICAIL '25), ACM, pp. 470-474 (2026)
+ Hiroaki Chiba-Okabe
+
+
+ MedQA-CS: Objective Structured Clinical Examination (OSCE)-Style Benchmark for Evaluating LLM Clinical Skills
+ https://arxiv.org/abs/2410.01553
+ arXiv:2410.01553v2 Announce Type: replace
+Abstract: Artificial intelligence (AI) and large language models (LLMs) in healthcare require advanced clinical skills (CS), yet current benchmarks fail to evaluate these comprehensively. We introduce MedQA-CS, an AI-SCE framework inspired by medical education's Objective Structured Clinical Examinations (OSCEs), to address this gap. MedQA-CS evaluates LLMs through two instruction-following tasks, LLM-as-medical-student and LLM-as-CS-examiner, designed to reflect real clinical scenarios. Our contributions include developing MedQA-CS, a comprehensive evaluation framework with publicly available data and expert annotations, and providing the quantitative and qualitative assessment of LLMs as reliable judges in CS evaluation. Our experiments show that MedQA-CS is a more challenging benchmark for evaluating clinical skills than traditional multiple-choice QA benchmarks (e.g., MedQA). Combined with existing benchmarks, MedQA-CS enables a more comprehensive evaluation of LLMs' clinical capabilities for both open- and closed-source LLMs.
+ oai:arXiv.org:2410.01553v2
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zonghai Yao, Zihao Zhang, Chaolong Tang, Xingyu Bian, Youxia Zhao, Zhichao Yang, Junda Wang, Huixue Zhou, Won Seok Jang, Feiyun Ouyang, Hong Yu
+
+
+ Undesirable Memorization in Large Language Models: A Survey
+ https://arxiv.org/abs/2410.02650
+ arXiv:2410.02650v3 Announce Type: replace
+Abstract: While recent research increasingly showcases the remarkable capabilities of Large Language Models (LLMs), it is equally crucial to examine their associated risks. Among these, privacy and security vulnerabilities are particularly concerning, posing significant ethical and legal challenges. At the heart of these vulnerabilities stands memorization, which refers to a model's tendency to store and reproduce phrases from its training data. This phenomenon has been shown to be a fundamental source of various privacy and security attacks against LLMs. In this paper, we provide a taxonomy of the literature on LLM memorization, exploring it across three dimensions: granularity, retrievability, and desirability. Next, we discuss the metrics and methods used to quantify memorization, followed by an analysis of the causes and factors that contribute to the memorization phenomenon. We then explore strategies that have been used so far to mitigate the undesirable aspects of this phenomenon. We conclude our survey by identifying potential research topics for the near future, including methods to balance privacy and performance, and the analysis of memorization in specific LLM contexts such as conversational agents, retrieval-augmented generation, and diffusion language models. Given the rapid research pace in this field, we also maintain a dedicated repository of the references discussed in this survey which will be regularly updated to reflect the latest developments.
+ oai:arXiv.org:2410.02650v3
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ali Satvaty, Suzan Verberne, Fatih Turkmen
+
+
+ A Comprehensive Study on GDPR-Oriented Analysis of Privacy Policies: Taxonomy, Corpus and GDPR Concept Classifiers
+ https://arxiv.org/abs/2410.04754
+ arXiv:2410.04754v2 Announce Type: replace
+Abstract: Machine learning based classifiers that take a privacy policy as the input and predict relevant concepts are useful in different applications such as (semi-)automated compliance analysis against requirements of the EU GDPR. In all past studies, such classifiers produce a concept label per segment (e.g., sentence or paragraph) and their performances were evaluated by using a dataset of labeled segments without considering the privacy policy they belong to. However, such an approach could overestimate the performance in real-world settings, where all segments in a new privacy policy are supposed to be unseen. Additionally, we also observed other research gaps, including the lack of a more complete GDPR taxonomy and the less consideration of hierarchical information in privacy policies. To fill such research gaps, we developed a more complete GDPR taxonomy, created the first corpus of labeled privacy policies with hierarchical information, and conducted the most comprehensive performance evaluation of GDPR concept classifiers for privacy policies. Our work leads to multiple novel findings, including the confirmed inappropriateness of splitting training and test sets at the segment level, the benefits of considering hierarchical information, the limitations of the "one size fits all" approach, and the significance of testing cross-corpus generalizability.
+ oai:arXiv.org:2410.04754v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Peng Tang, Xin Li, Yuxin Chen, Weidong Qiu, Haochen Mei, Allison Holmes, Fenghua Li, Shujun Li
+
+
+ Subspace method based on neural networks for eigenvalue problems
+ https://arxiv.org/abs/2410.13358
+ arXiv:2410.13358v2 Announce Type: replace
+Abstract: In this paper, we propose a subspace method based on neural networks for eigenvalue problems with high accuracy and low cost. We first construct a neural network-based orthogonal basis using a deep learning method and a dimensionality reduction technique, and then calculate the Galerkin projection of the eigenvalue problem onto the subspace spanned by the orthogonal basis to obtain an approximate solution. Numerical experiments show that we can obtain approximate eigenvalues and eigenfunctions with very high accuracy at low cost.
+ oai:arXiv.org:2410.13358v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaoying Dai, Yunying Fan, Zhiqiang Sheng
+
+
+ RemoteDet-Mamba: A Hybrid Mamba-CNN Network for Multi-modal Object Detection in Remote Sensing Images
+ https://arxiv.org/abs/2410.13532
+ arXiv:2410.13532v2 Announce Type: replace
+Abstract: Unmanned Aerial Vehicle (UAV) remote sensing, with its advantages of rapid information acquisition and low cost, has been widely applied in scenarios such as emergency response. However, due to the long imaging distance and complex imaging mechanisms, targets in remote sensing images often face challenges such as small object size, dense distribution, and low inter-class discriminability. To address these issues, this paper proposes a multi-modal remote sensing object detection network called RemoteDet-Mamba, which is based on a patch-level four-direction selective scanning fusion strategy. This method simultaneously learns unimodal local features and fuses cross-modal patch-level global semantic information, thereby enhancing the distinguishability of small objects and improving inter-class discrimination. Furthermore, the designed lightweight fusion mechanism effectively decouples densely packed targets while reducing computational complexity. Experimental results on the DroneVehicle dataset demonstrate that RemoteDet-Mamba achieves superior detection performance compared to current mainstream methods, while maintaining low parameter count and computational overhead, showing promising potential for practical applications.
+ oai:arXiv.org:2410.13532v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Kejun Ren, Xin Wu, Lianming Xu, Li Wang
+
+
+ TabDPT: Scaling Tabular Foundation Models on Real Data
+ https://arxiv.org/abs/2410.18164
+ arXiv:2410.18164v3 Announce Type: replace
+Abstract: Tabular data is one of the most ubiquitous sources of information worldwide, spanning a wide variety of domains. This inherent heterogeneity has slowed the development of Tabular Foundation Models (TFMs) capable of fast generalization to unseen datasets. In-Context Learning (ICL) has recently emerged as a promising solution for TFMs, enabling dynamic adaptation to new tasks without additional tuning. While many studies have attempted to re-purpose large language models for tabular ICL, they have had limited success, so recent works have focused on developing tabular-specific foundation models. In this work, we propose an approach to combine ICL-based retrieval with self-supervised learning to train tabular foundation models. We also investigate the utility of real vs. synthetic data for model pre-training, and show that real data can contain useful signal not easily captured in synthetic training. Specifically, we show that incorporating real data during the pre-training phase can lead to significantly faster training and better downstream generalization to unseen data. Our resulting model, TabDPT, achieves strong performance on both regression (CTR23) and classification (CC18) benchmarks. Importantly, we also demonstrate that with our pre-training procedure, scaling both model and data size leads to consistent performance improvements that follow power laws. This echoes scaling laws in LLMs and other foundation models, and suggests that large-scale TFMs are achievable. We open-source our full pipeline: inference code including trained model weights can be found at github.com/layer6ai-labs/TabDPT-inference, and the training code to reproduce experiments can be found at github.com/layer6ai-labs/TabDPT-training.
+ oai:arXiv.org:2410.18164v3
+ cs.LG
+ cs.AI
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ NeurIPS 2025 Proceedings
+ Junwei Ma, Valentin Thomas, Rasa Hosseinzadeh, Alex Labach, Hamidreza Kamkari, Jesse C. Cresswell, Keyvan Golestan, Guangwei Yu, Anthony L. Caterini, Maksims Volkovs
+
+
+ Zero-shot Generalization in Inventory Management: Train, then Estimate and Decide
+ https://arxiv.org/abs/2411.00515
+ arXiv:2411.00515v3 Announce Type: replace
+Abstract: Deploying deep reinforcement learning (DRL) in real-world inventory management presents challenges, including dynamic environments and uncertain problem parameters, e.g. demand and lead time distributions. These challenges highlight a research gap, suggesting a need for a unifying framework to model and solve sequential decision-making under parameter uncertainty. We address this by exploring an underexplored area of DRL for inventory management: training generally capable agents (GCAs) under zero-shot generalization (ZSG). Here, GCAs are advanced DRL policies designed to handle a broad range of sampled problem instances with diverse inventory challenges. ZSG refers to the ability to successfully apply learned policies to unseen instances with unknown parameters without retraining.
+ We propose a unifying Super-Markov Decision Process formulation and the Train, then Estimate and Decide (TED) framework to train and deploy a GCA tailored to inventory management applications. The TED framework consists of three phases: training a GCA on varied problem instances, continuously estimating problem parameters during deployment, and making decisions based on these estimates. Applied to periodic review inventory problems with lost sales, cyclic demand patterns, and stochastic lead times, our trained agent, the Generally Capable Lost Sales Network (GC-LSN), consistently outperforms well-known traditional policies when problem parameters are known. Moreover, under conditions where demand and/or lead time distributions are initially unknown and must be estimated, we benchmark against online learning methods that provide worst-case performance guarantees. Our GC-LSN policy, paired with the Kaplan-Meier estimator, is demonstrated to complement these methods by providing superior empirical performance.
+ oai:arXiv.org:2411.00515v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Tarkan Temizöz, Christina Imdahl, Remco Dijkman, Douniel Lamghari-Idrissi, Willem van Jaarsveld
+
+
+ CausAdv: A Causal-based Framework for Detecting Adversarial Examples
+ https://arxiv.org/abs/2411.00839
+ arXiv:2411.00839v3 Announce Type: replace
+Abstract: Deep learning has led to tremendous success in computer vision, largely due to Convolutional Neural Networks (CNNs). However, CNNs have been shown to be vulnerable to crafted adversarial perturbations. This vulnerability to adversarial examples has motivated research into improving model robustness through adversarial detection and defense methods. In this paper, we address the adversarial robustness of CNNs through causal reasoning. We propose CausAdv: a causal framework for detecting adversarial examples based on counterfactual reasoning. CausAdv learns both causal and non-causal features of every input, and quantifies the counterfactual information (CI) of every filter of the last convolutional layer. We then perform a statistical analysis of the filters' CI across clean and adversarial samples, to demonstrate that adversarial examples exhibit different CI distributions compared to clean samples. Our results show that causal reasoning enhances the process of adversarial detection without the need to train a separate detector. Moreover, we illustrate the efficiency of causal explanations as a helpful detection tool by visualizing the extracted causal features.
+ oai:arXiv.org:2411.00839v3
+ cs.LG
+ cs.AI
+ cs.CV
+ stat.ME
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hichem Debbi
+
+
+ Modular Deep Learning for Multivariate Time-Series: Decoupling Imputation and Downstream Tasks
+ https://arxiv.org/abs/2411.03941
+ arXiv:2411.03941v2 Announce Type: replace
+Abstract: Missing values are pervasive in large-scale time-series data, posing challenges for reliable analysis and decision-making. Many neural architectures have been designed to model and impute the complex and heterogeneous missingness patterns of such data. Most existing methods are end-to-end, rendering imputation tightly coupled with downstream predictive tasks and leading to limited reusability of the trained model, reduced interpretability, and challenges in assessing model quality. In this paper, we call for a modular approach that decouples imputation and downstream tasks, enabling independent optimisation and greater adaptability. Using the largest open-source Python library for deep learning-based time-series analysis, PyPOTS, we evaluate a modular pipeline across six state-of-the-art models that perform imputation and prediction on seven datasets spanning multiple domains. Our results show that a modular approach maintains high performance while prioritising flexibility and reusability - qualities that are crucial for real-world applications. Through this work, we aim to demonstrate how modularity can benefit multivariate time-series analysis, achieving a balance between performance and adaptability.
+ oai:arXiv.org:2411.03941v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Joseph Arul Raj, Linglong Qian, Zina Ibrahim
+
+
+ A Two-Stage Reactive Auction Framework for the Multi-Depot Rural Postman Problem with Dynamic Vehicle Failures
+ https://arxiv.org/abs/2411.04073
+ arXiv:2411.04073v2 Announce Type: replace
+Abstract: Although unmanned vehicle fleets offer efficiency in transportation, logistics and inspection, their susceptibility to failures poses a significant challenge to mission continuity. We study the Multi-Depot Rural Postman Problem with Rechargeable and Reusable Vehicles (MD-RPP-RRV) with vehicle failures, where unmanned rechargeable vehicles placed at multiple depots with capacity constraints may fail while serving arc-based demands. To address unexpected vehicle breakdowns during operation, we propose a two-stage real-time rescheduling framework. First, a centralized auction quickly generates a feasible rescheduling solution; for this stage, we derive a theoretical additive bound that establishes an analytical guarantee on the worst-case rescheduling penalty. Second, a peer auction refines this baseline through a problem-specific magnetic field router for local schedule repair, utilizing parameters calibrated via sensitivity analysis to ensure controlled computational growth. We benchmark this approach against a simulated annealing metaheuristic to evaluate solution quality and execution speed. Experimental results on 257 diverse failure scenarios demonstrate that the framework achieves an average runtime reduction of over 95\% relative to the metaheuristic baseline, cutting rescheduling times from hours to seconds while maintaining high solution quality. The two-stage framework excels on large-scale instances, surpassing the centralized auction in nearly 80\% of scenarios with an average solution improvement exceeding 12\%. Moreover, it outperforms the simulated annealing mean and best results in 59\% and 28\% of scenarios, respectively, offering the robust speed-quality trade-off required for real-time mission continuity.
+ oai:arXiv.org:2411.04073v2
+ cs.RO
+ cs.CC
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Eashwar Sathyamurthy, Jeffrey W. Herrmann, Shapour Azarm
+
+
+ LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation
+ https://arxiv.org/abs/2411.04997
+ arXiv:2411.04997v5 Announce Type: replace
+Abstract: CLIP is a foundational multimodal model that aligns image and text features into a shared representation space via contrastive learning on large-scale image-text pairs. Its effectiveness primarily stems from the use of natural language as rich supervision. Motivated by the remarkable advancements in large language models (LLMs), this work explores how LLMs' superior text understanding and extensive open-world knowledge can enhance CLIP's capability, especially for processing longer and more complex image captions. We propose an efficient post-training strategy that integrates LLMs into pretrained CLIP. To address the challenge posed by the autoregressive nature of LLMs, we introduce a caption-to-caption contrastive fine-tuning framework, significantly enhancing the discriminative quality of LLM outputs. Extensive experiments demonstrate that our approach outperforms LoRA-based methods, achieving nearly fourfold faster training with superior performance. Furthermore, we validate substantial improvements over state-of-the-art models such as CLIP, EVA02, and SigLip2 across various zero-shot multimodal retrieval tasks, cross-lingual retrieval tasks, and multimodal language model pretraining.
+ oai:arXiv.org:2411.04997v5
+ cs.CV
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weiquan Huang, Aoqi Wu, Yifan Yang, Xufang Luo, Yuqing Yang, Liang Hu, Qi Dai, Chunyu Wang, Xiyang Dai, Dongdong Chen, Chong Luo, Lili Qiu
+
+
+ An Adaptive Online Smoother with Closed-Form Solutions and Information-Theoretic Lag Selection for Conditional Gaussian Nonlinear Systems
+ https://arxiv.org/abs/2411.05870
+ arXiv:2411.05870v2 Announce Type: replace
+Abstract: Data assimilation (DA) combines partial observations with dynamical models to improve state estimation. Filter-based DA uses only past and present data and is the prerequisite for real-time forecasts. Smoother-based DA exploits both past and future observations. It aims to fill in missing data, provide more accurate estimations, and develop high-quality datasets. However, the standard smoothing procedure requires using all historical state estimations, which is storage-demanding, especially for high-dimensional systems. This paper develops an adaptive-lag online smoother for a large class of complex dynamical systems with strong nonlinear and non-Gaussian features, which has important applications to many real-world problems. The adaptive lag allows the utilization of observations only within a nearby window, thus reducing computational complexity and storage needs. Online lag adjustment is essential for tackling turbulent systems, where temporal autocorrelation varies significantly over time due to intermittency, extreme events, and nonlinearity. Based on the uncertainty reduction in the estimated state, an information criterion is developed to systematically determine the adaptive lag. Notably, the mathematical structure of these systems facilitates the use of closed analytic formulae to calculate the online smoother and adaptive lag, avoiding empirical tunings as in ensemble-based DA methods. The adaptive online smoother is applied to studying three important scientific problems. First, it helps detect online causal relationships between state variables. Second, the advantage of reduced computational storage expenditure is illustrated via Lagrangian DA, a high-dimensional nonlinear problem. Finally, the adaptive smoother advances online parameter estimation with partial observations, emphasizing the role of the observed extreme events in accelerating convergence.
+ oai:arXiv.org:2411.05870v2
+ eess.SY
+ cs.SY
+ math.DS
+ math.PR
+ physics.data-an
+ stat.ME
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Marios Andreou, Nan Chen, Yingda Li
+
+
+ GalaxyEdit: Large-Scale Image Editing Dataset with Enhanced Diffusion Adapter
+ https://arxiv.org/abs/2411.13794
+ arXiv:2411.13794v2 Announce Type: replace
+Abstract: Training of large-scale text-to-image and image-to-image models requires a huge amount of annotated data. While text-to-image datasets are abundant, data available for instruction-based image-to-image tasks like object addition and removal is limited. This is because of the several challenges associated with the data generation process, such as, significant human effort, limited automation, suboptimal end-to-end models, data diversity constraints and high expenses. We propose an automated data generation pipeline aimed at alleviating such limitations, and introduce GalaxyEdit - a large-scale image editing dataset for add and remove operations. We fine-tune the SD v1.5 model on our dataset and find that our model can successfully handle a broader range of objects and complex editing instructions, outperforming state-of-the-art methods in FID scores by 11.2\% and 26.1\% for add and remove tasks respectively. Furthermore, in light of on-device usage scenarios, we expand our research to include task-specific lightweight adapters leveraging the ControlNet-xs architecture. While ControlNet-xs excels in canny and depth guided generation, we propose to improve the communication between the control network and U-Net for more intricate add and remove tasks. We achieve this by enhancing ControlNet-xs with non-linear interaction layers based on Volterra filters. Our approach outperforms ControlNet-xs in both add/remove and canny-guided image generation tasks, highlighting the effectiveness of the proposed enhancement.
+ oai:arXiv.org:2411.13794v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Aniruddha Bala, Rohan Jaiswal, Siddharth Roheda, Rohit Chowdhury, Loay Rashid
+
+
+ Riemannian Denoising Model for Molecular Structure Optimization with Chemical Accuracy
+ https://arxiv.org/abs/2411.19769
+ arXiv:2411.19769v2 Announce Type: replace
+Abstract: We introduce a framework for molecular structure optimization using denoising model on a physics-informed Riemannian manifold (R-DM). Unlike conventional approaches operating in Euclidean space, our method leverages a Riemannian metric that better aligns with molecular energy change, enabling more robust modeling of potential energy surfaces. By incorporating internal coordinates reflective of energetic properties, R-DM achieves chemical accuracy with an energy error below 1 kcal/mol. Comparative evaluations on QM9, QM7-X, and GEOM datasets demonstrate improvements in both structural and energetic accuracy, surpassing conventional Euclidean-based denoising models. This approach highlights the potential of physics-informed coordinates for tackling complex molecular optimization problems, with implications for tasks in computational chemistry and materials science.
+ oai:arXiv.org:2411.19769v2
+ cs.LG
+ physics.chem-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jeheon Woo, Seonghwan Kim, Jun Hyeong Kim, Woo Youn Kim
+
+
+ JANUS: A Difference-Oriented Analyzer For Financial Centralization Risks in Smart Contracts
+ https://arxiv.org/abs/2412.03938
+ arXiv:2412.03938v2 Announce Type: replace
+Abstract: Some smart contracts violate decentralization principles by defining privileged accounts that manage other users' assets without permission, introducing centralization risks that have caused financial losses. Existing methods, however, face challenges in accurately detecting diverse centralization risks due to their dependence on predefined behavior patterns. In this paper, we propose JANUS, an automated analyzer for Solidity smart contracts that detects financial centralization risks independently of their specific behaviors. JANUS identifies differences between states reached by privileged and ordinary accounts, and analyzes whether these differences are finance-related. Focusing on the impact of risks rather than behaviors, JANUS achieves improved accuracy compared to existing tools and can uncover centralization risks with unknown patterns.
+ To evaluate JANUS's performance, we compare it with other tools using a dataset of 540 contracts. Our evaluation demonstrates that JANUS outperforms representative tools in terms of detection accuracy for financial centralization risks. Additionally, we evaluate JANUS on a real-world dataset of 33,151 contracts, successfully identifying two types of risks that other tools fail to detect. We also prove that the state traversal method and variable summaries, which are used in JANUS to reduce the number of states to be compared, do not introduce false alarms or omissions in detection.
+ oai:arXiv.org:2412.03938v2
+ cs.LG
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wansen Wang, Pu Zhang, Renjie Ji, Wenchao Huang, Zhaoyi Meng, Yan Xiong
+
+
+ Deblur4DGS: 4D Gaussian Splatting from Blurry Monocular Video
+ https://arxiv.org/abs/2412.06424
+ arXiv:2412.06424v3 Announce Type: replace
+Abstract: Recent 4D reconstruction methods have yielded impressive results but rely on sharp videos as supervision. However, motion blur often occurs in videos due to camera shake and object movement, and existing methods render blurry results when using such videos for reconstructing 4D models. Although a few approaches have attempted to address the problem, they struggle to produce high-quality results, due to the inaccuracy in estimating continuous dynamic representations within the exposure time. Encouraged by recent works in 3D motion trajectory modeling using 3D Gaussian Splatting (3DGS), we take 3DGS as the scene representation manner, and propose Deblur4DGS to reconstruct a high-quality 4D model from blurry monocular video. Specifically, we transform the estimation of continuous dynamic representations within an exposure time into exposure time estimation. Moreover, we introduce the exposure regularization term, multi-frame, and multi-resolution consistency regularization term to avoid trivial solutions. Furthermore, to better represent objects with large motion, we suggest blur-aware variable canonical Gaussians. Beyond novel-view synthesis, Deblur4DGS can be applied to improve blurry video from multiple perspectives, including deblurring, frame interpolation, and video stabilization. Extensive experiments in both synthetic and real-world data on the above four tasks show that Deblur4DGS outperforms state-of-the-art 4D reconstruction methods. The codes are available at https://github.com/ZcsrenlongZ/Deblur4DGS.
+ oai:arXiv.org:2412.06424v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renlong Wu, Zhilu Zhang, Mingyang Chen, Zifei Yan, Wangmeng Zuo
+
+
+ Beyond Knowledge Silos: Task Fingerprinting for Democratization of Medical Imaging AI
+ https://arxiv.org/abs/2412.08763
+ arXiv:2412.08763v2 Announce Type: replace
+Abstract: The field of medical imaging AI is currently undergoing rapid transformations, with methodical research increasingly translated into clinical practice. Despite these successes, research suffers from knowledge silos, hindering collaboration and progress: Existing knowledge is scattered across publications and many details remain unpublished, while privacy regulations restrict data sharing. In the spirit of democratizing AI, we propose a framework for secure knowledge transfer in the field of medical image analysis. The key to our approach is dataset "fingerprints", structured representations of feature distributions, that enable quantification of task similarity. We tested our approach across 71 distinct tasks and 12 medical imaging modalities by transferring neural architectures, pretraining, augmentation policies, and multi-task learning. According to comprehensive analyses, our method outperforms traditional methods for identifying relevant knowledge and facilitates collaborative model training. Our framework fosters the democratization of AI in medical imaging and could become a valuable tool for promoting faster scientific advancement.
+ oai:arXiv.org:2412.08763v2
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Patrick Godau, Akriti Srivastava, Constantin Ulrich, Tim Adler, Klaus Maier-Hein, Lena Maier-Hein
+
+
+ SuperGSeg: Open-Vocabulary 3D Segmentation with Structured Super-Gaussians
+ https://arxiv.org/abs/2412.10231
+ arXiv:2412.10231v3 Announce Type: replace
+Abstract: 3D Gaussian Splatting has recently gained traction for its efficient training and real-time rendering. While its vanilla representation is mainly designed for view synthesis, recent works extended it to scene understanding with language features. However, storing additional high-dimensional features per Gaussian for semantic information is memory-intensive, which limits their ability to segment and interpret challenging scenes. To this end, we introduce SuperGSeg, a novel approach that fosters cohesive, context-aware hierarchical scene representation by disentangling segmentation and language field distillation. SuperGSeg first employs neural 3D Gaussians to learn geometry, instance and hierarchical segmentation features from multi-view images with the aid of off-the-shelf 2D masks. These features are then leveraged to create a sparse set of Super-Gaussians. Super-Gaussians facilitate the lifting and distillation of 2D language features into 3D space. They enable hierarchical scene understanding with high-dimensional language feature rendering at moderate GPU memory costs. Extensive experiments demonstrate that SuperGSeg achieves remarkable performance on both open-vocabulary object selection and semantic segmentation tasks.
+ oai:arXiv.org:2412.10231v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Siyun Liang, Sen Wang, Kunyi Li, Michael Niemeyer, Stefano Gasperini, Hendrik P. A. Lensch, Nassir Navab, Federico Tombari
+
+
+ "They've Stolen My GPL-Licensed Model!": Toward Standardized and Transparent Model Licensing
+ https://arxiv.org/abs/2412.11483
+ arXiv:2412.11483v2 Announce Type: replace
+Abstract: As model parameter sizes scale into the billions and training consumes zettaFLOPs of computation, the reuse of Machine Learning (ML) assets and collaborative development have become increasingly prevalent in the ML community. These ML assets, including models, datasets, and software, may originate from various sources and be published under different licenses, which govern the use and distribution of licensed works and their derivatives. However, commonly chosen licenses, such as GPL and Apache, are software-specific and are not clearly defined or bounded in the context of model publishing. Meanwhile, the reused assets may also be under free-content licenses and model licenses, which pose a potential risk of license noncompliance and rights infringement within the model production workflow. In this paper, we address these challenges along two lines: 1) For ML workflow compliance, we propose ModelGo (MG) Analyzer, a tool that incorporates a vocabulary for ML workflow management and encoded license rules, enabling ontological reasoning to analyze rights granting and compliance issues. 2) For standardized model publishing, we introduce ModelGo Licenses, a set of model-specific licenses that provide flexible options to meet the diverse needs of the ML community. MG Analyzer is built on the Turtle language and the Notation3 reasoning engine, envisioned as a first step toward Linked Open Data for ML workflow management. We have also encoded our proposed model licenses into rules and demonstrated the effects of GPL and other commonly used licenses in model publishing, along with the flexibility advantages of our licenses, through comparisons and experiments.
+ oai:arXiv.org:2412.11483v2
+ cs.CY
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ 10.1145/3774904.3792968
+ Moming Duan, Rui Zhao, Linshan Jiang, Nigel Shadbolt, Bingsheng He
+
+
+ Intention Knowledge Graph Construction for User Intention Relation Modeling
+ https://arxiv.org/abs/2412.11500
+ arXiv:2412.11500v3 Announce Type: replace
+Abstract: Understanding user intentions is challenging for online platforms. Recent work on intention knowledge graphs addresses this but often lacks focus on connecting intentions, which is crucial for modeling user behavior and predicting future actions. This paper introduces a framework to automatically generate an intention knowledge graph, capturing connections between user intentions. Using the Amazon m2 dataset, we construct an intention graph with 351 million edges, demonstrating high plausibility and acceptance. Our model effectively predicts new session intentions and enhances product recommendations, outperforming previous state-of-the-art methods and showcasing the approach's practical utility.
+ oai:arXiv.org:2412.11500v3
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jiaxin Bai, Zhaobo Wang, Junfei Cheng, Dan Yu, Zerui Huang, Weiqi Wang, Xin Liu, Chen Luo, Yanming Zhu, Bo Li, Yangqiu Song
+
+
+ Optimization Insights into Deep Diagonal Linear Networks
+ https://arxiv.org/abs/2412.16765
+ arXiv:2412.16765v3 Announce Type: replace
+Abstract: Gradient-based methods successfully train highly overparameterized models in practice, even though the associated optimization problems are markedly nonconvex. Understanding the mechanisms that make such methods effective has become a central problem in modern optimization. To investigate this question in a tractable setting, we study Deep Diagonal Linear Networks. These are multilayer architectures with a reparameterization that preserves convexity in the effective parameter, while inducing a nontrivial geometry in the optimization landscape. Under mild initialization conditions, we show that gradient flow on the layer parameters induces a mirror-flow dynamic in the effective parameter space. This structural insight yields explicit convergence guarantees, including exponential decay of the loss under a Polyak-Lojasiewicz condition, and clarifies how the parametrization and initialization scale govern the training speed. Overall, our results demonstrate that deep diagonal overparameterizations, despite their apparent complexity, can endow standard gradient methods with well-behaved and interpretable optimization dynamics.
+ oai:arXiv.org:2412.16765v3
+ cs.LG
+ math.OC
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hippolyte Labarri\`ere, Cesare Molinari, Lorenzo Rosasco, Cristian Vega, Silvia Villa
+
+
+ LLM-based relevance assessment still can't replace human relevance assessment
+ https://arxiv.org/abs/2412.17156
+ arXiv:2412.17156v3 Announce Type: replace
+Abstract: The use of large language models (LLMs) for relevance assessment in information retrieval has gained significant attention, with recent studies suggesting that LLM-based judgments provide comparable evaluations to human judgments. Notably, based on TREC 2024 data, Upadhyay et al. make a bold claim that LLM-based relevance assessments, such as those generated by the Umbrela system, can fully replace traditional human relevance assessments in TREC-style evaluations. This paper critically examines this claim, highlighting practical and theoretical limitations that undermine the validity of this conclusion.
+ First, we question whether the evidence provided by Upadhyay et al. genuinely supports their claim, particularly when the test collection is intended to serve as a benchmark for future research innovations. Second, we submit a system deliberately crafted to exploit automatic evaluation metrics, demonstrating that it can achieve artificially inflated scores without truly improving retrieval quality. Third, we simulate the consequences of circularity by analyzing Kendall's tau correlations under the hypothetical scenario in which all systems adopt Umbrela as a final-stage re-ranker, illustrating how reliance on LLM-based assessments can distort system rankings. Theoretical challenges - including the inherent narcissism of LLMs, the risk of overfitting to LLM-based metrics, and the potential degradation of future LLM performance - must be addressed before LLM-based relevance assessments can be considered a viable replacement for human judgments.
+ oai:arXiv.org:2412.17156v3
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ 10.20736/0002002105
+ Charles L. A. Clarke, Laura Dietz
+
+
+ Learning Randomized Reductions
+ https://arxiv.org/abs/2412.18134
+ arXiv:2412.18134v3 Announce Type: replace
+Abstract: A self-corrector for a function $f$ takes a black-box oracle computing $f$ that is correct on most inputs and turns it into one that is correct on every input with high probability. Self-correctors exist for any function that is randomly self-reducible (RSR), where the value $f$ at a given point $x$ can be recovered by computing $f$ on random correlated points. While RSRs enable powerful self-correction capabilities and have applications in complexity theory and cryptography, their discovery has traditionally required manual derivation by experts. We present Bitween, a method and tool for automated learning of randomized self-reductions for mathematical functions. We make two key contributions: First, we demonstrate that our learning framework based on linear regression outperforms sophisticated methods including genetic algorithms, symbolic regression, and mixed-integer linear programming for discovering RSRs from correlated samples. Second, we introduce Agentic Bitween, a neuro-symbolic approach where large language models dynamically discover novel query functions for RSR property discovery, leveraging vanilla Bitween as a tool for inference and verification, moving beyond the fixed query functions ($x+r$, $x-r$, $x \cdot r$, $x$, $r$) previously used in the literature. On RSR-Bench, our benchmark suite of 80 scientific and machine learning functions, vanilla Bitween surpasses existing symbolic methods, while Agentic Bitween discovers new RSR properties using frontier models to uncover query functions.
+ oai:arXiv.org:2412.18134v3
+ cs.LG
+ cs.CC
+ cs.PL
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ferhat Erata, Orr Paradise, Thanos Typaldos, Timos Antonopoulos, ThanhVu Nguyen, Shafi Goldwasser, Ruzica Piskac
+
+
+ "Feeling that I was Collaborating with Them:" A 20-year Scoping Review of Social Virtual Reality Leveraging Collaboration
+ https://arxiv.org/abs/2412.20266
+ arXiv:2412.20266v3 Announce Type: replace
+Abstract: As more people meet, interact, and socialize online, Social Virtual Reality (VR) emerges as a technology that bridges the gap between traditional face-to-face and online communication. Unlike traditional screen-based applications, Social VR provides immersive, spatial, and three-dimensional social interactions, making it a potential tool for enhancing remote collaborations. Despite the growing interest in Social VR, research on its role in collaboration remains fragmented, calling for a synthesis to identify research gaps and future directions. We conducted a 20-year scoping review, screening 2,035 articles and identifying 62 articles that addressed how Social VR has supported collaboration. Our analysis shows three key levels of support: Social VR can enhance individual perceptions and experiences within their groups, foster team dynamics with virtual elements that enable realistic interactions, and employ the unique affordances of VR to augment users' spaces. We discuss how future research in Social VR should move beyond replicating physical-world interactions and explore how immersive environments can cultivate long-term collaboration, trust, and more diverse and inclusive participation. This review highlights current practices and challenges, and points to new opportunities for theorizing and designing Social VR systems that responsibly support remote collaborations.
+ oai:arXiv.org:2412.20266v3
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3788053
+ Proc. ACM Hum.-Comput. Interact. 10, 2, Article CSCW017 (April 2026), 35 pages
+ Niloofar Sayadi, Sadie Co, Diego Gomez-Zara
+
+
+ MIRAGE: Exploring How Large Language Models Perform in Complex Social Interactive Environments
+ https://arxiv.org/abs/2501.01652
+ arXiv:2501.01652v3 Announce Type: replace
+Abstract: Large Language Models (LLMs) have shown remarkable capabilities in environmental perception, reasoning-based decision-making, and simulating complex human behaviors, particularly in interactive role-playing contexts. This paper introduces the Multiverse Interactive Role-play Ability General Evaluation (MIRAGE), a comprehensive framework designed to assess LLMs' proficiency in portraying advanced human behaviors through murder mystery games. MIRAGE features eight intricately crafted scripts encompassing diverse themes and styles, providing a rich simulation. To evaluate LLMs' performance, MIRAGE employs four distinct methods: the Trust Inclination Index (TII) to measure dynamics of trust and suspicion, the Clue Investigation Capability (CIC) to measure LLMs' capability of investigating clues, the Interactivity Capability Index (ICI) to assess role-playing capabilities, and the Script Compliance Index (SCI) to assess LLMs' capability of understanding and following instructions. Our experiments indicate that even popular models like GPT-4 face significant challenges in navigating the complexities presented by MIRAGE. The datasets and simulation codes are available at https://github.com/lime728/MIRAGE.
+ oai:arXiv.org:2501.01652v3
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yin Cai, Zhouhong Gu, Zhaohan Du, Zheyu Ye, Shaosheng Cao, Yiqian Xu, Hongwei Feng, Ping Chen
+
+
+ Active Learning Techniques for Pomset Recognizers
+ https://arxiv.org/abs/2501.03914
+ arXiv:2501.03914v2 Announce Type: replace
+Abstract: Pomsets are a promising formalism for concurrent programs based on partially ordered sets. Among this class, series-parallel pomsets admit a convenient linear representation and can be recognized by simple algebraic structures known as pomset recognizers. Active learning consists in inferring a formal model of a recognizable language by asking membership and equivalence queries to a minimally adequate teacher (MAT). We improve existing learning algorithms for pomset recognizers by (1) introducing a new counter-example analysis procedure that is in the best-case scenario exponentially more efficient than existing methods; (2) adapting the state-of-the-art $L^{\lambda}$ algorithm to minimize the impact of exceedingly verbose counter-examples and remove redundant queries; and (3) designing a suitable finite test suite that ensures general equivalence between two pomset recognizers by extending the well-known W-method.
+ oai:arXiv.org:2501.03914v2
+ cs.FL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Adrien Pommellet, Amazigh Amrane, Edgar Delaporte, Geoffroy Du Prey, Oscar Peyron
+
+
+ From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training
+ https://arxiv.org/abs/2501.06148
+ arXiv:2501.06148v2 Announce Type: replace
+Abstract: We study the problem of training neural stochastic differential equations, or diffusion models, to sample from a Boltzmann distribution without access to target samples. Existing methods for training such models enforce time-reversal of the generative and noising processes, using either differentiable simulation or off-policy reinforcement learning (RL). We prove equivalences between families of objectives in the limit of infinitesimal discretization steps, linking entropic RL methods (GFlowNets) with continuous-time objects (partial differential equations and path space measures). We further show that an appropriate choice of coarse time discretization during training allows greatly improved sample efficiency and the use of time-local objectives, achieving competitive performance on standard sampling benchmarks with reduced computational cost.
+ oai:arXiv.org:2501.06148v2
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Julius Berner, Lorenz Richter, Marcin Sendera, Jarrid Rector-Brooks, Nikolay Malkin
+
+
+ Tensorization of neural networks for improved privacy and interpretability
+ https://arxiv.org/abs/2501.06300
+ arXiv:2501.06300v3 Announce Type: replace
+Abstract: We present a tensorization algorithm for constructing tensor train/matrix product state (MPS) representations of functions, drawing on sketching and cross interpolation ideas. The method only requires black-box access to the target function and a small set of sample points defining the domain of interest. Thus, it is particularly well-suited for machine learning models, where the domain of interest is naturally defined by the training dataset. We show that this approach can be used to enhance the privacy and interpretability of neural network models. Specifically, we apply our decomposition to (i) obfuscate neural networks whose parameters encode patterns tied to the training data distribution, and (ii) estimate topological phases of matter that are easily accessible from the MPS representation. Additionally, we show that this tensorization can serve as an efficient initialization method for optimizing MPS in general settings, and that, for model compression, our algorithm achieves a superior trade-off between memory and time complexity compared to conventional tensorization methods of neural networks.
+ oai:arXiv.org:2501.06300v3
+ math.NA
+ cs.LG
+ cs.NA
+ physics.comp-ph
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.21468/SciPostPhysCore.8.4.095
+ SciPost Phys. Core 8, 095 (2025)
+ Jos\'e Ram\'on Pareja Monturiol, Alejandro Pozas-Kerstjens, David P\'erez-Garc\'ia
+
+
+ Asymptotic-Preserving Neural Networks based on Even-odd Decomposition for Multiscale Gray Radiative Transfer Equations
+ https://arxiv.org/abs/2501.08166
+ arXiv:2501.08166v2 Announce Type: replace
+Abstract: We present a novel Asymptotic-Preserving Neural Network (APNN) approach utilizing even-odd decomposition to tackle the nonlinear gray radiative transfer equations (GRTEs). Our AP loss demonstrates consistent stability concerning the small Knudsen number, ensuring the neural network solution uniformly converges to the diffusion limit solution. This APNN method alleviates the rigorous conservation requirements while simultaneously incorporating an auxiliary deep neural network, distinguishing it from the APNN method based on micro-macro decomposition for GRTE. Several numerical problems are examined to demonstrate the effectiveness of our proposed APNN technique.
+ oai:arXiv.org:2501.08166v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keke Wu, Xizhe Xie, Wengu Chen, Han Wang, Zheng Ma
+
+
+ FaceXBench: Evaluating Multimodal LLMs on Face Understanding
+ https://arxiv.org/abs/2501.10360
+ arXiv:2501.10360v3 Announce Type: replace
+Abstract: Multimodal Large Language Models (MLLMs) demonstrate impressive problem-solving abilities across a wide range of tasks and domains. However, their capacity for face understanding has not been systematically studied. To address this gap, we introduce FaceXBench, a comprehensive benchmark designed to evaluate MLLMs on complex face understanding tasks. FaceXBench includes 5,000 multimodal multiple-choice questions derived from 25 public datasets and a newly created dataset, FaceXAPI. These questions cover 14 tasks across 6 broad categories, assessing MLLMs' face understanding abilities in bias and fairness, face authentication, recognition, analysis, localization and tool retrieval. Using FaceXBench, we conduct an extensive evaluation of 26 open-source MLLMs alongside 2 proprietary models, revealing the unique challenges in complex face understanding tasks. We analyze the models across three evaluation settings: zero-shot, in-context task description, and chain-of-thought prompting. Our detailed analysis reveals that current MLLMs, including advanced models like GPT-4o, and GeminiPro 1.5, show significant room for improvement. We believe FaceXBench will be a crucial resource for developing MLLMs equipped to perform sophisticated face understanding. Code: https://github.com/Kartik-3004/facexbench
+ oai:arXiv.org:2501.10360v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kartik Narayan, Vibashan VS, Vishal M. Patel
+
+
+ VENENA: A Deceptive Visual Encryption Framework for Wireless Semantic Secrecy
+ https://arxiv.org/abs/2501.10699
+ arXiv:2501.10699v4 Announce Type: replace
+Abstract: Eavesdropping has been a long-standing threat to the security and privacy of wireless communications, since it is difficult to detect and costly to prevent. As networks evolve towards Sixth Generation (6G) and semantic communication becomes increasingly central to next-generation wireless systems, securing semantic information transmission emerges as a critical challenge. While classical physical layer security (PLS) focuses on passive security, the recently proposed concept of physical layer deception (PLD) offers a semantic encryption measure to actively deceive eavesdroppers. Yet the existing studies of PLD have been dominantly information-theoretical and link-level oriented, lacking considerations of system-level design and practical implementation.
+ In this work we propose Visual ENcryption for Eavesdropping NegAtion (VENENA), an artificial intelligence-enabled framework for secure image-based communication. VENENA protects confidential messages by encoding them visually while actively deceiving eavesdroppers: legitimate receivers use artificial intelligence (AI)-based classifiers to extract true message semantics, while interceptors perceive only falsified content. The framework transmits two superimposed image components with different power levels - a high-power decoy image and a low-power correction mask - ensuring only authorized receivers with favorable channel conditions can reconstruct the true message. Experimental validation demonstrates over 93% accuracy for legitimate users while limiting eavesdropper success to 52% even when system design is fully known, validating VENENA's active defense capability for 6G semantic communication.
+ oai:arXiv.org:2501.10699v4
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bin Han, Ye Yuan, Hans D. Schotten
+
+
+ D2D Coded Caching Schemes for Multiaccess Networks with Combinatorial Access Topology
+ https://arxiv.org/abs/2501.10756
+ arXiv:2501.10756v2 Announce Type: replace
+Abstract: This paper considers wireless device-to-device (D2D) coded caching in a multiaccess network, where the users communicate with each other and each user can access multiple cache nodes. Access topologies derived from two combinatorial designs known as the $t$-design and $t$-group divisible design ($t$-GDD), referred to as the $t$-design and $t$-GDD topologies respectively, which subsume a few other known topologies, have been studied for the multiaccess coded caching (MACC) network by Cheng et al. These access topologies are extended to a multiaccess D2D coded caching (MADCC) network and novel MADCC schemes are proposed. The MADCC network has been studied so far only for the cyclic wrap-around topology. Apart from the proposed novel MADCC schemes, MADCC schemes are also derived from the existing MACC schemes of Cheng et al. To compare the performance of different MADCC schemes, the metrics of load per user and subpacketization level are used while keeping the number of caches and cache memory size the same. The proposed MADCC scheme with $t$-design topology performs better in terms of subpacketization level while achieving the same load per user compared to the MADCC scheme derived from the MACC scheme with $t$-design topology of Cheng et al. The proposed MADCC scheme with $t$-GDD topology performs better in terms of load per user while achieving the same subpacketization level compared to the MADCC scheme derived from the MACC scheme with $t$-GDD topology of Cheng et al. in some cases. Compared to the existing MADCC scheme with cyclic wrap-around topology, the proposed MADCC scheme with $t$-design topology performs better in terms of load per user, and the proposed MADCC scheme with $t$-GDD topology performs better in terms of subpacketization level at the expense of an increase in load per user.
+ oai:arXiv.org:2501.10756v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rashid Ummer N. T., B. Sundar Rajan
+
+
+ Generative AI Misuse Potential in Cyber Security Education: A Case Study of a UK Degree Program
+ https://arxiv.org/abs/2501.12883
+ arXiv:2501.12883v4 Announce Type: replace
+Abstract: Recent advances in generative artificial intelligence (AI), such as ChatGPT, Google Gemini, and other large language models (LLMs), pose significant challenges for maintaining academic integrity within higher education. This paper examines the structural susceptibility of a certified M.Sc. Cyber Security program at a UK Russell Group university to the misuse of LLMs. Building on and extending a recently proposed quantitative framework for estimating assessment-level exposure, we analyse all summative assessments on the program and derive both module-level and program-level exposure metrics. Our results show that the majority of modules exhibit high exposure to LLM misuse, driven largely by independent project- and report-based assessments, with the capstone dissertation module particularly vulnerable. We introduce a credit-weighted program exposure score and find that the program as a whole falls within a high to very high risk band. We also discuss contextual factors -- such as block teaching and a predominantly international cohort -- that may amplify incentives to misuse LLMs. In response, we outline a set of LLM-resistant assessment strategies, critically assess the limitations of detection-based approaches, and argue for a pedagogy-first approach that preserves academic standards while preparing students for the realities of professional cyber security practice.
+ oai:arXiv.org:2501.12883v4
+ cs.CR
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Carlton Shepherd
+
+
+ vSTMD: Visual Motion Detection for Extremely Tiny Target at Various Velocities
+ https://arxiv.org/abs/2501.13054
+ arXiv:2501.13054v2 Announce Type: replace
+Abstract: Visual motion detection for extremely tiny (ET-) targets is challenging, due to their category-independent nature and the scarcity of visual cues, which often incapacitate mainstream feature-based models. Natural architectures with rich interpretability offer a promising alternative, where STMD architectures derived from insect visual STMD (Small Target Motion Detector) pathways have demonstrated their effectiveness. However, previous STMD models are constrained to a narrow velocity range, hindering their efficacy in real-world scenarios where targets exhibit diverse and unstable dynamics. To address this limitation, we present vSTMD, a learning-free model for motion detection of ET-targets at various velocities. Our key innovations include: (1) a cross-Inhibition Dynamic Potential (cIDP) that serves as a self-adaptive mechanism efficiently capturing motion cues across a wide velocity spectrum, and (2) the first Collaborative Directional Gradient Calculation (CDGC) strategy, which enhances orienting accuracy and robustness while reducing computational overhead to one-eighth of previously isolated strategies. Evaluated on the real-world dataset RIST, the proposed vSTMD and its feedback-facilitated variant vSTMD-F achieve relative $F_{1}$ gains of $30\%$ and $58\%$ over state-of-the-art (SOTA) STMD approaches, respectively. Furthermore, both models demonstrate competitive orientation estimation performance compared to SOTA deep learning-driven methods. Experiments also reveal the superiority of the natural architecture for ET-object motion detection - vSTMD is $60\times$ faster than contemporary data-driven methods, making it highly suitable for real-time applications in dynamic scenarios and complex backgrounds. Code is available at https://github.com/MingshuoXu/vSTMD.
+ oai:arXiv.org:2501.13054v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mingshuo Xu, Hao Luan, Zhou Daniel Hao, Jigen Peng, Shigang Yue
+
+
+ Engineering Carbon Credits Towards A Responsible FinTech Era: The Practices, Implications, and Future
+ https://arxiv.org/abs/2501.14750
+ arXiv:2501.14750v2 Announce Type: replace
+Abstract: Carbon emissions significantly contribute to climate change, and carbon credits have emerged as a key tool for mitigating environmental damage and helping organizations manage their carbon footprint. Despite their growing importance across sectors, fully leveraging carbon credits remains challenging. This study explores engineering practices and fintech solutions to enhance carbon emission management. We first review the negative impacts of carbon emission non-disclosure, revealing its adverse effects on financial stability and market value. Organizations are encouraged to actively manage emissions and disclose relevant data to mitigate risks. Next, we analyze factors influencing carbon prices and review advanced prediction algorithms that optimize carbon credit purchasing strategies, reducing costs and improving efficiency. Additionally, we examine corporate carbon emission prediction models, which offer accurate performance assessments and aid in planning future carbon credit needs. By integrating carbon price and emission predictions, we propose research directions, including corporate carbon management cost forecasting. This study provides a foundation for future quantitative research on the financial and market impacts of carbon management practices and is the first systematic review focusing on computing solutions and engineering practices for carbon credits.
+ oai:arXiv.org:2501.14750v2
+ cs.CY
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Qingwen Zeng, Hanlin Xu, Nanjun Xu, Zhenghao Zhao, Joakim Westerholm, Flora Salim, Junbin Gao, Huaming Chen
+
+
+ Doracamom: Joint 3D Detection and Occupancy Prediction with Multi-view 4D Radars and Cameras for Omnidirectional Perception
+ https://arxiv.org/abs/2501.15394
+ arXiv:2501.15394v3 Announce Type: replace
+Abstract: 3D object detection and occupancy prediction are critical tasks in autonomous driving, attracting significant attention. Despite the potential of recent vision-based methods, they encounter challenges under adverse conditions. Thus, integrating cameras with next-generation 4D imaging radar to achieve unified multi-task perception is highly significant, though research in this domain remains limited. In this paper, we propose Doracamom, the first framework that fuses multi-view cameras and 4D radar for joint 3D object detection and semantic occupancy prediction, enabling comprehensive environmental perception. Specifically, we introduce a novel Coarse Voxel Queries Generator that integrates geometric priors from 4D radar with semantic features from images to initialize voxel queries, establishing a robust foundation for subsequent Transformer-based refinement. To leverage temporal information, we design a Dual-Branch Temporal Encoder that processes multi-modal temporal features in parallel across BEV and voxel spaces, enabling comprehensive spatio-temporal representation learning. Furthermore, we propose a Cross-Modal BEV-Voxel Fusion module that adaptively fuses complementary features through attention mechanisms while employing auxiliary tasks to enhance feature quality. Extensive experiments on the OmniHD-Scenes, View-of-Delft (VoD), and TJ4DRadSet datasets demonstrate that Doracamom achieves state-of-the-art performance in both tasks, establishing new benchmarks for multi-modal 3D perception. Code and models will be publicly available.
+ oai:arXiv.org:2501.15394v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lianqing Zheng, Jianan Liu, Runwei Guan, Long Yang, Shouyi Lu, Yuanzhe Li, Xiaokai Bai, Jie Bai, Zhixiong Ma, Hui-Liang Shen, Xichan Zhu
+
+
+ FDLLM: A Dedicated Detector for Black-Box LLMs Fingerprinting
+ https://arxiv.org/abs/2501.16029
+ arXiv:2501.16029v4 Announce Type: replace
+Abstract: Large Language Models (LLMs) are rapidly transforming the landscape of digital content creation. However, the prevalent black-box Application Programming Interface (API) access to many LLMs introduces significant challenges in accountability, governance, and security. LLM fingerprinting, which aims to identify the source model by analyzing statistical and stylistic features of generated text, offers a potential solution. Current progress in this area is hindered by a lack of dedicated datasets and the need for efficient, practical methods that are robust against adversarial manipulations. To address these challenges, we introduce FD-Dataset, a comprehensive bilingual fingerprinting benchmark comprising 90,000 text samples from 20 widely used proprietary and open-source LLMs. Furthermore, we present FDLLM, a novel fingerprinting method that leverages parameter-efficient Low-Rank Adaptation (LoRA) to fine-tune a foundation model. This approach enables LoRA to extract deep, persistent features that characterize each source LLM. Through our analysis, we find that LoRA adaptation promotes the aggregation of outputs from the same LLM in representation space while enhancing the separation between different LLMs. This mechanism explains why LoRA proves particularly effective for LLM fingerprinting. Extensive empirical evaluations on FD-Dataset demonstrate FDLLM's superiority, achieving a Macro F1 score 22.1% higher than the strongest baseline. FDLLM also exhibits strong generalization to newly released models, achieving an average accuracy of 95% on unseen models. Notably, FDLLM remains consistently robust under various adversarial attacks, including polishing, translation, and synonym substitution. Experimental results show that FDLLM reduces the average attack success rate from 49.2% (LM-D) to 23.9%.
+ oai:arXiv.org:2501.16029v4
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhiyuan Fu, Junfan Chen, Lan Zhang, Ting Yang, Jun Niu, Hongyu Sun, Ruidong Li, Peng Liu, Jice Wang, Fannv He, Qiuling Yue, Yuqing Zhang
+
+
+ From #Dr00gtiktok to #harmreduction: Exploring Substance Use Hashtags on TikTok
+ https://arxiv.org/abs/2501.16123
+ arXiv:2501.16123v2 Announce Type: replace
+Abstract: TikTok has emerged as a major source of information and social interaction for youth, raising urgent questions about how substance use discourse manifests and circulates on the platform. This paper presents the first comprehensive analysis of publicly visible, algorithmically surfaced substance-related content on TikTok, drawing on hashtags spanning all major substance categories. Using a mixed-methods approach that combines social network analysis with qualitative content coding, we examined 2,333 substance-related hashtags, identifying 16 distinct hashtag communities and characterizing their structural and thematic relationships. Our network analysis reveals a highly interconnected small-world structure in which recovery-focused hashtags such as \textit{\#addiction}, \textit{\#recovery}, and \textit{\#sober} serve as central bridges between communities. Qualitative analysis of 351 representative videos shows that Recovery Advocacy content (33.9\%) and Satirical content (28.2\%) dominate, while direct substance depiction appears in only 26\% of videos, with active use shown in just 6.5\% of them. These findings suggest that the algorithmically surfaced layer of substance-related discourse on TikTok is predominantly oriented toward recovery, support, and coping rather than explicit promotion of substance use. We further show that hashtag communities and video content are closely aligned, indicating that substance-related discourse on TikTok is shaped through organic community formation within platform affordances rather than widespread adversarial evasion of moderation. This work contributes to social computing research by showing how algorithmic visibility on TikTok shapes the organization of substance-related discourse and the formation of recovery and support communities.
+ oai:arXiv.org:2501.16123v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Layla Bouzoubaa, Muqi Guo, Joseph Trybala, Afsaneh Razi, Rezvaneh Rezapour
+
+
+ HKAN: Hierarchical Kolmogorov-Arnold Network without Backpropagation
+ https://arxiv.org/abs/2501.18199
+ arXiv:2501.18199v2 Announce Type: replace
+Abstract: This paper introduces the Hierarchical Kolmogorov-Arnold Network (HKAN), a novel network architecture that offers a competitive alternative to the recently proposed Kolmogorov-Arnold Network (KAN). Unlike KAN, which relies on backpropagation, HKAN adopts a randomized learning approach, where the parameters of its basis functions are fixed, and linear aggregations are optimized using least-squares regression. HKAN utilizes a hierarchical multi-stacking framework, with each layer refining the predictions from the previous one by solving a series of linear regression problems. This non-iterative training method simplifies computation and eliminates sensitivity to local minima in the loss function. Empirical results show that HKAN delivers comparable, if not superior, accuracy and stability relative to KAN across various regression tasks, while also providing insights into variable importance. The proposed approach seamlessly integrates theoretical insights with practical applications, presenting a robust and efficient alternative for neural network modeling.
+ oai:arXiv.org:2501.18199v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Grzegorz Dudek, Tomasz Rodak
+
+
+ A Novel Approach to the Initial Value Problem with a complete validated algorithm
+ https://arxiv.org/abs/2502.00503
+ arXiv:2502.00503v4 Announce Type: replace
+Abstract: We consider the first order autonomous differential equation (ODE) ${\bf x}'={\bf f}({\bf x})$ where ${\bf f}: {\mathbb R}^n\to{\mathbb R}^n$ is locally Lipschitz. For ${\bf x}_0\in{\mathbb R}^n$ and $h>0$, the initial value problem (IVP) for $({\bf f},{\bf x}_0,h)$ is to determine if there is a unique solution, i.e., a function ${\bf x}:[0,h]\to{\mathbb R}^n$ that satisfies the ODE with ${\bf x}(0)={\bf x}_0$. Write ${\bf x} ={\tt IVP}_{\bf f}({\bf x}_0,h)$ for this unique solution.
+ We pose a corresponding computational problem, called the End Enclosure Problem: given $({\bf f},B_0,h,\varepsilon_0)$ where $B_0\subseteq{\mathbb R}^n$ is a box and $\varepsilon_0>0$, to compute a pair of non-empty boxes $(\underline{B}_0,B_1)$ such that $\underline{B}_0\subseteq B_0$, width of $B_1$ is $<\varepsilon_0$, and for all ${\bf x}_0\in \underline{B}_0$, ${\bf x}={\tt IVP}_{\bf f}({\bf x}_0,h)$ exists and ${\bf x}(h)\in B_1$. We provide a complete validated algorithm for this problem. Under the assumption (promise) that for all ${\bf x}_0\in B_0$, ${\tt IVP}_{\bf f}({\bf x}_0,h)$ exists, we prove the halting of our algorithm. This is the first halting algorithm for IVP problems in such a general setting.
+ We also introduce novel techniques for subroutines such as StepA and StepB, and a scaffold data structure to support our End Enclosure algorithm. Among the techniques are new ways to refine full- and end-enclosures based on a {\bf radical transform} combined with logarithmic norms. Our preliminary implementation and experiments show considerable promise, and compare well with current validated algorithms.
+ oai:arXiv.org:2502.00503v4
+ cs.SC
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bingwei Zhang, Chee Yap
+
+
+ Rethinking Residual Distribution in Locate-then-Edit Model Editing
+ https://arxiv.org/abs/2502.03748
+ arXiv:2502.03748v3 Announce Type: replace
+Abstract: Model editing enables targeted updates to the knowledge of large language models (LLMs) with minimal retraining. Among existing approaches, locate-then-edit methods constitute a prominent paradigm: they first identify critical layers, then compute residuals at the final critical layer based on the target edit, and finally apply least-squares-based multi-layer updates via $\textbf{residual distribution}$. While empirically effective, we identify a counterintuitive failure mode: residual distribution, a core mechanism in these methods, introduces weight shift errors that undermine editing precision. Through theoretical and empirical analysis, we show that such errors increase with the distribution distance, batch size, and edit sequence length, ultimately leading to inaccurate or suboptimal edits. To address this, we propose the $\textbf{B}$oundary $\textbf{L}$ayer $\textbf{U}$pdat$\textbf{E (BLUE)}$ strategy to enhance locate-then-edit methods. Sequential batch editing experiments on three LLMs and two datasets demonstrate that BLUE not only delivers an average performance improvement of 35.59\%, significantly advancing the state of the art in model editing, but also enhances the preservation of LLMs' general capabilities. Our code is available at https://github.com/xpq-tech/BLUE.
+ oai:arXiv.org:2502.03748v3
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaopeng Li, Shanwen Wang, Shasha Li, Shezheng Song, Bin Ji, Jun Ma, Jie Yu
+
+
+ Jingfang: An LLM-Based Multi-Agent System for Precise Medical Consultation and Syndrome Differentiation in Traditional Chinese Medicine
+ https://arxiv.org/abs/2502.04345
+ arXiv:2502.04345v3 Announce Type: replace
+Abstract: The practice of Traditional Chinese Medicine (TCM) requires profound expertise and extensive clinical experience. While Large Language Models (LLMs) offer significant potential in this domain, current TCM-oriented LLMs suffer two critical limitations: (1) a rigid consultation framework that fails to conduct comprehensive and patient-tailored interactions, often resulting in diagnostic inaccuracies; and (2) treatment recommendations generated without rigorous syndrome differentiation, which deviates from the core diagnostic and therapeutic principles of TCM. To address these issues, we develop \textbf{JingFang (JF)}, an advanced LLM-based multi-agent system for TCM that facilitates the implementation of AI-assisted TCM diagnosis and treatment. JF integrates various TCM Specialist Agents in accordance with authentic diagnostic and therapeutic scenarios of TCM, enabling personalized medical consultations, accurate syndrome differentiation and treatment recommendations. A \textbf{Multi-Agent Collaborative Consultation Mechanism (MACCM)} for TCM is constructed, where multiple Agents collaborate to emulate real-world TCM diagnostic workflows, enhancing the diagnostic ability of base LLMs to provide accurate and patient-tailored medical consultation. Moreover, we introduce a dedicated \textbf{Syndrome Differentiation Agent} fine-tuned on a preprocessed dataset, along with a designed \textbf{Dual-Stage Recovery Scheme (DSRS)} within the Treatment Agent, which together substantially improve the model's accuracy of syndrome differentiation and treatment. Comprehensive evaluations and experiments demonstrate JF's superior performance in medical consultation, and also show improvements of at least 124% and 21.1% in the precision of syndrome differentiation compared to existing TCM models and State of the Art (SOTA) LLMs, respectively.
+ oai:arXiv.org:2502.04345v3
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yehan Yang, Tianhao Ma, Ruotai Li, Xinhan Zheng, Guodong Shan
+
+
+ Optimistic Gradient Learning with Hessian Corrections for High-Dimensional Black-Box Optimization
+ https://arxiv.org/abs/2502.04829
+ arXiv:2502.04829v2 Announce Type: replace
+Abstract: Black-box algorithms are designed to optimize functions without relying on their underlying analytical structure or gradient information, making them essential when gradients are inaccessible or difficult to compute. Traditional methods for solving black-box optimization (BBO) problems predominantly rely on non-parametric models and struggle to scale to large input spaces. Conversely, parametric methods that model the function with neural estimators and obtain gradient signals via backpropagation may suffer from significant gradient errors. A recent alternative, Explicit Gradient Learning (EGL), which directly learns the gradient using a first-order Taylor approximation, has demonstrated superior performance over both parametric and non-parametric methods. In this work, we propose two novel gradient learning variants to address the robustness challenges posed by high-dimensional, complex, and highly non-linear problems. Optimistic Gradient Learning (OGL) introduces a bias toward lower regions in the function landscape, while Higher-order Gradient Learning (HGL) incorporates second-order Taylor corrections to improve gradient accuracy. We combine these approaches into the unified OHGL algorithm, achieving state-of-the-art (SOTA) performance on the synthetic COCO suite. Additionally, we demonstrate OHGL's applicability to high-dimensional real-world machine learning (ML) tasks such as adversarial training and code generation. Our results highlight OHGL's ability to generate stronger candidates, offering a valuable tool for ML researchers and practitioners tackling high-dimensional, non-linear optimization challenges.
+ oai:arXiv.org:2502.04829v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ AAAI2026
+ Yedidya Kfir, Elad Sarafian, Sarit Kraus, Yoram Louzoun
+
+
+ Who is Helping Whom? Analyzing Inter-dependencies to Evaluate Cooperation in Human-AI Teaming
+ https://arxiv.org/abs/2502.06976
+ arXiv:2502.06976v3 Announce Type: replace
+Abstract: State-of-the-art methods for Human-AI Teaming and Zero-shot Cooperation focus on task completion, i.e., task rewards, as the sole evaluation metric while being agnostic to how the two agents work with each other. Furthermore, subjective user studies only offer limited insight into the quality of cooperation existing within the team. Specifically, we are interested in understanding the cooperative behaviors arising within the team when trained agents are paired with humans -- a problem that has been overlooked by the existing literature. To formally address this problem, we propose the concept of constructive interdependence -- measuring how much agents rely on each other's actions to achieve the shared goal -- as a key metric for evaluating cooperation in human-agent teams. We interpret interdependence in terms of action interactions in a STRIPS formalism, and define metrics that allow us to assess the degree of reliance between the agents' actions. We pair state-of-the-art HAT agents with learned human models as well as human participants in a user study for the popular Overcooked domain, and evaluate the task reward and teaming performance for these human-agent teams. Our results demonstrate that although trained agents attain high task rewards, they fail to induce cooperative behavior, showing very low levels of interdependence across teams. Furthermore, our analysis reveals that teaming performance is not necessarily correlated with task reward, highlighting that task reward alone cannot reliably measure cooperation arising in a team.
+ oai:arXiv.org:2502.06976v3
+ cs.MA
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Upasana Biswas, Vardhan Palod, Siddhant Bhambri, Subbarao Kambhampati
+
+
+ DiffRatio: Training One-Step Diffusion Models Without Teacher Supervision
+ https://arxiv.org/abs/2502.08005
+ arXiv:2502.08005v4 Announce Type: replace
+Abstract: Score-based distillation methods (e.g., variational score distillation) train one-step diffusion models by first pre-training a teacher score model and then distilling it into a one-step student model. However, the gradient estimator in the distillation stage usually suffers from two sources of bias: (1) biased teacher supervision due to score estimation error incurred during pre-training, and (2) the student model's score estimation error during distillation. These biases can degrade the quality of the resulting one-step diffusion model. To address this, we propose DiffRatio, a new framework for training one-step diffusion models: instead of estimating the teacher and student scores independently and then taking their difference, we directly estimate the score difference as the gradient of a learned log density ratio between the student and data distributions across diffusion time steps. This approach greatly simplifies the training pipeline, significantly reduces gradient estimation bias, and improves one-step generation quality. Additionally, it also reduces auxiliary network size by using a lightweight density-ratio network instead of two full score networks, which improves computational and memory efficiency. DiffRatio achieves competitive one-step generation results on CIFAR-10 and ImageNet (64x64 and 512x512), outperforming most teacher-supervised distillation approaches.
+ oai:arXiv.org:2502.08005v4
+ cs.LG
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wenlin Chen, Mingtian Zhang, Jiajun He, Zijing Ou, Jos\'e Miguel Hern\'andez-Lobato, Bernhard Sch\"olkopf, David Barber
+
+
+ FBFL: A Field-Based Coordination Approach for Data Heterogeneity in Federated Learning
+ https://arxiv.org/abs/2502.08577
+ arXiv:2502.08577v4 Announce Type: replace
+Abstract: In recent years, federated learning (FL) has become a popular solution to train machine learning models in domains with high privacy concerns. However, FL scalability and performance face significant challenges in real-world deployments where data across devices are non-independently and identically distributed (non-IID). The heterogeneity in data distribution frequently arises from spatial distribution of devices, leading to degraded model performance in the absence of proper handling. Additionally, FL's typical reliance on centralized architectures introduces bottlenecks and single-point-of-failure risks, particularly problematic at scale or in dynamic environments. To close this gap, we propose Field-Based Federated Learning (FBFL), a novel approach leveraging macroprogramming and field coordination to address these limitations through: (i) distributed spatial-based leader election for personalization to mitigate non-IID data challenges; and (ii) construction of a self-organizing, hierarchical architecture using advanced macroprogramming patterns. Moreover, FBFL not only overcomes the aforementioned limitations, but also enables the development of more specialized models tailored to the specific data distribution in each subregion. This paper formalizes FBFL and evaluates it extensively using MNIST, FashionMNIST, and Extended MNIST datasets. We demonstrate that, when operating under IID data conditions, FBFL performs comparably to the widely-used FedAvg algorithm. Furthermore, in challenging non-IID scenarios, FBFL not only outperforms FedAvg but also surpasses other state-of-the-art methods, namely FedProx and Scaffold, which have been specifically designed to address non-IID data distributions. Additionally, we showcase the resilience of FBFL's self-organizing hierarchical architecture against server failures.
+ oai:arXiv.org:2502.08577v4
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Davide Domini, Gianluca Aguzzi, Lukas Esterle, Mirko Viroli
+
+
+ A Taxonomy of Real Faults in Hybrid Quantum-Classical Architectures
+ https://arxiv.org/abs/2502.08739
+ arXiv:2502.08739v3 Announce Type: replace
+Abstract: With the popularity of Hybrid Quantum-Classical architectures, particularly noisy intermediate-scale quantum (NISQ) architectures, comes the need for quality assurance methods tailored to their specific faults. In this study, we propose a taxonomy of faults in Hybrid Quantum-Classical architectures accompanied by a dataset of real faults in the identified categories. To achieve this, we empirically analysed open-source repositories for fixed faults. We analysed over 5000 closed issues on GitHub and pre-selected 529 of them based on rigorously defined inclusion criteria. We selected 133 faults that we labelled around symptoms and the origin of the faults. We cross-validated the classification and labels assigned to every fault between two of the authors. As a result, we introduced a taxonomy of real faults in Hybrid Quantum-Classical architectures. Subsequently, we validated the taxonomy through interviews conducted with eleven developers. The taxonomy was dynamically updated throughout the cross-validation and interview processes. The final version was validated and discussed through surveys conducted with an independent group of domain experts to ensure its relevance and to gain further insights.
+ oai:arXiv.org:2502.08739v3
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3788677
+ Avner Bensoussan, Gunel Jahangirova, Mohammad Reza Mousavi
+
+
+ Object-Centric Latent Action Learning
+ https://arxiv.org/abs/2502.09680
+ arXiv:2502.09680v3 Announce Type: replace
+Abstract: Leveraging vast amounts of unlabeled internet video data for embodied AI is currently bottlenecked by the lack of action labels and the presence of action-correlated visual distractors. Although recent latent action policy optimization (LAPO) has shown promise in inferring proxy action labels from visual observations, its performance degrades significantly when distractors are present. To address this limitation, we propose a novel object-centric latent action learning framework that centers on objects rather than pixels. We leverage self-supervised object-centric pretraining to disentangle the movement of the agent and distracting background dynamics. This allows LAPO to focus on task-relevant interactions, resulting in more robust proxy-action labels, enabling better imitation learning and efficient adaptation of the agent with just a few action-labeled trajectories. We evaluated our method in eight visually complex tasks across the Distracting Control Suite (DCS) and Distracting MetaWorld (DMW). Our results show that object-centric pretraining mitigates the negative effects of distractors by 50%, as measured by downstream task performance: average return (DCS) and success rate (DMW).
+ oai:arXiv.org:2502.09680v3
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Albina Klepach, Alexander Nikulin, Ilya Zisman, Denis Tarasov, Alexander Derevyagin, Andrei Polubarov, Nikita Lyubaykin, Igor Kiselev, Vladislav Kurenkov
+
+
+ A Program Logic for Under-approximating Worst-case Resource Usage
+ https://arxiv.org/abs/2502.11091
+ arXiv:2502.11091v2 Announce Type: replace
+Abstract: Understanding and predicting the worst-case resource usage is crucial for software quality; however, existing methods either over-approximate with potentially loose bounds or under-approximate without asymptotic guarantees. This paper presents a program logic to under-approximate worst-case resource usage, adapting incorrectness logic (IL) to reason quantitatively about resource consumption. We propose quantitative forward and backward under-approximate (QFUA and QBUA) triples, which generalize IL to identify execution paths leading to high resource usage. We also introduce a variant of QBUA that supports reasoning about high-water marks. Our logic is proven sound and complete with respect to a simple IMP-like language, and all meta-theoretical results are mechanized and verified in Rocq. We implement a prototype checker for all three variants of our logic and demonstrate its utility through a few examples and four case studies.
+ oai:arXiv.org:2502.11091v2
+ cs.LO
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ziyue Jin, Di Wang
+
+
+ What Scalable Second-Order Information Knows for Pruning at Initialization
+ https://arxiv.org/abs/2502.11450
+ arXiv:2502.11450v2 Announce Type: replace
+Abstract: Pruning remains an effective strategy for reducing both the costs and environmental impact associated with deploying large neural networks (NNs) while maintaining performance. Classical methods, such as OBD (LeCun et al., 1989) and OBS (Hassibi et al., 1992), demonstrate that utilizing curvature information can significantly enhance the balance between network complexity and performance. However, the computation and storage of the Hessian matrix make it impractical for modern NNs, motivating the use of approximations. Recent research (Gur et al., 2018; Karakida et al., 2019) suggests that the top eigenvalues guide optimization in a small subspace, are identifiable early, and remain consistent during training. Motivated by these findings, we revisit pruning at initialization (PaI) to evaluate scalable, unbiased second-order approximations, such as the Empirical Fisher and Hutchinson diagonals. Our experiments show that these methods capture sufficient curvature information to improve the identification of critical parameters compared to first-order baselines, while maintaining linear complexity. Additionally, we empirically demonstrate that updating batch normalization statistics as a warmup phase improves the performance of data-dependent criteria and mitigates the issue of layer collapse. Notably, Hutchinson-based criteria consistently outperformed or matched existing PaI algorithms across various models (including VGG, ResNet, and ViT) and datasets (such as CIFAR-10/100, TinyImageNet, and ImageNet). Our findings suggest that scalable second-order approximations strike an effective balance between computational efficiency and accuracy, making them a valuable addition to the pruning toolkit. We make our code available.
+ oai:arXiv.org:2502.11450v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ivo Gollini Navarrete, Nicol\'as Mauricio Cuadrado \'Avila, Martin Tak\'a\v{c}, Samuel Horv\'ath
+
+
+ Generative Personality Simulation via Theory-Informed Structured Interview
+ https://arxiv.org/abs/2502.12109
+ arXiv:2502.12109v2 Announce Type: replace
+Abstract: Despite their potential as human proxies, LLMs often fail to generate heterogeneous data with human-like diversity, thereby diminishing their value in advancing social science research. To address this gap, we propose a novel method to incorporate psychological insights into LLM simulation through the Personality Structured Interview (PSI). PSI leverages psychometric scale-development procedures to capture personality-related linguistic information from a formal psychological perspective. To systematically evaluate simulation fidelity, we developed a measurement theory grounded evaluation procedure that considers the latent construct nature of personality and evaluates its reliability, structural validity, and external validity. Results from three experiments demonstrate that PSI effectively improves human-like heterogeneity in LLM-simulated personality data and predicts personality-related behavioral outcomes. We further offer a theoretical framework for designing theory-informed structured interviews to enhance the reliability and effectiveness of LLMs in simulating human-like data for broader psychometric research.
+ oai:arXiv.org:2502.12109v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Pengda Wang, Huiqi Zou, Han Jiang, Hanjie Chen, Tianjun Sun, Xiaoyuan Yi, Ziang Xiao, Frederick L. Oswald
+
+
+ Universal Embedding Function for Traffic Classification via QUIC Domain Recognition Pretraining: A Transfer Learning Success
+ https://arxiv.org/abs/2502.12930
+ arXiv:2502.12930v2 Announce Type: replace
+Abstract: Encrypted traffic classification (TC) methods must adapt to new protocols and extensions as well as to advancements in other machine learning fields. In this paper, we adopt a transfer learning setup best known from computer vision. We first pretrain an embedding model on a complex task with a large number of classes and then transfer it to seven established TC datasets. The pretraining task is recognition of SNI domains in encrypted QUIC traffic, which in itself is a challenge for network monitoring due to the growing adoption of TLS Encrypted Client Hello. Our training pipeline -- featuring a disjoint class setup, ArcFace loss function, and a modern deep learning architecture -- aims to produce universal embeddings applicable across tasks. A transfer method based on model fine-tuning surpassed SOTA performance on nine of ten downstream TC tasks, with an average improvement of 6.4%. Furthermore, a comparison with a baseline method using raw packet sequences revealed unexpected findings with potential implications for the broader TC field. We released the model architecture, trained weights, and codebase for transfer learning experiments.
+ oai:arXiv.org:2502.12930v2
+ cs.LG
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/TNSM.2025.3642984
+ IEEE Transactions on Network and Service Management, vol. 23, pp. 1647-1663, 2026
+ Jan Luxemburk, Karel Hynek, Richard Pln\'y, Tom\'a\v{s} \v{C}ejka
+
+
+ A Survey of Fuzzing Open-Source Operating Systems
+ https://arxiv.org/abs/2502.13163
+ arXiv:2502.13163v3 Announce Type: replace
+Abstract: Vulnerabilities in open-source operating systems (OSs) pose substantial security risks to software systems, making their detection crucial. While fuzzing has been an effective vulnerability detection technique in various domains, OS fuzzing (OSF) faces unique challenges due to OS complexity and multi-layered interaction, and has not been comprehensively reviewed. Therefore, this work systematically surveys the state-of-the-art OSF techniques, categorizes them based on the general fuzzing process, and investigates challenges specific to kernel, file system, driver, and hypervisor fuzzing. Finally, future research directions for OSF are discussed.
+ oai:arXiv.org:2502.13163v3
+ cs.OS
+ cs.CR
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kun Hu, Qicai Chen, Wenzhuo Zhang, Zilong Lu, Bihuan Chen, You Lu, Haowen Jiang, Bingkun Sun, Xin Peng, Wenyun Zhao
+
+
+ Quasi Zigzag Persistence: A Topological Framework for Analyzing Time-Varying Data
+ https://arxiv.org/abs/2502.16049
+ arXiv:2502.16049v3 Announce Type: replace
+Abstract: In this paper, we propose Quasi Zigzag Persistent Homology (QZPH) as a framework for analyzing time-varying data by integrating multiparameter persistence and zigzag persistence. To this end, we introduce a stable topological invariant that captures both static and dynamic features at different scales. We present an algorithm to compute this invariant efficiently. We show that it enhances the machine learning models when applied to tasks such as sleep-stage detection, demonstrating its effectiveness in capturing the evolving patterns in time-varying datasets.
+ oai:arXiv.org:2502.16049v3
+ cs.LG
+ math.AT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Tamal K. Dey, Shreyas N. Samaga
+
+
+ Worse than Zero-shot? A Fact-Checking Dataset for Evaluating the Robustness of RAG Against Misleading Retrievals
+ https://arxiv.org/abs/2502.16101
+ arXiv:2502.16101v5 Announce Type: replace
+Abstract: Retrieval-augmented generation (RAG) has shown impressive capabilities in mitigating hallucinations in large language models (LLMs). However, LLMs struggle to maintain consistent reasoning when exposed to misleading or conflicting evidence, especially in real-world domains such as politics, where information is polarized or selectively framed. Mainstream RAG benchmarks evaluate models under clean retrieval settings, where systems generate answers from gold-standard documents, or under synthetically perturbed settings, where documents are artificially injected with noise. These assumptions fail to reflect real-world conditions, often leading to an overestimation of RAG system performance. To address this gap, we introduce RAGuard, the first benchmark to evaluate the robustness of RAG systems against misleading retrievals. Unlike prior benchmarks that rely on synthetic noise, our fact-checking dataset captures naturally occurring misinformation by constructing its retrieval corpus from Reddit discussions. It categorizes retrieved evidence into three types: supporting, misleading, and unrelated, providing a realistic and challenging testbed for assessing how well RAG systems navigate different types of evidence. Our experiments reveal that, when exposed to potentially misleading retrievals, all tested LLM-powered RAG systems perform worse than their zero-shot baselines (i.e., no retrieval at all), while human annotators consistently perform better, highlighting LLMs' susceptibility to noisy environments. To our knowledge, RAGuard is the first benchmark to systematically assess the robustness of the RAG against misleading evidence. We expect this benchmark to drive future research toward improving RAG systems beyond idealized datasets, making them more reliable for real-world applications. The dataset is available at https://huggingface.co/datasets/UCSC-IRKM/RAGuard.
+ oai:arXiv.org:2502.16101v5
+ cs.AI
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Linda Zeng, Rithwik Gupta, Divij Motwani, Yi Zhang, Diji Yang
+
+
+ Comparing the Framing Effect in Humans and LLMs on Naturally Occurring Texts
+ https://arxiv.org/abs/2502.17091
+ arXiv:2502.17091v2 Announce Type: replace
+Abstract: Humans are influenced by how information is presented, a phenomenon known as the framing effect. Prior work suggests that LLMs may also be susceptible to framing, but it has relied on synthetic data and did not compare to human behavior. To address this gap, we introduce WildFrame - a dataset for evaluating LLM responses to positive and negative framing in naturally-occurring sentences, alongside human responses on the same data. WildFrame consists of 1,000 real-world texts selected to convey a clear sentiment; we then reframe each text in either a positive or negative light and collect human sentiment annotations. Evaluating eleven LLMs on WildFrame, we find that all models respond to reframing in a human-like manner ($r\geq0.52$), and that both humans and models are influenced more by positive than negative reframing. Notably, GPT models are the least correlated with human behavior among all tested models. These findings raise a discussion around the goals of state-of-the-art LLM development and whether models should align closely with human behavior, to preserve cognitive phenomena such as the framing effect, or instead mitigate such biases in favor of fairness and consistency.
+ oai:arXiv.org:2502.17091v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Gili Lior, Liron Nacchace, Gabriel Stanovsky
+
+
+ A Jacobian-free Newton-Krylov method for cell-centred finite volume solid mechanics
+ https://arxiv.org/abs/2502.17217
+ arXiv:2502.17217v2 Announce Type: replace
+Abstract: This study investigates the efficacy of Jacobian-free Newton-Krylov methods in finite-volume solid mechanics. Traditional Newton-based approaches require explicit Jacobian matrix formation and storage, which can be computationally expensive and memory-intensive. In contrast, Jacobian-free Newton-Krylov methods approximate the Jacobian's action using finite differences, combined with Krylov subspace solvers such as the generalised minimal residual method (GMRES), enabling seamless integration into existing segregated finite-volume frameworks without major code refactoring. This work proposes and benchmarks the performance of a compact-stencil Jacobian-free Newton-Krylov method against a conventional segregated approach on a suite of test cases, encompassing varying geometric dimensions, nonlinearities, dynamic responses, and material behaviours. Key metrics, including computational cost, memory efficiency, and robustness, are evaluated, along with the influence of preconditioning strategies and stabilisation scaling. Results show that the proposed Jacobian-free Newton-Krylov method outperforms the segregated approach in all linear and nonlinear elastic cases, achieving order-of-magnitude speedups in many instances; however, divergence is observed in elastoplastic cases, highlighting areas for further development. It is found that preconditioning choice impacts performance: an LU direct solver is fastest in small to moderately-sized cases, while a multigrid method is more effective for larger problems. The findings demonstrate that Jacobian-free Newton-Krylov methods are promising for advancing finite-volume solid mechanics simulations, particularly for existing segregated frameworks where minimal modifications enable their adoption. The described implementations are available in the solids4foam toolbox for OpenFOAM, inviting the community to explore, extend, and compare these procedures.
+ oai:arXiv.org:2502.17217v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Philip Cardiff, Dylan Armfield, \v{Z}eljko Tukovi\'c, Ivan Batisti\'c
+
+
+ LeanProgress: Guiding Search for Neural Theorem Proving via Proof Progress Prediction
+ https://arxiv.org/abs/2502.17925
+ arXiv:2502.17925v3 Announce Type: replace
+Abstract: Mathematical reasoning remains a significant challenge for Large Language Models (LLMs) due to hallucinations. When combined with formal proof assistants like Lean, these hallucinations can be eliminated through rigorous verification, making theorem proving reliable. However, even with formal verification, LLMs still struggle with long proofs and complex mathematical formalizations. While Lean with LLMs offers valuable assistance with retrieving lemmas, generating tactics, or even complete proofs, it lacks a crucial capability: providing a sense of proof progress. This limitation particularly impacts the overall development efficiency in large formalization projects. We introduce LeanProgress, a method that predicts proof progress, i.e., how many steps remain to complete a proof. Training and evaluating our models on a large corpus of Lean proofs from Lean Workbook Plus and Mathlib4, we employ data preprocessing and balancing techniques to handle the skewed distribution of proof lengths. Our experiments show that LeanProgress achieves an overall prediction accuracy of 75.8% in predicting the amount of progress and, hence, the remaining number of steps. When integrated into a best-first search framework using Reprover, our method shows a 3.8% improvement on Mathlib4 compared to a baseline performance of 41.4%, particularly for longer proofs. These results demonstrate how proof progress prediction can enhance both automated and interactive theorem proving, enabling users to make more informed decisions about proof strategies. Our code is merged into the library at https://github.com/lean-dojo/LeanDojo-v2.
+ oai:arXiv.org:2502.17925v3
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Robert Joseph George, Suozhi Huang, Peiyang Song, Anima Anandkumar
+
+
+ Scalable Equilibrium Sampling with Sequential Boltzmann Generators
+ https://arxiv.org/abs/2502.18462
+ arXiv:2502.18462v3 Announce Type: replace
+Abstract: Scalable sampling of molecular states in thermodynamic equilibrium is a long-standing challenge in statistical physics. Boltzmann generators tackle this problem by pairing normalizing flows with importance sampling to obtain uncorrelated samples under the target distribution. In this paper, we extend the Boltzmann generator framework with two key contributions, denoting our framework Sequential Boltzmann generators (SBG). The first is a highly efficient Transformer-based normalizing flow operating directly on all-atom Cartesian coordinates. In contrast to the equivariant continuous flows of prior methods, we leverage exactly invertible non-equivariant architectures which are highly efficient during both sample generation and likelihood evaluation. This efficiency unlocks more sophisticated inference strategies beyond standard importance sampling. In particular, we perform inference-time scaling of flow samples using a continuous-time variant of sequential Monte Carlo, in which flow samples are transported towards the target distribution with annealed Langevin dynamics. SBG achieves state-of-the-art performance w.r.t. all metrics on peptide systems, demonstrating the first equilibrium sampling in Cartesian coordinates of tri-, tetra- and hexa-peptides that were thus far intractable for prior Boltzmann generators.
+ oai:arXiv.org:2502.18462v3
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Charlie B. Tan, Avishek Joey Bose, Chen Lin, Leon Klein, Michael M. Bronstein, Alexander Tong
+
+
+ Simple Self Organizing Map with Visual Transformer
+ https://arxiv.org/abs/2503.04121
+ arXiv:2503.04121v3 Announce Type: replace
+Abstract: Vision Transformers (ViTs) have demonstrated exceptional performance in various vision tasks. However, they tend to underperform on smaller datasets due to their inherent lack of inductive biases. Current approaches address this limitation implicitly, often by pairing ViTs with pretext tasks or by distilling knowledge from convolutional neural networks (CNNs) to strengthen the prior. In contrast, Self-Organizing Maps (SOMs), a widely adopted self-supervised framework, are inherently structured to preserve topology and spatial organization, making them a promising candidate to directly address the limitations of ViTs on limited or small training datasets. Despite this potential, equipping SOMs with modern deep learning architectures remains largely unexplored. In this study, we conduct a novel exploration of how Vision Transformers (ViTs) and Self-Organizing Maps (SOMs) can empower each other, aiming to bridge this critical research gap. Our findings demonstrate that these architectures can synergistically enhance each other, leading to significantly improved performance in both unsupervised and supervised tasks. Code is publicly available on GitHub.
+ oai:arXiv.org:2503.04121v3
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/LSP.2025.3643388
+ IEEE Signal Processing Letters, 2025, pp. 1-5
+ Alan Luo, Kaiwen Yuan
+
+
+ FMASH: Advancing Traditional Chinese Medicine Formula Recommendation with Efficient Fusion of Multiscale Associations of Symptoms and Herbs
+ https://arxiv.org/abs/2503.05167
+ arXiv:2503.05167v2 Announce Type: replace
+Abstract: Traditional Chinese medicine (TCM) exhibits remarkable therapeutic efficacy in disease treatment and healthcare through patient-specific formulas. However, current AI-based TCM formula recommendation models and methods mainly focus on data-based textual associations between symptoms and herbs, and have not fully utilized their features and relations at different scales, especially at the molecular scale. To address these limitations, we propose the Fusion of Multiscale Associations of Symptoms and Herbs (FMASH), a novel framework that effectively combines molecular-scale features and macroscopic properties of herbs with clinical symptoms, and provides a refined representation of their multiscale associations, enhancing the effectiveness of TCM formula recommendation. This framework can integrate molecular-scale chemical features and macroscopic properties of herbs, and capture complex local and global relations in the heterogeneous graph of symptoms and herbs, providing an effective embedding representation of their multiscale features and associations in a unified semantic space. Based on the refined feature representation, the framework is not only compatible with both the traditional unordered formula recommendation task and the ordered herb sequence generation task, but also improves the model's performance in both tasks. Comprehensive evaluations demonstrate FMASH's superior performance on TCM formula recommendation over the state-of-the-art (SOTA) baseline, achieving relative improvements of 9.45% in Precision@5, 12.11% in Recall@5, and 11.01% in F1@5 compared to the SOTA model on benchmark datasets. This work facilitates the practical application of AI-based TCM formula recommendation systems.
+ oai:arXiv.org:2503.05167v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xinhan Zheng, Huyu Wu, Ruotai Li, Haopeng Jin, Xueting Wang, Yehan Yang, Guodong Shan
+
+
+ REF-VLM: Triplet-Based Referring Paradigm for Unified Visual Decoding
+ https://arxiv.org/abs/2503.07413
+ arXiv:2503.07413v2 Announce Type: replace
+Abstract: Multimodal Large Language Models (MLLMs) demonstrate robust zero-shot capabilities across diverse vision-language tasks after training on mega-scale datasets. However, dense prediction tasks, such as semantic segmentation and keypoint detection, pose significant challenges for MLLMs when represented solely as text outputs. Simultaneously, current MLLMs utilizing latent embeddings for visual task decoding generally demonstrate limited adaptability to both multi-task learning and multi-granularity scenarios. In this work, we present \textbf{REF-VLM}, an end-to-end framework for unified training of various visual decoding tasks. To address complex visual decoding scenarios, we introduce the \textbf{Triplet-Based Referring Paradigm (TRP)}, which explicitly decouples three critical dimensions in visual decoding tasks through a triplet structure: concepts, decoding types, and targets. TRP employs symbolic delimiters to enforce structured representation learning, enhancing the parsability and interpretability of model outputs. Additionally, we construct \textbf{Visual-Task Instruction Following Dataset (VT-Instruct)}, a large-scale multi-task dataset containing over 100 million multimodal dialogue samples across 25 task types. Beyond text inputs and outputs, VT-Instruct incorporates various visual prompts such as point, box, scribble, and mask, and generates outputs composed of text and visual units like box, keypoint, depth and mask. The combination of different visual prompts and visual units generates a wide variety of task types, expanding the applicability of REF-VLM significantly. Both qualitative and quantitative experiments demonstrate that our REF-VLM outperforms other MLLMs across a variety of standard benchmarks. The code, dataset, and demo will be publicly available.
+ oai:arXiv.org:2503.07413v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yan Tai, Luhao Zhu, Yunan Ding, Yiying Dong, Guangtao Zhai, Xiaohong Liu, Guodong Guo
+
+
+ UVE: Are MLLMs Unified Evaluators for AI-Generated Videos?
+ https://arxiv.org/abs/2503.09949
+ arXiv:2503.09949v4 Announce Type: replace
+Abstract: With the rapid growth of video generative models (VGMs), it is essential to develop reliable and comprehensive automatic metrics for AI-generated videos (AIGVs). Existing methods either use off-the-shelf models optimized for other tasks or rely on human assessment data to train specialized evaluators. These approaches are constrained to specific evaluation aspects and are difficult to scale with the increasing demands for finer-grained and more comprehensive evaluations. To address this issue, this work investigates the feasibility of using multimodal large language models (MLLMs) as a unified evaluator for AIGVs, leveraging their strong visual perception and language understanding capabilities. To evaluate the performance of automatic metrics in unified AIGV evaluation, we introduce a benchmark called UVE-Bench. UVE-Bench collects videos generated by state-of-the-art VGMs and provides pairwise human preference annotations across 15 evaluation aspects. Using UVE-Bench, we extensively evaluate 18 MLLMs. Our empirical results suggest that while advanced MLLMs (e.g., Qwen2VL-72B and InternVL2.5-78B) still lag behind human evaluators, they demonstrate promising ability in unified AIGV evaluation, significantly surpassing existing specialized evaluation methods. Additionally, we conduct an in-depth analysis of key design choices that impact the performance of MLLM-driven evaluators, offering valuable insights for future research on AIGV evaluation.
+ oai:arXiv.org:2503.09949v4
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Yuanxin Liu, Rui Zhu, Shuhuai Ren, Jiacong Wang, Haoyuan Guo, Xu Sun, Lu Jiang
+
+
+ Dual-Domain Fusion for Semi-Supervised Learning
+ https://arxiv.org/abs/2503.11824
+ arXiv:2503.11824v2 Announce Type: replace
+Abstract: Labeled time-series data is often expensive and difficult to obtain, making it challenging to train accurate machine learning models for real-world applications such as anomaly detection or fault diagnosis. The scarcity of labeled samples limits model generalization and leaves valuable unlabeled data underutilized. We propose Dual-Domain Fusion (DDF), a new model-agnostic semi-supervised learning (SSL) framework applicable to any time-series signal. DDF performs dual-domain training by combining the one-dimensional time-domain signals with their two-dimensional time-frequency representations and fusing them to maximize learning performance. Its tri-model architecture consists of time-domain, time-frequency, and fusion components, enabling the model to exploit complementary information across domains during training. To support practical deployment, DDF maintains the same inference cost as standard time-domain models by discarding the time-frequency and fusion branches at test time. Experimental results on two public fault diagnosis datasets demonstrate substantial accuracy improvements of 8-46% over widely used SSL methods FixMatch, MixMatch, Mean Teacher, Adversarial Training, and Self-training. These results show that DDF provides an effective and generalizable strategy for semi-supervised time-series classification.
+ oai:arXiv.org:2503.11824v2
+ cs.LG
+ cs.AI
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Tuomas Jalonen, Mohammad Al-Sa'd, Serkan Kiranyaz, Moncef Gabbouj
+
+
+ A Text-to-3D Framework for Joint Generation of CG-Ready Humans and Compatible Garments
+ https://arxiv.org/abs/2503.12052
+ arXiv:2503.12052v3 Announce Type: replace
+Abstract: Creating detailed 3D human avatars with fitted garments traditionally requires specialized expertise and labor-intensive workflows. While recent advances in generative AI have enabled text-to-3D human and clothing synthesis, existing methods fall short in offering accessible, integrated pipelines for generating CG-ready 3D avatars with physically compatible outfits; here we use the term CG-ready for models following a technical aesthetic common in computer graphics (CG) and adopt standard CG polygonal meshes and strands representations (rather than neural representations like NeRF and 3DGS) that can be directly integrated into conventional CG pipelines and support downstream tasks such as physical simulation. To bridge this gap, we introduce Tailor, an integrated text-to-3D framework that generates high-fidelity, customizable 3D avatars dressed in simulation-ready garments. Tailor consists of three stages. (1) Semantic Parsing: we employ a large language model to interpret textual descriptions and translate them into parameterized human avatars and semantically matched garment templates. (2) Geometry-Aware Garment Generation: we propose topology-preserving deformation with novel geometric losses to generate body-aligned garments under text control. (3) Consistent Texture Synthesis: we propose a novel multi-view diffusion process optimized for garment texturing, which enforces view consistency, preserves photorealistic details, and optionally supports symmetric texture generation common in garments. Through comprehensive quantitative and qualitative evaluations, we demonstrate that Tailor outperforms state-of-the-art methods in fidelity, usability, and diversity. Our code will be released for academic use. Project page: https://human-tailor.github.io
+ oai:arXiv.org:2503.12052v3
+ cs.CV
+ cs.GR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhiyao Sun, Yu-Hui Wen, Ho-Jui Fang, Sheng Ye, Matthieu Lin, Tian Lv, Yong-Jin Liu
+
+
+ EmoBipedNav: Emotion-aware Social Navigation for Bipedal Robots with Deep Reinforcement Learning
+ https://arxiv.org/abs/2503.12538
+ arXiv:2503.12538v2 Announce Type: replace
+Abstract: This study presents an emotion-aware navigation framework -- EmoBipedNav -- using deep reinforcement learning (DRL) for bipedal robots walking in socially interactive environments. The inherent locomotion constraints of bipedal robots challenge their safe maneuvering capabilities in dynamic environments. When combined with the intricacies of social environments, including pedestrian interactions and social cues, such as emotions, these challenges become even more pronounced. To address these coupled problems, we propose a two-stage pipeline that considers both bipedal locomotion constraints and complex social environments. Specifically, social navigation scenarios are represented using sequential LiDAR grid maps (LGMs), from which we extract latent features, including collision regions, emotion-related discomfort zones, social interactions, and the spatio-temporal dynamics of evolving environments. The extracted features are directly mapped to the actions of reduced-order models (ROMs) through a DRL architecture. Furthermore, the proposed framework incorporates full-order dynamics and locomotion constraints during training, effectively accounting for tracking errors and restrictions of the locomotion controller while planning the trajectory with ROMs. Comprehensive experiments demonstrate that our approach outperforms both model-based planners and DRL-based baselines. The hardware videos and open-source code are available at https://gatech-lidar.github.io/emobipednav.github.io/.
+ oai:arXiv.org:2503.12538v2
+ cs.RO
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Wei Zhu, Abirath Raju, Abdulaziz Shamsah, Anqi Wu, Seth Hutchinson, Ye Zhao
+
+
+ YOLO-LLTS: Real-Time Low-Light Traffic Sign Detection via Prior-Guided Enhancement and Multibranch Feature Interaction
+ https://arxiv.org/abs/2503.13883
+ arXiv:2503.13883v4 Announce Type: replace
+Abstract: Traffic sign detection is essential for autonomous driving and Advanced Driver Assistance Systems (ADAS). However, existing methods struggle to address the challenges of poor image quality and insufficient information under low-light conditions, leading to a decline in detection accuracy and affecting driving safety. To address this issue, we propose YOLO-LLTS, an end-to-end real-time traffic sign detection algorithm specifically designed for low-light environments. YOLO-LLTS introduces three main contributions: the HRFM-SOD module retains more information about distant or tiny traffic signs compared to traditional methods; the MFIA module interacts features with different receptive fields to improve information utilization; the PGFE module enhances detection accuracy by improving brightness, edges, contrast, and supplementing detail information. Additionally, we construct a new dataset, the Chinese Nighttime Traffic Sign Sample Set (CNTSSS), covering diverse nighttime scenarios. Experiments show that YOLO-LLTS achieves state-of-the-art performance, outperforming previous best methods by 2.7% mAP50 and 1.6% mAP50:95 on TT100K-night, 1.3% mAP50 and 1.9% mAP50:95 on CNTSSS, 7.5% mAP50 and 9.8% mAP50:95 on GTSDB-night, and superior results on CCTSDB2021. Deployment on edge devices confirms its real-time applicability and effectiveness. The code and the dataset are available at https://github.com/linzy88/YOLO-LLTS.
+ oai:arXiv.org:2503.13883v4
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/TIM.2025.3604925
+ IEEE Trans. Instrum. Meas., vol. 74, pp. 1-18, 2025
+ Ziyu Lin, Yunfan Wu, Yuhang Ma, Junzhou Chen, Ronghui Zhang, Jiaming Wu, Guodong Yin, Liang Lin
+
+
+ Light4GS: Lightweight Compact 4D Gaussian Splatting Generation via Context Model
+ https://arxiv.org/abs/2503.13948
+ arXiv:2503.13948v2 Announce Type: replace
+Abstract: 3D Gaussian Splatting (3DGS) has emerged as an efficient and high-fidelity paradigm for novel view synthesis. To adapt 3DGS for dynamic content, deformable 3DGS incorporates temporally deformable primitives with learnable latent embeddings to capture complex motions. Despite its impressive performance, the high-dimensional embeddings and vast number of primitives lead to substantial storage requirements. In this paper, we introduce a \textbf{Light}weight \textbf{4}D\textbf{GS} framework, called Light4GS, that employs significance pruning with a deep context model to provide a lightweight, storage-efficient dynamic 3DGS representation. The proposed Light4GS is based on 4DGS, a typical representation of deformable 3DGS. Specifically, our framework is built upon two core components: (1) a spatio-temporal significance pruning strategy that eliminates over 64\% of the deformable primitives, followed by an entropy-constrained spherical harmonics compression applied to the remainder; and (2) a deep context model that integrates intra- and inter-prediction with a hyperprior into a coarse-to-fine context structure to enable efficient multiscale latent embedding compression. Our approach achieves over 120x compression and increases rendering FPS by up to 20\% compared to the baseline 4DGS, and is also superior to frame-wise state-of-the-art 3DGS compression methods, demonstrating the effectiveness of Light4GS's intra- and inter-prediction without sacrificing rendering quality.
+ oai:arXiv.org:2503.13948v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mufan Liu, Qi Yang, He Huang, Wenjie Huang, Zhenlong Yuan, Zhu Li, Yiling Xu
+
+
+ Securing Automated Insulin Delivery Systems: A Review of Security Threats and Protective Strategies
+ https://arxiv.org/abs/2503.14006
+ arXiv:2503.14006v2 Announce Type: replace
+Abstract: Automated Insulin Delivery (AID) systems represent a significant advancement in diabetes care and wearable physiological closed-loop control technologies, integrating continuous glucose monitoring, control algorithms, and insulin pumps to improve blood glucose level control and reduce the burden of patient self-management. However, their increasing dependence on wireless communication and automatic control introduces security risks that may compromise patient privacy or result in life-threatening treatment errors. This paper presents a comprehensive survey of the AID system security landscape, covering technical vulnerabilities, regulatory frameworks, and commercial security measures. In addition, we conduct a systematic review of attack vectors and defence mechanisms proposed in the literature, following the PRISMA framework. Our findings highlight critical gaps, including the lack of specific security evaluation frameworks, insufficient protections in real-world deployments, and the need for comprehensive, lightweight, and adaptive defence mechanisms. We further investigate available research resources and outline open research challenges and future directions to guide the development of more secure and reliable AID systems. By focusing on AID systems, this review offers a representative case study for examining and improving the cybersecurity of safety-critical medical wearable systems.
+ oai:arXiv.org:2503.14006v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1016/j.cose.2025.104733
+ Computers & Security, Volume 160 (2026), 104733, ISSN 0167-4048
+ Yuchen Niu, Siew-Kei Lam
+
+
+ The value of hedging against energy storage uncertainties when designing energy parks
+ https://arxiv.org/abs/2503.15416
+ arXiv:2503.15416v4 Announce Type: replace
+Abstract: Energy storage is needed to match renewable generation to industrial loads in energy parks. However, the future performance of bulk storage technologies is currently highly uncertain. Due to the urgency of decarbonization targets, energy park projects must be designed and begun now. But, as uncertainty in storage performance reduces, a different technology than identified during initial design may turn out cheaper. Enabling flexibility so that design adaptations can be made as better information becomes available would lower the cost of decarbonizing industry. But having this flexibility is itself costly. This raises the question, "Is it worth it?"
+ This study quantifies the benefit of retaining flexibility to adapt energy park designs and optionality over storage technology choice as uncertainty reduces, to determine whether it is economically worthwhile. It applies the Value of Information analysis framework to the sizing of wind, solar, and storage in an illustrative energy park model based on a real-world proposal near Rotterdam, considering uncertainty in storage efficiency, lifetime, and capital cost.
+ Updating asset sizings after storage uncertainty reduced is found to reduce total costs by 18% on average. Having the option to switch storage technology choice as well reduces costs by a further 13%, which is substantially greater than the cost of providing storage optionality. Using two storage technologies in the energy park reduces costs by 14%, and in this case storage optionality is not worthwhile. These results are robust to the level of uncertainty reduction in storage performance, and the risk aversion of the system designer.
+ oai:arXiv.org:2503.15416v4
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1016/j.energy.2025.137600
+ Energy 334 (2025) 137600
+ Max Langtry, Ruchi Choudhary
+
+
+ Zero-Knowledge Federated Learning: A New Trustworthy and Privacy-Preserving Distributed Learning Paradigm
+ https://arxiv.org/abs/2503.15550
+ arXiv:2503.15550v3 Announce Type: replace
+Abstract: Federated Learning (FL) has emerged as a promising paradigm in distributed machine learning, enabling collaborative model training while preserving data privacy. However, despite its many advantages, FL still contends with significant challenges -- most notably regarding security and trust. Zero-Knowledge Proofs (ZKPs) offer a potential solution by establishing trust and enhancing system integrity throughout the FL process. Although several studies have explored ZKP-based FL (ZK-FL), a systematic framework and comprehensive analysis are still lacking. This article makes two key contributions. First, we propose a structured ZK-FL framework that categorizes and analyzes the technical roles of ZKPs across various FL stages and tasks. Second, we introduce a novel algorithm, Verifiable Client Selection FL (Veri-CS-FL), which employs ZKPs to refine the client selection process. In Veri-CS-FL, participating clients generate verifiable proofs for the performance metrics of their local models and submit these concise proofs to the server for efficient verification. The server then selects clients with high-quality local models for uploading, subsequently aggregating the contributions from these selected clients. By integrating ZKPs, Veri-CS-FL not only ensures the accuracy of performance metrics but also fortifies trust among participants while enhancing the overall efficiency and security of FL systems.
+ oai:arXiv.org:2503.15550v3
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Taotao Wang, Yuxin Jin, Qing Yang, Yihan Xia, Long Shi, Shengli Zhang
+
+
+ High-dimensional sparse recovery from function samples: Decoders, guarantees and instance optimality
+ https://arxiv.org/abs/2503.16209
+ arXiv:2503.16209v2 Announce Type: replace
+Abstract: We investigate the reconstruction of multivariate functions from samples using sparse recovery techniques. For Square Root Lasso, Orthogonal Matching Pursuit, and Compressive Sampling Matching Pursuit, we demonstrate both theoretically and empirically that they allow us to recover functions from a small number of random samples. In contrast to Basis Pursuit Denoising, the deployed decoders only require a search space $V_J$ spanned by dictionary elements indexed by $J$ and a sparsity parameter $n$ to guarantee an $L_2$-approximation error decaying no worse than a best $n$-term approximation error and the truncation error with respect to the search space $V_J$ and the uniform norm. We show that this happens simultaneously for all admissible functions if the number of samples scales as $n\log^2 n\log |J|$, coming from known bounds for the RIP for matrices built upon bounded orthonormal systems. As a consequence, we obtain bounds for sampling widths in function classes. In addition, we establish lower bounds on the required sample complexity, which show that the log-factor in $\vert J \vert$ is indeed necessary to obtain such {\em instance-optimal} error guarantees. Finally, we conduct several numerical experiments to show that our theoretical bounds are reasonable and compare the discussed decoders in practice.
+ oai:arXiv.org:2503.16209v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Moritz Moeller, Sebastian Neumayer, Kateryna Pozharska, Tizian Sommerfeld, Tino Ullrich
+
+
+ Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't
+ https://arxiv.org/abs/2503.16219
+ arXiv:2503.16219v2 Announce Type: replace
+Abstract: Enhancing the reasoning capabilities of large language models (LLMs) typically relies on massive computational resources and extensive datasets, limiting accessibility for resource-constrained settings. Our study investigates the potential of reinforcement learning (RL) to improve reasoning in small LLMs, focusing on a 1.5-billion-parameter model, DeepSeek-R1-Distill-Qwen-1.5B, under strict constraints: training on 4 NVIDIA A40 GPUs (48 GB VRAM each) within 24 hours. Adapting the Group Relative Policy Optimization (GRPO) algorithm and curating a compact, high-quality mathematical reasoning dataset, we conducted three experiments to explore model behavior and performance. Our results demonstrate rapid reasoning gains - e.g., AMC23 accuracy rising from 63% to 80% and AIME24 reaching 46.7%, surpassing o1-preview - using only 7,000 samples and a $42 training cost, compared to thousands of dollars for baseline models. However, challenges such as optimization instability and length constraints emerged with prolonged training. These findings highlight the efficacy of RL-based fine-tuning for small LLMs, offering a cost-effective alternative to large-scale approaches. We release our code and datasets as open-source resources, providing insights into trade-offs and laying a foundation for scalable, reasoning-capable LLMs in resource-limited environments. All are available at https://github.com/knoveleng/open-rs.
+ oai:arXiv.org:2503.16219v2
+ cs.LG
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Quy-Anh Dang, Chris Ngo
+
+
+ LLM-Glasses: GenAI-driven Glasses with Haptic Feedback for Navigation of Visually Impaired People
+ https://arxiv.org/abs/2503.16475
+ arXiv:2503.16475v2 Announce Type: replace
+Abstract: LLM-Glasses is a wearable navigation system which assists visually impaired people by utilizing YOLO-World object detection, GPT-4o-based reasoning, and haptic feedback for real-time guidance. The device translates visual scene understanding into intuitive tactile feedback on the temples, allowing hands-free navigation. Three studies evaluate the system: recognition of 13 haptic patterns with an average recognition rate of 81.3%, VICON-based guidance with predefined paths using haptic cues, and an LLM-guided scene evaluation with decision accuracies of 91.8% without obstacles, 84.6% with static obstacles, and 81.5% with dynamic obstacles. These results show that LLM-Glasses can deliver reliable navigation support in controlled environments and motivate further work on responsiveness and deployment in more complex real-world scenarios.
+ oai:arXiv.org:2503.16475v2
+ cs.HC
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Issatay Tokmurziyev, Miguel Altamirano Cabrera, Muhammad Haris Khan, Yara Mahmoud, Dzmitry Tsetserukou
+
+
+ BPINN-EM-Post: Bayesian Physics-Informed Neural Network based Stochastic Electromigration Damage Analysis in the Post-void Phase
+ https://arxiv.org/abs/2503.17393
+ arXiv:2503.17393v3 Announce Type: replace
+Abstract: In contrast to the assumptions of most existing Electromigration (EM) analysis tools, the evolution of EM-induced stress is inherently non-deterministic, influenced by factors such as input current fluctuations and manufacturing non-idealities. Traditional approaches for estimating stress variations typically involve computationally expensive and inefficient Monte Carlo simulations with industrial solvers, which quantify variations using mean and variance metrics. In this work, we introduce a novel machine learning-based framework, termed BPINN-EM-Post, for efficient stochastic analysis of EM-induced post-voiding aging processes. For the first time, our new approach integrates closed-form analytical solutions with a Bayesian Physics-Informed Neural Network (BPINN) framework to accelerate the analysis. The closed-form solutions enforce physical laws at the individual wire segment level, while the BPINN ensures that physics constraints at inter-segment junctions are satisfied and stochastic behaviors are accurately modeled. By reducing the number of variables in the loss functions through utilizing analytical solutions, our method significantly improves training efficiency without accuracy loss and naturally incorporates variational effects. Additionally, the analytical solutions effectively address the challenge of incorporating initial stress distributions in interconnect structures during post-void stress calculations. Numerical results demonstrate that BPINN-EM-Post achieves over 240x and more than 67x speedup compared to Monte Carlo simulations using the FEM-based COMSOL solver and FDM-based EMSpice, respectively, with marginal accuracy loss.
+ oai:arXiv.org:2503.17393v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Subed Lamichhane, Haotian Lu, Sheldon X. -D. Tan
+
+
+ Tube-Based Robust Control Strategy for Vision-Guided Autonomous Vehicles
+ https://arxiv.org/abs/2503.18752
+ arXiv:2503.18752v2 Announce Type: replace
+Abstract: A robust control strategy for autonomous vehicles can improve system stability, enhance riding comfort, and prevent driving accidents. This paper presents a novel interpolation-tube-based constrained iterative linear quadratic regulator (itube-CILQR) algorithm for autonomous computer-vision-based vehicle lane-keeping. The goal of the algorithm is to enhance robustness during high-speed cornering on tight turns. Compared with standard tube-based approaches, the proposed itube-CILQR algorithm reduces system conservatism and exhibits higher computational speed. Numerical simulations and vision-based experiments were conducted to examine the feasibility of using the proposed algorithm for controlling autonomous vehicles. The results indicated that the proposed algorithm achieved superior vehicle lane-keeping performance to variational CILQR-based methods and model predictive control (MPC) approaches involving the use of a classical interior-point optimizer. Specifically, itube-CILQR required an average runtime of 3.45 ms to generate a control signal for guiding a self-driving vehicle. By comparison, itube-MPC typically required a 4.32 times longer computation time to complete the same task. Moreover, the influence of conservatism on system behavior was investigated by exploring the variations in the interpolation variables derived using the proposed itube-CILQR algorithm during lane-keeping maneuvers.
+ oai:arXiv.org:2503.18752v2
+ eess.SY
+ cs.CV
+ cs.RO
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Der-Hau Lee
+
+
+ EconEvals: Benchmarks and Litmus Tests for Economic Decision-Making by LLM Agents
+ https://arxiv.org/abs/2503.18825
+ arXiv:2503.18825v3 Announce Type: replace
+Abstract: We develop evaluation methods for measuring the economic decision-making capabilities and tendencies of LLMs. First, we develop benchmarks derived from key problems in economics -- procurement, scheduling, and pricing -- that test an LLM's ability to learn from the environment in context. Second, we develop the framework of litmus tests, evaluations that quantify an LLM's choice behavior on a stylized decision-making task with multiple conflicting objectives. Each litmus test outputs a litmus score, which quantifies an LLM's tradeoff response, a reliability score, which measures the coherence of an LLM's choice behavior, and a competency score, which measures an LLM's capability at the same task when the conflicting objectives are replaced by a single, well-specified objective. Evaluating a broad array of frontier LLMs, we (1) investigate changes in LLM capabilities and tendencies over time, (2) derive economically meaningful insights from the LLMs' choice behavior and chain-of-thought, (3) validate our litmus test framework by testing self-consistency, robustness, and generalizability. Overall, this work provides a foundation for evaluating LLM agents as they are further integrated into economic decision-making.
+ oai:arXiv.org:2503.18825v3
+ cs.AI
+ cs.CL
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sara Fish, Julia Shephard, Minkai Li, Ran I. Shorrer, Yannai A. Gonczarowski
+
+
+ The Case for "Thick Evaluations" of Cultural Representation in AI
+ https://arxiv.org/abs/2503.19075
+ arXiv:2503.19075v2 Announce Type: replace
+Abstract: Generative AI model outputs have been increasingly evaluated for their (in)ability to represent non-Western cultures. We argue that these evaluations often operate through reductive ideals of representation, abstracted from how people define their own representation and neglecting the inherently interpretive and contextual nature of cultural representation. In contrast to these 'thin' evaluations, we introduce the idea of 'thick evaluations:' a more granular, situated, and discursive measurement framework for evaluating representations of social worlds in AI outputs, steeped in communities' own understandings of representation. We develop this evaluation framework through workshops in South Asia, by studying the 'thick' ways in which people interpret and assign meaning to AI-generated images of their own cultures. We introduce practices for thicker evaluations of representation that expand the understanding of representation underpinning AI evaluations and, by co-constructing metrics with communities, bring measurement in line with the experiences of communities on the ground.
+ oai:arXiv.org:2503.19075v2
+ cs.CY
+ cs.AI
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1609/aies.v8i3.36696
+ Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(3), 2067-2080 (2025)
+ Rida Qadri, Mark Diaz, Ding Wang, Michael Madaio
+
+
+ WVSC: Wireless Video Semantic Communication with Multi-frame Compensation
+ https://arxiv.org/abs/2503.21197
+ arXiv:2503.21197v2 Announce Type: replace
+Abstract: Existing wireless video transmission schemes directly conduct video coding at the pixel level, neglecting the inner semantics contained in videos. In this paper, we propose a wireless video semantic communication framework, abbreviated as WVSC, which integrates the idea of semantic communication into wireless video transmission scenarios. WVSC first encodes original video frames as semantic frames and then conducts video coding based on such compact representations, enabling video coding at the semantic level rather than the pixel level. Moreover, to further reduce the communication overhead, a reference semantic frame is introduced to substitute the motion vectors of each frame in common video coding methods. At the receiver, multi-frame compensation (MFC) is proposed to produce the compensated current semantic frame with a multi-frame fusion attention module. With both the reference frame transmission and MFC, bandwidth efficiency improves while maintaining satisfactory video transmission performance. Experimental results verify the performance gain of WVSC over other DL-based methods, e.g., DVSC, by about 1 dB, and over traditional schemes by about 2 dB in terms of PSNR.
+ oai:arXiv.org:2503.21197v2
+ cs.MM
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bingyan Xie, Yongpeng Wu, Yuxuan Shi, Biqian Feng, Wenjun Zhang, Jihong Park, Tony Q. S. Quek
+
+
+ NetSSM: Multi-Flow and State-Aware Network Trace Generation using State-Space Models
+ https://arxiv.org/abs/2503.22663
+ arXiv:2503.22663v2 Announce Type: replace
+Abstract: Access to raw network traffic data is essential for many computer networking tasks, from traffic modeling to performance evaluation. Unfortunately, this data is scarce due to high collection costs and governance rules. Previous efforts explore this challenge by generating synthetic network data, but fail to reliably handle multi-flow sessions, struggle to reason about stateful communication in moderate to long-duration network sessions, and lack robust evaluations tied to real-world utility. We propose a new method based on state-space models called NetSSM that generates raw network traffic at the packet-level granularity. Our approach captures interactions between multiple, interleaved flows -- an objective unexplored in prior work -- and effectively reasons about flow-state in sessions to capture traffic characteristics. NetSSM accomplishes this by learning from and producing traces 8x and 78x longer than existing transformer-based approaches. Evaluation results show that our method generates high-fidelity traces that outperform prior efforts in existing benchmarks. We also find that NetSSM's traces have high semantic similarity to real network data regarding compliance with standard protocol requirements and flow and session-level traffic characteristics.
+ oai:arXiv.org:2503.22663v2
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3786289
+ Andrew Chu, Xi Jiang, Shinan Liu, Arjun Bhagoji, Francesco Bronzino, Paul Schmitt, Nick Feamster
+
+
+ Hummus: A Dataset of Humorous Multimodal Metaphor Use
+ https://arxiv.org/abs/2504.02983
+ arXiv:2504.02983v2 Announce Type: replace
+Abstract: Metaphor and humor share a lot of common ground, and metaphor is one of the most common humorous mechanisms. This study focuses on the humorous capacity of multimodal metaphors, which has not received due attention in the community. We take inspiration from the Incongruity Theory of humor, the Conceptual Metaphor Theory, and the annotation scheme behind the VU Amsterdam Metaphor Corpus, and developed a novel annotation scheme for humorous multimodal metaphor use in image-caption pairs. We create the Hummus Dataset of Humorous Multimodal Metaphor Use, providing expert annotation on 1k image-caption pairs sampled from the New Yorker Caption Contest corpus. Using the dataset, we test state-of-the-art multimodal large language models (MLLMs) on their ability to detect and understand humorous multimodal metaphor use. Our experiments show that current MLLMs still struggle with processing humorous multimodal metaphors, particularly with regard to integrating visual and textual information. We release our dataset and code at github.com/xiaoyuisrain/humorous-multimodal-metaphor-use.
+ oai:arXiv.org:2504.02983v2
+ cs.CL
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Xiaoyu Tong, Zhi Zhang, Pia Sommerauer, Martha Lewis, Ekaterina Shutova
+
+
+ Hard Negative Sampling via Large Language Models for Recommendation
+ https://arxiv.org/abs/2504.04726
+ arXiv:2504.04726v2 Announce Type: replace
+Abstract: Hard negative sampling improves recommendation performance by accelerating convergence and sharpening the decision boundary. However, most existing methods rely on heuristic strategies, selecting negatives from a fixed candidate pool. Lacking semantic awareness, these methods often misclassify items that align with users' semantic interests as negatives, resulting in False Hard Negative Samples (FHNS). Such FHNS inject noisy supervision and hinder the model's optimal performance. To address this challenge, we propose HNLMRec, a generative semantic negative sampling framework. Leveraging the semantic reasoning capabilities of Large Language Models (LLMs), HNLMRec directly generates negative samples that are behaviorally distinct yet semantically relevant with respect to user preferences. Furthermore, we integrate collaborative filtering signals into the LLM via supervised fine-tuning, guiding the model to synthesize more reliable and informative hard negatives. Extensive experiments on multiple real-world datasets demonstrate that HNLMRec significantly outperforms traditional methods and LLM-enhanced baselines, while effectively mitigating popularity bias and data sparsity, thereby improving generalization.
+ oai:arXiv.org:2504.04726v2
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Chu Zhao, Enneng Yang, Yuting Liu, Jianzhe Zhao, Guibing Guo
+
+
+ VectorLiteRAG: Latency-Aware and Fine-Grained Resource Partitioning for Efficient RAG
+ https://arxiv.org/abs/2504.08930
+ arXiv:2504.08930v3 Announce Type: replace
+Abstract: Retrieval-Augmented Generation (RAG) systems combine vector similarity search with large language models (LLMs) to deliver accurate, context-aware responses. However, co-locating the vector retriever and the LLM on shared GPU infrastructure introduces significant challenges: vector search is memory and I/O intensive, while LLM inference demands high throughput and low latency. Naive resource sharing often leads to severe performance degradation, particularly under high request load or large index sizes.
+ We present VectorLiteRAG, a deployment-friendly RAG system that achieves latency-compliant inference without requiring additional hardware resources. VectorLiteRAG introduces a fine-grained GPU resource allocation mechanism based on detailed performance modeling and access pattern analysis. By estimating search latency and query hit rate distributions, it identifies an optimal index partitioning point across CPU and GPU tiers to minimize contention and maximize throughput.
+ Our evaluations show that VectorLiteRAG consistently expands the SLO compliant request rate range across all tested configurations, including both small and large LLMs, and small and large vector databases compared to naive baselines and state of the art alternatives. In the best case, VectorLiteRAG improves the attainable SLO throughput by up to 1.5 times without compromising generation quality or requiring additional compute resources.
+ oai:arXiv.org:2504.08930v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Junkyum Kim, Divya Mahajan
+
+
+ Linear complementary dual quasi-cyclic codes of index 2
+ https://arxiv.org/abs/2504.09126
+ arXiv:2504.09126v3 Announce Type: replace
+Abstract: We provide a polynomial approach to investigate linear complementary dual (LCD) quasi-cyclic codes over finite fields. We establish necessary and sufficient conditions for LCD quasi-cyclic codes of index 2 with respect to the Euclidean, Hermitian, and symplectic inner products. As a consequence of these characterizations, we derive necessary and sufficient conditions for LCD one-generator quasi-cyclic codes. Furthermore, using these characterizations, we construct some new quasi-cyclic LCD codes over small fields.
+ oai:arXiv.org:2504.09126v3
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Kanat Abdukhalikov, Duy Ho, San Ling, Gyanendra K. Verma
+
+
+ PraNet-V2: Dual-Supervised Reverse Attention for Medical Image Segmentation
+ https://arxiv.org/abs/2504.10986
+ arXiv:2504.10986v2 Announce Type: replace
+Abstract: Accurate medical image segmentation is essential for effective diagnosis and treatment. Previously, PraNet-V1 was proposed to enhance polyp segmentation by introducing a reverse attention (RA) module that utilizes background information. However, PraNet-V1 struggles with multi-class segmentation tasks. To address this limitation, we propose PraNet-V2, which, compared to PraNet-V1, effectively performs a broader range of tasks including multi-class segmentation. At the core of PraNet-V2 is the Dual-Supervised Reverse Attention (DSRA) module, which incorporates explicit background supervision, independent background modeling, and semantically enriched attention fusion. Our PraNet-V2 framework demonstrates strong performance on four polyp segmentation datasets. Additionally, by integrating DSRA to iteratively enhance foreground segmentation results in three state-of-the-art semantic segmentation models, we achieve up to a 1.36% improvement in mean Dice score. Code is available at: https://github.com/ai4colonoscopy/PraNet-V2/tree/main/binary_seg/jittor.
+ oai:arXiv.org:2504.10986v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Bo-Cheng Hu, Ge-Peng Ji, Dian Shao, Deng-Ping Fan
+
+
+ A structure-preserving numerical method for quasi-incompressible Navier-Stokes-Maxwell-Stefan systems
+ https://arxiv.org/abs/2504.11892
+ arXiv:2504.11892v2 Announce Type: replace
+Abstract: A conforming finite element scheme with mixed explicit-implicit time discretization for quasi-incompressible Navier-Stokes-Maxwell-Stefan systems in a bounded domain with periodic boundary conditions is presented. The system consists of the Navier-Stokes equations, together with a quasi-incompressibility constraint, coupled with the cross-diffusion Maxwell-Stefan equations. The numerical scheme preserves the partial masses and the quasi-incompressibility constraint and dissipates the discrete energy. Numerical experiments in two space dimensions illustrate the convergence of the scheme and the structure-preserving properties.
+ oai:arXiv.org:2504.11892v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Aaron Brunk, Ansgar Jüngel, Maria Lukáčová-Medvid'ová
+
+
+ Reinforcement Learning from Human Feedback
+ https://arxiv.org/abs/2504.12501
+ arXiv:2504.12501v5 Announce Type: replace
+Abstract: Reinforcement learning from human feedback (RLHF) has become an important technical and storytelling tool to deploy the latest machine learning systems. In this book, we hope to give a gentle introduction to the core methods for people with some level of quantitative background. The book starts with the origins of RLHF -- both in recent literature and in a convergence of disparate fields of science in economics, philosophy, and optimal control. We then set the stage with definitions, problem formulation, data collection, and other common math used in the literature. The core of the book details every optimization stage in using RLHF, from starting with instruction tuning to training a reward model and finally all of rejection sampling, reinforcement learning, and direct alignment algorithms. The book concludes with advanced topics -- understudied research questions in synthetic data and evaluation -- and open questions for the field.
+ oai:arXiv.org:2504.12501v5
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nathan Lambert
+
+
+ A0: An Affordance-Aware Hierarchical Model for General Robotic Manipulation
+ https://arxiv.org/abs/2504.12636
+ arXiv:2504.12636v5 Announce Type: replace
+Abstract: Robotic manipulation faces critical challenges in understanding spatial affordances--the "where" and "how" of object interactions--essential for complex manipulation tasks like wiping a board or stacking objects. Existing methods, including modular-based and end-to-end approaches, often lack robust spatial reasoning capabilities. Unlike recent point-based and flow-based affordance methods that focus on dense spatial representations or trajectory modeling, we propose A0, a hierarchical affordance-aware diffusion model that decomposes manipulation tasks into high-level spatial affordance understanding and low-level action execution. A0 leverages the Embodiment-Agnostic Affordance Representation, which captures object-centric spatial affordances by predicting contact points and post-contact trajectories. A0 is pre-trained on 1 million contact points data and fine-tuned on annotated trajectories, enabling generalization across platforms. Key components include Position Offset Attention for motion-aware feature extraction and a Spatial Information Aggregation Layer for precise coordinate mapping. The model's output is executed by the action execution module. Experiments on multiple robotic systems (Franka, Kinova, Realman, and Dobot) demonstrate A0's superior performance in complex tasks, showcasing its efficiency, flexibility, and real-world applicability.
+ oai:arXiv.org:2504.12636v5
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rongtao Xu, Jian Zhang, Minghao Guo, Youpeng Wen, Haoting Yang, Min Lin, Jianzheng Huang, Zhe Li, Kaidong Zhang, Liqiong Wang, Yuxuan Kuang, Meng Cao, Feng Zheng, Xiaodan Liang
+
+
+ ESPLoRA: Enhanced Spatial Precision with Low-Rank Adaption in Text-to-Image Diffusion Models for High-Definition Synthesis
+ https://arxiv.org/abs/2504.13745
+ arXiv:2504.13745v2 Announce Type: replace
+Abstract: Diffusion models have revolutionized text-to-image (T2I) synthesis, producing high-quality, photorealistic images. However, they still struggle to properly render the spatial relationships described in text prompts. To address the lack of spatial information in T2I generations, existing methods typically use external network conditioning and predefined layouts, resulting in higher computational costs and reduced flexibility. Our approach builds upon a curated dataset of spatially explicit prompts, meticulously extracted and synthesized from LAION-400M to ensure precise alignment between textual descriptions and spatial layouts. Alongside this dataset, we present ESPLoRA, a flexible fine-tuning framework based on Low-Rank Adaptation, specifically designed to enhance spatial consistency in generative models without increasing generation time or compromising the quality of the outputs. In addition to ESPLoRA, we propose refined evaluation metrics grounded in geometric constraints, capturing 3D spatial relations such as "in front of" or "behind". These metrics also expose spatial biases in T2I models which, even when not fully mitigated, can be strategically exploited by our TORE algorithm to further improve the spatial consistency of generated images. Our method outperforms CoMPaSS, the current baseline framework, on spatial consistency benchmarks.
+ oai:arXiv.org:2504.13745v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Andrea Rigo, Luca Stornaiuolo, Mauro Martino, Bruno Lepri, Nicu Sebe
+
+
+ Quantifying Emotional Arousal through Pupillary Response: A Novel Approach for Isolating the Luminosity Effect and Predicting Affective States
+ https://arxiv.org/abs/2504.13886
+ arXiv:2504.13886v2 Announce Type: replace
+Abstract: Pupil dilation is recognized as an objective indicator of emotional arousal, but confounding factors such as the luminosity of stimuli and the surrounding environment have greatly limited its practical usefulness. This study presents a new approach to isolate and remove the effect of luminosity on pupil dilation. We validated this approach by showing 32 video clips with different content and emotional intensity to 47 participants, who reported their level of emotional arousal after each video. We developed a model capable of predicting the effect of luminosity on pupil size as a function of screen brightness, which adapts to individual physiological differences and different types of monitors through a brief pre-experimental calibration. We thus estimated the pupil size due exclusively to luminosity and subtracted it from the total recorded pupil size, obtaining the component due exclusively to arousal. From the latter, we predicted the arousal of each participant for each video using two models. We first used a simple linear regression model. When we used the luminosity-corrected pupil size, we obtained a correlation between predicted and self-reported arousal of r = 0.65 +/- 0.12, and R2 of 0.43 +/- 0.12. The uncorrected pupil size, instead, showed virtually no predictive power (r = 0.26 +/- 0.15, R2 = 0.09 +/- 0.089). We then used an Extreme Gradient Boosting model, obtaining even better results in the case of luminosity correction (r = 0.765 +/- 0.047, R2 = 0.556 +/- 0.085). Our results highlight that separating emotional and luminosity components from pupillary responses is crucial for accurately predicting arousal.
+ oai:arXiv.org:2504.13886v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Zeel Pansara, Gabriele Navyte, Tatiana Freitas-Mendes, Camila Bottger, Edoardo Franco, Luca Citi, Erik S. Jacobi, Giulia L. Poerio, Helge Gillmeister, Caterina Cinel, Vito De Feo
+
+
+ Contextual Embedding-based Clustering to Identify Topics for Healthcare Service Improvement
+ https://arxiv.org/abs/2504.14068
+ arXiv:2504.14068v3 Announce Type: replace
+Abstract: Understanding patient feedback is crucial for improving healthcare services, yet analyzing unlabeled short-text feedback presents challenges due to limited data and domain-specific nuances. Traditional supervised approaches require extensive labeled datasets, making unsupervised methods more practical for extracting insights. This study applies unsupervised techniques to analyze 439 survey responses from a healthcare system in Wisconsin, USA. A keyword-based filter was used to isolate complaint-related feedback using a domain-specific lexicon. To identify dominant themes, we evaluated traditional topic models -- such as Latent Dirichlet Allocation (LDA) and Gibbs Sampling Dirichlet Multinomial Mixture (GSDMM) -- alongside BERTopic, a neural embedding-based clustering method. To improve coherence and interpretability in sparse, short-text data, we propose kBERT, which integrates BERT embeddings with k-means clustering. Model performance was assessed using coherence scores (Cv) and average Inverted Rank-Biased Overlap (IRBOavg). kBERT achieved the highest coherence (Cv = 0.53) and topic separation (IRBOavg = 1.00), outperforming all other models. These findings highlight the value of embedding-based, context-aware models in healthcare analytics.
+ oai:arXiv.org:2504.14068v3
+ cs.LG
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1109/COMPSAC65507.2025.00106
+ K M Sajjadul Islam, Ravi Teja Karri, Srujan Vegesna, Jiawei Wu, Praveen Madiraju
+
+
+ ChronoRoot 2.0: An Open AI-Powered Platform for 2D Temporal Plant Phenotyping
+ https://arxiv.org/abs/2504.14736
+ arXiv:2504.14736v2 Announce Type: replace
+Abstract: Plant developmental plasticity, particularly in root system architecture, is fundamental to understanding adaptability and agricultural sustainability. ChronoRoot 2.0 builds upon established low-cost hardware while significantly enhancing software capabilities and usability. The system employs nnUNet architecture for multi-class segmentation, demonstrating significant accuracy improvements while simultaneously tracking six distinct plant structures encompassing root, shoot, and seed components: main root, lateral roots, seed, hypocotyl, leaves, and petiole. This architecture enables easy retraining and incorporation of additional training data without requiring machine learning expertise. The platform introduces dual specialized graphical interfaces: a Standard Interface for detailed architectural analysis with novel gravitropic response parameters, and a Screening Interface enabling high-throughput analysis of multiple plants through automated tracking. Functional Principal Component Analysis integration enables discovery of novel phenotypic parameters through temporal pattern comparison. We demonstrate multi-species analysis with Arabidopsis thaliana and Solanum lycopersicum, two morphologically distinct plant species. Three use cases in Arabidopsis thaliana and validation with tomato seedlings demonstrate enhanced capabilities: circadian growth pattern characterization, gravitropic response analysis in transgenic plants, and high-throughput etiolation screening across multiple genotypes. ChronoRoot 2.0 maintains the low-cost, modular hardware advantages of its predecessor while dramatically improving accessibility through intuitive graphical interfaces and expanded analytical capabilities. The open-source platform makes sophisticated temporal plant phenotyping more accessible to researchers without computational expertise.
+ oai:arXiv.org:2504.14736v2
+ cs.CV
+ q-bio.QM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Nicol\'as Gaggion, Noelia A. Boccardo, Rodrigo Bonazzola, Mar\'ia Florencia Legascue, Mar\'ia Florencia Mammarella, Florencia Sol Rodriguez, Federico Emanuel Aballay, Florencia Bel\'en Catulo, Andana Barrios, Luciano J. Santoro, Franco Accavallo, Santiago Nahuel Villarreal, Leonardo I. Pereyra-Bistrain, Moussa Benhamed, Martin Crespi, Martiniano Mar\'ia Ricardi, Ezequiel Petrillo, Thomas Blein, Federico Ariel, Enzo Ferrante
+
+
+ RainbowPlus: Enhancing Adversarial Prompt Generation via Evolutionary Quality-Diversity Search
+ https://arxiv.org/abs/2504.15047
+ arXiv:2504.15047v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) exhibit remarkable capabilities but are susceptible to adversarial prompts that exploit vulnerabilities to produce unsafe or biased outputs. Existing red-teaming methods often face scalability challenges, resource-intensive requirements, or limited diversity in attack strategies. We propose RainbowPlus, a novel red-teaming framework rooted in evolutionary computation, enhancing adversarial prompt generation through an adaptive quality-diversity (QD) search that extends classical evolutionary algorithms like MAP-Elites with innovations tailored for language models. By employing a multi-element archive to store diverse high-quality prompts and a comprehensive fitness function to evaluate multiple prompts concurrently, RainbowPlus overcomes the constraints of single-prompt archives and pairwise comparisons in prior QD methods like Rainbow Teaming. Experiments comparing RainbowPlus to QD methods across six benchmark datasets and four open-source LLMs demonstrate superior attack success rate (ASR) and diversity (Diverse-Score $\approx 0.84$), generating up to 100 times more unique prompts (e.g., 10,418 vs. 100 for Ministral-8B-Instruct-2410). Against nine state-of-the-art methods on the HarmBench dataset with twelve LLMs (ten open-source, two closed-source), RainbowPlus achieves an average ASR of 81.1%, surpassing AutoDAN-Turbo by 3.9%, and is 9 times faster (1.45 vs. 13.50 hours). Our open-source implementation fosters further advancements in LLM safety, offering a scalable tool for vulnerability assessment. Code and resources are publicly available at https://github.com/knoveleng/rainbowplus, supporting reproducibility and future research in LLM red-teaming.
+ oai:arXiv.org:2504.15047v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Quy-Anh Dang, Chris Ngo, Truong-Son Hy
+
+
+ Linear Complementary Pairs of Quasi-Cyclic and Quasi-Twisted Codes
+ https://arxiv.org/abs/2504.15231
+ arXiv:2504.15231v2 Announce Type: replace
+Abstract: In this paper, we provide a polynomial characterization of linear complementary pairs of quasi-cyclic and quasi-twisted codes of index 2. We also give several examples of linear complementary pairs of quasi-cyclic and quasi-twisted codes with optimal security parameters.
+ oai:arXiv.org:2504.15231v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Kanat Abdukhalikov, Duy Ho, San Ling, Gyanendra K. Verma
+
+
+ KeyDiff: Key Similarity-Based KV Cache Eviction for Long-Context LLM Inference in Resource-Constrained Environments
+ https://arxiv.org/abs/2504.15364
+ arXiv:2504.15364v4 Announce Type: replace
+Abstract: We demonstrate that geometrically distinctive keys during LLM inference tend to have high attention scores. Based on this phenomenon, we propose KeyDiff, a training-free KV cache eviction method based solely on key similarity. Unlike other KV cache eviction methods, KeyDiff can process arbitrarily long prompts within strict resource constraints and efficiently generate responses. We provide a theoretical basis for KeyDiff by relating key diversity with attention scores. These results imply that KeyDiff can efficiently identify the most important tokens to retain. Notably, KeyDiff does not rely on attention scores, allowing the use of optimized attention mechanisms like FlashAttention. Under a strict memory allowance, we demonstrate the effectiveness of KeyDiff for the Llama and Qwen model families by observing a performance gap of less than 0.04% with an 8K cache budget ($\sim$23% KV cache reduction) from the non-evicting baseline on LongBench for Llama 3.1-8B and Llama 3.2-3B. We also observe near-baseline performance for Deepseek-R1-Distill-Llama-8B on the Math500 reasoning benchmark and decrease end-to-end inference latency by up to 30% compared to other token-eviction methods.
+ oai:arXiv.org:2504.15364v4
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Junyoung Park, Dalton Jones, Matthew J Morse, Raghavv Goel, Mingu Lee, Chris Lott
+
+
+ Combating Toxic Language: A Review of LLM-Based Strategies for Software Engineering
+ https://arxiv.org/abs/2504.15439
+ arXiv:2504.15439v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) have become integral to Software Engineering (SE), increasingly used in development workflows. However, their widespread adoption raises concerns about the presence and propagation of toxic language - harmful or offensive content that can foster exclusionary environments. This paper provides a comprehensive review of recent research (2020-2024) on toxicity detection and mitigation, focusing on both SE-specific and general-purpose datasets. We examine annotation and pre-processing techniques, assess detection methodologies, and evaluate mitigation strategies, particularly those leveraging LLMs. Additionally, we conduct an ablation study demonstrating the effectiveness of LLM-based rewriting for reducing toxicity. This review is limited to studies published within the specified timeframe and within the domain of toxicity in LLMs and SE; therefore, certain emerging methods or datasets beyond this period may fall outside its purview. By synthesizing existing work and identifying open challenges, this review highlights key areas for future research to ensure the responsible deployment of LLMs in SE and beyond.
+ oai:arXiv.org:2504.15439v2
+ cs.LG
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hao Zhuo, Yicheng Yang, Kewen Peng
+
+
+ Compton Form Factor Extraction using Quantum Deep Neural Networks
+ https://arxiv.org/abs/2504.15458
+ arXiv:2504.15458v3 Announce Type: replace
+Abstract: We extract Compton form factors (CFFs) from deeply virtual Compton scattering measurements at the Thomas Jefferson National Accelerator Facility (JLab) using quantum-inspired deep neural networks (QDNNs). The analysis implements the twist-2 Belitsky-Kirchner-M\"uller formalism and employs a fitting strategy that emulates standard local fits. Using pseudodata, we benchmark QDNNs against classical deep neural networks (CDNNs) and find that QDNNs often deliver higher predictive accuracy and tighter uncertainties at comparable model complexity. Guided by these results, we introduce a quantitative selection metric that indicates when QDNNs or CDNNs are optimal for a given experimental fit. After obtaining local extractions from the JLab data, we perform a standard neural-network global CFF fit and compare with previous global analyses. The results support QDNNs as an efficient and complementary tool to CDNNs for CFF determination and for future multidimensional studies of parton distributions and hadronic structure.
+ oai:arXiv.org:2504.15458v3
+ cs.LG
+ hep-ph
+ nucl-th
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Brandon B. Le, Dustin Keller
+
+
+ AI-Based Vulnerability Analysis of NFT Smart Contracts
+ https://arxiv.org/abs/2504.16113
+ arXiv:2504.16113v3 Announce Type: replace
+Abstract: With the rapid growth of the NFT market, the security of smart contracts has become crucial. However, existing AI-based detection models for NFT contract vulnerabilities remain limited due to their complexity, while traditional manual methods are time-consuming and costly. This study proposes an AI-driven approach to detect vulnerabilities in NFT smart contracts.
+ We collected 16,527 public smart contract codes, classifying them into five vulnerability categories: Risky Mutable Proxy, ERC-721 Reentrancy, Unlimited Minting, Missing Requirements, and Public Burn. The data, preprocessed in Python, was structured into training/test sets. Using the CART algorithm with Gini coefficient evaluation, we built initial decision trees for feature extraction. A random forest model was implemented to improve robustness through random data/feature sampling and multi-tree integration. GridSearch hyperparameter tuning further optimized the model, with 3D visualizations demonstrating parameter impacts on vulnerability detection.
+ Results show the random forest model excels in detecting all five vulnerabilities. For example, it identifies Risky Mutable Proxy by analyzing authorization mechanisms and state modifications, while ERC-721 Reentrancy detection relies on external call locations and lock mechanisms. The ensemble approach effectively reduces single-tree overfitting, with stable performance improvements after parameter tuning. This method provides an efficient technical solution for automated NFT contract detection and lays groundwork for scaling AI applications.
+ oai:arXiv.org:2504.16113v3
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xin Wang, Xiaoqi Li
+
+
+ What Sensors See, What People Feel: An Exploratory Study of Subjective Collaboration Perception in Mixed Reality
+ https://arxiv.org/abs/2504.16373
+ arXiv:2504.16373v3 Announce Type: replace
+Abstract: Mixed Reality (MR) enables rich, embodied collaboration; however, it is uncertain whether sensor- and system-logged behavioral signals capture how users experience that collaboration. This disconnect stems from a fundamental gap. Behavioral signals are observable and continuous, while collaboration is interpreted subjectively and shaped by internal states like presence, cognitive availability, and social awareness. Our core insight is that sensor signals serve as observable manifestations of subjective experiences in MR collaboration, and they can be captured through sensor data such as shared gaze, speech, spatial movement, and other system-logged performance metrics. We propose the Sensor-to-Subjective (S2S) Mapping Framework, a conceptual model that links observable interaction patterns to users' subjective perceptions of collaboration and internal cognitive states through sensor-based indicators and task performance metrics. To evaluate this model, we conducted an exploratory study with 48 participants across 12 MR groups engaged in a collaborative image-sorting task. Our findings show a correlation between sensed behavior and perceived collaboration, particularly through shared attention and proximity.
+ oai:arXiv.org:2504.16373v3
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yasra Chandio, Diana Romero, Salma Elmalaki, Fatima Anwar
+
+
+ Modeling and Simulation of Open Membranes in Stokes Flow with Mixed-Dimensional Coupling
+ https://arxiv.org/abs/2504.16823
+ arXiv:2504.16823v2 Announce Type: replace
+Abstract: In this work, we present a mathematical and computational framework to model the dynamics of open lipid bilayer membranes interacting with ambient Stokes flow. The model explicitly couples the three-dimensional viscous fluid, the two-dimensional membrane surface, and its one-dimensional free edge. We develop an axisymmetric hybrid BEM-FEM method that solves the problem with an effective one-dimensional formulation. A key component is a local mesh refinement strategy designed to accurately resolve singularities and boundary layers originating at the membrane edge. Several numerical examples are provided to showcase its ability to capture intricate edge dynamics and multiscale fluid-membrane coupling.
+ oai:arXiv.org:2504.16823v2
+ math.NA
+ cs.NA
+ physics.comp-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Han Zhou, Yuan-Nan Young, Yoichiro Mori
+
+
+ Beyond Task and Motion Planning: Hierarchical Robot Planning with General-Purpose Skills
+ https://arxiv.org/abs/2504.17901
+ arXiv:2504.17901v2 Announce Type: replace
+Abstract: Task and motion planning is a well-established approach for solving long-horizon robot planning problems. However, traditional methods assume that each task-level robot action, or skill, can be reduced to kinematic motion planning. We address the challenge of combining motion planning with closed-loop motor controllers that go beyond mere kinematic considerations. We propose a novel framework that integrates these policies into motion planning using Composable Interaction Primitives (CIPs), enabling the use of diverse, non-composable pre-learned skills in hierarchical robot planning. We validate our Task and Skill Planning (TASP) approach through real-world experiments on a bimanual manipulator and a mobile manipulator, demonstrating that CIPs allow diverse robots to combine motion planning with general-purpose skills to solve complex, long-horizon tasks.
+ oai:arXiv.org:2504.17901v2
+ cs.RO
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Benned Hedegaard, Yichen Wei, Ahmed Jaafar, Stefanie Tellex, George Konidaris, Naman Shah
+
+
+ A finite volume Simo-Reissner beam method for moored floating body dynamics
+ https://arxiv.org/abs/2504.18248
+ arXiv:2504.18248v2 Announce Type: replace
+Abstract: This paper presents a novel finite volume mooring line model based on the geometrically exact Simo-Reissner beam model for analysing the interaction between a floating rigid body and its mooring lines. The coupled numerical model is implemented entirely within a finite volume-based discretisation framework using a popular computational fluid dynamics C++ toolbox, OpenFOAM. Unlike existing methods for modelling mooring lines, which rely on lumped mass models or finite element-based approaches, this work simulates the mooring cables using non-linear beam models implemented in a finite volume framework to account for bending, tensile, and torsional loading. This advancement makes the current work particularly valuable for simulating extreme sea conditions. The coupled model developed in this study has been validated and verified using experimental and numerical data for a floating box moored with four catenary mooring lines under regular wave conditions featuring different wave heights and periods. The results demonstrate strong agreement with both experimental and numerical data, highlighting the model's accuracy in capturing mooring dynamics and floating body motion.
+ oai:arXiv.org:2504.18248v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1016/j.apor.2025.104845
+ Amirhossein Taran, Seevani Bali, Zeljko Tukovic, Vikram Pakrashi, Philip Cardiff
+
+
+ The frequency $K_i$s for symmetrical traveling salesman problem
+ https://arxiv.org/abs/2504.19608
+ arXiv:2504.19608v4 Announce Type: replace
+Abstract: The frequency $K_i$s ($i\in[4,n]$) are studied for the symmetric traveling salesman problem ($TSP$) to illustrate the structural properties of the edges in the optimal Hamiltonian cycle ($OHC$). A frequency $K_i$ is computed with the set of ${{i}\choose{2}}$ optimal $i$-vertex paths with given endpoints (optimal $i$-vertex paths) in one corresponding $K_i$ in $K_n$. Given an $OHC$ edge related to $K_i$, it has a frequency greater than $\frac{1}{2}{{i}\choose{2}}$ in the frequency $K_i$, whereas that of an ordinary edge not in $OHC$ is smaller than $2(n-3)$. Moreover, given a frequency $K_i$ containing an $OHC$ edge related to $K_n$, the frequency of the $OHC$ edge is greater than $\frac{1}{2}{{i}\choose{2}}$ in the average case. It is also found that the probability that an $OHC$ edge is contained in the optimal $i$-vertex paths increases with $i\in [4, n]$, or remains stable if it decreases from $i$ to $i+1\leq n$. As the frequency $K_i$s are used to compute the frequency of an edge, each $OHC$ edge reaches its peak frequency at $i=P_0$, where $P_0=\frac{n}{2} + 2$ for even $n$ or $\frac{n+1}{2} + 1$ for odd $n$. For each ordinary edge outside $OHC$, the probability that it is contained in the optimal $i$-vertex paths decreases with $i$ in the average case. Moreover, the average frequency of an ordinary edge will be smaller than $\frac{1}{2}{{i}\choose{2}}$ if $i \geq 2i_d$, where $i_d$ is the smallest number meeting the condition $\frac{(n-2)(n-3) - (i_d-2)(i_d-3)}{(n-2)(n-3) - (i_d-1)(i_d-2)} \geq \sqrt{1 + \frac{2}{i_d(i_d+1)}}$ and $i_d = O(n^{\frac{4}{7}})$. Based on these findings, an algorithm is presented to find the $OHC$ in $O(n^2i_d^42^{2i_d})$ time using dynamic programming. Experiments on benchmark $TSP$ instances are conducted to verify these findings.
+ oai:arXiv.org:2504.19608v4
+ cs.DM
+ math.CO
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yong Wang
+
+
+ Mutual Information Minimization for Side-Channel Attack Resistance via Optimal Noise Injection
+ https://arxiv.org/abs/2504.20556
+ arXiv:2504.20556v3 Announce Type: replace
+Abstract: Side-channel attacks (SCAs) pose a serious threat to system security by extracting secret keys through physical leakages such as power consumption, timing variations, and electromagnetic emissions. Among existing countermeasures, artificial noise injection is recognized as one of the most effective techniques. However, its high power consumption poses a major challenge for resource-constrained systems such as Internet of Things (IoT) devices, motivating the development of more efficient protection schemes. In this paper, we model SCAs as a communication channel and aim to suppress information leakage by minimizing the mutual information between the secret information and side-channel observations, subject to a power constraint on the artificial noise. We propose an optimal artificial noise injection method that minimizes the mutual information under power constraints for artificial noise. Specifically, we formulate two convex optimization problems: 1) minimizing the total mutual information, and 2) minimizing the maximum mutual information across observations. Our first major contribution is proposing an optimal artificial noise injection framework for the case of Gaussian input, where the mutual information becomes the channel capacity, which is one way to quantify the information leakage. Our second major contribution extends the optimization framework to arbitrary input distributions. We identify conditions ensuring the convexity of the optimization problem and derive the optimal solution using the fundamental relationship between the mutual information and the minimum mean squared error. The simulation results show that the proposed methods significantly reduce both total and maximum mutual information compared to conventional techniques, confirming their effectiveness for resource-constrained, security-critical systems.
+ oai:arXiv.org:2504.20556v3
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiheon Woo, Donggyun Ryu, Daewon Seo, Young-Sik Kim, Namyoon Lee, Yuval Cassuto, Yongjune Kim
+
+
+ Fine-grained spatial-temporal perception for gas leak segmentation
+ https://arxiv.org/abs/2505.00295
+ arXiv:2505.00295v2 Announce Type: replace
+Abstract: Gas leaks pose significant risks to human health and the environment. Despite long-standing concerns, few methods can efficiently and accurately detect and segment leaks, owing to their concealed appearance and random shapes. In this paper, we propose a Fine-grained Spatial-Temporal Perception (FGSTP) algorithm for gas leak segmentation. FGSTP captures critical motion clues across frames and integrates them with refined object features in an end-to-end network. Specifically, we first construct a correlation volume to capture motion information between consecutive frames. Then, the fine-grained perception progressively refines the object-level features using previous outputs. Finally, a decoder is employed to optimize boundary segmentation. Because there is no highly precise labeled dataset for gas leak segmentation, we manually label a gas leak video dataset, GasVid. Experimental results on GasVid demonstrate that our model excels in segmenting non-rigid objects such as gas leaks, generating more accurate masks than other state-of-the-art (SOTA) models.
+ oai:arXiv.org:2505.00295v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/ICIP55913.2025.11084304
+ IEEE International Conference on Image Processing (ICIP), pp. 869-874, 2025
+ Xinlong Zhao, Shan Du
+
+
+ Stabilization by Controllers Having Integer Coefficients
+ https://arxiv.org/abs/2505.00481
+ arXiv:2505.00481v2 Announce Type: replace
+Abstract: The system property of ``having integer coefficients,'' that is, a transfer function has an integer monic polynomial as its denominator, is significant in the field of encrypted control as it is required for a dynamic controller to be realized over encrypted data. This paper shows that there always exists a controller with integer coefficients stabilizing a given discrete-time linear time-invariant plant. A constructive algorithm to obtain such a controller is provided, along with numerical examples. Furthermore, the proposed method is applied to converting a pre-designed controller to have integer coefficients, while the original performance is preserved in the sense that the transfer function of the closed-loop system remains unchanged.
+ oai:arXiv.org:2505.00481v2
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Joowon Lee, Donggil Lee, Junsoo Kim
+
+
+ Don't be lazy: CompleteP enables compute-efficient deep transformers
+ https://arxiv.org/abs/2505.01618
+ arXiv:2505.01618v4 Announce Type: replace
+Abstract: We study compute efficiency of LLM training when using different parameterizations, i.e., rules for adjusting model and optimizer hyperparameters (HPs) as model size changes. Some parameterizations fail to transfer optimal base HPs (such as learning rate) across changes in model depth, requiring practitioners to either re-tune these HPs as they scale up (expensive), or accept sub-optimal training when re-tuning is prohibitive. Even when parameterizations achieve HP transfer, we develop theory to show they may still lie in the lazy learning regime, where layers learn only features close to their linearization, preventing effective use of depth and nonlinearity. Finally, we identify and adopt the parameterization we call CompleteP that achieves both depth-wise HP transfer and non-lazy learning in all layers. CompleteP enables a wider range of model width/depth ratios to remain compute-efficient, unlocking shapes better suited for different hardware settings and operational contexts. Moreover, CompleteP enables 12-34% compute efficiency improvements over the prior state-of-the-art. All experiments were run on Cerebras CS-3 systems. A minimal implementation is available at https://github.com/EleutherAI/nanoGPT-mup/tree/completep.
+ oai:arXiv.org:2505.01618v4
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Nolan Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Li, Blake Bordelon, Shane Bergsma, Cengiz Pehlevan, Boris Hanin, Joel Hestness
+
+
+ Beyond Fixed Patches: Enhancing GPTs for Financial Prediction with Adaptive Segmentation and Learnable Wavelets
+ https://arxiv.org/abs/2505.02880
+ arXiv:2505.02880v2 Announce Type: replace
+Abstract: The extensive adoption of web technologies in the finance and investment sectors has led to an explosion of financial data, which contributes to the complexity of the forecasting task. Traditional machine learning models exhibit limitations in this forecasting task, constrained by their restricted model capacity. Recent advances in Generative Pre-trained Transformers (GPTs), with their greatly expanded parameter spaces, demonstrate promising potential for modeling complex dependencies in temporal sequences. However, existing pretraining-based approaches typically focus on fixed-length patch analysis, ignoring market data's multi-scale pattern characteristics. In this study, we propose $\mathbf{GPT4FTS}$, a novel framework that enhances pretrained transformer capabilities for temporal sequence modeling through dynamic patch segmentation and learnable wavelet transform modules. Specifically, we first employ K-means++ clustering based on DTW distance to identify scale-invariant patterns in market data. Building upon pattern recognition results, we introduce adaptive patch segmentation that partitions temporal sequences while preserving pattern integrity. To accommodate time-varying frequency characteristics, we devise a dynamic wavelet transform module that emulates discrete wavelet transformation with enhanced flexibility in capturing time-frequency features. Extensive experiments on real-world financial datasets substantiate the framework's efficacy. The source code is available at: https://anonymous.4open.science/r/GPT4FTS-6BCC/
+ oai:arXiv.org:2505.02880v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renjun Jia, Zian Liu, Peng Zhu, Dawei Cheng, Yuqi Liang
+
+
+ ADMM-Based Training for Spiking Neural Networks
+ https://arxiv.org/abs/2505.05527
+ arXiv:2505.05527v2 Announce Type: replace
+Abstract: In recent years, spiking neural networks (SNNs) have gained momentum due to their high potential in time-series processing combined with minimal energy consumption. However, they still lack a dedicated and efficient training algorithm. The popular backpropagation with surrogate gradients, adapted from stochastic gradient descent (SGD)-derived algorithms, has several drawbacks when used as an optimizer for SNNs. Specifically, the approximation introduced by the use of surrogate gradients leads to numerical imprecision, poor tracking of SNN firing times at training time, and, in turn, poor scalability. In this paper, we propose a novel SNN training method based on the alternating direction method of multipliers (ADMM). Our ADMM-based training aims to solve the problem of the SNN step function's non-differentiability by taking an entirely new approach with respect to gradient backpropagation. For the first time, we formulate the SNN training problem as an ADMM-based iterative optimization, derive closed-form updates, and empirically show the optimizer's convergence and its great potential; we also discuss promising future research directions for extending the method to different layer types and deeper architectures.
+ oai:arXiv.org:2505.05527v2
+ cs.LG
+ cs.AI
+ cs.NE
+ eess.SP
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Giovanni Perin, Cesare Bidini, Riccardo Mazzieri, Michele Rossi
+
+
+ DAPPER: Discriminability-Aware Policy-to-Policy Preference-Based Reinforcement Learning for Query-Efficient Robot Skill Acquisition
+ https://arxiv.org/abs/2505.06357
+ arXiv:2505.06357v3 Announce Type: replace
+Abstract: Preference-based Reinforcement Learning (PbRL) enables policy learning through simple queries comparing trajectories from a single policy. While human responses to these queries make it possible to learn policies aligned with human preferences, PbRL suffers from low query efficiency, as policy bias limits trajectory diversity and reduces the number of discriminable queries available for learning preferences. This paper identifies preference discriminability, which quantifies how easily a human can judge which trajectory is closer to their ideal behavior, as a key metric for improving query efficiency. To address this, we move beyond comparisons within a single policy and instead generate queries by comparing trajectories from multiple policies, as training them from scratch promotes diversity without policy bias. We propose Discriminability-Aware Policy-to-Policy Preference-Based Efficient Reinforcement Learning (DAPPER), which integrates preference discriminability with trajectory diversification achieved by multiple policies. DAPPER trains new policies from scratch after each reward update and employs a discriminator that learns to estimate preference discriminability, enabling the prioritized sampling of more discriminable queries. During training, it jointly maximizes the preference reward and preference discriminability score, encouraging the discovery of highly rewarding and easily distinguishable policies. Experiments in simulated and real-world legged robot environments demonstrate that DAPPER outperforms previous methods in query efficiency, particularly under challenging preference discriminability conditions. A supplementary video that facilitates understanding of the proposed framework and its experimental results is available at: https://youtu.be/lRwX8FNN8n4
+ oai:arXiv.org:2505.06357v3
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuki Kadokawa, Jonas Frey, Takahiro Miki, Takamitsu Matsubara, Marco Hutter
+
+
+ Benchmarking AI scientists for omics data driven biological discovery
+ https://arxiv.org/abs/2505.08341
+ arXiv:2505.08341v2 Announce Type: replace
+Abstract: Recent advances in large language models have enabled the emergence of AI scientists that aim to autonomously analyze biological data and assist scientific discovery. Despite rapid progress, it remains unclear to what extent these systems can extract meaningful biological insights from real experimental data. Existing benchmarks either evaluate reasoning in the absence of data or focus on predefined analytical outputs, failing to reflect realistic, data-driven biological research. Here, we introduce BAISBench (Biological AI Scientist Benchmark), a benchmark for evaluating AI scientists on real single-cell transcriptomic datasets. BAISBench comprises two tasks: cell type annotation across 15 expert-labeled datasets, and scientific discovery through 193 multiple-choice questions derived from biological conclusions reported in 41 published single-cell studies. We evaluated several representative AI scientists using BAISBench and, to provide a human performance baseline, invited six graduate-level bioinformaticians to collectively complete the same tasks. The results show that while current AI scientists fall short of fully autonomous biological discovery, they already demonstrate substantial potential in supporting data-driven biological research. These results position BAISBench as a practical benchmark for characterizing the current capabilities and limitations of AI scientists in biological research. We expect BAISBench to serve as an evaluation framework for guiding the development of more capable AI scientists and for helping biologists identify AI systems that can effectively support real-world research workflows. BAISBench can be found at: https://github.com/EperLuo/BAISBench, https://huggingface.co/datasets/EperLuo/BaisBench.
+ oai:arXiv.org:2505.08341v2
+ cs.AI
+ cs.MA
+ q-bio.GN
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Erpai Luo, Jinmeng Jia, Yifan Xiong, Xiangyu Li, Xiaobo Guo, Baoqi Yu, Minsheng Hao, Lei Wei, Xuegong Zhang
+
+
+ Large Language Models Meet Stance Detection: A Survey of Tasks, Methods, Applications, Challenges and Future Directions
+ https://arxiv.org/abs/2505.08464
+ arXiv:2505.08464v2 Announce Type: replace
+Abstract: Stance detection is essential for understanding subjective content across various platforms such as social media, news articles, and online reviews. Recent advances in Large Language Models (LLMs) have revolutionized stance detection by introducing novel capabilities in contextual understanding, cross-domain generalization, and multimodal analysis. Despite this progress, existing surveys often lack comprehensive coverage of approaches that specifically leverage LLMs for stance detection. To bridge this critical gap, our review article conducts a systematic analysis of stance detection, comprehensively examining recent advancements of LLMs transforming the field, including foundational concepts, methodologies, datasets, applications, and emerging challenges. We present a novel taxonomy for LLM-based stance detection approaches, structured along three key dimensions: 1) learning methods, including supervised, unsupervised, few-shot, and zero-shot; 2) data modalities, such as unimodal, multimodal, and hybrid; and 3) target relationships, encompassing in-target, cross-target, and multi-target scenarios. Furthermore, we discuss the evaluation techniques and analyze benchmark datasets and performance trends, highlighting the strengths and limitations of different architectures. Key applications in misinformation detection, political analysis, public health monitoring, and social media moderation are discussed. Finally, we identify critical challenges such as implicit stance expression, cultural biases, and computational constraints, while outlining promising future directions, including explainable stance reasoning, low-resource adaptation, and real-time deployment frameworks. Our survey highlights emerging trends, open challenges, and future directions to guide researchers and practitioners in developing next-generation stance detection systems powered by large language models.
+ oai:arXiv.org:2505.08464v2
+ cs.CL
+ cs.LG
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Lata Pangtey, Anukriti Bhatnagar, Shubhi Bansal, Shahid Shafi Dar, Nagendra Kumar
+
+
+ A Large-scale Benchmark on Geological Fault Delineation Models: Domain Shift, Training Dynamics, Generalizability, Evaluation and Inferential Behavior
+ https://arxiv.org/abs/2505.08585
+ arXiv:2505.08585v4 Announce Type: replace
+Abstract: Machine learning has taken a critical role in seismic interpretation workflows, especially in fault delineation tasks. However, despite the recent proliferation of pretrained models and synthetic datasets, the field still lacks a systematic understanding of the generalizability limits of these models across seismic data representing diverse geologic, acquisition and processing settings. Distributional shifts between data sources, limitations in fine-tuning strategies and labeled data accessibility, and inconsistent evaluation protocols all remain major roadblocks to deploying reliable models in real-world exploration. In this paper, we present the first large-scale benchmarking study explicitly designed to provide guidelines for domain shift strategies in seismic interpretation. Our benchmark spans over 200 combinations of model architectures, datasets and training strategies, across three datasets (synthetic and real) including FaultSeg3D, CRACKS, and Thebe. We systematically assess pretraining, fine-tuning, and joint training under varying domain shifts. Our analysis shows that common fine-tuning practices can lead to catastrophic forgetting, especially when source and target datasets are disjoint, and that larger models such as Segformer are more robust than smaller architectures. We also find that domain adaptation methods outperform fine-tuning when shifts are large, yet underperform when domains are similar. Finally, we complement segmentation metrics with a novel analysis based on fault characteristic descriptors, revealing how models absorb structural biases from training datasets. Overall, we establish a robust experimental baseline that provides insights into tradeoffs in current fault delineation workflows and highlights directions for building more generalizable and interpretable models.
+ oai:arXiv.org:2505.08585v4
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jorge Quesada, Chen Zhou, Prithwijit Chowdhury, Mohammad Alotaibi, Ahmad Mustafa, Yusufjon Kumakov, Mohit Prabhushankar, Ghassan AlRegib
+
+
+ High-Order Hermite Optimization: Fast and Exact Gradient Computation in Open-Loop Quantum Optimal Control using a Discrete Adjoint Approach
+ https://arxiv.org/abs/2505.09857
+ arXiv:2505.09857v3 Announce Type: replace
+Abstract: This work introduces the High-Order Hermite Optimization (HOHO) method, an open-loop discrete adjoint method for quantum optimal control. Our method is the first of its kind to efficiently compute exact (discrete) gradients when using continuous, parameterized control pulses while solving the forward equations (e.g. Schr&#246;dinger's equation or the Lindblad master equation) with an arbitrarily high-order Hermite Runge-Kutta method. The HOHO method is implemented in QuantumGateDesign$.$jl (https://github.com/leespen1/QuantumGateDesign.jl), an open-source software package for the Julia programming language, which we use to perform numerical experiments comparing the method to Juqbox$.$jl (https://github.com/LLNL/Juqbox.jl). For realistic model problems we observe speedups up to 775x.
+ oai:arXiv.org:2505.09857v3
+ math.NA
+ cs.NA
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Spencer Lee, Daniel Appelo
+
+
+ SM3D: Mitigating Spectral Bias and Semantic Dilution in Point Cloud State Space Models
+ https://arxiv.org/abs/2505.11099
+ arXiv:2505.11099v3 Announce Type: replace
+Abstract: Point clouds are a fundamental 3D data representation that underpins various computer vision tasks. Recently, Mamba has demonstrated strong potential for 3D point cloud understanding. However, existing approaches primarily focus on point serialization, overlooking a more fundamental limitation: State Space Models (SSMs) inherently exhibit a spectral low-pass bias arising from their recursive formulation. In serialized point clouds, this bias is particularly detrimental, as it suppresses high-frequency geometric structures and progressively dilutes semantic discriminability across deep layers. To address these limitations, we propose SM3D, a spectral-aware framework designed to jointly preserve geometric fidelity and semantic consistency. First, a Geometric Spectral Compensator (GSC) is introduced to counteract the low-pass bias by explicitly injecting graph-guided high-frequency components through local Laplacian analysis, thereby restoring structural sensitivity. Second, we design a Semantic Coherence Refiner (SCR) to rectify semantic drift through frequency-aware channel recalibration. To balance theoretical precision and computational efficiency, SCR is instantiated via two pathways: an exact Laplacian eigendecomposition (SCR-L) and a linear-complexity Chebyshev polynomial approximation (SCR-C). Extensive experiments demonstrate that SM3D achieves state-of-the-art performance, including 96.0% accuracy on ModelNet40 and 86.5% mIoU on ShapeNetPart, validating its effectiveness in mitigating spectral low-pass bias and semantic dilution (Code: https://github.com/L1277471578/SM3D).
+ oai:arXiv.org:2505.11099v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bin Liu, Chunyang Wang, Xuelian Liu
+
+
+ Quantization Meets Reasoning: Exploring and Mitigating Degradation of Low-Bit LLMs in Mathematical Reasoning
+ https://arxiv.org/abs/2505.11574
+ arXiv:2505.11574v4 Announce Type: replace
+Abstract: Low-bit post-training quantization (PTQ) is a practical route to deploy reasoning-capable LLMs under tight memory and latency budgets, yet it can markedly impair mathematical reasoning (drops up to 69.81% in our harder settings). We address two deployment-critical questions with process-level precision: Where along a step-structured solution does degradation first arise? How to mitigate it while staying in the low-bit regime? Across widely used PTQ methods (AWQ, GPTQ, SmoothQuant), open-source model families (Qwen, LLaMA; 0.5--7B), and math reasoning benchmarks (GSM8K, MATH, AIME), we perform format-aligned chain-of-thought with step-aligned attribution and uncover two robust regularities: (i) PTQ disproportionately elevates method and execution errors relative to high-level conceptual mistakes; and (ii) failures emerge early, with the first vulnerable step flipping and cascading to the final answer. These regularities suggest a general intervention principle: restore local token-level margins exactly at the earliest failure frontier. We instantiate this principle as a lightweight measure$\rightarrow$locate$\rightarrow$restore loop that operates directly on the quantized model: detect the first faulty step, construct our "Silver Bullet" datasets, and apply small-scale supervised/preference tuning. In our settings, as few as 332 curated examples and 3--5 minutes of compute on a single GPU recover 4-bit weight math reasoning toward the full-precision baseline while preserving PTQ efficiency. Our framework is quantizer- and architecture-agnostic within the evaluated regimes, and turns low-bit degradation from a global accuracy problem into a local, reproducible process intervention.
+ oai:arXiv.org:2505.11574v4
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhen Li, Yupeng Su, Songmiao Wang, Runming Yang, Congkai Xie, Aofan Liu, Ming Li, Jiannong Cao, Yuan Xie, Ngai Wong, Hongxia Yang
+
+
+ Monotone Subsystem Decomposition for Efficient Multi-Objective Robot Design
+ https://arxiv.org/abs/2505.11624
+ arXiv:2505.11624v2 Announce Type: replace
+Abstract: Automating design minimizes errors, accelerates the design process, and reduces cost. However, automating robot design is challenging due to recursive constraints, multiple design objectives, and cross-domain design complexity possibly spanning multiple abstraction layers. Here we look at the problem of component selection, a combinatorial optimization problem in which a designer, given a robot model, must select compatible components from an extensive catalog. The goal is to satisfy high-level task specifications while optimally balancing trade-offs between competing design objectives. In this paper, we extend our previous constraint programming approach to multi-objective design problems and propose the novel technique of monotone subsystem decomposition to efficiently compute a Pareto front of solutions for large-scale problems. We prove that subsystems can be optimized for their Pareto fronts and, under certain conditions, these results can be used to determine a globally optimal Pareto front. Furthermore, subsystems serve as an intuitive design abstraction and can be reused across various design problems. Using an example quadcopter design problem, we compare our method to a linear programming approach and demonstrate our method scales better for large catalogs, solving a multi-objective problem of 10^25 component combinations in seconds. We then expand the original problem and solve a task-oriented, multi-objective design problem to build a fleet of quadcopters to deliver packages. We compute a Pareto front of solutions in seconds where each solution contains an optimal component-level design and an optimal package delivery schedule for each quadcopter.
+ oai:arXiv.org:2505.11624v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/ICRA55743.2025.11128384
+ 2025 IEEE International Conference on Robotics and Automation (ICRA), pp. 8114-8120
+ Andrew Wilhelm, Nils Napp
+
+
+ Missing vs. Unused Knowledge Hypothesis for Language Model Bottlenecks in Patent Understanding
+ https://arxiv.org/abs/2505.12452
+ arXiv:2505.12452v4 Announce Type: replace
+Abstract: While large language models (LLMs) excel at factual recall, the real challenge lies in knowledge application. A gap persists between their ability to answer complex questions and their effectiveness in performing tasks that require that knowledge. We investigate this gap using a patent classification problem that requires deep conceptual understanding to distinguish semantically similar but objectively different patents written in dense, strategic technical language. We find that LLMs often struggle with this distinction. To diagnose the source of these failures, we introduce a framework that decomposes model errors into two categories: missing knowledge and unused knowledge. Our method prompts models to generate clarifying questions and compares three settings -- raw performance, self-answered questions that activate internal knowledge, and externally provided answers that supply missing knowledge (if any). We show that most errors stem from failures to deploy existing knowledge rather than from true knowledge gaps. We also examine how models differ in constructing task-specific question-answer databases. Smaller models tend to generate simpler questions that they, and other models, can retrieve and use effectively, whereas larger models produce more complex questions that are less effective, suggesting complementary strengths across model scales. Together, our findings highlight that shifting evaluation from static fact recall to dynamic knowledge application offers a more informative view of model capabilities.
+ oai:arXiv.org:2505.12452v4
+ cs.CL
+ cs.CY
+ cs.DL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Siyang Wu, Honglin Bao, Nadav Kunievsky, James A. Evans
+
+
+ Revitalizing Black-Box Interpretability: Actionable Interpretability for LLMs via Proxy Models
+ https://arxiv.org/abs/2505.12509
+ arXiv:2505.12509v2 Announce Type: replace
+Abstract: Post-hoc explanations provide transparency and are essential for guiding model optimization, such as prompt engineering and data sanitation. However, applying model-agnostic techniques to Large Language Models (LLMs) is hindered by prohibitive computational costs, rendering these tools dormant for real-world applications. To revitalize model-agnostic interpretability, we propose a budget-friendly proxy framework that leverages efficient models to approximate the decision boundaries of expensive LLMs. We introduce a screen-and-apply mechanism to statistically verify local alignment before deployment. Our empirical evaluation confirms that proxy explanations achieve over 90% fidelity with only 11% of the oracle's cost. Building on this foundation, we demonstrate the actionable utility of our framework in prompt compression and poisoned example removal. Results show that reliable proxy explanations effectively guide optimization, transforming interpretability from a passive observation tool into a scalable primitive for LLM development. Additionally, we open-source code and datasets to facilitate future research.
+ oai:arXiv.org:2505.12509v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junhao Liu, Haonan Yu, Zhenyu Yan, Xin Zhang
+
+
+ Seek in the Dark: Reasoning via Test-Time Instance-Level Policy Gradient in Latent Space
+ https://arxiv.org/abs/2505.13308
+ arXiv:2505.13308v3 Announce Type: replace
+Abstract: Reasoning ability, a core component of human intelligence, continues to pose a significant challenge for Large Language Models (LLMs) in the pursuit of AGI. Although model performance has improved under the training scaling law, significant challenges remain, particularly with respect to training algorithms, such as catastrophic forgetting, and the limited availability of novel training data. As an alternative, test-time scaling enhances reasoning performance by increasing test-time computation without parameter updating. Unlike prior methods in this paradigm focused on token space, we propose leveraging latent space for more effective reasoning and better adherence to the test-time scaling law. We introduce LatentSeek, a novel framework that enhances LLM reasoning through Test-Time Instance-level Adaptation (TTIA) within the model's latent space. Specifically, LatentSeek leverages policy gradient to iteratively update latent representations, guided by self-generated reward signals. LatentSeek is evaluated on a range of reasoning benchmarks, including GSM8K, MATH-500, and AIME2024, across multiple LLM architectures. Results show that LatentSeek consistently outperforms strong baselines, such as Chain-of-Thought prompting and fine-tuning-based methods. Furthermore, our analysis demonstrates that LatentSeek is highly efficient, typically converging within a few iterations for problems of average complexity, while also benefiting from additional iterations, thereby highlighting the potential of test-time scaling in the latent space. These findings position LatentSeek as a lightweight, scalable, and effective solution for enhancing the reasoning capabilities of LLMs.
+ oai:arXiv.org:2505.13308v3
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hengli Li, Chenxi Li, Tong Wu, Xuekai Zhu, Yuxuan Wang, Zhaoxin Yu, Eric Hanchen Jiang, Song-Chun Zhu, Zixia Jia, Ying Nian Wu, Zilong Zheng
+
+
+ Second-Order Convergence in Private Stochastic Non-Convex Optimization
+ https://arxiv.org/abs/2505.15647
+ arXiv:2505.15647v2 Announce Type: replace
+Abstract: We investigate the problem of finding second-order stationary points (SOSP) in differentially private (DP) stochastic non-convex optimization. Existing methods suffer from two key limitations: (i) inaccurate convergence error rate due to overlooking gradient variance in the saddle point escape analysis, and (ii) dependence on auxiliary private model selection procedures for identifying DP-SOSP, which can significantly impair utility, particularly in distributed settings. To address these issues, we propose a generic perturbed stochastic gradient descent (PSGD) framework built upon Gaussian noise injection and general gradient oracles. A core innovation of our framework is using model drift distance to determine whether PSGD escapes saddle points, ensuring convergence to approximate local minima without relying on second-order information or additional DP-SOSP identification. By leveraging the adaptive DP-SPIDER estimator as a specific gradient oracle, we develop a new DP algorithm that rectifies the convergence error rates reported in prior work. We further extend this algorithm to distributed learning with heterogeneous data, providing the first formal guarantees for finding DP-SOSP in such settings. Our analysis also highlights the detrimental impacts of private selection procedures in distributed learning under high-dimensional models, underscoring the practical benefits of our design. Numerical experiments on real-world datasets validate the efficacy of our approach.
+ oai:arXiv.org:2505.15647v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Youming Tao, Zuyuan Zhang, Dongxiao Yu, Xiuzhen Cheng, Falko Dressler, Di Wang
+
+
+ Ranking Free RAG: Replacing Re-ranking with Selection in RAG for Sensitive Domains
+ https://arxiv.org/abs/2505.16014
+ arXiv:2505.16014v4 Announce Type: replace
+Abstract: In sensitive domains, Retrieval-Augmented Generation (RAG) must be interpretable and robust because errors do not just mislead, they invite lawsuits, undermine scholarly credibility, and breach compliance. Stakeholders require traceable evidence, clear rationales for why specific evidence is selected, and safeguards against poisoned or misleading content. Yet current RAG pipelines rely on similarity-based retrieval with arbitrary top-k cutoffs, provide no explanation for selections, and remain vulnerable to poisoning attacks. We propose METEORA, which addresses these drawbacks with rationale-driven selection, using explicit reasoning to guide evidence choice, explain decisions, and improve robustness to RAG poisoning. METEORA operates in three stages: (1) a general-purpose LLM is preference-tuned to generate query-conditioned rationales using direct preference optimization; (2) these rationales drive an Evidence Chunk Selection Engine that pairs rationales with retrieved evidence for query-specific relevance and applies elbow detection to choose an adaptive cutoff (optionally expanding context with neighboring chunks); and (3) a Verifier LLM uses the rationales to detect and filter poisoned or misleading evidence before generation. Across six datasets, METEORA achieves 13.41% higher recall and, without expansion, 21.05% higher precision than the strongest baseline. It reduces the evidence needed for comparable recall by 80%, improving downstream answer accuracy by 33.34%, and strengthens adversarial defense by increasing F1 from 0.10 to 0.44. Code is available at: https://anonymous.4open.science/r/METEORA-DC46/README.md
+ oai:arXiv.org:2505.16014v4
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yash Saxena, Ankur Padia, Mandar S Chaudhary, Kalpa Gunaratna, Srinivasan Parthasarathy, Manas Gaur
+
+
+ KNN-SSD: Enabling Dynamic Self-Speculative Decoding via Nearest Neighbor Layer Set Optimization
+ https://arxiv.org/abs/2505.16162
+ arXiv:2505.16162v2 Announce Type: replace
+Abstract: Speculative Decoding (SD) has emerged as a widely used paradigm to accelerate the inference of large language models (LLMs) without compromising generation quality. It works by efficiently drafting multiple tokens using a compact model and then verifying them in parallel using the target LLM. Notably, Self-Speculative Decoding proposes skipping certain layers to construct the draft model, which eliminates the need for additional parameters or training. Despite its strengths, we observe in this work that drafting with layer skipping exhibits significant sensitivity to domain shifts, leading to a substantial drop in acceleration performance. To enhance the domain generalizability of this paradigm, we introduce KNN-SSD, an algorithm that leverages K-Nearest Neighbor (KNN) search to match different skipped layers with various domain inputs. We evaluated our algorithm across various models and multiple tasks, observing that its application leads to 1.3x-1.6x speedup in LLM inference.
+ oai:arXiv.org:2505.16162v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mingbo Song, Heming Xia, Jun Zhang, Chak Tou Leong, Qiancheng Xu, Wenjie Li, Sujian Li
+
+
+ When Do LLMs Admit Their Mistakes? Understanding The Role Of Model Belief In Retraction
+ https://arxiv.org/abs/2505.16170
+ arXiv:2505.16170v3 Announce Type: replace
+Abstract: Can large language models (LLMs) admit their mistakes when they should know better? In this work, we study when and why LLMs choose to retract, i.e., spontaneously and immediately acknowledge their errors. Using model-specific testbeds, we find that while LLMs are capable of retraction, they do so only rarely, even when they can recognize their mistakes when asked in a separate interaction. We identify a reliable predictor of retraction: the model's momentary belief, as measured by a probe on its internal states that is trained to predict correctness on external datasets unrelated to retraction. A model retracts only when it "believes" its answers to be incorrect during generation; these beliefs frequently diverge from models' parametric knowledge as measured by factoid questions. Steering experiments further demonstrate that model belief causally drives retraction. In particular, when the model believes its answer to be incorrect, this not only encourages the model to attempt further verification, but also alters attention dynamics. Finally, we show that supervised fine-tuning improves retraction performance by helping the model learn more accurate internal belief. Code and datasets are available on https://github.com/ayyyq/llm-retraction .
+ oai:arXiv.org:2505.16170v3
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuqing Yang, Robin Jia
+
+
+ Zebra-Llama: Towards Extremely Efficient Hybrid Models
+ https://arxiv.org/abs/2505.17272
+ arXiv:2505.17272v2 Announce Type: replace
+Abstract: With the growing demand for deploying large language models (LLMs) across diverse applications, improving their inference efficiency is crucial for sustainable and democratized access. However, retraining LLMs to meet new user-specific requirements is prohibitively expensive and environmentally unsustainable. In this work, we propose a practical and scalable alternative: composing efficient hybrid language models from existing pre-trained models. Our approach, Zebra-Llama, introduces a family of 1B, 3B, and 8B hybrid models by combining State Space Models (SSMs) and Multi-head Latent Attention (MLA) layers, using a refined initialization and post-training pipeline to efficiently transfer knowledge from pre-trained Transformers. Zebra-Llama achieves Transformer-level accuracy with near-SSM efficiency using only 7-11B training tokens (compared to trillions of tokens required for pre-training) and an 8B teacher. Moreover, Zebra-Llama dramatically reduces KV cache size, down to 3.9%, 2%, and 2.73% of the original for the 1B, 3B, and 8B variants, respectively, while preserving 100%, 100%, and >97% of average zero-shot performance on LM Harness tasks. Compared to models like MambaInLLaMA, X-EcoMLA, Minitron, and Llamba, Zebra-Llama consistently delivers competitive or superior accuracy while using significantly fewer tokens, smaller teachers, and vastly reduced KV cache memory. Notably, Zebra-Llama-8B surpasses Minitron-8B in few-shot accuracy by 7% while using 8x fewer training tokens, over 12x smaller KV cache, and a smaller teacher (8B vs. 15B). It also achieves 2.6x-3.8x higher throughput (tokens/s) than MambaInLlama up to a 32k context length. We will release code and model checkpoints upon acceptance.
+ oai:arXiv.org:2505.17272v2
+ cs.LG
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Mingyu Yang, Mehdi Rezagholizadeh, Guihong Li, Vikram Appia, Emad Barsoum
+
+
+ A Multi-Head Attention Soft Random Forest for Interpretable Patient No-Show Prediction
+ https://arxiv.org/abs/2505.17344
+ arXiv:2505.17344v2 Announce Type: replace
+Abstract: Unattended scheduled appointments, defined as patient no-shows, adversely affect both healthcare providers and patients' health, disrupting the continuity of care, operational efficiency, and the efficient allocation of medical resources. Accurate predictive modeling is needed to reduce the impact of no-shows. Although machine learning methods, such as logistic regression, random forest models, and decision trees, are widely used in predicting patient no-shows, they often rely on hard decision splits and static feature importance, limiting their adaptability to specific or complex patient behaviors. To address this limitation, we propose a new hybrid Multi-Head Attention Soft Random Forest (MHASRF) model that integrates attention mechanisms into a random forest model using probabilistic soft splitting instead of hard splitting. The MHASRF model assigns attention weights differently across the trees, enabling attention on specific patient behaviors. The model exhibited 93.72% accuracy, 94.77% specificity, 90.23% precision, 89.38% recall, a 91.54% F1 score, and a 97.87% AUC, demonstrating high and balanced performance across metrics and outperforming decision tree, random forest, logistic regression, and Naive Bayes models overall. Furthermore, MHASRF was able to identify key predictors of patient no-shows using two levels of feature importance (tree level and attention mechanism level), offering deeper insights into patient no-show predictors. The proposed model is a robust, adaptable, and interpretable method for predicting patient no-shows that will help healthcare providers optimize resources.
+ oai:arXiv.org:2505.17344v2
+ cs.LG
+ cs.AI
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ninda Nurseha Amalina, Heungjo An
+
+
+ EVADE-Bench: Multimodal Benchmark for Evasive Content Detection in E-Commerce Applications
+ https://arxiv.org/abs/2505.17654
+ arXiv:2505.17654v3 Announce Type: replace
+Abstract: E-commerce platforms increasingly rely on Large Language Models (LLMs) and Vision-Language Models (VLMs) to detect illicit or misleading product content. However, these models remain vulnerable to evasive content: inputs (text or images) that superficially comply with platform policies while covertly conveying prohibited claims. Unlike traditional adversarial attacks that induce overt failures, evasive content exploits ambiguity and context, making it far harder to detect. Existing robustness benchmarks provide little guidance for this demanding, real-world challenge. We introduce EVADE, the first expert-curated, Chinese, multimodal benchmark specifically designed to evaluate foundation models on evasive content detection in e-commerce. The dataset contains 2,833 annotated text samples and 13,961 images spanning six demanding product categories, including body shaping, height growth, and health supplements. Two complementary tasks assess distinct capabilities: Single-Violation, which probes fine-grained reasoning under short prompts, and All-in-One, which tests long-context reasoning by merging overlapping policy rules into unified instructions. Notably, the All-in-One setting significantly narrows the performance gap between partial and full-match accuracy, suggesting that clearer rule definitions improve alignment between human and model judgment. We benchmark 26 mainstream LLMs and VLMs and observe substantial performance gaps: even state-of-the-art models frequently misclassify evasive samples. By releasing EVADE and strong baselines, we provide the first rigorous standard for evaluating evasive-content detection, expose fundamental limitations in current multimodal reasoning, and lay the groundwork for safer and more transparent content moderation systems in e-commerce. The dataset is publicly available at https://huggingface.co/datasets/koenshen/EVADE-Bench.
+ oai:arXiv.org:2505.17654v3
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ancheng Xu, Zhihao Yang, Jingpeng Li, Guanghu Yuan, Longze Chen, Liang Yan, Jiehui Zhou, Zhen Qin, Hengyu Chang, Hamid Alinejad-Rokny, Min Yang
+
+
+ Knot So Simple: A Minimalistic Environment for Spatial Reasoning
+ https://arxiv.org/abs/2505.18028
+ arXiv:2505.18028v3 Announce Type: replace
+Abstract: We propose KnotGym, an interactive environment for complex, spatial reasoning and manipulation. KnotGym includes goal-oriented rope manipulation tasks with varying levels of complexity, all requiring acting from pure image observations. Tasks are defined along a clear and quantifiable axis of complexity based on the number of knot crossings, creating a natural generalization test. KnotGym has a simple observation space, allowing for scalable development, yet it highlights core challenges in integrating acute perception, spatial reasoning, and grounded manipulation. We evaluate methods of different classes, including model-based RL, model-predictive control, and chain-of-thought reasoning, and illustrate the challenges KnotGym presents. KnotGym is available at https://github.com/lil-lab/knotgym.
+ oai:arXiv.org:2505.18028v3
+ cs.LG
+ cs.AI
+ cs.CV
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zizhao Chen, Yoav Artzi
+
+
+ MathEDU: Feedback Generation on Problem-Solving Processes for Mathematical Learning Support
+ https://arxiv.org/abs/2505.18056
+ arXiv:2505.18056v2 Announce Type: replace
+Abstract: The increasing reliance on Large Language Models (LLMs) across various domains extends to education, where students progressively use generative AI as a tool for learning. While prior work has examined LLMs' mathematical ability, their reliability in grading authentic student problem-solving processes and delivering effective feedback remains underexplored. This study introduces MathEDU, a dataset consisting of student problem-solving processes in mathematics and corresponding teacher-written feedback. We systematically evaluate the reliability of various models across three hierarchical tasks: answer correctness classification, error identification, and feedback generation. Experimental results show that fine-tuning strategies effectively improve performance in classifying correctness and locating erroneous steps. However, the generated feedback across models shows a considerable gap from teacher-written feedback. Critically, the generated feedback is often verbose and fails to provide targeted explanations for the student's underlying misconceptions. This emphasizes the urgent need for trustworthy and pedagogy-aware AI feedback in education.
+ oai:arXiv.org:2505.18056v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Wei-Ling Hsu, Yu-Chien Tang, An-Zi Yen
+
+
+ GenPO: Generative Diffusion Models Meet On-Policy Reinforcement Learning
+ https://arxiv.org/abs/2505.18763
+ arXiv:2505.18763v3 Announce Type: replace
+Abstract: Recent advances in reinforcement learning (RL) have demonstrated the powerful exploration capabilities and multimodality of generative diffusion-based policies. While substantial progress has been made in offline RL and off-policy RL settings, integrating diffusion policies into on-policy frameworks like PPO remains underexplored. This gap is particularly significant given the widespread use of large-scale parallel GPU-accelerated simulators, such as IsaacLab, which are optimized for on-policy RL algorithms and enable rapid training of complex robotic tasks. A key challenge lies in computing state-action log-likelihoods under diffusion policies, which is straightforward for Gaussian policies but intractable for flow-based models due to irreversible forward-reverse processes and discretization errors (e.g., Euler-Maruyama approximations). To bridge this gap, we propose GenPO, a generative policy optimization framework that leverages exact diffusion inversion to construct invertible action mappings. GenPO introduces a novel doubled dummy action mechanism that enables invertibility via alternating updates, resolving log-likelihood computation barriers. Furthermore, we also use the action log-likelihood for unbiased entropy and KL divergence estimation, enabling KL-adaptive learning rates and entropy regularization in on-policy updates. Extensive experiments on eight IsaacLab benchmarks, including legged locomotion (Ant, Humanoid, Anymal-D, Unitree H1, Go2), dexterous manipulation (Shadow Hand), aerial control (Quadcopter), and robotic arm tasks (Franka), demonstrate GenPO's superiority over existing RL baselines. Notably, GenPO is the first method to successfully integrate diffusion policies into on-policy RL, unlocking their potential for large-scale parallelized training and real-world robotic deployment.
+ oai:arXiv.org:2505.18763v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shutong Ding, Ke Hu, Shan Zhong, Haoyang Luo, Weinan Zhang, Jingya Wang, Jun Wang, Ye Shi
+
+
+ Genie Centurion: Accelerating Scalable Real-World Robot Training with Human Rewind-and-Refine Guidance
+ https://arxiv.org/abs/2505.18793
+ arXiv:2505.18793v2 Announce Type: replace
+Abstract: While Vision-Language-Action (VLA) models show strong generalizability in various tasks, real-world deployment of robotic policy still requires large-scale, high-quality human expert demonstrations. However, data collection via human teleoperation requires continuous operator attention, which is costly and hard to scale. To address this, we propose Genie Centurion (GCENT), a scalable and general data collection paradigm based on human rewind-and-refine guidance, enabling robots' interactive learning in deployment. GCENT starts at an imperfect policy and improves over time. When robot execution failures occur, GCENT allows robots to revert to a previous state with a rewind mechanism, after which a teleoperator provides corrective demonstrations to refine the policy. This framework supports a one-human-to-many-robots supervision scheme with a Task Sentinel module, which autonomously predicts task success and solicits human intervention when necessary. Empirical results show that GCENT achieves up to 40% higher task success rates than state-of-the-art data collection methods, and reaches comparable performance using less than half the data in long-horizon and precise tasks. We also quantify the data yield-to-effort ratio under multi-robot scenarios, demonstrating GCENT's potential for scalable and cost-efficient robot policy training in real-world environments.
+ oai:arXiv.org:2505.18793v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Wenhao Wang, Jianheng Song, Chiming Liu, Jiayao Ma, Siyuan Feng, Jingyuan Wang, Yuxin Jiang, Kylin Chen, Sikang Zhan, Yi Wang, Tong Meng, Modi Shi, Xindong He, Guanghui Ren, Yang Yang, Maoqing Yao
+
+
+ Can Large Language Models Infer Causal Relationships from Real-World Text?
+ https://arxiv.org/abs/2505.18931
+ arXiv:2505.18931v3 Announce Type: replace
+Abstract: Understanding and inferring causal relationships from texts is a core aspect of human cognition and is essential for advancing large language models (LLMs) towards artificial general intelligence. Existing work evaluating LLM causal reasoning primarily relies on synthetic or simplified texts with explicitly stated causal relationships. These texts typically feature short passages and few causal relations, failing to reflect the complexities of real-world reasoning. In this paper, we investigate whether LLMs are capable of inferring causal relationships from real-world texts. We develop a benchmark drawn from real-world academic literature, which includes diverse texts with respect to length, complexity (different levels of explicitness, number of causal events and relationships), and domain. To the best of our knowledge, our benchmark is the first-ever real-world dataset for this task. Our experiments on this dataset show that LLMs face significant challenges in inferring causal relationships from real-world text, with the best-performing model achieving an average F$_1$ score of only 0.535. Through systematic analysis across aspects of real-world text (explicitness, number of causal events and relationships, length of text, domain), our benchmark offers targeted insights for further research into advancing LLM causal reasoning. Our code and dataset can be found at https://github.com/Ryan-Saklad/ReCITE .
+ oai:arXiv.org:2505.18931v3
+ cs.AI
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ryan Saklad, Aman Chadha, Oleg Pavlov, Raha Moraffah
+
+
+ Position: Language Models Should be Used to Surface the Unwritten Code of Science and Society
+ https://arxiv.org/abs/2505.18942
+ arXiv:2505.18942v5 Announce Type: replace
+Abstract: This position paper calls on the research community not only to investigate how human biases are inherited by large language models (LLMs) but also to explore how these biases in LLMs can be leveraged to make society's "unwritten code" - such as implicit stereotypes and heuristics - visible and accessible for critique. We introduce a conceptual framework through a case study in science: uncovering hidden rules in peer review - the factors that reviewers care about but rarely state explicitly due to normative scientific expectations. The idea of the framework is to push LLMs to speak out their heuristics through generating self-consistent hypotheses - why one paper appeared stronger in reviewer scoring - among paired papers submitted to 46 academic conferences, while iteratively searching deeper hypotheses from remaining pairs where existing hypotheses cannot explain. We observed that LLMs' normative priors about the internal characteristics of good science extracted from their self-talk, e.g., theoretical rigor, were systematically updated toward posteriors that emphasize storytelling about external connections, such as how the work is positioned and connected within and across literatures. Human reviewers tend to explicitly reward aspects that moderately align with LLMs' normative priors (correlation = 0.49) but avoid articulating contextualization and storytelling posteriors in their review comments (correlation = -0.14), despite giving implicit reward to them with positive scores. These patterns are robust across different models and out-of-sample judgments. We discuss the broad applicability of our proposed framework, leveraging LLMs as diagnostic tools to amplify and surface the tacit codes underlying human society, enabling public discussion of revealed values and more precisely targeted responsible AI.
+ oai:arXiv.org:2505.18942v5
+ cs.CY
+ cs.CL
+ cs.DL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Honglin Bao, Siyang Wu, Jiwoong Choi, Yingrong Mao, James A. Evans
+
+
+ Position: Foundation Models for Tabular Data within Systemic Contexts Need Grounding
+ https://arxiv.org/abs/2505.19825
+ arXiv:2505.19825v2 Announce Type: replace
+Abstract: This position paper argues that foundation models for tabular data face inherent limitations when isolated from operational context - the procedural logic, declarative rules, and domain knowledge that define how data is created and governed. Current approaches focus on single-table generalization or schema-level relationships, fundamentally missing the operational knowledge that gives data meaning. We introduce Semantically Linked Tables (SLT) and Foundation Models for SLT (FMSLT) as a new model class that grounds tabular data in its operational context. We propose dual-phase training: pre-training on open-source code-data pairs and synthetic systems to learn business logic mechanics, followed by zero-shot inference on proprietary data. We introduce the ``Operational Turing Test'' benchmark and argue that operational grounding is essential for autonomous agents in complex data environments.
+ oai:arXiv.org:2505.19825v2
+ cs.LG
+ cs.AI
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Tassilo Klein, Johannes Hoffart
+
+
+ StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs
+ https://arxiv.org/abs/2505.20139
+ arXiv:2505.20139v2 Announce Type: replace
+Abstract: As Large Language Models (LLMs) become integral to software development workflows, their ability to generate structured outputs has become critically important. We introduce StructEval, a comprehensive benchmark for evaluating LLMs' capabilities in producing both non-renderable (JSON, YAML, CSV) and renderable (HTML, React, SVG) structured formats. Unlike prior benchmarks, StructEval systematically evaluates structural fidelity across diverse formats through two paradigms: 1) generation tasks, producing structured output from natural language prompts, and 2) conversion tasks, translating between structured formats. Our benchmark encompasses 18 formats and 44 types of task, with novel metrics for format adherence and structural correctness. Results reveal significant performance gaps: even state-of-the-art models like o1-mini achieve an average score of only 75.58, with open-source alternatives lagging approximately 10 points behind. We find generation tasks more challenging than conversion tasks, and producing correct visual content more difficult than generating text-only structures.
+ oai:arXiv.org:2505.20139v2
+ cs.SE
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jialin Yang, Dongfu Jiang, Lipeng He, Sherman Siu, Yuxuan Zhang, Disen Liao, Zhuofeng Li, Huaye Zeng, Yiming Jia, Haozhe Wang, Benjamin Schneider, Chi Ruan, Wentao Ma, Zhiheng Lyu, Yifei Wang, Yi Lu, Quy Duc Do, Ziyan Jiang, Ping Nie, Wenhu Chen
+
+
+ Improving the OOD Performance of Closed-Source LLMs on NLI Through Strategic Data Selection
+ https://arxiv.org/abs/2505.20209
+ arXiv:2505.20209v2 Announce Type: replace
+Abstract: We investigate the robustness of fine-tuned Large Language Models (LLMs) for the task of Natural Language Inference (NLI), finding that the in-distribution gains from fine-tuning correspond to a large drop in out-of-distribution (OOD) performance. Despite the widespread use of closed-source LLMs, there are no robustness mitigation methods that work under their API fine-tuning constraints. Existing methods to improve robustness typically require changing the fine-tuning process or large-scale data augmentation, methods that are infeasible or cost prohibitive for closed-source models. To address this, we propose strategically selecting the NLI fine-tuning data, prioritising more complex examples or replacing existing training examples with LLM-generated data. Prioritising more complex training examples improves performance on challenging OOD NLI datasets, while training with synthetic data leads to substantial improvements on easier OOD datasets. We find that synthetic examples are often too simple, and by prompting LLMs to create more complex synthetic data we can improve performance on both easy and challenging OOD datasets. Finally, we show that recent autoregressive LLMs are substantially more robust to distributional shifts compared to encoder models, and should be a preferred baseline for future research.
+ oai:arXiv.org:2505.20209v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Joe Stacey, Lisa Alazraki, Aran Ubhi, Beyza Ermis, Aaron Mueller, Marek Rei
+
+
+ Towards the Automated Extraction and Refactoring of NoSQL Schemas from Application Code
+ https://arxiv.org/abs/2505.20230
+ arXiv:2505.20230v3 Announce Type: replace
+Abstract: In this paper, we present a static code analysis strategy to extract logical schemas from NoSQL applications. Our solution is based on a model-driven reverse engineering process composed of a chain of platform-independent model transformations. The extracted schema conforms to the U-Schema unified metamodel, which can represent both NoSQL and relational schemas. To support this process, we define a metamodel capable of representing the core elements of object-oriented languages. Application code is first injected into a code model, from which a control flow model is derived. This, in turn, enables the generation of a model representing both data access operations and the structure of stored data. From these models, the U-Schema logical schema is inferred. Additionally, the extracted information can be used to identify refactoring opportunities. We illustrate this capability through the detection of join-like query patterns and the automated application of field duplication strategies to eliminate expensive joins. All stages of the process are described in detail, and the approach is validated through a round-trip experiment in which an application using a MongoDB store is automatically generated from a predefined schema. The inferred schema is then compared to the original to assess the accuracy of the extraction process.
+ oai:arXiv.org:2505.20230v3
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Carlos J. Fernandez-Candel, Anthony Cleve, Jesus J. Garcia-Molina
+
+
+ SpecExtend: A Drop-in Enhancement for Speculative Decoding of Long Sequences
+ https://arxiv.org/abs/2505.20776
+ arXiv:2505.20776v4 Announce Type: replace
+Abstract: Speculative decoding is a widely used technique for accelerating inference in large language models (LLMs), but its performance degrades as input length grows, with significant drops even at moderate lengths. Yet, this early degradation has remained largely underexplored. We introduce SpecExtend, a drop-in enhancement that improves speculative decoding on long sequences without additional training. SpecExtend integrates efficient attention mechanisms such as FlashAttention and Hybrid Tree Attention to accelerate prefill and verification steps. To improve both draft accuracy and speed on long inputs without retraining, we propose Cross-model Retrieval, a novel KV cache eviction strategy that leverages the target model's attention scores to dynamically select relevant context for the smaller draft model. Extensive evaluations show that SpecExtend accelerates speculative decoding by up to 2.84x on 16K-token long document summarization and up to 3.86x on long-form reasoning, while preserving the short-input performance of state-of-the-art frameworks. Our code is available at https://github.com/jycha98/SpecExtend .
+ oai:arXiv.org:2505.20776v4
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jungyoub Cha, Hyunjong Kim, Sungzoon Cho
+
+
+ NeuralOM: Neural Ocean Model for Subseasonal-to-Seasonal Simulation
+ https://arxiv.org/abs/2505.21020
+ arXiv:2505.21020v5 Announce Type: replace
+Abstract: Long-term, high-fidelity simulation of slow-changing physical systems, such as the ocean and climate, presents a fundamental challenge in scientific computing. Traditional autoregressive machine learning models often fail in these tasks as minor errors accumulate and lead to rapid forecast degradation. To address this problem, we propose NeuralOM, a general neural operator framework designed for simulating complex, slow-changing dynamics. NeuralOM's core consists of two key innovations: (1) a Progressive Residual Correction Framework that decomposes the forecasting task into a series of fine-grained refinement steps, effectively suppressing long-term error accumulation; and (2) a Physics-Guided Graph Network whose built-in adaptive messaging mechanism explicitly models multi-scale physical interactions, such as gradient-driven flows and multiplicative couplings, thereby enhancing physical consistency while maintaining computational efficiency. We validate NeuralOM on the challenging task of global Subseasonal-to-Seasonal (S2S) ocean simulation. Extensive experiments demonstrate that NeuralOM not only surpasses state-of-the-art models in forecast accuracy and long-term stability, but also excels in simulating extreme events. For instance, at a 60-day lead time, NeuralOM achieves a 13.3% lower RMSE compared to the best-performing baseline, offering a stable, efficient, and physically-aware paradigm for data-driven scientific computing. Code link: https://github.com/YuanGao-YG/NeuralOM.
+ oai:arXiv.org:2505.21020v5
+ cs.LG
+ physics.ao-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yuan Gao, Hao Wu, Fan Xu, Yanfei Xiang, Ruijian Gou, Ruiqi Shu, Qingsong Wen, Xian Wu, Kun Wang, Xiaomeng Huang
+
+
+ BLUCK: A Benchmark Dataset for Bengali Linguistic Understanding and Cultural Knowledge
+ https://arxiv.org/abs/2505.21092
+ arXiv:2505.21092v2 Announce Type: replace
+Abstract: In this work, we introduce BLUCK, a new dataset designed to measure the performance of Large Language Models (LLMs) in Bengali linguistic understanding and cultural knowledge. Our dataset comprises 2366 multiple-choice questions (MCQs) carefully curated from compiled collections of several college and job level examinations and spans 23 categories covering knowledge on Bangladesh's culture and history and Bengali linguistics. We benchmarked BLUCK using 6 proprietary and 3 open-source LLMs - including GPT-4o, Claude-3.5-Sonnet, Gemini-1.5-Pro, Llama-3.3-70B-Instruct, and DeepSeekV3. Our results show that while these models perform reasonably well overall, they nevertheless struggle in some areas of Bengali phonetics. Although current LLMs' performance on Bengali cultural and linguistic contexts is still not comparable to that of mainstream languages like English, our results indicate Bengali's status as a mid-resource language. Importantly, BLUCK is also the first MCQ-based evaluation benchmark that is centered around native Bengali culture, history, and linguistics.
+ oai:arXiv.org:2505.21092v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Daeen Kabir, Minhajur Rahman Chowdhury Mahim, Sheikh Shafayat, Adnan Sadik, Arian Ahmed, Eunsu Kim, Alice Oh
+
+
+ Universal Harmful Information Synthesis via Model Crowdsourcing
+ https://arxiv.org/abs/2505.21184
+ arXiv:2505.21184v3 Announce Type: replace
+Abstract: To construct responsible and secure AI applications, harmful information data is widely utilized for adversarial testing and the development of safeguards. Existing studies mainly leverage Large Language Models (LLMs) to synthesize data to obtain high-quality task datasets at scale, thereby avoiding costly human annotation. However, limited by the safety alignment mechanisms of LLMs, the synthesis of harmful data still faces challenges in generation reliability and content diversity. In this study, we propose a novel harmful information synthesis framework, SwarmLaunder, which applies the model crowdsourcing strategy to generate diverse harmful data while maintaining a high success rate. Specifically, we generate abundant benign data as base templates in a counterfactual manner. Subsequently, we decompose each base template into multiple semantic units and perform unit-by-unit toxification and final refinement through dynamic model switching, thus ensuring the success of synthesis. Experimental results demonstrate that SwarmLaunder achieves state-of-the-art performance in synthesizing different categories of harmful data with high scalability and diversity.
+ oai:arXiv.org:2505.21184v3
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yu Yan, Sheng Sun, Zhifei Zheng, Ziji Hao, Teli Liu, Min Liu
+
+
+ Hypothesis Generation via LLM-Automated Language Bias for ILP
+ https://arxiv.org/abs/2505.21486
+ arXiv:2505.21486v2 Announce Type: replace
+Abstract: Inductive Logic Programming (ILP) is a principled approach for generalizing regularities from data and constructing hypotheses as interpretable logic programs. However, a key limitation is its reliance on expert-crafted language bias - the predicate inventory, types, and mode declarations that delimit the search space. We propose hypothesis generation via LLM-automated language bias: multi-agent LLMs design the bias from raw text and translate descriptions into typed facts, and a robust ILP solver induces rules under a global consistency objective. This approach reduces traditional ILP's reliance on predefined symbolic structures and the noise sensitivity of LLM-only pipelines that directly generate hypotheses as text or code. Extensive experiments in diverse, challenging scenarios validate superior performance, providing a practical, explainable, and verifiable route to hypothesis generation.
+ oai:arXiv.org:2505.21486v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yang Yang, Jiemin Wu, Yutao Yue
+
+
+ Federated Unsupervised Semantic Segmentation
+ https://arxiv.org/abs/2505.23292
+ arXiv:2505.23292v2 Announce Type: replace
+Abstract: This work explores the application of Federated Learning (FL) to Unsupervised Semantic image Segmentation (USS). Recent USS methods extract pixel-level features using frozen visual foundation models and refine them through self-supervised objectives that encourage semantic grouping. These features are then grouped into semantic clusters to produce segmentation masks. Extending these ideas to federated settings requires feature representation and cluster centroid alignment across distributed clients, an inherently difficult task under heterogeneous data distributions in the absence of supervision. To address this, we propose FUSS (Federated Unsupervised image Semantic Segmentation) which is, to our knowledge, the first framework to enable fully decentralized, label-free semantic segmentation training. FUSS introduces novel federation strategies that promote global consistency in feature and prototype space, jointly optimizing local segmentation heads and shared semantic centroids. Experiments on both benchmark and real-world datasets, including binary and multi-class segmentation tasks, show that FUSS consistently outperforms local-only client training as well as extensions of classical FL algorithms under varying client data distributions. To fully support reproducibility, the source code, data partitioning scripts, and implementation details are publicly available at: https://github.com/evanchar/FUSS
+ oai:arXiv.org:2505.23292v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Evangelos Charalampakis, Vasileios Mygdalis, Ioannis Pitas
+
+
+ EVOREFUSE: Evolutionary Prompt Optimization for Evaluation and Mitigation of LLM Over-Refusal to Pseudo-Malicious Instructions
+ https://arxiv.org/abs/2505.23473
+ arXiv:2505.23473v3 Announce Type: replace
+Abstract: Large language models (LLMs) frequently refuse to respond to pseudo-malicious instructions: semantically harmless input queries triggering unnecessary LLM refusals due to conservative safety alignment, significantly impairing user experience. Collecting such instructions is crucial for evaluating and mitigating over-refusals, but existing instruction curation methods, like manual creation or instruction rewriting, either lack scalability or fail to produce sufficiently diverse and effective refusal-inducing prompts. To address these limitations, we introduce EVOREFUSE, a prompt optimization approach that generates diverse pseudo-malicious instructions consistently eliciting confident refusals across LLMs. EVOREFUSE employs an evolutionary algorithm exploring the instruction space in more diverse directions than existing methods via mutation strategies and recombination, and iteratively evolves seed instructions to maximize evidence lower bound on LLM refusal probability. Using EVOREFUSE, we create two novel datasets: EVOREFUSE-TEST, a benchmark of 582 pseudo-malicious instructions that outperforms the next-best benchmark with 85.34% higher average refusal triggering rate across 9 LLMs without a safety-prior system prompt, 34.86% greater lexical diversity, and 40.03% improved LLM response confidence scores; and EVOREFUSE-ALIGN, which provides 3,000 pseudo-malicious instructions with responses for supervised and preference-based alignment training. With supervised fine-tuning on EVOREFUSE-ALIGN, LLAMA3.1-8B-INSTRUCT achieves up to 29.85% fewer over-refusals than models trained on the second-best alignment dataset, without compromising safety. Our analysis with EVOREFUSE-TEST reveals models trigger over-refusals by overly focusing on sensitive keywords while ignoring broader context. Our code and datasets are available at https://github.com/FishT0ucher/EVOREFUSE.
+ oai:arXiv.org:2505.23473v3
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaorui Wu, Fei Li, Xiaofeng Mao, Xin Zhang, Li Zheng, Yuxiang Peng, Chong Teng, Donghong Ji, Zhuang Li
+
+
+ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs
+ https://arxiv.org/abs/2505.23816
+ arXiv:2505.23816v2 Announce Type: replace
+Abstract: Despite advances in large language models (LLMs) on reasoning and instruction-following tasks, it is unclear whether they can reliably produce outputs aligned with a variety of user goals, a concept called steerability. Two gaps in current LLM evaluation impede steerability evaluation: (1) many benchmarks are built with past LLM chats and Internet-scraped text, which may skew towards common requests, and (2) scalar measures of performance common in prior work could conceal behavioral shifts in LLM outputs in open-ended generation. Thus, we introduce a framework based on a multi-dimensional goal-space that models user goals and LLM outputs as vectors with dimensions corresponding to text attributes (e.g., reading difficulty). Applied to a text-rewriting task, we find that current LLMs induce unintended changes or side effects to text attributes, impeding steerability. Interventions to improve steerability, such as prompt engineering, best-of-N sampling, and reinforcement learning fine-tuning, have varying effectiveness but side effects remain problematic. Our findings suggest that even strong LLMs struggle with steerability, and existing alignment strategies may be insufficient. We open-source our steerability evaluation framework at https://github.com/MLD3/steerability.
+ oai:arXiv.org:2505.23816v2
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Trenton Chang, Tobias Schnabel, Adith Swaminathan, Jenna Wiens
+
+
+ Cache Your Prompt When It's Green: Carbon-Aware Caching for Large Language Model Serving
+ https://arxiv.org/abs/2505.23970
+ arXiv:2505.23970v2 Announce Type: replace
+Abstract: As large language models (LLMs) become widely used, their environmental impact, especially carbon emission, has attracted more attention. Prior studies focus on compute-related carbon emissions. In this paper, we find that storage is another key contributor. LLM caching, which saves and reuses KV caches for repeated context, reduces operational carbon by avoiding redundant computation. However, this benefit comes at the cost of embodied carbon from high-capacity, high-speed SSDs. As LLMs scale, the embodied carbon of storage grows significantly. To address this tradeoff, we present GreenCache, a carbon-aware cache management framework that dynamically derives resource allocation plans for LLM serving. GreenCache analyzes the correlation between carbon emission and SLO satisfaction, reconfiguring the resource over time to keep the balance between SLO and carbon emission under dynamic workloads. Evaluations from real traces demonstrate that GreenCache achieves an average carbon reduction of 15.1 % when serving Llama-3 70B in the FR grid, with reductions reaching up to 25.3 %, while staying within latency constraints for > 90 % of requests.
+ oai:arXiv.org:2505.23970v2
+ cs.DC
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuyang Tian, Desen Sun, Yi Ding, Sihang Liu
+
+
+ How Students (Really) Use ChatGPT: Uncovering Experiences Among Undergraduate Students
+ https://arxiv.org/abs/2505.24126
+ arXiv:2505.24126v4 Announce Type: replace
+Abstract: This study investigates how undergraduate students engage with ChatGPT in self-directed learning contexts. Analyzing naturalistic interaction logs, we identify five dominant use categories of ChatGPT: information seeking, content generation, language refinement, metacognitive engagement, and conversational repair. Behavioral modeling reveals that structured, goal-driven tasks like coding, multiple-choice solving, and job application writing are strong predictors of continued use. Drawing on Self-Directed Learning (SDL) and the Uses and Gratifications Theory (UGT), we show how students actively manage ChatGPT's affordances and limitations through prompt adaptation, follow-ups, and emotional regulation. Rather than disengaging after breakdowns, students often persist through clarification and repair, treating the assistant as both tool and learning partner. We also offer design and policy recommendations to support transparent, responsive, and pedagogically grounded integration of generative AI in higher education.
+ oai:arXiv.org:2505.24126v4
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Tawfiq Ammari, Meilun Chen, S M Mehedi Zaman, Kiran Garimella
+
+
+ Rydberg Atomic Receivers for Multi-Band Communications and Sensing
+ https://arxiv.org/abs/2505.24168
+ arXiv:2505.24168v3 Announce Type: replace
+Abstract: Harnessing multi-level electron transitions, Rydberg Atomic REceivers (RAREs) can detect wireless signals across a wide range of frequency bands, from Megahertz to Terahertz. This capability enables multi-band wireless communications and sensing (CommunSense). Existing research on multi-band RAREs primarily focuses on experimental demonstrations, lacking a tractable model to mathematically characterize their mechanisms. This issue leaves the multi-band RARE as a black box and poses challenges in its practical applications. To fill this gap, this paper investigates the underlying mechanism of multi-band RAREs and explores their optimal performance. For the first time, an analytical transfer function with a closed-form expression for multi-band RAREs is derived by solving the quantum response of Rydberg atoms. It shows that a multi-band RARE simultaneously serves as a multi-band atomic mixer for down-converting multi-band signals and a multi-band atomic amplifier that reflects its sensitivity to each band. Further analysis of the atomic amplifier unveils that the intrinsic gain at each frequency band can be decoupled into a global gain term and a Rabi attention term. The former determines the overall sensitivity of a RARE to all frequency bands of wireless signals. The latter influences the allocation of the overall sensitivity to each frequency band, representing a unique attention mechanism of multi-band RAREs. The optimal design of the global gain is provided to maximize the overall sensitivity of multi-band RAREs. Subsequently, the optimal Rabi attentions are also derived to maximize the practical multi-band CommunSense performance. An experiment platform is built to validate the effectiveness of the derived transfer function, and numerical results confirm the superiority of multi-band RAREs.
+ oai:arXiv.org:2505.24168v3
+ cs.IT
+ eess.SP
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Mingyao Cui, Qunsong Zeng, Minze Chen, Zhanwei Wang, Tianqi Mao, Dezhi Zheng, Kaibin Huang
+
+
+ WikiGap: Promoting Epistemic Equity by Surfacing Knowledge Gaps Between English Wikipedia and other Language Editions
+ https://arxiv.org/abs/2505.24195
+ arXiv:2505.24195v4 Announce Type: replace
+Abstract: With more than 11 times as many pageviews as the next largest edition, English Wikipedia dominates global knowledge access relative to other language editions. Readers are prone to assuming that English Wikipedia is a superset of all language editions, leading many to prefer it even when their primary language is not English. Other language editions, however, comprise complementary facts rooted in their respective cultures and media environments, which are marginalized in English Wikipedia. While Wikipedia's user interface enables switching between language editions through its Interlanguage Link (ILL) system, it does not reveal to readers that other language editions contain valuable, complementary information. We present WikiGap, a system that surfaces complementary facts sourced from other Wikipedias within the English Wikipedia interface. Specifically, by combining a recent multilingual information-gap discovery method with a user-centered design, WikiGap enables access to complementary information from French, Russian, and Chinese Wikipedia. In a mixed-methods study (n=21), WikiGap significantly improved fact-finding accuracy, reduced task time, and received a 32-point higher usability score relative to Wikipedia's current ILL-based navigation system. Participants reported increased awareness of the availability of complementary information in non-English editions and reconsidered the completeness of English Wikipedia. WikiGap thus paves the way for improved epistemic equity across language editions.
+ oai:arXiv.org:2505.24195v4
+ cs.HC
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zining Wang, Yuxuan Zhang, Dongwook Yoon, Nicholas Vincent, Farhan Samir, Vered Shwartz
+
+
+ TalkingHeadBench: A Multi-Modal Benchmark & Analysis of Talking-Head DeepFake Detection
+ https://arxiv.org/abs/2505.24866
+ arXiv:2505.24866v3 Announce Type: replace
+Abstract: The rapid advancement of talking-head deepfake generation fueled by advanced generative models has elevated the realism of synthetic videos to a level that poses substantial risks in domains such as media, politics, and finance. However, current benchmarks for deepfake talking-head detection fail to reflect this progress, relying on outdated generators and offering limited insight into model robustness and generalization. We introduce TalkingHeadBench, a comprehensive multi-model multi-generator benchmark and curated dataset designed to evaluate the performance of state-of-the-art detectors on the most advanced generators. Our dataset includes deepfakes synthesized by leading academic and commercial models and features carefully constructed protocols to assess generalization under distribution shifts in identity and generator characteristics. We benchmark a diverse set of existing detection methods, including CNNs, vision transformers, and temporal models, and analyze their robustness and generalization capabilities. In addition, we provide error analysis using Grad-CAM visualizations to expose common failure modes and detector biases. TalkingHeadBench is hosted on https://huggingface.co/datasets/luchaoqi/TalkingHeadBench with open access to all data splits and protocols. Our benchmark aims to accelerate research towards more robust and generalizable detection models in the face of rapidly evolving generative techniques.
+ oai:arXiv.org:2505.24866v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xinqi Xiong, Prakrut Patel, Qingyuan Fan, Amisha Wadhwa, Sarathy Selvam, Xiao Guo, Luchao Qi, Xiaoming Liu, Roni Sengupta
+
+
+ SiLVR: A Simple Language-based Video Reasoning Framework
+ https://arxiv.org/abs/2505.24869
+ arXiv:2505.24869v2 Announce Type: replace
+Abstract: Recent advances in test-time optimization have led to remarkable reasoning capabilities in Large Language Models (LLMs), enabling them to solve highly complex problems in math and coding. However, the reasoning capabilities of multimodal LLMs (MLLMs) still significantly lag, especially for complex video-language tasks. To address this issue, we present SILVR, a Simple Language-based Video Reasoning framework that decomposes complex video understanding into two stages. In the first stage, SILVR transforms raw video into language-based representations using multisensory inputs, such as short clip captions and audio/speech subtitles. In the second stage, language descriptions are fed into a powerful reasoning LLM to solve complex video-language understanding tasks. To handle long-context multisensory inputs, we use an Adaptive Context Reduction scheme, which dynamically determines the temporal granularity with which to sample the tokens. Our simple, modular, and training-free video reasoning framework achieves the best-reported results on Video-MME (long), Video-MMMU (comprehension), Video-MMLU, CGBench, and EgoLife. Furthermore, our empirical study focused on video reasoning capabilities shows that, despite not being explicitly trained on video, strong reasoning LLMs can effectively aggregate multisensory input information from video, speech, and audio for complex temporal, causal, long-context, and knowledge acquisition reasoning tasks in video. More details can be found at https://sites.google.com/cs.unc.edu/silvr.
+ oai:arXiv.org:2505.24869v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ce Zhang, Yan-Bo Lin, Ziyang Wang, Mohit Bansal, Gedas Bertasius
+
+
+ XMAD-Bench: Cross-Domain Multilingual Audio Deepfake Benchmark
+ https://arxiv.org/abs/2506.00462
+ arXiv:2506.00462v2 Announce Type: replace
+Abstract: Recent advances in audio generation have led to an increasing number of deepfakes, making the general public more vulnerable to financial scams, identity theft, and misinformation. Audio deepfake detectors promise to alleviate this issue, with many recent studies reporting accuracy rates close to 99%. However, these methods are typically tested in an in-domain setup, where the deepfake samples from the training and test sets are produced by the same generative models. To address this, we introduce XMAD-Bench, a large-scale cross-domain multilingual audio deepfake benchmark comprising 668.8 hours of real and deepfake speech. In our novel dataset, the speakers, the generative methods, and the real audio sources are distinct across training and test splits. This leads to a challenging cross-domain evaluation setup, where audio deepfake detectors can be tested "in the wild". Our in-domain and cross-domain experiments indicate a clear disparity between the in-domain performance of deepfake detectors, which is usually as high as 100%, and the cross-domain performance of the same models, which is sometimes similar to random chance. Our benchmark highlights the need for the development of robust audio deepfake detectors, which maintain their generalization capacity across different languages, speakers, generative methods, and data sources. Our benchmark is publicly released at https://github.com/ristea/xmad-bench/.
+ oai:arXiv.org:2506.00462v2
+ cs.SD
+ cs.AI
+ cs.CL
+ cs.LG
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Ioan-Paul Ciobanu, Andrei-Iulian Hiji, Nicolae-Catalin Ristea, Paul Irofti, Cristian Rusu, Radu Tudor Ionescu
+
+
+ Learning with pyCub: A Simulation and Exercise Framework for Humanoid Robotics
+ https://arxiv.org/abs/2506.01756
+ arXiv:2506.01756v2 Announce Type: replace
+Abstract: We present pyCub, an open-source physics-based simulation of the humanoid robot iCub, along with exercises to teach students the basics of humanoid robotics. Compared to existing iCub simulators (iCub SIM, iCub Gazebo), which require C++ code and YARP as middleware, pyCub works without YARP and with Python code. The complete robot with all articulations has been simulated, with two cameras in the eyes and the unique sensitive skin of the iCub comprising 4000 receptors on its body surface. The exercises range from basic control of the robot in velocity, joint, and Cartesian space to more complex tasks like gazing, grasping, or reactive control. The whole framework is written and controlled with Python, thus allowing it to be used even by people with little or no programming experience. The exercises can be scaled to different difficulty levels. We tested the framework in two runs of a course on humanoid robotics. The simulation, exercises, documentation, Docker images, and example videos are publicly available at https://rustlluk.github.io/pyCub.
+ oai:arXiv.org:2506.01756v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Lukas Rustler, Matej Hoffmann
+
+
+ Two-Stage Bidirectional Inverter Equivalent Circuit Model for Distribution Grid Steady-State Analysis and Optimization
+ https://arxiv.org/abs/2506.03430
+ arXiv:2506.03430v3 Announce Type: replace
+Abstract: This paper presents a \textit{physics-based} steady-state equivalent circuit model of a two-stage bidirectional inverter. These inverters connect distributed energy resources (DERs), such as photovoltaic (PV) and battery systems, to distribution grids. Existing inverter models have technical gaps on three fronts: i) inadequate modeling of inverter losses; ii) use of mathematical abstractions for bidirectional flow of power; and iii) inability to integrate different control modes into nonlinear solvers without loss of generality. We propose a physics-first model that explicitly captures losses in passive circuit components based on circuit-level principles. We enable bidirectional power flow without binary or complementarity constraints by formulating loss terms as smooth, sign-aware expressions of current. We introduce and parameterize controlled current sources with twice-differentiable continuous functions to enable inverter control modes without loss of generality. We integrate DERs with the proposed inverter model at the load buses of distribution networks to perform power flow and optimization studies on real-world distribution networks with over 20,000 nodes. We demonstrate that the proposed model is more accurate, integrates seamlessly with various control modes without loss of generality, and scales robustly to large optimization problems.
+ Index Terms: bidirectional inverter model, circuit-based modeling, DERs, inverter efficiency, power control, steady-state analysis.
+ oai:arXiv.org:2506.03430v3
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Emmanuel O. Badmus, Amritanshu Pandey
+
+
+ MoA: Heterogeneous Mixture of Adapters for Parameter-Efficient Fine-Tuning of Large Language Models
+ https://arxiv.org/abs/2506.05928
+ arXiv:2506.05928v2 Announce Type: replace
+Abstract: Recent studies integrate Low-Rank Adaptation (LoRA) and Mixture-of-Experts (MoE) to further enhance the performance of parameter-efficient fine-tuning (PEFT) methods in Large Language Model (LLM) applications. Existing methods employ \emph{homogeneous} MoE-LoRA architectures composed of LoRA experts with either similar or identical structures and capacities. However, these approaches often suffer from representation collapse and expert load imbalance, which negatively impact the potential of LLMs. To address these challenges, we propose a \emph{heterogeneous} \textbf{Mixture-of-Adapters (MoA)} approach. This method dynamically integrates PEFT adapter experts with diverse structures, leveraging their complementary representational capabilities to foster expert specialization, thereby enhancing the effective transfer of pre-trained knowledge to downstream tasks. MoA supports two variants: \textbf{(i)} \textit{Soft MoA} achieves fine-grained integration by performing a weighted fusion of all expert outputs; \textbf{(ii)} \textit{Sparse MoA} activates adapter experts sparsely based on their contribution, achieving this with negligible performance degradation. Experimental results demonstrate that heterogeneous MoA outperforms homogeneous MoE-LoRA methods in both performance and parameter efficiency. Our project is available at https://github.com/DCDmllm/MoA.
+ oai:arXiv.org:2506.05928v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Jie Cao, Tianwei Lin, Bo Yuan, Rolan Yan, Hongyang He, Wenqiao Zhang, Juncheng Li, Dongping Zhang, Siliang Tang, Yueting Zhuang
+
+
+ CulturalFrames: Assessing Cultural Expectation Alignment in Text-to-Image Models and Evaluation Metrics
+ https://arxiv.org/abs/2506.08835
+ arXiv:2506.08835v3 Announce Type: replace
+Abstract: The increasing ubiquity of text-to-image (T2I) models as tools for visual content generation raises concerns about their ability to accurately represent diverse cultural contexts -- where missed cues can stereotype communities and undermine usability. In this work, we present the first study to systematically quantify the alignment of T2I models and evaluation metrics with respect to both explicit (stated) as well as implicit (unstated, implied by the prompt's cultural context) cultural expectations. To this end, we introduce CulturalFrames, a novel benchmark designed for rigorous human evaluation of cultural representation in visual generations. Spanning 10 countries and 5 socio-cultural domains, CulturalFrames comprises 983 prompts, 3637 corresponding images generated by 4 state-of-the-art T2I models, and over 10k detailed human annotations. We find that across models and countries, cultural expectations are missed an average of 44% of the time. Among these failures, explicit expectations are missed at a surprisingly high average rate of 68%, while implicit expectation failures are also significant, averaging 49%. Furthermore, we show that existing T2I evaluation metrics correlate poorly with human judgments of cultural alignment, irrespective of their internal reasoning. Collectively, our findings expose critical gaps, provide a concrete testbed, and outline actionable directions for developing culturally informed T2I models and metrics that improve global usability.
+ oai:arXiv.org:2506.08835v3
+ cs.CV
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Shravan Nayak, Mehar Bhatia, Xiaofeng Zhang, Verena Rieser, Lisa Anne Hendricks, Sjoerd van Steenkiste, Yash Goyal, Karolina Sta\'nczak, Aishwarya Agrawal
+
+
+ Rejection-Sampled Linear Codes for Lossy Compression and Channel Simulation
+ https://arxiv.org/abs/2506.09239
+ arXiv:2506.09239v2 Announce Type: replace
+Abstract: We show that linear codes combined with rejection sampling can yield a capacity-achieving scheme for simulating additive exchangeable noise channels. Specifically, our scheme achieves an amount of communication within $\log e + 1$ bits from the excess functional information lower bound. Hence, it can be used in lossy source coding to achieve the rate-distortion function. We discuss practical implementations based on BCH codes and polar codes. For the simulation of binary symmetric channels, the BCH-based construction with a blocklength of $n = 63$ attains a rate comparable to the PolarSim with $n = 4096$, while significantly reducing the latency. The polar-based construction asymptotically achieves the channel capacity with polynomial average complexity. Furthermore, using the idea from greedy rejection sampling, we propose an algorithm to construct capacity-achieving schemes based on any linear codes. Experiments reveal that our construction can outperform conventional covering codes for lossy source coding with Hamming distortion for a certain range of distortion levels, and performs well even when the blocklength is small (e.g., $n = 24$).
+ oai:arXiv.org:2506.09239v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianguo Zhao, Cheuk Ting Li
+
+
+ Integrating Symbolic Execution with LLMs for Automated Generation of Program Specifications
+ https://arxiv.org/abs/2506.09550
+ arXiv:2506.09550v4 Announce Type: replace
+Abstract: Automatically generating formal specifications including loop invariants, preconditions, and postconditions for legacy code is critical for program understanding, reuse and verification. However, the inherent complexity of control and data structures in programs makes this task particularly challenging. This paper presents a novel framework that integrates symbolic execution with large language models (LLMs) to automatically synthesize formally verified program specifications. Our method first employs symbolic execution to derive precise strongest postconditions for loop-free code segments. These symbolic execution results, along with automatically generated invariant templates, then guide the LLM to propose and iteratively refine loop invariants until a correct specification is obtained. The template-guided generation process robustly combines symbolic inference with LLM reasoning, significantly reducing hallucinations and syntactic errors by structurally constraining the LLM's output space. Furthermore, our approach can produce strong specifications without relying on externally provided verification goals, enabled by the rich semantic context supplied by symbolic execution, overcoming a key limitation of prior goal-dependent tools. Extensive evaluation shows that our tool SESpec outperforms the existing state-of-the-art tools across numerical and data-structure benchmarks, demonstrating both high precision and broad applicability.
+ oai:arXiv.org:2506.09550v4
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Fanpeng Yang, Xu Ma, Shuling Wang, Xiong Xu, Qinxiang Cao, Naijun Zhan, Xiaofeng Li, Bin Gu
+
+
+ Synthetic Geology: Structural Geology Meets Deep Learning
+ https://arxiv.org/abs/2506.11164
+ arXiv:2506.11164v3 Announce Type: replace
+Abstract: Reconstructing the structural geology and mineral composition of the first few kilometers of the Earth's subsurface from sparse or indirect surface observations remains a long-standing challenge with critical applications in mineral exploration, geohazard assessment, and geotechnical engineering. This inherently ill-posed problem is often addressed by classical geophysical inversion methods, which typically yield a single maximum-likelihood model that fails to capture the full range of plausible geology. The adoption of modern deep learning methods has been limited by the lack of large 3D training datasets. We address this gap with \textit{StructuralGeo}, a geological simulation engine that mimics eons of tectonic, magmatic, and sedimentary processes to generate a virtually limitless supply of realistic synthetic 3D lithological models. Using this dataset, we train both unconditional and conditional generative flow-matching models with a 3D attention U-Net architecture. The resulting foundation model can reconstruct multiple plausible 3D scenarios from surface topography and sparse borehole data, depicting structures such as layers, faults, folds, and dikes. By sampling many reconstructions from the same observations, we introduce a probabilistic framework for estimating the size and extent of subsurface features. While the realism of the output is bounded by the fidelity of the training data to true geology, this combination of simulation and generative AI offers a flexible prior for probabilistic modeling, regional fine-tuning, and use as an AI-based regularizer in traditional geophysical inversion workflows.
+ oai:arXiv.org:2506.11164v3
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Simon Ghyselincks, Valeriia Okhmak, Stefano Zampini, George Turkiyyah, David Keyes, Eldad Haber
+
+
+ GLAP: General contrastive audio-text pretraining across domains and languages
+ https://arxiv.org/abs/2506.11350
+ arXiv:2506.11350v2 Announce Type: replace
+Abstract: Contrastive Language Audio Pretraining (CLAP) is a widely-used method to bridge the gap between audio and text domains. Current CLAP methods enable sound and music retrieval in English, ignoring multilingual spoken content. To address this, we introduce general language audio pretraining (GLAP), which expands CLAP with multilingual and multi-domain abilities. GLAP demonstrates its versatility by achieving competitive performance on standard audio-text retrieval benchmarks like Clotho and AudioCaps, while significantly surpassing existing methods in speech retrieval and classification tasks. Additionally, GLAP achieves strong results on widely used sound-event zero-shot benchmarks, while simultaneously outperforming previous methods on speech content benchmarks. Further keyword spotting evaluations across 50 languages emphasize GLAP's advanced multilingual capabilities. Finally, multilingual sound and music understanding is evaluated across four languages. Checkpoints and Source: https://github.com/xiaomi-research/dasheng-glap.
+ oai:arXiv.org:2506.11350v2
+ cs.SD
+ cs.CL
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Heinrich Dinkel, Zhiyong Yan, Tianzi Wang, Yongqing Wang, Xingwei Sun, Yadong Niu, Jizhong Liu, Gang Li, Junbo Zhang, Jian Luan
+
+
+ The CAISAR Platform: Extending the Reach of Machine Learning Specification and Verification
+ https://arxiv.org/abs/2506.12084
+ arXiv:2506.12084v2 Announce Type: replace
+Abstract: The formal specification and verification of machine learning programs saw remarkable progress in less than a decade, leading to a profusion of tools. However, diversity may lead to fragmentation, resulting in tools that are difficult to compare, except for very specific benchmarks. Furthermore, this progress is heavily geared towards the specification and verification of a certain class of property, that is, local robustness properties. But while provers are becoming more and more efficient at solving local robustness properties, even slightly more complex properties, involving multiple neural networks for example, cannot be expressed in the input languages of the winners of the International Competition of Verification of Neural Networks (VNN-Comp). In this tool paper, we present CAISAR, an open-source platform dedicated to machine learning specification and verification. We present its specification language, suitable for modelling complex properties on neural networks, support vector machines and boosted trees. We show on concrete use-cases how specifications written in this language are automatically translated to queries to state-of-the-art provers, notably by using automated graph editing techniques, making it possible to use their off-the-shelf versions. The artifact to reproduce the paper claims is available at the following DOI: https://doi.org/10.5281/zenodo.15209510
+ oai:arXiv.org:2506.12084v2
+ cs.SE
+ cs.AI
+ cs.CL
+ cs.FL
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Michele Alberti (LSL), Fran\c{c}ois Bobot (LSL), Julien Girard-Satabin (LSL), Alban Grastien (LSL), Aymeric Varasse (LSL), Zakaria Chihani (LSL)
+
+
+ Representing Time-Continuous Behavior of Cyber-Physical Systems in Knowledge Graphs
+ https://arxiv.org/abs/2506.13773
+ arXiv:2506.13773v2 Announce Type: replace
+Abstract: Time-continuous dynamic models are essential for various Cyber-Physical System (CPS) applications. To ensure effective usability in different lifecycle phases, such behavioral information in the form of differential equations must be contextualized and integrated with further CPS information. While knowledge graphs provide a formal description and structuring mechanism for this task, there is a lack of reusable ontological artifacts and methods to reduce manual instantiation effort. Hence, this contribution introduces two artifacts: Firstly, a modular semantic model based on standards is introduced to represent differential equations directly within knowledge graphs and to enrich them semantically. Secondly, a method for efficient knowledge graph generation is presented. A validation of these artifacts was conducted in the domain of aviation maintenance. Results show that differential equations of a complex Electro-Hydraulic Servoactuator can be formally represented in a knowledge graph and be contextualized with other lifecycle data, proving the artifacts' practical applicability.
+ oai:arXiv.org:2506.13773v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/ETFA65518.2025.11205677
+ Milapji Singh Gill, Tom Jeleniewski, Felix Gehlhoff, Alexander Fay
+
+
+ Mxplainer: Explain and Learn Insights by Imitating Mahjong Agents
+ https://arxiv.org/abs/2506.14246
+ arXiv:2506.14246v2 Announce Type: replace
+Abstract: People need to internalize the skills of AI agents to improve their own capabilities. Our paper focuses on Mahjong, a multiplayer game involving imperfect information and requiring effective long-term decision-making amidst randomness and hidden information. Through the efforts of AI researchers, several impressive Mahjong AI agents have already achieved performance levels comparable to those of professional human players; however, these agents are often treated as black boxes from which few insights can be gleaned. This paper introduces Mxplainer, a parameterized search algorithm that can be converted into an equivalent neural network to learn the parameters of black-box agents. Experiments on both human and AI agents demonstrate that Mxplainer achieves a top-three action prediction accuracy of over 92% and 90%, respectively, while providing faithful and interpretable approximations that outperform decision-tree methods (34.8% top-three accuracy). This enables Mxplainer to deliver both strategy-level insights into agent characteristics and actionable, step-by-step explanations for individual decisions.
+ oai:arXiv.org:2506.14246v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ 10.3390/a18120738
+ Algorithms 2025, 18(12), 738
+ Lingfeng Li, Yunlong Lu, Yongyi Wang, Qifan Zheng, Wenxin Li
+
+
+ Scaling Laws for Geospatial Foundation Models: A case study on PhilEO Bench
+ https://arxiv.org/abs/2506.14765
+ arXiv:2506.14765v5 Announce Type: replace
+Abstract: Foundation Models (FMs) have achieved state-of-the-art performance across domains by leveraging large-scale pretraining. In Earth Observation (EO), the availability of petabyte-scale satellite archives has recently enabled the development of GeoSpatial Foundation Models (GFMs). Yet, fundamental questions remain regarding how dataset size, model architecture, and model size interact to determine downstream performance. In this work, we systematically explore this design space by pretraining and fine-tuning models on three dataset scales: PhilEO Globe (0.5TB), FastTOM (2TB, introduced here), and MajorTOM (23TB). We evaluate three architectural families, Geo-Aware U-Net (CNN), ViT-UPerNet (Transformer), and Mamba (State-Space Model), across model sizes ranging from 44M to 300M parameters. All models are benchmarked on the PhilEO Bench, which covers road density and building density regression as well as land cover segmentation, and are compared against existing GFMs such as TerraMind and Prithvi-EO-2.0. Our results show that CNN-based models remain highly competitive in low-shot settings, with a 200M-parameter Geo-Aware U-Net outperforming larger architectures on regression tasks. However, when scaling to multi-terabyte datasets, ViT-UPerNet achieves the best performance, particularly for semantic segmentation on MajorTOM (23TB). Finally, we provide the first extensive evaluation of Mamba models in EO, highlighting their potential efficiency advantages, though further large-scale pretraining is required to fully match CNNs and ViTs. All code, pretrained models, and the FastTOM dataset are released publicly, enabling reproducibility and further exploration of scaling laws for GFMs.
+ oai:arXiv.org:2506.14765v5
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Nikolaos Dionelis, Riccardo Musto, Jente Bosmans, Simone Sarti, Giancarlo Paoletti, Peter Naylor, Valerio Marsocci, S\'ebastien Lef\`evre, Bertrand Le Saux, Nicolas Long\'ep\'e
+
+
+ Knee-Deep in C-RASP: A Transformer Depth Hierarchy
+ https://arxiv.org/abs/2506.16055
+ arXiv:2506.16055v3 Announce Type: replace
+Abstract: It has been observed that transformers with greater depth (that is, more layers) have more capabilities, but can we establish formally which capabilities are gained? We answer this question with a theoretical proof followed by an empirical study. First, we consider transformers that round to fixed precision except inside attention. We show that this subclass of transformers is expressively equivalent to the programming language C-RASP and this equivalence preserves depth. Second, we prove that deeper C-RASP programs are more expressive than shallower C-RASP programs, implying that deeper transformers are more expressive than shallower transformers (within the subclass mentioned above). The same is also proven for transformers with positional encodings (like RoPE and ALiBi). These results are established by studying a temporal logic with counting operators equivalent to C-RASP. Finally, we provide empirical evidence that our theory predicts the depth required for transformers without positional encodings to length-generalize on a family of sequential dependency tasks.
+ oai:arXiv.org:2506.16055v3
+ cs.CL
+ cs.FL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Andy Yang, Micha\"el Cadilhac, David Chiang
+
+
+ PersonalAI: A Systematic Comparison of Knowledge Graph Storage and Retrieval Approaches for Personalized LLM agents
+ https://arxiv.org/abs/2506.17001
+ arXiv:2506.17001v3 Announce Type: replace
+Abstract: Personalizing language models to effectively incorporate user interaction history remains a central challenge in the development of adaptive AI systems. While large language models (LLMs), combined with Retrieval-Augmented Generation (RAG), have improved factual accuracy, they often lack structured memory and fail to scale in complex, long-term interactions. To address this, we propose a flexible external memory framework based on a knowledge graph, in which the memory model is constructed and updated automatically by the LLM itself. Building upon the AriGraph architecture, we introduce a novel hybrid graph design that supports both standard edges and two types of hyper-edges, enabling rich and dynamic semantic and temporal representations. Our framework also supports diverse retrieval mechanisms, including A*, water-circle traversal, beam search, and hybrid methods, making it adaptable to different datasets and LLM capacities. We evaluate our system on three benchmarks: TriviaQA, HotpotQA, and DiaASQ, and demonstrate that different memory and retrieval configurations yield optimal performance depending on the task. Additionally, we extend the DiaASQ benchmark with temporal annotations and internally contradictory statements, showing that our system remains robust and effective in managing temporal dependencies and context-aware reasoning.
+ oai:arXiv.org:2506.17001v3
+ cs.CL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mikhail Menschikov, Dmitry Evseev, Victoria Dochkina, Ruslan Kostoev, Ilia Perepechkin, Petr Anokhin, Evgeny Burnaev, Nikita Semenov
+
+
+ Challenges and Practices in Quantum Software Testing and Debugging: Insights from Practitioners
+ https://arxiv.org/abs/2506.17306
+ arXiv:2506.17306v2 Announce Type: replace
+Abstract: Quantum software engineering is an emerging discipline with distinct challenges, particularly in testing and debugging. As quantum computing transitions from theory to implementation, developers face issues not present in classical software development, such as probabilistic execution, limited observability, shallow abstractions, and low awareness of quantum-specific tools. To better understand current practices, we surveyed 26 quantum software developers from academia and industry and conducted follow-up interviews focused on testing, debugging, and recurring challenges. All participants reported engaging in testing, with unit testing (88%), regression testing (54%), and acceptance testing (54%) being the most common. However, only 31% reported using quantum-specific testing tools, relying instead on classical and manual methods. Debugging practices were similarly grounded in classical strategies, such as print statements, circuit visualizations, and simulators, which respondents noted do not scale well. The most frequently cited sources of bugs were classical in nature: library updates (81%), developer errors (69%), and compatibility issues (62%)-often worsened by limited abstraction in existing quantum SDKs. These findings highlight the urgent need for better-aligned testing and debugging tools integrated more seamlessly into the workflows of quantum developers. We present these results in detail and offer actionable recommendations grounded in the real-world needs of practitioners.
+ oai:arXiv.org:2506.17306v2
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jake Zappin, Trevor Stalnaker, Oscar Chaparro, Denys Poshyvanyk
+
+
+ May the Feedback Be with You! Unlocking the Power of Feedback-Driven Deep Learning Framework Fuzzing via LLMs
+ https://arxiv.org/abs/2506.17642
+ arXiv:2506.17642v4 Announce Type: replace
+Abstract: Deep Learning (DL) frameworks have served as fundamental components in DL systems over the last decade. However, bugs in DL frameworks could lead to catastrophic consequences in critical scenarios. A simple yet effective way to find bugs in DL frameworks is fuzz testing (Fuzzing). Existing approaches focus on test generation, leaving execution results with high semantic value (e.g., coverage information, bug reports, and exception logs) in the wild, which can serve as multiple types of feedback. To fill this gap, we propose FUEL to effectively utilize the feedback information, which comprises two Large Language Models (LLMs): an analysis LLM and a generation LLM. Specifically, the analysis LLM infers analysis summaries from feedback information, while the generation LLM creates tests guided by these summaries. Furthermore, based on multiple feedback guidance, we design two additional components: (i) a feedback-aware simulated annealing algorithm to select operators for test generation, enriching test diversity, and (ii) a program self-repair strategy to automatically repair invalid tests, enhancing test validity. We evaluate FUEL on the two most popular DL frameworks, and experimental results show that FUEL can improve the line coverage of PyTorch and TensorFlow by 4.48% and 9.14%, respectively, over four state-of-the-art baselines. By the time of submission, FUEL has detected 104 previously unknown bugs for PyTorch and TensorFlow, with 93 confirmed as new bugs and 53 already fixed. Fourteen vulnerabilities have been assigned CVE IDs, among which 7 are rated as high-severity with a CVSS score of "7.5 HIGH". Our artifact is available at https://github.com/NJU-iSE/FUEL
+ oai:arXiv.org:2506.17642v4
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shaoyu Yang, Chunrong Fang, Haifeng Lin, Xiang Chen, Jia Liu, Zhenyu Chen
+
+
+ Dynamic Hybrid Modeling: Incremental Identification and Model Predictive Control
+ https://arxiv.org/abs/2506.18344
+ arXiv:2506.18344v2 Announce Type: replace
+Abstract: Mathematical models are crucial for optimizing and controlling chemical processes, yet they often face significant limitations in terms of computational time, algorithm complexity, and development costs. Hybrid models, which combine mechanistic models with data-driven models (i.e. models derived via the application of machine learning to experimental data), have emerged as a promising solution to these challenges. However, the identification of dynamic hybrid models remains difficult due to the need to integrate data-driven models within mechanistic model structures.
+ We present an incremental identification approach for dynamic hybrid models that decouples the mechanistic and data-driven components to overcome computational and conceptual difficulties. Our methodology comprises four key steps: (1) regularized dynamic parameter estimation to determine optimal time profiles for flux variables, (2) correlation analysis to evaluate relationships between variables, (3) data-driven model identification using advanced machine learning techniques, and (4) hybrid model integration to combine the mechanistic and data-driven components. This approach facilitates early evaluation of model structure suitability, accelerates the development of hybrid models, and allows for independent identification of data-driven components.
+ Three case studies are presented to illustrate the robustness, reliability, and efficiency of our incremental approach in handling complex systems and scenarios with limited data.
+ oai:arXiv.org:2506.18344v2
+ eess.SY
+ cs.LG
+ cs.SY
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1016/j.compchemeng.2025.109413
+ Computers & Chemical Engineering, Volume 204, January 2026, 109413
+ Adrian Caspari, Thomas Bierweiler, Sarah Fadda, Daniel Labisch, Maarten Nauta, Franzisko Wagner, Merle Warmbold, Constantinos C. Pantelides
+
+
+ Efficient Beam Selection for ISAC in Cell-Free Massive MIMO via Digital Twin-Assisted Deep Reinforcement Learning
+ https://arxiv.org/abs/2506.18560
+ arXiv:2506.18560v2 Announce Type: replace
+Abstract: Beamforming enhances signal strength and quality by focusing energy in specific directions. This capability is particularly crucial in cell-free integrated sensing and communication (ISAC) systems, where multiple distributed access points (APs) collaborate to provide both communication and sensing services. In this work, we first derive the distribution of joint target detection probabilities across multiple receiving APs under false alarm rate constraints, and then formulate the beam selection procedure as a Markov decision process (MDP). We establish a deep reinforcement learning (DRL) framework, in which reward shaping and sinusoidal embedding are introduced to facilitate agent learning. To eliminate the high costs and associated risks of real-time agent-environment interactions, we further propose a novel digital twin (DT)-assisted offline DRL approach. Different from traditional online DRL, a conditional generative adversarial network (cGAN)-based DT module, operating as a replica of the real world, is meticulously designed to generate virtual state-action transition pairs and enrich data diversity, enabling offline adjustment of the agent's policy. Additionally, we address the out-of-distribution issue by incorporating an extra penalty term into the loss function design. The convergence of the agent-DT interaction and the upper bound of the Q-error function are theoretically derived. Numerical results demonstrate the remarkable performance of our proposed approach, which significantly reduces online interaction overhead while maintaining effective beam selection across diverse conditions including strict false alarm control, low signal-to-noise ratios, and high target velocities.
+ oai:arXiv.org:2506.18560v2
+ cs.ET
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiexin Zhang, Shu Xu, Chunguo Li, Yongming Huang, Luxi Yang
+
+
+ Automating Traffic Monitoring with SHM Sensor Networks via Vision-Supervised Deep Learning
+ https://arxiv.org/abs/2506.19023
+ arXiv:2506.19023v3 Announce Type: replace
+Abstract: Bridges, as critical components of civil infrastructure, are increasingly affected by deterioration, making reliable traffic monitoring essential for assessing their remaining service life. Among operational loads, traffic load plays a pivotal role, and recent advances in deep learning - particularly in computer vision (CV) - have enabled progress toward continuous, automated monitoring. However, CV-based approaches suffer from limitations, including privacy concerns and sensitivity to lighting conditions, while traditional non-vision-based methods often lack flexibility in deployment and validation. To bridge this gap, we propose a fully automated deep-learning pipeline for continuous traffic monitoring using structural health monitoring (SHM) sensor networks. Our approach integrates CV-assisted high-resolution dataset generation with supervised training and inference, leveraging graph neural networks (GNNs) to capture the spatial structure and interdependence of sensor data. By transferring knowledge from CV outputs to SHM sensors, the proposed framework enables sensor networks to achieve accuracy comparable to that of vision-based systems, with minimal human intervention. Applied to accelerometer and strain gauge data in a real-world case study, the model achieves state-of-the-art performance, with classification accuracies of 99% for light vehicles and 94% for heavy vehicles.
+ oai:arXiv.org:2506.19023v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hanshuo Wu, Xudong Jian, Christos Lataniotis, Cyprien Hoelzl, Eleni Chatzi, Yves Reuland
+
+
+ Physics-Informed Machine Learning Regulated by Finite Element Analysis for Simulation Acceleration of Laser Powder Bed Fusion
+ https://arxiv.org/abs/2506.20537
+ arXiv:2506.20537v2 Announce Type: replace
+Abstract: Efficient simulation of Laser Powder Bed Fusion (LPBF) is crucial for process prediction due to the lasting issue of high computation cost using traditional numerical methods such as finite element analysis (FEA). This study presents an efficient modeling framework termed FEA-Regulated Physics-Informed Neural Network (FEA-PINN) to accelerate the thermal field prediction in a LPBF process while maintaining the FEA accuracy. A novel dynamic material updating strategy is developed to capture the dynamic phase change of powder-liquid-solid in the PINN model. The PINN model incorporates temperature-dependent material properties and phase change behavior using the apparent heat capacity method. While the PINN model demonstrates high accuracy with a small training data and enables generalization of new process parameters via transfer learning, it faces the challenge of high computation cost in time-dependent problems due to the residual accumulation. To overcome this issue, the FEA-PINN framework integrates corrective FEA simulations during inference to enforce physical consistency and reduce error drift. A comparative analysis shows that FEA-PINN achieves equivalent accuracy to FEA while significantly reducing computational cost. The framework has been validated using the benchmark FEA data and demonstrated through single-track scanning in LPBF.
+ oai:arXiv.org:2506.20537v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ R. Sharma, M. Raissi, Y. B. Guo
+
+
+ ANUBHUTI: A Comprehensive Corpus For Sentiment Analysis In Bangla Regional Languages
+ https://arxiv.org/abs/2506.21686
+ arXiv:2506.21686v2 Announce Type: replace
+Abstract: Sentiment analysis for regional dialects of Bangla remains an underexplored area due to linguistic diversity and limited annotated data. This paper introduces ANUBHUTI, a comprehensive dataset consisting of 10,000 sentences manually translated from standard Bangla into four major regional dialects: Mymensingh, Noakhali, Sylhet, and Chittagong. The dataset predominantly features political and religious content, reflecting the contemporary socio-political landscape of Bangladesh, alongside neutral texts to maintain balance. Each sentence is annotated using a dual annotation scheme: multiclass thematic labeling categorizes sentences as Political, Religious, or Neutral, and multilabel emotion annotation assigns one or more emotions from Anger, Contempt, Disgust, Enjoyment, Fear, Sadness, and Surprise. Expert native translators conducted the translation and annotation, with quality assurance performed via Cohen's Kappa inter-annotator agreement, achieving strong consistency across dialects. The dataset was further refined through systematic checks for missing data, anomalies, and inconsistencies. ANUBHUTI fills a critical gap in resources for sentiment analysis in low-resource Bangla dialects, enabling more accurate and context-aware natural language processing.
+ oai:arXiv.org:2506.21686v2
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Swastika Kundu, Autoshi Ibrahim, Mithila Rahman, Tanvir Ahmed
+
+
+ Modeling Hierarchical Spaces: A Review and Unified Framework for Surrogate-Based Architecture Design
+ https://arxiv.org/abs/2506.22621
+ arXiv:2506.22621v3 Announce Type: replace
+Abstract: Simulation-based problems involving mixed-variable inputs frequently feature domains that are hierarchical, conditional, heterogeneous, or tree-structured. These characteristics pose challenges for data representation, modeling, and optimization. This paper reviews extensive literature on these structured input spaces and proposes a unified framework that generalizes existing approaches.
+ In this framework, input variables may be continuous, integer, or categorical. A variable is described as meta if its value governs the presence of other decreed variables, enabling the modeling of conditional and hierarchical structures. We further introduce the concept of partially-decreed variables, whose activation depends on contextual conditions.
+ To capture these inter-variable hierarchical relationships, we introduce design space graphs, combining principles from feature modeling and graph theory. This allows the definition of general hierarchical domains suitable for describing complex system architectures.
+ Our framework defines hierarchical distances and kernels to enable surrogate modeling and optimization on hierarchical domains. We demonstrate its effectiveness on complex system design problems, including a neural network and a green-aircraft case study. Our methods are available in the open-source Surrogate Modeling Toolbox (SMT 2.0).
+ oai:arXiv.org:2506.22621v3
+ cs.LG
+ math.OC
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1007/s00158-026-04249-2
+ Paul Saves, Edward Hall\'e-Hannan, Jasper Bussemaker, Youssef Diouane, Nathalie Bartoli
+
+
+ Surrogate Modeling via Factorization Machine and Ising Model with Enhanced Higher-Order Interaction Learning
+ https://arxiv.org/abs/2507.01389
+ arXiv:2507.01389v2 Announce Type: replace
+Abstract: Recently, a surrogate model was proposed that employs a factorization machine to approximate the underlying input-output mapping of the original system, with quantum annealing used to optimize the resulting surrogate function. Inspired by this approach, we propose an enhanced surrogate model that incorporates additional slack variables into both the factorization machine and its associated Ising representation, thereby unifying what was by design a two-step process into a single, integrated step. During the training phase, the slack variables are iteratively updated, enabling the model to account for higher-order feature interactions. We apply the proposed method to the task of predicting drug combination effects. Experimental results indicate that the introduction of slack variables leads to a notable improvement in performance. Our algorithm offers a promising approach for building efficient surrogate models that exploit potential quantum advantages.
+ oai:arXiv.org:2507.01389v2
+ cs.LG
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1103/knt1-yd9s
+ Anbang Wang, Dunbo Cai, Yu Zhang, Yangqing Huang, Xiangyang Feng, Zhihong Zhang
+
+
+ Challenges & Opportunities with LLM-Assisted Visualization Retargeting
+ https://arxiv.org/abs/2507.01436
+ arXiv:2507.01436v3 Announce Type: replace
+Abstract: Despite the ubiquity of visualization examples published on the web, retargeting existing custom chart implementations to new datasets remains difficult, time-intensive, and tedious. The adaptation process assumes author familiarity with both the implementation of the example as well as how the new dataset might need to be transformed to fit into the example code. With recent advances in Large Language Models (LLMs), automatic adaptation of code can be achieved from high-level user prompts, reducing the barrier for visualization retargeting. To better understand how LLMs can assist retargeting and its potential limitations, we characterize and evaluate the performance of LLM assistance across multiple datasets and charts of varying complexity, categorizing failures according to type and severity. In our evaluation, we compare two approaches: (1) directly instructing the LLM to fully generate and adapt code by treating code as text inputs and (2) a more constrained program synthesis pipeline where the LLM guides the code construction process by providing structural information (e.g., visual encodings) based on properties of the example code and data. We find that both approaches struggle when new data has not been appropriately transformed, and discuss important design recommendations for future retargeting systems.
+ oai:arXiv.org:2507.01436v3
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Luke S. Snyder, Chenglong Wang, Steven M. Drucker
+
+
+ SoK: On the Survivability of Backdoor Attacks on Unconstrained Face Recognition Systems
+ https://arxiv.org/abs/2507.01607
+ arXiv:2507.01607v5 Announce Type: replace
+Abstract: The widespread deployment of Deep Learning-based Face Recognition Systems raises many security concerns. While prior research has identified backdoor vulnerabilities on isolated components, Backdoor Attacks on real-world, unconstrained pipelines remain underexplored. This SoK paper presents the first comprehensive system-level analysis and measurement of the impact of Backdoor Attacks on fully-fledged Face Recognition Systems. We combine the existing Supervised Learning backdoor literature targeting face detectors, face antispoofing, and face feature extractors to demonstrate a system-level vulnerability. By analyzing 20 pipeline configurations and 15 attack scenarios in a holistic manner, we reveal that an attacker only needs a single backdoored model to compromise an entire Face Recognition System. Finally, we discuss the impact of such attacks and propose best practices and countermeasures for stakeholders.
+ oai:arXiv.org:2507.01607v5
+ cs.CV
+ cs.AI
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Quentin Le Roux, Yannick Teglia, Teddy Furon, Philippe Loubet-Moundi, Eric Bourbao
+
+
+ K-Function: Joint Pronunciation Transcription and Feedback for Evaluating Kids Language Function
+ https://arxiv.org/abs/2507.03043
+ arXiv:2507.03043v2 Announce Type: replace
+Abstract: Evaluating young children's language is challenging for automatic speech recognizers due to high-pitched voices, prolonged sounds, and limited data. We introduce K-Function, a framework that combines accurate sub-word transcription with objective, Large Language Model (LLM)-driven scoring. Its core, Kids-Weighted Finite State Transducer (K-WFST), merges an acoustic phoneme encoder with a phoneme-similarity model to capture child-specific speech errors while remaining fully interpretable. K-WFST achieves a 1.39% phoneme error rate on MyST and 8.61% on Multitudes, an absolute improvement of 10.47% and 7.06%, respectively, over a greedy-search decoder. These high-quality transcripts are used by an LLM to grade verbal skills, developmental milestones, reading, and comprehension, with results that align closely with human evaluators. Our findings show that precise phoneme recognition is essential for creating an effective assessment framework, enabling scalable language screening for children.
+ oai:arXiv.org:2507.03043v2
+ cs.CL
+ cs.AI
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shuhe Li, Chenxu Guo, Jiachen Lian, Cheol Jun Cho, Wenshuo Zhao, Xiner Xu, Ruiyu Jin, Xiaoyu Shi, Xuanru Zhou, Dingkun Zhou, Sam Wang, Grace Wang, Jingze Yang, Jingyi Xu, Ruohan Bao, Xingrui Chen, Elise Brenner, Brandon In, Francesca Pei, Maria Luisa Gorno-Tempini, Gopala Anumanchipalli
+
+
+ Assessing Small Language Models for Code Generation: An Empirical Study with Benchmarks
+ https://arxiv.org/abs/2507.03160
+ arXiv:2507.03160v4 Announce Type: replace
+Abstract: The recent advancements of Small Language Models (SLMs) have opened new possibilities for efficient code generation. SLMs offer lightweight and cost-effective alternatives to Large Language Models (LLMs), making them attractive for use in resource-constrained environments. However, empirical understanding of SLMs, particularly their capabilities, limitations, and performance trade-offs in code generation, remains limited. This study presents a comprehensive empirical evaluation of 20 open-source SLMs ranging from 0.4B to 10B parameters on five diverse code-related benchmarks (HumanEval, MBPP, Mercury, HumanEvalPack, and CodeXGLUE). The models are assessed along three dimensions: i) functional correctness of generated code, ii) computational efficiency, and iii) performance across multiple programming languages. The findings of this study reveal that several compact SLMs achieve competitive results while maintaining a balance between performance and efficiency, making them viable for deployment in resource-constrained environments. However, achieving further improvements in accuracy requires switching to larger models. These models generally outperform their smaller counterparts, but they require much more computational power. We observe that for 10% performance improvements, models can require nearly a 4x increase in VRAM consumption, highlighting a trade-off between effectiveness and scalability. In addition, the multilingual performance analysis reveals that SLMs tend to perform better in languages such as Python, Java, and PHP, while exhibiting relatively weaker performance in Go, C++, and Ruby. However, statistical analysis suggests these differences are not significant, indicating a generalizability of SLMs across programming languages. Based on the findings, this work provides insights into the design and selection of SLMs for real-world code generation tasks.
+ oai:arXiv.org:2507.03160v4
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Md Mahade Hasan, Muhammad Waseem, Kai-Kristian Kemell, Jussi Rasku, Juha Ala-Rantala, Pekka Abrahamsson
+
+
+ Visual Hand Gesture Recognition with Deep Learning: A Comprehensive Review of Methods, Datasets, Challenges and Future Research Directions
+ https://arxiv.org/abs/2507.04465
+ arXiv:2507.04465v3 Announce Type: replace
+Abstract: The rapid evolution of deep learning (DL) models and the ever-increasing size of available datasets have raised the interest of the research community in the always important field of visual hand gesture recognition (VHGR), and delivered a wide range of applications, such as sign language understanding and human-computer interaction using cameras. Despite the large volume of research works in the field, a structured and complete survey on VHGR is still missing, leaving researchers to navigate through hundreds of papers in order to find the right combination of data, model, and approach for each task. The current survey aims to fill this gap by presenting a comprehensive overview of this computer vision field. With a systematic research methodology that identifies the state-of-the-art works and a structured presentation of the various methods, datasets, and evaluation metrics, this review aims to constitute a useful guideline for researchers, helping them to choose the right strategy for handling a VHGR task. Starting with the methodology used to locate the related literature, the survey identifies and organizes the key VHGR approaches in a taxonomy-based format, and presents the various dimensions that affect the final method choice, such as input modality, task type, and application domain. The state-of-the-art techniques are grouped across three primary VHGR tasks: static gesture recognition, isolated dynamic gestures, and continuous gesture recognition. For each task, the architectural trends and learning strategies are listed. To support the experimental evaluation of future methods in the field, the study reviews commonly used datasets and presents the standard performance metrics. Our survey concludes by identifying the major challenges in VHGR, including both general computer vision issues and domain-specific obstacles, and outlines promising directions for future research.
+ oai:arXiv.org:2507.04465v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Konstantinos Foteinos, Manousos Linardakis, Panagiotis Radoglou-Grammatikis, Vasileios Argyriou, Panagiotis Sarigiannidis, Iraklis Varlamis, Georgios Th. Papadopoulos
+
+
+ Pre-Trained Policy Discriminators are General Reward Models
+ https://arxiv.org/abs/2507.05197
+ arXiv:2507.05197v2 Announce Type: replace
+Abstract: We offer a novel perspective on reward modeling by formulating it as a policy discriminator, which quantifies the difference between two policies to generate a reward signal, guiding the training policy towards a target policy with desired behaviors. Based on this conceptual insight, we propose a scalable pre-training method named Policy Discriminative Learning (POLAR), which trains a reward model (RM) to discern identical policies and discriminate different ones. Unlike traditional reward modeling methods relying on absolute preferences, POLAR captures the relative difference between one policy and an arbitrary target policy, which is a scalable, high-level optimization objective suitable for modeling generic ranking relationships. Leveraging the POLAR pre-training paradigm, we present a series of RMs with parameter scales from 1.8B to 7B. Empirical results show that POLAR substantially outperforms traditional non-pre-trained methods, significantly enhancing RM performance. For instance, POLAR-7B could improve preference accuracy from 54.8% to 81.0% on STEM tasks and from 57.9% to 85.5% on creative writing tasks compared to SOTA baselines. POLAR also shows robust generalization capabilities in RLHF using Reinforcement Fine-tuning (RFT), providing reliable reward signals and markedly enhancing policy performance--improving LLaMa3.1-8B from an average of 47.36% to 56.33% and Qwen2.5-32B from 64.49% to 70.47% on 20 benchmarks. Moreover, scaling experiments reveal a clear power-law relationship between computation and performance, supported by linear correlation coefficients approaching 0.99. The impressive performance, strong generalization, and scaling properties suggest that POLAR is a promising direction for developing general and strong reward models.
+ oai:arXiv.org:2507.05197v2
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shihan Dou, Shichun Liu, Yuming Yang, Yicheng Zou, Yunhua Zhou, Shuhao Xing, Chenhao Huang, Qiming Ge, Demin Song, Haijun Lv, Songyang Gao, Chengqi Lv, Enyu Zhou, Honglin Guo, Zhiheng Xi, Wenwei Zhang, Qipeng Guo, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Tao Gui, Kai Chen
+
+
+ Bridging Expressivity and Scalability with Adaptive Unitary SSMs
+ https://arxiv.org/abs/2507.05238
+ arXiv:2507.05238v3 Announce Type: replace
+Abstract: Recent work has revealed that state space models (SSMs), while efficient for long-sequence processing, are fundamentally limited in their ability to represent formal languages-particularly due to time-invariant and real-valued recurrence structures. In this work, we draw inspiration from adaptive and structured dynamics observed in biological neural systems and introduce the Adaptive Unitary State Space Model (AUSSM): a novel class of SSMs that leverages skew-symmetric, input-dependent recurrence to achieve unitary evolution and high expressive power. Using algebraic automata theory, we prove that AUSSM can perform modulo counting and simulate solvable group automata at precision logarithmically bounded in the input length, enabling SSMs to model a broad class of regular languages out of reach for other SSM architectures. To overcome the practical inefficiencies of adaptive recurrence, we develop a separable convolution formulation and a CUDA implementation that enables scalable parallel training. Empirically, we show that AUSSM and its hybrid variant-interleaved with Mamba-outperform prior SSMs on formal algorithmic tasks such as parity and modular arithmetic, and achieve competent performance on real-world long time-series classification benchmarks. Our results demonstrate that adaptive unitary recurrence provides a powerful and efficient inductive bias for both symbolic and continuous sequence modeling. The code is available at https://github.com/arjunkaruvally/AUSSM
+ oai:arXiv.org:2507.05238v3
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Arjun Karuvally, Franz Nowak, Anderson T. Keller, Carmen Amo Alonso, Terrence J. Sejnowski, Hava T. Siegelmann
+
+
+ CoRe: Benchmarking LLMs Code Reasoning Capabilities through Static Analysis Tasks
+ https://arxiv.org/abs/2507.05269
+ arXiv:2507.05269v3 Announce Type: replace
+Abstract: Large language models (LLMs) have been widely adopted across diverse domains of software engineering, such as code generation, program repair, and vulnerability detection. These applications require understanding beyond surface-level code patterns: value propagation, control flow, and interdependence between program elements. However, existing benchmarks primarily evaluate end-to-end outcomes, such as whether code is correctly repaired or generated, leaving the models' ability for program semantic reasoning underexplored. This work presents CORE, a high-quality, human-verified benchmark designed to evaluate LLMs on fundamental static analysis tasks. CORE includes 12,553 task instances spanning data dependency, control dependency, and information flow across programs written in C/C++, Java, and Python. To ensure semantic diversity and reasoning complexity, we propose a semantics-aware diverse sampling strategy that selects targets and task instances based on structural coverage and dependency depth. We evaluate 10 mainstream LLMs and show that, while they perform well at identifying dependencies, models still struggle with tasks that require deeper semantic understanding and multi-step reasoning. We further conduct qualitative analyses to uncover key challenges, such as complex control structures and backward dependency patterns, offering insights into improving LLMs' code reasoning capabilities.
+ oai:arXiv.org:2507.05269v3
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Danning Xie, Mingwei Zheng, Xuwei Liu, Jiannan Wang, Chengpeng Wang, Lin Tan, Xiangyu Zhang
+
+
+ Reinforcement Fine-Tuning Naturally Mitigates Forgetting in Continual Post-Training
+ https://arxiv.org/abs/2507.05386
+ arXiv:2507.05386v4 Announce Type: replace
+Abstract: Continual post-training (CPT) is a popular and effective technique for adapting foundation models like multimodal large language models to specific and ever-evolving downstream tasks. While existing research has primarily concentrated on methods like data replay, model expansion, or parameter regularization, the fundamental role of the learning paradigm within CPT remains largely unexplored. This paper presents a comparative analysis of two core post-training paradigms: supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT), investigating their respective impacts on knowledge retention during CPT. Our experiments are conducted on a benchmark comprising seven diverse multimodal tasks, utilizing Qwen2.5-VL-7B-Instruct as the base model for continual post-training. The investigation yields two significant findings: (1) When continuously learning on downstream tasks, SFT leads to catastrophic forgetting of previously learned tasks. In contrast, RFT inherently preserves prior knowledge and achieves performance comparable to multi-task training. (2) RFT successfully protects and even enhances the model's general knowledge on standard benchmarks (e.g., MMMU and MMLU-Pro). Conversely, SFT severely degrades general model capabilities. Further analysis reveals that this stability is not primarily due to explicit mechanisms like KL penalty or chain-of-thought reasoning. Instead, we identify an implicit regularization mechanism inherent to RFT as a key contributing factor. Our theoretical analysis suggests that RFT's gradient updates are naturally scaled by the reward variance, acting as a data-dependent regularizer that inherently protects previously acquired knowledge. Finally, we propose a rollout-based instance filtering algorithm to enhance the stability and efficiency of RFT. Our comprehensive study demonstrates the superiority of RFT as a robust paradigm for continual post-training.
+ oai:arXiv.org:2507.05386v4
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Song Lai, Haohan Zhao, Rong Feng, Changyi Ma, Wenzhuo Liu, Hongbo Zhao, Xi Lin, Dong Yi, Qingfu Zhang, Hongbin Liu, Gaofeng Meng, Fei Zhu
+
+
+ On the Costs and Benefits of Learned Indexing for Dynamic High-Dimensional Data: Extended Version
+ https://arxiv.org/abs/2507.05865
+ arXiv:2507.05865v2 Announce Type: replace
+Abstract: One of the main challenges within the growing research area of learned indexing is the lack of adaptability to dynamically expanding datasets. This paper explores the dynamization of a static learned index for complex data through operations such as node splitting and broadening, enabling efficient adaptation to new data. Furthermore, we evaluate the trade-offs between static and dynamic approaches by introducing an amortized cost model to assess query performance in tandem with the build costs of the index structure, enabling experimental determination of when a dynamic learned index outperforms its static counterpart. We apply the dynamization method to a static learned index and demonstrate that its superior scaling quickly surpasses the static implementation in terms of overall costs as the database grows. This is an extended version of the paper presented at DAWAK 2025.
+ oai:arXiv.org:2507.05865v2
+ cs.IR
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Terézia Slanináková, Jaroslav Olha, David Procházka, Matej Antol, Vlastislav Dohnal
+
+
+ Efficient Parametric SVD of Koopman Operator for Stochastic Dynamical Systems
+ https://arxiv.org/abs/2507.07222
+ arXiv:2507.07222v3 Announce Type: replace
+Abstract: The Koopman operator provides a principled framework for analyzing nonlinear dynamical systems through linear operator theory. Recent advances in dynamic mode decomposition (DMD) have shown that trajectory data can be used to identify dominant modes of a system in a data-driven manner. Building on this idea, deep learning methods such as VAMPnet and DPNet have been proposed to learn the leading singular subspaces of the Koopman operator. However, these methods require backpropagation through potentially numerically unstable operations on empirical second moment matrices, such as singular value decomposition and matrix inversion, during objective computation, which can introduce biased gradient estimates and hinder scalability to large systems. In this work, we propose a scalable and conceptually simple method for learning the top-$k$ singular functions of the Koopman operator for stochastic dynamical systems based on the idea of low-rank approximation. Our approach eliminates the need for unstable linear-algebraic operations and integrates easily into modern deep learning pipelines. Empirical results demonstrate that the learned singular subspaces are both reliable and effective for downstream tasks such as eigen-analysis and multi-step prediction.
+ oai:arXiv.org:2507.07222v3
+ cs.LG
+ cs.NA
+ math.DS
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Minchan Jeong, J. Jon Ryu, Se-Young Yun, Gregory W. Wornell
+
+
+ SGPMIL: Sparse Gaussian Process Multiple Instance Learning
+ https://arxiv.org/abs/2507.08711
+ arXiv:2507.08711v2 Announce Type: replace
+Abstract: Multiple Instance Learning (MIL) offers a natural solution for settings where only coarse, bag-level labels are available, without having access to instance-level annotations. This is usually the case in digital pathology, which consists of gigapixel-sized images. While deterministic attention-based MIL approaches achieve strong bag-level performance, they often overlook the uncertainty inherent in instance relevance. In this paper, we address the lack of uncertainty quantification in instance-level attention scores by introducing SGPMIL, a new probabilistic attention-based MIL framework grounded in Sparse Gaussian Processes (SGP). By learning a posterior distribution over attention scores, SGPMIL enables principled uncertainty estimation, resulting in more reliable and calibrated instance relevance maps. Our approach not only preserves competitive bag-level performance but also significantly improves the quality and interpretability of instance-level predictions under uncertainty. SGPMIL extends prior work by introducing feature scaling in the SGP predictive mean function, leading to faster training, improved efficiency, and enhanced instance-level performance. Extensive experiments on multiple well-established digital pathology datasets highlight the effectiveness of our approach across both bag- and instance-level evaluations. Our code is available at https://github.com/mandlos/SGPMIL.
+ oai:arXiv.org:2507.08711v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Andreas Lolos, Stergios Christodoulidis, Aris L. Moustakas, Jose Dolz, Maria Vakalopoulou
+
+
+ Analytical Study on the Visibility of Potential Positions for External Human-Machine Interfaces
+ https://arxiv.org/abs/2507.08973
+ arXiv:2507.08973v2 Announce Type: replace
+Abstract: As we move towards a future of autonomous vehicles, questions regarding their method of communication have arisen. One of the common questions concerns the placement of the signaling used to communicate with pedestrians and road users, but little work has been published fully dedicated to exploring this. This paper uses a simulation made in the Unity game engine to record the visibility of fifteen different vehicles, specifically regarding the visibility of frontal elements by a pedestrian on the sidewalk. Variables include the vehicle position, number of vehicles on the road, and minimum and maximum distance of the recorded points. It was concluded that the areas of the vehicle most often seen by pedestrians on the sidewalk attempting to cross the road were the frontal fenders and the headlights, with the frontal wheels, frontal doors, bumper, and side mirrors being less visible alternatives. These findings are valuable in the future design of signaling for autonomous vehicles, in order to ensure pedestrians are able to see them on approaching vehicles. The software used provides a platform for similar works in the future to be conducted.
+ oai:arXiv.org:2507.08973v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jose Gonzalez-Belmonte, Jaerock Kwon
+
+
+ KisMATH: Do LLMs Have Knowledge of Implicit Structures in Mathematical Reasoning?
+ https://arxiv.org/abs/2507.11408
+ arXiv:2507.11408v2 Announce Type: replace
+Abstract: Chain-of-thought (CoT) traces have been shown to improve performance of large language models on a plethora of reasoning tasks, yet there is no consensus on the mechanism by which this boost is achieved. To shed more light on this, we introduce Causal CoT Graphs (CCGraphs), which are directed acyclic graphs automatically extracted from reasoning traces that model fine-grained causal dependencies in language-model outputs. A collection of 1671 mathematical reasoning problems from MATH500, GSM8K, and AIME, together with their associated CCGraphs, has been compiled into our dataset -- KisMATH. Our detailed empirical analysis with 15 open-weight LLMs shows that (i) reasoning nodes in the CCGraphs are causal contributors to the final answer, which we argue is constitutive of reasoning; and (ii) LLMs emphasize the reasoning paths captured by the CCGraphs, indicating that the models internally realize structures similar to our graphs. KisMATH enables controlled, graph-aligned interventions and opens avenues for further investigation into the role of CoT in LLM reasoning.
+ oai:arXiv.org:2507.11408v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Soumadeep Saha, Akshay Chaturvedi, Saptarshi Saha, Utpal Garain, Nicholas Asher
+
+
+ AI-Native Open RAN for Non-Terrestrial Networks: An Overview
+ https://arxiv.org/abs/2507.11935
+ arXiv:2507.11935v3 Announce Type: replace
+Abstract: Non-terrestrial network (NTN) is envisioned as a critical component of Sixth Generation (6G) networks by enabling ubiquitous services and enhancing network resilience. However, the inherent mobility and high-altitude operation of NTN pose significant challenges throughout the development and operations (DevOps) lifecycle. To address these challenges, integrating NTNs with the Open Radio Access Network (ORAN) is a promising approach, since ORAN can offer disaggregation, openness, virtualization, and embedded intelligence. Despite extensive literature on ORAN and NTN, a holistic view of ORAN-based NTN frameworks is still lacking, particularly regarding how ORAN can effectively address the existing challenges of NTN. Furthermore, although artificial intelligence native (AI-Native) capabilities have the potential to enhance intelligent network control and optimization, their practical realization in NTNs has not yet been sufficiently investigated. Therefore, in this paper, we provide a comprehensive and structured overview of AI-Native ORAN for NTN. This paper commences with an in-depth review of the existing literature and subsequently introduces the necessary background about ORAN, NTN, and AI-Native for communication. After analyzing the DevOps challenges for NTN, we propose the orchestrated AI-Native ORAN-based NTN framework and discuss its key technological enablers. Finally, we present the representative use cases and outline the prospective future research directions of this study.
+ oai:arXiv.org:2507.11935v3
+ cs.NI
+ cs.AI
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jikang Deng, S. Fizza Hassan, Hui Zhou, Saad Al-Ahmadi, Mohamed-Slim Alouini, Daniel B. Da Costa
+
+
+ SHACL Validation in the Presence of Ontologies: Semantics and Rewriting Techniques
+ https://arxiv.org/abs/2507.12286
+ arXiv:2507.12286v2 Announce Type: replace
+Abstract: SHACL and OWL are two prominent W3C standards for managing RDF data. These languages share many features, but they have one fundamental difference: OWL, designed for inferring facts from incomplete data, makes the open-world assumption, whereas SHACL is a constraint language that treats the data as complete and must be validated under the closed-world assumption. The combination of both formalisms is very appealing and has been called for, but their semantic gap is a major challenge, semantically and computationally. In this paper, we advocate a semantics for SHACL validation in the presence of ontologies based on core universal models. We provide a technique for constructing these models for ontologies in the rich data-tractable description logic Horn-ALCHIQ. Furthermore, we use a finite representation of this model to develop a rewriting technique that reduces SHACL validation in the presence of ontologies to standard validation. Finally, we study the complexity of SHACL validation in the presence of ontologies, and show that even very simple ontologies make the problem EXPTIME-complete, and PTIME-complete in data complexity.
+ oai:arXiv.org:2507.12286v2
+ cs.LO
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1016/j.artint.2026.104483
+ Volume 352, 2026, 104483
+ Anouk Oudshoorn, Magdalena Ortiz, Mantas Simkus
+
+
+ Refinement of the theory and convergence of the Sinc convolution -- beyond Stenger's conjecture
+ https://arxiv.org/abs/2507.12406
+ arXiv:2507.12406v3 Announce Type: replace
+Abstract: The Sinc convolution is an approximate formula for indefinite convolutions proposed by Stenger. The formula was derived based on the Sinc indefinite integration formula combined with the single-exponential transformation. Although its efficiency has been confirmed in various fields, several theoretical issues remain unresolved. The first contribution of this study is to resolve those issues by refining the underlying theory of the Sinc convolution. This contribution includes an essential resolution of Stenger's conjecture. The second contribution of this study is to improve the convergence rate by replacing the single-exponential transformation with the double-exponential transformation. Theoretical analysis and numerical experiments confirm that the modified formula achieves superior convergence compared to Stenger's original formula.
+ oai:arXiv.org:2507.12406v3
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Tomoaki Okayama
+
+
+ Benchmarking Deception Probes via Black-to-White Performance Boosts
+ https://arxiv.org/abs/2507.12691
+ arXiv:2507.12691v3 Announce Type: replace
+Abstract: AI assistants will occasionally respond deceptively to user queries. Recently, linear classifiers (called "deception probes") have been trained to distinguish the internal activations of a language model during deceptive versus honest responses. However, it is unclear how effective these probes are at detecting deception in practice, or whether they are resistant to simple counter-strategies from a deceptive assistant who wishes to evade detection. In this paper, we compare white-box monitoring (where the monitor has access to token-level probe activations) to black-box monitoring (without such access). We benchmark deception probes by the extent to which the white-box monitor outperforms the black-box monitor, i.e. the black-to-white performance boost. We find weak but encouraging black-to-white performance boosts from existing deception probes.
+ oai:arXiv.org:2507.12691v3
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Avi Parrack, Carlo Leonardo Attubato, Stefan Heimersheim
+
+
+ Design Patterns of Human-AI Interfaces in Healthcare
+ https://arxiv.org/abs/2507.12721
+ arXiv:2507.12721v2 Announce Type: replace
+Abstract: Human-AI interfaces play a pivotal role in integrating clinicians' expertise with artificial intelligence to enhance both healthcare practice and research. However, designing effective interfaces in this domain remains a significant challenge. The inherent complexity of medical data, the influence of domain-specific conventions, and the diverse needs of clinical users compound the challenge of developing practical and usable solutions. In this study, we review existing solutions and synthesize a set of design patterns - recurring approaches that support the design of human-AI interfaces in clinical settings. We conducted a comprehensive literature review of human-AI interaction designs in clinical contexts, through which we identified 15 information entities commonly presented to users and 12 design patterns used to organize and communicate this information effectively. For each design pattern, we summarize the underlying design problem, the proposed solution, and the rationale for when the pattern should or should not be applied, based on insights from both the literature and semi-structured interviews with 12 healthcare professionals. We evaluated the proposed design patterns through an online workshop involving 14 experienced UI designers. During the workshop, participants were asked to create interface sketches for healthcare-related scenarios drawn from their own professional experience, using our design patterns as guidance. Our findings show that the proposed design patterns helped participants ground their designs in user needs, generate a wider range of design alternatives, and simplify complex interface structures. We further analyzed and summarized the participants' usage strategies and feedback regarding the applicability and usefulness of the design patterns.
+ oai:arXiv.org:2507.12721v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.ijhcs.2026.103737
+ Rui Sheng, Chuhan Shi, Sobhan Lotfi, Shiyi Liu, Adam Perer, Huamin Qu, Furui Cheng
+
+
+ WaveletInception Networks for on-board Vibration-Based Infrastructure Health Monitoring
+ https://arxiv.org/abs/2507.12969
+ arXiv:2507.12969v2 Announce Type: replace
+Abstract: This paper presents a deep learning framework for analyzing on-board vibration response signals in infrastructure health monitoring. The proposed WaveletInception-BiGRU network uses a Learnable Wavelet Packet Transform (LWPT) for early spectral feature extraction, followed by one-dimensional Inception-Residual Network (1D Inception-ResNet) modules for multi-scale, high-level feature learning. Bidirectional Gated Recurrent Unit (BiGRU) modules then integrate temporal dependencies and incorporate operational conditions, such as the measurement speed. This approach enables effective analysis of vibration signals recorded at varying speeds, eliminating the need for explicit signal preprocessing. The sequential estimation head further leverages bidirectional temporal information to produce an accurate, localized assessment of infrastructure health. Ultimately, the framework generates high-resolution health profiles spatially mapped to the physical layout of the infrastructure. Case studies involving track stiffness regression and transition zone classification using real-world measurements demonstrate that the proposed framework significantly outperforms state-of-the-art methods, underscoring its potential for accurate, localized, and automated on-board infrastructure health monitoring.
+ oai:arXiv.org:2507.12969v2
+ cs.LG
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Reza Riahi Samani, Alfredo Nunez, Bart De Schutter
+
+
+ Quantum Blockchain Survey: Foundations, Trends, and Gaps
+ https://arxiv.org/abs/2507.13720
+ arXiv:2507.13720v2 Announce Type: replace
+Abstract: Quantum computing poses fundamental risks to classical blockchain systems by undermining widely used cryptographic primitives. In response, two major research directions have emerged: post-quantum blockchains, which integrate quantum-resistant algorithms, and quantum blockchains, which leverage quantum properties such as entanglement and quantum key distribution. This survey reviews key developments in both areas, analyzing their cryptographic foundations, architectural designs, and implementation challenges. This work provides a comparative overview of technical proposals, highlights trade-offs in security, scalability, and deployment, and identifies open research problems across hardware, consensus, and network design. The goal is to offer a structured and comprehensive reference for advancing secure blockchain systems in the quantum era.
+ oai:arXiv.org:2507.13720v2
+ cs.CR
+ cs.DC
+ cs.ET
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Saurav Ghosh, Niloy Deb Roy Mishu
+
+
+ Can AR Embedded Visualizations Foster Appropriate Reliance on AI in Spatial Decision-Making? A Comparative Study of AR X-Ray vs. 2D Minimap
+ https://arxiv.org/abs/2507.14316
+ arXiv:2507.14316v2 Announce Type: replace
+Abstract: Artificial Intelligence (AI) and indoor sensing increasingly support decision-making in spatial environments. However, traditional visualization methods impose a substantial mental workload when viewers translate this digital information into real-world spaces, leading to inappropriate reliance on AI. Embedded visualizations in Augmented Reality (AR), by integrating information into physical environments, may reduce this workload and foster more appropriate reliance on AI. To assess this, we conducted an empirical study (N = 32) comparing an AR embedded visualization (X-ray) and 2D Minimap in AI-assisted, time-critical spatial target selection tasks. Surprisingly, evidence shows that the embedded visualization led to greater inappropriate reliance on AI, primarily as over-reliance, due to factors like perceptual challenges, visual proximity illusions, and highly realistic visual representations. Nonetheless, the embedded visualization demonstrated benefits in spatial mapping. We conclude by discussing empirical insights, design implications, and directions for future research on human-AI collaborative decision in AR.
+ oai:arXiv.org:2507.14316v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xianhao Carton Liu, Difan Jia, Tongyu Nie, Evan Suma Rosenberg, Victoria Interrante, Chen Zhu-Tian
+
+
+ Paired Image Generation with Diffusion-Guided Diffusion Models
+ https://arxiv.org/abs/2507.14833
+ arXiv:2507.14833v2 Announce Type: replace
+Abstract: The segmentation of mass lesions in digital breast tomosynthesis (DBT) images is very significant for the early screening of breast cancer. However, the high-density breast tissue often leads to high concealment of the mass lesions, which makes manual annotation difficult and time-consuming. As a result, there is a lack of annotated data for model training. Diffusion models are commonly used for data augmentation, but the existing methods face two challenges. First, due to the high concealment of lesions, it is difficult for the model to learn the features of the lesion area. This leads to the low generation quality of the lesion areas, thus limiting the quality of the generated images. Second, existing methods can only generate images and cannot generate corresponding annotations, which restricts the usability of the generated images in supervised training. In this work, we propose a paired image generation method. The method does not require external conditions and can achieve the generation of paired images by training an extra diffusion guider for the conditional diffusion model. During the experimental phase, we generated paired DBT slices and mass lesion masks. Then, we incorporated them into the supervised training process of the mass lesion segmentation task. The experimental results show that our method can improve the generation quality without external conditions. Moreover, it contributes to alleviating the shortage of annotated data, thus enhancing the performance of downstream tasks. The source code is available at https://github.com/zhanghx1320/PIG.
+ oai:arXiv.org:2507.14833v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1007/978-3-032-04965-0_35
+ Haoxuan Zhang, Wenju Cui, Yuzhu Cao, Tao Tan, Jie Liu, Yunsong Peng, Jian Zheng
+
+
+ EduThink4AI: Bridging Educational Critical Thinking and Multi-Agent LLM Systems
+ https://arxiv.org/abs/2507.15015
+ arXiv:2507.15015v2 Announce Type: replace
+Abstract: Large language models (LLMs) have demonstrated significant potential as educational tutoring agents, capable of tailoring hints, orchestrating lessons, and grading with near-human finesse across various academic domains. However, current LLM-based educational systems exhibit critical limitations in promoting genuine critical thinking, failing on over one-third of multi-hop questions with counterfactual premises, and remaining vulnerable to adversarial prompts that trigger biased or factually incorrect responses. To address these gaps, we propose \textbf{EDU-Prompting}, a novel multi-agent framework that bridges established educational critical thinking theories with LLM agent design to generate critical, bias-aware explanations while fostering diverse perspectives. Our systematic evaluation across theoretical benchmarks and practical college-level critical writing scenarios demonstrates that EDU-Prompting significantly enhances both content truthfulness and logical soundness in AI-generated educational responses. The framework's modular design enables seamless integration into existing prompting frameworks and educational applications, allowing practitioners to directly incorporate critical thinking catalysts that promote analytical reasoning and introduce multiple perspectives without requiring extensive system modifications.
+ oai:arXiv.org:2507.15015v2
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xinmeng Hou, Ziting Chang, Zhouquan Lu, Chen Wenli, Liang Wan, Wei Feng, Hai Hu, Qing Guo
+
+
+ Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge
+ https://arxiv.org/abs/2507.16559
+ arXiv:2507.16559v3 Announce Type: replace
+Abstract: Reliable recognition and localization of surgical instruments in endoscopic video recordings are foundational for a wide range of applications in computer- and robot-assisted minimally invasive surgery (RAMIS), including surgical training, skill assessment, and autonomous assistance. However, robust performance under real-world conditions remains a significant challenge. Incorporating surgical context - such as the current procedural phase - has emerged as a promising strategy to improve robustness and interpretability.
+ To address these challenges, we organized the Surgical Procedure Phase, Keypoint, and Instrument Recognition (PhaKIR) sub-challenge as part of the Endoscopic Vision (EndoVis) challenge at MICCAI 2024. We introduced a novel, multi-center dataset comprising thirteen full-length laparoscopic cholecystectomy videos collected from three distinct medical institutions, with unified annotations for three interrelated tasks: surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation. Unlike existing datasets, ours enables joint investigation of instrument localization and procedural context within the same data while supporting the integration of temporal information across entire procedures.
+ We report results and findings in accordance with the BIAS guidelines for biomedical image analysis challenges. The PhaKIR sub-challenge advances the field by providing a unique benchmark for developing temporally aware, context-driven methods in RAMIS and offers a high-quality resource to support future research in surgical scene understanding.
+ oai:arXiv.org:2507.16559v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Tobias Rueckert, David Rauber, Raphaela Maerkl, Leonard Klausmann, Suemeyye R. Yildiran, Max Gutbrod, Danilo Weber Nunes, Alvaro Fernandez Moreno, Imanol Luengo, Danail Stoyanov, Nicolas Toussaint, Enki Cho, Hyeon Bae Kim, Oh Sung Choo, Ka Young Kim, Seong Tae Kim, Gonçalo Arantes, Kehan Song, Jianjun Zhu, Junchen Xiong, Tingyi Lin, Shunsuke Kikuchi, Hiroki Matsuzaki, Atsushi Kouno, João Renato Ribeiro Manesco, João Paulo Papa, Tae-Min Choi, Tae Kyeong Jeong, Juyoun Park, Oluwatosin Alabi, Meng Wei, Tom Vercauteren, Runzhi Wu, Mengya Xu, An Wang, Long Bai, Hongliang Ren, Amine Yamlahi, Jakob Hennighausen, Lena Maier-Hein, Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa, Shu Yang, Yihui Wang, Hao Chen, Santiago Rodríguez, Nicolás Aparicio, Leonardo Manrique, Juan Camilo Lyons, Olivia Hosie, Nicolás Ayobi, Pablo Arbeláez, Yiping Li, Yasmina Al Khalil, Sahar Nasirihaghighi, Stefanie Speidel, Daniel Rueckert, Hubertus Feussner, Dirk Wilhelm, Christoph Palm
+
+
+ Controllable Video Generation: A Survey
+ https://arxiv.org/abs/2507.16869
+ arXiv:2507.16869v3 Announce Type: replace
+Abstract: With the rapid development of AI-generated content (AIGC), video generation has emerged as one of its most dynamic and impactful subfields. In particular, the advancement of video generation foundation models has led to growing demand for controllable video generation methods that can more accurately reflect user intent. Most existing foundation models are designed for text-to-video generation, where text prompts alone are often insufficient to express complex, multi-modal, and fine-grained user requirements. This limitation makes it challenging for users to generate videos with precise control using current models. To address this issue, recent research has explored the integration of additional non-textual conditions, such as camera motion, depth maps, and human pose, to extend pretrained video generation models and enable more controllable video synthesis. These approaches aim to enhance the flexibility and practical applicability of AIGC-driven video generation systems. In this survey, we provide a systematic review of controllable video generation, covering both theoretical foundations and recent advances in the field. We begin by introducing the key concepts and commonly used open-source video generation models. We then focus on control mechanisms in video diffusion models, analyzing how different types of conditions can be incorporated into the denoising process to guide generation. Finally, we categorize existing methods based on the types of control signals they leverage, including single-condition generation, multi-condition generation, and universal controllable generation. For a complete list of the literature on controllable video generation reviewed, please visit our curated repository at https://github.com/mayuelala/Awesome-Controllable-Video-Generation.
+ oai:arXiv.org:2507.16869v3
+ cs.GR
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yue Ma, Kunyu Feng, Zhongyuan Hu, Xinyu Wang, Yucheng Wang, Mingzhe Zheng, Bingyuan Wang, Qinghe Wang, Xuanhua He, Hongfa Wang, Chenyang Zhu, Hongyu Liu, Yingqing He, Zeyu Wang, Zhifeng Li, Xiu Li, Sirui Han, Yike Guo, Wei Liu, Dan Xu, Linfeng Zhang, Qifeng Chen
+
+
+ Multi-Stage Verification-Centric Framework for Mitigating Hallucination in Multi-Modal RAG
+ https://arxiv.org/abs/2507.20136
+ arXiv:2507.20136v2 Announce Type: replace
+Abstract: This paper presents the technical solution developed by team CRUISE for the KDD Cup 2025 Meta Comprehensive RAG Benchmark for Multi-modal, Multi-turn (CRAG-MM) challenge. The challenge aims to address a critical limitation of modern Vision Language Models (VLMs): their propensity to hallucinate, especially when faced with egocentric imagery, long-tail entities, and complex, multi-hop questions. This issue is particularly problematic in real-world applications where users pose fact-seeking queries that demand high factual accuracy across diverse modalities. To tackle this, we propose a robust, multi-stage framework that prioritizes factual accuracy and truthfulness over completeness. Our solution integrates a lightweight query router for efficiency, a query-aware retrieval and summarization pipeline, a dual-pathways generation and a post-hoc verification. This conservative strategy is designed to minimize hallucinations, which incur a severe penalty in the competition's scoring metric. Our approach achieved 3rd place in Task 1, demonstrating the effectiveness of prioritizing answer reliability in complex multi-modal RAG systems. Our implementation is available at https://github.com/Breezelled/KDD-Cup-2025-Meta-CRAG-MM .
+ oai:arXiv.org:2507.20136v2
+ cs.CL
+ cs.AI
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Baiyu Chen, Wilson Wongso, Xiaoqian Hu, Yue Tan, Flora Salim
+
+
+ WEEP: A Differentiable Nonconvex Sparse Regularizer via Weakly-Convex Envelope
+ https://arxiv.org/abs/2507.20447
+ arXiv:2507.20447v2 Announce Type: replace
+Abstract: Sparse regularization is fundamental in signal processing and feature extraction but often relies on non-differentiable penalties, conflicting with gradient-based optimizers. We propose WEEP (Weakly-convex Envelope of Piecewise Penalty), a novel differentiable regularizer derived from the weakly-convex envelope framework. WEEP provides tunable, unbiased sparsity and a simple closed-form proximal operator, while maintaining full differentiability and L-smoothness, ensuring compatibility with both gradient-based and proximal algorithms. This resolves the tradeoff between statistical performance and computational tractability. We demonstrate superior performance compared to established convex and non-convex sparse regularizers on challenging compressive sensing and image denoising tasks.
+ oai:arXiv.org:2507.20447v2
+ cs.LG
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Takanobu Furuhashi, Hidekata Hontani, Qibin Zhao, Tatsuya Yokota
+
+
+ A Survey of Self-Evolving Agents: What, When, How, and Where to Evolve on the Path to Artificial Super Intelligence
+ https://arxiv.org/abs/2507.21046
+ arXiv:2507.21046v4 Announce Type: replace
+Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse tasks but remain fundamentally static, unable to adapt their internal parameters to novel tasks, evolving knowledge domains, or dynamic interaction contexts. As LLMs are increasingly deployed in open-ended, interactive environments, this static nature has become a critical bottleneck, necessitating agents that can adaptively reason, act, and evolve in real time. This paradigm shift -- from scaling static models to developing self-evolving agents -- has sparked growing interest in architectures and methods enabling continual learning and adaptation from data, interactions, and experiences. This survey provides the first systematic and comprehensive review of self-evolving agents, organizing the field around three foundational dimensions: what, when, and how to evolve. We examine evolutionary mechanisms across agent components (e.g., models, memory, tools, architecture), categorize adaptation methods by stages (e.g., intra-test-time, inter-test-time), and analyze the algorithmic and architectural designs that guide evolutionary adaptation (e.g., scalar rewards, textual feedback, single-agent and multi-agent systems). Additionally, we analyze evaluation metrics and benchmarks tailored for self-evolving agents, highlight applications in domains such as coding, education, and healthcare, and identify critical challenges and research directions in safety, scalability, and co-evolutionary dynamics. By providing a structured framework for understanding and designing self-evolving agents, this survey establishes a roadmap for advancing more adaptive, robust, and versatile agentic systems in both research and real-world deployments, and ultimately sheds light on the realization of Artificial Super Intelligence (ASI) where agents evolve autonomously and perform beyond human-level intelligence across tasks.
+ oai:arXiv.org:2507.21046v4
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Huan-ang Gao, Jiayi Geng, Wenyue Hua, Mengkang Hu, Xinzhe Juan, Hongzhang Liu, Shilong Liu, Jiahao Qiu, Xuan Qi, Yiran Wu, Hongru Wang, Han Xiao, Yuhang Zhou, Shaokun Zhang, Jiayi Zhang, Jinyu Xiang, Yixiong Fang, Qiwen Zhao, Dongrui Liu, Qihan Ren, Cheng Qian, Zhenhailong Wang, Minda Hu, Huazheng Wang, Qingyun Wu, Heng Ji, Mengdi Wang
+
+
+ MaPPO: Maximum a Posteriori Preference Optimization with Prior Knowledge
+ https://arxiv.org/abs/2507.21183
+ arXiv:2507.21183v3 Announce Type: replace
+Abstract: As the era of large language models (LLMs) on behalf of users unfolds, Preference Optimization (PO) methods have become a central approach to aligning LLMs with human preferences and improving performance. We propose Maximum a Posteriori Preference Optimization (MaPPO), a framework for learning from preferences that explicitly incorporates prior reward knowledge into the optimization objective. While existing methods such as Direct Preference Optimization (DPO) and its variants treat preference learning as a Maximum Likelihood Estimation (MLE) problem, MaPPO extends this paradigm by integrating prior reward estimates into a principled Maximum a Posteriori (MaP) objective. This not only generalizes DPO and its variants, but also enhances alignment by mitigating the oversimplified binary classification of responses. More importantly, MaPPO introduces no additional hyperparameter, and supports preference optimization in both offline and online settings. In addition, MaPPO can be used as a plugin with consistent improvement on DPO variants, including widely used SimPO, IPO, and CPO. Extensive empirical evaluations of different model sizes and model series on three standard benchmarks, including MT-Bench, AlpacaEval 2.0, and Arena-Hard, demonstrate consistent improvements in alignment performance without sacrificing computational efficiency.
+ oai:arXiv.org:2507.21183v3
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Guangchen Lan, Sipeng Zhang, Tianle Wang, Yuwei Zhang, Daoan Zhang, Xinpeng Wei, Xiaoman Pan, Hongming Zhang, Dong-Jun Han, Christopher G. Brinton
+
+
+ Conversations over Clicks: Impact of Chatbots on Information Search in Interdisciplinary Learning
+ https://arxiv.org/abs/2507.21490
+ arXiv:2507.21490v2 Announce Type: replace
+Abstract: This full research paper investigates the impact of generative AI (GenAI) on the learner experience, with a focus on how learners engage with and utilize the information it provides. In e-learning environments, learners often need to navigate a complex information space on their own. This challenge is further compounded in interdisciplinary fields like bioinformatics, due to the varied prior knowledge and backgrounds. In this paper, we studied how GenAI influences information search in bioinformatics research: (1) How do interactions with a GenAI chatbot influence learner orienteering behaviors?; and (2) How do learners identify information scent in GenAI chatbot responses? We adopted an autoethnographic approach to investigate these questions. GenAI was found to support orienteering once a learning plan was established, but it was counterproductive prior to that. Moreover, traditionally value-rich information sources such as bullet points and related terms proved less effective when applied to GenAI responses. Information scents were primarily recognized through the presence or absence of prior knowledge of the domain. These findings suggest that GenAI should be adopted into e-learning environments with caution, particularly in interdisciplinary learning contexts.
+ oai:arXiv.org:2507.21490v2
+ cs.HC
+ cs.CY
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/FIE63693.2025.11328556
+ 2025 IEEE Frontiers in Education Conference (FIE), Nashville, TN, USA, 2025, pp. 1-9
+ Hannah Kim, Sergei L. Kosakovsky Pond, Stephen MacNeil
+
+
+ Construction and educational application of a linguistically grounded dependency treebank for Uyghur
+ https://arxiv.org/abs/2507.21536
+ arXiv:2507.21536v2 Announce Type: replace
+Abstract: Developing effective educational technologies for low-resource agglutinative languages like Uyghur is often hindered by the mismatch between existing annotation frameworks and specific grammatical structures. To address this challenge, this study introduces the Modern Uyghur Dependency Treebank (MUDT), a linguistically grounded annotation framework specifically designed to capture the agglutinative complexity of Uyghur, including zero copula constructions and fine-grained case marking. Utilizing a hybrid pipeline that combines Large Language Model pre-annotation with rigorous human correction, a high-quality treebank consisting of 3,456 sentences was constructed. Intrinsic structural evaluation reveals that MUDT significantly improves dependency projectivity by reducing the crossing-arc rate from 7.35\% in the Universal Dependencies standard to 0.06\%. Extrinsic parsing experiments using UDPipe and Stanza further demonstrate that models trained on MUDT achieve superior in-domain accuracy and cross-domain generalization compared to UD-based baselines. To validate the practical utility of this computational resource, an AI-assisted grammar tutoring system was developed to translate MUDT-based syntactic analyses into interpretable pedagogical feedback. A controlled experiment involving 35 second-language learners indicated that students receiving syntax-aware feedback achieved significantly higher learning gains compared to those in a control group. These findings establish MUDT as a robust foundation for syntactic analysis and underscore the critical role of linguistically informed natural language processing resources in bridging the gap between computational models and the cognitive needs of second-language learners.
+ oai:arXiv.org:2507.21536v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Jiaxin Zuo, Yiquan Wang, Yuan Pan, Xiadiya Yibulayin
+
+
+ When Proximity Falls Short: Inequalities in Commuting and Accessibility by Public Transport in Santiago, Chile
+ https://arxiv.org/abs/2507.21743
+ arXiv:2507.21743v2 Announce Type: replace
+Abstract: Traditional measures of urban accessibility often rely on static models or survey data. However, location information from mobile networks now enables large-scale, dynamic analyses of how people navigate cities. This study uses eXtended Detail Records (XDRs) derived from mobile phone activity to analyze commuting patterns and accessibility inequalities in Santiago, Chile. First, we identify residential and work locations and model commuting routes using the R5 multimodal routing engine, which combines public transport and walking. To explore spatial patterns, we apply a bivariate spatial clustering analysis (LISA) alongside regression techniques to identify distinct commuting behaviors and their alignment with vulnerable population groups. Our findings reveal that average commuting times remain consistent across socioeconomic groups. However, despite residing in areas with greater opportunity density, higher-income populations do not consistently experience shorter commuting times. This highlights a disconnect between spatial proximity to opportunities and actual travel experience. Our analysis reveals significant disparities between sociodemographic groups, particularly regarding the distribution of indigenous populations and gender. Overall, the findings of our study suggest that commuting and accessibility inequalities in Santiago are closely linked to broader social and demographic structures.
+ oai:arXiv.org:2507.21743v2
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Cesar Marin-Flores, Leo Ferres, Henrikki Tenkanen
+
+
+ Evaluating Large Language Models (LLMs) in Financial NLP: A Comparative Study on Financial Report Analysis
+ https://arxiv.org/abs/2507.22936
+ arXiv:2507.22936v2 Announce Type: replace
+Abstract: Large language models (LLMs) are increasingly used to support the analysis of complex financial disclosures, yet their reliability, behavioral consistency, and transparency remain insufficiently understood in high-stakes settings. This paper presents a controlled evaluation of five transformer-based LLMs applied to question answering over the Business sections of U.S. 10-K filings. To capture complementary aspects of model behavior, we combine human evaluation, automated similarity metrics, and behavioral diagnostics under standardized and context-controlled prompting conditions. Human assessments indicate that models differ in their average performance across qualitative dimensions such as relevance, completeness, clarity, conciseness, and factual accuracy, though inter-rater agreement is modest, reflecting the subjective nature of these criteria. Automated metrics reveal systematic differences in lexical overlap and semantic similarity across models, while behavioral diagnostics highlight variation in response stability and cross-prompt alignment. Importantly, no single model consistently dominates across all evaluation perspectives. Together, these findings suggest that apparent performance differences should be interpreted as relative tendencies under the tested conditions rather than definitive indicators of general reliability. The results underscore the need for evaluation frameworks that account for human disagreement, behavioral variability, and interpretability when deploying LLMs in financially consequential applications.
+ oai:arXiv.org:2507.22936v2
+ cs.CL
+ cs.AI
+ cs.CE
+ cs.HC
+ q-fin.CP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Md Talha Mohsin
+
+
+ Med-R$^3$: Enhancing Medical Retrieval-Augmented Reasoning of LLMs via Progressive Reinforcement Learning
+ https://arxiv.org/abs/2507.23541
+ arXiv:2507.23541v4 Announce Type: replace
+Abstract: In medical scenarios, effectively retrieving external knowledge and leveraging it for rigorous logical reasoning is of significant importance. Despite their potential, existing work has predominantly focused on enhancing either retrieval or reasoning capabilities of the models in isolation, with little attention given to their joint optimization, which leads to limited coordination between the two processes. Additionally, current methods rely heavily on supervised fine-tuning (SFT), which can cause models to memorize existing problem-solving pathways, thereby restricting their generalization ability when confronted with novel problem contexts. Furthermore, while some studies have explored to improve retrieval-augmented reasoning in general domains via reinforcement learning, their reward function designs do not adequately capture the specific demands of the medical domain. To address these challenges, we introduce **Med-R$^3$**, a **Med**ical **R**etrieval-augmented **R**easoning framework driven by progressive **R**einforcement learning. In this framework, we first develop the model's ability to perform logical reasoning over medical problems. Subsequently, on the basis of this foundation, we adaptively optimize the retrieval capability to better align with the characteristics of knowledge corpus and external information utilization throughout the reasoning process. Finally, we conduct joint optimization of the model's retrieval and reasoning coordination. Extensive experiments indicate that **Med-R$^3$** could achieve state-of-the-art performances, with LLaMA3.1-8B-Instruct + Med-R$^3$ surpassing closed-sourced GPT-4o-mini by 3.93\% at a comparable parameter scale, while Qwen2.5-14B augmented with Med-R$^3$ shows a more substantial gain of 13.53\%.
+ oai:arXiv.org:2507.23541v4
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keer Lu, Zheng Liang, Youquan Li, Jiejun Tan, Xili Wang, Da Pan, Shusen Zhang, Guosheng Dong, Bin Cui, Yunhuai Liu, Wentao Zhang
+
+
+ Large AI Model-Enabled Secure Communications in Low-Altitude Wireless Networks: Concepts, Perspectives and Case Study
+ https://arxiv.org/abs/2508.00256
+ arXiv:2508.00256v2 Announce Type: replace
+Abstract: Low-altitude wireless networks (LAWNs) have the potential to revolutionize communications by supporting a range of applications, including urban parcel delivery, aerial inspections and air taxis. However, compared with traditional wireless networks, LAWNs face unique security challenges due to low-altitude operations, frequent mobility and reliance on unlicensed spectrum, making it more vulnerable to some malicious attacks. In this paper, we investigate some large artificial intelligence model (LAM)-enabled solutions for secure communications in LAWNs. Specifically, we first explore the amplified security risks and important limitations of traditional AI methods in LAWNs. Then, we introduce the basic concepts of LAMs and delve into the role of LAMs in addressing these challenges. To demonstrate the practical benefits of LAMs for secure communications in LAWNs, we propose a novel LAM-based optimization framework that leverages large language models (LLMs) to generate enhanced state features on top of handcrafted representations, and to design intrinsic rewards accordingly, thereby improving reinforcement learning performance for secure communication tasks. Through a typical case study, simulation results validate the effectiveness of the proposed framework. Finally, we outline future directions for integrating LAMs into secure LAWN applications.
+ oai:arXiv.org:2508.00256v2
+ cs.NI
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chuang Zhang, Geng Sun, Yijing Lin, Weijie Yuan, Sinem Coleri, Dusit Niyato
+
+
+ Wukong Framework for Not Safe For Work Detection in Text-to-Image systems
+ https://arxiv.org/abs/2508.00591
+ arXiv:2508.00591v2 Announce Type: replace
+Abstract: Text-to-Image (T2I) generation is a popular AI-generated content (AIGC) technology enabling diverse and creative image synthesis. However, some outputs may contain Not Safe For Work (NSFW) content (e.g., violence), violating community guidelines. Detecting NSFW content efficiently and accurately, known as external safeguarding, is essential. Existing external safeguards fall into two types: text filters, which analyze user prompts but overlook T2I model-specific variations and are prone to adversarial attacks; and image filters, which analyze final generated images but are computationally costly and introduce latency. Diffusion models, the foundation of modern T2I systems like Stable Diffusion, generate images through iterative denoising using a U-Net architecture with ResNet and Transformer blocks. We observe that: (1) early denoising steps define the semantic layout of the image, and (2) cross-attention layers in U-Net are crucial for aligning text and image regions. Based on these insights, we propose Wukong, a transformer-based NSFW detection framework that leverages intermediate outputs from early denoising steps and reuses U-Net's pre-trained cross-attention parameters. Wukong operates within the diffusion process, enabling early detection without waiting for full image generation. We also introduce a new dataset containing prompts, seeds, and image-specific NSFW labels, and evaluate Wukong on this and two public benchmarks. Results show that Wukong significantly outperforms text-based safeguards and achieves comparable accuracy of image filters, while offering much greater efficiency.
+ oai:arXiv.org:2508.00591v2
+ cs.CV
+ cs.AI
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mingrui Liu, Sixiao Zhang, Cheng Long
+
+
+ A Deep Reinforcement Learning-Based TCP Congestion Control Algorithm: Design, Simulation, and Evaluation
+ https://arxiv.org/abs/2508.01047
+ arXiv:2508.01047v3 Announce Type: replace
+Abstract: This paper introduces a Deep Reinforcement Learning (DRL) based TCP congestion-control algorithm that uses a Deep Q-Network (DQN) to adapt the congestion window (cWnd) dynamically based on observed network state. The proposed approach utilizes DQNs to optimize the congestion window by observing key network parameters and taking real-time actions. The algorithm is trained and evaluated within the NS-3 network simulator using the OpenGym interface. The results demonstrate that the DRL-based algorithm provides a superior balance between throughput and latency compared to both traditional TCP New Reno and TCP Cubic algorithms. Specifically: Compared to TCP Cubic, the DRL algorithm achieved comparable throughput (statistically insignificant difference of -3.79%, $p>0.05$) while delivering a massive 46.29% reduction in Round-Trip Time (RTT). Furthermore, the DRL agent maintained near-zero packet loss, whereas Cubic suffered from significant buffer overflow. Compared to TCP New Reno, the DRL algorithm achieved comparable throughput (+0.38%) with a 32.40% reduction in RTT. Results from NS-3 simulations indicate that the proposed DRL agent effectively mitigates bufferbloat without compromising bandwidth utilization. This study emphasizes the potential of reinforcement learning techniques for solving complex congestion control problems in modern networks by learning the network capacity rather than saturating it.
+ oai:arXiv.org:2508.01047v3
+ cs.NI
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Efe A\u{g}lamazlar, Emirhan Eken, Harun Batur Ge\c{c}ici
+
+
+ Bias Association Discovery Framework for Open-Ended LLM Generations
+ https://arxiv.org/abs/2508.01412
+ arXiv:2508.01412v2 Announce Type: replace
+Abstract: Social biases embedded in Large Language Models (LLMs) raise critical concerns, resulting in representational harms -- unfair or distorted portrayals of demographic groups -- that may be expressed in subtle ways through generated language. Existing evaluation methods often depend on predefined identity-concept associations, limiting their ability to surface new or unexpected forms of bias. In this work, we present the Bias Association Discovery Framework (BADF), a systematic approach for extracting both known and previously unrecognized associations between demographic identities and descriptive concepts from open-ended LLM outputs. Through comprehensive experiments spanning multiple models and diverse real-world contexts, BADF enables robust mapping and analysis of the varied concepts that characterize demographic identities. Our findings advance the understanding of biases in open-ended generation and provide a scalable tool for identifying and analyzing bias associations in LLMs.
+ oai:arXiv.org:2508.01412v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jinhao Pan, Chahat Raj, Ziwei Zhu
+
+
+ ReflecSched: Solving Dynamic Flexible Job-Shop Scheduling via LLM-Powered Hierarchical Reflection
+ https://arxiv.org/abs/2508.01724
+ arXiv:2508.01724v3 Announce Type: replace
+Abstract: The NP-hard Dynamic Flexible Job-Shop Scheduling (DFJSP) problem involves real-time events and complex routing. While traditional rules are efficient but rigid, deep learning is opaque and requires feature engineering. Large Language Models (LLMs) promise adaptive reasoning without this engineering overhead, yet we find their direct application is suboptimal. Baseline LLMs suffer from three key pitfalls: the long-context paradox, where crucial data is underutilized; an underutilization of expert heuristics; and myopic decision-making. To address this, we propose ReflecSched, a framework that empowers the LLM beyond a direct scheduler by equipping it with a strategic analysis capability. ReflecSched tasks the LLM to analyze heuristic-driven simulations across multiple planning horizons and distill them into a concise, natural-language summary termed Strategic Experience. This summary is then integrated into the prompt of a final decision-making module, guiding it to produce non-myopic actions. Experiments demonstrate ReflecSched achieves superior performance, with its best variants attaining an average RPD of 6.09% and rank of 4.39 on GEN-Bench, significantly outperforming strong traditional and learning-based methods including HMPSAC and IDDQN. It also statistically and decisively surpasses direct LLM baselines, securing a 71.35% Win Rate while being, on average, 15.1% more token-efficient on Normal-scale problems. Furthermore, cumulative runtime analysis reveals that ReflecSched's zero-shot nature eliminates the training bottleneck, providing a decisive efficiency advantage in high-variability manufacturing environments. Ablation studies attribute this performance to a robust reflection mechanism that leverages high-quality, contrastive experience. Ultimately, the framework's performance is statistically on par with an oracle-like strategy, showcasing its effectiveness and robustness.
+ oai:arXiv.org:2508.01724v3
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shijie Cao, Yuan Yuan
+
+
+ HCF: Hierarchical Cascade Framework for Distributed Multi-Stage Image Compression
+ https://arxiv.org/abs/2508.02051
+ arXiv:2508.02051v3 Announce Type: replace
+Abstract: Distributed multi-stage image compression -- where visual content traverses multiple processing nodes under varying quality requirements -- poses challenges. Progressive methods enable bitstream truncation but underutilize available compute resources; successive compression repeats costly pixel-domain operations and suffers cumulative quality loss and inefficiency; fixed-parameter models lack post-encoding flexibility. In this work, we developed the Hierarchical Cascade Framework (HCF) that achieves high rate-distortion performance and better computational efficiency through direct latent-space transformations across network nodes in distributed multi-stage image compression systems. Under HCF, we introduced policy-driven quantization control to optimize rate-distortion trade-offs, and established the edge quantization principle through differential entropy analysis. The configuration based on this principle demonstrates up to 0.6dB PSNR gains over other configurations. When comprehensively evaluated on the Kodak, CLIC, and CLIC2020-mobile datasets, HCF outperforms successive-compression methods by up to 5.56% BD-Rate in PSNR on CLIC, while saving up to 97.8% FLOPs, 96.5% GPU memory, and 90.0% execution time. It also outperforms state-of-the-art progressive compression methods by up to 12.64% BD-Rate on Kodak and enables retraining-free cross-quality adaptation with 7.13-10.87% BD-Rate reductions on CLIC2020-mobile.
+ oai:arXiv.org:2508.02051v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junhao Cai, Taegun An, Chengjun Jin, Sung Il Choi, Juhyun Park, Changhee Joo
+
+
+ Knowing When Not to Answer: Lightweight KB-Aligned OOD Detection for Safe RAG
+ https://arxiv.org/abs/2508.02296
+ arXiv:2508.02296v2 Announce Type: replace
+Abstract: Retrieval-Augmented Generation (RAG) systems are increasingly deployed in high-stakes domains, where safety depends not only on how a system answers, but also on whether a query should be answered given a knowledge base (KB). Out-of-domain (OOD) queries can cause dense retrieval to surface weakly related context and lead the generator to produce fluent but unjustified responses. We study lightweight, KB-aligned OOD detection as an always-on gate for RAG systems. Our approach applies PCA to KB embeddings and scores queries in a compact subspace selected either by explained-variance retention (EVR) or by a separability-driven t-test ranking. We evaluate geometric semantic-search rules and lightweight classifiers across 16 domains, including high-stakes COVID-19 and Substance Use KBs, and stress-test robustness using both LLM-generated attacks and an in-the-wild 4chan attack. We find that low-dimensional detectors achieve competitive OOD performance while being faster, cheaper, and more interpretable than prompted LLM-based judges. Finally, human and LLM-based evaluations show that OOD queries primarily degrade the relevance of RAG outputs, showing the need for efficient external OOD detection to maintain safe, in-scope behavior.
+ oai:arXiv.org:2508.02296v2
+ cs.CL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ilias Triantafyllopoulos, Renyi Qu, Salvatore Giorgi, Brenda Curtis, Lyle H. Ungar, Jo\~ao Sedoc
+
+
+ Large AI Models for Wireless Physical Layer
+ https://arxiv.org/abs/2508.02314
+ arXiv:2508.02314v2 Announce Type: replace
+Abstract: Large artificial intelligence models (LAMs) are transforming wireless physical layer technologies through their robust generalization, multitask processing, and multimodal capabilities. This article reviews recent advancements in applying LAMs to physical layer communications, addressing obstacles of conventional AI-based approaches. LAM-based solutions are classified into two strategies: leveraging pre-trained LAMs and developing native LAMs designed specifically for physical layer tasks. The motivations and key frameworks of these approaches are comprehensively examined through multiple use cases. Both strategies significantly improve performance and adaptability across diverse wireless scenarios. Future research directions, including efficient architectures, interpretability, standardized datasets, and collaboration between large and small models, are proposed to advance LAM-based physical layer solutions for next-generation communication systems.
+ oai:arXiv.org:2508.02314v2
+ cs.IT
+ cs.AI
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiajia Guo, Yiming Cui, Shi Jin, Jun Zhang
+
+
+ Adaptive Riemannian Graph Neural Networks
+ https://arxiv.org/abs/2508.02600
+ arXiv:2508.02600v2 Announce Type: replace
+Abstract: Graph data often exhibits complex geometric heterogeneity, where structures with varying local curvature, such as tree-like hierarchies and dense communities, coexist within a single network. Existing geometric GNNs, which embed graphs into single fixed-curvature manifolds or discrete product spaces, struggle to capture this diversity. We introduce Adaptive Riemannian Graph Neural Networks (ARGNN), a novel framework that learns a continuous and anisotropic Riemannian metric tensor field over the graph. It allows each node to determine its optimal local geometry, enabling the model to fluidly adapt to the graph's structural landscape. Our core innovation is an efficient parameterization of the node-wise metric tensor, specializing to a learnable diagonal form that captures directional geometric information while maintaining computational tractability. To ensure geometric regularity and stable training, we integrate a Ricci flow-inspired regularization that smooths the learned manifold. Theoretically, we establish the rigorous geometric evolution convergence guarantee for ARGNN and provide a continuous generalization that unifies prior fixed or mixed-curvature GNNs. Empirically, our method demonstrates superior performance on both homophilic and heterophilic benchmark datasets with the ability to capture diverse structures adaptively. Moreover, the learned geometries both offer interpretable insights into the underlying graph structure and empirically corroborate our theoretical analysis.
+ oai:arXiv.org:2508.02600v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xudong Wang, Chris Ding, Tongxin Li, Jicong Fan
+
+
+ SmallKV: Small Model Assisted Compensation of KV Cache Compression for Efficient LLM Inference
+ https://arxiv.org/abs/2508.02751
+ arXiv:2508.02751v2 Announce Type: replace
+Abstract: KV cache eviction has emerged as an effective solution to alleviate resource constraints faced by LLMs in long-context scenarios. However, existing token-level eviction methods often overlook two critical aspects: (1) their irreversible eviction strategy fails to adapt to dynamic attention patterns during decoding (the saliency shift problem), and (2) they treat both marginally important tokens and truly unimportant tokens equally, despite the collective significance of marginal tokens to model performance (the marginal information over-compression problem). To address these issues, we design two compensation mechanisms based on the high similarity of attention matrices between LLMs of different scales. We propose SmallKV, a small model assisted compensation method for KV cache compression. SmallKV can maintain attention matching between different-scale LLMs to: 1) assist the larger model in perceiving globally important information of attention; and 2) use the smaller model's attention scores to approximate those of marginal tokens in the larger model. Extensive experiments on benchmarks including GSM8K, BBH, MT-Bench, and LongBench demonstrate the effectiveness of SmallKV. Moreover, efficiency evaluations show that SmallKV achieves 1.75 - 2.56 times higher throughput than baseline methods, highlighting its potential for efficient and performant LLM inference in resource constrained environments.
+ oai:arXiv.org:2508.02751v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yi Zhao, Yajuan Peng, Cam-Tu Nguyen, Zuchao Li, Xiaoliang Wang, Hai Zhao, Xiaoming Fu
+
+
+ Infrared Object Detection with Ultra Small ConvNets: Is ImageNet Pretraining Still Useful?
+ https://arxiv.org/abs/2508.02927
+ arXiv:2508.02927v2 Announce Type: replace
+Abstract: Many real-world applications require recognition models that are robust to different operational conditions and modalities while also running on small embedded devices with limited hardware. While pre-training is known to be very beneficial to the accuracy and robustness of normal-size models, its effect on small models, which can be deployed on embedded and edge devices, is not clear. In this work, we investigate the effect of ImageNet pretraining on increasingly small backbone architectures (ultra-small models, with less than 1M parameters) with respect to robustness in downstream object detection tasks in the infrared visual modality. Using scaling laws derived from standard object recognition architectures, we construct two ultra-small backbone families and systematically study their performance. Our experiments on three different datasets reveal that while ImageNet pre-training is still useful, beyond a certain capacity threshold it offers diminishing returns in terms of out-of-distribution detection robustness. We therefore advise practitioners to still use pre-training and, when possible, to avoid overly small models: while they might work well for in-domain problems, they are brittle when working conditions differ.
+ oai:arXiv.org:2508.02927v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Srikanth Muralidharan, Heitor R. Medeiros, Masih Aminbeidokhti, Eric Granger, Marco Pedersoli
+
+
+ RCP-Merging: Merging Long Chain-of-Thought Models with Domain-Specific Models by Considering Reasoning Capability as Prior
+ https://arxiv.org/abs/2508.03140
+ arXiv:2508.03140v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) with long chain-of-thought (CoT) capability, termed Reasoning Models, demonstrate superior abilities in solving intricate problems through multi-step long CoT reasoning. To create a dual-capability model with long CoT capability and domain-specific knowledge without substantial computational and data costs, model merging emerges as a highly resource-efficient method. However, significant challenges lie in merging domain-specific LLMs with long CoT ones, since existing merging methods suffer from reasoning-capability degradation and even gibberish output and output collapse. To overcome this, we introduce RCP-Merging: Merging Long Chain-of-Thought Models with Domain-Specific Models by Considering Reasoning Capability as Prior, a novel merging framework designed to integrate domain-specific LLMs with long CoT capability while maintaining model performance in the original domain. Treating the reasoning model weights as a foundational prior, our method utilizes a reasoning capability indicator to preserve core long CoT capability model weights while selectively merging essential domain-specific weights. We conducted extensive experiments on Qwen2.5-7B, Llama3.1-8B, and Qwen2.5-1.5B models in the BioMedicine and Finance domains. Our results show that RCP-Merging successfully merges a reasoning model with domain-specific ones, improving domain task performance by 9.5% and 9.2% over state-of-the-art methods, without significantly harming the original long CoT reasoning capability.
+ oai:arXiv.org:2508.03140v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junyao Yang, Jianwei Wang, Huiping Zhuang, Cen Chen, Ziqian Zeng
+
+
+ Revisiting Deep Information Propagation: Fractal Frontier and Finite-size Effects
+ https://arxiv.org/abs/2508.03222
+ arXiv:2508.03222v2 Announce Type: replace
+Abstract: Information propagation characterizes how input correlations evolve across layers in deep neural networks. This framework has been well studied using mean-field theory, which assumes infinitely wide networks. However, these assumptions break down for practical, finite-size networks. In this work, we study information propagation in randomly initialized neural networks with finite width and reveal that the boundary between ordered and chaotic regimes exhibits a fractal structure. This shows the fundamental complexity of neural network dynamics, in a setting that is independent of input data and optimization. To extend this analysis beyond multilayer perceptrons, we leverage recently introduced Fourier-based structured transforms and show that information propagation in convolutional neural networks also follows the same behavior. In practice, our investigation highlights the importance of finite network depth with respect to the tradeoff between separation and robustness.
+ oai:arXiv.org:2508.03222v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Giuseppe Alessio D'Inverno, Zhiyuan Hu, Leo Davy, Michael Unser, Gianluigi Rozza, Jonathan Dong
+
+
+ Matrix-Free Two-to-Infinity and One-to-Two Norms Estimation
+ https://arxiv.org/abs/2508.04444
+ arXiv:2508.04444v2 Announce Type: replace
+Abstract: In this paper, we propose new randomized algorithms for estimating the two-to-infinity and one-to-two norms in a matrix-free setting, using only matrix-vector multiplications. Our methods are based on appropriate modifications of Hutchinson's diagonal estimator and its Hutch++ version. We provide oracle complexity bounds for both modifications. We further illustrate the practical utility of our algorithms for Jacobian-based regularization in deep neural network training on image classification tasks. We also demonstrate that our methodology can be applied to mitigate the effect of adversarial attacks in the domain of recommender systems.
+ oai:arXiv.org:2508.04444v2
+ cs.LG
+ cs.NA
+ math.NA
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Askar Tsyganov, Evgeny Frolov, Sergey Samsonov, Maxim Rakhuba
+
+
+ HierarchicalPrune: Position-Aware Compression for Large-Scale Diffusion Models
+ https://arxiv.org/abs/2508.04663
+ arXiv:2508.04663v3 Announce Type: replace
+Abstract: State-of-the-art text-to-image diffusion models (DMs) achieve remarkable quality, yet their massive parameter scale (8-11B) poses significant challenges for inference on resource-constrained devices. In this paper, we present HierarchicalPrune, a novel compression framework grounded in a key observation: DM blocks exhibit distinct functional hierarchies, where early blocks establish semantic structures while later blocks handle texture refinements. HierarchicalPrune synergistically combines three techniques: (1) Hierarchical Position Pruning, which identifies and removes less essential later blocks based on position hierarchy; (2) Positional Weight Preservation, which systematically protects early model portions that are essential for semantic structural integrity; and (3) Sensitivity-Guided Distillation, which adjusts knowledge-transfer intensity based on our discovery of block-wise sensitivity variations. As a result, our framework brings billion-scale diffusion models into a range more suitable for on-device inference, while preserving the quality of the output images. Specifically, combined with INT4 weight quantisation, HierarchicalPrune achieves 77.5-80.4% memory footprint reduction (e.g., from 15.8 GB to 3.2 GB) and 27.9-38.0% latency reduction, measured on server and consumer-grade GPUs, with a minimal drop of 2.6% in GenEval score and 7% in HPSv2 score compared to the original model. Finally, our comprehensive user study with 85 participants demonstrates that HierarchicalPrune maintains perceptual quality comparable to the original model while significantly outperforming prior works.
+ oai:arXiv.org:2508.04663v3
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Young D. Kwon, Rui Li, Sijia Li, Da Li, Sourav Bhattacharya, Stylianos I. Venieris
+
+
+ AttriLens-Mol: Attribute Guided Reinforcement Learning for Molecular Property Prediction with Large Language Models
+ https://arxiv.org/abs/2508.04748
+ arXiv:2508.04748v3 Announce Type: replace
+Abstract: Large Language Models (LLMs) have shown promise in assisting molecular property prediction tasks but often rely on human-crafted prompts and chain-of-thought templates. While recent advanced large reasoning models like DeepSeek-R1 employ reinforcement learning for an extended ``thinking'' process, their reasoning can be verbose and lack relevance. We introduce AttriLens-Mol, an attribute-guided reinforcement learning framework for molecular property prediction with LLMs. AttriLens-Mol steers the model's reasoning by using: (1) a format reward encouraging attribute-based structured output, (2) a count reward to avoid enumerating irrelevant attributes, and (3) a rationality reward using advanced LLMs and RDKit to verify the relatedness of the generated attributes. This approach implicitly elicits the model's inherent knowledge of relevant molecular attributes during reasoning, enabling more effective molecular property prediction. Experiments on both in-distribution and out-of-distribution datasets show that training both 7B-size R1-Distilled-Qwen2.5 and R1-Distilled-LLaMA3.1 models on 4,000 samples with our proposed AttriLens-Mol method significantly boosts performance, yielding comparable or better results than supervised fine-tuning models (Mol-Instructions, ChemDFM, etc.) and advanced models (GPT-3.5, GPT-4o, DeepSeek-V3, DeepSeek-R1, etc.). Further, our extracted attributes for the target property, when used as features for an interpretable decision tree model, yield superior performance compared to attributes generated by prompting LLMs. This shows that AttriLens-Mol effectively elicits more relevant and predictive molecular attributes, leading to enhanced interpretability and performance for property prediction. We release the code at https://github.com/szu-tera/AttriLens-Mol.
+ oai:arXiv.org:2508.04748v3
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xuan Lin, Long Chen, Yile Wang
+
+
+ Single-Step Reconstruction-Free Anomaly Detection and Segmentation via Diffusion Models
+ https://arxiv.org/abs/2508.04818
+ arXiv:2508.04818v2 Announce Type: replace
+Abstract: Generative models have demonstrated significant success in anomaly detection and segmentation over the past decade. Recently, diffusion models have emerged as a powerful alternative, outperforming previous approaches such as GANs and VAEs. In typical diffusion-based anomaly detection, a model is trained on normal data, and during inference, anomalous images are perturbed to a predefined intermediate step in the forward diffusion process. The corresponding normal image is then reconstructed through iterative reverse sampling.
+ However, reconstruction-based approaches present three major challenges: (1) the reconstruction process is computationally expensive due to multiple sampling steps, making real-time applications impractical; (2) for complex or subtle patterns, the reconstructed image may correspond to a different normal pattern rather than the original input; and (3) choosing an appropriate intermediate noise level is challenging because it is application-dependent and often assumes prior knowledge of anomalies, an assumption that does not hold in unsupervised settings.
+ We introduce Reconstruction-free Anomaly Detection with Attention-based diffusion models in Real-time (RADAR), which overcomes the limitations of reconstruction-based anomaly detection. Unlike current SOTA methods that reconstruct the input image, RADAR directly produces anomaly maps from the diffusion model, improving both detection accuracy and computational efficiency. We evaluate RADAR on real-world 3D-printed material and the MVTec-AD dataset. Our approach surpasses state-of-the-art diffusion-based and statistical machine learning models across all key metrics, including accuracy, precision, recall, and F1 score. Specifically, RADAR improves F1 score by 7% on MVTec-AD and 13% on the 3D-printed material dataset compared to the next best model.
+ Code available at: https://github.com/mehrdadmoradi124/RADAR
+ oai:arXiv.org:2508.04818v2
+ cs.CV
+ eess.IV
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mehrdad Moradi, Marco Grasso, Bianca Maria Colosimo, Kamran Paynabar
+
+
+ CycleDiff: Cycle Diffusion Models for Unpaired Image-to-image Translation
+ https://arxiv.org/abs/2508.06625
+ arXiv:2508.06625v2 Announce Type: replace
+Abstract: We introduce a diffusion-based cross-domain image translator in the absence of paired training data. Unlike GAN-based methods, our approach integrates diffusion models to learn the image translation process, allowing broader coverage of the data distribution and performance improvement of the cross-domain translation. However, incorporating the translation process within the diffusion process is still challenging since the two processes are not aligned exactly, i.e., the diffusion process is applied to the noisy signal while the translation process is conducted on the clean signal. As a result, recent diffusion-based studies employ separate training or shallow integration to learn the two processes, yet this may trap the translation optimization in local minima, constraining the effectiveness of diffusion models. To address the problem, we propose a novel joint learning framework that aligns the diffusion and the translation process, thereby improving the global optimality. Specifically, we propose to extract the image components with diffusion models to represent the clean signal and employ the translation process with the image components, enabling an end-to-end joint learning manner. On the other hand, we introduce a time-dependent translation network to learn the complex translation mapping, resulting in effective translation learning and significant performance improvement. Benefiting from the design of joint learning, our method enables global optimization of both processes, enhancing the optimality and achieving improved fidelity and structural consistency. We have conducted extensive experiments on RGB$\leftrightarrow$RGB and diverse cross-modality translation tasks including RGB$\leftrightarrow$Edge, RGB$\leftrightarrow$Semantics and RGB$\leftrightarrow$Depth, showcasing better generative performance than the state of the art.
+ oai:arXiv.org:2508.06625v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shilong Zou, Yuhang Huang, Renjiao Yi, Chenyang Zhu, Kai Xu
+
+
+ TurnGuide: Enhancing Meaningful Full Duplex Spoken Interactions via Dynamic Turn-Level Text-Speech Interleaving
+ https://arxiv.org/abs/2508.07375
+ arXiv:2508.07375v2 Announce Type: replace
+Abstract: Full-Duplex Speech Language Models (FD-SLMs) are specialized foundation models designed to enable natural, real-time spoken interactions by modeling complex conversational turn-taking such as interruptions, backchannels, and overlapping speech. End-to-end (e2e) FD-SLMs leverage real-world double-channel conversational data to capture nuanced two-speaker dialogue patterns for human-like interactions, but their conversational abilities often degrade compared to pure-text conversation due to prolonged speech sequences and limited high-quality spoken dialogue data. Although interleaved text-speech generation could mitigate this degradation, integrating discrete text tokens into continuous double-channel audio streams could disrupt the precise time alignment required for fluid interaction. To address this, we propose TurnGuide, a novel text-speech interleaved generation approach for e2e FD-SLMs that dynamically segments assistant speech into dialogue turns and interleaves turn-level text and speech generation. This approach allows FD-SLMs to integrate the semantic intelligence of LLMs without compromising the natural acoustic flow. Extensive experiments show that TurnGuide not only significantly improves e2e FD-SLMs to produce semantically meaningful, coherent speech but also achieves state-of-the-art performance on various turn-taking events. Demos are available at https://dreamtheater123.github.io/TurnGuide-Demo/. Code will be available at https://github.com/dreamtheater123/TurnGuide.
+ oai:arXiv.org:2508.07375v2
+ cs.CL
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Wenqian Cui, Lei Zhu, Xiaohui Li, Zhihan Guo, Haoli Bai, Lu Hou, Irwin King
+
+
+ Tailored Emotional LLM-Supporter: Enhancing Cultural Sensitivity
+ https://arxiv.org/abs/2508.07902
+ arXiv:2508.07902v2 Announce Type: replace
+Abstract: Large language models (LLMs) show promise in offering emotional support and generating empathetic responses for individuals in distress, but their ability to deliver culturally sensitive support remains underexplored due to a lack of resources. In this work, we introduce CultureCare, the first dataset designed for this task, spanning four cultures and including 1729 distress messages, 1523 cultural signals, and 1041 support strategies with fine-grained emotional and cultural annotations. Leveraging CultureCare, we (i) develop and test four adaptation strategies for guiding three state-of-the-art LLMs toward culturally sensitive responses; (ii) conduct comprehensive evaluations using LLM-as-a-Judge, in-culture human annotators, and clinical psychologists; (iii) show that adapted LLMs outperform anonymous online peer responses, and that simple cultural role-play is insufficient for cultural sensitivity; and (iv) explore the application of LLMs in clinical training, where experts highlight their potential in fostering cultural competence in novice therapists.
+ oai:arXiv.org:2508.07902v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Chen Cecilia Liu, Hiba Arnaout, Nils Kova\v{c}i\'c, Dana Atzil-Slonim, Iryna Gurevych
+
+
+ Calibration Attention: Learning Reliability-Aware Representations for Vision Transformers
+ https://arxiv.org/abs/2508.08547
+ arXiv:2508.08547v2 Announce Type: replace
+Abstract: Most calibration methods operate at the logit level, implicitly assuming that miscalibration can be corrected without changing the underlying representation. We challenge this assumption and propose \textbf{Calibration Attention (CalAttn)}, a \emph{representation-aware} calibration module for vision transformers that couples instance-wise temperature scaling to transformer token geometry under a proper scoring objective. CalAttn predicts a sample-specific temperature from the \texttt{[CLS]} token and backpropagates calibration gradients into the backbone, thereby reshaping the uncertainty structure of the representation rather than post-hoc adjusting confidence. This yields \emph{token-conditioned uncertainty modulation} with negligible overhead (\(<0.1\%\) additional parameters). Across multiple datasets with ViT/DeiT/Swin backbones, CalAttn consistently improves calibration while preserving accuracy, achieving relative ECE reductions of \(3.7\%\) to \(77.7\%\) over strong baselines across diverse training objectives. Our results indicate that treating calibration as a representation-level problem is a practical and effective direction for trustworthy uncertainty estimation in transformers. Code: [https://github.com/EagleAdelaide/CalibrationAttention-CalAttn-](https://github.com/EagleAdelaide/CalibrationAttention-CalAttn-)
+ oai:arXiv.org:2508.08547v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Wenhao Liang, Wei Emma Zhang, Lin Yue, Miao Xu, Mingyu Guo, Olaf Maennel, Weitong Chen
+
+
+ Transferable Model-agnostic Vision-Language Model Adaptation for Efficient Weak-to-Strong Generalization
+ https://arxiv.org/abs/2508.08604
+ arXiv:2508.08604v3 Announce Type: replace
+Abstract: Vision-Language Models (VLMs) have been widely used in various visual recognition tasks due to their remarkable generalization capabilities. As these models grow in size and complexity, fine-tuning becomes costly, emphasizing the need to reuse adaptation knowledge from 'weaker' models to efficiently enhance 'stronger' ones. However, existing adaptation transfer methods exhibit limited transferability across models due to their model-specific design and high computational demands. To tackle this, we propose Transferable Model-agnostic adapter (TransMiter), a light-weight adapter that improves vision-language models 'without backpropagation'. TransMiter captures the knowledge gap between pre-trained and fine-tuned VLMs, in an 'unsupervised' manner. Once trained, this knowledge can be seamlessly transferred across different models without the need for backpropagation. Moreover, TransMiter consists of only a few layers, inducing a negligible additional inference cost. Notably, supplementing the process with a few labeled data further yields additional performance gain, often surpassing a fine-tuned stronger model, with a marginal training cost. Experimental results and analyses demonstrate that TransMiter effectively and efficiently transfers adaptation knowledge while preserving generalization abilities across VLMs of different sizes and architectures in visual recognition tasks.
+ oai:arXiv.org:2508.08604v3
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jihwan Park, Taehoon Song, Sanghyeok Lee, Miso Choi, Hyunwoo J. Kim
+
+
+ Optimum 1-Step Majority-Logic Decoding of Binary Reed-Muller Codes
+ https://arxiv.org/abs/2508.08736
+ arXiv:2508.08736v3 Announce Type: replace
+Abstract: The classical majority-logic decoder proposed by Reed for Reed-Muller codes RM(r, m) of order r and length 2^m unfolds in r+1 sequential steps, decoding message symbols from highest to lowest degree. Several follow-up decoding algorithms reduced the number of steps, but for a limited set of parameters, at the expense of reduced performance, or relying on the existence of certain combinatorial structures. We show that any one-step majority-logic decoder (that is, a decoder performing all majority votes simultaneously in one step without sequential processing) can correct at most d_min/4 errors for all values of r and m, where d_min denotes the code's minimum distance. We then introduce a new hard-decision decoder that completes the decoding in a single step and attains this error-correction limit. It applies to all r and m, and can be viewed as a parallel realization of Reed's original algorithm, decoding all message symbols simultaneously. Remarkably, we also prove that the decoder is optimum in the erasure setting: it recovers the message from any erasure pattern of up to d_min-1 symbols, the theoretical limit. To our knowledge, this is the first one-step decoder for RM codes that achieves both optimal erasure correction and the maximum one-step error correction capability.
+ oai:arXiv.org:2508.08736v3
+ cs.IT
+ math.CO
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hoang Ly, Emina Soljanin
+
+
+ Generation of Real-time Robotic Emotional Expressions Learning from Human Demonstration in Mixed Reality
+ https://arxiv.org/abs/2508.08999
+ arXiv:2508.08999v2 Announce Type: replace
+Abstract: Expressive behaviors in robots are critical for effectively conveying their emotional states during interactions with humans. In this work, we present a framework that autonomously generates realistic and diverse robotic emotional expressions based on expert human demonstrations captured in Mixed Reality (MR). Our system enables experts to teleoperate a virtual robot from a first-person perspective, capturing their facial expressions, head movements, and upper-body gestures, and mapping these behaviors onto corresponding robotic components including eyes, ears, neck, and arms. Leveraging a flow-matching-based generative process, our model learns to produce coherent and varied behaviors in real-time in response to moving objects, conditioned explicitly on given emotional states. A preliminary test validated the effectiveness of our approach for generating autonomous expressions.
+ oai:arXiv.org:2508.08999v2
+ cs.RO
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chao Wang, Michael Gienger, Fan Zhang
+
+
+ Integrating Reinforcement Learning with Visual Generative Models: Foundations and Advances
+ https://arxiv.org/abs/2508.10316
+ arXiv:2508.10316v3 Announce Type: replace
+Abstract: Generative models have made significant progress in synthesizing visual content, including images, videos, and 3D/4D structures. However, they are typically trained with surrogate objectives such as likelihood or reconstruction loss, which often misalign with perceptual quality, semantic accuracy, or physical realism. Reinforcement learning (RL) offers a principled framework for optimizing non-differentiable, preference-driven, and temporally structured objectives. Recent advances demonstrate its effectiveness in enhancing controllability, consistency, and human alignment across generative tasks. This survey provides a systematic overview of RL-based methods for visual content generation. We review the evolution of RL from classical control to its role as a general-purpose optimization tool, and examine its integration into image, video, and 3D/4D generation. Across these domains, RL serves not only as a fine-tuning mechanism but also as a structural component for aligning generation with complex, high-level goals. We conclude with open challenges and future research directions at the intersection of RL and generative modeling.
+ oai:arXiv.org:2508.10316v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuanzhi Liang, Yijie Fang, Ke Hao, Rui Li, Ziqi Ni, Ruijie Su, Chi Zhang
+
+
+ STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes
+ https://arxiv.org/abs/2508.10427
+ arXiv:2508.10427v3 Announce Type: replace
+Abstract: Vision-Language Models (VLMs) have been applied to autonomous driving to support decision-making in complex real-world scenarios. However, their training on static, web-sourced image-text pairs fundamentally limits the precise spatiotemporal reasoning required to understand and predict dynamic traffic scenes. We address this critical gap with STRIDE-QA, a large-scale visual question answering (VQA) dataset for physically grounded reasoning from an ego-centric perspective. Constructed from 100 hours of multi-sensor driving data in Tokyo, capturing diverse and challenging conditions, STRIDE-QA is the largest VQA dataset for spatiotemporal reasoning in urban driving, offering 16M QA pairs over 270K frames. Grounded by dense, automatically generated annotations including 3D bounding boxes, segmentation masks, and multi-object tracks, the dataset uniquely supports both object-centric and ego-centric reasoning through three novel QA tasks that require spatial localization and temporal prediction. Our benchmarks demonstrate that existing VLMs struggle significantly, with near-zero scores on prediction consistency. In contrast, VLMs fine-tuned on STRIDE-QA exhibit dramatic performance gains, achieving 55% success in spatial localization and 28% consistency in future motion prediction, compared to near-zero scores from general-purpose VLMs. Therefore, STRIDE-QA establishes a comprehensive foundation for developing more reliable VLMs for safety-critical autonomous systems.
+ oai:arXiv.org:2508.10427v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keishi Ishihara, Kento Sasaki, Tsubasa Takahashi, Daiki Shiono, Yu Yamaguchi
+
+
+ SPHENIC: Topology-Aware Multi-View Clustering for Spatial Transcriptomics
+ https://arxiv.org/abs/2508.10646
+ arXiv:2508.10646v2 Announce Type: replace
+Abstract: Spatial transcriptomics clustering is pivotal for identifying cell subpopulations by leveraging spatial location information. While recent graph-based methods modeling cell-cell interactions have improved clustering accuracy, they remain limited in two key aspects: (i) reliance on local aggregation in static graphs often fails to capture robust global topological structures (e.g., loops and voids) and is vulnerable to noisy edges; and (ii) dimensionality reduction techniques frequently neglect spatial coherence, causing physically adjacent spots to be erroneously separated in the latent space. To overcome these challenges, we propose SPHENIC, a Spatial Persistent Homology-Enhanced Neighborhood Integrative Clustering method. Specifically, it explicitly incorporates topology-invariant features into the clustering network to ensure robust representation learning against noise. Furthermore, we design a dual-regularized optimization module that imposes spatial constraints alongside distributional optimization, ensuring that the embedding space preserves the physical proximity of cells. Extensive experiments on 11 benchmark datasets demonstrate that SPHENIC outperforms state-of-the-art methods by 4.19%-9.14%, validating its superiority in characterizing complex tissue architectures.
+ oai:arXiv.org:2508.10646v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chenkai Guo, Yikai Zhu, Renxiang Guan, Jinli Ma, Siwei Wang, Ke Liang, Guangdun Peng, Dayu Hu
+
+
+ Beyond "Not Novel Enough": Enriching Scholarly Critique with LLM-Assisted Feedback
+ https://arxiv.org/abs/2508.10795
+ arXiv:2508.10795v4 Announce Type: replace
+Abstract: Novelty assessment is a central yet understudied aspect of peer review, particularly in high-volume fields like NLP where reviewer capacity is increasingly strained. We present a structured approach for automated novelty evaluation that models expert reviewer behavior through three stages: content extraction from submissions, retrieval and synthesis of related work, and structured comparison for evidence-based assessment. Our method is informed by a large-scale analysis of human-written novelty reviews and captures key patterns such as independent claim verification and contextual reasoning. Evaluated on 182 ICLR 2025 submissions with human-annotated reviewer novelty assessments, the approach achieves 86.5% alignment with human reasoning and 75.3% agreement on novelty conclusions, substantially outperforming existing LLM-based baselines. The method produces detailed, literature-aware analyses and improves consistency over ad hoc reviewer judgments. These results highlight the potential for structured LLM-assisted approaches to support more rigorous and transparent peer review without displacing human expertise. Data and code are made available.
+ oai:arXiv.org:2508.10795v4
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Osama Mohammed Afzal, Preslav Nakov, Tom Hope, Iryna Gurevych
+
+
+ ToxiFrench: Benchmarking and Enhancing Language Models via CoT Fine-Tuning for French Toxicity Detection
+ https://arxiv.org/abs/2508.11281
+ arXiv:2508.11281v2 Announce Type: replace
+Abstract: Detecting toxic content using language models is crucial yet challenging. While substantial progress has been made in English, toxicity detection in French remains underdeveloped, primarily due to the lack of culturally relevant, human-annotated, large-scale datasets. In this work, we release ToxiFrench, a dataset of 53,622 French online comments together with a balanced benchmark split for systematic evaluation. The dataset is constructed via a semi-automated annotation pipeline that reduces manual labeling to only 10% through high-confidence LLM-based pre-annotation and human verification, while ensuring statistical alignment with human-only annotation. We then benchmark a broad range of models and uncover a counterintuitive finding: Small Language Models (SLMs) often surpass larger models in robustness and generalization on this task. Motivated by this finding, we propose a novel Chain-of-Thought (CoT) fine-tuning strategy using a Dynamic Weighted Loss (DWL) that progressively emphasizes the model's final decision and significantly improves faithfulness. Our fine-tuned 4B model (Qwen3-4B) achieves state-of-the-art performance on the benchmark. It improves its balanced accuracy by 10% over its baseline and achieves better performance than GPT-4o and DeepSeek-R1 on our benchmark, while successfully retaining cross-lingual capabilities.
+ oai:arXiv.org:2508.11281v2
+ cs.CL
+ cs.AI
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Axel Delaval, Shujian Yang, Haicheng Wang, Han Qiu, Jialiang Lu
+
+
+ Diagnostic-Guided Dynamic Profile Optimization for LLM-based User Simulators in Sequential Recommendation
+ https://arxiv.org/abs/2508.12645
+ arXiv:2508.12645v5 Announce Type: replace
+Abstract: Recent advances in large language models (LLMs) have enabled realistic user simulators for developing and evaluating recommender systems (RSs). However, existing LLM-based simulators for RSs face two major limitations: (1) static and single-step prompt-based inference that leads to inaccurate and incomplete user profile construction; (2) an unrealistic, single-round recommendation-feedback interaction pattern that fails to capture real-world scenarios. To address these limitations, we propose DGDPO (Diagnostic-Guided Dynamic Profile Optimization), a novel framework that constructs user profiles through a dynamic and iterative optimization process to enhance simulation fidelity. Specifically, DGDPO incorporates two core modules within each optimization loop: first, a specialized LLM-based diagnostic module, calibrated through our novel training strategy, accurately identifies specific defects in the user profile. Subsequently, a generalized LLM-based treatment module analyzes the diagnosed defect and generates targeted suggestions to refine the profile. Furthermore, unlike existing LLM-based user simulators that are limited to single-round interactions, we are the first to integrate DGDPO with sequential recommenders, enabling a bidirectional evolution where user profiles and recommendation strategies adapt to each other over multi-round interactions. Extensive experiments conducted on three real-world datasets demonstrate the effectiveness of our proposed framework.
+ oai:arXiv.org:2508.12645v5
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hongyang Liu, Zhu Sun, Tianjun Wei, Yan Wang, Jiajie Zhu, Xinghua Qu
+
+
+ V2P: Visual Attention Calibration for GUI Grounding via Background Suppression and Center Peaking
+ https://arxiv.org/abs/2508.13634
+ arXiv:2508.13634v3 Announce Type: replace
+Abstract: Precise localization of GUI elements is crucial for the development of GUI agents. Traditional methods rely on bounding box or center-point regression, neglecting spatial interaction uncertainty and visual-semantic hierarchies. Recent methods incorporate attention mechanisms but still face two key issues: (1) failing to handle background regions causes attention to drift away from the desired area, and (2) uniformly modeling the target UI element fails to distinguish between its center and edges, leading to click imprecision. Inspired by how humans visually process and interact with GUI elements, we propose the Valley-to-Peak (V2P) method to address these issues. To mitigate background distractions, V2P introduces a suppression attention mechanism that minimizes the model's focus on irrelevant regions to highlight the intended region. For the issue of center-edge distinction, V2P applies a Fitts' Law-inspired approach by modeling GUI interactions as 2D Gaussian heatmaps whose weight gradually decreases from the center towards the edges. The weight distribution follows a Gaussian function, with the variance determined by the target's size. Consequently, V2P effectively isolates the target area and teaches the model to concentrate on the most essential point of the UI element. A model trained with V2P achieves 92.4\% and 52.5\% on the ScreenSpot-v2 and ScreenSpot-Pro benchmarks, respectively. Ablations further confirm each component's contribution, underscoring V2P's generalizability in precise GUI grounding tasks and its potential for real-world deployment in future GUI agents.
+ oai:arXiv.org:2508.13634v3
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jikai Chen, Long Chen, Dong Wang, Qinglin Su, Zhixuan Chu, Bingguang Hao, Leilei Gan, Chenyi Zhuang, Jinjie Gu
+
+
+ SAGA: Learning Signal-Aligned Distributions for Improved Text-to-Image Generation
+ https://arxiv.org/abs/2508.13866
+ arXiv:2508.13866v2 Announce Type: replace
+Abstract: State-of-the-art text-to-image models produce visually impressive results but often struggle with precise alignment to text prompts, leading to missing critical elements or unintended blending of distinct concepts. We propose a novel approach that learns a high-success-rate distribution conditioned on a target prompt, ensuring that generated images faithfully reflect the corresponding prompts. Our method explicitly models the signal component during the denoising process, offering fine-grained control that mitigates over-optimization and out-of-distribution artifacts. Moreover, our framework is training-free and seamlessly integrates with both existing diffusion and flow matching architectures. It also supports additional conditioning modalities -- such as bounding boxes -- for enhanced spatial alignment. Extensive experiments demonstrate that our approach outperforms current state-of-the-art methods. The code is available at https://github.com/grimalPaul/gsn-factory.
+ oai:arXiv.org:2508.13866v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Paul Grimal, Micha\"el Soumm, Herv\'e Le Borgne, Olivier Ferret, Akihiro Sugimoto
+
+
+ Scaled Signed Averaging Improves In-Context and Early Learning Benchmark Performance in Small Transformers
+ https://arxiv.org/abs/2508.14685
+ arXiv:2508.14685v3 Announce Type: replace
+Abstract: While large language models' abilities for in-context learning (ICL) have seen much success, they have limitations on simple semantic tasks involving quantifiers like {\em every} and {\em some}, as well as on tasks with linear functions. We analyze those limitations and identify Softmax, the scoring function in the attention mechanism, as a contributing factor. Our \textbf{scaled signed averaging (SSA)}, a novel scoring function, mitigates these limitations. SSA significantly improves performance on our ICL tasks. In addition, SSA outperforms transformer models with Softmax on several early-learning NLP benchmarks and linguistic probing tasks in zero- and few-shot settings.
+ oai:arXiv.org:2508.14685v3
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Omar Naim, Swarnadeep Bhar, J\'er\^ome Bolte, Nicholas Asher
+
+
+ Efficient Switchable Safety Control in LLMs via Magic-Token-Guided Co-Training
+ https://arxiv.org/abs/2508.14904
+ arXiv:2508.14904v3 Announce Type: replace
+Abstract: Current methods for content safety in Large Language Models (LLMs), such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), often rely on multi-stage training pipelines and lack fine-grained, post-deployment controllability. To address these limitations, we propose a unified co-training framework that efficiently integrates multiple safety behaviors: positive (lawful/prosocial), negative (unfiltered/risk-prone) and rejective (refusal-oriented/conservative) within a single SFT stage. Notably, each behavior is dynamically activated via a simple system-level instruction, or magic token, enabling stealthy and efficient behavioral switching at inference time. This flexibility supports diverse deployment scenarios, such as positive for safe user interaction, negative for internal red-teaming, and rejective for context-aware refusals triggered by upstream moderation signals. This co-training strategy induces a distinct Safety Alignment Margin in the output space, characterized by well-separated response distributions corresponding to each safety mode. The existence of this margin provides empirical evidence for the model's safety robustness and enables unprecedented fine-grained control. Experiments show that our method matches the safety alignment quality of SFT+DPO, with our 8B model notably surpassing DeepSeek-R1 (671B) in safety performance, while significantly reducing both training complexity and deployment costs. This work presents a scalable, efficient, and highly controllable solution for LLM content safety.
+ oai:arXiv.org:2508.14904v3
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianfeng Si, Lin Sun, Zhewen Tan, Xiangzheng Zhang
+
+
+ High-Capacity and Low-PAPR BICM-OFDM Systems Using Non-Equiprobable and Non-Uniform Constellation Shaping With Clipping and Filtering
+ https://arxiv.org/abs/2508.15639
+ arXiv:2508.15639v2 Announce Type: replace
+Abstract: We address a design of high-capacity and low-peak-to-average power ratio (PAPR) orthogonal frequency-division multiplexing (OFDM) systems based on bit-interleaved coded modulation (BICM) utilizing non-equiprobable and non-uniform (NENU) constellations as well as clipping and filtering (CAF). The proposed constellations are generated using a truncated Gaussian distribution and the merging of constellation points, where the former creates a non-uniform constellation (NUC), and the latter adjusts the number of signal points for further improving the total bit-wise mutual information (BMI). Unlike other exhaustive search-based approaches, the proposed constellations are uniquely determined by only two parameters associated with NUC and cardinality. Due to this property of limited degrees of freedom, the complexity required for the numerical optimization process can be significantly low. We focus on the constellation design based on one dimension, i.e., pulse amplitude modulation (PAM), which facilitates the reduction of demapping complexity for the BICM receiver. The use of CAF at the transmitter can efficiently reduce the PAPR of OFDM signals; however, it introduces clipping noise that may degrade error rate performance, making the application of clipping noise cancellation (CNC) at the receiver essential. Therefore, we optimize the NENU constellations in the presence of CAF and CNC. Simulation results demonstrate that the combination of constellation shaping with CAF and CNC enables BICM-OFDM systems to simultaneously achieve low PAPR and high spectral efficiency over additive white Gaussian noise (AWGN) as well as frequency-selective fading channels. Furthermore, comparative studies confirm that the proposed system significantly outperforms the single-carrier counterpart (i.e., DFT-precoded BICM-OFDM) in terms of PAPR and bit error rate (BER) performance over fading channels.
+ oai:arXiv.org:2508.15639v2
+ cs.IT
+ eess.SP
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Eito Kurihara, Hideki Ochiai
+
+
+ SurGE: A Benchmark and Evaluation Framework for Scientific Survey Generation
+ https://arxiv.org/abs/2508.15658
+ arXiv:2508.15658v4 Announce Type: replace
+Abstract: The rapid growth of academic literature makes the manual creation of scientific surveys increasingly infeasible. While large language models show promise for automating this process, progress in this area is hindered by the absence of standardized benchmarks and evaluation protocols. To bridge this critical gap, we introduce SurGE (Survey Generation Evaluation), a new benchmark for scientific survey generation in computer science. SurGE consists of (1) a collection of test instances, each including a topic description, an expert-written survey, and its full set of cited references, and (2) a large-scale academic corpus of over one million papers. In addition, we propose an automated evaluation framework that measures the quality of generated surveys across four dimensions: comprehensiveness, citation accuracy, structural organization, and content quality. Our evaluation of diverse LLM-based methods demonstrates a significant performance gap, revealing that even advanced agentic frameworks struggle with the complexities of survey generation and highlighting the need for future research in this area. We have open-sourced all the code, data, and models at: https://github.com/oneal2000/SurGE
+ oai:arXiv.org:2508.15658v4
+ cs.CL
+ cs.AI
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Weihang Su, Anzhe Xie, Qingyao Ai, Jianming Long, Xuanyi Chen, Jiaxin Mao, Ziyi Ye, Yiqun Liu
+
+
+ Amortized In-Context Mixed Effect Transformer Models: A Zero-Shot Approach for Pharmacokinetics
+ https://arxiv.org/abs/2508.15659
+ arXiv:2508.15659v3 Announce Type: replace
+Abstract: Accurate dose-response forecasting under sparse sampling is central to precision pharmacotherapy. We present the Amortized In-Context Mixed-Effect Transformer (AICMET) model, a transformer-based latent-variable framework that unifies mechanistic compartmental priors with amortized in-context Bayesian inference. AICMET is pre-trained on hundreds of thousands of synthetic pharmacokinetic trajectories with Ornstein-Uhlenbeck priors over the parameters of compartment models, endowing the model with strong inductive biases and enabling zero-shot adaptation to new compounds. At inference time, the decoder conditions on the collective context of previously profiled trial participants, generating calibrated posterior predictions for newly enrolled patients after a few early drug concentration measurements. This capability collapses traditional model-development cycles from weeks to hours while preserving some degree of expert modelling. Experiments across public datasets show that AICMET attains state-of-the-art predictive accuracy and faithfully quantifies inter-patient variability, outperforming both nonlinear mixed-effects baselines and recent neural ODE variants. Our results highlight the feasibility of transformer-based, population-aware neural architectures as a new alternative to bespoke pharmacokinetic modeling pipelines, charting a path toward truly population-aware personalized dosing regimens.
+ oai:arXiv.org:2508.15659v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ C\'esar Ali Ojeda Marin, Wilhelm Huisinga, Purity Kavwele, Rams\'es J. S\'anchez, Niklas Hartung
+
+
+ Unveiling Unicode's Unseen Underpinnings in Undermining Authorship Attribution
+ https://arxiv.org/abs/2508.15840
+ arXiv:2508.15840v4 Announce Type: replace
+Abstract: When using a public communication channel--whether formal or informal, such as commenting or posting on social media--end users have no expectation of privacy: they compose a message and broadcast it for the world to see. Even if an end user takes utmost precautions to anonymize their online presence--using an alias or pseudonym; masking their IP address; spoofing their geolocation; concealing their operating system and user agent; deploying encryption; registering with a disposable phone number or email; disabling non-essential settings; revoking permissions; and blocking cookies and fingerprinting--one obvious element still lingers: the message itself. Assuming they avoid lapses in judgment or accidental self-exposure, there should be little evidence to validate their actual identity, right? Wrong. The content of their message--necessarily open for public consumption--exposes an attack vector: stylometric analysis, or author profiling. In this paper, we dissect the technique of stylometry, discuss an antithetical counter-strategy in adversarial stylometry, and devise enhancements through Unicode steganography.
+ oai:arXiv.org:2508.15840v4
+ cs.CR
+ cs.CL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Robert Dilworth
+
+
+ A predictive modular approach to constraint satisfaction under uncertainty -- with application to glycosylation in continuous monoclonal antibody biosimilar production
+ https://arxiv.org/abs/2508.16803
+ arXiv:2508.16803v4 Announce Type: replace
+Abstract: The paper proposes a modular-based approach to constraint handling in process optimization and control. This is partly motivated by the recent interest in learning-based methods, e.g., within bioproduction, for which constraint handling under uncertainty is a challenge. The proposed constraint handler, called predictive filter, is combined with an adaptive constraint margin and a constraint violation cost monitor to minimize the cost of violating soft constraints due to model uncertainty and disturbances. The module can be combined with any controller and is based on minimally modifying the controller output, in a least squares sense, such that constraints are satisfied within the considered horizon. The proposed method is computationally efficient and suitable for real-time applications. The effectiveness of the method is illustrated through a realistic case study of glycosylation constraint satisfaction in continuous monoclonal antibody biosimilar production using Chinese hamster ovary cells, employing a metabolic network model consisting of 23 extracellular metabolites and 126 reactions. In the case study, the average constraint-violation cost is reduced by more than 60% compared to the case without the proposed constraint-handling method.
+ oai:arXiv.org:2508.16803v4
+ eess.SY
+ cs.SY
+ math.OC
+ q-bio.QM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.jprocont.2026.103632
+ Journal of Process Control, Volume 158, 2026, 103632, ISSN 0959-1524
+ Yu Wang, Xiao Chen, Hubert Schwarz, V\'eronique Chotteau, Elling W. Jacobsen
+
+
+ UM3: Unsupervised Map to Map Matching
+ https://arxiv.org/abs/2508.16874
+ arXiv:2508.16874v2 Announce Type: replace
+Abstract: Map-to-map matching is a critical task for aligning spatial data across heterogeneous sources, yet it remains challenging due to the lack of ground truth correspondences, sparse node features, and scalability demands. In this paper, we propose an unsupervised graph-based framework that addresses these challenges through three key innovations. First, our method is an unsupervised learning approach that requires no training data, which is crucial for large-scale map data where obtaining labeled training samples is challenging. Second, we introduce pseudo coordinates that capture the relative spatial layout of nodes within each map, which enhances feature discriminability and enables scale-invariant learning. Third, we design a mechanism to adaptively balance feature and geometric similarity, as well as a geometry-consistent loss function, ensuring robustness to noisy or incomplete coordinate data. At the implementation level, to handle large-scale maps, we develop a tile-based post-processing pipeline with overlapping regions and majority voting, which enables parallel processing while preserving boundary coherence. Experiments on real-world datasets demonstrate that our method achieves state-of-the-art accuracy in matching tasks, surpassing existing methods by a large margin, particularly in high-noise and large-scale scenarios. Our framework provides a scalable and practical solution for map alignment, offering a robust and efficient alternative to traditional approaches.
+ oai:arXiv.org:2508.16874v2
+ cs.LG
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chaolong Ying, Yinan Zhang, Lei Zhang, Jiazhuang Wang, Shujun Jia, Tianshu Yu
+
+
+ Investigating red packet fraud in Android applications: Insights from user reviews
+ https://arxiv.org/abs/2508.16941
+ arXiv:2508.16941v2 Announce Type: replace
+Abstract: With the popularization of smartphones, red packets have been widely used in mobile apps. However, the issues of fraud associated with them have also become increasingly prominent. As reported in user reviews from mobile app markets, many users have complained about experiencing red packet fraud and being persistently troubled by fraudulent red packets. To uncover this phenomenon, we conduct the first investigation into an extensive collection of user reviews on apps with red packets. In this paper, we first propose a novel automated approach, ReckDetector, for effectively identifying apps with red packets from app markets. We then collect over 360,000 real user reviews from 334 apps with red packets available on Google Play and three popular alternative Android app markets. We preprocess the user reviews to extract those related to red packets and fine-tune a pre-trained BERT model to identify negative reviews. Finally, based on semantic analysis, we have summarized six distinct categories of red packet fraud issues reported by users. Through our study, we found that red packet fraud is highly prevalent, significantly impacting user experience and damaging the reputation of apps. Moreover, red packets have been widely exploited by unscrupulous app developers as a deceptive incentive mechanism to entice users into completing their designated tasks, thereby maximizing their profits.
+ oai:arXiv.org:2508.16941v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1186/s42400-025-00459-1
+ Cybersecurity 9, 104 (2026)
+ Yu Cheng, Xiaofang Qi, Yanhui Li
+
+
+ Physics-Informed Kolmogorov-Arnold Networks for multi-material elasticity problems in electronic packaging
+ https://arxiv.org/abs/2508.16999
+ arXiv:2508.16999v2 Announce Type: replace
+Abstract: This paper proposes a Physics-Informed Kolmogorov-Arnold Network for analyzing elasticity problems in multi-material electronic packaging structures. The method replaces traditional Multi-Layer Perceptrons with Kolmogorov-Arnold Networks within an energy-based Physics-Informed Neural Network framework. By constructing admissible displacement fields satisfying essential boundary conditions and optimizing network parameters through numerical integration, the proposed method effectively handles material property discontinuities. Unlike traditional methods that require domain decomposition and interface constraints for multi-material problems, Kolmogorov-Arnold Networks' trainable B-spline activation functions provide inherent piecewise characteristics. This capability stems from B-splines' local support, which enables effective approximation of discontinuities despite their individual smoothness. Consequently, this approach enables accurate approximation across the entire domain using a single network and simplifying the computational framework. Numerical experiments demonstrate that the proposed method achieves excellent accuracy and robustness in multi-material elasticity problems, validating its practical potential for electronic packaging analysis. Source codes are available at https://github.com/yanpeng-gong/PIKAN-MultiMaterial.
+ oai:arXiv.org:2508.16999v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yanpeng Gong, Yida He, Yue Mei, Xiaoying Zhuang, Fei Qin, Timon Rabczuk
+
+
+ Optimizing Multi-Modality Trackers via Sensitivity-regularized Tuning
+ https://arxiv.org/abs/2508.17488
+ arXiv:2508.17488v2 Announce Type: replace
+Abstract: This paper tackles the critical challenge of optimizing multi-modality trackers by effectively adapting pre-trained models for RGB data. Existing fine-tuning paradigms oscillate between excessive freedom and over-restriction, both leading to a suboptimal plasticity-stability trade-off. To mitigate this dilemma, we propose a novel sensitivity-regularized fine-tuning framework, which delicately refines the learning process by incorporating intrinsic parameter sensitivities. Through a comprehensive investigation of the transition from pre-trained to multi-modal contexts, we identify that parameters sensitive to pivotal foundational patterns and cross-domain shifts are the primary drivers of this issue. Specifically, we first probe the tangent space of pre-trained weights to measure and orient prior sensitivities, dedicated to preserving generalization. Subsequently, we characterize transfer sensitivities during the tuning phase, emphasizing adaptability and stability. By incorporating these sensitivities as unified regularization terms, our method significantly enhances the transferability across modalities. Extensive experiments showcase the superior performance of our method, surpassing current state-of-the-art techniques across various multi-modality tracking benchmarks. The source code and models will be publicly available at https://github.com/zhiwen-xdu/SRTrack.
+ oai:arXiv.org:2508.17488v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zhiwen Chen, Jinjian Wu, Zhiyu Zhu, Yifan Zhang, Guangming Shi, Junhui Hou
+
+
+ FAIRGAMER: Evaluating Social Biases in LLM-Based Video Game NPCs
+ https://arxiv.org/abs/2508.17825
+ arXiv:2508.17825v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) have increasingly enhanced or replaced traditional Non-Player Characters (NPCs) in video games. However, these LLM-based NPCs inherit underlying social biases (e.g., race or class), posing fairness risks during in-game interactions. To address the limited exploration of this issue, we introduce FairGamer, the first benchmark to evaluate social biases across three interaction patterns: transaction, cooperation, and competition. FairGamer assesses four bias types, including class, race, age, and nationality, across 12 distinct evaluation tasks using a novel metric, FairMCV. Our evaluation of seven frontier LLMs reveals that: (1) models exhibit biased decision-making, with Grok-4-Fast demonstrating the highest bias (average FairMCV = 76.9%); and (2) larger LLMs display more severe social biases, suggesting that increased model capacity inadvertently amplifies these biases. We release FairGamer at https://github.com/Anonymous999-xxx/FairGamer to facilitate future research on NPC fairness.
+ oai:arXiv.org:2508.17825v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Bingkang Shi, Jen-tse Huang, Long Luo, Tianyu Zong, Hongzhu Yi, Yuanxiang Wang, Songlin Hu, Xiaodan Zhang, Zhongjiang Yao
+
+
+ Amortized Sampling with Transferable Normalizing Flows
+ https://arxiv.org/abs/2508.18175
+ arXiv:2508.18175v3 Announce Type: replace
+Abstract: Efficient equilibrium sampling of molecular conformations remains a core challenge in computational chemistry and statistical inference. Classical approaches such as molecular dynamics or Markov chain Monte Carlo inherently lack amortization; the computational cost of sampling must be paid in full for each system of interest. The widespread success of generative models has inspired interest towards overcoming this limitation through learning sampling algorithms. Despite performing competitively with conventional methods when trained on a single system, learned samplers have so far demonstrated limited ability to transfer across systems. We demonstrate that deep learning enables the design of scalable and transferable samplers by introducing Prose, a 285 million parameter all-atom transferable normalizing flow trained on a corpus of peptide molecular dynamics trajectories up to 8 residues in length. Prose draws zero-shot uncorrelated proposal samples for arbitrary peptide systems, achieving the previously intractable transferability across sequence length, whilst retaining the efficient likelihood evaluation of normalizing flows. Through extensive empirical evaluation we demonstrate the efficacy of Prose as a proposal for a variety of sampling algorithms, finding a simple importance sampling-based finetuning procedure to achieve competitive performance to established methods such as sequential Monte Carlo. We open-source the Prose codebase, model weights, and training dataset, to further stimulate research into amortized sampling methods and finetuning objectives.
+ oai:arXiv.org:2508.18175v3
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Charlie B. Tan, Majdi Hassan, Leon Klein, Saifuddin Syed, Dominique Beaini, Michael M. Bronstein, Alexander Tong, Kirill Neklyudov
+
+
+ Scene-Aware Vectorized Memory Multi-Agent Framework with Cross-Modal Differentiated Quantization VLMs for Visually Impaired Assistance
+ https://arxiv.org/abs/2508.18177
+ arXiv:2508.18177v3 Announce Type: replace
+Abstract: Visually impaired individuals face significant challenges in environmental perception. Traditional assistive technologies often lack adaptive intelligence, focusing on individual components rather than integrated systems. While Vision-Language Models (VLMs) offer a promising path to richer, integrated understanding, their deployment is severely limited by substantial computational requirements, demanding dozens of gigabytes of memory. To address these gaps in computational efficiency and integrated design, this study proposes a dual technological innovation framework: a cross-modal differentiated quantization framework for VLMs and a scene-aware vectorized memory multi-agent system. The quantization framework implements differentiated strategies, reducing memory from 38GB to 11.3GB. The multi-agent system uses vectorized memory and perception-memory-reasoning workflows to provide environmental information beyond the current view, achieving 2.83-3.52s latency to initial speech output. Experiments show the quantized 19B-parameter model only experiences a 2.05% performance drop on MMBench and maintains 63.7 accuracy on OCR-VQA (original: 64.9), outperforming smaller models with equivalent memory. This research advances computational efficiency and assistive technology, offering comprehensive assistance in scene perception, text recognition, and navigation.
+ oai:arXiv.org:2508.18177v3
+ cs.CV
+ cs.LG
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Xiangxiang Wang, Xuanyu Wang, YiJia Luo, Yongbin Yu, Manping Fan, Jingtao Zhang, Liyong Ren
+
+
+ MIRAGE: Scaling Test-Time Inference with Parallel Graph-Retrieval-Augmented Reasoning Chains
+ https://arxiv.org/abs/2508.18260
+ arXiv:2508.18260v2 Announce Type: replace
+Abstract: Large reasoning models (LRMs) have shown significant progress in test-time scaling through chain-of-thought prompting. Current approaches like search-o1 integrate retrieval augmented generation (RAG) into multi-step reasoning processes but rely on a single, linear reasoning chain while incorporating unstructured textual information in a flat, context-agnostic manner. As a result, these approaches can lead to error accumulation throughout the reasoning chain, which significantly limits their effectiveness in medical question-answering (QA) tasks where both accuracy and traceability are critical requirements. To address these challenges, we propose MIRAGE (Multi-chain Inference with Retrieval-Augmented Graph Exploration), a novel test-time scalable reasoning framework that performs dynamic multi-chain inference over structured medical knowledge graphs. Specifically, MIRAGE 1) decomposes complex queries into entity-grounded sub-questions, 2) executes parallel inference chains, 3) retrieves evidence adaptively via neighbor expansion and multi-hop traversal, and 4) integrates answers using cross-chain verification to resolve contradictions. Experiments on three medical QA benchmarks (GenMedGPT-5k, CMCQA, and ExplainCPE) show that MIRAGE consistently outperforms GPT-4o, Tree-of-Thought variants, and other retrieval-augmented baselines in both automatic and human evaluations. Additionally, MIRAGE improves interpretability by generating explicit reasoning chains that trace each factual claim to concrete chains within the knowledge graph, making it well-suited for complex medical reasoning scenarios. The code will be available for further research.
+ oai:arXiv.org:2508.18260v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Kaiwen Wei, Rui Shan, Dongsheng Zou, Jianzhong Yang, Bi Zhao, Junnan Zhu, Jiang Zhong
+
+
+ Answering the Unanswerable Is to Err Knowingly: Analyzing and Mitigating Abstention Failures in Large Reasoning Models
+ https://arxiv.org/abs/2508.18760
+ arXiv:2508.18760v3 Announce Type: replace
+Abstract: Large reasoning models (LRMs) have shown remarkable progress on complex reasoning tasks. However, some questions posed to LRMs are inherently unanswerable, such as math problems lacking sufficient conditions. We find that LRMs continually fail to provide appropriate abstentions when confronted with these unanswerable questions. In this paper, we systematically analyze, investigate, and resolve this issue for trustworthy AI. We first conduct a detailed analysis of the distinct response behaviors of LRMs when facing unanswerable questions. Then, we show that LRMs possess sufficient cognitive capabilities to recognize the flaws in these questions. However, they fail to exhibit appropriate abstention behavior, revealing a misalignment between their internal cognition and external response. Finally, to resolve this issue, we propose a lightweight, two-stage method that combines cognitive monitoring with inference-time intervention. Experimental results demonstrate that our method significantly improves the abstention rate while maintaining the overall reasoning performance.
+ oai:arXiv.org:2508.18760v3
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yi Liu, Xiangyu Liu, Zequn Sun, Wei Hu
+
+
+ Safe Navigation under State Uncertainty: Online Adaptation for Robust Control Barrier Functions
+ https://arxiv.org/abs/2508.19159
+ arXiv:2508.19159v2 Announce Type: replace
+Abstract: Measurements and state estimates are often imperfect in control practice, posing challenges for safety-critical applications, where safety guarantees rely on accurate state information. In the presence of estimation errors, several prior robust control barrier function (R-CBF) formulations have imposed strict conditions on the input. These methods can be overly conservative and can introduce issues such as infeasibility and high control effort. This work proposes a systematic method to improve R-CBFs, and demonstrates its advantages on a tracked vehicle that navigates among multiple obstacles. A primary contribution is a new optimization-based online parameter adaptation scheme that reduces the conservativeness of existing R-CBFs. In order to reduce the complexity of the parameter optimization, we merge several safety constraints into one unified numerical CBF via Poisson's equation. We further address the dual relative degree issue that typically causes difficulty in vehicle tracking. Experimental trials demonstrate the overall performance improvement of our approach over existing formulations.
+ oai:arXiv.org:2508.19159v2
+ eess.SY
+ cs.RO
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/LRA.2026.3653366
+ Ersin Das, Rahal Nanayakkara, Xiao Tan, Ryan M. Bena, Joel W. Burdick, Paulo Tabuada, Aaron D. Ames
+
+
+ Demystifying Scientific Problem-Solving in LLMs by Probing Knowledge and Reasoning
+ https://arxiv.org/abs/2508.19202
+ arXiv:2508.19202v2 Announce Type: replace
+Abstract: Scientific problem solving poses unique challenges for LLMs, requiring both deep domain knowledge and the ability to apply such knowledge through complex reasoning. While automated scientific reasoners hold great promise for assisting human scientists, there is currently no widely adopted holistic benchmark for evaluating scientific reasoning, and few approaches systematically disentangle the distinct roles of knowledge and reasoning in these tasks.
+ To address these gaps, we introduce SciReas, a diverse suite of existing benchmarks for scientific reasoning tasks, and SciReas-Pro, a selective subset that requires more complex reasoning. Our holistic evaluation surfaces insights about scientific reasoning performance that remain hidden when relying on individual benchmarks alone. We then propose KRUX, a probing framework for studying the distinct roles of reasoning and knowledge in scientific tasks.
+ Combining the two, we conduct an in-depth analysis that yields several key findings: (1) Retrieving task-relevant knowledge from model parameters is a critical bottleneck for LLMs in scientific reasoning; (2) Reasoning models consistently benefit from external knowledge added in-context on top of the reasoning enhancement; (3) Enhancing verbalized reasoning improves LLMs' ability to surface task-relevant knowledge.
+ oai:arXiv.org:2508.19202v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alan Li, Yixin Liu, Arpan Sarkar, Doug Downey, Arman Cohan
+
+
+ Discovering equations from data: symbolic regression in dynamical systems
+ https://arxiv.org/abs/2508.20257
+ arXiv:2508.20257v2 Announce Type: replace
+Abstract: The process of discovering equations from data lies at the heart of physics and in many other areas of research, including mathematical ecology and epidemiology. Recently, machine learning methods known as symbolic regression emerged as a way to automate this task. This study presents an overview of the current literature on symbolic regression, while also comparing the efficiency of five state-of-the-art methods in recovering the governing equations from nine processes, including chaotic dynamics and epidemic models. Benchmark results show that PySR is the most suitable method for inferring equations, with some estimates being indistinguishable from the original analytical forms. These results highlight the potential of symbolic regression as a robust tool for inferring and modeling real-world phenomena.
+ oai:arXiv.org:2508.20257v2
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Beatriz R. Brum, Luiza Lober, Isolde Previdelli, Francisco A. Rodrigues
+
+
+ Breaking Diffusion with Cache: Exploiting Approximate Caches in Diffusion Models
+ https://arxiv.org/abs/2508.20424
+ arXiv:2508.20424v2 Announce Type: replace
+Abstract: Diffusion models are a powerful class of generative models that produce content, such as images, from user prompts, but they are computationally intensive. To mitigate this cost, recent academic and industry work has adopted approximate caching, which reuses intermediate states from similar prompts in a cache. While efficient, this optimization introduces new security risks by breaking isolation among users. This work aims to comprehensively assess new security vulnerabilities arising from approximate caching. First, we demonstrate a remote covert channel established with the cache, where a sender injects prompts with special keywords into the cache and a receiver can recover them even days later, allowing the two to exchange information. Second, we introduce a prompt stealing attack using the cache, where an attacker can recover existing cached prompts based on cache hit prompts. Finally, we introduce a poisoning attack that embeds the attacker's logos into the previously stolen prompt, to render them in future user prompts that hit the cache. These attacks are all performed remotely through the serving system, which indicates severe security vulnerabilities in approximate caching.
+ oai:arXiv.org:2508.20424v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Desen Sun, Shuncheng Jie, Sihang Liu
+
+
+ Mass conservation analysis of extrusion-based 3D printing simulations based on the level-set method
+ https://arxiv.org/abs/2508.20617
+ arXiv:2508.20617v2 Announce Type: replace
+Abstract: Accurate numerical simulation of material extrusion additive manufacturing requires reliable tracking of evolving material interfaces while preserving mass conservation. Inaccurate mass conservation can lead to significant discrepancies between simulated and deposited strand geometries, undermining the predictive capability of the model. In this work, we investigate the mass conservation performance of the conservative level-set (CLS) method in extrusion-based 3D printing simulations. A systematic parametric study is conducted to quantify the influence of the interface thickness and reinitialization parameters on mass conservation, using the steady-state cross-sectional area of deposited strands as a quantitative metric. Simulated cross-sections are compared against reference values obtained from analytical mass balance relations. The results show that reducing both the interface thickness and the reinitialization parameter improves mass conservation accuracy, although diminishing returns and increased computational cost are observed beyond certain thresholds. In addition, appropriate tuning of the interface thickness can relax mesh refinement requirements while maintaining acceptable accuracy. The proposed parameter selection strategy is validated across a range of printing conditions, materials, and nozzle geometries, including multilayer deposition of viscoplastic fluids. The simulations show reasonable agreement with experimentally validated data from the literature, confirming that careful CLS parameter tuning enables accurate and computationally efficient prediction of strand geometry in extrusion-based 3D printing.
+ oai:arXiv.org:2508.20617v2
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Carlos J. G. Rojas, Md. Tusher Mollah, C. A. Gómez-Pérez, Leyla Özkan
+
+
+ Beyond the Safety Tax: Mitigating Unsafe Text-to-Image Generation via External Safety Rectification
+ https://arxiv.org/abs/2508.21099
+ arXiv:2508.21099v3 Announce Type: replace
+Abstract: Text-to-image (T2I) generative models have achieved remarkable visual fidelity, yet remain vulnerable to generating unsafe content. Existing safety defenses typically intervene internally within the generative model, but suffer from severe concept entanglement, leading to degradation of benign generation quality, a trade-off we term the Safety Tax. To overcome this limitation, we advocate a paradigm shift from destructive internal editing to external safety rectification. Following this principle, we propose SafePatch, a structurally isolated safety module that performs external, interpretable rectification without modifying the base model. The core backbone of SafePatch is architecturally instantiated as a trainable clone of the base model's encoder, allowing it to inherit rich semantic priors and maintain representation consistency. To enable interpretable safety rectification, we construct a strictly aligned counterfactual safety dataset (ACS) for differential supervision training. Across nudity and multi-category benchmarks and recent adversarial prompt attacks, SafePatch achieves robust unsafe suppression (7% unsafe on I2P) while preserving image quality and semantic alignment.
+ oai:arXiv.org:2508.21099v3
+ cs.CV
+ cs.AI
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiangtao Meng, Yingkai Dong, Ning Yu, Li Wang, Zheng Li, Shanqing Guo
+
+
+ Lightning Fast Caching-based Parallel Denoising Prediction for Accelerating Talking Head Generation
+ https://arxiv.org/abs/2509.00052
+ arXiv:2509.00052v2 Announce Type: replace
+Abstract: Diffusion-based talking head models generate high-quality, photorealistic videos but suffer from slow inference, limiting practical applications. Existing acceleration methods for general diffusion models fail to exploit the temporal and spatial redundancies unique to talking head generation. In this paper, we propose a task-specific framework addressing these inefficiencies through two key innovations. First, we introduce Lightning-fast Caching-based Parallel denoising prediction (LightningCP), caching static features to bypass most model layers at inference time. We also enable parallel prediction using cached features and estimated noisy latents as inputs, efficiently bypassing sequential sampling. Second, we propose Decoupled Foreground Attention (DFA) to further accelerate attention computations, exploiting the spatial decoupling in talking head videos to restrict attention to dynamic foreground regions. Additionally, we remove reference features in certain layers for additional speedup. Extensive experiments demonstrate that our framework significantly improves inference speed while preserving video quality.
+ oai:arXiv.org:2509.00052v2
+ cs.GR
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianzhi Long, Wenhao Sun, Rongcheng Tu, Dacheng Tao
+
+
+ Ireland in 2057: Projections using a Geographically Diverse Dynamic Microsimulation
+ https://arxiv.org/abs/2509.01446
+ arXiv:2509.01446v2 Announce Type: replace
+Abstract: This paper presents a dynamic microsimulation model developed for Ireland, designed to simulate key demographic processes and individual life-course transitions from 2022 to 2057. The model captures four primary events: births, deaths, internal migration, and international migration, enabling a comprehensive examination of population dynamics over time. Each individual in the simulation is defined by seven core attributes: age, sex, marital status, citizenship, whether the person was living in Ireland in the previous year, highest level of education attained, and economic status. These characteristics evolve stochastically based on transition probabilities derived from empirical data from the Irish context. Individuals are spatially disaggregated at the Electoral Division level. By modelling individuals at this granular level, the simulation facilitates in-depth local analysis of demographic shifts and socioeconomic outcomes under varying scenarios and policy assumptions. The model thus serves as a versatile tool for both academic inquiry and evidence-based policy development, offering projections that can inform long-term planning and strategic decision-making through 2057. The microsimulation achieves a close match in population size and makeup in all scenarios when compared to Demographic Component Methods. Education levels are projected to increase significantly, with nearly 70% of young people projected to attain a third level degree at some point in their lifetime. The unemployment rate is also projected to decrease as a result of the increased education levels.
+ oai:arXiv.org:2509.01446v2
+ cs.CY
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Seán Caulfield Curley, Karl Mason, Patrick Mannion
+
+
+ Batch Query Processing and Optimization for Agentic Workflows
+ https://arxiv.org/abs/2509.02121
+ arXiv:2509.02121v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) in agentic workflows combine multi-step reasoning, heterogeneous tool use, and collaboration across multiple specialized agents. Existing LLM serving engines optimize individual calls in isolation, while multi-agent frameworks focus on orchestration without system-level performance planning. As a result, repeated prompts, overlapping contexts, and fragmented CPU-GPU execution create substantial redundancy and poor hardware utilization, especially in batch analytics scenarios. We introduce Halo, a system that brings batch query processing and optimization into agentic LLM workflows. Halo represents each workflow as a structured query plan DAG and constructs a consolidated graph for batched queries that exposes shared computation. Guided by a cost model that jointly considers heterogeneous resource constraints, prefill and decode costs, cache reuse, and GPU placement, Halo performs plan-level optimization to minimize redundant execution. The Processor integrates adaptive batching, KV-cache sharing and migration, along with fine-grained CPU-GPU pipelining to maximize holistic hardware efficiency. Evaluation across six benchmarks shows that Halo achieves up to 3.6x speedup for batch inference and 2.6x throughput improvement under online serving, scaling to workloads of thousands of queries and complex graphs. These gains are achieved without compromising output quality. By unifying query optimization with heterogeneous LLM serving, Halo enables efficient agentic workflows in data analytics and decision-making applications.
+ oai:arXiv.org:2509.02121v2
+ cs.DB
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Junyi Shen, Noppanat Wadlom, Yao Lu
+
+
+ Predicting Movie Success with Multi-Task Learning: A Hybrid Framework Combining GPT-Based Sentiment Analysis and SIR Propagation
+ https://arxiv.org/abs/2509.02809
+ arXiv:2509.02809v3 Announce Type: replace
+Abstract: This study presents a hybrid framework for predicting movie success that integrates multi-task learning (MTL), GPT-based sentiment analysis, and Susceptible-Infected-Recovered (SIR) propagation modeling. Addressing limitations of existing approaches, the framework jointly models static production attributes, information dissemination, and audience sentiment. It is evaluated on 5,840 films from 2004 to 2024 and approximately 300,000 user reviews, achieving a classification accuracy of 0.964 and a regression MAE of 0.388. Ablation analysis reveals component interactions: selective feature combinations outperform the comprehensive model, questioning assumptions about feature integration. The model also distinguishes virality patterns between successful and unsuccessful films. Innovations include epidemiological modeling of information diffusion, multidimensional sentiment features from GPT-based analysis, and a shared representation architecture that optimizes multiple success metrics. The framework offers applications across the film production lifecycle and contributes to understanding how audience engagement leads to commercial outcomes.
+ oai:arXiv.org:2509.02809v3
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wenlan Xie
+
+
+ Can the Waymo Open Motion Dataset Support Realistic Behavioral Modeling? A Validation Study with Naturalistic Trajectories
+ https://arxiv.org/abs/2509.03515
+ arXiv:2509.03515v2 Announce Type: replace
+Abstract: The Waymo Open Motion Dataset (WOMD) has become a popular resource for data-driven modeling of autonomous vehicles (AVs) behavior. However, its validity for behavioral analysis remains uncertain due to proprietary post-processing, the absence of error quantification, and the segmentation of trajectories into 20-second clips. This study examines whether WOMD accurately captures the dynamics and interactions observed in real-world AV operations. Leveraging an independently collected naturalistic dataset from Level 4 AV operations in Phoenix, Arizona (PHX), we perform comparative analyses across three representative urban driving scenarios: discharging at signalized intersections, car-following, and lane-changing behaviors. For the discharging analysis, headways are manually extracted from aerial video to ensure negligible measurement error. For the car-following and lane-changing cases, we apply the Simulation-Extrapolation (SIMEX) method to account for empirically estimated error in the PHX data and use Dynamic Time Warping (DTW) distances to quantify behavioral differences. Results across all scenarios consistently show that behavior in PHX falls outside the behavioral envelope of WOMD. Notably, WOMD underrepresents short headways and abrupt decelerations. These findings suggest that behavioral models calibrated solely on WOMD may systematically underestimate the variability, risk, and complexity of naturalistic driving. Caution is therefore warranted when using WOMD for behavior modeling without proper validation against independently collected data.
+ oai:arXiv.org:2509.03515v2
+ cs.RO
+ cs.AI
+ cs.LG
+ cs.SY
+ eess.SY
+ stat.AP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yanlin Zhang, Sungyong Chung, Nachuan Li, Dana Monzer, Hani S. Mahmassani, Samer H. Hamdar, Alireza Talebpour
+
+
+ Towards a Unified View of Large Language Model Post-Training
+ https://arxiv.org/abs/2509.04419
+ arXiv:2509.04419v2 Announce Type: replace
+Abstract: Two major sources of training data exist for post-training modern language models: online (model-generated rollouts) data, and offline (human or other-model demonstrations) data. These two types of data are typically used by approaches like Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT), respectively. In this paper, we show that these approaches are not in contradiction, but are instances of a single optimization process. We derive a Unified Policy Gradient Estimator, and present the calculations of a wide spectrum of post-training approaches as the gradient of a common objective under different data distribution assumptions and various bias-variance tradeoffs. The gradient estimator is constructed with four interchangeable parts: stabilization mask, reference policy denominator, advantage estimate, and likelihood gradient. Motivated by our theoretical findings, we propose Hybrid Post-Training (HPT), an algorithm that dynamically selects different training signals. HPT is designed to yield both effective exploitation of demonstrations and stable exploration without sacrificing learned reasoning patterns. We provide extensive experiments and ablation studies to verify the effectiveness of our unified theoretical framework and HPT. Across six mathematical reasoning benchmarks and two out-of-distribution suites, HPT consistently surpasses strong baselines across models of varying scales and families.
+ oai:arXiv.org:2509.04419v2
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xingtai Lv, Yuxin Zuo, Youbang Sun, Hongyi Liu, Yuntian Wei, Zhekai Chen, Xuekai Zhu, Kaiyan Zhang, Bingning Wang, Ning Ding, Bowen Zhou
+
+
+ Uncertainty-Aware Collaborative System of Large and Small Models for Multimodal Sentiment Analysis
+ https://arxiv.org/abs/2509.04459
+ arXiv:2509.04459v2 Announce Type: replace
+Abstract: Multimodal Large Language Models (MLLMs) have notably enhanced the performance of Multimodal Sentiment Analysis (MSA), yet their massive parameter scale leads to excessive resource consumption in training and inference, severely limiting model efficiency. To balance performance and efficiency for MSA, this paper proposes a novel Uncertainty-Aware Collaborative System (U-ACS) that integrates an Uncertainty-aware Baseline Model (UBM) with MLLMs. U-ACS operates in three stages. First, all samples are processed by the UBM, which retains high-confidence samples and forwards low-confidence samples to the MLLM. Notably, to address the challenge that the continuous outputs of regression tasks hinder uncertainty calculation, we convert the continuous sentiment label prediction task into a classification task, enabling a more accurate calculation of entropy and uncertainty. Second, the MLLM performs an initial inference pass: high-confidence predictions, as well as low-confidence predictions whose sentiment polarity matches that of the UBM, are deemed acceptable, while unqualified samples are forwarded for further processing. Finally, the MLLM performs secondary inference on the remaining low-confidence samples using prompts augmented with the predictions from prior rounds as references. By aggregating results from the three stages, U-ACS preserves high MSA prediction accuracy while drastically improving efficiency, offloading most simple samples to the UBM and minimizing the volume of samples processed by the MLLM. Extensive experiments verify that U-ACS maintains superior performance while significantly reducing computational overhead and resource consumption.
+ oai:arXiv.org:2509.04459v2
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shiqin Han, Manning Gao, Menghua Jiang, Yuncheng Jiang, Haifeng Hu, Sijie Mai
+
+
+ Learned Hallucination Detection in Black-Box LLMs using Token-level Entropy Production Rate
+ https://arxiv.org/abs/2509.04492
+ arXiv:2509.04492v2 Announce Type: replace
+Abstract: Hallucinations in Large Language Model (LLM) outputs for Question Answering (QA) tasks can critically undermine their real-world reliability. This paper introduces a methodology for robust, one-shot hallucination detection, specifically designed for scenarios with limited data access, such as interacting with black-box LLM APIs that typically expose only a few top candidate log-probabilities per token. Our approach derives uncertainty indicators directly from these readily available log-probabilities generated during non-greedy decoding. We first derive an Entropy Production Rate (EPR) that offers baseline performance, later augmented with supervised learning. Our learned model leverages the entropic contributions of the accessible top-ranked tokens within a single generated sequence, without multiple re-runs per query. Evaluated across diverse QA datasets and multiple LLMs, this estimator significantly improves token-level hallucination detection over state-of-the-art methods. Crucially, high performance is demonstrated using only the typically small set of available log-probabilities (e.g., top-10 per token), confirming its practical efficiency and suitability for API-constrained deployments. This work provides a lightweight technique to enhance the trustworthiness of LLM responses, at the token level, after a single generation pass, for QA and Retrieval-Augmented Generation (RAG) systems. Our experiments confirmed the performance of our method against existing approaches on public datasets as well as within a financial framework analyzing annual company reports.
+ oai:arXiv.org:2509.04492v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Charles Moslonka, Hicham Randrianarivo, Arthur Garnier, Emmanuel Malherbe
+
+
+ AI-in-the-Loop: Privacy Preserving Real-Time Scam Detection and Conversational Scambaiting by Leveraging LLMs and Federated Learning
+ https://arxiv.org/abs/2509.05362
+ arXiv:2509.05362v4 Announce Type: replace
+Abstract: Scams exploiting real-time social engineering -- such as phishing, impersonation, and phone fraud -- remain a persistent and evolving threat across digital platforms. Existing defenses are largely reactive, offering limited protection during active interactions. We propose a privacy-preserving, AI-in-the-loop framework that proactively detects and disrupts scam conversations in real time. The system combines instruction-tuned artificial intelligence with a safety-aware utility function that balances engagement with harm minimization, and employs federated learning to enable continual model updates without raw data sharing. Experimental evaluations show that the system produces fluent and engaging responses (perplexity as low as 22.3, engagement $\approx$0.80), while human studies confirm significant gains in realism, safety, and effectiveness over strong baselines. In federated settings, models trained with FedAvg sustain up to 30 rounds while preserving high engagement ($\approx$0.80), strong relevance ($\approx$0.74), and low PII leakage ($\leq$0.0085). Even with differential privacy, novelty and safety remain stable, indicating that robust privacy can be achieved without sacrificing performance. The evaluation of guard models (LlamaGuard, LlamaGuard2/3, MD-Judge) shows a straightforward pattern: stricter moderation settings reduce the chance of exposing personal information, but they also limit how much the model engages in conversation. In contrast, more relaxed settings allow longer and richer interactions, which improve scam detection, but at the cost of higher privacy risk. To our knowledge, this is the first framework to unify real-time scam-baiting, federated privacy preservation, and calibrated safety moderation into a proactive defense paradigm.
+ oai:arXiv.org:2509.05362v4
+ cs.CR
+ cs.AI
+ cs.LG
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ismail Hossain, Sai Puppala, Md Jahangir Alam, Sajedul Talukder
+
+
+ Code2MCP: Transforming Code Repositories into MCP Services
+ https://arxiv.org/abs/2509.05941
+ arXiv:2509.05941v3 Announce Type: replace
+Abstract: The Model Context Protocol (MCP) aims to create a standard for how Large Language Models use tools. However, most current research focuses on selecting tools from an existing pool. A more fundamental, yet largely overlooked, problem is how to populate this pool by converting the vast number of existing software projects into MCP-compatible services. To bridge this gap, we introduce Code2MCP, an agent-based framework that automatically transforms a GitHub repository into a functional MCP service with minimal human intervention. Code2MCP employs a multi-agent workflow for code analysis, environment setup, tool function design, and service generation, enhanced by a self-correcting loop to ensure reliability. We demonstrate that Code2MCP successfully transforms open-source computing libraries in scientific fields such as bioinformatics, mathematics, and fluid dynamics that are not available in existing MCP servers. By providing a novel automated pathway to unlock GitHub, the world's largest code repository, for the MCP ecosystem, Code2MCP serves as a catalyst to significantly accelerate the protocol's adoption and practical application. The code is public at https://github.com/DEFENSE-SEU/Code2MCP.
+ oai:arXiv.org:2509.05941v3
+ cs.SE
+ cs.LG
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chaoqian Ouyang, Ling Yue, Shimin Di, Libin Zheng, Linan Yue, Shaowu Pan, Jian Yin, Min-Ling Zhang
+
+
+ Does DINOv3 Set a New Medical Vision Standard? Benchmarking 2D and 3D Classification, Segmentation, and Registration
+ https://arxiv.org/abs/2509.06467
+ arXiv:2509.06467v3 Announce Type: replace
+Abstract: The advent of large-scale vision foundation models, pre-trained on diverse natural images, has marked a paradigm shift in computer vision. However, how the frontier vision foundation models' efficacies transfer to specialised domains such as medical imaging remains an open question. This report investigates whether DINOv3, a state-of-the-art self-supervised vision transformer (ViT) pre-trained on natural images, can directly serve as a powerful, unified encoder for medical vision tasks without domain-specific fine-tuning. To answer this, we benchmark DINOv3 across common medical vision tasks, including 2D and 3D classification, segmentation, and registration on a wide range of medical imaging modalities. We systematically analyse its scalability by varying model sizes and input image resolutions. Our findings reveal that DINOv3 shows impressive performance and establishes a formidable new baseline. Remarkably, it can even outperform medical-specific foundation models like BiomedCLIP and CT-Net on several tasks, despite being trained solely on natural images. However, we identify clear limitations: The model's features degrade in scenarios requiring deep domain specialisation, such as in whole-slide images (WSIs), electron microscopy (EM), and positron emission tomography (PET). Furthermore, we observe that DINOv3 does not consistently follow the scaling law in the medical domain. Its performance does not reliably increase with larger models or finer feature resolutions, showing diverse scaling behaviours across tasks. Overall, our work establishes DINOv3 as a strong baseline, whose powerful visual features can serve as a robust prior for multiple medical tasks. This opens promising future directions, such as leveraging its features to enforce multiview consistency in 3D reconstruction.
+ oai:arXiv.org:2509.06467v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Che Liu, Yinda Chen, Haoyuan Shi, Jinpeng Lu, Bailiang Jian, Jiazhen Pan, Linghan Cai, Jiayi Wang, Jieming Yu, Ziqi Gao, Xiaoran Zhang, Long Bai, Yundi Zhang, Jun Li, Cosmin I. Bercea, Cheng Ouyang, Chen Chen, Zhiwei Xiong, Benedikt Wiestler, Christian Wachinger, James S. Duncan, Daniel Rueckert, Wenjia Bai, Rossella Arcucci
+
+
+ Building Large-Scale English-Romanian Literary Translation Resources with Open Models
+ https://arxiv.org/abs/2509.07829
+ arXiv:2509.07829v3 Announce Type: replace
+Abstract: Literary translation has recently gained attention as a distinct and complex task in machine translation research. However, translation with small open models remains an open problem. We contribute to this ongoing research by introducing TINYFABULIST TRANSLATION FRAMEWORK (TF2), a unified framework for dataset creation, fine-tuning, and evaluation in English-Romanian literary translation, centred on the creation and open release of both a compact, fine-tuned language model (TF2-12B) and large-scale synthetic parallel datasets (DS-TF2-EN-RO-3M and DS-TF2-EN-RO-15K). Building on DS-TF1-EN-3M (TF1), the largest collection of synthetic English fables to date, we address the need for rich, high-quality literary datasets in low-resource languages such as Romanian. Our pipeline first generates 15k high-quality Romanian references from the TF1 pool using a high-performing LLM. We then apply a two-stage fine-tuning process to a 12B-parameter open-weight model: (i) instruction tuning to capture genre-specific narrative style, and (ii) adapter compression for efficient deployment. Evaluation combines corpus-level BLEU and a five-dimension LLM-based rubric (accuracy, fluency, coherence, style, cultural adaptation) to provide a nuanced assessment of translation quality. Results show that our fine-tuned model achieves strong fluency and adequacy, narrowing the gap to top-performing proprietary models under automated and human-anchored evaluation, while being open, accessible, and significantly more cost-effective. Alongside the fine-tuned model and both datasets, we publicly release all scripts and evaluation prompts. TF2 thus provides an end-to-end, reproducible pipeline for research on cost-efficient translation, cross-lingual narrative generation, and the broad adoption of open models for culturally significant literary content in low-resource settings.
+ oai:arXiv.org:2509.07829v3
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mihai Nadas, Laura Diosan, Andreea Tomescu, Andrei Piscoran
+
+
+ Quasi-optimal time-space discretizations for a class of nonlinear parabolic PDEs
+ https://arxiv.org/abs/2509.08645
+ arXiv:2509.08645v2 Announce Type: replace
+Abstract: We consider parabolic evolution equations with Lipschitz continuous and strongly monotone spatial operators. By introducing an additional variable, we construct an equivalent system where the operator is a Lipschitz continuous mapping from a Hilbert space $Y \times X$ to its dual, with a Lipschitz continuous inverse. Resulting Galerkin discretizations can be solved with an inexact Uzawa type algorithm. Quasi-optimality of the Galerkin approximations is guaranteed under an inf-sup condition on the selected `test' and `trial' subspaces of $Y$ and $X$. To circumvent the restriction imposed by this inf-sup condition, an a posteriori condition for quasi-optimality is developed that is shown to be satisfied whenever the test space is sufficiently large.
+ oai:arXiv.org:2509.08645v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Nina Beranek, Robin Smeets, Rob Stevenson
+
+
+ Vejde: A Framework for Inductive Deep Reinforcement Learning Based on Factor Graph Color Refinement
+ https://arxiv.org/abs/2509.09219
+ arXiv:2509.09219v2 Announce Type: replace
+Abstract: We present and evaluate Vejde, a framework that combines data abstraction, graph neural networks and reinforcement learning to produce inductive policy functions for decision problems with richly structured states, such as object classes and relations. MDP states are represented as databases of facts about entities, and Vejde converts each state to a bipartite graph, which is mapped to latent states through neural message passing. The factored representation of both states and actions allows Vejde agents to handle problems of varying size and structure. We tested Vejde agents on eight problem domains defined in RDDL, with ten problem instances each, where policies were trained using both supervised and reinforcement learning. To test policy generalization, we separate problem instances into two sets, one for training and the other solely for testing. Test results on unseen instances for the Vejde agents were compared to MLP agents trained on each problem instance, as well as the online planning algorithm Prost. Our results show that Vejde policies on average generalize to the test instances without a significant loss in score. Additionally, the inductive agents received scores on unseen test instances that on average were close to those of the instance-specific MLP agents.
+ oai:arXiv.org:2509.09219v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Transactions on Machine Learning Research, January 2026
+ Jakob Nyberg, Pontus Johnson
+
+
+ Generative Diffusion Contrastive Network for Multi-View Clustering
+ https://arxiv.org/abs/2509.09527
+ arXiv:2509.09527v2 Announce Type: replace
+Abstract: In recent years, Multi-View Clustering (MVC) has been significantly advanced under the influence of deep learning. By integrating heterogeneous data from multiple views, MVC enhances clustering analysis, making multi-view fusion critical to clustering performance. However, multi-view fusion suffers from low-quality data, which primarily arises from two causes: 1) certain views are contaminated by noisy data, and 2) some views suffer from missing data. This paper proposes a novel Stochastic Generative Diffusion Fusion (SGDF) method to address this problem. SGDF leverages a multiple generative mechanism for the multi-view feature of each sample, making it robust to low-quality data. Building on SGDF, we further present the Generative Diffusion Contrastive Network (GDCN). Extensive experiments show that GDCN achieves state-of-the-art results in deep MVC tasks. The source code is publicly available at https://github.com/HackerHyper/GDCN.
+ oai:arXiv.org:2509.09527v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jian Zhu, Xin Zou, Xi Wang, Lei Liu, Chang Tang, Li-Rong Dai
+
+
+ Prespecified-Performance Kinematic Tracking Control for Aerial Manipulation
+ https://arxiv.org/abs/2509.10065
+ arXiv:2509.10065v3 Announce Type: replace
+Abstract: This paper studies the kinematic tracking control problem for aerial manipulators. Existing kinematic tracking control methods, which typically employ proportional-derivative feedback or tracking-error-based feedback strategies, may fail to achieve tracking objectives within specified time constraints. To address this limitation, we propose a novel control framework comprising two key components: end-effector tracking control based on a user-defined preset trajectory and quadratic programming-based reference allocation. Compared with state-of-the-art approaches, the proposed method has several attractive features. First, it ensures that the end-effector reaches the desired position within a preset time while keeping the tracking error within a performance envelope that reflects task requirements. Second, quadratic programming is employed to allocate the references of the quadcopter base and the Delta arm, while considering the physical constraints of the aerial manipulator, thus preventing solutions that may violate physical limitations. The proposed approach is validated through three experiments. Experimental results demonstrate the effectiveness of the proposed algorithm and its capability to guarantee that the target position is reached within the preset time.
+ oai:arXiv.org:2509.10065v3
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Huazi Cao, Jiahao Shen, Zhengzhen Li, Qinquan Ren, Shiyu Zhao
+
+
+ Predictive Spike Timing Enables Distributed Shortest Path Computation in Spiking Neural Networks
+ https://arxiv.org/abs/2509.10077
+ arXiv:2509.10077v2 Announce Type: replace
+Abstract: Efficient planning and sequence selection are central to intelligence, yet current approaches remain largely incompatible with biological computation. Classical graph algorithms like Dijkstra's or A* require global state and biologically implausible operations such as backtracing, while reinforcement learning methods rely on slow gradient-based policy updates that appear inconsistent with rapid behavioral adaptation observed in natural systems.
+ We propose a biologically plausible algorithm for shortest-path computation that operates through local spike-based message-passing with realistic processing delays. The algorithm exploits spike-timing coincidences to identify nodes on optimal paths: Neurons that receive inhibitory-excitatory message pairs earlier than predicted reduce their response delays, creating a temporal compression that propagates backwards from target to source. Through analytical proof and simulations on random spatial networks, we demonstrate that the algorithm converges and discovers all shortest paths using purely timing-based mechanisms. By showing how short-term timing dynamics alone can compute shortest paths, this work provides new insights into how biological networks might solve complex computational problems through purely local computation and relative spike-time prediction. These findings open new directions for understanding distributed computation in biological and artificial systems, with possible implications for computational neuroscience, AI, reinforcement learning, and neuromorphic systems.
+ oai:arXiv.org:2509.10077v2
+ cs.NE
+ cs.AI
+ cs.DS
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Simen Storesund, Kristian Valset Aars, Robin Dietrich, Nicolai Waniek
+
+
+ Towards the Distributed Large-scale k-NN Graph Construction by Graph Merge
+ https://arxiv.org/abs/2509.11697
+ arXiv:2509.11697v4 Announce Type: replace
+Abstract: In order to support real-time interaction with LLMs and instant search or recommendation on social media, building a k-NN graph or an indexing graph for the massive number of vectorized multimedia data has become a pressing problem. In such scenarios, the scale of the data or the scale of the graph may exceed the processing capacity of a single machine. This paper aims to address the graph construction problem of such scale via efficient graph merge. For graph construction on a single node, two generic and highly parallelizable algorithms, namely Two-way Merge and Multi-way Merge, are proposed to merge subgraphs into one. For graph construction across multiple nodes, a multi-node procedure based on Two-way Merge is presented. The procedure makes it feasible to construct a large-scale k-NN graph/indexing graph on either a single node or multiple nodes when the data size exceeds the memory capacity of one node. Extensive experiments are conducted on both large-scale k-NN graph and indexing graph construction. For k-NN graph construction, large-scale and high-quality k-NN graphs are constructed by graph merge in parallel. Typically, a billion-scale k-NN graph can be built in approximately 17h when only three nodes are employed. For indexing graph construction, NN search performance similar to that of the original indexing graph is achieved with the merged indexing graphs while requiring much less construction time.
+ oai:arXiv.org:2509.11697v4
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Cheng Zhang, Wan-Lei Zhao, Shihai Xiao, Jiajie Yao, Xuecang Zhang
+
+
+ TransLibEval: Demystify Large Language Models' Capability in Third-party Library-targeted Code Translation
+ https://arxiv.org/abs/2509.12087
+ arXiv:2509.12087v2 Announce Type: replace
+Abstract: In recent years, Large Language Models (LLMs) have been widely studied in the code translation field at the method, class, and even repository levels. However, most of these benchmarks are limited in terms of Third-Party Library (TPL) categories and scales, making TPL-related errors hard to expose and hindering the development of targeted solutions. Considering the high dependence (over 90%) on TPLs in practical programming, demystifying and analyzing LLMs' code translation performance involving various TPLs becomes imperative. To address this gap, we construct TransLibEval, the first benchmark dedicated to library-centric code translation. It consists of 200 real-world tasks across Python, Java, and C++, each explicitly involving TPLs from diverse categories such as data processing, machine learning, and web development, with comprehensive dependency coverage and high-coverage test suites. We evaluate seven recent LLMs of commercial, general, and code-specialized families under six translation strategies of three categories: Direct, IR-guided, and Retrieval-augmented. Experimental results show a dramatic performance drop compared with library-free settings (average CA decline over 60%), while diverse strategies demonstrate heterogeneous advantages. Furthermore, we analyze 4,831 failed cases from GPT-4o, one of the State-of-the-Art (SOTA) LLMs, revealing numerous previously obscured third-party reference errors. These findings highlight the unique challenges of library-centric translation and provide practical guidance for improving TPL-aware code intelligence.
+ oai:arXiv.org:2509.12087v2
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Pengyu Xue, Kunwu Zheng, Zhen Yang, Yifei Pei, Linhao Wu, Jiahui Dong, Xiapu Luo, Yan Xiao, Fei Liu, Yuxuan Zhang, Xiran Lyu, Xianhang Li, Xuanyu Zhu, Chengyi Wang
+
+
+ RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation
+ https://arxiv.org/abs/2509.12710
+ arXiv:2509.12710v2 Announce Type: replace
+Abstract: Text-driven infrared and visible image fusion has gained attention for enabling natural language to guide the fusion process. However, existing methods lack a goal-aligned task to supervise and evaluate how effectively the input text contributes to the fusion outcome. We observe that referring image segmentation (RIS) and text-driven fusion share a common objective: highlighting the object referred to by the text. Motivated by this, we propose RIS-FUSION, a cascaded framework that unifies fusion and RIS through joint optimization. At its core is the LangGatedFusion module, which injects textual features into the fusion backbone to enhance semantic alignment. To support the multimodal referring image segmentation task, we introduce MM-RIS, a large-scale benchmark with 12.5k training and 3.5k testing triplets, each consisting of an infrared-visible image pair, a segmentation mask, and a referring expression. Extensive experiments show that RIS-FUSION achieves state-of-the-art performance, outperforming existing methods by over 11% in mIoU. Code and dataset will be released at https://github.com/SijuMa2003/RIS-FUSION.
+ oai:arXiv.org:2509.12710v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Siju Ma, Changsiyu Gong, Xiaofeng Fan, Yong Ma, Chengjie Jiang
+
+
+ Sy-FAR: Symmetry-based Fair Adversarial Robustness
+ https://arxiv.org/abs/2509.12939
+ arXiv:2509.12939v2 Announce Type: replace
+Abstract: Security-critical machine-learning (ML) systems, such as face-recognition systems, are susceptible to adversarial examples, including real-world physically realizable attacks. Various means to boost ML's adversarial robustness have been proposed; however, they typically induce unfair robustness: It is often easier to attack from certain classes or groups than from others. Several techniques have been developed to improve adversarial robustness while seeking perfect fairness between classes. Yet, prior work has focused on settings where security and fairness are less critical. Our insight is that achieving perfect parity in realistic fairness-critical tasks, such as face recognition, is often infeasible -- some classes may be highly similar, leading to more misclassifications between them. Instead, we suggest that seeking symmetry -- i.e., attacks from class $i$ to $j$ would be as successful as from $j$ to $i$ -- is more tractable. Intuitively, symmetry is desirable because class resemblance is a symmetric relation in most domains. Additionally, as we prove theoretically, symmetry between individuals induces symmetry between any set of sub-groups, in contrast to other fairness notions where group-fairness is often elusive. We develop Sy-FAR, a technique to encourage symmetry while also optimizing adversarial robustness and extensively evaluate it using five datasets, with three model architectures, including against targeted and untargeted realistic attacks. The results show Sy-FAR significantly improves fair adversarial robustness compared to state-of-the-art methods. Moreover, we find that Sy-FAR is faster and more consistent across runs. Notably, Sy-FAR also ameliorates another type of unfairness we discover in this work -- target classes that adversarial examples are likely to be classified into become significantly less vulnerable after inducing symmetry.
+ oai:arXiv.org:2509.12939v2
+ cs.LG
+ cs.AI
+ cs.CR
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Haneen Najjar, Eyal Ronen, Mahmood Sharif
+
+
+ Reducts of fuzzy contexts: Formal concept analysis vs. rough set theory
+ https://arxiv.org/abs/2509.13059
+ arXiv:2509.13059v2 Announce Type: replace
+Abstract: We postulate the intuitive idea of reducts of fuzzy contexts based on formal concept analysis and rough set theory. For a complete residuated lattice $L$, it is shown that reducts of $L$-contexts in formal concept analysis are interdefinable with reducts of $L$-contexts in rough set theory via negation if, and only if, $L$ satisfies the law of double negation.
+ oai:arXiv.org:2509.13059v2
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yuxu Chen, Jing Liu, Lili Shen, Xiaoye Tang
+
+
+ PERSEUS: Perception with Semantic Endoscopic Understanding and SLAM
+ https://arxiv.org/abs/2509.13541
+ arXiv:2509.13541v2 Announce Type: replace
+Abstract: Purpose: Natural orifice surgeries minimize the need for incisions and reduce the recovery time compared to open surgery; however, they require a higher level of expertise due to visualization and orientation challenges. We propose a perception pipeline for these surgeries that allows semantic scene understanding.
+ Methods: We bring learning-based segmentation, depth estimation, and 3D reconstruction modules together to create real-time segmented maps of the surgical scenes. Additionally, we use registration with robot poses to solve the scale ambiguity of mapping from monocular images, and allow the use of semantically informed real-time reconstructions in robotic surgeries.
+ Results: We achieve sub-millimeter reconstruction accuracy based on average one-sided Chamfer distances, an average pose registration RMSE of 0.9 mm, and an estimated scale within 2% of ground truth.
+ Conclusion: We present a modular perception pipeline, integrating semantic segmentation with real-time monocular SLAM for natural orifice surgeries. This pipeline offers a promising solution for scene understanding that can facilitate automation or surgeon guidance.
+ oai:arXiv.org:2509.13541v2
+ cs.RO
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ayberk Acar, Fangjie Li, Susheela Sharma Stern, Lidia Al-Zogbi, Hao Li, Kanyifeechukwu Jane Oguine, Dilara Isik, Brendan Burkhart, Jesse F. d'Almeida, Robert J. Webster III, Ipek Oguz, Jie Ying Wu
+
+
+ Freeze-Tag is NP-hard in 2D with $L_1$ distance
+ https://arxiv.org/abs/2509.14357
+ arXiv:2509.14357v2 Announce Type: replace
+Abstract: The Freeze-Tag Problem (FTP) is a scheduling problem with application in robot swarm activation and was introduced by Arkin et al. in 2002. This problem seeks an efficient way of activating a robot swarm, starting with a single active robot. Activations occur through direct contact, and once a robot becomes active, it can move and help activate other robots. Although the problem has been shown to be NP-hard in the Euclidean plane $\mathbb{R}^2$ under the $L_2$ distance, and in three-dimensional Euclidean space $\mathbb{R}^3$ under any $L_p$ distance with $p \ge 1$, its complexity under the $L_1$ (Manhattan) distance in $\mathbb{R}^2$ has remained an open question. In this paper, we settle this question by proving that FTP is strongly NP-hard in the Euclidean plane with $L_1$ distance.
+ oai:arXiv.org:2509.14357v2
+ cs.CG
+ cs.CC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Lucas de Oliveira Silva, Lehilton Lelis Chaves Pedrosa
+
+
+ How Does Instrumental Music Help SingFake Detection?
+ https://arxiv.org/abs/2509.14675
+ arXiv:2509.14675v2 Announce Type: replace
+Abstract: Although many models exist to detect singing voice deepfakes (SingFake), how these models operate, particularly with instrumental accompaniment, is unclear. We investigate how instrumental music affects SingFake detection from two perspectives. To investigate the behavioral effect, we test different backbones, unpaired instrumental tracks, and frequency subbands. To analyze the representational effect, we probe how fine-tuning alters encoders' speech and music capabilities. Our results show that instrumental accompaniment acts mainly as data augmentation rather than providing intrinsic cues (e.g., rhythm or harmony). Furthermore, fine-tuning increases reliance on shallow speaker features while reducing sensitivity to content, paralinguistic, and semantic information. These insights clarify how models exploit vocal versus instrumental cues and can inform the design of more interpretable and robust SingFake detection systems.
+ oai:arXiv.org:2509.14675v2
+ cs.SD
+ eess.AS
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Xuanjun Chen, Chia-Yu Hu, I-Ming Lin, Yi-Cheng Lin, I-Hsiang Chiu, You Zhang, Sung-Feng Huang, Yi-Hsuan Yang, Haibin Wu, Hung-yi Lee, Jyh-Shing Roger Jang
+
+
+ Enhancing Retrieval Augmentation via Adversarial Collaboration
+ https://arxiv.org/abs/2509.14750
+ arXiv:2509.14750v2 Announce Type: replace
+Abstract: Retrieval-augmented Generation (RAG) is a prevalent approach for domain-specific LLMs, yet it is often plagued by "Retrieval Hallucinations"--a phenomenon where fine-tuned models fail to recognize and act upon poor-quality retrieved documents, thus undermining performance. To address this, we propose the Adversarial Collaboration RAG (AC-RAG) framework. AC-RAG employs two heterogeneous agents: a generalist Detector that identifies knowledge gaps, and a domain-specialized Resolver that provides precise solutions. Guided by a moderator, these agents engage in an adversarial collaboration, where the Detector's persistent questioning challenges the Resolver's expertise. This dynamic process allows for iterative problem dissection and refined knowledge retrieval. Extensive experiments show that AC-RAG significantly improves retrieval accuracy and outperforms state-of-the-art RAG methods across various vertical domains.
+ oai:arXiv.org:2509.14750v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Letian Zhang, Guanghao Meng, Xudong Ren, Yiming Wang, Shu-Tao Xia
+
+
+ Precision Neural Networks: Joint Graph And Relational Learning
+ https://arxiv.org/abs/2509.14821
+ arXiv:2509.14821v2 Announce Type: replace
+Abstract: CoVariance Neural Networks (VNNs) perform convolutions on the graph determined by the covariance matrix of the data, which enables expressive and stable covariance-based learning. However, covariance matrices are typically dense, fail to encode conditional independence, and are often precomputed in a task-agnostic way, which may hinder performance. To overcome these limitations, we study Precision Neural Networks (PNNs), i.e., VNNs on the precision matrix - the inverse covariance. The precision matrix naturally encodes statistical independence, often exhibits sparsity, and preserves the covariance spectral structure. To make precision estimation task-aware, we formulate an optimization problem that jointly learns the network parameters and the precision matrix, and solve it via alternating optimization, by sequentially updating the network weights and the precision estimate. We theoretically bound the distance between the estimated and true precision matrices at each iteration, and demonstrate the effectiveness of joint estimation compared to two-step approaches on synthetic and real-world data.
+ oai:arXiv.org:2509.14821v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Andrea Cavallo, Samuel Rey, Antonio G. Marques, Elvin Isufi
+
+
+ Controllable Localized Face Anonymization Via Diffusion Inpainting
+ https://arxiv.org/abs/2509.14866
+ arXiv:2509.14866v2 Announce Type: replace
+Abstract: The growing use of portrait images in computer vision highlights the need to protect personal identities. At the same time, anonymized images must remain useful for downstream computer vision tasks. In this work, we propose a unified framework that leverages the inpainting ability of latent diffusion models to generate realistic anonymized images. Unlike prior approaches, we have complete control over the anonymization process by designing an adaptive attribute-guidance module that applies gradient correction during the reverse denoising process, aligning the facial attributes of the generated image with those of the synthesized target image. Our framework also supports localized anonymization, allowing users to specify which facial regions are left unchanged. Extensive experiments conducted on the public CelebA-HQ and FFHQ datasets show that our method outperforms state-of-the-art approaches while requiring no additional model training. The source code is available on our page.
+ oai:arXiv.org:2509.14866v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ali Salar, Qing Liu, Guoying Zhao
+
+
+ From Hype to Insight: Rethinking Large Language Model Integration in Visual Speech Recognition
+ https://arxiv.org/abs/2509.14880
+ arXiv:2509.14880v3 Announce Type: replace
+Abstract: Advances in self-supervised encoders have improved Visual Speech Recognition (VSR). Recent approaches integrating these encoders with LLM decoders improve transcription accuracy; however, it remains unclear whether these gains stem from visual understanding or stronger language modeling. In this work, we systematically evaluate LLM decoders by freezing or selectively updating the visual encoder, scaling decoder size, comparing adaptation strategies and architectures, and varying training data across LRS2, LRS3, and their combination. Evaluation on LRS2, LRS3, and WildVSR shows that scaling and adaptation yield limited improvements, while combining datasets enhances generalization. Semantic analysis reveals that gains arise primarily from lexical rather than semantic processing. Our Llama-2-13B model trained on the combined set achieves 24.7% WER on LRS3 and 47.0% on WildVSR, establishing SOTA among models trained without additional supervision. Our findings indicate LLM decoders refine contextual reasoning rather than visual features, emphasizing the need for stronger visual encoders to drive meaningful progress.
+ oai:arXiv.org:2509.14880v3
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rishabh Jain, Naomi Harte
+
+
+ FAWN: A MultiEncoder Fusion-Attention Wave Network for Integrated Sensing and Communication Indoor Scene Inference
+ https://arxiv.org/abs/2509.14968
+ arXiv:2509.14968v2 Announce Type: replace
+Abstract: The upcoming generations of wireless technologies promise an era where everything is interconnected and intelligent. As the need for intelligence grows, networks must learn to better understand the physical world. However, deploying dedicated hardware to perceive the environment is not always feasible, mainly due to costs and/or complexity. Integrated Sensing and Communication (ISAC) has made a step forward in addressing this challenge. Within ISAC, passive sensing emerges as a cost-effective solution that reuses wireless communications to sense the environment, without interfering with existing communications. Nevertheless, the majority of current solutions are limited to one technology (mostly Wi-Fi or 5G), constraining the maximum reachable accuracy. As different technologies work with different spectrums, we see a need to integrate more than one technology to extend the coverage area. Hence, we take advantage of ISAC passive sensing to present FAWN, a MultiEncoder Fusion-Attention Wave Network for ISAC indoor scene inference. FAWN is based on the original transformers architecture, to fuse information from Wi-Fi and 5G, making the network capable of understanding the physical world without interfering with the current communication. To test our solution, we have built a prototype and integrated it in a real scenario. Results show errors below 0.6 m around 84% of the time.
+ oai:arXiv.org:2509.14968v2
+ cs.LG
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Carlos Barroso-Fern\'andez, Alejandro Calvillo-Fernandez, Antonio de la Oliva, Carlos J. Bernardos
+
+
+ Frustratingly Easy Data Augmentation for Low-Resource ASR
+ https://arxiv.org/abs/2509.15373
+ arXiv:2509.15373v3 Announce Type: replace
+Abstract: This paper introduces three self-contained data augmentation methods for low-resource Automatic Speech Recognition (ASR). Our techniques first generate novel text--using gloss-based replacement, random replacement, or an LLM-based approach--and then apply Text-to-Speech (TTS) to produce synthetic audio. We apply these methods, which leverage only the original annotated data, to four languages with extremely limited resources (Vatlongos, Nashta, Shinekhen Buryat, and Kakabe). Fine-tuning a pretrained Wav2Vec2-XLSR-53 model on a combination of the original audio and generated synthetic data yields significant performance gains, including a 14.3% absolute WER reduction for Nashta. The methods prove effective across all four low-resource languages and also show utility for high-resource languages like English, demonstrating their broad applicability.
+ oai:arXiv.org:2509.15373v3
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Katsumi Ibaraki, David Chiang
+
+
+ TennisTV: Do Multimodal Large Language Models Understand Tennis Rallies?
+ https://arxiv.org/abs/2509.15602
+ arXiv:2509.15602v3 Announce Type: replace
+Abstract: Multimodal large language models (MLLMs) excel at general video understanding but struggle with fast, high-frequency sports like tennis, where rally clips are short yet information-dense. To systematically evaluate MLLMs in this challenging domain, we present TennisTV, the first and most comprehensive benchmark for tennis video understanding. TennisTV models each rally as a temporally ordered sequence of consecutive stroke events, using automated pipelines for filtering and question generation. It covers 8 tasks from the stroke level to the rally level and includes 2527 human-verified questions. Evaluating 17 representative MLLMs, we provide the first systematic assessment of tennis video understanding. Results yield two key insights: (i) frame-sampling density should be tailored and balanced across tasks, and (ii) improving temporal grounding is essential for stronger reasoning.
+ oai:arXiv.org:2509.15602v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhongyuan Bao, Lejun Zhang
+
+
+ Omni-LIVO: Robust RGB-Colored Multi-Camera Visual-Inertial-LiDAR Odometry via Photometric Migration and ESIKF Fusion
+ https://arxiv.org/abs/2509.15673
+ arXiv:2509.15673v3 Announce Type: replace
+Abstract: Wide field-of-view (FoV) LiDAR sensors provide dense geometry across large environments, but existing LiDAR-inertial-visual odometry (LIVO) systems generally rely on a single camera, limiting their ability to fully exploit LiDAR-derived depth for photometric alignment and scene colorization. We present Omni-LIVO, a tightly coupled multi-camera LIVO system that leverages multi-view observations to comprehensively utilize LiDAR geometric information across extended spatial regions. Omni-LIVO introduces a Cross-View direct alignment strategy that maintains photometric consistency across non-overlapping views, and extends the Error-State Iterated Kalman Filter (ESIKF) with multi-view updates and adaptive covariance. The system is evaluated on public benchmarks and our custom dataset, showing improved accuracy and robustness over state-of-the-art LIVO, LIO, and visual-inertial SLAM baselines. Code and dataset will be released upon publication.
+ oai:arXiv.org:2509.15673v3
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yinong Cao, Chenyang Zhang, Xin He, Yuwei Chen, Chengyu Pu, Bingtao Wang, Kaile Wu, Shouzheng Zhu, Fei Han, Shijie Liu, Chunlai Li, Jianyu Wang
+
+
+ Angelfish: Leader, DAG, or Anywhere in Between
+ https://arxiv.org/abs/2509.15847
+ arXiv:2509.15847v2 Announce Type: replace
+Abstract: To maximize performance, many modern blockchain systems rely on eventually-synchronous, Byzantine fault-tolerant (BFT) consensus protocols. Two protocol designs have emerged in this space: protocols that minimize latency using a leader that drives both data dissemination and consensus, and protocols that maximize throughput using a separate, asynchronous data dissemination layer. Recent protocols such as Partially-Synchronous Bullshark and Sailfish combine elements of both approaches by using a DAG to enable parallel data dissemination and a leader that paces DAG formation. This improves latency while achieving state-of-the-art throughput. However, the DAG-formation process of those protocols imposes overheads that prevent matching the latency possible with a leader-based protocol.
+ We present Angelfish, a hybrid protocol that adapts smoothly across this design space, from leader-based to DAG-based consensus. Angelfish lets a dynamically-adjusted subset of parties use best-effort broadcast to issue lightweight votes instead of using a costlier reliable broadcast to create DAG vertices. This reduces communication, tolerates more lagging nodes, and lowers latency in practice compared to prior DAG-based protocols. Our empirical evaluation shows that Angelfish attains state-of-the-art peak throughput while matching the latency of leader-based protocols under moderate throughput, delivering the best of both worlds. The implementation is open-sourced and publicly available.
+ oai:arXiv.org:2509.15847v2
+ cs.DC
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qianyu Yu, Giuliano Losa, Nibesh Shrestha, Xuechao Wang
+
+
+ Learning the Influence Graph of a Markov Process that Randomly Resets to the Past
+ https://arxiv.org/abs/2509.16129
+ arXiv:2509.16129v2 Announce Type: replace
+Abstract: Learning the influence graph $G$ of a high-dimensional Markov process is central to many application domains, including social networks, neuroscience, and financial risk analysis. However, in many of these applications, future states of the process are occasionally and unpredictably influenced by a distant past state, thus destroying the Markovianity. To study this practical issue, we propose the past influence model (PIM), which captures the occasional "random resets to the past" by modifying the Markovian dynamics in [1], which, in turn, is a non-linear generalization of the dynamics studied in [2], [3]. The recursive greedy algorithm proposed in this paper recovers any bounded-degree $G$ when the number of "jumps back in time" is order-wise smaller than the total number of samples, and the algorithm does not require memory.
+ oai:arXiv.org:2509.16129v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Sudharsan Senthil, Avhishek Chatterjee
+
+
+ Domain-Specific Constitutional AI: Enhancing Safety in LLM-Powered Mental Health Chatbots
+ https://arxiv.org/abs/2509.16444
+ arXiv:2509.16444v2 Announce Type: replace
+Abstract: Mental health applications have emerged as a critical area in computational health, driven by rising global rates of mental illness, the integration of AI in psychological care, and the need for scalable solutions in underserved communities. These include therapy chatbots, crisis detection, and wellness platforms handling sensitive data, requiring specialized AI safety beyond general safeguards due to emotional vulnerability, risks like misdiagnosis or symptom exacerbation, and the need for precise management of vulnerable states to avoid severe outcomes such as self-harm or loss of trust. Despite AI safety advances, general safeguards inadequately address mental health-specific challenges, including crisis intervention accuracy to avert escalations, therapeutic guideline adherence to prevent misinformation, scale limitations in resource-constrained settings, and adaptation to nuanced dialogues where generic safeguards may introduce biases or miss distress signals. We introduce an approach to apply Constitutional AI training with domain-specific mental health principles for safe, domain-adapted CAI systems in computational mental health applications.
+ oai:arXiv.org:2509.16444v2
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/BSN66969.2025.11337405
+ 2025 IEEE 21st International Conference on Body Sensor Networks (BSN), pp. 1-4
+ Chenhan Lyu, Yutong Song, Pengfei Zhang, Amir M. Rahmani
+
+
+ Tracing the Techno-Supremacy Doctrine: A Critical Discourse Analysis of the AI Executive Elite
+ https://arxiv.org/abs/2509.18079
+ arXiv:2509.18079v2 Announce Type: replace
+Abstract: This paper critically analyzes the discourse of the 'AI executive elite,' a group of highly influential individuals shaping the way AI is funded, developed, and deployed worldwide. The primary objective is to examine the presence and dynamics of the 'Techno-Supremacy Doctrine' (TSD), a term introduced in this study to describe a belief system characterized by an excessive trust in technology's alleged inherent superiority in solving complex societal problems. This study integrates quantitative heuristics with in-depth qualitative investigations. Its methodology is operationalized in a two-phase critical discourse analysis of 14 texts published by elite members between 2017 and 2025. The findings demonstrate that the elite is not a monolithic bloc but exhibits a broad spectrum of stances. The discourse is highly dynamic, showing a marked polarization and general increase in pro-TSD discourse following the launch of ChatGPT. The analysis identifies key discursive patterns, including a dominant pro-TSD narrative that combines utopian promises with claims of inevitable progress, and the common tactic of acknowledging risks only as a strategic preamble to proposing further technological solutions. This paper presents TSD as a comprehensive analytical framework and provides a 'diagnostic toolkit' for identifying its manifestations, from insidious to benign. It argues that fostering critical awareness of these discursive patterns is essential for AI practitioners, policymakers, and the public to actively navigate the future of AI.
+ oai:arXiv.org:2509.18079v2
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ H\'ector P\'erez-Urbina
+
+
+ MOMEMTO: Patch-based Memory Gate Model in Time Series Foundation Model
+ https://arxiv.org/abs/2509.18751
+ arXiv:2509.18751v2 Announce Type: replace
+Abstract: Recently, reconstruction-based deep models have been widely used for time series anomaly detection, but as their capacity and generalization capability increase, these models tend to over-generalize, often reconstructing unseen anomalies accurately. Prior works have attempted to mitigate this by incorporating a memory architecture that stores prototypes of normal patterns. Nevertheless, these approaches suffer from high training costs and have yet to be effectively integrated with time series foundation models (TFMs). To address these challenges, we propose MOMEMTO, an improved variant of a TFM for anomaly detection, enhanced with a patch-based memory module to mitigate over-generalization. The memory module is designed to capture representative normal patterns from multiple domains and enables a single model to be jointly fine-tuned across multiple datasets through a multi-domain training strategy. MOMEMTO initializes memory items with latent representations from a pre-trained encoder, organizes them into patch-level units, and updates them via an attention mechanism. We evaluate our method using 23 univariate benchmark datasets. Experimental results demonstrate that MOMEMTO, as a single model, achieves higher scores on AUC and VUS metrics compared to baseline methods, and further enhances the performance of its backbone TFM, particularly in few-shot learning scenarios.
+ oai:arXiv.org:2509.18751v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Samuel Yoon, Jongwon Kim, Juyoung Ha, Young Myoung Ko
+
+
+ Evaluation-Aware Reinforcement Learning
+ https://arxiv.org/abs/2509.19464
+ arXiv:2509.19464v2 Announce Type: replace
+Abstract: Policy evaluation is often a prerequisite for deploying safety- and performance-critical systems. Existing evaluation approaches frequently suffer from high variance due to limited data and long-horizon tasks, or high bias due to unequal support or inaccurate environmental models. We posit that these challenges arise, in part, from the standard reinforcement learning (RL) paradigm of policy learning without explicit consideration of evaluation. As an alternative, we propose evaluation-aware reinforcement learning (EvA-RL), in which a policy is trained to maximize expected return while simultaneously minimizing expected evaluation error under a given value prediction scheme -- in other words, being "easy" to evaluate. We formalize a framework for EvA-RL and design an instantiation that enables accurate policy evaluation, conditioned on a small number of rollouts in an assessment environment that can differ from the deployment environment. However, our theoretical analysis and empirical results show that there is often a tradeoff between evaluation accuracy and policy performance when using a fixed value-prediction scheme within EvA-RL. To mitigate this tradeoff, we extend our approach to co-learn an assessment-conditioned state-value predictor alongside the policy. Empirical results across diverse discrete and continuous action domains demonstrate that EvA-RL can substantially reduce evaluation error while maintaining competitive returns. This work lays the foundation for a broad new class of RL methods that treat reliable evaluation as a first-class principle during training.
+ oai:arXiv.org:2509.19464v2
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Shripad Vilasrao Deshmukh, Will Schwarzer, Scott Niekum
+
+
+ Causal Time Series Generation via Diffusion Models
+ https://arxiv.org/abs/2509.20846
+ arXiv:2509.20846v2 Announce Type: replace
+Abstract: Time series generation (TSG) synthesizes realistic sequences and has achieved remarkable success. Among TSG methods, conditional models generate sequences given observed covariates; however, such models learn observational correlations without considering unobserved confounding. In this work, we propose a causal perspective on conditional TSG and introduce causal time series generation as a new TSG task family, formalized within Pearl's causal ladder, extending beyond observational generation to include interventional and counterfactual settings. To instantiate these tasks, we develop CaTSG, a unified diffusion-based framework with backdoor-adjusted guidance that causally steers sampling toward desired interventions and individual counterfactuals while preserving observational fidelity. Specifically, our method derives causal score functions via backdoor adjustment and the abduction-action-prediction procedure, thus enabling principled support for all three levels of TSG. Extensive experiments on both synthetic and real-world datasets show that CaTSG achieves superior fidelity while also supporting interventional and counterfactual generation that existing baselines cannot handle. Overall, we propose the causal TSG family and instantiate it with CaTSG, providing an initial proof-of-concept and opening a promising direction toward more reliable simulation under interventions and counterfactual generation.
+ oai:arXiv.org:2509.20846v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yutong Xia, Chang Xu, Yuxuan Liang, Qingsong Wen, Roger Zimmermann, Jiang Bian
+
+
+ Geographical Centralization Resilience in Ethereum's Block-Building Paradigms
+ https://arxiv.org/abs/2509.21475
+ arXiv:2509.21475v2 Announce Type: replace
+Abstract: Decentralization has an important geographic dimension that conventional metrics, such as stake distribution, often overlook. Where validators operate affects resilience to regional shocks (e.g., outages, natural disasters, or government intervention) as well as fairness in reward access. Yet in permissionless systems, validator locations cannot be prescribed by protocol rules; instead, they emerge endogenously from economic incentives. When certain locations offer systematic advantages, validators may strategically co-locate to maximize expected rewards, as observed in Ethereum, where validators cluster along the Atlantic corridor, which exhibits structurally favorable latency.
+ In this paper, we design and implement an agent-based simulation framework to study how Ethereum's protocol design, particularly its block-building paradigms of local and external block building, interacts with validator and information-source distributions to shape geographical positioning incentives. Our simulations show that Ethereum's block-building architecture is not geographically neutral: both paradigms induce location-dependent payoffs and migration incentives, with asymmetric access to information sources amplifying geographical centralization. We further demonstrate that consensus parameters, such as attestation thresholds and slot times, modulate latency sensitivity and can amplify these effects, acting as protocol-level levers. Finally, we discuss the implications of our findings for protocol design and outline potential mitigation directions informed by our analysis.
+ oai:arXiv.org:2509.21475v2
+ cs.CR
+ cs.CE
+ cs.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Sen Yang, Burak \"Oz, Fei Wu, Fan Zhang
+
+
+ SlimDiff: Training-Free, Activation-Guided Hands-free Slimming of Diffusion Models
+ https://arxiv.org/abs/2509.21498
+ arXiv:2509.21498v2 Announce Type: replace
+Abstract: Diffusion models (DMs), lauded for their generative performance, are computationally prohibitive due to their billion-scale parameters and iterative denoising dynamics. Existing efficiency techniques, such as quantization, timestep reduction, or pruning, offer savings in compute, memory, or runtime but are strictly bottlenecked by reliance on fine-tuning or retraining to recover performance. In this work, we introduce SlimDiff, an automated activation-informed structural compression framework that reduces both attention and feedforward dimensionalities in DMs, while being entirely gradient-free. SlimDiff reframes DM compression as a spectral approximation task, where activation covariances across denoising timesteps define low-rank subspaces that guide dynamic pruning under a fixed compression budget. This activation-aware formulation mitigates error accumulation across timesteps by applying module-wise decompositions over functional weight groups: query--key interactions, value--output couplings, and feedforward projections, rather than isolated matrix factorizations, while adaptively allocating sparsity across modules to respect the non-uniform geometry of diffusion trajectories. SlimDiff achieves up to 35\% acceleration and $\sim$100M parameter reduction over baselines, with generation quality on par with uncompressed models without any backpropagation. Crucially, our approach requires only about 500 calibration samples, over 70$\times$ fewer than prior methods. To our knowledge, this is the first closed-form, activation-guided structural compression of DMs that is entirely training-free, providing both theoretical clarity and practical efficiency.
+ oai:arXiv.org:2509.21498v2
+ cs.LG
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Arani Roy, Shristi Das Biswas, Kaushik Roy
+
+
+ Following the TRACE: A Structured Path to Empathetic Response Generation with Multi-Agent Models
+ https://arxiv.org/abs/2509.21849
+ arXiv:2509.21849v2 Announce Type: replace
+Abstract: Empathetic response generation is a crucial task for creating more human-like and supportive conversational agents. However, existing methods face a core trade-off between the analytical depth of specialized models and the generative fluency of Large Language Models (LLMs). To address this, we propose TRACE, Task-decomposed Reasoning for Affective Communication and Empathy, a novel framework that models empathy as a structured cognitive process by decomposing the task into a pipeline for analysis and synthesis. By building a comprehensive understanding before generation, TRACE unites deep analysis with expressive generation. Experimental results show that our framework significantly outperforms strong baselines in both automatic and LLM-based evaluations, confirming that our structured decomposition is a promising paradigm for creating more capable and interpretable empathetic agents. Our code is available at https://anonymous.4open.science/r/TRACE-18EF/README.md.
+ oai:arXiv.org:2509.21849v2
+ cs.CL
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ziqi Liu, Ziyang Zhou, Yilin Li, Haiyang Zhang, Yangbin Chen
+
+
+ Task-Aware Mixture-of-Experts for Time Series Analysis
+ https://arxiv.org/abs/2509.22279
+ arXiv:2509.22279v3 Announce Type: replace
+Abstract: Time Series Analysis is widely used in various real-world applications such as weather forecasting, financial fraud detection, imputation for missing data in IoT systems, and classification for action recognition. Mixture-of-Experts (MoE), as a powerful architecture, though demonstrating effectiveness in NLP, still falls short in adapting to versatile tasks in time series analytics due to its task-agnostic router and the lack of capability in modeling channel correlations. In this study, we propose a novel, general MoE-based time series framework called PatchMoE to support the intricate ``knowledge'' utilization required by distinct tasks, making it task-aware. Based on the observation that hierarchical representations often vary across tasks, e.g., forecasting vs. classification, we propose a Recurrent Noisy Gating to utilize the hierarchical information in routing, thus obtaining task-specific capability. The routing strategy operates on time series tokens in both the temporal and channel dimensions, encouraged by a meticulously designed Temporal \& Channel Load Balancing Loss to model the intricate temporal and channel correlations. Comprehensive experiments on five downstream tasks demonstrate the state-of-the-art performance of PatchMoE.
+ oai:arXiv.org:2509.22279v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xingjian Wu, Zhengyu Li, Hanyin Cheng, Xiangfei Qiu, Jilin Hu, Chenjuan Guo, Bin Yang
+
+
+ We Think, Therefore We Align LLMs to Helpful, Harmless and Honest Before They Go Wrong
+ https://arxiv.org/abs/2509.22510
+ arXiv:2509.22510v2 Announce Type: replace
+Abstract: Alignment of Large Language Models (LLMs) along multiple objectives (helpfulness, harmlessness, and honesty; HHH) is critical for safe and reliable deployment. Prior work has used steering vectors, small control signals injected into hidden states, to guide LLM outputs, typically via one-to-one (1-to-1) Transformer decoders. In this setting, optimizing a single alignment objective can inadvertently overwrite representations learned for other objectives, leading to catastrophic forgetting. More recent approaches extend steering vectors via one-to-many (1-to-N) Transformer decoders. While this alleviates catastrophic forgetting, naive multi-branch designs optimize each objective independently, which can cause inference fragmentation: outputs across HHH objectives may become inconsistent. We propose Adaptive Multi-Branch Steering (AMBS), a two-stage 1-to-N framework for unified and efficient multi-objective alignment. In Stage I, post-attention hidden states of the Transformer layer are computed once to form a shared representation. In Stage II, this representation is cloned into parallel branches and steered via a policy-reference mechanism, enabling objective-specific control while maintaining cross-objective consistency. Empirical evaluations on Alpaca, BeaverTails, and TruthfulQA show that AMBS consistently improves HHH alignment across multiple 7B LLM backbones. For example, on DeepSeek-7B, AMBS improves average alignment scores by +32.4% and reduces unsafe outputs by 11.0% compared to a naive 1-to-N baseline, while remaining competitive with state-of-the-art methods.
+ oai:arXiv.org:2509.22510v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Gautam Siddharth Kashyap, Mark Dras, Usman Naseem
+
+
+ Transport Based Mean Flows for Generative Modeling
+ https://arxiv.org/abs/2509.22592
+ arXiv:2509.22592v2 Announce Type: replace
+Abstract: Flow-matching generative models have emerged as a powerful paradigm for continuous data generation, achieving state-of-the-art results across domains such as images, 3D shapes, and point clouds. Despite their success, these models suffer from slow inference due to the requirement of numerous sequential sampling steps. Recent work has sought to accelerate inference by reducing the number of sampling steps. In particular, Mean Flows offer a one-step generation approach that delivers substantial speedups while retaining strong generative performance. Yet, in many continuous domains, Mean Flows fail to faithfully approximate the behavior of the original multi-step flow-matching process. In this work, we address this limitation by incorporating optimal transport-based sampling strategies into the Mean Flow framework, enabling one-step generators that better preserve the fidelity and diversity of the original multi-step flow process. Experiments on controlled low-dimensional settings and on high-dimensional tasks such as image generation, image-to-image translation, and point cloud generation demonstrate that our approach achieves superior inference accuracy in one-step generative modeling.
+ oai:arXiv.org:2509.22592v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Elaheh Akbari, Ping He, Ahmadreza Moradipari, Yikun Bai, Soheil Kolouri
+
+
+ GeLoc3r: Enhancing Relative Camera Pose Regression with Geometric Consistency Regularization
+ https://arxiv.org/abs/2509.23038
+ arXiv:2509.23038v2 Announce Type: replace
+Abstract: Prior ReLoc3R achieves breakthrough performance with fast 25ms inference and state-of-the-art regression accuracy, yet our analysis reveals subtle geometric inconsistencies in its internal representations that prevent reaching the precision ceiling of correspondence-based methods like MASt3R (which require 300ms per pair). In this work, we present GeLoc3r, a novel approach to relative camera pose estimation that enhances pose regression methods through Geometric Consistency Regularization (GCR). GeLoc3r overcomes the speed-accuracy dilemma by training regression networks to produce geometrically consistent poses without inference-time geometric computation. During training, GeLoc3r leverages ground-truth depth to generate dense 3D-2D correspondences, weights them using a FusionTransformer that learns correspondence importance, and computes geometrically-consistent poses via weighted RANSAC. This creates a consistency loss that transfers geometric knowledge into the regression network. Unlike the FAR method, which requires both regression and geometric solving at inference, GeLoc3r uses only the enhanced regression head at test time, maintaining ReLoc3R's fast speed and approaching MASt3R's high accuracy. On challenging benchmarks, GeLoc3r consistently outperforms ReLoc3R, achieving significant improvements including 40.45% vs. 34.85% AUC@5{\deg} on the CO3Dv2 dataset (16% relative improvement), 68.66% vs. 66.70% AUC@5{\deg} on RealEstate10K, and 50.45% vs. 49.60% on MegaDepth1500. By teaching geometric consistency during training rather than enforcing it at inference, GeLoc3r represents a paradigm shift in how neural networks learn camera geometry, achieving both the speed of regression and the geometric understanding of correspondence methods.
+ oai:arXiv.org:2509.23038v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jingxing Li, Yongjae Lee, Deliang Fan
+
+
+ Balanced Diffusion-Guided Fusion for Multimodal Remote Sensing Classification
+ https://arxiv.org/abs/2509.23310
+ arXiv:2509.23310v2 Announce Type: replace
+Abstract: Deep learning-based techniques for the analysis of multimodal remote sensing data have become popular due to their ability to effectively integrate complementary spatial, spectral, and structural information from different sensors. Recently, denoising diffusion probabilistic models (DDPMs) have attracted attention in the remote sensing community due to their powerful ability to capture robust and complex spatial-spectral distributions. However, pre-training multimodal DDPMs may result in modality imbalance, and effectively leveraging diffusion features to guide complementary diversity feature extraction remains an open question. To address these issues, this paper proposes a balanced diffusion-guided fusion (BDGF) framework that leverages multimodal diffusion features to guide a multi-branch network for land-cover classification. Specifically, we propose an adaptive modality masking strategy to encourage the DDPMs to obtain a modality-balanced rather than spectral image-dominated data distribution. Subsequently, these diffusion features hierarchically guide feature extraction among CNN, Mamba, and transformer networks by integrating feature fusion, group channel attention, and cross-attention mechanisms. Finally, a mutual learning strategy is developed to enhance inter-branch collaboration by aligning the probability entropy and feature similarity of individual subnetworks. Extensive experiments on four multimodal remote sensing datasets demonstrate that the proposed method achieves superior classification performance. The code is available at https://github.com/HaoLiu-XDU/BDGF.
+ oai:arXiv.org:2509.23310v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hao Liu, Yongjie Zheng, Yuhan Kang, Mingyang Zhang, Maoguo Gong, Lorenzo Bruzzone
+
+
+ Channel, Trend and Periodic-Wise Representation Learning for Multivariate Long-term Time Series Forecasting
+ https://arxiv.org/abs/2509.23583
+ arXiv:2509.23583v2 Announce Type: replace
+Abstract: Downsampling-based methods for time series forecasting have attracted increasing attention due to their superiority in capturing sequence trends. However, these approaches mainly capture dependencies within subsequences but neglect inter-subsequence and inter-channel interactions, which limits forecasting accuracy. To address these limitations, we propose CTPNet, a novel framework that explicitly learns representations from three perspectives: i) inter-channel dependencies, captured by a temporal query-based multi-head attention mechanism; ii) intra-subsequence dependencies, modeled via a Transformer to characterize trend variations; and iii) inter-subsequence dependencies, extracted by reusing the encoder with residual connections to capture global periodic patterns. By jointly integrating these levels, the proposed method provides a more holistic representation of temporal dynamics. Extensive experiments demonstrate the superiority of the proposed method.
+ oai:arXiv.org:2509.23583v2
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Zhangyao Song, Nanqing Jiang, Miaohong He, Xiaoyu Zhao, Tao Guo
+
+
+ MASH: A Multiplatform and Multimodal Annotated Dataset for Societal Impact of Hurricane
+ https://arxiv.org/abs/2509.23627
+ arXiv:2509.23627v2 Announce Type: replace
+Abstract: Natural disasters cause multidimensional threats to human societies, with hurricanes exemplifying one of the most disruptive events, causing not only severe physical damage but also sparking widespread discussion on social media platforms. Existing datasets for studying societal impacts of hurricanes often focus on outdated hurricanes and are limited to a single social media platform, failing to capture the broader societal impact in today's diverse social media environment. Moreover, existing datasets annotate visual and textual content of the post separately, failing to account for the multimodal nature of social media posts. To address these gaps, we present a multiplatform and Multimodal Annotated Dataset for Societal Impact of Hurricane (MASH) that includes 59,607 relevant social media posts from Reddit, TikTok, and YouTube. In addition, all relevant social media posts are annotated in a multimodal approach that considers both textual and visual content on three dimensions: Humanitarian Classes, Bias Classes, and Information Integrity Classes. To the best of our knowledge, MASH is the first large-scale, multi-platform, multimodal, and multi-dimensionally annotated dataset centered on hurricane disasters. In addition, we introduce an online platform that supports interactive data exploration, provides preliminary analytical results, and allows users to share their insights regarding the societal impacts of hurricanes. We envision that MASH can contribute to the study of hurricanes' impact on society, such as disaster response, disaster severity classification, public sentiment analysis, disaster policy making, and bias identification. The dataset is publicly available at https://huggingface.co/datasets/YRC10/MASH under the Creative Commons Attribution 4.0 (CC BY 4.0) license.
+ oai:arXiv.org:2509.23627v2
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ruichen Yao, Aslanbek Murzakhmetov, Raaghav Pillai, Aliya Maussymbayeva, Zelin Li, Yifan Liu, Yaokun Liu, Lanyu Shang, Yang Zhang, Na Wei, Ximing Cai, Dong Wang
+
+
+ GenView++: Unifying Adaptive Generative Augmentation and Quality-Driven Supervision for Contrastive Representation Learning
+ https://arxiv.org/abs/2509.23770
+ arXiv:2509.23770v3 Announce Type: replace
+Abstract: The success of contrastive learning depends on the construction and utilization of high-quality positive pairs. However, current methods face critical limitations on two fronts: on the construction side, both handcrafted and generative augmentations often suffer from limited diversity and risk semantic corruption; on the learning side, the absence of a quality assessment mechanism leads to suboptimal supervision where all pairs are treated equally. To tackle these challenges, we propose GenView++, a unified framework that addresses both fronts by introducing two synergistic innovations. First, to improve pair construction, GenView++ introduces a multi-source adaptive view generation mechanism to synthesize diverse yet semantically coherent views by dynamically modulating generative parameters across image-conditioned, text-conditioned, and image-text-conditioned strategies. Second, a quality-driven contrastive learning mechanism assesses each pair's semantic alignment and diversity to dynamically reweight their training contribution, prioritizing high-quality pairs while suppressing redundant or misaligned pairs. Extensive experiments demonstrate the effectiveness of GenView++ across both vision and vision-language tasks. For vision representation learning, it improves MoCov2 by +2.5% on ImageNet linear classification. For vision-language learning, it raises the average zero-shot classification accuracy by +12.31% over CLIP and +5.31% over SLIP across ten datasets, and further improves Flickr30k text retrieval R@5 by +3.2%.
+ oai:arXiv.org:2509.23770v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaojie Li, Bei Wang, Wei Liu, Jianlong Wu, Yue Yu, Liqiang Nie, Min Zhang
+
+
+ SandCell: Sandboxing Rust Beyond Unsafe Code
+ https://arxiv.org/abs/2509.24032
+ arXiv:2509.24032v2 Announce Type: replace
+Abstract: Rust is a modern systems programming language that ensures memory safety by enforcing ownership and borrowing rules at compile time. While the unsafe keyword allows programmers to bypass these restrictions, it introduces significant risks. Various approaches for isolating unsafe code to protect safe Rust from vulnerabilities have been proposed, yet these methods provide only fixed isolation boundaries and do not accommodate expressive policies that require sandboxing both safe and unsafe code. This paper presents SandCell for flexible and lightweight isolation in Rust by leveraging existing syntactic boundaries. SandCell allows programmers to specify which components to sandbox with minimal annotation effort, enabling fine-grained control over isolation. The system also introduces novel techniques to minimize overhead when transferring data between sandboxes. Our evaluation demonstrates SandCell's effectiveness in preventing vulnerabilities across various Rust applications while maintaining reasonable performance overheads.
+ oai:arXiv.org:2509.24032v2
+ cs.SE
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Jialun Zhang, Merve Gulmez, Thomas Nyman, Gang Tan
+
+
+ LOGOS: LLM-driven End-to-End Grounded Theory Development and Schema Induction for Qualitative Research
+ https://arxiv.org/abs/2509.24294
+ arXiv:2509.24294v2 Announce Type: replace
+Abstract: Grounded theory offers deep insights from qualitative data, but its reliance on expert-intensive manual coding presents a major scalability bottleneck. Existing computational tools either fail on full automation or lack flexible schema construction. We introduce LOGOS, a novel, end-to-end framework that fully automates the grounded theory workflow, transforming raw text into a structured, hierarchical theory. LOGOS integrates LLM-driven coding, semantic clustering, graph reasoning, and a novel iterative refinement process to build highly reusable codebooks. To ensure fair comparison, we also introduce a principled 5-dimensional metric and a train-test split protocol for standardized, unbiased evaluation. Across five diverse corpora, LOGOS consistently outperforms strong baselines and achieves a remarkable average $80.4\%$ alignment with an expert-developed schema on complex datasets. LOGOS demonstrates a potential to democratize and scale qualitative research without sacrificing theoretical nuance.
+ oai:arXiv.org:2509.24294v2
+ cs.CL
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xinyu Pi, Qisen Yang, Chuong Nguyen
+
+
+ humancompatible.detect: a Python Toolkit for Detecting Bias in AI Models
+ https://arxiv.org/abs/2509.24340
+ arXiv:2509.24340v2 Announce Type: replace
+Abstract: There is a strong recent emphasis on trustworthy AI. In particular, international regulations, such as the AI Act, demand that AI practitioners measure data quality on the input and estimate bias on the output of high-risk AI systems. However, there are many challenges involved, including scalability (MMD) and computability (Wasserstein-1) issues of traditional methods for estimating distances on measure spaces. Here, we present humancompatible.detect, a toolkit for bias detection that addresses these challenges. It incorporates two newly developed methods to detect and evaluate bias: maximum subgroup discrepancy (MSD) and subsampled $\ell_\infty$ distances. It has an easy-to-use API documented with multiple examples. humancompatible.detect is licensed under the Apache License, Version 2.0.
+ oai:arXiv.org:2509.24340v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ German M. Matilla, Jiri Nemecek, Illia Kryvoviaz, Jakub Marecek
+
+
+ Learning to Solve Optimization Problems Constrained with Partial Differential Equations
+ https://arxiv.org/abs/2509.24573
+ arXiv:2509.24573v2 Announce Type: replace
+Abstract: Partial differential equation (PDE)-constrained optimization arises in many scientific and engineering domains, such as energy systems, fluid dynamics and material design. In these problems, the decision variables (e.g., control inputs or design parameters) are tightly coupled with the PDE state variables, and the feasible set is implicitly defined by the governing PDE constraints. This coupling makes the problems computationally demanding, as it requires handling high-dimensional discretization and dynamic constraints. To address these challenges, this paper introduces a learning-based framework that integrates a dynamic predictor with an optimization surrogate. The dynamic predictor, a novel time-discrete Neural Operator (Lu et al.), efficiently approximates system trajectories governed by PDE dynamics, while the optimization surrogate leverages proxy optimizer techniques (Kotary et al.) to approximate the associated optimal decisions. This dual-network design enables real-time approximation of optimal strategies while explicitly capturing the coupling between decisions and PDE dynamics. We validate the proposed approach on benchmark PDE-constrained optimization tasks including Burgers' equation, the heat equation and voltage regulation, and demonstrate that it achieves solution quality comparable to classical control-based algorithms, such as the Direct Method and Model Predictive Control (MPC), while providing up to four orders of magnitude improvement in computational speed.
+ oai:arXiv.org:2509.24573v2
+ cs.LG
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yusuf Guven, Vincenzo Di Vito, Ferdinando Fioretto
+
+
+ Visual serial processing deficits explain divergences in human and VLM reasoning
+ https://arxiv.org/abs/2509.25142
+ arXiv:2509.25142v2 Announce Type: replace
+Abstract: Why do Vision Language Models (VLMs), despite success on standard benchmarks, often fail to match human performance on surprisingly simple visual reasoning tasks? While the underlying computational principles are still debated, we hypothesize that a crucial factor is a deficit in visually-grounded serial processing. To test this hypothesis, we compared human and VLM performance across tasks designed to vary serial processing demands in three distinct domains: geometric reasoning, perceptual enumeration, and mental rotation. Tasks within each domain varied serial processing load by manipulating factors such as geometric concept complexity, perceptual individuation load, and transformation difficulty. Across all domains, our results revealed a consistent pattern: decreased VLM accuracy was strongly correlated with increased human reaction time (used as a proxy for serial processing load). As tasks require more demanding serial processing -- whether composing concepts, enumerating items, or performing mental transformations -- the VLM-human performance gap widens reliably. These findings support our hypothesis, indicating that limitations in serial, visually grounded reasoning represent a fundamental bottleneck that distinguishes current VLMs from humans.
+ oai:arXiv.org:2509.25142v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Nicholas Budny, Kia Ghods, Declan Campbell, Raja Marjieh, Amogh Joshi, Sreejan Kumar, Jonathan D. Cohen, Taylor W. Webb, Thomas L. Griffiths
+
+
+ Message passing-based inference in an autoregressive active inference agent
+ https://arxiv.org/abs/2509.25482
+ arXiv:2509.25482v2 Announce Type: replace
+Abstract: We present the design of an autoregressive active inference agent in the form of message passing on a factor graph. Expected free energy is derived and distributed across a planning graph. The proposed agent is validated on a robot navigation task, demonstrating exploration and exploitation in a continuous-valued observation space with bounded continuous-valued actions. Compared to a classical optimal controller, the agent modulates action based on predictive uncertainty, arriving later but with a better model of the robot's dynamics.
+ oai:arXiv.org:2509.25482v2
+ cs.AI
+ cs.LG
+ cs.RO
+ cs.SY
+ eess.SY
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wouter M. Kouw, Tim N. Nisslbeck, Wouter L. N. Nuijten
+
+
+ A Hamiltonian driven Geometric Construction of Neural Networks on the Lognormal Statistical Manifold
+ https://arxiv.org/abs/2509.25778
+ arXiv:2509.25778v2 Announce Type: replace
+Abstract: Bridging information geometry with machine learning, this paper presents a method for constructing neural networks intrinsically on statistical manifolds. We demonstrate this approach by formulating a neural network architecture directly on the lognormal statistical manifold. The construction is driven by the Hamiltonian system that is equivalent to the gradient flow on this manifold. First, we define the network's input values using the coordinate system of this Hamiltonian dynamics, naturally embedded in the Poincare disk. The core of our contribution lies in the derivation of the network's components from geometric principles: the rotation component of the synaptic weight matrix is determined by the Lie group action of SU(1,1) on the disk, while the activation function emerges from the symplectic structure of the system. We subsequently obtain the complete weight matrix, including its translation vector, and the resulting output values. This work shows that the lognormal manifold can be seamlessly viewed as a neural manifold, with its geometric properties dictating a unique and interpretable neural network structure. The proposed method offers a new paradigm for building learning systems grounded in the differential geometry of their underlying parameter spaces.
+ oai:arXiv.org:2509.25778v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Prosper Rosaire Mama Assandje, Teumsa Aboubakar, Thomas Bouetou Bouetou
+
+
+ FTSCommDetector: Discovering Behavioral Communities through Temporal Synchronization
+ https://arxiv.org/abs/2510.00014
+ arXiv:2510.00014v3 Announce Type: replace
+Abstract: Why do trillion-dollar tech giants AAPL and MSFT diverge into different response patterns during market disruptions despite identical sector classifications? This paradox reveals a fundamental limitation: traditional community detection methods fail to capture synchronization-desynchronization patterns where entities move independently yet align during critical moments. To this end, we introduce FTSCommDetector, implementing our Temporal Coherence Architecture (TCA) to discover similar and dissimilar communities in continuous multivariate time series. Unlike existing methods that process each timestamp independently, causing unstable community assignments and missing evolving relationships, our approach maintains coherence through dual-scale encoding and static topology with dynamic attention. Furthermore, we establish information-theoretic foundations demonstrating how scale separation maximizes complementary information and introduce Normalized Temporal Profiles (NTP) for scale-invariant evaluation. As a result, FTSCommDetector achieves consistent improvements across four diverse financial markets (SP100, SP500, SP1000, Nikkei 225), with gains ranging from 3.5% to 11.1% over the strongest baselines. The method demonstrates remarkable robustness with only 2% performance variation across window sizes from 60 to 120 days, making dataset-specific tuning unnecessary, providing practical insights for portfolio construction and risk management.
+ oai:arXiv.org:2510.00014v3
+ cs.SI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Tianyang Luo, Xikun Zhang, Dongjin Song
+
+
+ When Hallucination Costs Millions: Benchmarking AI Agents in High-Stakes Adversarial Financial Markets
+ https://arxiv.org/abs/2510.00332
+ arXiv:2510.00332v2 Announce Type: replace
+Abstract: We present CAIA, a benchmark exposing a critical blind spot in AI evaluation: the inability of state-of-the-art models to operate in adversarial, high-stakes environments where misinformation is weaponized and errors are irreversible. While existing benchmarks measure task completion in controlled settings, real-world deployment demands resilience against active deception. Using crypto markets as a testbed where $30 billion was lost to exploits in 2024, we evaluate 17 models on 178 time-anchored tasks requiring agents to distinguish truth from manipulation, navigate fragmented information landscapes, and make irreversible financial decisions under adversarial pressure.
+ Our results reveal a fundamental capability gap: without tools, even frontier models achieve only 28% accuracy on tasks junior analysts routinely handle. Tool augmentation improves performance but plateaus at 67.4% versus 80% human baseline, despite unlimited access to professional resources. Most critically, we uncover a systematic tool selection catastrophe: models preferentially choose unreliable web search over authoritative data, falling for SEO-optimized misinformation and social media manipulation. This behavior persists even when correct answers are directly accessible through specialized tools, suggesting foundational limitations rather than knowledge gaps. We also find that Pass@k metrics mask dangerous trial-and-error behavior for autonomous deployment.
+ The implications extend beyond crypto to any domain with active adversaries, e.g. cybersecurity, content moderation, etc. We release CAIA with contamination controls and continuous updates, establishing adversarial robustness as a necessary condition for trustworthy AI autonomy. The benchmark reveals that current models, despite impressive reasoning scores, remain fundamentally unprepared for environments where intelligence must survive active opposition.
+ oai:arXiv.org:2510.00332v2
+ cs.AI
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zeshi Dai, Zimo Peng, Zerui Cheng, Ryan Yihe Li
+
+
+ Towards Verifiable Federated Unlearning: Framework, Challenges, and The Road Ahead
+ https://arxiv.org/abs/2510.00833
+ arXiv:2510.00833v2 Announce Type: replace
+Abstract: Federated unlearning (FUL) enables removing the data influence from the model trained across distributed clients, upholding the right to be forgotten as mandated by privacy regulations. FUL facilitates a value exchange where clients gain privacy-preserving control over their data contributions, while service providers leverage decentralized computing and data freshness. However, this entire proposition is undermined because clients have no reliable way to verify that their data influence has been provably removed, as current metrics and simple notifications offer insufficient assurance. We envision unlearning verification becoming a pivotal and trust-by-design part of the FUL life-cycle development, essential for highly regulated and data-sensitive services and applications like healthcare. This article introduces veriFUL, a reference framework for verifiable FUL that formalizes verification entities, goals, approaches, and metrics. Specifically, we consolidate existing efforts and contribute new insights, concepts, and metrics to this domain. Finally, we highlight research challenges and identify potential applications and developments for verifiable FUL and veriFUL. This article aims to provide a comprehensive resource for researchers and practitioners to navigate and advance the field of verifiable FUL.
+ oai:arXiv.org:2510.00833v2
+ cs.DC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Thanh Linh Nguyen, Marcela Tuler de Oliveira, An Braeken, Aaron Yi Ding, Quoc-Viet Pham
+
+
+ COMMET: orders-of-magnitude speed-up in finite element method via batch-vectorized neural constitutive updates
+ https://arxiv.org/abs/2510.00884
+ arXiv:2510.00884v2 Announce Type: replace
+Abstract: Constitutive evaluations often dominate the computational cost of finite element (FE) simulations whenever material models are complex. Neural constitutive models (NCMs) offer a highly expressive and flexible framework for modeling complex material behavior in solid mechanics. However, their practical adoption in large-scale FE simulations remains limited due to significant computational costs, especially in repeatedly evaluating stress and stiffness. NCMs thus represent an extreme case: their large computational graphs make stress and stiffness evaluations prohibitively expensive, restricting their use to small-scale problems. In this work, we introduce COMMET, an open-source FE framework whose architecture has been redesigned from the ground up to accelerate high-cost constitutive updates. Our framework features a novel assembly algorithm that supports batched and vectorized constitutive evaluations, compute-graph-optimized derivatives that replace automatic differentiation, and distributed-memory parallelism via MPI. These advances dramatically reduce runtime, with speed-ups exceeding three orders of magnitude relative to traditional non-vectorized automatic differentiation-based implementations. While we demonstrate these gains primarily for NCMs, the same principles apply broadly wherever for-loop based assembly or constitutive updates limit performance, establishing a new standard for large-scale, high-fidelity simulations in computational mechanics.
+ oai:arXiv.org:2510.00884v2
+ cs.CE
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.cma.2026.118728
+ Computer Methods in Applied Mechanics and Engineering 452 (2026)
+ Benjamin Alheit, Mathias Peirlinck, Siddhant Kumar
+
+
+ Accurate Small-Signal Modeling of Digitally Controlled Buck Converters with ADC-PWM Synchronization
+ https://arxiv.org/abs/2510.00943
+ arXiv:2510.00943v3 Announce Type: replace
+Abstract: Digital control has become increasingly widespread in modern power electronic converters. When acquiring feedback signals such as the inductor current, synchronizing the analog-to-digital converter (ADC) with the digital pulse-width modulator (DPWM) is commonly employed to accurately track their steady-state average. However, the small-signal implications of such synchronization have not been investigated. This paper presents an exact small-signal model for digitally controlled buck converters operating in forced continuous-conduction mode (FCCM) under constant-frequency current-mode control, explicitly accounting for DPWM-ADC synchronization. Using a sampled-data framework, the proposed model captures all sideband effects introduced by the sampling process, yielding precise predictions of both analog and digital loop gains, even at frequencies beyond the switching and sampling frequencies. Both asymmetrical and symmetrical carrier modulations are considered. Furthermore, the digital loop gain is derived in closed form using the modified z-transform, enabling low-complexity compensator design and stability assessment. Within this framework, the analog loop gain can be directly obtained from the digital loop gain, thereby eliminating the need for computationally intensive infinite series evaluations. The validity of the proposed model is confirmed through both simulation and experimental results.
+ oai:arXiv.org:2510.00943v3
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hang Zhou, Yuxin Yang, Branislav Hredzak, John Edward Fletcher
+
+
+ PRISM-Consult: A Panel-of-Experts Architecture for Clinician-Aligned Diagnosis
+ https://arxiv.org/abs/2510.01114
+ arXiv:2510.01114v2 Announce Type: replace
+Abstract: We present PRISM-Consult, a clinician-aligned panel-of-experts architecture that extends the compact PRISM sequence model into a routed family of domain specialists. Episodes are tokenized as structured clinical events; a light-weight router reads the first few tokens and dispatches to specialist models (Cardiac-Vascular, Pulmonary, Gastro-Oesophageal, Musculoskeletal, Psychogenic). Each specialist inherits PRISM's small transformer backbone and token template, enabling parameter efficiency and interpretability. This initial study evaluates a scoped panel of five specialist families defined by high-impact ED diagnostic groups. On real-world Emergency Department cohorts, specialists exhibit smooth convergence with low development perplexities across domains, while the router achieves high routing quality and large compute savings versus consult-all under a safety-first policy. We detail the data methodology (initial vs.\ conclusive ICD-9 families), routing thresholds and calibration, and report per-domain results to avoid dominance by common events. The framework provides a practical path to safe, auditable, and low-latency consult at scale, and we outline validation steps-external/temporal replication, asymmetric life-threat thresholds, and multi-label arbitration-to meet prospective clinical deployment standards.
+ oai:arXiv.org:2510.01114v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Lionel Levine, John Santerre, Alexander S. Young, T. Barry Levine, Francis Campion, Majid Sarrafzadeh
+
+
+ An Optical Measurement System for Open-Source Tracking of Jaw Motions
+ https://arxiv.org/abs/2510.01191
+ arXiv:2510.01191v2 Announce Type: replace
+Abstract: Precise tracking of the jaw kinematics is crucial for diagnosing various musculoskeletal and neuromuscular diseases affecting the masticatory system and for advancing rehabilitative devices such as jaw exoskeletons, a hardly explored research field, to treat these disorders. We introduce an open-source, low-cost, precise, non-invasive, and biocompatible jaw tracking system based on optical motion capture technology to address the need for accessible and adaptable research tools. The system encompasses a complete pipeline from data acquisition, processing, and kinematic analysis to filtering, visualization, and data storage. We evaluated its performance and feasibility in experiments with four participants executing various jaw movements. The system demonstrated reliable kinematic tracking with an estimated precision of $(182 \pm 47)\,\mu\mathrm{m}$ and $(0.126 \pm 0.034)^\circ$. Therefore, the open-source nature of the system and its utility comparable to commercial systems make it suitable for many research and development contexts, especially for applications such as the integration and design of jaw exoskeletons and customized diagnostic protocols. The complete system is available at GitHub with the aim of promoting innovation in temporomandibular disorders research and jaw assistive technology.
+ oai:arXiv.org:2510.01191v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/SENSORS59705.2025.11330651
+ Paul-Otto M\"uller, Sven Suppelt, Mario Kupnik, Oskar von Stryk
+
+
+ PepCompass: Navigating peptide embedding spaces using Riemannian Geometry
+ https://arxiv.org/abs/2510.01988
+ arXiv:2510.01988v4 Announce Type: replace
+Abstract: Antimicrobial peptide discovery is challenged by the astronomical size of peptide space and the relative scarcity of active peptides. Generative models provide continuous latent "maps" of peptide space, but conventionally ignore decoder-induced geometry and rely on flat Euclidean metrics, rendering exploration and optimization distorted and inefficient. Prior manifold-based remedies assume fixed intrinsic dimensionality, which critically fails in practice for peptide data. Here, we introduce PepCompass, a geometry-aware framework for peptide exploration and optimization. At its core, we define a Union of $\kappa$-Stable Riemannian Manifolds $\mathbb{M}^{\kappa}$, a family of decoder-induced manifolds that captures local geometry while ensuring computational stability. We propose two local exploration methods: Second-Order Riemannian Brownian Efficient Sampling, which provides a convergent second-order approximation to Riemannian Brownian motion, and Mutation Enumeration in Tangent Space, which reinterprets tangent directions as discrete amino-acid substitutions. Combining these yields Local Enumeration Bayesian Optimization (LE-BO), an efficient algorithm for local activity optimization. Finally, we introduce Potential-minimizing Geodesic Search (PoGS), which interpolates between prototype embeddings along property-enriched geodesics, biasing discovery toward seeds, i.e. peptides with favorable activity. In-vitro validation confirms the effectiveness of PepCompass: PoGS yields four novel seeds, and subsequent optimization with LE-BO discovers 25 highly active peptides with broad-spectrum activity, including against resistant bacterial strains. These results demonstrate that geometry-informed exploration provides a powerful new paradigm for antimicrobial peptide design.
+ oai:arXiv.org:2510.01988v4
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Marcin Mo\.zejko, Adam Bielecki, Jurand Pr\k{a}dzy\'nski, Marcin Traskowski, Antoni Janowski, Hyun-Su Lee, Marcelo Der Torossian Torres, Micha{\l} Kmicikiewicz, Paulina Szymczak, Karol Jurasz, Micha{\l} Kucharczyk, Cesar de la Fuente-Nunez, Ewa Szczurek
+
+
+ C2|Q>: A Robust Framework for Bridging Classical and Quantum Software Development
+ https://arxiv.org/abs/2510.02854
+ arXiv:2510.02854v2 Announce Type: replace
+Abstract: Quantum Software Engineering (QSE) is emerging as a critical discipline to make quantum computing accessible to a broader developer community; however, most quantum development environments still require developers to engage with low-level details across the software stack - including problem encoding, circuit construction, algorithm configuration, hardware selection, and result interpretation - making them difficult for classical software engineers to use. To bridge this gap, we present C2|Q>, a hardware-agnostic quantum software development framework that translates specific types of classical specifications into quantum-executable programs while preserving methodological rigor. The framework applies modular software engineering principles by classifying the workflow into three core modules: an encoder that classifies problems, produces Quantum-Compatible Formats (QCFs), and constructs quantum circuits; a deployment module that generates circuits and recommends hardware based on fidelity, runtime, and cost; and a decoder that interprets quantum outputs into classical solutions. This architecture supports systematic evaluation across simulators and Noisy Intermediate-Scale Quantum (NISQ) devices, remaining scalable to new problem classes and algorithms. In evaluation, the encoder module achieved a 93.8% completion rate, and the hardware recommendation module consistently selected the appropriate quantum devices for workloads scaling up to 56 qubits. These results indicate that C2|Q> lowers the entry barrier to quantum software development by providing a reproducible, extensible toolchain that connects classical specifications to quantum execution. The open-source implementation of C2|Q> is available at https://github.com/C2-Q/C2Q.
+ oai:arXiv.org:2510.02854v2
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Boshuai Ye, Arif Ali Khan, Teemu Pihkakoski, Peng Liang, Muhammad Azeem Akbar, Matti Silveri, Lauri Malmi
+
+
+ Signature-Informed Transformer for Asset Allocation
+ https://arxiv.org/abs/2510.03129
+ arXiv:2510.03129v2 Announce Type: replace
+Abstract: Modern deep learning for asset allocation typically separates forecasting from optimization. We argue this creates a fundamental mismatch where minimizing prediction errors fails to yield robust portfolios. We propose the Signature-Informed Transformer to address this by unifying feature extraction and decision making into a single policy. Our model employs path signatures to encode complex path dependencies and introduces a specialized attention mechanism that targets geometric asset relationships. By directly minimizing the Conditional Value at Risk, we ensure the training objective aligns with financial goals. We prove that our attention module rigorously amplifies signature-derived signals. Experiments across diverse equity universes show our approach significantly outperforms both traditional strategies and advanced forecasting baselines. The code is available at: https://anonymous.4open.science/r/Signature-Informed-Transformer-For-Asset-Allocation-DB88
+ oai:arXiv.org:2510.03129v2
+ cs.LG
+ cs.AI
+ q-fin.PM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yoontae Hwang, Stefan Zohren
+
+
+ What Do We Mean When We Talk About Data Storytelling?
+ https://arxiv.org/abs/2510.04761
+ arXiv:2510.04761v2 Announce Type: replace
+Abstract: Data storytelling has seen rapid growth through a proliferation of examples, as well as theoretical and technical advancements contributed across multiple disciplines. In this paper, we present a comprehensive survey of data storytelling research from 2010 to 2025. By analyzing the conceptualizations of data storytelling collected from related publications, we reveal the field's perspectives on the What, How, Why, and Who of data storytelling. We further investigate the operationalization of data stories. We identify 12 data story forms that provide concrete examples of how data stories have been presented, and derive a set of spectrum-based dimensions that capture important properties of data stories. Along each spectrum, we discuss applicable forms and design alternatives to analyze how they shape data storytelling experiences, along with the associated design trade-offs. Additionally, we examine how traditional narrative elements, like plot and character, have been adapted in data stories to support the operationalization of a data storytelling narratological perspective. Finally, we conclude the survey with a synthesis of our major findings and implications for future research.
+ oai:arXiv.org:2510.04761v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Leni Yang, Zezhong Wang, Xingyu Lan
+
+
+ Think Then Embed: Generative Context Improves Multimodal Embedding
+ https://arxiv.org/abs/2510.05014
+ arXiv:2510.05014v4 Announce Type: replace
+Abstract: There is a growing interest in Universal Multimodal Embeddings (UME), where models are required to generate task-specific representations. While recent studies show that Multimodal Large Language Models (MLLMs) perform well on such tasks, they treat MLLMs solely as encoders, overlooking their generative capacity. However, such an encoding paradigm becomes less effective as instructions become more complex and require compositional reasoning. Inspired by the proven effectiveness of chain-of-thought reasoning, we propose a general Think-Then-Embed (TTE) framework for UME, composed of a reasoner and an embedder. The reasoner MLLM first generates reasoning traces that explain complex queries, followed by an embedder that produces representations conditioned on both the original query and the intermediate reasoning. This explicit reasoning step enables more nuanced understanding of complex multimodal instructions. Our contributions are threefold. First, by leveraging a powerful MLLM reasoner, we achieve state-of-the-art performance on the MMEB-V2 benchmark, surpassing proprietary models trained on massive in-house datasets. Second, to reduce the dependency on large MLLM reasoners, we finetune a smaller MLLM reasoner using high-quality embedding-centric reasoning traces, achieving the best performance among open-source models with a 7% absolute gain over recently proposed models. Third, we investigate strategies for integrating the reasoner and embedder into a unified model for improved efficiency without sacrificing performance.
+ oai:arXiv.org:2510.05014v4
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xuanming Cui, Jianpeng Cheng, Hong-you Chen, Satya Narayan Shukla, Abhijeet Awasthi, Xichen Pan, Chaitanya Ahuja, Shlok Kumar Mishra, Yonghuan Yang, Jun Xiao, Qi Guo, Ser-Nam Lim, Aashu Singh, Xiangjun Fan
+
+
+ Structuring Reasoning for Complex Rules Beyond Flat Representations
+ https://arxiv.org/abs/2510.05134
+ arXiv:2510.05134v2 Announce Type: replace
+Abstract: Large language models (LLMs) face significant challenges when processing complex rule systems, as they typically treat interdependent rules as unstructured textual data rather than as logically organized frameworks. This limitation results in reasoning divergence, where models often overlook critical rule dependencies essential for accurate interpretation. Although existing approaches such as Chain-of-Thought (CoT) reasoning have shown promise, they lack systematic methodologies for structured rule processing and are particularly susceptible to error propagation through sequential reasoning chains. To address these limitations, we propose the Dynamic Adjudication Template (DAT), a novel framework inspired by expert human reasoning processes. DAT structures the inference mechanism into three methodical stages: qualitative analysis, evidence gathering, and adjudication. During the qualitative analysis phase, the model comprehensively evaluates the contextual landscape. The subsequent evidence gathering phase involves the targeted extraction of pertinent information based on predefined template elements ([placeholder]), followed by systematic verification against applicable rules. Finally, in the adjudication phase, the model synthesizes these validated components to formulate a comprehensive judgment. Empirical results demonstrate that DAT consistently outperforms conventional CoT approaches in complex rule-based tasks. Notably, DAT enables smaller language models to match, and in some cases exceed, the performance of significantly larger LLMs, highlighting its efficiency and effectiveness in managing intricate rule systems.
+ oai:arXiv.org:2510.05134v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhihao Yang, Ancheng Xu, Jingpeng Li, Liang Yan, Jiehui Zhou, Zhen Qin, Hengyu Chang, Yukun Chen, Longze Chen, Ahmadreza Argha, Hamid Alinejad-Rokny, Minghuan Tan, Yujun Cai, Min Yang
+
+
+ Auditable Unit-Aware Thresholds in Symbolic Regression via Logistic-Gated Operators
+ https://arxiv.org/abs/2510.05178
+ arXiv:2510.05178v3 Announce Type: replace
+Abstract: AI for health will only scale when models are not only accurate but also readable, auditable, and governable. Many clinical and public-health decisions hinge on numeric thresholds -- cut-points that trigger alarms, treatment, or follow-up -- yet most machine-learning systems bury those thresholds inside opaque scores or smooth response curves. We introduce logistic-gated operators (LGO) for symbolic regression, which promote thresholds to first-class, unit-aware parameters inside equations and map them back to physical units for direct comparison with guidelines. On public ICU and population-health cohorts (MIMIC-IV ICU, eICU, NHANES), LGO recovers clinically plausible gates on MAP, lactate, GCS, SpO2, BMI, fasting glucose, and waist circumference while remaining competitive with established scoring systems (AutoScore) and explainable boosting machines (EBM). The gates are sparse and selective: they appear when regime switching is supported by the data and are pruned on predominantly smooth tasks, yielding compact formulas that clinicians can inspect, stress-test, and revise. As a standalone symbolic model or a safety overlay on black-box systems, LGO helps translate observational data into auditable, unit-aware rules for medicine and other threshold-driven domains.
+ oai:arXiv.org:2510.05178v3
+ cs.LG
+ cs.AI
+ cs.SC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ou Deng, Ruichen Cong, Jianting Xu, Shoji Nishimura, Atsushi Ogihara, Qun Jin
+
+
+ Evaluating LLM Safety Across Child Development Stages: A Simulated Agent Approach
+ https://arxiv.org/abs/2510.05484
+ arXiv:2510.05484v2 Announce Type: replace
+Abstract: Current safety alignment for Large Language Models (LLMs) implicitly optimizes for a "modal adult user," leaving models vulnerable to distributional shifts in user cognition. We present ChildSafe, a benchmark that quantifies alignment robustness under cognitive shifts corresponding to four developmental stages. Unlike static persona-based evaluations, we introduce a parametric cognitive simulation approach, formalizing developmental stages as hyperparameter constraints (e.g., volatility, context horizon) to generate out-of-distribution interaction traces. We validate these agents against ground-truth human linguistic data (CHILDES) and deploy them across 1,200 multi-turn interactions. Our results reveal a systematic alignment generalization gap: state-of-the-art models exhibit up to 11.5% performance degradation when interacting with early-childhood agents compared to standard baselines. We provide the research community with the validated agent artifacts and evaluation protocols to facilitate robust alignment testing against non-adversarial, cognitively diverse populations.
+ oai:arXiv.org:2510.05484v2
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Abhejay Murali, Saleh Afroogh, Kevin Chen, David Atkinson, Amit Dhurandhar, Junfeng Jiao
+
+
+ Learning from Failures: Understanding LLM Alignment through Failure-Aware Inverse RL
+ https://arxiv.org/abs/2510.06092
+ arXiv:2510.06092v2 Announce Type: replace
+Abstract: Reinforcement Learning from Human Feedback (RLHF) aligns Large Language Models (LLMs) with human preferences, yet the underlying reward signals they internalize remain hidden, posing a critical challenge for interpretability and safety. Existing approaches attempt to extract these latent incentives using Inverse Reinforcement Learning (IRL), but treat all preference pairs equally, often overlooking the most informative signals: those examples the extracted reward model misclassifies or assigns nearly equal scores, which we term \emph{failures}. We introduce a novel \emph{failure-aware} IRL algorithm that focuses on misclassified or difficult examples to recover the latent rewards defining model behaviors. By learning from these failures, our failure-aware IRL extracts reward functions that better reflect the true objectives behind RLHF. We demonstrate that failure-aware IRL outperforms existing IRL baselines across multiple metrics when applied to LLM detoxification, without requiring external classifiers or supervision. Crucially, failure-aware IRL yields rewards that better capture the true incentives learned during RLHF, enabling more effective re-RLHF training than standard IRL. This establishes failure-aware IRL as a robust, scalable method for auditing model alignment and reducing ambiguity in the IRL process.
+ oai:arXiv.org:2510.06092v2
+ cs.LG
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Nyal Patel, Matthieu Bou, Arjun Jagota, Satyapriya Krishna, Sonali Parbhoo
+
+
+ CoT Referring: Improving Referring Expression Tasks with Grounded Reasoning
+ https://arxiv.org/abs/2510.06243
+ arXiv:2510.06243v2 Announce Type: replace
+Abstract: Referring Expression Comprehension and Segmentation are critical tasks for assessing the integration of language understanding and image comprehension, serving as benchmarks for Multimodal Large Language Model (MLLM) capabilities. To address these challenges, we propose a new strategy, CoT Referring, which enhances model reasoning across modalities through structured, chain-of-thought training data. Our approach systematically parses textual structures into sequential referring steps; each step identifies relationships and ensures consistent reference alignment, thereby improving accuracy in complex query scenarios. We restructure the training data to enforce a new output form, providing new annotations for existing datasets and compiling an evaluation benchmark from existing resources. This benchmark is designed explicitly for complex referring cases. We also integrate detection and segmentation capabilities into a unified MLLM framework, training it with a novel adaptive weighted loss to optimize performance. Experimental results on our curated benchmark and RefCOCO/+/g demonstrate the effectiveness of our approach, with a notable increase of 2.5%+ over baseline models.
+ oai:arXiv.org:2510.06243v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qihua Dong, Luis Figueroa, Handong Zhao, Kushal Kafle, Jason Kuen, Zhihong Ding, Scott Cohen, Yun Fu
+
+
+ Relational Database Distillation: From Structured Tables to Condensed Graph Data
+ https://arxiv.org/abs/2510.06980
+ arXiv:2510.06980v2 Announce Type: replace
+Abstract: Relational databases (RDBs) underpin the majority of global data management systems, where information is structured into multiple interdependent tables. To effectively use the knowledge within RDBs for predictive tasks, recent advances leverage graph representation learning to capture complex inter-table relations as multi-hop dependencies. Despite achieving state-of-the-art performance, these methods remain hindered by the prohibitive storage overhead and excessive training time, due to the massive scale of the database and the computational burden of intensive message passing across interconnected tables. To alleviate these concerns, we propose and study the problem of Relational Database Distillation (RDD). Specifically, we aim to distill large-scale RDBs into compact heterogeneous graphs while retaining the predictive power (i.e., utility) required for training graph-based models. Multi-modal column information is preserved through node features, and primary-foreign key relations are encoded via heterogeneous edges, thereby maintaining both data fidelity and relational structure. To ensure adaptability across diverse downstream tasks without engaging the traditional, inefficient bi-level distillation framework, we further design a kernel ridge regression-guided objective with pseudo-labels, which produces quality features for the distilled graph. Extensive experiments on multiple real-world RDBs demonstrate that our solution substantially reduces the data size while maintaining competitive performance on classification and regression tasks, creating an effective pathway for scalable learning with RDBs.
+ oai:arXiv.org:2510.06980v2
+ cs.DB
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xinyi Gao, Jingxi Zhang, Lijian Chen, Tong Chen, Lizhen Cui, Hongzhi Yin
+
+
+ When Benchmarks Age: Temporal Misalignment through Large Language Model Factuality Evaluation
+ https://arxiv.org/abs/2510.07238
+ arXiv:2510.07238v2 Announce Type: replace
+Abstract: The rapid evolution of large language models (LLMs) and the real world has outpaced the static nature of widely used evaluation benchmarks, raising concerns about their reliability for evaluating LLM factuality. While substantial work continues to rely on popular but aging benchmarks, their temporal misalignment with real-world facts and modern LLMs, and its effects on LLM factuality evaluation, remain underexplored. Therefore, in this work, we present a systematic investigation of this issue by examining five popular factuality benchmarks and eight LLMs released across different years. An up-to-date fact retrieval pipeline and three metrics are tailored to quantify benchmark aging and its impact on LLM factuality evaluation. Experimental results and analysis illustrate that a considerable portion of samples in the widely used factuality benchmarks are outdated, leading to unreliable assessments of LLM factuality. We hope our work can provide a testbed to assess the reliability of a benchmark for LLM factuality evaluation and inspire more research on the benchmark aging issue. Code is available at https://github.com/JiangXunyi/BenchAge.
+ oai:arXiv.org:2510.07238v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xunyi Jiang, Dingyi Chang, Julian McAuley, Xin Xu
+
+
+ Anti-Jamming based on Beam-Steering Antennas and Intelligent UAV Swarm Behavior
+ https://arxiv.org/abs/2510.07292
+ arXiv:2510.07292v2 Announce Type: replace
+Abstract: In recent years, Unmanned Aerial Vehicles (UAVs) have brought a true revolution to military tactics. While UAVs already constitute an advantage when operating alone, multi-UAV swarms expand the available possibilities, allowing the UAVs to collaborate and support each other as a team to carry out a given task. This entails the capability to exchange information related to situation awareness and action coordination by means of a suitable wireless communication technology. In such a scenario, the adversary is expected to disrupt communications by jamming the communication channel, which becomes the Achilles heel of the swarm. While anti-jamming techniques constitute a well-covered topic in the literature, the use of intelligent swarm behaviors to leverage those techniques is still an open research issue.
+ This paper explores the use of Genetic Algorithms (GAs) to jointly optimize UAV swarm formation, beam-steering antennas, and traffic routing in order to mitigate the effect of jamming on the main coordination channel, under the assumption that a more robust, low-data-rate channel is used for formation management signaling. Simulation results show the effectiveness of the proposed approach. However, the significant computational cost paves the way for further research.
+ oai:arXiv.org:2510.07292v2
+ cs.NI
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/COMNETSAT68601.2025.11324847
+ IEEE International Conference on Communication, Networks and Satellite (COMNETSAT), Padang, Indonesia, 2025, pp. 278-285
+ Tiago Silva, Ant\'onio Grilo
+
+
+ AgentAsk: Multi-Agent Systems Need to Ask
+ https://arxiv.org/abs/2510.07593
+ arXiv:2510.07593v2 Announce Type: replace
+Abstract: Multi-agent systems (MAS) built on large language models promise improved problem-solving through collaboration, yet they often fail to consistently outperform strong single-agent baselines due to error propagation at inter-agent message handoffs. In this work, we conduct a systematic empirical analysis of such failures and introduce an edge-level error taxonomy that identifies four dominant error types - Data Gap, Signal Corruption, Referential Drift, and Capability Gap - as primary sources of failure in multi-agent interactions. Building on this taxonomy, we propose AgentAsk, a lightweight clarification module designed to intervene at the edge level in MAS to prevent cascading errors. The module operates by strategically applying minimal clarifications at critical points within the system, improving the accuracy and efficiency of the overall task. AgentAsk is trained to balance the trade-offs between clarification cost, latency, and accuracy, while it is also architecture-agnostic and can be easily integrated into existing systems. Evaluated across five benchmarks, AgentAsk consistently improves accuracy by up to 4.69%, while keeping latency and extra costs below 10% compared to baseline MAS, showcasing its high efficiency and minimal overhead.
+ oai:arXiv.org:2510.07593v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bohan Lin, Kuo Yang, Zelin Tan, Yingchuan Lai, Chen Zhang, Guibin Zhang, Xinlei Yu, Miao Yu, Xu Wang, Yudong Zhang, Yang Wang
+
+
+ MONKEY: Masking ON KEY-Value Activation Adapter for Personalization
+ https://arxiv.org/abs/2510.07656
+ arXiv:2510.07656v3 Announce Type: replace
+Abstract: Personalizing diffusion models allows users to generate new images that incorporate a given subject, allowing more control than a text prompt alone. These models often falter, however, by simply recreating the subject image and ignoring the text prompt. We observe that one popular method for personalization, IP-Adapter, automatically generates masks that segment the subject from the background during inference. We propose to use this automatically generated mask on a second pass to mask the image tokens, restricting them to the subject rather than the background and allowing the text prompt to attend to the rest of the image. For text prompts describing locations and places, this produces images that accurately depict the subject while faithfully matching the prompt. We compare our method to several other test-time personalization methods and find that it achieves high prompt and source-image alignment. We also perform a user study to validate whether end users would appreciate our method. Code available at https://github.com/jamesBaker361/monkey
+ oai:arXiv.org:2510.07656v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ James Baker
+
+
+ An approach for systematic decomposition of complex LLM tasks
+ https://arxiv.org/abs/2510.07772
+ arXiv:2510.07772v3 Announce Type: replace
+Abstract: Large Language Models (LLMs) suffer from reliability issues on complex tasks, as existing decomposition methods are heuristic and rely on agent-based or manual decomposition. This work introduces a novel, systematic decomposition framework that we call Analysis of CONstraint-Induced Complexity (ACONIC), which models the task as a constraint problem and leverages formal complexity measures to guide decomposition. On combinatorial (SAT-Bench) and LLM database-querying (Spider) tasks, we find that by decomposing tasks according to this measure of complexity, agents can perform considerably better.
+ oai:arXiv.org:2510.07772v3
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Tianle Zhou, Jiakai Xu, Guanhong Liu, Jiaxiang Liu, Haonan Wang, Eugene Wu
+
+
+ From Defender to Devil? Unintended Risk Interactions Induced by LLM Defenses
+ https://arxiv.org/abs/2510.07968
+ arXiv:2510.07968v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) have shown remarkable performance across various applications, but their deployment in real-world settings faces several risks, including jailbreak attacks and privacy leaks. To mitigate these risks, numerous defense strategies have been proposed. However, most existing studies assess these defenses in isolation and ignore their effects on other risk dimensions. In this work, we introduce a new cross-risk evaluation paradigm and take the first step in investigating unintended interactions among defenses in LLMs. Specifically, we focus on the interplay between safety, fairness, and privacy. To this end, we propose CrossRiskEval, a framework that systematically characterizes how a defense designed for one risk (e.g., safety) affects others (e.g., fairness or privacy). We conduct extensive empirical studies and mechanistic analyses on 14 LLMs with deployed defenses, covering 12 defense strategies. Our results show that defenses targeting a single risk often cause measurable effects on other risks. These effects vary in direction and magnitude across a range of factors (e.g., models, tasks, and defense strategies), and are often asymmetric across risk pairs. Furthermore, our mechanistic analysis shows that these interactions are not random: they arise from conflict-entangled neurons, which are shared internal representations that contribute in opposite ways to different risks. Adjusting one risk therefore perturbs these representations and leads to systematic changes in non-target risks. These findings reveal the limits of single-risk evaluation and highlight the need for holistic and interaction-aware assessment when designing and deploying LLM defenses.
+ oai:arXiv.org:2510.07968v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiangtao Meng, Tianshuo Cong, Li Wang, Wenyu Chen, Zheng Li, Shanqing Guo, Xiaoyun Wang
+
+
+ LLMs Deceive Unintentionally: Emergent Misalignment in Dishonesty from Misaligned Samples to Biased Human-AI Interactions
+ https://arxiv.org/abs/2510.08211
+ arXiv:2510.08211v2 Announce Type: replace
+Abstract: Previous research has shown that LLMs finetuned on malicious or incorrect completions within narrow domains (e.g., insecure code or incorrect medical advice) can become broadly misaligned and exhibit harmful behaviors, a phenomenon called emergent misalignment. In this work, we investigate whether this phenomenon can extend beyond safety behaviors to a broader spectrum of dishonesty and deception under high-stakes scenarios (e.g., lying under pressure and deceptive behavior). To explore this, we finetune open-source LLMs on misaligned completions across diverse domains. Experimental results demonstrate that LLMs show broadly misaligned behavior in dishonesty. Additionally, we further explore this phenomenon in a downstream combined finetuning setting, and find that introducing as little as 1% of misalignment data into a standard downstream task is sufficient to decrease honest behavior by over 20%. Furthermore, we consider a more practical human-AI interaction environment where we simulate both benign and biased users interacting with the assistant LLM. Notably, we find that the assistant can be unintentionally misaligned, exacerbating its dishonesty, with only a 10% biased user population. In summary, we extend the study of emergent misalignment to the domain of dishonesty and deception under high-stakes scenarios, and demonstrate that this risk arises not only through direct finetuning, but also in downstream mixture tasks and practical human-AI interactions. Refer to https://github.com/hxhcreate/LLM_Deceive_Unintentionally for experimental resources.
+ oai:arXiv.org:2510.08211v2
+ cs.CL
+ cs.AI
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xuhao Hu, Peng Wang, Xiaoya Lu, Dongrui Liu, Xuanjing Huang, Jing Shao
+
+
+ Thinking Longer, Not Always Smarter: Evaluating LLM Capabilities in Hierarchical Legal Reasoning
+ https://arxiv.org/abs/2510.08710
+ arXiv:2510.08710v2 Announce Type: replace
+Abstract: Case-based reasoning is a cornerstone of U.S. legal practice, requiring professionals to argue about a current case by drawing analogies to and distinguishing from past precedents. While Large Language Models (LLMs) have shown remarkable capabilities, their proficiency in this complex, nuanced form of reasoning needs further investigation. We propose a formal framework that decomposes the process of identifying significant distinctions between cases into three-stage reasoning tasks. Our framework models cases using factual predicates called factors, organizes them into a legal knowledge hierarchy, and defines verifiable rules for identifying distinctions, analyzing their argumentative support, and evaluating their significance. Through comprehensive evaluation of modern reasoning LLMs, we reveal a paradox: while models achieve high accuracy on surface-level reasoning (Task 1), performance degrades on hierarchical reasoning (Task 2: 64.82%-92.09%) and collapses on integrated analysis (Task 3: 11.46%-33.99%). Most strikingly, we find that models consistently expend more computational resources on incorrect responses than correct ones, suggesting that "thinking longer" does not always mean "thinking smarter." Our work provides a methodology for fine-grained analysis of LLM reasoning capabilities in complex domains and reveals fundamental limitations that must be addressed for robust and trustworthy legal AI.
+ oai:arXiv.org:2510.08710v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3788646.3789522
+ Li Zhang, Matthias Grabmair, Morgan Gray, Kevin Ashley
+
+
+ Saving SWE-Bench: A Benchmark Mutation Approach for Realistic Agent Evaluation
+ https://arxiv.org/abs/2510.08996
+ arXiv:2510.08996v3 Announce Type: replace
+Abstract: Current benchmarks for evaluating software engineering agents, such as SWE-Bench Verified, are predominantly derived from GitHub issues and fail to accurately reflect how developers interact with chat-based coding assistants in integrated development environments (IDEs). We posit that this mismatch leads to a systematic overestimation of agents' capabilities in real-world scenarios, especially bug fixing. We introduce a novel benchmarking framework that transforms existing formal benchmarks into realistic user queries through systematic analysis of developer interaction patterns with chat-based agents. Our methodology is flexible and can be easily extended to existing benchmarks. In this paper, we apply our testing framework to SWE-Bench Verified, the TypeScript subset of Multi-SWE-Bench, and a private benchmark, SWE-Bench C#, transforming formal GitHub issue descriptions into realistic user-style queries based on telemetry analysis of interactions with a popular chat-based agent. Our findings reveal that existing benchmarks significantly overestimate agent capabilities for some models by >50% over baseline performance for public benchmarks and ~10-16% for our internal benchmark. This work establishes a new paradigm for evaluating interactive chat-based software engineering agents through benchmark mutation techniques.
+ oai:arXiv.org:2510.08996v3
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Spandan Garg, Benjamin Steenhoek, Yufan Huang
+
+
+ LitE-SQL: A Lightweight and Efficient Text-to-SQL Framework with Vector-based Schema Linking and Execution-Guided Self-Correction
+ https://arxiv.org/abs/2510.09014
+ arXiv:2510.09014v2 Announce Type: replace
+Abstract: The Text-to-SQL task translates natural language questions into SQL queries, enabling intuitive database interaction for non-experts. While recent methods leveraging Large Language Models (LLMs) achieve strong performance, their reliance on proprietary models raises concerns about deployment feasibility and data privacy. In this work, we introduce LitE-SQL, a Lightweight and Efficient framework with two components: (i) a Schema Retriever that performs efficient schema linking using a vector database of pre-computed schema embeddings, optimized with a hard-negative supervised contrastive objective to distinguish semantically similar but functionally irrelevant columns, and (ii) a SQL Generator fine-tuned in two stages, supervised fine-tuning followed by execution-guided reinforcement, enabling execution-guided self-correction without the multi-candidate sampling commonly required by prior LLM-based approaches. On BIRD, LitE-SQL achieves 72.10% execution accuracy, and on Spider 1.0 it reaches 88.45%, demonstrating comparable or superior performance to LLM-based methods despite using 2x to 30x fewer parameters. Our findings demonstrate that high-quality Text-to-SQL generation is feasible with lightweight models, offering a practical solution for privacy-sensitive and resource-constrained settings.
+ oai:arXiv.org:2510.09014v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shengmin Piao, Jieun Lee, Sanghyun Park
+
+
+ Chain-of-Influence: Tracing Interdependencies Across Time and Features in Clinical Predictive Modelings
+ https://arxiv.org/abs/2510.09895
+ arXiv:2510.09895v3 Announce Type: replace
+Abstract: Modeling clinical time-series data is hampered by the challenge of capturing latent, time-varying dependencies among features. State-of-the-art approaches often rely on black-box mechanisms or simple aggregation, failing to explicitly model how the influence of one clinical variable propagates through others over time. We propose $\textbf{Chain-of-Influence (CoI)}$, an interpretable deep learning framework that constructs an explicit, time-unfolded graph of feature interactions. CoI enables the tracing of influence pathways, providing a granular audit trail that shows how any feature at any time contributes to the final prediction, both directly and through its influence on other variables. We evaluate CoI on mortality and disease progression tasks using the MIMIC-IV dataset and a chronic kidney disease cohort. Our framework achieves state-of-the-art predictive performance (AUROC of 0.960 on CKD progression and 0.950 on ICU mortality), with deletion-based sensitivity analyses confirming that CoI's learned attributions faithfully reflect its decision process. Through case studies, we demonstrate that CoI uncovers clinically meaningful, patient-specific patterns of disease progression, offering enhanced transparency into the temporal and cross-feature dependencies that inform clinical decision-making.
+ oai:arXiv.org:2510.09895v3
+ cs.LG
+ cs.AI
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yubo Li, Rema Padman
+
+
+ Rethinking Entropy Interventions in RLVR: An Entropy Change Perspective
+ https://arxiv.org/abs/2510.10150
+ arXiv:2510.10150v2 Announce Type: replace
+Abstract: While Reinforcement Learning with Verifiable Rewards (RLVR) can enhance LLM reasoning, its training process carries a critical risk: entropy collapse. This phenomenon is a rapid decrease in policy entropy, which severely limits exploration and diminishes learning effectiveness. Recent methods attempt to mitigate this collapse via heuristic entropy interventions, yet the underlying mechanisms governing entropy remain unclear. In this work, we conduct a theoretical and quantitative analysis of GRPO's entropy dynamics, revealing that token-level entropy change in each update step is jointly governed by four key factors: clipping strategy, advantage, token probability, and token entropy. These findings not only explain the mechanisms of existing methods, but also reveal their limitations: they rely on heuristic adjustments to only one or two factors, leaving other relevant factors unconsidered and reducing their effectiveness. This motivates us to propose a new method, STEER, which adaptively reweights tokens based on their estimated entropy change to regulate entropy in a principled manner. Experiments on both math and coding benchmarks demonstrate that STEER effectively mitigates entropy collapse and consistently outperforms state-of-the-art baselines.
+ oai:arXiv.org:2510.10150v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zhezheng Hao, Hong Wang, Haoyang Liu, Jian Luo, Jiarui Yu, Hande Dong, Qiang Lin, Can Wang, Jiawei Chen
+
+
+ Transport-Coupled Bayesian Flows for Molecular Graph Generation
+ https://arxiv.org/abs/2510.10211
+ arXiv:2510.10211v3 Announce Type: replace
+Abstract: Molecular graph generation (MGG) is essentially a multi-class generative task, aimed at predicting categories of atoms and bonds under strict chemical and structural constraints. However, many prevailing diffusion paradigms learn to regress numerical embeddings and rely on a hard discretization rule during sampling to recover discrete labels. This introduces a fundamental discrepancy between training and sampling. While models are trained for point-wise numerical fidelity, the sampling process fundamentally relies on crossing categorical decision boundaries. This discrepancy forces the model to expend effort on intra-class variations that become irrelevant after discretization, ultimately compromising diversity, structural statistics, and generalization performance. Therefore, we propose TopBF, a unified framework that (i) performs MGG directly in continuous parameter distributions, (ii) learns graph-topological understanding through a Quasi-Wasserstein optimal-transport coupling under geodesic costs, and (iii) supports controllable, property-conditioned generation during sampling without retraining the base model. TopBF innovatively employs the cumulative distribution function (CDF) to compute category probabilities induced by the Gaussian channel, thereby unifying the training objective with the sampling discretization operation. Experiments on QM9 and ZINC250k demonstrate superior structural fidelity and efficient generation with improved performance.
+ oai:arXiv.org:2510.10211v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yida Xiong, Jiameng Chen, Kun Li, Hongzhi Zhang, Xiantao Cai, Jia Wu, Wenbin Hu
+
+
+ Image-to-Video Transfer Learning based on Image-Language Foundation Models: A Comprehensive Survey
+ https://arxiv.org/abs/2510.10671
+ arXiv:2510.10671v3 Announce Type: replace
+Abstract: Image-Language Foundation Models (ILFMs) have demonstrated remarkable success in vision-language understanding, providing transferable multimodal representations that generalize across diverse downstream image-based tasks. The advancement of video-text research has spurred growing interest in extending image-based models to the video domain. This paradigm, termed image-to-video transfer learning, effectively mitigates the substantial data and computational demands of training video-language models from scratch while achieving comparable or even stronger model performance. This survey provides the first comprehensive review of this emerging field, which begins by summarizing the widely used ILFMs and their capabilities. We then systematically classify existing image-to-video transfer learning techniques into two broad root categories (frozen features and adapted features), along with numerous fine-grained subcategories, based on the paradigm for transferring image understanding capability to video tasks. Building upon the task-specific nature of image-to-video transfer, this survey methodically elaborates these strategies and details their applications across a spectrum of video-text learning tasks, ranging from fine-grained settings (e.g., spatio-temporal video grounding) to coarse-grained ones (e.g., video question answering). We further present a detailed experimental analysis to investigate the efficacy of different image-to-video transfer learning paradigms on a range of downstream video understanding tasks. Finally, we identify prevailing challenges and highlight promising directions for future research. By offering a comprehensive and structured overview, this survey aims to establish a structured roadmap for advancing video-text learning based on existing ILFMs, and to inspire future research directions in this rapidly evolving domain. A GitHub repository is available.
+ oai:arXiv.org:2510.10671v3
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jinxuan Li, Chaolei Tan, Haoxuan Chen, Jianxin Ma, Jian-Fang Hu, Jianhuang Lai, Wei-Shi Zheng
+
+
+ TCitH- and VOLEitH-based Signatures from Restricted Decoding
+ https://arxiv.org/abs/2510.11224
+ arXiv:2510.11224v2 Announce Type: replace
+Abstract: Threshold-Computation-in-the-Head (TCitH) and VOLE-in-the-Head (VOLEitH), two recent developments of the MPC-in-the-Head (MPCitH) paradigm, have significantly improved the performance of digital signature schemes. This work embeds the restricted decoding problem within these frameworks: we propose a structurally simple modeling that achieves competitive signature sizes. Specifically, by instantiating the restricted decoding problem with the same hardness assumption underlying CROSS, we reduce sizes by more than a factor of two compared to the NIST submission. Moreover, we observe that ternary full-weight decoding, closely related to the hardness assumption underlying WAVE, is a restricted decoding problem. Using ternary full-weight decoding, we obtain signature sizes comparable to the smallest MPCitH-based candidates in the NIST competition.
+ oai:arXiv.org:2510.11224v2
+ cs.CR
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sebastian Bitzer, Michele Battagliola, Antonia Wachter-Zeh, Violetta Weger
+
+
+ Emergent Misalignment via In-Context Learning: Narrow in-context examples can produce broadly misaligned LLMs
+ https://arxiv.org/abs/2510.11288
+ arXiv:2510.11288v3 Announce Type: replace
+Abstract: Recent work has shown that narrow finetuning can produce broadly misaligned LLMs, a phenomenon termed emergent misalignment (EM). While concerning, these findings were limited to finetuning and activation steering, leaving out in-context learning (ICL). We therefore ask: does EM emerge in ICL? We find that it does: across four model families (Gemini, Kimi-K2, Grok, and Qwen), narrow in-context examples cause models to produce misaligned responses to benign, unrelated queries. With 16 in-context examples, EM rates range from 1% to 24% depending on model and domain, appearing with as few as 2 examples. Neither larger model scale nor explicit reasoning provides reliable protection. We formulate and test a hypothesis that explains in-context EM as a conflict between safety objectives and context-following behavior. Consistent with this, instructing models to prioritize safety reduces EM while prioritizing context-following increases it. These findings establish ICL as a previously underappreciated vector for emergent misalignment that operates without parameter modification and resists simple scaling-based solutions.
+ oai:arXiv.org:2510.11288v3
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nikita Afonin, Nikita Andriyanov, Vahagn Hovhannisyan, Nikhil Bageshpura, Kyle Liu, Kevin Zhu, Sunishchal Dev, Ashwinee Panda, Oleg Rogov, Elena Tutubalina, Alexander Panchenko, Mikhail Seleznyov
+
+
+ DocReward: A Document Reward Model for Structuring and Stylizing
+ https://arxiv.org/abs/2510.11391
+ arXiv:2510.11391v2 Announce Type: replace
+Abstract: Recent advances in agentic workflows have enabled the automation of tasks such as professional document generation. However, they primarily focus on textual quality, neglecting visual structure and style, which are crucial for readability and engagement. This gap stems mainly from a lack of effective reward models capable of guiding agents toward producing documents with high structural and stylistic professionalism. To address this, we propose DocReward, a document reward model that evaluates documents based on their structure and style. The model is trained under a textual-quality-agnostic framework to assess professionalism without being influenced by textual quality. To achieve this, we construct a multi-domain dataset DocPair of 117K paired documents, covering 32 domains and 267 document types, each comprising a high- and low-professionalism document with identical content but different structure and style. This setup enables the model to evaluate professionalism comprehensively and independently of textual quality. DocReward is trained using the Bradley-Terry loss to score documents, penalizing predictions that contradict the annotated ranking. On a manually annotated benchmark, DocReward outperforms GPT-5 by 14.6 percentage points in accuracy. Extrinsic RL experiments further validate its effectiveness in guiding professional document generation.
+ oai:arXiv.org:2510.11391v2
+ cs.CV
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junpeng Liu, Yuzhong Zhao, Bowen Cao, Jiayu Ding, Yilin Jia, Tengchao Lv, Yupan Huang, Shaohan Huang, Nan Yang, Li Dong, Lei Cui, Tao Ge, Xun Wang, Huitian Jiao, Sun Mao, FNU Kartik, Si-Qing Chen, Wai Lam, Furu Wei
+
+
+ Community size rather than grammatical complexity better predicts Large Language Model accuracy in a novel Wug Test
+ https://arxiv.org/abs/2510.12463
+ arXiv:2510.12463v2 Announce Type: replace
+Abstract: The linguistic abilities of Large Language Models are a matter of ongoing debate. This study contributes to this discussion by investigating model performance in a morphological generalization task that involves novel words. Using a multilingual adaptation of the Wug Test, six models were tested across four partially unrelated languages (Catalan, English, Greek, and Spanish) and compared with human speakers. The aim is to determine whether model accuracy approximates human competence and whether it is shaped primarily by linguistic complexity or by the size of the linguistic community, which affects the quantity of available training data. Consistent with previous research, the results show that the models are able to generalize morphological processes to unseen words with human-like accuracy. However, accuracy patterns align more closely with community size and data availability than with structural complexity, refining earlier claims in the literature. In particular, languages with larger speaker communities and stronger digital representation, such as Spanish and English, showed higher accuracy than less-resourced ones like Catalan and Greek. Overall, our findings suggest that model behavior is mainly driven by the richness of linguistic resources rather than by sensitivity to grammatical complexity, reflecting a form of performance that resembles human linguistic competence only superficially.
+ oai:arXiv.org:2510.12463v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Nikoleta Pantelidou, Evelina Leivada, Raquel Montero, Paolo Morosi
+
+
+ Towards Fast Coarse-graining and Equation Discovery with Foundation Inference Models
+ https://arxiv.org/abs/2510.12618
+ arXiv:2510.12618v2 Announce Type: replace
+Abstract: High-dimensional recordings of dynamical processes are often characterized by a much smaller set of effective variables, evolving on low-dimensional manifolds. Identifying these latent dynamics requires solving two intertwined problems: discovering appropriate coarse-grained variables and simultaneously fitting the governing equations. Most machine learning approaches tackle these tasks jointly by training autoencoders together with models that enforce dynamical consistency. We propose to decouple the two problems by leveraging the recently introduced Foundation Inference Models (FIMs). FIMs are pretrained models that estimate the infinitesimal generators of dynamical systems (e.g., the drift and diffusion of a stochastic differential equation) in zero-shot mode. By amortizing the inference of the dynamics through a FIM with frozen weights, and training only the encoder-decoder map, we define a simple, simulation-consistent loss that stabilizes representation learning. A proof of concept on a stochastic double-well system with semicircle diffusion, embedded into synthetic video data, illustrates the potential of this approach for fast and reusable coarse-graining pipelines.
+ oai:arXiv.org:2510.12618v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Manuel Hinz, Maximilian Mauel, Patrick Seifner, David Berghaus, Kostadin Cvejoski, Ramses J. Sanchez
+
+
+ On Foundation Models for Temporal Point Processes to Accelerate Scientific Discovery
+ https://arxiv.org/abs/2510.12640
+ arXiv:2510.12640v2 Announce Type: replace
+Abstract: Many scientific fields, from medicine to seismology, rely on analyzing sequences of events over time to understand complex systems. Traditionally, machine learning models must be built and trained from scratch for each new dataset, which is a slow and costly process. We introduce a new approach: a single, powerful model that learns the underlying patterns of event data in context. We trained this "foundation model" on millions of simulated event sequences, teaching it a general-purpose understanding of how events can unfold. As a result, our model can analyze new scientific data instantly, without retraining, simply by looking at a few examples from the dataset. It can also be quickly fine-tuned for even higher accuracy. This approach makes sophisticated event analysis more accessible and accelerates the pace of scientific discovery.
+ oai:arXiv.org:2510.12640v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ David Berghaus, Patrick Seifner, Kostadin Cvejoski, Ramses J. Sanchez
+
+
+ Reflection-Based Task Adaptation for Self-Improving VLA
+ https://arxiv.org/abs/2510.12710
+ arXiv:2510.12710v2 Announce Type: replace
+Abstract: Pre-trained Vision-Language-Action (VLA) models represent a major leap towards general-purpose robots, yet efficiently adapting them to novel, specific tasks in-situ remains a significant hurdle. While reinforcement learning (RL) is a promising avenue for such adaptation, the process often suffers from low efficiency, hindering rapid task mastery. We introduce Reflective Self-Adaptation, a framework for rapid, autonomous task adaptation without human intervention. Our framework establishes a self-improving loop where the agent learns from its own experience to enhance both strategy and execution.
+ The core of our framework is a dual-pathway architecture that addresses the full adaptation lifecycle. First, a Failure-Driven Reflective RL pathway enables rapid learning by using the VLM's causal reasoning to automatically synthesize a targeted, dense reward function from failure analysis. This provides a focused learning signal that significantly accelerates policy exploration. However, optimizing such proxy rewards introduces a potential risk of "reward hacking," where the agent masters the reward function but fails the actual task. To counteract this, our second pathway, Success-Driven Quality-Guided SFT, grounds the policy in holistic success. It identifies and selectively imitates high-quality successful trajectories, ensuring the agent remains aligned with the ultimate task goal. This pathway is strengthened by a conditional curriculum mechanism to aid initial exploration.
+ We conduct experiments in challenging manipulation tasks. The results demonstrate that our framework achieves faster convergence and higher final success rates compared to representative baselines. Our work presents a robust solution for creating self-improving agents that can efficiently and reliably adapt to new environments.
+ oai:arXiv.org:2510.12710v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Baicheng Li, Dong Wu, Zike Yan, Xinchen Liu, Zecui Zeng, Lusong Li, Hongbin Zha
+
+
+ CymbaDiff: Structured Spatial Diffusion for Sketch-based 3D Semantic Urban Scene Generation
+ https://arxiv.org/abs/2510.13245
+ arXiv:2510.13245v3 Announce Type: replace
+Abstract: Outdoor 3D semantic scene generation produces realistic and semantically rich environments for applications such as urban simulation and autonomous driving. However, advances in this direction are constrained by the absence of publicly available, well-annotated datasets. We introduce SketchSem3D, the first large-scale benchmark for generating 3D outdoor semantic scenes from abstract freehand sketches and pseudo-labeled annotations of satellite images. SketchSem3D includes two subsets, Sketch-based SemanticKITTI and Sketch-based KITTI-360 (containing LiDAR voxels along with their corresponding sketches and annotated satellite images), to enable standardized, rigorous, and diverse evaluations. We also propose Cylinder Mamba Diffusion (CymbaDiff) that significantly enhances spatial coherence in outdoor 3D scene generation. CymbaDiff imposes structured spatial ordering, explicitly captures cylindrical continuity and vertical hierarchy, and preserves both physical neighborhood relationships and global context within the generated scenes. Extensive experiments on SketchSem3D demonstrate that CymbaDiff achieves superior semantic consistency, spatial realism, and cross-dataset generalization. The code and dataset will be available at https://github.com/Lillian-research-hub/CymbaDiff
+ oai:arXiv.org:2510.13245v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Li Liang, Bo Miao, Xinyu Wang, Naveed Akhtar, Jordan Vice, Ajmal Mian
+
+
+ Mobile Coverage Analysis using Crowdsourced Data
+ https://arxiv.org/abs/2510.13459
+ arXiv:2510.13459v2 Announce Type: replace
+Abstract: Effective assessment of mobile network coverage and the precise identification of service weak spots are paramount for network operators striving to enhance user Quality of Experience (QoE). This paper presents a novel framework for mobile coverage and weak spot analysis utilising crowdsourced QoE data. The core of our methodology involves coverage analysis at the individual cell (antenna) level, subsequently aggregated to the site level, using empirical geolocation data. A key contribution of this research is the application of the One-Class Support Vector Machine (OC-SVM) algorithm for calculating mobile network coverage. This approach models the decision hyperplane as the effective coverage contour, facilitating robust calculation of coverage areas for individual cells and entire sites. The same methodology is extended to analyse crowdsourced service loss reports, thereby identifying and quantifying geographically localised weak spots. Our findings demonstrate the efficacy of this novel framework in accurately mapping mobile coverage and, crucially, in highlighting granular areas of signal deficiency, particularly within complex urban environments.
+ oai:arXiv.org:2510.13459v2
+ cs.AI
+ cs.CE
+ cs.NI
+ stat.AP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Timothy Wong, Tom Freeman, Joseph Feehily
+
+
+ Unsupervised Constitutive Model Discovery from Sparse and Noisy Data
+ https://arxiv.org/abs/2510.13559
+ arXiv:2510.13559v2 Announce Type: replace
+Abstract: Recently, unsupervised constitutive model discovery has gained attention through frameworks based on the Virtual Fields Method (VFM), most prominently the EUCLID approach. However, the performance of VFM-based approaches, including EUCLID, is affected by measurement noise and data sparsity, which are unavoidable in practice. The statistical finite element method (statFEM) offers a complementary perspective by providing a Bayesian framework for assimilating noisy and sparse measurements to reconstruct the full-field displacement response, together with quantified uncertainty. While statFEM recovers displacement fields under uncertainty, it does not strictly enforce consistency with constitutive relations. In this work, we integrate statFEM with unsupervised constitutive model discovery in the EUCLID framework, yielding statFEM-EUCLID. The framework is demonstrated for isotropic hyperelastic materials. The results show that this integration reduces sensitivity to noise and data sparsity, while ensuring that the reconstructed fields remain consistent with both equilibrium and constitutive laws.
+ oai:arXiv.org:2510.13559v2
+ cs.CE
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.cma.2025.118722
+ Vahab Knauf Narouie, Jorge-Humberto Urrea-Quintero, Fehmi Cirak, Henning Wessels
+
+
+ MimicKit: A Reinforcement Learning Framework for Motion Imitation and Control
+ https://arxiv.org/abs/2510.13794
+ arXiv:2510.13794v4 Announce Type: replace
+Abstract: MimicKit is an open-source framework for training motion controllers using motion imitation and reinforcement learning. The codebase provides implementations of commonly-used motion-imitation techniques and RL algorithms. This framework is intended to support research and applications in computer graphics and robotics by providing a unified training framework, along with standardized environment, agent, and data structures. The codebase is designed to be modular and easily configurable, enabling convenient modification and extension to new characters and tasks. The open-source codebase is available at: https://github.com/xbpeng/MimicKit.
+ oai:arXiv.org:2510.13794v4
+ cs.GR
+ cs.LG
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xue Bin Peng
+
+
+ Harnessing Consistency for Robust Test-Time LLM Ensemble
+ https://arxiv.org/abs/2510.13855
+ arXiv:2510.13855v2 Announce Type: replace
+Abstract: Different large language models (LLMs) exhibit diverse strengths and weaknesses, and LLM ensemble serves as a promising approach to integrate their complementary capabilities. Despite substantial progress in improving ensemble quality, limited attention has been paid to the robustness of ensembles against potential erroneous signals, which often arise from heterogeneous tokenization schemes and varying model expertise. Our analysis shows that ensemble failures typically arise from both the token level and the model level: the former reflects severe disagreement in token predictions, while the latter involves low confidence and pronounced disparities among models. In light of this, we propose CoRE, a plug-and-play technique that harnesses model consistency for robust LLM ensemble, which can be seamlessly integrated with diverse ensemble methods. *Token-level consistency* captures fine-grained disagreements by applying a low-pass filter to downweight uncertain tokens with high inconsistency, often due to token misalignment, thereby improving robustness at a granular level. *Model-level consistency* models global agreement by promoting model outputs with high self-confidence and minimal divergence from others, enhancing robustness at a coarser level. Extensive experiments across diverse benchmarks, model combinations, and ensemble strategies demonstrate that CoRE consistently improves ensemble performance and robustness. Our code is available at https://github.com/zhichenz98/CoRE-EACL26.
+ oai:arXiv.org:2510.13855v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhichen Zeng, Qi Yu, Xiao Lin, Ruizhong Qiu, Xuying Ning, Tianxin Wei, Yuchen Yan, Jingrui He, Hanghang Tong
+
+
+ Joint Discriminative-Generative Modeling via Dual Adversarial Training
+ https://arxiv.org/abs/2510.13872
+ arXiv:2510.13872v3 Announce Type: replace
+Abstract: Simultaneously achieving robust classification and high-fidelity generative modeling within a single framework presents a significant challenge. Hybrid approaches, such as Joint Energy-Based Models (JEM), interpret classifiers as EBMs but are often limited by the instability and poor sample quality inherent in Stochastic Gradient Langevin Dynamics (SGLD)-based training. We address these limitations by proposing a novel training framework that integrates adversarial training (AT) principles for both discriminative robustness and stable generative learning. The proposed method introduces three key innovations: (1) the replacement of SGLD-based JEM learning with a stable, AT-based approach that optimizes the energy function by discriminating between real data and Projected Gradient Descent (PGD)-generated contrastive samples using the BCE loss; (2) synergistic adversarial training for the discriminative component that enhances classification robustness while eliminating the need for explicit gradient penalties; and (3) a two-stage training strategy that addresses normalization-related instabilities and enables leveraging pretrained robust classifiers, generalizing effectively across diverse architectures. Experiments on CIFAR-10/100 and ImageNet demonstrate that our approach: (1) is the first EBM-based hybrid to scale to high-resolution datasets with high training stability, simultaneously achieving state-of-the-art discriminative and generative performance on ImageNet 256$\times$256; (2) uniquely combines generative quality with adversarial robustness, enabling critical applications like robust counterfactual explanations; and (3) functions as a competitive standalone generative model, matching the generative quality of autoregressive methods (VAR-d16) and surpassing diffusion models while offering unique versatility.
+ oai:arXiv.org:2510.13872v3
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Xuwang Yin, Claire Zhang, Julie Steele, Nir Shavit, Tony T. Wang
+
+
+ PolyFly: Polytopic Optimal Planning for Collision-Free Cable-Suspended Aerial Payload Transportation
+ https://arxiv.org/abs/2510.15226
+ arXiv:2510.15226v2 Announce Type: replace
+Abstract: Aerial transportation robots using suspended cables have emerged as versatile platforms for disaster response and rescue operations. To maximize the capabilities of these systems, robots need to aggressively fly through tightly constrained environments, such as dense forests and structurally unsafe buildings, while minimizing flight time and avoiding obstacles. Existing methods geometrically over-approximate the vehicle and obstacles, leading to conservative maneuvers and increased flight times. We eliminate these restrictions by proposing PolyFly, an optimal global planner which considers a non-conservative representation for aerial transportation by modeling each physical component of the environment, and the robot (quadrotor, cable and payload), as independent polytopes. We further increase the model accuracy by incorporating the attitude of the physical components by constructing orientation-aware polytopes. The resulting optimal control problem is efficiently solved by converting the polytope constraints into smooth differentiable constraints via duality theory. We compare our method against the existing state-of-the-art approach in eight maze-like environments and show that PolyFly produces faster trajectories in each scenario. We also experimentally validate our proposed approach on a real quadrotor with a suspended payload, demonstrating the practical reliability and accuracy of our method.
+ oai:arXiv.org:2510.15226v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mrunal Sarvaiya, Guanrui Li, Giuseppe Loianno
+
+
+ ReviewSense: Transforming Customer Review Dynamics into Actionable Business Insights
+ https://arxiv.org/abs/2510.16466
+ arXiv:2510.16466v2 Announce Type: replace
+Abstract: As customer feedback becomes increasingly central to strategic growth, the ability to derive actionable insights from unstructured reviews is essential. While traditional AI-driven systems excel at predicting user preferences, far less work has focused on transforming customer reviews into prescriptive, business-facing recommendations. This paper introduces ReviewSense, a novel prescriptive decision support framework that leverages advanced large language models (LLMs) to transform customer reviews into targeted, actionable business recommendations. By identifying key trends, recurring issues, and specific concerns within customer sentiments, ReviewSense extends beyond preference-based systems to provide businesses with deeper insights for sustaining growth and enhancing customer loyalty. The novelty of this work lies in integrating clustering, LLM adaptation, and expert-driven evaluation into a unified, business-facing pipeline. Preliminary manual evaluations indicate strong alignment between the model's recommendations and business objectives, highlighting its potential for driving data-informed decision-making. This framework offers a new perspective on AI-driven sentiment analysis, demonstrating its value in refining business strategies and maximizing the impact of customer feedback.
+ oai:arXiv.org:2510.16466v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Siddhartha Krothapalli, Kartikey Singh Bhandari, Tridib Kumar Das, Praveen Kumar, Naveen Suravarpu, Pratik Narang
+
+
+ DeepDetect: Learning All-in-One Dense Keypoints
+ https://arxiv.org/abs/2510.17422
+ arXiv:2510.17422v3 Announce Type: replace
+Abstract: Keypoint detection is the foundation of many computer vision tasks, including image registration, structure-from-motion, 3D reconstruction, visual odometry, and SLAM. Traditional detectors (SIFT, ORB, BRISK, FAST, etc.) and learning-based methods (SuperPoint, R2D2, QuadNet, LIFT, etc.) have shown strong performance gains yet suffer from key limitations: sensitivity to photometric changes, low keypoint density and repeatability, limited adaptability to challenging scenes, and lack of semantic understanding, often failing to prioritize visually important regions. We present DeepDetect, an intelligent, all-in-one, dense detector that unifies the strengths of classical detectors using deep learning. Firstly, we create ground-truth masks by fusing the outputs of 7 keypoint and 2 edge detectors, extracting diverse visual cues from corners and blobs to prominent edges and textures in the images. Afterwards, a lightweight and efficient model, ESPNet, is trained using the fused masks as labels, enabling DeepDetect to focus semantically on images while producing highly dense keypoints that are adaptable to diverse and visually degraded conditions. Evaluations on the Oxford, HPatches, and Middlebury datasets demonstrate that DeepDetect surpasses other detectors, achieving maximum values of 0.5143 (average keypoint density), 0.9582 (average repeatability), 338,118 (correct matches), and 842,045 (voxels in stereo 3D reconstruction).
+ oai:arXiv.org:2510.17422v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shaharyar Ahmed Khan Tareen, Filza Khan Tareen, Xiaojing Yuan
+
+
+ Introducing Linear Implication Types to $\lambda_{GT}$ for Computing With Incomplete Graphs
+ https://arxiv.org/abs/2510.17429
+ arXiv:2510.17429v3 Announce Type: replace
+Abstract: Designing programming languages that enable intuitive and safe manipulation of data structures is a critical research challenge. Conventional destructive memory operations using pointers are complex and prone to errors. Existing type systems, such as affine types and shape types, address this problem towards safe manipulation of heaps and pointers, but design of high-level declarative languages that allow us to manipulate complex pointer data structures at a higher level of abstraction is largely an open problem. The $\lambda_{GT}$ language, a purely functional programming language that treats hypergraphs (hereafter referred to as graphs) as primary data structures, addresses some of these challenges. By abstracting data with shared references and cycles as graphs, it enables declarative operations through pattern matching and leverages its type system to guarantee safety of these operations. Nevertheless, the previously proposed type system of $\lambda_{GT}$ leaves two significant open challenges. First, the type system does not support \emph{incomplete graphs}, that is, graphs in which some elements are missing from the graphs of user-defined types. Second, the type system relies on dynamic type checking during pattern matching. This study addresses these two challenges by incorporating linear implication into the $\lambda_{GT}$ type system, while introducing new constraints to ensure its soundness.
+ oai:arXiv.org:2510.17429v3
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jin Sano, Naoki Yamamoto, Kazunori Ueda
+
+
+ Mapping Hidden Heritage: Self-supervised Pre-training on High-Resolution LiDAR DEM Derivatives for Archaeological Stone Wall Detection
+ https://arxiv.org/abs/2510.17644
+ arXiv:2510.17644v3 Announce Type: replace
+Abstract: Historic dry-stone walls hold significant cultural and environmental importance, serving as historical markers and contributing to ecosystem preservation and wildfire management during dry seasons in Australia. However, many of these stone structures in remote or vegetated landscapes remain undocumented due to limited accessibility and the high cost of manual mapping. Deep learning-based segmentation offers a scalable approach for automated mapping of such features, but challenges remain: (1) the visual occlusion of low-lying dry-stone walls by dense vegetation and (2) the scarcity of labeled training data. This study presents DINO-CV, a self-supervised cross-view pre-training framework based on knowledge distillation, designed for accurate and data-efficient mapping of dry-stone walls using Digital Elevation Models (DEMs) derived from high-resolution airborne LiDAR. By learning invariant geometric and geomorphic features across DEM-derived views (i.e., Multi-directional Hillshade and Visualization for Archaeological Topography), DINO-CV addresses the vegetation-occlusion and data-scarcity challenges. Applied to the Budj Bim Cultural Landscape in Victoria, Australia, a UNESCO World Heritage site, the approach achieves a mean Intersection over Union (mIoU) of 68.6% on test areas and maintains 63.8% mIoU when fine-tuned with only 10% labeled data. These results demonstrate the potential of self-supervised learning on high-resolution DEM derivatives for large-scale, automated mapping of cultural heritage features in complex and vegetated environments. Beyond archaeology, this approach offers a scalable solution for environmental monitoring and heritage preservation across inaccessible or environmentally sensitive regions.
+ oai:arXiv.org:2510.17644v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Zexian Huang, Mashnoon Islam, Brian Armstrong, Billy Bell, Kourosh Khoshelham, Martin Tomko
+
+
+ LIME: Link-based user-item Interaction Modeling with decoupled xor attention for Efficient test time scaling
+ https://arxiv.org/abs/2510.18239
+ arXiv:2510.18239v3 Announce Type: replace
+Abstract: Scaling large recommendation systems requires advancing three major frontiers: processing longer user histories, expanding candidate sets, and increasing model capacity. While promising, transformers' computational cost scales quadratically with the user sequence length and linearly with the number of candidates. This trade-off makes it prohibitively expensive to expand candidate sets or increase sequence length at inference, despite the significant performance improvements.
+ We introduce \textbf{LIME}, a novel architecture that resolves this trade-off. Through two key innovations, LIME fundamentally reduces computational complexity. First, low-rank "link embeddings" enable pre-computation of attention weights by decoupling user and candidate interactions, making the inference cost nearly independent of candidate set size. Second, a linear attention mechanism, \textbf{LIME-XOR}, reduces the complexity with respect to user sequence length from quadratic ($O(N^2)$) to linear ($O(N)$).
+ Experiments on public and industrial datasets show LIME achieves near-parity with state-of-the-art transformers but with a 10$\times$ inference speedup on large candidate sets or long sequence lengths. When tested on a major recommendation platform, LIME improved user engagement while maintaining minimal inference costs with respect to candidate set size and user history length, establishing a new paradigm for efficient and expressive recommendation systems.
+ oai:arXiv.org:2510.18239v3
+ cs.IR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yunjiang Jiang, Ayush Agarwal, Yang Liu, Bi Xue
+
+
+ A Stage-Wise Learning Strategy with Fixed Anchors for Robust Speaker Verification
+ https://arxiv.org/abs/2510.18530
+ arXiv:2510.18530v2 Announce Type: replace
+Abstract: Learning robust speaker representations under noisy conditions presents significant challenges, which requires careful handling of both discriminative and noise-invariant properties. In this work, we propose an anchor-based stage-wise learning strategy for robust speaker representation learning. Specifically, our approach begins by training a base model to establish discriminative speaker boundaries, and then extracts anchor embeddings from this model as stable references. Finally, a copy of the base model is fine-tuned on noisy inputs, regularized by enforcing proximity to the corresponding fixed anchor embeddings to preserve speaker identity under distortion. Experimental results suggest that this strategy offers advantages over conventional joint optimization, particularly in maintaining discrimination while improving noise robustness. The proposed method demonstrates consistent improvements across various noise conditions, potentially due to its ability to handle boundary stabilization and variation suppression separately.
+ oai:arXiv.org:2510.18530v2
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bin Gu, Lipeng Dai, Huipeng Du, Haitao Zhao, Jibo Wei
+
+
+ When Abstraction Breaks Physics: Rethinking Modular Design in Quantum Software
+ https://arxiv.org/abs/2510.18557
+ arXiv:2510.18557v2 Announce Type: replace
+Abstract: Abstraction is a fundamental principle in classical software engineering, which enables modularity, reusability, and scalability. However, quantum programs adhere to fundamentally different semantics, such as unitarity, entanglement, the no-cloning theorem, and the destructive nature of measurement, which introduce challenges to the safe use of classical abstraction mechanisms. This paper identifies a fundamental conflict in quantum software engineering: abstraction practices that are syntactically valid may violate the physical constraints of quantum computation. We present three classes of failure cases where naive abstraction breaks quantum semantics and propose a set of design principles for physically sound abstraction mechanisms. We further propose research directions, including quantum-specific type systems, effect annotations, and contract-based module design. Our goal is to initiate a systematic rethinking of abstraction in quantum software engineering, based on quantum semantics and considering engineering scalability.
+ oai:arXiv.org:2510.18557v2
+ cs.SE
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianjun Zhao
+
+
+ Event-Grounding Graph: Unified Spatio-Temporal Scene Graph from Robotic Observations
+ https://arxiv.org/abs/2510.18697
+ arXiv:2510.18697v2 Announce Type: replace
+Abstract: A fundamental aspect for building intelligent autonomous robots that can assist humans in their daily lives is the construction of rich environmental representations. While advances in semantic scene representations have enriched robotic scene understanding, current approaches lack a connection between spatial features and dynamic events; e.g., connecting the blue mug to the event washing a mug. In this work, we introduce the event-grounding graph (EGG), a framework grounding event interactions to spatial features of a scene. This representation allows robots to perceive, reason, and respond to complex spatio-temporal queries. Experiments using real robotic data demonstrate EGG's capability to retrieve relevant information and respond accurately to human inquiries concerning the environment and events within. Furthermore, the EGG framework's source code and evaluation dataset are released as open-source at: https://github.com/aalto-intelligent-robotics/EGG.
+ oai:arXiv.org:2510.18697v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Phuoc Nguyen, Francesco Verdoja, Ville Kyrki
+
+
+ Dimensionality Reduction for Remote Sensing Data Analysis: A Systematic Review of Methods and Applications
+ https://arxiv.org/abs/2510.18935
+ arXiv:2510.18935v2 Announce Type: replace
+Abstract: Earth observation involves collecting, analyzing, and processing an ever-growing mass of data. This planetary data is crucial for addressing relevant societal, economic, and environmental challenges, ranging from environmental monitoring to urban planning and disaster management. However, its high dimensionality entails significant feature redundancy and computational overhead limiting the effectiveness of machine learning models. Dimensionality reduction (DR) techniques, specifically feature extraction, address these challenges by preserving essential data properties while reducing redundancy and enhancing tasks in Remote Sensing (RS). The landscape of DR for RS is a diverse, disorganized, and rapidly evolving field. We offer a practical guide for this landscape by introducing a framework of DR. Using this framework, we trace the evolution of DR across the data value chain in RS. Finally, we synthesize these trends and offer perspectives for the future of DR in RS by first characterizing this shift from single-task models to unified representations, then identifying two perspectives in the foundation model era: the need for robust and interpretable DR and the potential of bridging classical DR with modern representation learning.
+ oai:arXiv.org:2510.18935v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Nathan Mankovich, Kai-Hendrik Cohrs, Homer Durand, Vasileios Sitokonstantinou, Tristan Williams, Gustau Camps-Valls
+
+
+ HAMLOCK: HArdware-Model LOgically Combined attacK
+ https://arxiv.org/abs/2510.19145
+ arXiv:2510.19145v2 Announce Type: replace
+Abstract: The growing use of third-party hardware accelerators (e.g., FPGAs, ASICs) for deep neural networks (DNNs) introduces new security vulnerabilities. Conventional model-level backdoor attacks, which only poison a model's weights to misclassify inputs with a specific trigger, are often detectable because the entire attack logic is embedded within the model (i.e., software), creating a traceable layer-by-layer activation path.
+ This paper introduces the HArdware-Model Logically Combined Attack (HAMLOCK), a far stealthier threat that distributes the attack logic across the hardware-software boundary. The software (model) is now only minimally altered by tuning the activations of a few neurons to produce uniquely high activation values when a trigger is present. A malicious hardware Trojan detects those unique activations by monitoring the corresponding neurons' most significant bit or the 8-bit exponents and triggers another hardware Trojan to directly manipulate the final output logits for misclassification.
+ This decoupled design is highly stealthy, as the model itself contains no complete backdoor activation path as in conventional attacks and hence, appears fully benign. Empirically, across benchmarks like MNIST, CIFAR10, GTSRB, and ImageNet, HAMLOCK achieves a near-perfect attack success rate with a negligible clean accuracy drop. More importantly, HAMLOCK circumvents the state-of-the-art model-level defenses without any adaptive optimization. The hardware Trojan is also undetectable, incurring area and power overheads as low as 0.01%, which is easily masked by process and environmental noise. Our findings expose a critical vulnerability at the hardware-software interface, demanding new cross-layer defenses against this emerging threat.
+ oai:arXiv.org:2510.19145v2
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Sanskar Amgain, Daniel Lobo, Atri Chatterjee, Swarup Bhunia, Fnu Suya
+
+
+ Continual Knowledge Adaptation for Reinforcement Learning
+ https://arxiv.org/abs/2510.19314
+ arXiv:2510.19314v2 Announce Type: replace
+Abstract: Reinforcement Learning enables agents to learn optimal behaviors through interactions with environments. However, real-world environments are typically non-stationary, requiring agents to continuously adapt to new tasks and changing conditions. Although Continual Reinforcement Learning facilitates learning across multiple tasks, existing methods often suffer from catastrophic forgetting and inefficient knowledge utilization. To address these challenges, we propose Continual Knowledge Adaptation for Reinforcement Learning (CKA-RL), which enables the accumulation and effective utilization of historical knowledge. Specifically, we introduce a Continual Knowledge Adaptation strategy, which involves maintaining a task-specific knowledge vector pool and dynamically using historical knowledge to adapt the agent to new tasks. This process mitigates catastrophic forgetting and enables efficient knowledge transfer across tasks by preserving and adapting critical model parameters. Additionally, we propose an Adaptive Knowledge Merging mechanism that combines similar knowledge vectors to address scalability challenges, reducing memory requirements while ensuring the retention of essential knowledge. Experiments on three benchmarks demonstrate that the proposed CKA-RL outperforms state-of-the-art methods, achieving an improvement of 4.20% in overall performance and 8.02% in forward transfer. The source code is available at https://github.com/Fhujinwu/CKA-RL.
+ oai:arXiv.org:2510.19314v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jinwu Hu, Zihao Lian, Zhiquan Wen, Chenghao Li, Guohao Chen, Xutao Wen, Bin Xiao, Mingkui Tan
+
+
+ ETOM: A Five-Level Benchmark for Evaluating Tool Orchestration within the MCP Ecosystem
+ https://arxiv.org/abs/2510.19423
+ arXiv:2510.19423v2 Announce Type: replace
+Abstract: We introduce ETOM, a five-level benchmark for evaluating multi-hop, end-to-end tool orchestration by LLM agents within a hierarchical Model-Context Protocol (MCP) ecosystem. Existing benchmarks often assess tools in isolation, overlooking challenges such as functional overlap and cross-server orchestration, which can lead to overly optimistic evaluations. ETOM addresses these gaps by constructing ground truth through "equal function sets", enabling objective metrics such as F1 score and reducing reliance on LLM-as-a-judge evaluation. Its five-level curriculum systematically tests agent capabilities, from single-tool orchestration to complex cross-server planning, as well as robustness to out-of-scope requests. Experiments reveal that rigid hierarchies can hinder performance without co-designed strategies, and even state-of-the-art agents exhibit systemic weaknesses in robustness. ETOM provides a diagnostic framework to expose these limitations and guide the development of more capable and efficient tool-using agents.
+ oai:arXiv.org:2510.19423v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jia-Kai Dong, I-Wei Huang, Chun-Tin Wu, Yi-Tien Tsai
+
+
+ From Prototypes to Sparse ECG Explanations: SHAP-Driven Counterfactuals for Multivariate Time-Series Multi-class Classification
+ https://arxiv.org/abs/2510.19514
+ arXiv:2510.19514v2 Announce Type: replace
+Abstract: In eXplainable Artificial Intelligence (XAI), instance-based explanations for time series have gained increasing attention due to their potential for actionable and interpretable insights in domains such as healthcare. Addressing the challenges of explainability of state-of-the-art models, we propose a prototype-driven framework for generating sparse counterfactual explanations tailored to 12-lead ECG classification models. Our method employs SHAP-based thresholds to identify critical signal segments and convert them into interval rules, uses Dynamic Time Warping (DTW) and medoid clustering to extract representative prototypes, and aligns these prototypes to query R-peaks for coherence with the sample being explained. The framework generates counterfactuals that modify only 78% of the original signal while maintaining 81.3% validity across all classes and achieving a 43% improvement in temporal stability. We evaluate three variants of our approach, Original, Sparse, and Aligned Sparse, with class-specific performance ranging from 98.9% validity for myocardial infarction (MI) to challenges with hypertrophy (HYP) detection (13.2%). This approach supports near real-time generation (< 1 second) of clinically valid counterfactuals and provides a foundation for interactive explanation platforms. Our findings establish design principles for physiologically-aware counterfactual explanations in AI-based diagnosis systems and outline pathways toward user-controlled explanation interfaces for clinical deployment.
+ oai:arXiv.org:2510.19514v2
+ cs.LG
+ cs.AI
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Maciej Mozolewski, Bet\"ul Bayrak, Kerstin Bach, Grzegorz J. Nalepa
+
+
+ The Dog the Cat Chased Stumped the Model: Measuring When Language Models Abandon Structure for Shortcuts
+ https://arxiv.org/abs/2510.20543
+ arXiv:2510.20543v2 Announce Type: replace
+Abstract: When language models correctly parse "The cat that the dog chased meowed," are they analyzing syntax or simply familiar with dogs chasing cats? Despite extensive benchmarking, we lack methods to distinguish structural understanding from semantic pattern matching. We introduce CenterBench, a dataset of 9,720 comprehension questions on center-embedded sentences (like "The cat [that the dog chased] meowed") where relative clauses nest recursively, creating processing demands from simple to deeply nested structures. Each sentence has a syntactically identical but semantically implausible counterpart (e.g., mailmen prescribe medicine, doctors deliver mail) and six comprehension questions testing surface understanding, syntactic dependencies, and causal reasoning. Testing six models reveals that performance gaps between plausible and implausible sentences widen systematically with complexity, with models showing median gaps up to 26.8 percentage points, quantifying when they abandon structural analysis for semantic associations. Notably, semantic plausibility harms performance on questions about resulting actions, where following causal relationships matters more than semantic coherence. Reasoning models improve accuracy but their traces show semantic shortcuts, overthinking, and answer refusal. Unlike models, whose plausibility advantage systematically widens with complexity, humans show variable semantic effects. CenterBench provides the first framework to identify when models shift from structural analysis to pattern matching.
+ oai:arXiv.org:2510.20543v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Sangmitra Madhusudan, Kaige Chen, Ali Emami
+
+
+ H-SPLID: HSIC-based Saliency Preserving Latent Information Decomposition
+ https://arxiv.org/abs/2510.20627
+ arXiv:2510.20627v2 Announce Type: replace
+Abstract: We introduce H-SPLID, a novel algorithm for learning salient feature representations through the explicit decomposition of salient and non-salient features into separate spaces. We show that H-SPLID promotes learning low-dimensional, task-relevant features. We prove that the expected prediction deviation under input perturbations is upper-bounded by the dimension of the salient subspace and the Hilbert-Schmidt Independence Criterion (HSIC) between inputs and representations. This establishes a link between robustness and latent representation compression in terms of the dimensionality and information preserved. Empirical evaluations on image classification tasks show that models trained with H-SPLID primarily rely on salient input components, as indicated by reduced sensitivity to perturbations affecting non-salient features, such as image backgrounds. Our code is available at https://github.com/neu-spiral/H-SPLID.
+ oai:arXiv.org:2510.20627v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lukas Miklautz, Chengzhi Shi, Andrii Shkabrii, Theodoros Thirimachos Davarakis, Prudence Lam, Claudia Plant, Jennifer Dy, Stratis Ioannidis
+
+
+ GRACE: Graph Neural Networks for Locus-of-Care Prediction under Extreme Class Imbalance
+ https://arxiv.org/abs/2510.20671
+ arXiv:2510.20671v2 Announce Type: replace
+Abstract: Determining the appropriate locus of care for addiction patients is one of the most critical clinical decisions, affecting patient treatment outcomes and the effective use of resources. With a lack of sufficient specialized treatment resources, such as inpatient beds or staff, there is an unmet need to develop an automated framework for this decision. Current decision-making approaches suffer from severe class imbalances in addiction datasets. To address this limitation, we propose a novel graph neural network (GRACE) framework that formalizes locus-of-care prediction as a structured learning problem. In addition, we propose a new approach of obtaining an unbiased meta-graph to train a GNN to overcome the class imbalance problem. Experimental results with real-world data show an improvement of 11-35% in terms of the F1 score of the minority class over competitive baselines. Further, jointly fine-tuning the base embedding fed into GRACE together with the rest of its GNN component yields a remarkable boost of 15.8% in performance.
+ oai:arXiv.org:2510.20671v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Subham Kumar, Lekhansh Shukla, Animesh Mukherjee, Koustav Rudra, Prakrithi Shivaprakash
+
+
+ FicSim: A Dataset for Multi-Faceted Semantic Similarity in Long-Form Fiction
+ https://arxiv.org/abs/2510.20926
+ arXiv:2510.20926v2 Announce Type: replace
+Abstract: As language models become capable of processing increasingly long and complex texts, there has been growing interest in their application within computational literary studies. However, evaluating the usefulness of these models for such tasks remains challenging due to the cost of fine-grained annotation for long-form texts and the data contamination concerns inherent in using public-domain literature. Current embedding similarity datasets are not suitable for evaluating literary-domain tasks because of their focus on coarse-grained similarity and primarily on very short texts. We assemble and release FicSim, a dataset of long-form, recently written fiction, including scores along 12 axes of similarity informed by author-produced metadata and validated by digital humanities scholars. We evaluate a suite of embedding models on this task, demonstrating a tendency across models to focus on surface-level features over semantic categories that would be useful for computational literary studies tasks. Throughout our data-collection process, we prioritize author agency and rely on continual, informed author consent.
+ oai:arXiv.org:2510.20926v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.18653/v1/2025.findings-emnlp.1375
+ Natasha Johnson, Amanda Bertsch, Maria-Emil Deal, Emma Strubell
+
+
+ Sequentially Teaching Sequential Tasks $(ST)^2$: Teaching Robots Long-horizon Manipulation Skills
+ https://arxiv.org/abs/2510.21046
+ arXiv:2510.21046v2 Announce Type: replace
+Abstract: Learning from demonstration has proved itself useful for teaching robots complex skills with high sample efficiency. However, teaching long-horizon tasks with multiple skills is challenging as deviations tend to accumulate, the distributional shift becomes more evident, and human teachers become fatigued over time, thereby increasing the likelihood of failure. To address these challenges, we introduce $(ST)^2$, a sequential method for learning long-horizon manipulation tasks that allows users to control the teaching flow by specifying key points, enabling structured and incremental demonstrations. Using this framework, we study how users respond to two teaching paradigms: (i) a traditional monolithic approach, in which users demonstrate the entire task trajectory at once, and (ii) a sequential approach, in which the task is segmented and demonstrated step by step. We conducted an extensive user study on the restocking task with $16$ participants in a realistic retail store environment, evaluating user preferences and the effectiveness of the methods. User-level analysis showed superior performance for the sequential approach in most cases (10 users), compared with the monolithic approach (5 users), with one tie. Our subjective results indicate that some teachers prefer sequential teaching, as it allows them to teach complicated tasks iteratively, while others prefer teaching in one go due to its simplicity.
+ oai:arXiv.org:2510.21046v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/MRA.2026.3650853
+ Zlatan Ajanovi\'c, Ravi Prakash, Leandro de Souza Rosa, Jens Kober
+
+
+ Soft Instruction De-escalation Defense
+ https://arxiv.org/abs/2510.21057
+ arXiv:2510.21057v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) are increasingly deployed in agentic systems that interact with an external environment; this makes them susceptible to prompt injections when dealing with untrusted data. To overcome this limitation, we propose SIC (Soft Instruction Control), a simple yet effective iterative prompt sanitization loop designed for tool-augmented LLM agents. Our method repeatedly inspects incoming data for instructions that could compromise agent behavior. If such content is found, the malicious content is rewritten, masked, or removed, and the result is re-evaluated. The process continues until the input is clean or a maximum iteration limit is reached; if imperative instruction-like content remains, the agent halts to ensure security. By allowing multiple passes, our approach acknowledges that individual rewrites may fail but enables the system to catch and correct missed injections in later steps. Although immediately useful, worst-case analysis shows that SIC is not infallible; a strong adversary can still achieve a 15% attack success rate (ASR) by embedding non-imperative workflows. This nonetheless raises the bar.
+ oai:arXiv.org:2510.21057v2
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Nils Philipp Walter, Chawin Sitawarin, Jamie Hayes, David Stutz, Ilia Shumailov
+
+
+ Universal Maximum Likelihood (List) Decoding via Fast Vector-Matrix Multiplication
+ https://arxiv.org/abs/2510.21414
+ arXiv:2510.21414v3 Announce Type: replace
+Abstract: Maximum-likelihood (ML) decoding for arbitrary block codes remains fundamentally hard, with worst-case time complexity, measured by the total number of multiplications, being no better than straightforward exhaustive search, which requires $q^{k} n$ operations for an $[n,k]_q$ code. This paper introduces a simple, code-agnostic framework that reduces the worst-case complexity by a factor of $n$, down to $q^{k}$ operations, a highly desirable reduction in practice. The result holds for both linear and nonlinear block codes over general memoryless channels and under both hard-decision and soft-decision decoding. It naturally extends to intersymbol-interference (ISI) channels and ML list decoding with only a negligible increase in complexity. Our core insight is that, upon receipt of each sequence at the receiver, the conditional probability of that sequence for each codeword in the codebook (i.e., the \emph{likelihood}) can be expressed as the inner product of two carefully constructed vectors -- the first depending on the received sequence, and the second on that codeword itself. As a result, evaluating the likelihoods for all codewords in the codebook reduces to a single vector-matrix multiplication, and ML decoding (MLD) becomes the simple task of picking the maximum entry in the resulting vector. The only non-trivial cost lies in the vector-matrix product. However, our matrix construction allows the use of the Mailman algorithm to reduce this cost. This time reduction is achieved at the cost of high space complexity, requiring $\mathcal{O}(q^{k+1} n)$ space to store the pre-computed codebook matrix.
+ oai:arXiv.org:2510.21414v3
+ cs.IT
+ cs.DS
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hoang Ly, Emina Soljanin, Michael Schleppy
+
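The inner-product construction in the abstract above can be sketched for hard-decision decoding over a memoryless channel. This is an illustrative reconstruction, not the paper's code; the function names and the one-hot matrix layout are assumptions. The receiver-side vector concatenates per-position log-likelihoods (depends only on the received sequence), each codebook column one-hot-encodes a codeword, and a single vector-matrix product scores every codeword at once:

```python
import numpy as np

def codebook_matrix(codebook, q):
    # Column j one-hot-encodes codeword j symbol by symbol (length q*n),
    # so it depends only on the codeword itself and can be precomputed.
    n = len(codebook[0])
    M = np.zeros((q * n, len(codebook)))
    for j, cw in enumerate(codebook):
        for i, sym in enumerate(cw):
            M[i * q + sym, j] = 1.0
    return M

def ml_decode(received, M, log_lik, q):
    # log_lik[y] is the length-q vector of log P(receive y | send x);
    # u depends only on the received sequence, not on any codeword.
    u = np.concatenate([log_lik[y] for y in received])
    scores = u @ M  # one vector-matrix product yields all codeword log-likelihoods
    return int(np.argmax(scores))

# Example: binary repetition code over a BSC with crossover probability 0.1.
codebook = [(0, 0, 0), (1, 1, 1)]
M = codebook_matrix(codebook, q=2)
log_lik = {0: np.log([0.9, 0.1]), 1: np.log([0.1, 0.9])}
print(ml_decode((0, 0, 1), M, log_lik, q=2))  # decodes to codeword 0
```

With the straightforward product this costs $q^k n$ multiplications; the factor-$n$ saving in the abstract comes from applying the Mailman algorithm to the precomputed matrix, which this sketch omits.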
+
+ Joint Score-Threshold Optimization for Interpretable Risk Assessment
+ https://arxiv.org/abs/2510.21934
+ arXiv:2510.21934v2 Announce Type: replace
+Abstract: Risk assessment tools in healthcare commonly employ point-based scoring systems that map patients to ordinal risk categories via thresholds. While electronic health record (EHR) data presents opportunities for data-driven optimization of these tools, two fundamental challenges impede standard supervised learning: (1) labels are often available only for extreme risk categories due to intervention-censored outcomes, and (2) misclassification cost is asymmetric and increases with ordinal distance. We propose a mixed-integer programming (MIP) framework that jointly optimizes scoring weights and category thresholds in the face of these challenges. Our approach prevents label-scarce category collapse via threshold constraints, and utilizes an asymmetric, distance-aware objective. The MIP framework supports governance constraints, including sign restrictions, sparsity, and minimal modifications to incumbent tools, ensuring practical deployability in clinical workflows. We further develop a continuous relaxation of the MIP problem to provide warm-start solutions for more efficient MIP optimization. We apply the proposed score optimization framework to a case study of inpatient falls risk assessment using the Johns Hopkins Fall Risk Assessment Tool.
+ oai:arXiv.org:2510.21934v2
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fardin Gankhanloo, Emmett Springer, Erik H. Hoyer, Daniel L. Young, Kimia Ghobadi
+
+
+ LAMP: Data-Efficient Linear Affine Weight-Space Models for Parameter-Controlled 3D Shape Generation and Extrapolation
+ https://arxiv.org/abs/2510.22491
+ arXiv:2510.22491v2 Announce Type: replace
+Abstract: Generating high-fidelity 3D geometries that satisfy specific parameter constraints has broad applications in design and engineering. However, current methods typically rely on large training datasets and struggle with controllability and generalization beyond the training distributions. To overcome these limitations, we introduce LAMP (Linear Affine Mixing of Parametric shapes), a data-efficient framework for controllable and interpretable 3D generation. LAMP first aligns signed distance function (SDF) decoders by overfitting each exemplar from a shared initialization, then synthesizes new geometries by solving a parameter-constrained mixing problem in the aligned weight space. To ensure robustness, we further propose a safety metric that detects geometry validity via linearity mismatch. We evaluate LAMP on two 3D parametric benchmarks: DrivAerNet++ and BlendedNet. We found that LAMP enables (i) controlled interpolation within bounds with as few as 100 samples, (ii) safe extrapolation by up to 100% parameter difference beyond training ranges, (iii) physics performance-guided optimization under fixed parameters. LAMP significantly outperforms conditional autoencoder and Deep Network Interpolation (DNI) baselines in both extrapolation and data efficiency. Our results demonstrate that LAMP advances controllable, data-efficient, and safe 3D generation for design exploration, dataset generation, and performance-driven optimization.
+ oai:arXiv.org:2510.22491v2
+ cs.LG
+ cs.CE
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ghadi Nehme, Yanxia Zhang, Dule Shu, Matt Klenk, Faez Ahmed
+
+
+ A Survey of AI Scientists
+ https://arxiv.org/abs/2510.23045
+ arXiv:2510.23045v5 Announce Type: replace
+Abstract: Artificial intelligence is undergoing a profound transition from a computational instrument to an autonomous originator of scientific knowledge. This emerging paradigm, the AI scientist, is architected to emulate the complete scientific workflow-from initial hypothesis generation to the final synthesis of publishable findings-thereby promising to fundamentally reshape the pace and scale of discovery. However, the rapid and unstructured proliferation of these systems has created a fragmented research landscape, obscuring overarching methodological principles and developmental trends. This survey provides a systematic and comprehensive synthesis of this domain by introducing a unified, six-stage methodological framework that deconstructs the end-to-end scientific process into: Literature Review, Idea Generation, Experimental Preparation, Experimental Execution, Scientific Writing, and Paper Generation. Through this analytical lens, we chart the field's evolution from early Foundational Modules (2022-2023) to integrated Closed-Loop Systems (2024), and finally to the current frontier of Scalability, Impact, and Human-AI Collaboration (2025-present). By rigorously synthesizing these developments, this survey not only clarifies the current state of autonomous science but also provides a critical roadmap for overcoming remaining challenges in robustness and governance, ultimately guiding the next generation of systems toward becoming trustworthy and indispensable partners in human scientific inquiry.
+ oai:arXiv.org:2510.23045v5
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Guiyao Tie, Pan Zhou, Lichao Sun
+
+
+ Numerical Spectrum Linking: Identification of Governing PDE via Koopman-Chebyshev Approximation
+ https://arxiv.org/abs/2510.23078
+ arXiv:2510.23078v3 Announce Type: replace
+Abstract: A numerical framework is proposed for identifying partial differential equations (PDEs) governing dynamical systems directly from their observation data using Chebyshev polynomial approximation. In contrast to data-driven approaches such as dynamic mode decomposition (DMD), which approximate the Koopman operator without a clear connection to differential operators, the proposed method constructs finite-dimensional Koopman matrices by projecting the dynamics onto a Chebyshev basis, thereby capturing both differential and nonlinear terms. This establishes a numerical link between the Koopman and differential operators. Numerical experiments on benchmark dynamical systems confirm the accuracy and efficiency of the approach, underscoring its potential for interpretable operator learning. The framework also lays a foundation for future integration with symbolic regression, enabling the construction of explicit mathematical models directly from data.
+ oai:arXiv.org:2510.23078v3
+ math.NA
+ cs.NA
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Phonepaserth Sisaykeo, Shogo Muramatsu
+
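The projection onto a Chebyshev basis described above can be illustrated with a minimal EDMD-style sketch for a scalar system on [-1, 1]: lift states into Chebyshev features and fit a finite Koopman matrix by least squares. This is an assumption-laden illustration of the general idea, not the authors' method; `chebyshev_koopman` is a hypothetical name.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander

def chebyshev_koopman(x_now, x_next, degree):
    # Lift snapshot pairs into a Chebyshev basis and fit a finite-dimensional
    # Koopman matrix K with features(x_{t+1}) ~= features(x_t) @ K.
    Phi = chebvander(x_now, degree)        # (m, degree+1) feature matrix
    Phi_next = chebvander(x_next, degree)
    K, *_ = np.linalg.lstsq(Phi, Phi_next, rcond=None)
    return K

# Example: the contraction x_{t+1} = 0.5 * x_t is captured exactly,
# since each T_k(0.5 x) is itself a polynomial of degree k in x.
x = np.linspace(-1.0, 1.0, 50)
K = chebyshev_koopman(x, 0.5 * x, degree=3)
pred = chebvander(x, 3) @ K
```

Because the basis functions are polynomials with known derivative relations, entries of K can be matched against differential-operator terms, which is the numerical link the abstract refers to.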
+
+ Tree-Cotree-Based IETI-DP for Eddy Current Problems in Time-Domain
+ https://arxiv.org/abs/2510.23446
+ arXiv:2510.23446v2 Announce Type: replace
+Abstract: For low-frequency electromagnetic problems, where wave-propagation effects can be neglected, eddy current formulations are commonly used as a simplification of the full Maxwell's equations. In this setup, time-domain simulations, needed to capture transient startup responses or nonlinear behavior, are often computationally expensive. We propose a novel tearing and interconnecting approach for eddy currents in time-domain and investigate its scalability.
+ oai:arXiv.org:2510.23446v2
+ math.NA
+ cs.CE
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mario Mally, Rafael Vázquez, Sebastian Schöps
+
+
+ Blockage-Aware Multi-RIS WSR Maximization via Per-RIS Indexed Synchronization Sequences and Closed-Form Riemannian Updates
+ https://arxiv.org/abs/2510.24723
+ arXiv:2510.24723v2 Announce Type: replace
+Abstract: Millimeter-wave (mmWave) multi-user MIMO systems are highly vulnerable to blockage, and reconfigurable intelligent surfaces (RIS) have been proposed as a remedy. However, RIS links may themselves be blocked, while most prior works assume ideal RIS availability. We propose an end-to-end blockage-aware multi-RIS weighted sum-rate (WSR) optimization framework. The BS transmits short per-RIS indexed synchronization signals, enabling each user to identify blocked panels through a simple energy detection test. Based on the detected feasible sets, we jointly optimize the BS precoder and RIS phases via a Closed-form Riemannian Phase Alignment (CRPA) algorithm. CRPA provides unit-modulus-preserving closed-form updates, requiring no projection or line search, and ensures monotone ascent. Simulations validate reliable blockage detection and notable WSR and convergence gains over existing baselines.
+ oai:arXiv.org:2510.24723v2
+ eess.SY
+ cs.SY
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sehyun Ryu, Hyun Jong Yang
+
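The per-user energy detection test sketched in the abstract above could look roughly like the following. This is an illustrative sketch only: the correlator form, the noise-scaled threshold, and the name `detect_blocked_panels` are assumptions, not the paper's specification.

```python
import numpy as np

def detect_blocked_panels(rx, sync_seqs, noise_var, factor=4.0):
    # Correlate the received samples with each per-RIS indexed sync
    # sequence; a panel whose correlator energy stays near the noise
    # floor is declared blocked and removed from the feasible set.
    blocked = []
    for i, s in enumerate(sync_seqs):
        energy = np.abs(np.vdot(s, rx)) ** 2 / len(s)
        if energy < factor * noise_var:
            blocked.append(i)
    return blocked

# Example: orthogonal sync sequences; only panel 0's signal reaches the user.
s0 = np.array([1.0, 1.0, 1.0, 1.0]) / 2
s1 = np.array([1.0, -1.0, 1.0, -1.0]) / 2
rx = 3.0 * s0
print(detect_blocked_panels(rx, [s0, s1], noise_var=0.1))  # [1]
```

The feasible sets produced this way would then constrain the joint precoder/phase optimization that the CRPA updates operate on.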
+
+ Perception, Understanding and Reasoning, A Multimodal Benchmark for Video Fake News Detection
+ https://arxiv.org/abs/2510.24816
+ arXiv:2510.24816v2 Announce Type: replace
+Abstract: The advent of multi-modal large language models (MLLMs) has greatly advanced research on video fake news detection (VFND) tasks. Existing benchmarks typically focus on the detection accuracy, while failing to provide fine-grained assessments for the entire detection process. To address these limitations, we introduce POVFNDB (Process-oriented Video Fake News Detection Benchmark), a process-oriented benchmark comprising 10 tasks designed to systematically evaluate MLLMs' perception, understanding, and reasoning capabilities in VFND. This benchmark contains 36,240 human-annotated question-answer (QA) pairs in structured or open-ended formats, spanning 15 distinct evaluation dimensions that characterize different aspects of the video fake news detection process. Using POVFNDB, we conduct comprehensive evaluations on both proprietary and open-source MLLMs. Moreover, we establish a strong benchmark baseline by fine-tuning Qwen2.5VL-7B-Instruct on process-oriented chain-of-thought data constructed with our proposed POVFND-CoT framework, achieving state-of-the-art performance on VFND.
+ oai:arXiv.org:2510.24816v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Cui Yakun, Peng Qi, Fushuo Huo, Hang Du, Weijie Shi, Juntao Dai, Zhenghao Zhu, Sirui Han, Yike Guo
+
+
+ Cyclic Counterfactuals under Shift-Scale Interventions
+ https://arxiv.org/abs/2510.25005
+ arXiv:2510.25005v2 Announce Type: replace
+Abstract: Most counterfactual inference frameworks traditionally assume acyclic structural causal models (SCMs), i.e. directed acyclic graphs (DAGs). However, many real-world systems (e.g. biological systems) contain feedback loops or cyclic dependencies that violate acyclicity. In this work, we study counterfactual inference in cyclic SCMs under shift-scale interventions, i.e., soft, policy-style changes that rescale and/or shift a variable's mechanism.
+ oai:arXiv.org:2510.25005v2
+ cs.AI
+ cs.LG
+ math.ST
+ stat.ML
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Saptarshi Saha, Dhruv Vansraj Rathore, Utpal Garain
+
+
+ Comparative Study of UNet-based Architectures for Liver Tumor Segmentation in Multi-Phase Contrast-Enhanced Computed Tomography
+ https://arxiv.org/abs/2510.25522
+ arXiv:2510.25522v5 Announce Type: replace
+Abstract: Segmentation of liver structures in multi-phase contrast-enhanced computed tomography (CECT) plays a crucial role in computer-aided diagnosis and treatment planning. In this study, we investigate the performance of UNet-based architectures for liver tumor segmentation, evaluating ResNet, Transformer-based, and State-space (Mamba) backbones initialized with pretrained weights. Our comparative analysis reveals that despite the theoretical advantages of modern architectures in modeling long-range dependencies, ResNet-based models demonstrated superior sample efficiency on this dataset. This suggests that the inherent inductive biases of Convolutional Neural Networks (CNNs) remain advantageous for generalizing on limited medical data compared to data-hungry alternatives. To further improve segmentation quality, we introduce attention mechanisms into the backbone, finding that the Convolutional Block Attention Module (CBAM) yields the optimal configuration. The ResNetUNet3+ with CBAM achieved the highest nominal performance with a Dice score of 0.755 and IoU of 0.662, while also delivering the most precise boundary delineation (lowest HD95 of 77.911). Critically, while statistical testing indicated that the improvement in mean Dice score was not significant (p > 0.05) compared to the baseline, the proposed model exhibited greater stability (lower standard deviation) and higher specificity (0.926). These findings demonstrate that classical ResNet architectures, when enhanced with modern attention modules, provide a robust and statistically comparable alternative to emerging methods, offering a stable direction for liver tumor segmentation in clinical practice.
+ oai:arXiv.org:2510.25522v5
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Doan-Van-Anh Ly (The Saigon International University), Thanh-Hai Le (The Saigon International University), Thi-Thu-Hien Pham (International University, Vietnam National University HCMC)
+
+
+ Empirical and Sustainability Aspects of Software Engineering Research in the Era of Large Language Models: A Reflection
+ https://arxiv.org/abs/2510.26538
+ arXiv:2510.26538v3 Announce Type: replace
+Abstract: Software Engineering (SE) research involving the use of Large Language Models (LLMs) has introduced several new challenges related to rigour in benchmarking, contamination, replicability, and sustainability. In this paper, we invite the research community to reflect on how these challenges are addressed in SE. Our results provide a structured overview of current LLM-based SE research at ICSE, highlighting both encouraging practices and persistent shortcomings. We conclude with recommendations to strengthen benchmarking rigour, improve replicability, and address the financial and environmental costs of LLM-based SE.
+ oai:arXiv.org:2510.26538v3
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ David Williams, Max Hort, Maria Kechagia, Aldeida Aleti, Justyna Petke, Federica Sarro
+
+
+ A DRL-Empowered Multi-Level Jamming Approach for Secure Semantic Communication
+ https://arxiv.org/abs/2510.26610
+ arXiv:2510.26610v3 Announce Type: replace
+Abstract: Semantic communication (SemCom) aims to transmit only task-relevant information, thereby improving communication efficiency but also exposing semantic information to potential eavesdropping. In this paper, we propose a deep reinforcement learning (DRL)-empowered multi-level jamming approach to enhance the security of SemCom systems over MIMO fading wiretap channels. This approach combines semantic layer jamming, achieved by encoding task-irrelevant text, and physical layer jamming, achieved by encoding random Gaussian noise. These two-level jamming signals are superposed with task-relevant semantic information to protect the transmitted semantics from eavesdropping. A deep deterministic policy gradient (DDPG) algorithm is further introduced to dynamically design and optimize the precoding matrices for both task-relevant semantic information and multi-level jamming signals, aiming to enhance the legitimate user's image reconstruction while degrading the eavesdropper's performance. To jointly train the SemCom model and the DDPG agent, we propose an alternating optimization strategy where the two modules are updated iteratively. Experimental results demonstrate that, compared with both the encryption-based (ESCS) and encoded jammer-based (EJ) benchmarks, our method achieves comparable security while improving the legitimate user's peak signal-to-noise ratio (PSNR) by up to approximately 0.6 dB.
+ oai:arXiv.org:2510.26610v3
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weixuan Chen, Qianqian Yang
+
+
+ Process-based Indicators of Vulnerability Re-Introducing Code Changes: An Exploratory Case Study
+ https://arxiv.org/abs/2510.26676
+ arXiv:2510.26676v2 Announce Type: replace
+Abstract: Software vulnerabilities often persist or re-emerge even after being fixed, revealing the complex interplay between code evolution and socio-technical factors. While source code metrics provide useful indicators of vulnerabilities, software engineering process metrics can uncover patterns that lead to their introduction. Yet few studies have explored whether process metrics can reveal risky development activities over time -- insights that are essential for anticipating and mitigating software vulnerabilities. This work highlights the critical role of process metrics along with code changes in understanding and mitigating vulnerability reintroduction. We move beyond file-level prediction and instead analyze security fixes at the commit level, focusing not only on whether a single fix introduces a vulnerability but also on the longer sequences of changes through which vulnerabilities evolve and re-emerge. Our approach emphasizes that reintroduction is rarely the result of one isolated action, but emerges from cumulative development activities and socio-technical conditions. To support this analysis, we conducted a case study on the ImageMagick project by correlating longitudinal process metrics such as bus factor, issue density, and issue spoilage with vulnerability reintroduction activities, encompassing 76 instances of reintroduced vulnerabilities. Our findings show that reintroductions often align with increased issue spoilage and fluctuating issue density, reflecting short-term inefficiencies in issue management and team responsiveness. These observations provide a foundation for broader studies that combine process and code metrics to predict risky fixes and strengthen software security.
+ oai:arXiv.org:2510.26676v2
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Samiha Shimmi, Nicholas M. Synovic, Mona Rahimi, George K. Thiruvathukal
+
+
+ Inverse Knowledge Search over Verifiable Reasoning: Synthesizing a Scientific Encyclopedia from a Long Chains-of-Thought Knowledge Base
+ https://arxiv.org/abs/2510.26854
+ arXiv:2510.26854v3 Announce Type: replace
+Abstract: Most scientific materials compress reasoning, presenting conclusions while omitting the derivational chains that justify them. This compression hinders verification by lacking explicit, step-wise justifications and inhibits cross-domain links by collapsing the very pathways that establish the logical and causal connections between concepts. We introduce a scalable framework that decompresses scientific reasoning, constructing a verifiable Long Chain-of-Thought (LCoT) knowledge base and projecting it into an emergent encyclopedia, SciencePedia. Our pipeline operationalizes an endpoint-driven, reductionist strategy: a Socratic agent, guided by a curriculum of around 200 courses, generates approximately 3 million first-principles questions. To ensure high fidelity, multiple independent solver models generate LCoTs, which are then rigorously filtered by prompt sanitization and cross-model answer consensus, retaining only those with verifiable endpoints. This verified corpus powers the Brainstorm Search Engine, which performs inverse knowledge search -- retrieving diverse, first-principles derivations that culminate in a target concept. This engine, in turn, feeds the Plato synthesizer, which narrates these verified chains into coherent articles. The initial SciencePedia comprises approximately 200,000 fine-grained entries spanning mathematics, physics, chemistry, biology, engineering, and computation. In evaluations across six disciplines, Plato-synthesized articles (conditioned on retrieved LCoTs) exhibit substantially higher knowledge-point density and significantly lower factual error rates than an equally-prompted baseline without retrieval (as judged by an external LLM). Built on this verifiable LCoT knowledge base, this reasoning-centric approach enables trustworthy, cross-domain scientific synthesis at scale and establishes the foundation for an ever-expanding encyclopedia.
+ oai:arXiv.org:2510.26854v3
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yu Li, Yuan Huang, Tao Wang, Caiyu Fan, Xiansheng Cai, Sihan Hu, Xinzijian Liu, Cheng Shi, Mingjun Xu, Zhen Wang, Yan Wang, Xiangqi Jin, Tianhan Zhang, Linfeng Zhang, Lei Wang, Youjin Deng, Pan Zhang, Weijie Sun, Xinyu Li, Weinan E, Linfeng Zhang, Zhiyuan Yao, Kun Chen
+
+
+ Floor Plan-Guided Visual Navigation Incorporating Depth and Directional Cues
+ https://arxiv.org/abs/2511.01493
+ arXiv:2511.01493v3 Announce Type: replace
+Abstract: Current visual navigation strategies mainly follow an exploration-first and then goal-directed navigation paradigm. This exploratory phase inevitably compromises the overall efficiency of navigation. Recent studies propose leveraging floor plans alongside RGB inputs to guide agents, aiming for rapid navigation without prior exploration or mapping. Key issues persist despite early successes. The modal gap and content misalignment between floor plans and RGB images necessitate an efficient approach to extract the most salient and complementary features from both for reliable navigation. Here, we propose GlocDiff, a novel framework that employs a diffusion-based policy to continuously predict future waypoints. This policy is conditioned on two complementary information streams: (1) local depth cues derived from the current RGB observation, and (2) global directional guidance extracted from the floor plan. The former handles immediate navigation safety by capturing surrounding geometry, while the latter ensures goal-directed efficiency by offering definitive directional cues. Extensive evaluations on the FloNa benchmark demonstrate that GlocDiff achieves superior efficiency and effectiveness. Furthermore, its successful deployment in real-world scenarios underscores its strong potential for broad practical application.
+ oai:arXiv.org:2511.01493v3
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weiqi Huang, Jiaxin Li, Zan Wang, Huijun Di, Wei Liang, Zhu Yang
+
+
+ Ergodic Rate Analysis of Two-State Pinching-Antenna Systems
+ https://arxiv.org/abs/2511.01798
+ arXiv:2511.01798v2 Announce Type: replace
+Abstract: Flexible Antenna Systems (FAS) are a key enabler of next-generation wireless networks, allowing the antenna aperture to be dynamically reconfigured to adapt to channel conditions and service requirements. In this context, pinching-antenna systems (PASs) implemented on software-controllable dielectric waveguides provide the ability to reconfigure both channel characteristics and path loss by selectively exciting discrete radiation points. Existing works, however, typically assume continuously adjustable pinching positions, neglecting the spatial discreteness imposed by practical implementations. This paper develops a closed-form analytical framework for the ergodic rate of two-state PASs, where pinching antennas are fixed and only their activation states are controlled. To quantify the impact of spatial discretization, pinching discretization efficiency is introduced, characterizing the performance gap relative to the ideal continuous case. Finally, numerical results show that near-continuous performance can be achieved with a limited number of pinching points, providing design insights for scalable PASs.
+ oai:arXiv.org:2511.01798v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Dimitrios Tyrovolas, Sotiris A. Tegos, Yue Xiao, Panagiotis D. Diamantoulakis, Sotiris Ioannidis, Christos Liaskos, George K. Karagiannidis, Stylianos D. Asimonis
+
+
+ Assessing the value of Geo-Foundational Models for Flood Inundation Mapping: Benchmarking models for Sentinel-1, Sentinel-2, and Planetscope for end-users
+ https://arxiv.org/abs/2511.01990
+ arXiv:2511.01990v3 Announce Type: replace
+Abstract: Geo-Foundational Models (GFMs) enable fast and reliable extraction of spatiotemporal information from satellite imagery, improving flood inundation mapping by leveraging location and time embeddings. Despite their potential, it remains unclear whether GFMs outperform traditional models like U-Net. A systematic comparison across sensors and data availability scenarios is still lacking, which is an essential step to guide end-users in model selection. To address this, we evaluate three GFMs (Prithvi 2.0, Clay V1.5, and DOFA) and UViT (a Prithvi variant) against TransNorm, U-Net, and Attention U-Net using PlanetScope, Sentinel-1, and Sentinel-2. We observe competitive performance among all GFMs, with only 2-5% variation between the best and worst models across sensors. Clay outperforms others on PlanetScope (0.79 mIoU) and Sentinel-2 (0.70), while Prithvi leads on Sentinel-1 (0.57). In leave-one-region-out cross-validation across five regions, Clay shows slightly better performance across all sensors (mIoU: 0.72(0.04), 0.66(0.07), 0.51(0.08)) compared to Prithvi (0.70(0.05), 0.64(0.09), 0.49(0.13)) and DOFA (0.67(0.07), 0.64(0.04), 0.49(0.09)) for PlanetScope, Sentinel-2, and Sentinel-1, respectively. Across all 19 sites, leave-one-region-out cross-validation reveals a 4% improvement by Clay compared to U-Net. Visual inspection highlights Clay's superior ability to retain fine details. Few-shot experiments show Clay achieves 0.64 mIoU on PlanetScope with just five training images, outperforming Prithvi (0.24) and DOFA (0.35). In terms of computational time, Clay is a better choice due to its smaller model size (26M parameters), making it ~3x faster than Prithvi (650M) and 2x faster than DOFA (410M). Contrary to previous findings, our results suggest GFMs offer small to moderate improvements in flood mapping accuracy at lower computational cost and labeling effort compared to traditional U-Net.
+ oai:arXiv.org:2511.01990v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Saurabh Kaushik, Lalit Maurya, Elizabeth Tellman, ZhiJie Zhang
+
+
+ Multimodal Reasoning via Latent Refocusing
+ https://arxiv.org/abs/2511.02360
+ arXiv:2511.02360v3 Announce Type: replace
+Abstract: Chain of Thought (CoT) reasoning enhances logical performance by decomposing complex tasks, yet its multimodal extension faces a trade-off. The existing Thinking with Images paradigm is limited by the modality gap between vision and language, which hinders reliable extraction of reasoning relevant information from high dimensional visual data. Recent latent space reasoning method provides stronger multimodal representations, but it often lacks the ability to refocus on visual inputs and suffers from limited interpretability. To address these issues, we propose Latent Refocusing (LaRe), a novel multimodal reasoning paradigm that combines visual refocusing with rich latent representations, enabling iterative reasoning within the latent space. We further design a semantic augmentation training strategy that enhances the semantic structure of the latent space through joint alignment and reconstruction objectives. Experimental evaluations demonstrate that LaRe improves average accuracy by 9.4% compared to existing baselines while reducing the number of tokens required for inference by 16.5%. When scaled to a 7B-parameter Large Language Model backbone, LaRe achieves performance comparable to state-of-the-art models and outperforms larger-scale models on almost all benchmarks. Code and checkpoints will be released later.
+ oai:arXiv.org:2511.02360v3
+ cs.CV
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jizheng Ma, Xiaofei Zhou, Geyuan Zhang, Yanlong Song, Han Yan
+
+
+ Part-Aware Bottom-Up Group Reasoning for Fine-Grained Social Interaction Detection
+ https://arxiv.org/abs/2511.03666
+ arXiv:2511.03666v2 Announce Type: replace
+Abstract: Social interactions often emerge from subtle, fine-grained cues such as facial expressions, gaze, and gestures. However, existing methods for social interaction detection overlook such nuanced cues and primarily rely on holistic representations of individuals. Moreover, they directly detect social groups without explicitly modeling the underlying interactions between individuals. These drawbacks limit their ability to capture localized social signals and introduce ambiguity when group configurations should be inferred from social interactions grounded in nuanced cues. In this work, we propose a part-aware bottom-up group reasoning framework for fine-grained social interaction detection. The proposed method infers social groups and their interactions using body part features and their interpersonal relations. Our model first detects individuals and enhances their features using part-aware cues, and then infers group configuration by associating individuals via similarity-based reasoning, which considers not only spatial relations but also subtle social cues that signal interactions, leading to more accurate group inference. Experiments on the NVI dataset demonstrate that our method outperforms prior methods, achieving the new state of the art, while additional results on the Café dataset further validate its generalizability to group activity understanding.
+ oai:arXiv.org:2511.03666v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dongkeun Kim, Minsu Cho, Suha Kwak
+
+
+ LLM-enhanced Air Quality Monitoring Interface via Model Context Protocol
+ https://arxiv.org/abs/2511.03706
+ arXiv:2511.03706v3 Announce Type: replace
+Abstract: Air quality monitoring is central to environmental sustainability and public health, yet traditional systems remain difficult for non-expert users to interpret due to complex visualizations, limited interactivity, and high deployment costs. Recent advances in Large Language Models (LLMs) offer new opportunities to make sensor data more accessible, but their tendency to produce hallucinations limits reliability in safety-critical domains. To address these challenges, we present an LLM-enhanced Air Monitoring Interface (AMI) that integrates real-time sensor data with a conversational interface via the Model Context Protocol (MCP). Our system grounds LLM outputs in live environmental data, enabling accurate, context-aware responses while reducing hallucination risk. The architecture combines a Django-based backend, a responsive user dashboard, and a secure MCP server that exposes system functions as discoverable tools, allowing the LLM to act as an active operator rather than a passive responder. Expert evaluation demonstrated high factual accuracy (4.78), completeness (4.82), and minimal hallucinations (4.84), on a scale of 5, supported by inter-rater reliability analysis. These results highlight the potential of combining LLMs with standardized tool protocols to create reliable, secure, and user-friendly interfaces for real-time environmental monitoring.
+ oai:arXiv.org:2511.03706v3
+ cs.ET
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1109/ISAECT68904.2025.11318775
+ Yu-Erh Pan, Ayesha Siddika Nipu
+
+
+ An Architectural Advantage of The Instruction-Tuned LLM in Containing The Readability-Accuracy Tension in Text Simplification
+ https://arxiv.org/abs/2511.05080
+ arXiv:2511.05080v3 Announce Type: replace
+Abstract: The increasing health-seeking behavior and digital consumption of biomedical information by the general public necessitate scalable solutions for automatically adapting complex scientific and technical documents into plain language. Automatic text simplification solutions, including advanced large language models (LLMs), however, continue to face challenges in reliably arbitrating the tension between optimizing readability performance and ensuring preservation of discourse fidelity. This report empirically assesses two major classes of general-purpose LLMs, demonstrating how they navigate the readability-accuracy tension compared to a human benchmark. Using a comparative analysis of the instruction-tuned Mistral-Small 3 24B and the reasoning-augmented QWen2.5 32B, we identify an architectural advantage in the instruction-tuned LLM. Mistral exhibits a tempered lexical simplification strategy that enhances readability across a suite of metrics while preserving human-level discourse with a BERTScore of 0.91. QWen also attains enhanced readability performance and a reasonable BERTScore of 0.89, but its operational strategy shows a disconnect in balancing between readability and accuracy. Additionally, a comprehensive correlation analysis of a suite of 21 metrics spanning readability, discourse fidelity, content safety, and underlying distributional measures for mechanistic insights, confirms strong functional redundancies, and informs metric selection and domain adaptation for text simplification.
+ oai:arXiv.org:2511.05080v3
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ P. Bilha Githinji, Aikaterini Meilliou, Zeming Liang, Lian Zhang, Peiwu Qin
+
+
+ Conformal Prediction-Driven Adaptive Sampling for Digital Water Twins
+ https://arxiv.org/abs/2511.05610
+ arXiv:2511.05610v2 Announce Type: replace
+Abstract: Digital Twins (DTs) for Water Distribution Networks (WDNs) require accurate state estimation with limited sensors. Uniform sampling often wastes resources across nodes with different uncertainty. We propose an adaptive framework combining LSTM forecasting and Conformal Prediction (CP) to estimate node-wise uncertainty and focus sensing on the most uncertain points. Marginal CP is used for its low computational cost, suitable for real-time DTs. Experiments on Hanoi, Net3, and CTOWN show 33--34\% lower demand error than uniform sampling at 40\% coverage and maintain 89.4--90.2\% empirical coverage with only 5--10\% extra computation.
+ oai:arXiv.org:2511.05610v2
+ cs.LG
+ cs.AI
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Mohammadhossein Homaei, Mehran Tarif, Pablo Garcia Rodriguez, Andres Caro, Mar Avila
+
+
+ Setting $\varepsilon$ is not the Issue in Differential Privacy
+ https://arxiv.org/abs/2511.06305
+ arXiv:2511.06305v2 Announce Type: replace
+Abstract: This position paper argues that setting the privacy budget in differential privacy should not be viewed as an important limitation of differential privacy compared to alternative methods for privacy-preserving machine learning. The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy in real-world deployments and is sometimes used to promote alternative mitigation techniques for data protection. We believe this misleads decision-makers into choosing unsafe methods. We argue that the difficulty in interpreting privacy budgets does not stem from the definition of differential privacy itself, but from the intrinsic difficulty of estimating privacy risks in context, a challenge that any rigorous method for privacy risk assessment faces. Moreover, we claim that any sound method for estimating privacy risks should, given the current state of research, be expressible within the differential privacy framework or justify why it cannot.
+ oai:arXiv.org:2511.06305v2
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Edwige Cyffers
+
+
+ COGNOS: Universal Enhancement for Time Series Anomaly Detection via Constrained Gaussian-Noise Optimization and Smoothing
+ https://arxiv.org/abs/2511.06894
+ arXiv:2511.06894v2 Announce Type: replace
+Abstract: Reconstruction-based methods are a dominant paradigm in time series anomaly detection (TSAD); however, their near-universal reliance on Mean Squared Error (MSE) loss results in statistically flawed reconstruction residuals. This fundamental weakness leads to noisy, unstable anomaly scores, hindering reliable detection. To address this, we propose Constrained Gaussian-Noise Optimization and Smoothing (COGNOS), a universal, model-agnostic enhancement framework that tackles this issue at its source. COGNOS introduces a novel Gaussian-White Noise Regularization strategy during training, which directly constrains the model's output residuals to conform to a Gaussian white noise distribution. This engineered statistical property creates the ideal precondition for our second contribution: an Adaptive Residual Kalman Smoother, which operates as a statistically robust estimator to denoise the raw anomaly scores. Extensive experiments on multiple benchmarks demonstrate that COGNOS consistently and significantly enhances the performance of state-of-the-art backbones, validating the efficacy of coupling statistical regularization with adaptive filtering.
+ oai:arXiv.org:2511.06894v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wenlong Shang, Shihao Tian, Xutong Wan, Peng Chang
+
+
+ RPTS: Tree-Structured Reasoning Process Scoring for Faithful Multimodal Evaluation
+ https://arxiv.org/abs/2511.06899
+ arXiv:2511.06899v2 Announce Type: replace
+Abstract: Large Vision-Language Models (LVLMs) excel in multimodal reasoning and have shown impressive performance on various multimodal benchmarks. However, most of these benchmarks evaluate models primarily through multiple-choice or short-answer formats, which do not take the reasoning process into account. Although some benchmarks assess the reasoning process, their methods are often overly simplistic and only examine reasoning when answers are incorrect. This approach overlooks scenarios where flawed reasoning leads to correct answers. In addition, these benchmarks do not consider the impact of intermodal relationships on reasoning. To address this issue, we propose the Reasoning Process Tree Score (RPTS), a tree structure-based metric to assess reasoning processes. Specifically, we organize the reasoning steps into a reasoning tree and leverage its hierarchical information to assign weighted faithfulness scores to each reasoning step. By dynamically adjusting these weights, RPTS not only evaluates the overall correctness of the reasoning, but also pinpoints where the model fails in the reasoning. To validate RPTS in real-world multimodal scenarios, we construct a new benchmark, RPTS-Eval, comprising 374 images and 390 reasoning instances. Each instance includes reliable visual-textual clues that serve as leaf nodes of the reasoning tree. Furthermore, we define three types of intermodal relationships to investigate how intermodal interactions influence the reasoning process. We evaluated representative LVLMs (e.g., GPT4o, Llava-Next), uncovering their limitations in multimodal reasoning and highlighting the differences between open-source and closed-source commercial LVLMs. We believe that this benchmark will contribute to the advancement of research in the field of multimodal reasoning.
+ oai:arXiv.org:2511.06899v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Haofeng Wang, Yu Zhang
+
+
+ PADiff: Predictive and Adaptive Diffusion Policies for Ad Hoc Teamwork
+ https://arxiv.org/abs/2511.07260
+ arXiv:2511.07260v2 Announce Type: replace
+Abstract: Ad hoc teamwork (AHT) requires agents to collaborate with previously unseen teammates, which is crucial for many real-world applications. The core challenge of AHT is to develop an ego agent that can predict and adapt to unknown teammates on the fly. Conventional RL-based approaches optimize a single expected return, which often causes policies to collapse into a single dominant behavior, thus failing to capture the multimodal cooperation patterns inherent in AHT. In this work, we introduce PADiff, a diffusion-based approach that captures an agent's multimodal behaviors, unlocking its diverse cooperation modes with teammates. However, standard diffusion models lack the ability to predict and adapt in highly non-stationary AHT scenarios. To address this limitation, we propose a novel diffusion-based policy that integrates critical predictive information about teammates into the denoising process. Extensive experiments across three cooperation environments demonstrate that PADiff significantly outperforms existing AHT methods.
+ oai:arXiv.org:2511.07260v2
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hohei Chan, Xinzhi Zhang, Antao Xiang, Weinan Zhang, Mengchen Zhao
+
+
+ Post-Training as Reweighting: A Stochastic View of Reasoning Trajectories in Language Models
+ https://arxiv.org/abs/2511.07368
+ arXiv:2511.07368v2 Announce Type: replace
+Abstract: Foundation models encode rich structural knowledge but often rely on post-training procedures to adapt their reasoning behavior to specific tasks. Popular approaches such as reinforcement learning with verifiable rewards (RLVR) and inference-time reward aggregation are typically analyzed from a performance perspective, leaving their effects on the underlying reasoning distribution less understood. In this work, we study post-training reasoning from a stochastic trajectory viewpoint. Following Kim et al. (2025), we model reasoning steps of varying difficulty as Markov transitions with different probabilities, and formalize reasoning processes using tree-structured Markov chains. Within this framework, pretraining corresponds to discovering the reasoning structure, while post-training primarily reweights existing chains of thought. We show that both RLVR and inference-time reward aggregation concentrate probability mass on a small number of high-probability trajectories, leading to the suppression of rare but essential reasoning paths. As a consequence, solving hard instances often depends on low-probability trajectories already present in the base model. We further prove that exploration-oriented mechanisms, such as rejecting easy instances and applying KL regularization, help preserve these rare trajectories. Empirical simulations support our theoretical analysis.
+ oai:arXiv.org:2511.07368v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Dake Bu, Wei Huang, Andi Han, Atsushi Nitanda, Bo Xue, Qingfu Zhang, Hau-San Wong, Taiji Suzuki
+
+
+ Semantic-Consistent Bidirectional Contrastive Hashing for Noisy Multi-Label Cross-Modal Retrieval
+ https://arxiv.org/abs/2511.07780
+ arXiv:2511.07780v2 Announce Type: replace
+Abstract: Cross-modal hashing (CMH) facilitates efficient retrieval across different modalities (e.g., image and text) by encoding data into compact binary representations. While recent methods have achieved remarkable performance, they often rely heavily on fully annotated datasets, which are costly and labor-intensive to obtain. In real-world scenarios, particularly in multi-label datasets, label noise is prevalent and severely degrades retrieval performance. Moreover, existing CMH approaches typically overlook the partial semantic overlaps inherent in multi-label data, limiting their robustness and generalization. To tackle these challenges, we propose a novel framework named Semantic-Consistent Bidirectional Contrastive Hashing (SCBCH). The framework comprises two complementary modules: (1) Cross-modal Semantic-Consistent Classification (CSCC), which leverages cross-modal semantic consistency to estimate sample reliability and reduce the impact of noisy labels; (2) Bidirectional Soft Contrastive Hashing (BSCH), which dynamically generates soft contrastive sample pairs based on multi-label semantic overlap, enabling adaptive contrastive learning between semantically similar and dissimilar samples across modalities. Extensive experiments on four widely-used cross-modal retrieval benchmarks validate the effectiveness and robustness of our method, consistently outperforming state-of-the-art approaches under noisy multi-label conditions.
+ oai:arXiv.org:2511.07780v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Likang Peng, Chao Su, Wenyuan Wu, Yuan Sun, Dezhong Peng, Xi Peng, Xu Wang
+
+
+ SpikCommander: A High-performance Spiking Transformer with Multi-view Learning for Efficient Speech Command Recognition
+ https://arxiv.org/abs/2511.07883
+ arXiv:2511.07883v3 Announce Type: replace
+Abstract: Spiking neural networks (SNNs) offer a promising path toward energy-efficient speech command recognition (SCR) by leveraging their event-driven processing paradigm. However, existing SNN-based SCR methods often struggle to capture rich temporal dependencies and contextual information from speech due to limited temporal modeling and binary spike-based representations. To address these challenges, we first introduce the multi-view spiking temporal-aware self-attention (MSTASA) module, which combines effective spiking temporal-aware attention with a multi-view learning framework to model complementary temporal dependencies in speech commands. Building on MSTASA, we further propose SpikCommander, a fully spike-driven transformer architecture that integrates MSTASA with a spiking contextual refinement channel MLP (SCR-MLP) to jointly enhance temporal context modeling and channel-wise feature integration. We evaluate our method on three benchmark datasets: the Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC), and the Google Speech Commands V2 (GSC). Extensive experiments demonstrate that SpikCommander consistently outperforms state-of-the-art (SOTA) SNN approaches with fewer parameters under comparable time steps, highlighting its effectiveness and efficiency for robust speech command recognition.
+ oai:arXiv.org:2511.07883v3
+ cs.SD
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiaqi Wang, Liutao Yu, Xiongri Shen, Sihang Guo, Chenlin Zhou, Leilei Zhao, Yi Zhong, Zhiguo Zhang, Zhengyu Ma
+
+
+ Topology-Preserving Line Densification for Creating Contiguous Cartograms
+ https://arxiv.org/abs/2511.08121
+ arXiv:2511.08121v2 Announce Type: replace
+Abstract: Cartograms depict geographic regions with areas proportional to quantitative data. However, when created using density-equalizing map projections, cartograms may exhibit invalid topologies if boundary polygons are drawn using only a finite set of vertices connected by straight lines. Here we introduce a method for topology-preserving line densification that guarantees that cartogram regions remain connected and non-overlapping when using density-equalizing map projections. By combining our densification technique with a flow-based cartogram generator, we present a robust framework for strictly topology-preserving cartogram construction. Quantitative evaluations demonstrate that the proposed algorithm produces cartograms with greater accuracy and speed than alternative methods while maintaining comparable shape fidelity.
+ oai:arXiv.org:2511.08121v2
+ cs.CG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nihal Z. Miaji, Adi Singhania, Matthias E. Goh, Callista Le, Atima Tharatipyakul, Michael T. Gastner
+
+
+ DeepProofLog: Efficient Proving in Deep Stochastic Logic Programs
+ https://arxiv.org/abs/2511.08581
+ arXiv:2511.08581v2 Announce Type: replace
+Abstract: Neurosymbolic (NeSy) AI aims to combine the strengths of neural architectures and symbolic reasoning to improve the accuracy, interpretability, and generalization capability of AI models. While logic inference on top of subsymbolic modules has been shown to effectively guarantee these properties, this often comes at the cost of reduced scalability, which can severely limit the usability of NeSy models. This paper introduces DeepProofLog (DPrL), a novel NeSy system based on stochastic logic programs, which addresses the scalability limitations of previous methods. DPrL parameterizes all derivation steps with neural networks, allowing efficient neural guidance over the proving system. Additionally, we establish a formal mapping between the resolution process of our deep stochastic logic programs and Markov Decision Processes, enabling the application of dynamic programming and reinforcement learning techniques for efficient inference and learning. This theoretical connection improves scalability for complex proof spaces and large knowledge bases. Our experiments on standard NeSy benchmarks and knowledge graph reasoning tasks demonstrate that DPrL outperforms existing state-of-the-art NeSy systems, advancing scalability to larger and more complex settings than previously possible.
+ oai:arXiv.org:2511.08581v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ying Jiao, Rodrigo Castellano Ontiveros, Luc De Raedt, Marco Gori, Francesco Giannini, Michelangelo Diligenti, Giuseppe Marra
+
+
+ Convergence dynamics of Agent-to-Agent Interactions with Misaligned objectives
+ https://arxiv.org/abs/2511.08710
+ arXiv:2511.08710v3 Announce Type: replace
+Abstract: We develop and analyze a theoretical framework for agent-to-agent interactions in a simplified in-context linear regression setting. In our model, each agent is instantiated as a single-layer transformer with linear self-attention (LSA) trained to implement gradient-descent-like updates on a quadratic regression objective from in-context examples. We then study the coupled dynamics when two such LSA agents alternately update from each other's outputs under potentially misaligned fixed objectives. Within this framework, we characterize the generation dynamics and show that misalignment leads to a biased equilibrium where neither agent reaches its target, with residual errors predictable from the objective gap and the prompt-induced geometry. We also characterize an adversarial regime where asymmetric convergence is possible: one agent reaches its objective exactly while inducing persistent bias in the other. We further contrast this fixed-objective regime with an adaptive multi-agent setting, wherein a helper agent updates a turn-based objective to implement a Newton-like step for the main agent, eliminating the plateau and accelerating its convergence. Experiments with trained LSA agents, as well as black-box GPT-5-mini runs on in-context linear regression tasks, are consistent with our theoretical predictions within this simplified setting. We view our framework as a mechanistic account that links prompt geometry and objective misalignment to stability, bias, and robustness, and as a stepping stone toward analyzing more realistic multi-agent LLM systems.
+ oai:arXiv.org:2511.08710v3
+ cs.MA
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Romain Cosentino, Sarath Shekkizhar, Adam Earle
+
+
+ ConSurv: Multimodal Continual Learning for Survival Analysis
+ https://arxiv.org/abs/2511.09853
+ arXiv:2511.09853v2 Announce Type: replace
+Abstract: Survival prediction of cancers is crucial for clinical practice, as it informs mortality risks and influences treatment plans. However, a static model trained on a single dataset fails to adapt to the dynamically evolving clinical environment and continuous data streams, limiting its practical utility. While continual learning (CL) offers a solution to learn dynamically from new datasets, existing CL methods primarily focus on unimodal inputs and suffer from severe catastrophic forgetting in survival prediction. In real-world scenarios, multimodal inputs often provide comprehensive and complementary information, such as whole slide images and genomics; and neglecting inter-modal correlations negatively impacts the performance. To address the two challenges of catastrophic forgetting and complex inter-modal interactions between gigapixel whole slide images and genomics, we propose ConSurv, the first multimodal continual learning (MMCL) method for survival analysis. ConSurv incorporates two key components: Multi-staged Mixture of Experts (MS-MoE) and Feature Constrained Replay (FCR). MS-MoE captures both task-shared and task-specific knowledge at different learning stages of the network, including two modality encoders and the modality fusion component, learning inter-modal relationships. FCR further enhances learned knowledge and mitigates forgetting by restricting feature deviation of previous data at different levels, including encoder-level features of two modalities and the fusion-level representations. Additionally, we introduce a new benchmark integrating four datasets, Multimodal Survival Analysis Incremental Learning (MSAIL), for comprehensive evaluation in the CL setting. Extensive experiments demonstrate that ConSurv outperforms competing methods across multiple metrics.
+ oai:arXiv.org:2511.09853v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dianzhi Yu, Conghao Xiong, Yankai Chen, Wenqian Cui, Xinni Zhang, Yifei Zhang, Hao Chen, Joseph J. Y. Sung, Irwin King
+
+
+ Navigating the Ethics of Internet Measurement: Researchers' Perspectives from a Case Study in the EU
+ https://arxiv.org/abs/2511.10408
+ arXiv:2511.10408v2 Announce Type: replace
+Abstract: Internet measurement research is essential for understanding, improving, and securing Internet infrastructure. However, its methods often involve large-scale data collection and user observation, raising complex ethical questions. While recent research has identified ethical challenges in Internet measurement research and laid out best practices, little is known about how researchers actually make ethical decisions in their research practice. To understand how these practices take shape day-to-day from the perspective of Internet measurement researchers, we interviewed 16 researchers from an Internet measurement research group in the EU. Through thematic analysis, we find that researchers deal with five main ethical challenges: privacy and consent issues, the possibility of unintended harm, balancing transparency with security and accountability, uncertain ethical boundaries, and hurdles in the ethics review process. Researchers address these by lab testing, rate limiting, setting up clear communication channels, and relying heavily on mentors and colleagues for guidance. Researchers express that ethical requirements vary across institutions, jurisdictions and conferences, and ethics review boards often lack the technical knowledge to evaluate Internet measurement research. We also highlight the invisible labor of Internet measurement researchers and describe their ethics practices as craft knowledge, both of which are crucial in upholding responsible research practices in the Internet measurement community.
+ oai:arXiv.org:2511.10408v2
+ cs.HC
+ cs.CY
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Sahibzada Farhan Amin, Sana Athar, Anja Feldmann, Ha Dao, Mannat Kaur
+
+
+ Retail electricity costs and emissions incentives are misaligned for commercial and industrial power consumers
+ https://arxiv.org/abs/2511.10775
+ arXiv:2511.10775v2 Announce Type: replace
+Abstract: Electrification is contributing to substantial growth in U.S. commercial and industrial loads, but the cost and Scope 2 carbon emission implications of this load growth are opaque for both power consumers and utilities. This work describes a unique spatiotemporally resolved data set of U.S. electricity costs and emissions and applies time series approximation methods to quantify the alignment of electricity cost and emission incentives for large commercial and industrial consumers. We present a comprehensive spatiotemporal dataset of U.S. price-based demand response (i.e., tariff) and incentive-based demand response programs, enabling direct comparison to previously published marginal emission factor, average emission factor, and day-ahead market prices. We resolved the structural incompatibility and fragmentation of these datasets by developing time series approximations of discrete data and unifying geospatially heterogeneous datasets. Analysis of these datasets reveals significant spatial and temporal heterogeneity in cost and carbon emissions incentives for demand-side energy flexibility, underscoring the importance of site selection as a key factor influencing power costs and Scope 2 emissions. Analysis also reveals broad misalignment of economic and emissions incentives under existing electricity tariff structures, meaning tariffs are incentivizing consumption of more carbon-intensive electricity, and highlighting potential barriers to electrification delivering carbon savings.
+ oai:arXiv.org:2511.10775v2
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fletcher T. Chapin, Akshay K. Rao, Adhithyan Sakthivelu, Carson I. Tucker, Eres David, Casey S. Chen, Erin Musabandesu, Meagan S. Mauter
+
+
+ Evaluating Latent Generative Paradigms for High-Fidelity 3D Shape Completion from a Single Depth Image
+ https://arxiv.org/abs/2511.11074
+ arXiv:2511.11074v2 Announce Type: replace
+Abstract: While generative models have seen significant adoption across a wide range of data modalities, including 3D data, a consensus on which model is best suited for which task has yet to be reached. Further, conditional information such as text and images to steer the generation process are frequently employed, whereas others, like partial 3D data, have not been thoroughly evaluated. In this work, we compare two of the most promising generative models--Denoising Diffusion Probabilistic Models and Autoregressive Causal Transformers--which we adapt for the tasks of generative shape modeling and completion. We conduct a thorough quantitative evaluation and comparison of both tasks, including a baseline discriminative model and an extensive ablation study. Our results show that (1) the diffusion model with continuous latents outperforms both the discriminative model and the autoregressive approach and delivers state-of-the-art performance on multi-modal shape completion from a single, noisy depth image under realistic conditions and (2) when compared on the same discrete latent space, the autoregressive model can match or exceed diffusion performance on these tasks.
+ oai:arXiv.org:2511.11074v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Matthias Humt, Ulrich Hillenbrand, Rudolph Triebel
+
+
+ SCL Decoding of Non-Binary Linear Block Codes
+ https://arxiv.org/abs/2511.11256
+ arXiv:2511.11256v2 Announce Type: replace
+Abstract: Non-binary linear block codes (NB-LBCs) are an important class of error-correcting codes that are especially competent in correcting burst errors. They have broad applications in modern communications and storage systems. However, efficient soft-decision decoding of these codes remains underdeveloped. This paper proposes successive cancellation list (SCL) decoding for NB-LBCs that are defined over a finite field of characteristic two, i.e., F_{2^r}, where r is the extension degree. By establishing a one-to-r mapping between the binary composition of each non-binary codeword and r binary polar codewords, SCL decoding of the r polar codes can be performed with a complexity that is sub-quadratic in the codeword length. A simplified path sorting is further proposed to facilitate the decoding. Simulation results on short-length extended Reed-Solomon (eRS) and non-binary extended BCH (NB-eBCH) codes show that SCL decoding can outperform their state-of-the-art soft-decision decoding with fewer finite field arithmetic operations. For length-16 eRS codes, their maximum-likelihood (ML) decoding performances can be approached with a moderate list size.
+ oai:arXiv.org:2511.11256v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jingyu Lin, Li Chen, Xiaoqian Ye
+
+
+ Forgetting-MarI: LLM Unlearning via Marginal Information Regularization
+ https://arxiv.org/abs/2511.11914
+ arXiv:2511.11914v3 Announce Type: replace
+Abstract: As AI models are trained on ever-expanding datasets, the ability to remove the influence of specific data from trained models has become essential for privacy protection and regulatory compliance. Unlearning addresses this challenge by selectively removing parametric knowledge from the trained models without retraining from scratch, which is critical for resource-intensive models such as Large Language Models (LLMs). Existing unlearning methods often degrade model performance by removing more information than necessary when attempting to ''forget'' specific data. We introduce Forgetting-MarI, an LLM unlearning framework that provably removes only the additional (marginal) information contributed by the data to be unlearned, while preserving the information supported by the data to be retained. By penalizing marginal information, our method yields an explicit upper bound on the unlearn dataset's residual influence in the trained models, providing provable undetectability. Extensive experiments confirm that our approach outperforms current state-of-the-art unlearning methods, delivering reliable forgetting and better preserved general model performance across diverse benchmarks. This advancement represents an important step toward making AI systems more controllable and compliant with privacy and copyright regulations without compromising their effectiveness.
+ oai:arXiv.org:2511.11914v3
+ cs.AI
+ cs.CL
+ cs.CR
+ cs.IT
+ cs.LG
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shizhou Xu, Yuan Ni, Stefan Broecker, Thomas Strohmer
+
+
+ Towards Temporal Fusion Beyond the Field of View for Camera-based Semantic Scene Completion
+ https://arxiv.org/abs/2511.12498
+ arXiv:2511.12498v2 Announce Type: replace
+Abstract: Recent camera-based 3D semantic scene completion (SSC) methods have increasingly explored leveraging temporal cues to enrich the features of the current frame. However, while these approaches primarily focus on enhancing in-frame regions, they often struggle to reconstruct critical out-of-frame areas near the sides of the ego-vehicle, although previous frames commonly contain valuable contextual information about these unseen regions. To address this limitation, we propose the Current-Centric Contextual 3D Fusion (C3DFusion) module, which generates hidden region-aware 3D feature geometry by explicitly aligning 3D-lifted point features from both current and historical frames. C3DFusion performs enhanced temporal fusion through two complementary techniques--historical context blurring and current-centric feature densification--which suppress noise from inaccurately warped historical point features by attenuating their scale, and enhance current point features by increasing their volumetric contribution. Simply integrated into standard SSC architectures, C3DFusion demonstrates strong effectiveness, significantly outperforming state-of-the-art methods on the SemanticKITTI and SSCBench-KITTI-360 datasets. Furthermore, it exhibits robust generalization, achieving notable performance gains when applied to other baseline models.
+ oai:arXiv.org:2511.12498v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jongseong Bae, Junwoo Ha, Jinnyeong Heo, Yeongin Lee, Ha Young Kim
+
+
+ Towards Reinforcement Learning from Neural Feedback: Mapping fNIRS Signals to Agent Performance
+ https://arxiv.org/abs/2511.12844
+ arXiv:2511.12844v2 Announce Type: replace
+Abstract: Reinforcement Learning from Human Feedback (RLHF) is a methodology that aligns agent behavior with human preferences by integrating user feedback into the agent's training process. This paper introduces a framework that guides agent training through implicit neural signals, with a focus on the neural classification problem. Our work presents and releases a novel dataset of functional near-infrared spectroscopy (fNIRS) recordings collected from 25 human participants across three domains: Pick-and-Place Robot, Lunar Lander, and Flappy Bird. We train multiple classifiers to predict varying levels of agent performance (optimal, suboptimal, or worst-case) from windows of preprocessed fNIRS features, achieving an average F1 score of 67% for binary and 46% for multi-class classification across conditions and domains. We also train multiple regressors to predict the degree of deviation between an agent's chosen action and a set of near-optimal policy actions, providing a continuous measure of performance. Finally, we evaluate cross-subject generalization and show that fine-tuning pre-trained models with a small sample of subject-specific data increases average F1 scores by 17% and 41% for binary and multi-class models, respectively. Our results demonstrate that mapping implicit fNIRS signals to agent performance is feasible and can be improved, laying the foundation for future Reinforcement Learning from Neural Feedback (RLNF) systems.
+ oai:arXiv.org:2511.12844v2
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Julia Santaniello, Matthew Russell, Benson Jiang, Donatello Sassaroli, Robert Jacob, Jivko Sinapov
+
+
+ Observational Auditing of Label Privacy
+ https://arxiv.org/abs/2511.14084
+ arXiv:2511.14084v3 Announce Type: replace
+Abstract: Differential privacy (DP) auditing is essential for evaluating privacy guarantees in machine learning systems. Existing auditing methods, however, pose a significant challenge for large-scale systems since they require modifying the training dataset -- for instance, by injecting out-of-distribution canaries or removing samples from training. Such interventions on the training data pipeline are resource-intensive and involve considerable engineering overhead. We introduce a novel observational auditing framework that leverages the inherent randomness of data distributions, enabling privacy evaluation without altering the original dataset. Our approach extends privacy auditing beyond traditional membership inference to protected attributes, with labels as a special case, addressing a key gap in existing techniques. We provide theoretical foundations for our method and perform experiments on Criteo and CIFAR-10 datasets that demonstrate its effectiveness in auditing label privacy guarantees. This work opens new avenues for practical privacy auditing in large-scale production environments.
+ oai:arXiv.org:2511.14084v3
+ cs.LG
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Iden Kalemaj, Luca Melis, Maxime Boucher, Ilya Mironov, Saeed Mahloujifar
+
+
+ ManipShield: A Unified Framework for Image Manipulation Detection, Localization and Explanation
+ https://arxiv.org/abs/2511.14259
+ arXiv:2511.14259v3 Announce Type: replace
+Abstract: With the rapid advancement of generative models, powerful image editing methods now enable diverse and highly realistic image manipulations that far surpass traditional deepfake techniques, posing new challenges for manipulation detection. Existing image manipulation detection and localization (IMDL) benchmarks suffer from limited content diversity, narrow generative-model coverage, and insufficient interpretability, which hinders the generalization and explanation capabilities of current manipulation detection methods. To address these limitations, we introduce \textbf{ManipBench}, a large-scale benchmark for image manipulation detection and localization focusing on AI-edited images. ManipBench contains over 450K manipulated images produced by 25 state-of-the-art image editing models across 12 manipulation categories, among which 100K images are further annotated with bounding boxes, judgment cues, and textual explanations to support interpretable detection. Building upon ManipBench, we propose \textbf{ManipShield}, an all-in-one model based on a Multimodal Large Language Model (MLLM) that leverages contrastive LoRA fine-tuning and task-specific decoders to achieve unified image manipulation detection, localization, and explanation. Extensive experiments on ManipBench and several public datasets demonstrate that ManipShield achieves state-of-the-art performance and exhibits strong generality to unseen manipulation models. Both ManipBench and ManipShield will be released upon publication.
+ oai:arXiv.org:2511.14259v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zitong Xu, Huiyu Duan, Xiaoyu Wang, Zhaolin Cai, Kaiwei Zhang, Qiang Hu, Jing Liu, Xiongkuo Min, Guangtao Zhai
+
+
+ Empirical Quantum Advantage in Constrained Optimization from Encoded Unitary Designs
+ https://arxiv.org/abs/2511.14296
+ arXiv:2511.14296v3 Announce Type: replace
+Abstract: We introduce the Constraint-Enhanced Quantum Approximate Optimization Algorithm (CE-QAOA), a shallow, constraint-aware ansatz that operates inside the one-hot product space [n]^m, where m is the number of blocks and each block is initialized in an n-qubit W_n state. We give an ancilla-free, depth-optimal encoder that prepares W_n using n-1 two-qubit rotations per block, and a two-local block-XY mixer that preserves the one-hot manifold and has a constant spectral gap on the one-excitation sector. At the level of expressivity, we establish per-block controllability, implying approximate universality per block. At the level of distributional behavior, we show that, after natural block and symbol permutation twirls, shallow CE-QAOA realizes an encoded unitary 1-design and supports approximate second-moment (2-design) behavior; combined with a Paley-Zygmund argument, this yields finite-shot anticoncentration guarantees.
+ Algorithmically, we wrap constant-depth sampling with a deterministic feasibility checker to obtain a polynomial-time hybrid quantum-classical solver (PHQC) that returns the best observed feasible solution in O(S n^2) time, where S is a polynomial shot budget. We obtain two advantages. First, when CE-QAOA fixes r >= 1 locations different from the start city, we achieve a Theta(n^r) reduction in shot complexity even against a classical sampler that draws uniformly from the feasible set. Second, against a classical baseline restricted to raw bitstring sampling, we show an exp(Theta(n^2)) minimax separation. In noiseless circuit simulations of traveling salesman problem instances with n in {4,...,10} locations from the QOPTLib benchmark library, we recover the global optimum at depth p = 1 using polynomial shot budgets and coarse parameter grids defined by the problem size.
+ oai:arXiv.org:2511.14296v3
+ cs.ET
+ cs.DM
+ math-ph
+ math.MP
+ physics.app-ph
+ quant-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Chinonso Onah, Roman Firt, Kristel Michielsen
+
+
+ Biased Minds Meet Biased AI: How Class Imbalance Shapes Appropriate Reliance and Interacts with Human Base Rate Neglect
+ https://arxiv.org/abs/2511.14591
+ arXiv:2511.14591v2 Announce Type: replace
+Abstract: Humans increasingly interact with artificial intelligence (AI) in decision-making. However, both AI and humans are prone to biases. While AI and human biases have been studied extensively in isolation, this paper examines their complex interaction. Specifically, we examined how class imbalance as an AI bias affects people's ability to appropriately rely on an AI-based decision-support system, and how it interacts with base rate neglect as a human bias. In a within-subject online study (N = 46), participants classified three diseases using an AI-based decision-support system trained on either a balanced or unbalanced dataset. We found that class imbalance disrupted participants' calibration of AI reliance. Moreover, we observed mutually reinforcing effects between class imbalance and base rate neglect, offering evidence of a compound human-AI bias. Based on these findings, we advocate for an interactionist perspective and further research into the mutually reinforcing effects of biases in human-AI interaction.
+ oai:arXiv.org:2511.14591v2
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nick von Felten, Johannes Sch\"oning, Klaus Opwis, Nicolas Scharowski
+
+
+ Human or LLM as Standardized Patients? A Comparative Study for Medical Education
+ https://arxiv.org/abs/2511.14783
+ arXiv:2511.14783v2 Announce Type: replace
+Abstract: Standardized patients (SPs) are indispensable for clinical skills training but remain expensive and difficult to scale. Although large language model (LLM)-based virtual standardized patients (VSPs) have been proposed as an alternative, their behavior remains unstable and lacks rigorous comparison with human standardized patients. We propose EasyMED, a multi-agent VSP framework that separates case-grounded information disclosure from response generation to support stable, inquiry-conditioned patient behavior. We also introduce SPBench, a human-grounded benchmark with eight expert-defined criteria for interaction-level evaluation. Experiments show that EasyMED more closely matches human SP behavior than existing VSPs, particularly in case consistency and controlled disclosure. A four-week controlled study further demonstrates learning outcomes comparable to human SP training, with stronger early gains for novice learners and improved flexibility, psychological safety, and cost efficiency.
+ oai:arXiv.org:2511.14783v2
+ cs.CL
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Bingquan Zhang, Xiaoxiao Liu, Yuchi Wang, Lei Zhou, Qianqian Xie, Benyou Wang
+
+
+ Mathematical Framework for Custom Reward Functions in Job Application Evaluation using Reinforcement Learning
+ https://arxiv.org/abs/2511.16073
+ arXiv:2511.16073v2 Announce Type: replace
+Abstract: Most traditional Applicant Tracking Systems (ATS) depend on strict keyword matching, so highly qualified candidates are often disqualified because of minor semantic differences. This article introduces a two-stage process for developing a more comprehensive resume assessment system based on a small language model with fewer than 600M parameters, fine-tuned using GRPO with a uniquely designed reward function. The initial stage is Supervised Fine-Tuning (SFT), which is used to create a strong base model with the ability to perceive resumes beyond superficial keyword overlap. This SFT model is further optimized in the second stage with Reinforcement Learning (RL) via GRPO, using a multi-component reward that goes beyond simple token matching. In the initial RL experiments, we encountered a severe difficulty in the form of reward hacking: overly aggressive penalty terms resulted in unstable training dynamics and prohibitively negative model behavior. This was solved by iterative refinement of the reward and careful tuning of the training hyperparameters, which led to a stable and controlled process of gentle polishing. The GRPO-refined model shows strong real-world performance, achieving an accuracy of 91% on unseen test data. It has a high recall of 0.85 on the SELECTED class with a perfect precision of 1.0, which highlights its reliability for identifying qualified applicants. These findings demonstrate that an appropriately structured two-step fine-tuning pipeline can effectively turn a small language model into a human-like candidate evaluator, overcoming the shortcomings of both traditional ATS systems and unrefined uses of reinforcement learning.
+ oai:arXiv.org:2511.16073v2
+ cs.LG
+ cs.AI
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/ICCCA66364.2025.11325393
+ ICCCA 2025, pp. 1-6
+ Shreyansh Jain, Madhav Singhvi, Shreya Rahul Jain, Pranav S, Dishaa Lokesh, Naren Chittibabu, Akash Anandhan
+
+
+ WER is Unaware: Assessing How ASR Errors Distort Clinical Understanding in Patient Facing Dialogue
+ https://arxiv.org/abs/2511.16544
+ arXiv:2511.16544v3 Announce Type: replace
+Abstract: As Automatic Speech Recognition (ASR) is increasingly deployed in clinical dialogue, standard evaluations still rely heavily on Word Error Rate (WER). This paper challenges that standard, investigating whether WER or other common metrics correlate with the clinical impact of transcription errors. We establish a gold-standard benchmark by having expert clinicians compare ground-truth utterances to their ASR-generated counterparts, labeling the clinical impact of any discrepancies found in two distinct doctor-patient dialogue datasets. Our analysis reveals that WER and a comprehensive suite of existing metrics correlate poorly with the clinician-assigned risk labels (No, Minimal, or Significant Impact). To bridge this evaluation gap, we introduce an LLM-as-a-Judge, programmatically optimized using GEPA through DSPy to replicate expert clinical assessment. The optimized judge (Gemini-2.5-Pro) achieves human-comparable performance, obtaining 90% accuracy and a strong Cohen's kappa of 0.816. This work provides a validated, automated framework for moving ASR evaluation beyond simple textual fidelity to a necessary, scalable assessment of safety in clinical dialogue.
+ oai:arXiv.org:2511.16544v3
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zachary Ellis, Jared Joselowitz, Yash Deo, Yajie He, Anna Kalygina, Aisling Higham, Mana Rahimzadeh, Yan Jia, Ibrahim Habli, Ernest Lim
+
+
+ LinkML: An Open Data Modeling Framework
+ https://arxiv.org/abs/2511.16935
+ arXiv:2511.16935v3 Announce Type: replace
+Abstract: Scientific research relies on well-structured, standardized data; however, much of it is stored in formats such as free-text lab notebooks, non-standardized spreadsheets, or data repositories. This lack of structure challenges interoperability, making data integration, validation, and reuse difficult. LinkML (Linked Data Modeling Language) is an open framework that simplifies the process of authoring, validating, and sharing data. LinkML can describe a range of data structures, from flat, list-based models to complex, interrelated, and normalized models that utilize polymorphism and compound inheritance. It offers an approachable syntax that is not tied to any one technical architecture and can be integrated seamlessly with many existing frameworks. The LinkML syntax provides a standard way to describe schemas, classes, and relationships, allowing modelers to build well-defined, stable, and optionally ontology-aligned data structures. Once defined, LinkML schemas may be imported into other LinkML schemas. These key features make LinkML an accessible platform for interdisciplinary collaboration and a reliable way to define and share data semantics.
+ LinkML helps reduce heterogeneity, complexity, and the proliferation of single-use data models while simultaneously enabling compliance with FAIR data standards. LinkML has seen increasing adoption in various fields, including biology, chemistry, biomedicine, microbiome research, finance, electrical engineering, transportation, and commercial software development. In short, LinkML makes implicit models explicitly computable and allows data to be standardized at its origin. LinkML documentation and code are available at linkml.io.
+ oai:arXiv.org:2511.16935v3
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1093/gigascience/giaf152
+ Gigascience. Oxford University Press (OUP); 2025 Dec 12;(giaf152):giaf152
+ Sierra A. T. Moxon, Harold Solbrig, Nomi L. Harris, Patrick Kalita, Mark A. Miller, Sujay Patil, Kevin Schaper, Chris Bizon, J. Harry Caufield, Silvano Cirujano Cuesta, Corey Cox, Frank Dekervel, Damion M. Dooley, William D. Duncan, Tim Fliss, Sarah Gehrke, Adam S. L. Graefe, Harshad Hegde, AJ Ireland, Julius O. B. Jacobsen, Madan Krishnamurthy, Carlo Kroll, David Linke, Ryan Ly, Nicolas Matentzoglu, James A. Overton, Jonny L. Saunders, Deepak R. Unni, Gaurav Vaidya, Wouter-Michiel A. M. Vierdag, LinkML Community Contributors, Oliver Ruebel, Christopher G. Chute, Matthew H. Brush, Melissa A. Haendel, Christopher J. Mungall
+
+
+ Geometric-disentanglement Unlearning
+ https://arxiv.org/abs/2511.17100
+ arXiv:2511.17100v2 Announce Type: replace
+Abstract: Machine unlearning, the removal of a training subset's influence from a deployed model, is critical for privacy preservation and model reliability, yet gradient ascent on forget samples often harms retained knowledge. Existing approaches face a persistent tradeoff between effective forgetting and preservation on the retain set. While previous methods provide useful heuristics, they often lack a formal analysis on how exactly forgetting updates harm retained knowledge, and whether the side effects can be removed with theoretical guarantees. To explore a theoretically sound and simple solution, we start from the first principle on how performance on the retain set is actually affected: a first-order analysis of the local change of the retain loss under small parameter updates during model training. We start from a crisp equivalence: the retain loss is unchanged to first order iff the update direction is orthogonal to the subspace spanned by retain gradients ("retain-invariant"). This identifies the entangled component as the tangential part of forget update within the retain-gradient subspace, and characterizes disentanglement as orthogonality. Guided by this, we propose the Geometric-disentanglement Unlearning (GU) that decomposes any candidate forget gradient update into tangential and normal components to retain space and executes only the normal component. Under a standard trust-region budget, the projected direction aligned with the raw forget gradient is optimal among all first-order retain-invariant moves, and we also derive the optimal projected direction for joint forget-retain updating objectives. Our method is plug-and-play and can be attached to existing gradient-based unlearning procedures to mitigate side effects. GU achieves consistent improvement on various methods across three benchmarks TOFU, MUSE, and WMDP.
+ oai:arXiv.org:2511.17100v2
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Duo Zhou, Yuji Zhang, Tianxin Wei, Ruizhong Qiu, Ke Yang, Xiao Lin, Cheng Qian, Jingrui He, Hanghang Tong, Heng Ji, Huan Zhang
+
+
+ Principled Context Engineering for RAG: Statistical Guarantees via Conformal Prediction
+ https://arxiv.org/abs/2511.17908
+ arXiv:2511.17908v2 Announce Type: replace
+Abstract: Retrieval-Augmented Generation (RAG) enhances factual grounding in large language models (LLMs) by incorporating retrieved evidence, but LLM accuracy declines when long or noisy contexts exceed the model's effective attention span. Existing pre-generation filters rely on heuristics or uncalibrated LLM confidence scores, offering no statistical control over retained evidence. We evaluate and demonstrate context engineering through conformal prediction, a coverage-controlled filtering framework that removes irrelevant content while preserving recall of supporting evidence. Using both embedding- and LLM-based scoring functions, we test this approach on the NeuCLIR and RAGTIME collections. Conformal filtering consistently meets its target coverage, ensuring that a specified fraction of relevant snippets are retained, and reduces retained context by 2-3x relative to unfiltered retrieval. On NeuCLIR, downstream factual accuracy measured by ARGUE F1 improves under strict filtering and remains stable at moderate coverage, indicating that most discarded material is redundant or irrelevant. These results demonstrate that conformal prediction enables reliable, coverage-controlled context reduction in RAG, offering a model-agnostic and principled approach to context engineering.
+ oai:arXiv.org:2511.17908v2
+ cs.CL
+ cs.AI
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Debashish Chakraborty, Eugene Yang, Daniel Khashabi, Dawn Lawrie, Kevin Duh
+
+
+ Function-Correcting Codes With Data Protection
+ https://arxiv.org/abs/2511.18420
+ arXiv:2511.18420v2 Announce Type: replace
+Abstract: Function-correcting codes (FCCs) are designed to provide error protection for the value of a function computed on the data. Existing work typically focuses solely on protecting the function value and not the underlying data. In this work, we propose a general framework that offers protection for both the data and the function values. Since protecting the data inherently contributes to protecting the function value, we focus on scenarios where the function value requires stronger protection than the data itself. We first introduce a more general approach and a framework for function-correcting codes that incorporates data protection along with protection of function values. A two-step construction procedure for such codes is proposed, and bounds on the optimal redundancy of general FCCs with data protection are reported. Using these results, we exhibit examples that show that data protection can be added to existing FCCs without increasing redundancy. Using our two-step construction procedure, we present explicit constructions of FCCs with data protection for specific families of functions, such as locally bounded functions and the Hamming weight function. We associate a graph called minimum-distance graph to a code and use it to show that perfect codes and maximum distance separable (MDS) codes cannot provide additional protection to function values over and above the amount of protection for data for any function. Then we focus on linear FCCs and provide some results for linear functions, leveraging their inherent structural properties. To the best of our knowledge, this is the first instance of FCCs with a linear structure. Finally, we generalize the Plotkin and Hamming bounds well known in classical error-correcting coding theory to FCCs with data protection.
+ oai:arXiv.org:2511.18420v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Charul Rajput, B. Sundar Rajan, Ragnar Freij-Hollanti, Camilla Hollanti
+
+
+ Disc3D: Automatic Curation of High-Quality 3D Dialog Data via Discriminative Object Referring
+ https://arxiv.org/abs/2511.18817
+ arXiv:2511.18817v3 Announce Type: replace
+Abstract: 3D Multi-modal Large Language Models (MLLMs) still lag behind their 2D peers, largely because large-scale, high-quality 3D scene-dialogue datasets remain scarce. Prior efforts hinge on expensive human annotation and leave two key ambiguities unresolved: viewpoint ambiguity, where spatial language presumes unknown camera poses, and object referring ambiguity, where non-exclusive descriptions blur the line between targets and distractors. We therefore present a fully automated pipeline that converts raw 3D scans into unambiguous, high-quality dialogue data at a fraction of the previous cost. By synergizing rule-based constraints with 2D MLLMs and LLMs, the pipeline enables controllable, scalable generation without human intervention. The pipeline comprises four stages: (1) meta-annotation collection harvesting object-, frame-, and scene-level captions, (2) scene graph construction with relation correction to capture proximal object relations, (3) discriminative object referring that generates exclusive and compact descriptions, and (4) multi-task data generation synthesizing diverse dialogues. Our pipeline systematically mitigates inherent flaws in source datasets and produces the final Disc3D dataset, over 2 million samples in 25K hybrid 3D scenes, spanning scene, view, and object captioning, visual grounding, and five object-centric QA tasks. Extensive experiments demonstrate that training with Disc3D yields consistent, significant improvements on both public benchmarks and our multifaceted Disc3D-QA tasks. Code, data, and models will be publicly available.
+ oai:arXiv.org:2511.18817v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Siyuan Wei, Chunjie Wang, Xiao Liu, Xiaosheng Yan, Zhishan Zhou, Rui Huang
+
+
+ Representational and Behavioral Stability of Truth in Large Language Models
+ https://arxiv.org/abs/2511.19166
+ arXiv:2511.19166v3 Announce Type: replace
+Abstract: Large language models (LLMs) are increasingly used as information sources, yet small changes in semantic framing can destabilize their truth judgments. We propose P-StaT (Perturbation Stability of Truth), an evaluation framework for testing belief stability under controlled semantic perturbations in representational and behavioral settings via probing and zero-shot prompting. Across sixteen open-source LLMs and three domains, we compare perturbations involving epistemically familiar Neither statements drawn from well-known fictional contexts (Fictional) to those involving unfamiliar Neither statements not seen in training data (Synthetic). We find a consistent stability hierarchy: Synthetic content aligns closely with factual representations and induces the largest retractions of previously held beliefs, producing up to $32.7\%$ retractions in representational evaluations and up to $36.3\%$ in behavioral evaluations. By contrast, Fictional content is more representationally distinct and comparatively stable. Together, these results suggest that epistemic familiarity is a robust signal across instantiations of belief stability under semantic reframing, complementing accuracy-based factuality evaluation with a notion of epistemic robustness.
+ oai:arXiv.org:2511.19166v3
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Samantha Dies, Courtney Maynard, Germans Savcisens, Tina Eliassi-Rad
+
+
+ Vidi2.5: Large Multimodal Models for Video Understanding and Creation
+ https://arxiv.org/abs/2511.19529
+ arXiv:2511.19529v2 Announce Type: replace
+Abstract: Video has emerged as the primary medium for communication and creativity on the Internet, driving strong demand for scalable, high-quality video production. Vidi models continue to evolve toward next-generation video creation and have achieved state-of-the-art performance in multimodal temporal retrieval (TR). In its second release, Vidi2 advances video understanding with fine-grained spatio-temporal grounding (STG) and extends its capability to video question answering (Video QA), enabling comprehensive multimodal reasoning. Given a text query, Vidi2 can identify not only the corresponding timestamps but also the bounding boxes of target objects within the output time ranges. To enable comprehensive evaluation of STG, we introduce a new benchmark, VUE-STG, which offers critical improvements over existing STG datasets. In addition, we upgrade the previous VUE-TR benchmark to VUE-TR-V2, achieving a more balanced duration and query distribution. Remarkably, the Vidi2 model substantially outperforms leading proprietary systems, such as Gemini 3 Pro Preview and GPT-5, on both VUE-TR-V2 and VUE-STG, while achieving competitive results with popular open-source models with similar scale on video QA benchmarks. The latest Vidi2.5 offers significantly stronger STG capability and slightly better TR and Video QA performance over Vidi2. This update also introduces a Vidi2.5-Think model to handle plot understanding with complex plot reasoning. To comprehensively evaluate the performance of plot understanding, we propose VUE-PLOT benchmark with two tracks, Character and Reasoning. Notably, Vidi2.5-Think outperforms Gemini 3 Pro Preview on fine-grained character understanding with comparable performance on complex plot reasoning. Furthermore, we demonstrate the effectiveness of Vidi2.5 on a challenging real-world application, video editing planning.
+ oai:arXiv.org:2511.19529v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Vidi Team, Chia-Wen Kuo, Chuang Huang, Dawei Du, Fan Chen, Fanding Lei, Feng Gao, Guang Chen, Haoji Zhang, Haojun Zhao, Jin Liu, Jingjing Zhuge, Lili Fang, Lingxi Zhang, Longyin Wen, Lu Guo, Lu Xu, Lusha Li, Qihang Fan, Rachel Deng, Shaobo Fang, Shu Zhang, Sijie Zhu, Stuart Siew, Weiyan Tao, Wen Zhong, Xiaohui Shen, Xin Gu, Ye Yuan, Yicheng He, Yiming Cui, Zhenfang Chen, Zhihua Wu, Zuhua Lin
+
+
+ A Surrogate-Informed Framework for Sparse Grid Interpolation
+ https://arxiv.org/abs/2511.20187
+ arXiv:2511.20187v2 Announce Type: replace
+Abstract: Approximating complex, high-dimensional, and computationally expensive functions is a central problem in science and engineering. Standard sparse grids offer a powerful solution by mitigating the curse of dimensionality compared to full tensor grids. However, they treat all regions of the domain isotropically, which may not be efficient for functions with localized or anisotropic behavior. This work presents a surrogate-informed framework for constructing sparse grid interpolants, which is guided by an error indicator that serves as a zero-cost estimate for the hierarchical surplus. This indicator is calculated for all candidate points, defined as those in the next-level grid $w+1$ not already present in the base grid $w$. It quantifies the local approximation error by measuring the relative difference between the predictions of two consecutive interpolants of level $w$ and $w-1$. The candidates are then ranked by this metric to select the most impactful points for refinement up to a given budget or following another criterion, e.g., a given threshold on the error indicator. The final higher-order model is then constructed using a surrogate-informed approach: the objective function is evaluated only at the selected high-priority points, while for the remaining nodes of the $w+1$ grid, we assign the values predicted by the initial $w$-level surrogate. This strategy significantly reduces the required number of expensive evaluations, yielding a final model that closely approximates the accuracy of a fully-resolved $w+1$ grid at a fraction of the computational cost. The accuracy and efficiency of the proposed surrogate-informed refinement criterion are demonstrated for several analytic functions and for a real engineering problem, i.e., the analysis of sensitivity to geometrical parameters of the numerically predicted flashback phenomenon in hydrogen-fueled perforated burners.
+ oai:arXiv.org:2511.20187v2
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Matteo Rosellini, Filippo Fruzza, Alessandro Mariotti, Maria Vittoria Salvetti, Lorenzo Tamellini
+
+
+ Active Inference in Discrete State Spaces from First Principles
+ https://arxiv.org/abs/2511.20321
+ arXiv:2511.20321v3 Announce Type: replace
+Abstract: We seek to clarify the concept of active inference by disentangling it from the Free Energy Principle. We show how the optimizations that need to be carried out in order to implement active inference in discrete state spaces can be formulated as constrained divergence minimization problems which can be solved by standard mean field methods that do not appeal to the idea of expected free energy. When it is used to model perception, the perception/action divergence criterion that we propose coincides with variational free energy. When it is used to model action, it differs from an expected free energy functional by an entropy regularizer.
+ oai:arXiv.org:2511.20321v3
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Patrick Kenny
+
+
+ Who Owns the Knowledge? Copyright, GenAI, and the Future of Academic Publishing
+ https://arxiv.org/abs/2511.21755
+ arXiv:2511.21755v2 Announce Type: replace
+Abstract: The integration of generative artificial intelligence (GenAI) and large language models (LLMs) into scientific research and higher education presents a paradigm shift, offering revolutionary opportunities while simultaneously raising profound ethical, legal, and regulatory questions. This study examines the complex intersection of AI and science, with a specific focus on the challenges posed to copyright law and the principles of open science. The author argues that current regulatory frameworks in key jurisdictions like the United States, China, the European Union, and the United Kingdom, while aiming to foster innovation, contain significant gaps, particularly concerning the use of copyrighted works and open science outputs for AI training. Widely adopted licensing mechanisms, such as Creative Commons, fail to adequately address the nuances of AI training, and the pervasive lack of attribution within AI systems fundamentally challenges established notions of originality. While current doctrine treats AI training as potentially fair use, this paper argues that such treatment is inadequate and that copyright holders should retain explicit opt-out rights regardless of fair use doctrine. Instead, the author advocates for upholding authors' rights to refuse the use of their works for AI training and proposes that universities assume a leading role in shaping responsible AI governance. The conclusion is that a harmonized international legislative effort is urgently needed to ensure transparency, protect intellectual property, and prevent the emergence of an oligopolistic market structure that could prioritize commercial profit over scientific integrity and equitable knowledge production. This is a substantially expanded and revised version of a work originally presented at the 20th International Conference on Scientometrics & Informetrics (Kochetkov, 2025).
+ oai:arXiv.org:2511.21755v2
+ cs.DL
+ cs.AI
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Dmitry Kochetkov
+
+
+ The Battle of the Water Futures
+ https://arxiv.org/abs/2511.22986
+ arXiv:2511.22986v4 Announce Type: replace
+Abstract: The highly anticipated 'Battle of the Water Networks' is back with a new challenge for the water community. This competition will be hosted at the 4th International Joint Conference on Water Distribution Systems Analysis and Computing and Control in the Water Industry (WDSA/CCWI 2026), taking place in Paphos, Cyprus, from May 18-21, 2026. This competition embodies the core mission of Water-Futures and the theme for WDSA/CCWI 2026: "Designing the next generation of urban water (and wastewater) systems."
+ The objective is to design and operate a water distribution system over a long-term horizon under deep uncertainty, with interventions applied in stages. For the first time, this challenge features a staged-design approach, unobservable and unknown uncertainties, and incorporates elements of policymaking and artificial intelligence. The solutions will be assessed using a transparent and inspectable open-source evaluation framework.
+ oai:arXiv.org:2511.22986v4
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Dennis Zanutto, Christos Michalopoulos, Lydia Tsiami, Andr\'e Artelt, Jasmin Brandt, Demetrios Eliades, Stelios Vrachimis, Stefano Alvisi, Valentina Marsili, Filippo Mazzoni, Panagiotis Smartzis, Barbara Hammer, Phoebe Koundouri, Marios Polycarpou, Dragan Savi\'c
+
+
+ Mind the GAPS: Bridging the GAPS between Targeted Dynamic Analysis and Static Path Reconstruction in Android Apps
+ https://arxiv.org/abs/2511.23213
+ arXiv:2511.23213v2 Announce Type: replace
+Abstract: Dynamically executing specific target methods in Android applications remains a critical and unresolved challenge. Despite notable advancements in GUI testing, current tools are insufficient for reliably driving execution toward specific target methods.
+ To address this challenge, we present GAPS (Graph-based Automated Path Synthesizer), the first system that leverages static, method-guided call graph reconstruction to guide the dynamic, interaction-driven execution of an Android app. GAPS performs a lightweight backward traversal of the call graph, guided by data-flow analysis, to reconstruct paths reaching the target methods. These paths are then translated into instructions that guide runtime app exploration.
+ On the AndroTest benchmark, GAPS statically identifies paths towards 88.24% of the target methods, averaging just 4.27 seconds per app, and reaches 57.44% of them through dynamic analysis. This performance exceeds that of state-of-the-art tools: the model-based GUI tester APE reaches only 12.82%, the hybrid tool GoalExplorer reaches 9.69%, and the LLM-based Guardian reaches 17.12%. Finally, we applied GAPS to the 50 most downloaded apps from the Google Play Store, achieving an average static analysis time of 278.9 seconds to reconstruct paths towards 62.03% of the target methods and reaching 59.86% of them through dynamic analysis.
+ oai:arXiv.org:2511.23213v2
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Samuele Doria, Eleonora Losiouk
+
+
+ A Unified Architecture for N-Dimensional Visualization and Simulation: 4D Implementation and Evaluation including Boolean Operations
+ https://arxiv.org/abs/2512.01501
+ arXiv:2512.01501v3 Announce Type: replace
+Abstract: This paper proposes a unified software architecture for visualization and simulation based on a design targeting an $N$-dimensional space. The contribution of this work lies in presenting an architectural configuration that integrates multiple processes into a single software architecture: Quickhull-based convex hull mesh generation, Boolean operations, coordinate transformations for high-dimensional exploration (including orientation and view transformations), and hyperplane slicing for visualization. The proposed approach adopts an approximate implementation that tolerates numerical errors and prioritizes implementation transparency over guarantees of numerical rigor. The experimental results and evaluations presented in this paper are limited to a 4D implementation; no evaluation is conducted for $N>4$, and the discussion is restricted to stating that the architecture itself has a dimension-independent structure. This paper also proposes an interaction design for high-dimensional exploration based on FPS navigation. As an input example involving shape changes over time, a non-rigid body simulation based on XPBD (Extended Position Based Dynamics) is integrated into the 4D implementation. Experimental results confirm that the 4D implementation runs on a single PC.
+ oai:arXiv.org:2512.01501v3
+ cs.CG
+ cs.GR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hirohito Arai
+
+
+ Global AI Governance Overview: Understanding Regulatory Requirements Across Global Jurisdictions
+ https://arxiv.org/abs/2512.02046
+ arXiv:2512.02046v2 Announce Type: replace
+Abstract: The rapid advancement of general-purpose AI models has increased concerns about copyright infringement in training data, yet current regulatory frameworks remain predominantly reactive rather than proactive. This paper examines the regulatory landscape of AI training data governance in major jurisdictions, including the EU, the United States, and the Asia-Pacific region. It also identifies critical gaps in enforcement mechanisms that threaten both creator rights and the sustainability of AI development. Through analysis of major cases we identified critical gaps in pre-training data filtering. Existing solutions such as transparency tools, perceptual hashing, and access control mechanisms address only specific aspects of the problem and cannot prevent initial copyright violations. We identify two fundamental challenges: pre-training license collection and content filtering, which faces the impossibility of comprehensive copyright management at scale, and verification mechanisms, which lack tools to confirm filtering prevented infringement. We propose a multilayered filtering pipeline that combines access control, content verification, machine learning classifiers, and continuous database cross-referencing to shift copyright protection from post-training detection to pre-training prevention. This approach offers a pathway toward protecting creator rights while enabling continued AI innovation.
+ oai:arXiv.org:2512.02046v2
+ cs.CY
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mariia Kyrychenko, Mykyta Mudryi, Markiyan Chaklosh
+
+
+ Copyright in AI Pre-Training Data Filtering: Regulatory Landscape and Mitigation Strategies
+ https://arxiv.org/abs/2512.02047
+ arXiv:2512.02047v2 Announce Type: replace
+Abstract: The rapid advancement of general-purpose AI models has increased concerns about copyright infringement in training data, yet current regulatory frameworks remain predominantly reactive rather than proactive. This paper examines the regulatory landscape of AI training data governance in major jurisdictions, including the EU, the United States, and the Asia-Pacific region. It also identifies critical gaps in enforcement mechanisms that threaten both creator rights and the sustainability of AI development. Through analysis of major cases we identified critical gaps in pre-training data filtering. Existing solutions such as transparency tools, perceptual hashing, and access control mechanisms address only specific aspects of the problem and cannot prevent initial copyright violations. We identify two fundamental challenges: pre-training license collection and content filtering, which faces the impossibility of comprehensive copyright management at scale, and verification mechanisms, which lack tools to confirm filtering prevented infringement. We propose a multilayered filtering pipeline that combines access control, content verification, machine learning classifiers, and continuous database cross-referencing to shift copyright protection from post-training detection to pre-training prevention. This approach offers a pathway toward protecting creator rights while enabling continued AI innovation.
+ oai:arXiv.org:2512.02047v2
+ cs.CY
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mariia Kyrychenko, Mykyta Mudryi, Markiyan Chaklosh
+
+
+ Swivuriso: The South African Next Voices Multilingual Speech Dataset
+ https://arxiv.org/abs/2512.02201
+ arXiv:2512.02201v2 Announce Type: replace
+Abstract: This paper introduces Swivuriso, a 3000-hour multilingual speech dataset developed as part of the African Next Voices project, to support the development and benchmarking of automatic speech recognition (ASR) technologies in seven South African languages. Covering agriculture, healthcare, and general domain topics, Swivuriso addresses significant gaps in existing ASR datasets. We describe the design principles, ethical considerations, and data collection procedures that guided the dataset creation. We present baseline results of training/finetuning ASR models with this data and compare them with other ASR datasets for the languages concerned.
+ oai:arXiv.org:2512.02201v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Vukosi Marivate, Kayode Olaleye, Sitwala Mundia, Andinda Bakainga, Unarine Netshifhefhe, Mahmooda Milanzie, Tsholofelo Hope Mogale, Thapelo Sindane, Zainab Abdulrasaq, Kesego Mokgosi, Chijioke Okorie, Nia Zion Van Wyk, Graham Morrissey, Dale Dunbar, Francois Smit, Tsosheletso Chidi, Rooweither Mabuya, Andiswa Bukula, Respect Mlambo, Tebogo Macucwa, Idris Abdulmumin, and Seani Rananga
+
+
+ Exploring the Potentials of Spiking Neural Networks for Image Deraining
+ https://arxiv.org/abs/2512.02258
+ arXiv:2512.02258v3 Announce Type: replace
+Abstract: Biologically plausible and energy-efficient frameworks such as Spiking Neural Networks (SNNs) have not been sufficiently explored in low-level vision tasks. Taking image deraining as an example, this study addresses the representation of the inherent high-pass characteristics of spiking neurons and proposes the Visual LIF (VLIF) neuron, overcoming the lack of spatial contextual understanding in traditional spiking neurons. To tackle the limitation of frequency-domain saturation inherent in conventional spiking neurons, we leverage the proposed VLIF to introduce the Spiking Decomposition and Enhancement Module and the lightweight Spiking Multi-scale Unit for hierarchical multi-scale representation learning. Extensive experiments across five benchmark deraining datasets demonstrate that our approach significantly outperforms state-of-the-art SNN-based deraining methods, achieving this superior performance with only 13\% of their energy consumption. These findings establish a solid foundation for deploying SNNs in high-performance, energy-efficient low-level vision tasks.
+ oai:arXiv.org:2512.02258v3
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shuang Chen, Tomas Krajnik, Farshad Arvin, Amir Atapour-Abarghouei
+
+
+ Unsupervised Multimodal Graph-based Model for Geo-social Analysis
+ https://arxiv.org/abs/2512.03063
+ arXiv:2512.03063v2 Announce Type: replace
+Abstract: The systematic analysis of user-generated social media content, especially when enriched with geospatial context, plays a vital role in domains such as disaster management and public opinion monitoring. Although multimodal approaches have made significant progress, most existing models remain fragmented, processing each modality separately rather than integrating them into a unified end-to-end model. To address this, we propose an unsupervised, multimodal graph-based methodology that jointly embeds semantic and geographic information into a shared representation space. The proposed methodology comprises two architectural paradigms: a mono-graph (MonoGraph) model that jointly encodes both modalities, and a multi-graph (MultiGraph) model that separately models semantic and geographic relationships and subsequently integrates them through multi-head attention mechanisms. A composite loss, combining contrastive, coherence, and alignment objectives, guides the learning process to produce semantically coherent and spatially compact clusters. Experiments on four real-world disaster datasets demonstrate that our models consistently outperform existing baselines in topic quality, spatial coherence, and interpretability. Inherently domain-independent, the framework can be readily extended to diverse forms of multimodal data and a wide range of downstream analysis tasks.
+ oai:arXiv.org:2512.03063v2
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ehsaneddin Jalilian, Bernd Resch
+
+
+ Tuning for TraceTarnish: Techniques, Trends, and Testing Tangible Traits
+ https://arxiv.org/abs/2512.03465
+ arXiv:2512.03465v2 Announce Type: replace
+Abstract: In this study, we more rigorously evaluated our attack script $\textit{TraceTarnish}$, which leverages adversarial stylometry principles to anonymize the authorship of text-based messages. To ensure the efficacy and utility of our attack, we sourced, processed, and analyzed Reddit comments -- comments that were later alchemized into $\textit{TraceTarnish}$ data -- to gain valuable insights. The transformed $\textit{TraceTarnish}$ data was then further augmented by $\textit{StyloMetrix}$ to manufacture stylometric features -- features that were culled using the Information Gain criterion, leaving only the most informative, predictive, and discriminative ones. Our results show that function words and function word types ($L\_FUNC\_A$ $\&$ $L\_FUNC\_T$); content words and content word types ($L\_CONT\_A$ $\&$ $L\_CONT\_T$); and the Type-Token Ratio ($ST\_TYPE\_TOKEN\_RATIO\_LEMMAS$) yielded significant Information-Gain readings. The identified stylometric cues -- function-word frequencies, content-word distributions, and the Type-Token Ratio -- serve as reliable indicators of compromise (IoCs), revealing when a text has been deliberately altered to mask its true author. Similarly, these features could function as forensic beacons, alerting defenders to the presence of an adversarial stylometry attack; granted, in the absence of the original message, this signal may go largely unnoticed, as it appears to depend on a pre- and post-transformation comparison. "In trying to erase a trace, you often imprint a larger one." Armed with this understanding, we framed $\textit{TraceTarnish}$'s operations and outputs around these five isolated features, using them to conceptualize and implement enhancements that further strengthen the attack.
+ oai:arXiv.org:2512.03465v2
+ cs.CR
+ cs.CL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Robert Dilworth
+
+
+ Training-Free Policy Violation Detection via Activation-Space Whitening in LLMs
+ https://arxiv.org/abs/2512.03994
+ arXiv:2512.03994v3 Announce Type: replace
+Abstract: As organizations increasingly deploy LLMs in sensitive domains such as legal, financial, and medical settings, ensuring alignment with internal organizational policies has become a priority. Existing content moderation frameworks remain largely confined to the safety domain and lack the robustness to capture nuanced organizational policies. LLM-as-a-judge and fine-tuning approaches, though flexible, introduce significant latency and training cost. To address these limitations, we frame policy violation detection as an out-of-distribution (OOD) problem in the model's activation space. We propose a training-free method that operates directly on the LLM's internal representations, leveraging prior evidence that decision-relevant information is encoded within them. Inspired by whitening techniques, we apply a linear transformation to decorrelate and standardize the model's hidden activations, and use the Euclidean norm in this transformed space as a compliance score for detecting policy violations. Our method requires only the policy text and a small number of illustrative samples, making it lightweight and easily deployable. We extensively evaluate our method across multiple LLMs and challenging policy benchmarks, achieving 86.0% F1 score while outperforming fine-tuned baselines by up to 9.1 points and LLM-as-a-judge by 16 points, with significantly lower computational cost. Code is available at: https://github.com/FujitsuResearch/LLM-policy-violation-detection
+ oai:arXiv.org:2512.03994v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Oren Rachmil, Avishag Shapira, Roy Betser, Itay Gershon, Omer Hofman, Asaf Shabtai, Yuval Elovici, Roman Vainshtein
+
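The scoring idea in the policy-violation abstract above can be sketched in a few lines. This is a minimal illustration using a diagonal (standardize-only) approximation of whitening; the method described in the abstract also decorrelates dimensions via a full linear transformation, and all names here are hypothetical, not the released code.

```python
import math

# Diagonal sketch of activation-space whitening for OOD-style scoring:
# fit per-dimension statistics on a few compliant reference activations,
# then score new activations by their norm in the standardized space.

def fit_stats(reference_acts):
    """Per-dimension mean and std from a small reference set."""
    dim, n = len(reference_acts[0]), len(reference_acts)
    mean = [sum(a[d] for a in reference_acts) / n for d in range(dim)]
    std = [max(math.sqrt(sum((a[d] - mean[d]) ** 2 for a in reference_acts) / n), 1e-8)
           for d in range(dim)]
    return mean, std

def compliance_score(act, mean, std):
    """Euclidean norm in the standardized space: larger means further
    out-of-distribution, i.e. more likely a policy violation."""
    return math.sqrt(sum(((a - m) / s) ** 2 for a, m, s in zip(act, mean, std)))
```

A threshold on this score then separates compliant from violating inputs; the full method replaces the per-dimension division with a covariance-based whitening transform.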
+
+ Distance Is All You Need: Radial Dispersion for Uncertainty Estimation in Large Language Models
+ https://arxiv.org/abs/2512.04351
+ arXiv:2512.04351v2 Announce Type: replace
+Abstract: Detecting uncertainty in large language models (LLMs) is essential for building reliable systems, yet many existing approaches are overly complex and depend on brittle semantic clustering or access to model internals. We introduce \textbf{Radial Dispersion Score (RDS)}, a simple, training-free, fully model-agnostic uncertainty metric that measures the radial dispersion of sampled generations in embedding space. Specifically, given $N$ sampled generations embedded on the unit hypersphere, RDS computes the total $\ell_1$ distance from the empirical centroid, i.e., the mean embedding, providing a direct geometric signal of semantic variability. A lightweight probability-weighted variant further incorporates the model's own token probabilities when available, outperforming nine recent state-of-the-art baselines. Moreover, RDS naturally extends to effective per-sample uncertainty estimates that complement probability- and consistency-based methods while remaining lightweight for practical use. Across four challenging free-form question-answering datasets and four LLMs, our metrics achieve state-of-the-art hallucination detection and best-of-$N$ performance, while remaining robust and scalable with respect to sample size and embedding choice. These results highlight the practical value of RDS and its contribution toward improving the trustworthiness of LLMs.
+ oai:arXiv.org:2512.04351v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Manh Nguyen, Sunil Gupta, Hung Le
+
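The RDS metric in the abstract above is concrete enough to sketch directly from its description: normalize the $N$ sampled generations' embeddings onto the unit hypersphere, form the empirical centroid, and sum the $\ell_1$ distances from it. The embedding model is assumed external (any sentence encoder), and lists of floats stand in for an embedding matrix; this is an illustrative reading of the abstract, not the authors' implementation.

```python
import math

def radial_dispersion_score(embeddings):
    """Radial Dispersion Score as described: total l1 distance of the
    unit-normalized sample embeddings from their empirical centroid."""
    unit = []
    for e in embeddings:
        norm = math.sqrt(sum(x * x for x in e)) or 1.0  # avoid /0 for zero vectors
        unit.append([x / norm for x in e])
    dim, n = len(unit[0]), len(unit)
    centroid = [sum(u[d] for u in unit) / n for d in range(dim)]
    # higher score = more semantic variability across generations
    return sum(sum(abs(u[d] - centroid[d]) for d in range(dim)) for u in unit)
```

Identical generations map to the same unit vector and score zero; semantically divergent generations spread around the centroid and score high.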
+
+ GraphBench: Next-generation graph learning benchmarking
+ https://arxiv.org/abs/2512.04475
+ arXiv:2512.04475v4 Announce Type: replace
+Abstract: Machine learning on graphs has recently achieved impressive progress in various domains, including molecular property prediction and chip design. However, benchmarking practices remain fragmented, often relying on narrow, task-specific datasets and inconsistent evaluation protocols, which hampers reproducibility and broader progress. To address this, we introduce GraphBench, a comprehensive benchmarking suite that spans diverse domains and prediction tasks, including node-level, edge-level, graph-level, and generative settings. GraphBench provides standardized evaluation protocols -- with consistent dataset splits and performance metrics that account for out-of-distribution generalization -- as well as a unified hyperparameter tuning framework. Additionally, we benchmark GraphBench using message-passing neural networks and graph transformer models, providing principled baselines and establishing a reference performance. See www.graphbench.io for further details.
+ oai:arXiv.org:2512.04475v4
+ cs.LG
+ cs.AI
+ cs.NE
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Timo Stoll, Chendi Qian, Ben Finkelshtein, Ali Parviz, Darius Weber, Fabrizio Frasca, Hadar Shavit, Antoine Siraudin, Arman Mielke, Marie Anastacio, Erik M\"uller, Maya Bechler-Speicher, Michael Bronstein, Mikhail Galkin, Holger Hoos, Mathias Niepert, Bryan Perozzi, Jan T\"onshoff, Christopher Morris
+
+
+ Gauss-Newton accelerated MPPI Control
+ https://arxiv.org/abs/2512.04579
+ arXiv:2512.04579v2 Announce Type: replace
+Abstract: Model Predictive Path Integral (MPPI) control is a sampling-based optimization method that has recently attracted attention, particularly in the robotics and reinforcement learning communities. MPPI has been widely applied as a GPU-accelerated random search method to deterministic direct single-shooting optimal control problems arising in model predictive control (MPC) formulations. MPPI offers several key advantages, including flexibility, robustness, ease of implementation, and inherent parallelizability. However, its performance can deteriorate in high-dimensional settings since the optimal control problem is solved via Monte Carlo sampling. To address this limitation, this paper proposes an enhanced MPPI method that incorporates a Jacobian reconstruction technique and the second-order Generalized Gauss-Newton method. This novel approach is called \textit{Gauss-Newton accelerated MPPI}. The numerical results show that the Gauss-Newton accelerated MPPI approach substantially improves MPPI scalability and computational efficiency while preserving the key benefits of the classical MPPI framework, making it a promising approach even for high-dimensional problems.
+ oai:arXiv.org:2512.04579v2
+ eess.SY
+ cs.RO
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hannes Homburger, Katrin Baumg\"artner, Moritz Diehl, Johannes Reuter
+
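For readers unfamiliar with the baseline that the abstract above accelerates, the classical MPPI update can be sketched for a scalar control: sample Gaussian perturbations, weight them by an exponentiated negative cost, and take the importance-weighted average. This is a generic single-variable illustration (names and defaults are ours), not the paper's Gauss-Newton accelerated method.

```python
import math, random

def mppi_step(u, cost_fn, n_samples=256, sigma=0.5, lam=1.0, seed=0):
    """One classical MPPI update on a scalar control u."""
    rng = random.Random(seed)
    eps = [rng.gauss(0.0, sigma) for _ in range(n_samples)]   # sampled perturbations
    costs = [cost_fn(u + e) for e in eps]
    cmin = min(costs)                                          # shift for numerical stability
    w = [math.exp(-(c - cmin) / lam) for c in costs]           # softmax-style weights
    total = sum(w)
    # importance-weighted average of the perturbations moves u downhill
    return u + sum(wi * ei for wi, ei in zip(w, eps)) / total
```

Iterating this step drives the control toward the cost minimum; the dependence on Monte Carlo sampling is exactly the scalability bottleneck the paper targets in high dimensions.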
+
+ Hyperparameter Transfer Enables Consistent Gains of Matrix-Preconditioned Optimizers Across Scales
+ https://arxiv.org/abs/2512.05620
+ arXiv:2512.05620v2 Announce Type: replace
+Abstract: Several recently introduced deep learning optimizers utilizing matrix-level preconditioning have shown promising speedups relative to the current dominant optimizer AdamW, particularly in relatively small-scale experiments. However, efforts to validate and replicate their successes have reported mixed results. To better understand the effectiveness of these optimizers at scale, in this work we investigate how to scale preconditioned optimizers via hyperparameter transfer, building on prior works such as $\mu$P. We study how the optimal learning rate and weight decay should scale with model width and depth for a wide range of optimizers, including Shampoo, SOAP, and Muon, accounting for the impact of commonly used techniques such as blocking and grafting. We find that scaling the learning rate according to $\mu$P improves transfer, but can still suffer from significant finite-width deviations that cause drifting optimal learning rates, which we show can be mitigated by blocking and explicit spectral normalization. For compute-optimal scaling, we find scaling independent weight decay as $1/\mathrm{width}$ is nearly optimal across optimizers. Applying these scaling rules, we show Muon, SOAP and Shampoo consistently achieve near $1.4\times$ speedup over AdamW for training Llama-architecture language models of sizes ranging from $190$M to $1.4$B, whereas the speedup vanishes rapidly with scale under incorrect scaling. Based on these results and further ablations, we argue that studying optimal hyperparameter transfer is essential for reliably comparing optimizers at scale given a realistic tuning budget.
+ oai:arXiv.org:2512.05620v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shikai Qiu, Zixi Chen, Hoang Phan, Qi Lei, Andrew Gordon Wilson
+
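The two headline scaling rules in the abstract above, $\mu$P-style learning-rate transfer and independent weight decay scaling as $1/\mathrm{width}$, amount to the following simplified sketch. The paper's per-optimizer rules (blocking, grafting, spectral normalization) are richer; this only shows the basic width transfer, with hypothetical names.

```python
def scaled_hparams(base_lr, base_wd, base_width, width):
    """Transfer hyperparameters tuned at base_width to a wider model:
    hidden-layer learning rate scales as 1/width (mu-P style), and
    independent weight decay scales as 1/width (the near-optimal
    compute-optimal rule reported in the abstract)."""
    ratio = width / base_width
    return {"lr": base_lr / ratio, "weight_decay": base_wd / ratio}
```

For example, hyperparameters tuned at width 256 would be divided by 4 when reused at width 1024.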
+
+ Wasserstein Evolution : Evolutionary Optimization as Phase Transition
+ https://arxiv.org/abs/2512.05837
+ arXiv:2512.05837v3 Announce Type: replace
+Abstract: Evolutionary algorithms (EAs) serve as powerful black-box optimizers inspired by biological evolution. However, most existing EAs predominantly focus on heuristic operators such as crossover and mutation, while usually overlooking underlying physical interpretability such as statistical mechanics and thermodynamics. This theoretical void limits the principled understanding of algorithmic dynamics, hindering the systematic design of evolutionary search beyond ad-hoc heuristics. To bridge this gap, we first point out that evolutionary optimization can be conceptually reframed as a physical phase transition process. Building on this perspective, we establish the theoretical grounds by modeling the optimization dynamics as a Wasserstein gradient flow of free energy. Consequently, a robust and interpretable solver named Wasserstein Evolution (WE) is proposed. WE mathematically frames the trade-off between exploration and exploitation as a competition between potential gradient forces and entropic forces. This formulation guarantees convergence to the Boltzmann distribution, thereby minimizing free energy and maximizing entropy, which promotes highly diverse solutions. Extensive experiments on complex multimodal and physical potential functions demonstrate that WE achieves superior diversity and stability compared to established baselines.
+ oai:arXiv.org:2512.05837v3
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kaichen Ouyang, Mingyang Yu, Zong Ke, Junbo Jacob Lian, Shengwei Fu, Xiaoyang Hao, Shengju Yu, Dayu Hu
+
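The competition between potential-gradient forces (exploitation) and entropic forces (exploration) with a Boltzmann stationary law, as framed in the abstract above, can be illustrated by a plain overdamped-Langevin population update, whose stationary distribution is proportional to $\exp(-V(x)/T)$. This is a sketch of the general principle only, not the paper's WE solver.

```python
import math, random

def langevin_population_step(pop, grad_v, temp, dt, rng):
    """One overdamped-Langevin update for every individual: drift down
    the potential gradient (exploitation) plus entropic Gaussian noise
    (exploration), targeting the Boltzmann distribution exp(-V/temp)."""
    out = []
    for x in pop:
        noise = math.sqrt(2.0 * temp * dt) * rng.gauss(0.0, 1.0)
        out.append(x - dt * grad_v(x) + noise)
    return out
```

At low temperature the drift term dominates and the population concentrates in minima of $V$; at high temperature the noise term dominates and the population stays diverse.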
+
+ Academic journals' AI policies fail to curb the surge in AI-assisted academic writing
+ https://arxiv.org/abs/2512.06705
+ arXiv:2512.06705v2 Announce Type: replace
+Abstract: The rapid integration of generative AI into academic writing has prompted widespread policy responses from journals and publishers. However, the effectiveness of these policies remains unclear. Here, we analyze 5,114 journals and over 5.2 million papers to evaluate the real-world impact of AI usage guidelines. We show that despite 70% of journals adopting AI policies (primarily requiring disclosure), researchers' use of AI writing tools has increased dramatically across disciplines, with no significant difference between journals with or without policies. Non-English-speaking countries, physical sciences, and high-OA journals exhibit the highest growth rates. Crucially, full-text analysis of 164k scientific publications reveals a striking transparency gap: Of the 75k papers published since 2023, only 76 (~0.1%) explicitly disclosed AI use. Our findings suggest that current policies have largely failed to promote transparency or restrain AI adoption. We urge a re-evaluation of ethical frameworks to foster responsible AI integration in science.
+ oai:arXiv.org:2512.06705v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yongyuan He, Yi Bu
+
+
+ Benchmarking Deep Neural Networks for Modern Recommendation Systems
+ https://arxiv.org/abs/2512.07000
+ arXiv:2512.07000v2 Announce Type: replace
+Abstract: This paper presents a requirement-oriented benchmark of seven deep neural architectures, CNN, RNN, GNN, Autoencoder, Transformer, Neural Collaborative Filtering, and Siamese Networks, across three real-world datasets: Retail E-commerce, Amazon Products, and Netflix Prize. To ensure a fair and comprehensive comparison aligned with the evolving demands of modern recommendation systems, we adopt a Requirement-Oriented Benchmarking (ROB) framework that structures evaluation around predictive accuracy, recommendation diversity, relational awareness, temporal dynamics, and computational efficiency. Under a unified evaluation protocol, models are assessed using standard accuracy-oriented metrics alongside diversity and efficiency indicators. Experimental results show that different architectures exhibit complementary strengths across requirements, motivating the use of hybrid and ensemble designs. The findings provide practical guidance for selecting and combining neural architectures to better satisfy multi-objective recommendation system requirements.
+ oai:arXiv.org:2512.07000v2
+ cs.IR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Abderaouf Bahi, Inoussa Mouiche, Ibtissem Gasmi
+
+
+ Context-measure: Contextualizing Metric for Camouflage
+ https://arxiv.org/abs/2512.07076
+ arXiv:2512.07076v2 Announce Type: replace
+Abstract: Camouflage is primarily context-dependent, yet current metrics for camouflaged scenarios overlook this critical factor. Instead, these metrics are originally designed for evaluating general or salient objects, with an inherent assumption of uncorrelated spatial context. In this paper, we propose a new contextualized evaluation paradigm, Context-measure, built upon a probabilistic pixel-aware correlation framework. By incorporating spatial dependencies and pixel-wise camouflage quantification, our measure better aligns with human perception. Extensive experiments across three challenging camouflaged object segmentation datasets show that Context-measure is more reliable than existing context-independent metrics. Our measure can provide a foundational evaluation benchmark for various computer vision applications involving camouflaged patterns, such as agricultural, industrial, and medical scenarios. Code is available at https://github.com/pursuitxi/Context-measure.
+ oai:arXiv.org:2512.07076v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Chen-Yang Wang, Gepeng Ji, Song Shao, Ming-Ming Cheng, Deng-Ping Fan
+
+
+ Enhancing Agentic RL with Progressive Reward Shaping and Value-based Sampling Policy Optimization
+ https://arxiv.org/abs/2512.07478
+ arXiv:2512.07478v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) empowered with Tool-Integrated Reasoning (TIR) can iteratively plan, call external tools, and integrate returned information to solve complex, long-horizon reasoning tasks. Agentic Reinforcement Learning (Agentic RL) optimizes such models over full tool-interaction trajectories, but two key challenges hinder effectiveness: (1) Sparse, non-instructive rewards, such as binary 0-1 verifiable signals, provide limited guidance for intermediate steps and slow convergence; (2) Gradient degradation in Group Relative Policy Optimization (GRPO), where identical rewards within a rollout group yield zero advantage, reducing sample efficiency. To address these challenges, we propose two complementary techniques: Progressive Reward Shaping (PRS) and Value-based Sampling Policy Optimization (VSPO). PRS is a curriculum-inspired reward design that introduces dense, stage-wise feedback - encouraging models to first master parseable and properly formatted tool calls, then optimize for factual correctness and answer quality. We instantiate PRS for short-form QA (with a length-aware BLEU to fairly score concise answers) and long-form QA (with LLM-as-a-Judge scoring to prevent reward hacking). VSPO is an enhanced GRPO variant that replaces zero-advantage samples with prompts selected by a task-value metric balancing difficulty and uncertainty, and applies value-smoothing clipping to stabilize gradient updates. Experiments on multiple short-form and long-form QA benchmarks show that PRS consistently outperforms traditional binary rewards, and VSPO achieves superior stability, faster convergence, and higher final performance compared to SFT, PPO and GRPO baselines. Together, PRS and VSPO yield LLM-based TIR agents that generalize better across domains.
+ oai:arXiv.org:2512.07478v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jianghao Su, Xia Zeng, Luhui Liu, Chao Luo, Ye Chen, Zhuoran Zhuang
+
+
+ Balanced Accuracy: The Right Metric for Evaluating LLM Judges -- Explained through Youden's J statistic
+ https://arxiv.org/abs/2512.08121
+ arXiv:2512.08121v2 Announce Type: replace
+Abstract: Rigorous evaluation of large language models (LLMs) relies on comparing models by the prevalence of desirable or undesirable behaviors, such as task pass rates or policy violations. These prevalence estimates are produced by a classifier, either an LLM-as-a-judge or human annotators, making the choice of classifier central to trustworthy evaluation. Common metrics used for this choice, such as Accuracy, Precision, and F1, are sensitive to class imbalance and to arbitrary choices of positive class, and can favor judges that distort prevalence estimates. We show that Youden's $J$ statistic is theoretically aligned with choosing the best judge to compare models, and that Balanced Accuracy is an equivalent linear transformation of $J$. Through both analytical arguments and empirical examples and simulations, we demonstrate how selecting judges using Balanced Accuracy leads to better, more robust classifier selection.
+ oai:arXiv.org:2512.08121v2
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Stephane Collot, Colin Fraser, Justin Zhao, William F. Shen, Timon Willi, Ilias Leontiadis
+
+
+ MIRAGE: Misleading Retrieval-Augmented Generation via Black-box and Query-agnostic Poisoning Attacks
+ https://arxiv.org/abs/2512.08289
+ arXiv:2512.08289v2 Announce Type: replace
+Abstract: Retrieval-Augmented Generation (RAG) systems enhance LLMs with external knowledge but introduce a critical attack surface: corpus poisoning. While recent studies have demonstrated the potential of such attacks, they typically rely on impractical assumptions, such as white-box access or known user queries, thereby underestimating the difficulty of real-world exploitation. In this paper, we bridge this gap by proposing MIRAGE, a novel multi-stage poisoning pipeline designed for strict black-box and query-agnostic environments. Operating on surrogate model feedback, MIRAGE functions as an automated optimization framework that integrates three key mechanisms: it utilizes persona-driven query synthesis to approximate latent user search distributions, employs semantic anchoring to imperceptibly embed these intents for high retrieval visibility, and leverages an adversarial variant of Test-Time Preference Optimization (TPO) to maximize persuasion. To rigorously evaluate this threat, we construct a new benchmark derived from three long-form, domain-specific datasets. Extensive experiments demonstrate that MIRAGE significantly outperforms existing baselines in both attack efficacy and stealthiness, exhibiting remarkable transferability across diverse retriever-LLM configurations and highlighting the urgent need for robust defense strategies.
+ oai:arXiv.org:2512.08289v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tailun Chen, Yu He, Yan Wang, Shuo Shao, Haolun Zheng, Zhihao Liu, Jinfeng Li, Zhizhen Qin, Yuefeng Chen, Zhixuan Chu, Zhan Qin, Kui Ren
+
+
+ Formation and Investigation of Cooperative Platooning at the Early Stage of Connected and Automated Vehicles Deployment
+ https://arxiv.org/abs/2512.08298
+ arXiv:2512.08298v2 Announce Type: replace
+Abstract: Cooperative platooning, enabled by cooperative adaptive cruise control (CACC), is a cornerstone technology for connected automated vehicles (CAVs), offering significant improvements in safety, comfort, and traffic efficiency over traditional adaptive cruise control (ACC). This paper addresses a key challenge in the initial deployment phase of CAVs: the limited benefits of cooperative platooning due to the sparse distribution of CAVs on the road. To overcome this limitation, we propose an innovative control framework that enhances cooperative platooning in mixed traffic environments. Two techniques are utilized: (1) a mixed cooperative platooning strategy that integrates CACC with unconnected vehicles (CACCu), and (2) a strategic lane-change decision model designed to facilitate safe and efficient lane changes for platoon formation. Additionally, a surrounding vehicle identification system is embedded in the framework to enable CAVs to effectively identify and select potential platooning leaders. Simulation studies across various CV market penetration rates (MPRs) show that incorporating CACCu significantly improves safety, comfort, and traffic efficiency compared to existing CACC-only and ACC-only systems, even at CV penetration as low as 10%. The maximized platoon formation increases by up to 24%, accompanied by an 11% reduction in acceleration and a 7% decrease in fuel consumption. Furthermore, the strategic lane-change model enhances CAV performance, achieving notable improvements between 6% and 60% CV penetration, without adversely affecting overall traffic flow.
+ oai:arXiv.org:2512.08298v2
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zeyu Mu, Sergei S. Avedisov, Ahmadreza Moradipari, B. Brian Park
+
+
+ USCSA: Evolution-Aware Security Analysis for Proxy-Based Upgradeable Smart Contracts
+ https://arxiv.org/abs/2512.08372
+ arXiv:2512.08372v3 Announce Type: replace
+Abstract: When upgrading smart contracts on blockchain systems, it is essential to consider the continuity of upgrades and subsequent maintenance. In practice, upgrade operations often introduce new vulnerabilities. Existing static analysis tools usually only scan a single version and are unable to capture the correlation between code changes and emerging risks. To address this, we propose an Upgradeable Smart Contract Security Analyzer, USCSA, which uses Abstract Syntax Tree (AST) difference analysis to assess risks associated with the upgrade process and utilizes large language models (LLMs) for assisted reasoning to achieve high-confidence vulnerability attribution. We collected and analyzed 3,546 cases of vulnerabilities in upgradeable contracts, covering common vulnerability categories such as reentrancy, access control flaws, and integer overflow. Experimental results show that USCSA achieves a precision of 92.26%, a recall of 89.67%, and an F1-score of 90.95% in detecting upgrade-induced vulnerabilities. As a result, USCSA significantly improves the security and integrity of upgradeable smart contracts, offering a novel and efficient solution for security auditing of blockchain applications.
+ oai:arXiv.org:2512.08372v3
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaoqi Li, Lei Xie, Wenkai Li, Zongwei Li
+
+
+ Targeting Misalignment: A Conflict-Aware Framework for Reward-Model-based LLM Alignment
+ https://arxiv.org/abs/2512.09212
+ arXiv:2512.09212v2 Announce Type: replace
+Abstract: Reward-model-based fine-tuning is a central paradigm in aligning Large Language Models with human preferences. However, such approaches critically rely on the assumption that proxy reward models accurately reflect intended supervision, a condition often violated due to annotation noise, bias, or limited coverage. This misalignment can lead to undesirable behaviors, where models optimize for flawed signals rather than true human values. In this paper, we investigate a novel framework to identify and mitigate such misalignment by treating the fine-tuning process as a form of knowledge integration. We focus on detecting instances of proxy-policy conflicts, cases where the base model strongly disagrees with the proxy. We argue that such conflicts often signify areas of shared ignorance, where neither the policy nor the reward model possesses sufficient knowledge, making them especially susceptible to misalignment. To this end, we propose two complementary metrics for identifying these conflicts: a localized Proxy-Policy Alignment Conflict Score (PACS) and a global Kendall-Tau Distance measure. Building on this insight, we design an algorithm named Selective Human-in-the-loop Feedback via Conflict-Aware Sampling (SHF-CAS) that targets high-conflict QA pairs for additional feedback, refining both the reward model and policy efficiently. Experiments on two alignment tasks demonstrate that our approach enhances general alignment performance, even when trained with a biased proxy reward. Our work provides a new lens for interpreting alignment failures and offers a principled pathway for targeted refinement in LLM training.
+ oai:arXiv.org:2512.09212v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Zixuan Liu, Siavash H. Khajavi, Guangkai Jiang, Xinru Liu
+
+
+ Hierarchy-Aware Multimodal Unlearning for Medical AI
+ https://arxiv.org/abs/2512.09867
+ arXiv:2512.09867v2 Announce Type: replace
+Abstract: Pretrained Multimodal Large Language Models (MLLMs) are increasingly used in sensitive domains such as medical AI, where privacy regulations like HIPAA and GDPR require specific removal of individuals' or institutions' data. This motivates machine unlearning, which aims to remove the influence of target data from a trained model. However, existing unlearning benchmarks fail to reflect the hierarchical and multimodal structure of real-world medical data, limiting their ability to properly evaluate unlearning in practice. Therefore, we introduce MedForget, a hierarchy-aware multimodal unlearning benchmark that models hospital data as a nested structure, enabling fine-grained evaluation of multimodal unlearning across retain and forget splits. Experiments with current unlearning methods show that existing approaches struggle to achieve effective hierarchy-aware forgetting without degrading downstream medical utility. To address this limitation, we propose Cross-modal Hierarchy-Informed Projection for unlearning (CHIP), a training-free, hierarchy-aware multimodal unlearning method that deletes information by selectively removing target-specific weight subspaces while preserving sibling-shared information. Experiments show that CHIP achieves the highest forget-retain performance gap across all hierarchy levels while maintaining competitive downstream utility compared to existing methods. Overall, MedForget provides a practical, HIPAA-aligned benchmark for evaluating structured multimodal unlearning for medical data, and CHIP offers an effective and general solution for hierarchy-aware forgetting that balances deletion with utility.
+ oai:arXiv.org:2512.09867v2
+ cs.CV
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fengli Wu, Vaidehi Patil, Jaehong Yoon, Yue Zhang, Mohit Bansal
+
+
+ SIP-BMM: Constructing Capability-Efficiency Pareto Set of LLMs via Bayesian Model Merging with Structural Importance Prior
+ https://arxiv.org/abs/2512.09972
+ arXiv:2512.09972v4 Announce Type: replace
+Abstract: Navigating the capability-efficiency trade-offs in Large Language Models (LLMs) requires constructing a high-quality Pareto set. However, existing merging techniques remain inadequate: coarse-grained, model-level methods yield only a sparse set of suboptimal solutions, while fine-grained, layer-wise optimization suffers from the curse of dimensionality, especially under tight evaluation budgets where each model candidate is costly to assess. We propose Bayesian Model Merging with Structural Importance Prior (SIP-BMM), an evolutionary loop framework driven by Log-Noisy Expected Hypervolume Improvement ($q$NEHVI) that makes layer-wise Pareto set construction tractable by explicitly modeling which layers matter. Specifically, SIP-BMM derives a \textbf{Structural Importance Prior (SIP)} from layer-wise task-vector differences between base and expert models, and uses this prior to guide Bayesian Optimization toward a low-dimensional effective subspace. Intuitively, SIP steers the optimizer to spend most trials on a small set of influential layers while largely ignoring layers that exhibit minimal task-relevant shifts. This importance-aware search preserves layer-wise control while substantially reducing sample complexity. Experiments show that SIP-BMM discovers a stronger and denser Pareto front than competitive baselines, enabling agile model selection under diverse operational constraints. Code is available at: https://github.com/MiLab-HITSZ/2026-SIPBMM.
+ oai:arXiv.org:2512.09972v4
+ cs.LG
+ cs.CL
+ cs.NE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Kesheng Chen, Yamin Hu, Zhenqian Zhu, Yiya Diao, Wenjian Luo
+
+
+ Cluster-DAGs as Powerful Background Knowledge for Causal Discovery
+ https://arxiv.org/abs/2512.10032
+ arXiv:2512.10032v2 Announce Type: replace
+Abstract: Finding cause-effect relationships is of key importance in science. Causal discovery aims to recover a graph from data that succinctly describes these cause-effect relationships. However, current methods face several challenges, especially when dealing with high-dimensional data and complex dependencies. Incorporating prior knowledge about the system can aid causal discovery. In this work, we leverage Cluster-DAGs as a prior knowledge framework to warm-start causal discovery. We show that Cluster-DAGs offer greater flexibility than existing approaches based on tiered background knowledge and introduce two modified constraint-based algorithms, Cluster-PC and Cluster-FCI, for causal discovery in the fully and partially observed setting, respectively. Empirical evaluation on simulated data demonstrates that Cluster-PC and Cluster-FCI outperform their respective baselines without prior knowledge.
+ oai:arXiv.org:2512.10032v2
+ cs.LG
+ cs.AI
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jan Marco Ruiz de Vargas, Kirtan Padh, Niki Kilbertus
+
+
+ Simple Yet Effective Selective Imputation for Incomplete Multi-view Clustering
+ https://arxiv.org/abs/2512.10327
+ arXiv:2512.10327v2 Announce Type: replace
+Abstract: Incomplete Multi-view Clustering (IMC) has emerged as a significant challenge in multi-view learning. A predominant line for IMC is data imputation; however, indiscriminate imputation can result in unreliable content. Recently, researchers have proposed selective imputation methods that use a post-imputation assessment strategy: (1) impute all or some missing values, and (2) evaluate their quality through clustering tasks. We observe that this strategy incurs substantial computational complexity and is heavily dependent on the performance of the clustering model. To address these challenges, we first introduce the concept of pre-imputation assessment. We propose an Implicit Informativeness-based Selective Imputation (SI$^3$) method for incomplete multi-view clustering, which explicitly addresses the trade-off between imputation utility and imputation risk. SI$^3$ evaluates the imputation-relevant informativeness of each missing position in a training-free manner, and selectively imputes data only when sufficient informative support is available. Under a multi-view generative assumption, SI$^3$ further integrates selective imputation into a variational inference framework, enabling uncertainty-aware imputation at the latent distribution level and robust multi-view fusion. Compared with existing selective imputation strategies, SI$^3$ is lightweight, data-driven, and model-agnostic, and can be seamlessly incorporated into existing incomplete multi-view clustering frameworks as a plug-in strategy. Extensive experiments on multiple benchmark datasets demonstrate that SI$^3$ consistently outperforms both imputation-based and imputation-free methods, particularly under challenging unbalanced missing scenarios.
+ oai:arXiv.org:2512.10327v2
+ cs.CV
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Cai Xu, Jinlong Liu, Yilin Zhang, Ziyu Guan, Wei Zhao, Xiaofei He
+
+
+ Dynamics of Agentic Loops in Large Language Models: A Geometric Theory of Trajectories
+ https://arxiv.org/abs/2512.10350
+ arXiv:2512.10350v2 Announce Type: replace
+Abstract: Agentic systems built on large language models operate through recursive feedback loops, where each output becomes the next input. Yet the geometric behavior of these agentic loops (whether they converge, diverge, or exhibit more complex dynamics) remains poorly understood. This paper introduces a geometric framework for analyzing agentic trajectories in semantic embedding space, treating iterative transformations as discrete dynamical systems. We distinguish the artifact space, where linguistic transformations occur, from the embedding space, where geometric measurements are performed. Because cosine similarity is biased by embedding anisotropy, we introduce an isotonic calibration that eliminates systematic bias and aligns similarities with human semantic judgments while preserving high local stability. This enables rigorous measurement of trajectories, clusters and attractors. Through controlled experiments on singular agentic loops, we identify two fundamental regimes. A contractive rewriting loop converges toward a stable attractor with decreasing dispersion, while an exploratory summarize and negate loop produces unbounded divergence with no cluster formation. These regimes display qualitatively distinct geometric signatures of contraction and expansion. Our results show that prompt design directly governs the dynamical regime of an agentic loop, enabling systematic control of convergence, divergence and trajectory structure in iterative LLM transformations.
+ oai:arXiv.org:2512.10350v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nicolas Tacheny
+
+
+ CARI4D: Category Agnostic 4D Reconstruction of Human-Object Interaction
+ https://arxiv.org/abs/2512.11988
+ arXiv:2512.11988v2 Announce Type: replace
+Abstract: Accurate capture of human-object interaction from ubiquitous sensors like RGB cameras is important for applications in human understanding, gaming, and robot learning. However, inferring 4D interactions from a single RGB view is highly challenging due to the unknown object and human information, depth ambiguity, occlusion, and complex motion, which hinder consistent 3D and temporal reconstruction. Previous methods simplify the setup by assuming a ground truth object template or constraining to a limited set of object categories. We present CARI4D, the first category-agnostic method that reconstructs spatially and temporally consistent 4D human-object interaction at metric scale from monocular RGB videos. To this end, we propose a pose hypothesis selection algorithm that robustly integrates the individual predictions from foundation models, jointly refines them through a learned render-and-compare paradigm to ensure spatial, temporal and pixel alignment, and finally reasons about intricate contacts for further refinement satisfying physical constraints. Experiments show that our method outperforms prior art by 38% on an in-distribution dataset and 36% on an unseen dataset in terms of reconstruction error. Our model generalizes beyond the training categories and thus can be applied zero-shot to in-the-wild internet videos. Our code and pretrained models will be publicly released.
+ oai:arXiv.org:2512.11988v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xianghui Xie, Bowen Wen, Yan Chang, Hesam Rabeti, Jiefeng Li, Ye Yuan, Gerard Pons-Moll, Stan Birchfield
+
+
+ CAHC: A General Conflict-Aware Heuristic Caching Framework for Multi-Agent Path Finding
+ https://arxiv.org/abs/2512.12243
+ arXiv:2512.12243v2 Announce Type: replace
+Abstract: Multi-Agent Path Finding (MAPF) algorithms, including those for car-like robots and grid-based scenarios, face significant computational challenges due to expensive heuristic calculations. Traditional heuristic caching assumes that the heuristic function depends only on the state, which is incorrect in constraint-based search algorithms (e.g., CBS, MAPF-LNS, MAP2) where constraints from conflict resolution make the search space context-dependent. We propose \textbf{CAHC} (Conflict-Aware Heuristic Caching), a general framework that caches heuristic values based on both state and relevant constraint context, addressing this fundamental limitation. We demonstrate CAHC through a case study on CL-CBS for car-like robots, where we combine conflict-aware caching with an adaptive hybrid heuristic in \textbf{CAR-CHASE} (Car-Like Robot Conflict-Aware Heuristic Adaptive Search Enhancement). Our key innovations are (1) a compact \emph{conflict fingerprint} that efficiently encodes which constraints affect a state's heuristic, (2) a domain-adaptable relevance filter using spatial, temporal, and geometric criteria, and (3) a modular architecture that enables systematic application to diverse MAPF algorithms. Experimental evaluation on 480 CL-CBS benchmark instances demonstrates a geometric mean speedup of 2.46$\times$ while maintaining solution optimality. The optimizations improve success rate from 77.9\% to 84.8\% (+6.9 percentage points), reduce total runtime by 70.1\%, and enable solving 33 additional instances. The framework's general architecture makes it applicable as a reliable optimization technique for MAP2, MAPF-LNS, and other constraint-based MAPF algorithms.
+ oai:arXiv.org:2512.12243v2
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ HT To, S Nguyen, NH Pham
+
+
+ Persistent Personas? Role-Playing, Instruction Following, and Safety in Extended Interactions
+ https://arxiv.org/abs/2512.12775
+ arXiv:2512.12775v2 Announce Type: replace
+Abstract: Persona-assigned large language models (LLMs) are used in domains such as education, healthcare, and sociodemographic simulation. Yet, they are typically evaluated only in short, single-round settings that do not reflect real-world usage. We introduce an evaluation protocol that combines long persona dialogues (over 100 rounds) and evaluation datasets to create dialogue-conditioned benchmarks that can robustly measure long-context effects. We then investigate the effects of dialogue length on persona fidelity, instruction-following, and safety of seven state-of-the-art open- and closed-weight LLMs. We find that persona fidelity degrades over the course of dialogues, especially in goal-oriented conversations, where models must sustain both persona fidelity and instruction following. We identify a trade-off between fidelity and instruction following, with non-persona baselines initially outperforming persona-assigned models; as dialogues progress and fidelity fades, persona responses become increasingly similar to baseline responses. Our findings highlight the fragility of persona applications in extended interactions and our work provides a protocol to systematically measure such failures.
+ oai:arXiv.org:2512.12775v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Pedro Henrique Luz de Araujo, Michael A. Hedderich, Ali Modarressi, Hinrich Schuetze, Benjamin Roth
+
+
+ Beyond MMD: Evaluating Graph Generative Models with Geometric Deep Learning
+ https://arxiv.org/abs/2512.14241
+ arXiv:2512.14241v2 Announce Type: replace
+Abstract: Graph generation is a crucial task in many fields, including network science and bioinformatics, as it enables the creation of synthetic graphs that mimic the properties of real-world networks for various applications. Graph Generative Models (GGMs) have emerged as a promising solution to this problem, leveraging deep learning techniques to learn the underlying distribution of real-world graphs and generate new samples that closely resemble them. Examples include approaches based on Variational Auto-Encoders, Recurrent Neural Networks, and more recently, diffusion-based models. However, the main limitation often lies in the evaluation process, which typically relies on Maximum Mean Discrepancy (MMD) as a metric to assess the distribution of graph properties in the generated ensemble. This paper introduces a novel methodology for evaluating GGMs that overcomes the limitations of MMD, which we call RGM (Representation-aware Graph-generation Model evaluation). As a practical demonstration of our methodology, we present a comprehensive evaluation of two state-of-the-art Graph Generative Models: Graph Recurrent Attention Networks (GRAN) and Efficient and Degree-guided graph GEnerative model (EDGE). We investigate their performance in generating realistic graphs and compare them using a Geometric Deep Learning model trained on a custom dataset of synthetic and real-world graphs, specifically designed for graph classification tasks. Our findings reveal that while both models can generate graphs with certain topological properties, they exhibit significant limitations in preserving the structural characteristics that distinguish different graph domains. We also highlight the inadequacy of Maximum Mean Discrepancy as an evaluation metric for GGMs and suggest alternative approaches for future research.
+ oai:arXiv.org:2512.14241v2
+ cs.LG
+ cs.AI
+ physics.soc-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Salvatore Romano, Marco Grassia, Giuseppe Mangioni
+
+
+ Implicit Bias and Invariance: How Hopfield Networks Efficiently Learn Graph Orbits
+ https://arxiv.org/abs/2512.14338
+ arXiv:2512.14338v2 Announce Type: replace
+Abstract: Many learning problems involve symmetries, and while invariance can be built into neural architectures, it can also emerge implicitly when training on group-structured data. We study this phenomenon in classical Hopfield networks and show they can infer the full isomorphism class of a graph from a small random sample. Our results reveal that: (i) graph isomorphism classes can be represented within a three-dimensional invariant subspace, (ii) using gradient descent to minimize energy flow (MEF) has an implicit bias toward norm-efficient solutions, which underpins a polynomial sample complexity bound for learning isomorphism classes, and (iii) across multiple learning rules, parameters converge toward the invariant subspace as sample sizes grow. Together, these findings highlight a unifying mechanism for generalization in Hopfield networks: a bias toward norm efficiency in learning drives the emergence of approximate invariance under group-structured data.
+ oai:arXiv.org:2512.14338v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Michael Murray, Tenzin Chan, Kedar Karhadker, Christopher J. Hillar
+
+
+ The direct democracy paradox: Microtargeting and issue ownership in Swiss online political ads
+ https://arxiv.org/abs/2512.14564
+ arXiv:2512.14564v2 Announce Type: replace
+Abstract: Political advertising on social media has fundamentally reshaped democratic deliberation, playing a central role in electoral campaigns and propaganda. However, its systemic impact remains largely theoretical or unexplored, raising critical concerns about institutional fairness and algorithmic transparency. This paper provides the first data-driven analysis of the relationship between direct democracy and political advertising on social media, leveraging a novel dataset of 40,000 political ads published on Meta in Switzerland between 2021 and 2025. Switzerland's system of direct democracy, characterized by frequent referenda, provides an ideal context for examining this relationship beyond standard electoral cycles. The results reveal the sheer scale of digital campaigning, with 560 million impressions targeting 5.6 million voters, and suggest that greater exposure to "pro-Yes" advertising significantly correlates with referendum approval outcomes. Demographic microtargeting analysis suggests partisan strategies: Centrist and right-wing parties predominantly target older men, whereas left-wing parties focus on young women. Regarding textual content, a clear pattern of "talking past each other" is identified; in line with the issue ownership theory, parties avoid debating shared issues, preferring to promote exclusively owned topics. Furthermore, the parties' strategies are so distinctive that a machine learning model trained only on audience and topic features can accurately predict the author of an advertisement. This article highlights how demographic microtargeting, issue divergence, and tailored messages could undermine democratic deliberation, exposing a paradox: Referenda are designed to be the ultimate expression of the popular will, yet they are highly susceptible to invisible algorithmic persuasion.
+ oai:arXiv.org:2512.14564v2
+ cs.SI
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Arthur Capozzi
+
+
+ ATLAS: Adaptive Topology-based Learning at Scale for Homophilic and Heterophilic Graphs
+ https://arxiv.org/abs/2512.14908
+ arXiv:2512.14908v2 Announce Type: replace
+Abstract: We present ATLAS (Adaptive Topology-based Learning at Scale for Homophilic and Heterophilic Graphs), a novel graph learning algorithm that addresses two important challenges in graph neural networks (GNNs). First, the accuracy of GNNs degrades when the graph is heterophilic. Second, iterative feature aggregation limits the scalability of GNNs to large graphs. We address these challenges by extracting topological information about graph communities at multiple levels of refinement, concatenating community assignments to the feature vector, and applying multilayer perceptrons (MLPs) to the resulting representation. This provides topological context about nodes and their neighborhoods without invoking aggregation. Because MLPs are typically more scalable than GNNs, our approach applies to large graphs without the need for sampling. Across a wide set of graphs, ATLAS achieves comparable accuracy to baseline methods, with gains as high as 20 percentage points over GCN for heterophilic graphs with negative structural bias and 11 percentage points over MLP for homophilic graphs. Furthermore, we show how multi-resolution community features systematically modulate performance in both homophilic and heterophilic settings, opening a principled path toward explainable graph learning.
+ oai:arXiv.org:2512.14908v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Turja Kundu, Sanjukta Bhowmick
+
+
+ Reexamining Paradigms of End-to-End Data Movement
+ https://arxiv.org/abs/2512.15028
+ arXiv:2512.15028v2 Announce Type: replace
+Abstract: The pursuit of high-performance data transfer often focuses on raw network bandwidth, where international links of 100 Gbps or higher are frequently considered the primary enabler. While necessary, this network-centric view is incomplete, as it equates provisioned link speeds with practical, sustainable data movement capabilities across the entire edge-to-core spectrum. This paper investigates six common paradigms, ranging from network latency and TCP congestion control to host-side factors such as CPU performance and virtualization that critically impact data movement workflows. These paradigms represent widely adopted engineering assumptions that inform system design, procurement decisions, and operational practices in production data movement environments. We introduce the "Drainage Basin Pattern" conceptual model for reasoning about end-to-end data flow constraints across heterogeneous hardware and software components to address the fidelity gap between raw bandwidth and application-level throughput. Our findings are validated through rigorous production-scale deployments, including U.S. DOE ESnet technical evaluations and transcontinental production trials over 100 Gbps operational links. The results demonstrate that principal bottlenecks often reside outside the network core, and that a holistic hardware-software co-design enables consistent, predictable performance for moving data at scale and speed.
+ oai:arXiv.org:2512.15028v2
+ cs.DC
+ cs.NI
+ cs.OS
+ cs.PF
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Chin Fang, Timothy Stitt, Michael J. McManus, Toshio Moriya
+
+
+ Beyond Fast and Slow: Cognitive-Inspired Elastic Reasoning for Large Language Models
+ https://arxiv.org/abs/2512.15089
+ arXiv:2512.15089v2 Announce Type: replace
+Abstract: Large language models (LLMs) have demonstrated impressive performance across various language tasks. However, existing LLM reasoning strategies mainly rely on the LLM itself with fast or slow mode (like o1 thinking) and thus struggle to balance reasoning efficiency and accuracy across queries of varying difficulties. In this paper, we propose Cognitive-Inspired Elastic Reasoning (CogER), a framework inspired by human hierarchical reasoning that dynamically selects the most suitable reasoning strategy for each query. Specifically, CogER first assesses the complexity of incoming queries and assigns them to one of several predefined levels, each corresponding to a tailored processing strategy, thereby addressing the challenge of unobservable query difficulty. To achieve automatic strategy selection, we model the process as a Markov Decision Process and train a CogER-Agent using reinforcement learning. The agent is guided by a reward function that balances solution quality and computational cost, ensuring resource-efficient reasoning. Moreover, for queries requiring external tools, we introduce Cognitive Tool-Assisted Reasoning, which enables the LLM to autonomously invoke external tools within its chain-of-thought. Extensive experiments demonstrate that CogER outperforms state-of-the-art Test-Time scaling methods, achieving at least a 13% relative improvement in average exact match on In-Domain tasks and an 8% relative gain on Out-of-Domain tasks.
+ oai:arXiv.org:2512.15089v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jinwu Hu, Dongjin Yang, Langyu Bian, Zhiquan Wen, Yufeng Wang, Yaofo Chen, Bin Xiao, Yuanqing Li, Mingkui Tan
+
+
+ Beyond Blind Spots: Analytic Hints for Mitigating LLM-Based Evaluation Pitfalls
+ https://arxiv.org/abs/2512.16272
+ arXiv:2512.16272v2 Announce Type: replace
+Abstract: Large Language Models are increasingly deployed as judges (LaaJ) in code generation pipelines. While attractive for scalability, LaaJs tend to overlook domain-specific issues, raising concerns about their reliability in critical evaluation tasks. To better understand these limitations in practice, we examine LaaJ behavior in a concrete industrial use case: legacy code modernization via COBOL code generation. In this setting, we find that even production-deployed LaaJs can miss domain-critical errors, revealing consistent blind spots in their evaluation capabilities.
+ To better understand these blind spots, we analyze generated COBOL programs and associated LaaJ judgments, drawing on expert knowledge to construct a preliminary taxonomy. Based on this taxonomy, we develop a lightweight analytic checker tool that flags over 30 domain-specific issues observed in practice. We use its outputs as analytic hints, dynamically injecting them into the judge's prompt to encourage the LaaJ to revisit aspects it may have overlooked.
+ Experiments on a test set of 100 programs using four production-level LaaJs show that a LaaJ alone detects only about 45-63% of the errors present in the code (across all judges we tested), while the analytic checker alone lacks explanatory depth. When combined, the LaaJ+Hints configuration achieves up to 74% coverage (for the best-performing judge and injection prompt) and produces qualitatively richer, more accurate explanations, demonstrating that analytic-LLM hybrids can substantially enhance evaluation reliability in deployed pipelines. We release the dataset and all used prompts.
+ oai:arXiv.org:2512.16272v2
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ora Nova Fandina, Eitan Farchi, Shmulik Froimovich, Raviv Gal, Wesam Ibraheem, Rami Katan, Alice Podolsky
+
+
+ Multimodal RewardBench 2: Evaluating Omni Reward Models for Interleaved Text and Image
+ https://arxiv.org/abs/2512.16899
+ arXiv:2512.16899v3 Announce Type: replace
+Abstract: Reward models (RMs) are essential for training large language models (LLMs), but remain underexplored for omni models that handle interleaved image and text sequences. We introduce Multimodal RewardBench 2 (MMRB2), the first comprehensive benchmark for reward models on multimodal understanding and (interleaved) generation. MMRB2 spans four tasks: text-to-image, image editing, interleaved generation, and multimodal reasoning ("thinking-with-images"), providing 1,000 expert-annotated preference pairs per task from 23 models and agents across 21 source tasks. MMRB2 is designed with: (1) practical but challenging prompts; (2) responses from state-of-the-art models and agents; and (3) preference pairs with strong human-expert consensus, curated via an ensemble filtering strategy. Using MMRB2, we study existing judges for each subtask, including multimodal LLM-as-a-judge and models trained with human preferences. The latest Gemini 3 Pro attains 75-80% accuracy. GPT-5 and Gemini 2.5 Pro reach 66-75% accuracy, compared to >90% for humans, yet surpass the widely used GPT-4o (59%). The best performing open-source model Qwen3-VL-32B achieves similar accuracies as Gemini 2.5 Flash (64%). We also show that MMRB2 performance strongly correlates with downstream task success using Best-of-N sampling and conduct an in-depth analysis that shows key areas to improve the reward models going forward.
+ oai:arXiv.org:2512.16899v3
+ cs.CL
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yushi Hu, Reyhane Askari-Hemmat, Melissa Hall, Emily Dinan, Luke Zettlemoyer, Marjan Ghazvininejad
+
+
+ What You Trust Is Insecure: Demystifying How Developers (Mis)Use Trusted Execution Environments in Practice
+ https://arxiv.org/abs/2512.17363
+ arXiv:2512.17363v3 Announce Type: replace
+Abstract: Trusted Execution Environments (TEEs), such as Intel SGX and ARM TrustZone, provide isolated regions of CPU and memory for secure computation and are increasingly used to protect sensitive data and code across diverse application domains. However, little is known about how developers actually use TEEs in practice. This paper presents the first large-scale empirical study of real-world TEE applications. We collected and analyzed 241 open-source projects from GitHub that utilize the two most widely-adopted TEEs, Intel SGX and ARM TrustZone. By combining manual inspection with customized static analysis scripts, we examined their adoption contexts, usage patterns, and development practices across three phases. First, we categorized the projects into 8 application domains and identified trends in TEE adoption over time. We found that the dominant use case is IoT device security (30%), which contrasts sharply with prior academic focus on blockchain and cryptographic systems (7%), while AI model protection (12%) is rapidly emerging as a growing domain. Second, we analyzed how TEEs are integrated into software and observed that 32.4% of the projects reimplement cryptographic functionalities instead of using official SDK APIs, suggesting that current SDKs may have limited usability and portability to meet developers' practical needs. Third, we examined security practices through manual inspection and found that 25.3% (61 of 241) of the projects exhibit insecure coding behaviors when using TEEs, such as hardcoded secrets and missing input validation, which undermine their intended security guarantees. Our findings have important implications for improving the usability of TEE SDKs and supporting developers in trusted software development.
+ oai:arXiv.org:2512.17363v3
+ cs.SE
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yuqing Niu, Jieke Shi, Ruidong Han, Ye Liu, Chengyan Ma, Yunbo Lyu, David Lo
+
+
+ In Times of Crisis: An Exploratory Study of Media and Political Discourse on YouTube During the 2024 French Elections
+ https://arxiv.org/abs/2512.17768
+ arXiv:2512.17768v2 Announce Type: replace
+Abstract: YouTube has emerged as a major platform for political communication and news dissemination, particularly during high-stakes electoral periods. In the context of the 2024 European Parliament and French legislative elections, this study investigates how political actors and news media used YouTube to shape public discourse. We analyze over 100,000 video transcripts and metadata from 74 French YouTube channels operated by national news outlets, local media, and political figures. To identify the key themes emphasized during the campaign period, we applied a semi-automated method that combined large language models with clustering and manual review. The results reveal distinct thematic patterns across the political spectrum and media types, with right-leaning news outlets focusing on topics like immigration and left-leaning outlets emphasizing protest and media freedom. Themes generating the most audience engagement, measured by comment-to-view ratios, were most often the most polarizing ones. In contrast, less polarizing themes such as video games and nature showed higher approval, reflected in like-to-view ratios. We also observed a general tendency across all media types to portray political figures in neutral or critical terms rather than favorable ones.
+ oai:arXiv.org:2512.17768v2
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Vera Sosnovik, Caroline Violot, Mathias Humbert
+
+
+ AnyTask: an Automated Task and Data Generation Framework for Advancing Sim-to-Real Policy Learning
+ https://arxiv.org/abs/2512.17853
+ arXiv:2512.17853v2 Announce Type: replace
+Abstract: Generalist robot learning remains constrained by data: large-scale, diverse, and high-quality interaction data are expensive to collect in the real world. While simulation has become a promising way for scaling up data collection, the related tasks, including simulation task design, task-aware scene generation, expert demonstration synthesis, and sim-to-real transfer, still demand substantial human effort. We present AnyTask, an automated framework that pairs massively parallel GPU simulation with foundation models to design diverse manipulation tasks and synthesize robot data. We introduce three AnyTask agents for generating expert demonstrations aiming to solve as many tasks as possible: 1) ViPR, a novel task and motion planning agent with VLM-in-the-loop Parallel Refinement; 2) ViPR-Eureka, a reinforcement learning agent with generated dense rewards and LLM-guided contact sampling; 3) ViPR-RL, a hybrid planning and learning approach that jointly produces high-quality demonstrations with only sparse rewards. We train behavior cloning policies on generated data, validate them in simulation, and deploy them directly on real robot hardware. The policies generalize to novel object poses, achieving 44% average success across a suite of real-world pick-and-place, drawer opening, contact-rich pushing, and long-horizon manipulation tasks. Our project website is at https://anytask.rai-inst.com .
+ oai:arXiv.org:2512.17853v2
+ cs.RO
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ran Gong, Xiaohan Zhang, Jinghuan Shang, Maria Vittoria Minniti, Jigarkumar Patel, Valerio Pepe, Riedana Yan, Ahmet Gundogdu, Ivan Kapelyukh, Ali Abbas, Xiaoqiang Yan, Harsh Patel, Laura Herlant, Karl Schmeckpeper
+
+
+ LIR$^3$AG: A Lightweight Rerank Reasoning Strategy Framework for Retrieval-Augmented Generation
+ https://arxiv.org/abs/2512.18329
+ arXiv:2512.18329v2 Announce Type: replace
+Abstract: Retrieval-Augmented Generation (RAG) effectively enhances Large Language Models (LLMs) by incorporating retrieved external knowledge into the generation process. Reasoning models improve LLM performance in multi-hop QA tasks, which require integrating and reasoning over multiple pieces of evidence across different documents to answer a complex question. However, they often introduce substantial computational costs, including increased token consumption and inference latency. To better understand and mitigate this trade-off, we conduct a comprehensive study of reasoning strategies for reasoning models in RAG multi-hop QA tasks. Our findings reveal that reasoning models adopt structured strategies to integrate retrieved and internal knowledge, primarily following two modes: Context-Grounded Reasoning, which relies directly on retrieved content, and Knowledge-Reconciled Reasoning, which resolves conflicts or gaps using internal knowledge. To this end, we propose a novel Lightweight Rerank Reasoning Strategy Framework for RAG (LiR$^3$AG) to enable non-reasoning models to transfer reasoning strategies by restructuring retrieved evidence into coherent reasoning chains. LiR$^3$AG reduces output-token overhead by 98% and inference time by 58.6% on average, while improving an 8B non-reasoning model's F1 performance by 6.2% to 22.5%, surpassing a 32B reasoning model in RAG and offering a practical and efficient path forward for RAG systems.
+ oai:arXiv.org:2512.18329v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Guo Chen, Junjie Huang, Huaijin Xie, Fei Sun, Tao Jia
+
+
+ Efficient Optimization of Hierarchical Identifiers for Generative Recommendation
+ https://arxiv.org/abs/2512.18434
+ arXiv:2512.18434v2 Announce Type: replace
+Abstract: SEATER is a generative retrieval model that improves recommendation inference efficiency and retrieval quality by utilizing balanced tree-structured item identifiers and contrastive training objectives. We reproduce and validate SEATER's reported improvements in retrieval quality over strong baselines across all datasets from the original work, and extend the evaluation to Yambda, a large-scale music recommendation dataset. Our experiments verify SEATER's strong performance, but show that its tree construction step during training becomes a major bottleneck as the number of items grows. To address this, we implement and evaluate two alternative construction algorithms: a greedy method optimized for minimal build time, and a hybrid method that combines greedy clustering at high levels with more precise grouping at lower levels. The greedy method reduces tree construction time to less than 2% of the original with only a minor drop in quality on the dataset with the largest item collection. The hybrid method achieves retrieval quality on par with the original, and even improves on the largest dataset, while cutting construction time to just 5-8%. All data and code are publicly available for full reproducibility at https://github.com/joshrosie/re-seater.
+ oai:arXiv.org:2512.18434v2
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Federica Valeau, Odysseas Boufalis, Polytimi Gkotsi, Joshua Rosenthal, David Vos
+
+
+ An Evidence-Driven Analysis of Threat Information Sharing Challenges for Industrial Control Systems and Future Directions
+ https://arxiv.org/abs/2512.18714
+ arXiv:2512.18714v2 Announce Type: replace
+Abstract: The increasing cyber threats to critical infrastructure highlight the importance of private companies and government agencies in detecting and sharing information about threat activities. Although the need for improved threat information sharing is widely recognized, various technical and organizational challenges persist, hindering effective collaboration. In this study, we review the challenges that impede the sharing of usable threat information with critical infrastructure operators within the ICS domain. We analyze three major incidents: Stuxnet, Industroyer, and Triton. In addition, we perform a systematic analysis of 196 procedure examples across 79 MITRE ATT&CK techniques from 22 ICS-related malware families, utilizing automated natural language processing techniques to extract and categorize threat observables. We also investigate nine recent ICS vulnerability advisories from the CISA Known Exploitable Vulnerability catalog. Our analysis identified four important limitations in the ICS threat information sharing ecosystem: (i) the lack of coherent representation of artifacts related to ICS adversarial techniques in information sharing language standards (e.g., STIX); (ii) the dependence on undocumented proprietary technologies; (iii) limited technical details provided in vulnerability and threat incident reports; and (iv) the limited accessibility of technical details for observed adversarial techniques. This study aims to guide the development of future information-sharing standards, including the enhancement of the cyber-observable objects schema in STIX, to ensure accurate representation of artifacts specific to ICS environments.
+ oai:arXiv.org:2512.18714v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Adam Hahn, Rubin Krief, Daniel Rebori-Carretero, Rami Puzis, Aviad Elyashar, Nik Urlaub
+
+
+ Evaluating MCC for Low-Frequency Cyberattack Detection in Imbalanced Intrusion Detection Data
+ https://arxiv.org/abs/2512.19203
+ arXiv:2512.19203v2 Announce Type: replace
+Abstract: In many real-world network environments, several types of cyberattacks occur at very low rates compared to benign traffic, making them difficult for intrusion detection systems (IDS) to detect reliably. This imbalance causes traditional evaluation metrics, such as accuracy, to often overstate model performance in these conditions, masking failures on minority attack classes that are most important in practice. In this paper, we evaluate a set of base and meta classifiers on low-traffic attacks in the CSE-CIC-IDS2017 dataset and compare their reliability in terms of accuracy and Matthews Correlation Coefficient (MCC). The results show that accuracy consistently inflates performance, while MCC provides a more accurate assessment of a classifier's performance across both majority and minority classes. Meta-classification methods, such as LogitBoost and AdaBoost, demonstrate more effective minority class detection when measured by MCC, revealing trends that accuracy fails to capture. These findings establish the need for imbalance-aware evaluation and make MCC a more trustworthy metric for IDS research involving low-traffic cyberattacks.
+ oai:arXiv.org:2512.19203v2
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Prameshwar Thiyagarajan, Chad A. Williams
+
+
+ ActAvatar: Temporally-Aware Precise Action Control for Talking Avatars
+ https://arxiv.org/abs/2512.19546
+ arXiv:2512.19546v2 Announce Type: replace
+Abstract: Despite significant advances in talking avatar generation, existing methods face critical challenges: insufficient text-following capability for diverse actions, lack of temporal alignment between actions and audio content, and dependency on additional control signals such as pose skeletons. We present ActAvatar, a framework that achieves phase-level precision in action control through textual guidance by capturing both action semantics and temporal context. Our approach introduces three core innovations: (1) Phase-Aware Cross-Attention (PACA), which decomposes prompts into a global base block and temporally-anchored phase blocks, enabling the model to concentrate on phase-relevant tokens for precise temporal-semantic alignment; (2) Progressive Audio-Visual Alignment, which aligns modality influence with the hierarchical feature learning process-early layers prioritize text for establishing action structure while deeper layers emphasize audio for refining lip movements, preventing modality interference; (3) A two-stage training strategy that first establishes robust audio-visual correspondence on diverse data, then injects action control through fine-tuning on structured annotations, maintaining both audio-visual alignment and the model's text-following capabilities. Extensive experiments demonstrate that ActAvatar significantly outperforms state-of-the-art methods in both action control and visual quality.
+ oai:arXiv.org:2512.19546v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ziqiao Peng, Yi Chen, Yifeng Ma, Guozhen Zhang, Zhiyao Sun, Zixiang Zhou, Youliang Zhang, Zhengguang Zhou, Zhaoxin Fan, Hongyan Liu, Yuan Zhou, Qinglin Lu, Jun He
+
+
+ Deep Legendre Transform
+ https://arxiv.org/abs/2512.19649
+ arXiv:2512.19649v2 Announce Type: replace
+Abstract: We introduce a novel deep learning algorithm for computing convex conjugates of differentiable convex functions, a fundamental operation in convex analysis with various applications in different fields such as optimization, control theory, physics and economics. While traditional numerical methods suffer from the curse of dimensionality and become computationally intractable in high dimensions, more recent neural network-based approaches scale better, but have mostly been studied with the aim of solving optimal transport problems and require the solution of complicated optimization or max-min problems. Using an implicit Fenchel formulation of convex conjugation, our approach facilitates an efficient gradient-based framework for the minimization of approximation errors and, as a byproduct, also provides a posteriori estimates of the approximation accuracy. Numerical experiments demonstrate our method's ability to deliver accurate results across different high-dimensional examples. Moreover, by employing symbolic regression with Kolmogorov-Arnold networks, it is able to obtain the exact convex conjugates of specific convex functions.
+ oai:arXiv.org:2512.19649v2
+ cs.LG
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Aleksey Minabutdinov, Patrick Cheridito
+
+
+ Fun-Audio-Chat Technical Report
+ https://arxiv.org/abs/2512.20156
+ arXiv:2512.20156v4 Announce Type: replace
+Abstract: Recent advancements in joint speech-text models show great potential for seamless voice interactions. However, existing models face critical challenges: temporal resolution mismatch between speech tokens (25Hz) and text tokens (~3Hz) dilutes semantic information, incurs high computational costs, and causes catastrophic forgetting of text LLM knowledge. We introduce Fun-Audio-Chat, a Large Audio Language Model addressing these limitations via two innovations from our previous work DrVoice. First, Dual-Resolution Speech Representations (DRSR): the Shared LLM processes audio at efficient 5Hz (via token grouping), while the Speech Refined Head generates high-quality tokens at 25Hz, balancing efficiency (~50% GPU reduction) and quality. Second, Core-Cocktail Training, a two-stage fine-tuning with intermediate merging that mitigates catastrophic forgetting. We then apply Multi-Task DPO Training to enhance robustness, audio understanding, instruction-following and voice empathy. This multi-stage post-training enables Fun-Audio-Chat to retain text LLM knowledge while gaining powerful audio understanding, reasoning, and generation. Unlike recent LALMs requiring large-scale audio-text pre-training, Fun-Audio-Chat leverages pre-trained models and extensive post-training. Fun-Audio-Chat 8B and MoE 30B-A3B achieve competitive performance on Speech-to-Text and Speech-to-Speech tasks, ranking top among similar-scale models on Spoken QA benchmarks. They also achieve competitive to superior performance on Audio Understanding, Speech Function Calling, Instruction-Following and Voice Empathy. We develop Fun-Audio-Chat-Duplex, a full-duplex variant with strong performance on Spoken QA and full-duplex interactions. We open-source Fun-Audio-Chat-8B with training and inference code, and provide an interactive demo, at https://github.com/FunAudioLLM/Fun-Audio-Chat .
+ oai:arXiv.org:2512.20156v4
+ cs.CL
+ cs.AI
+ cs.SD
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tongyi Fun Team, Qian Chen, Luyao Cheng, Chong Deng, Xiangang Li, Jiaqing Liu, Chao-Hong Tan, Wen Wang, Junhao Xu, Jieping Ye, Qinglin Zhang, Qiquan Zhang, Jingren Zhou
+
+
+ Adaptive Multi-task Learning for Probabilistic Load Forecasting
+ https://arxiv.org/abs/2512.20232
+ arXiv:2512.20232v2 Announce Type: replace
+Abstract: Simultaneous load forecasting across multiple entities (e.g., regions, buildings) is crucial for the efficient, reliable, and cost-effective operation of power systems. Accurate load forecasting is a challenging problem due to the inherent uncertainties in load demand, dynamic changes in consumption patterns, and correlations among entities. Multi-task learning has emerged as a powerful machine learning approach that enables simultaneous learning across multiple related problems. However, its application to load forecasting remains underexplored and is limited to offline learning methods, which cannot capture changes in consumption patterns. This paper presents an adaptive multi-task learning method for probabilistic load forecasting. The proposed method can dynamically adapt to changes in consumption patterns and correlations among entities. In addition, the techniques presented provide reliable probabilistic predictions for loads of multiple entities and assess load uncertainties. Specifically, the method is based on vector-valued hidden Markov models and uses a recursive process to update the model parameters and provide predictions with the most recent parameters. The performance of the proposed method is evaluated using datasets that contain the load demand of multiple entities and exhibit diverse and dynamic consumption patterns. The experimental results show that the presented techniques outperform existing methods both in terms of forecasting performance and uncertainty assessment.
+ oai:arXiv.org:2512.20232v2
+ cs.LG
+ stat.AP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Onintze Zaballa, Ver\'onica \'Alvarez, Santiago Mazuelas
+
+
+ D^3ETOR: Debate-Enhanced Pseudo Labeling and Frequency-Aware Progressive Debiasing for Weakly-Supervised Camouflaged Object Detection with Scribble Annotations
+ https://arxiv.org/abs/2512.20260
+ arXiv:2512.20260v3 Announce Type: replace
+Abstract: Weakly-Supervised Camouflaged Object Detection (WSCOD) aims to locate and segment objects that are visually concealed within their surrounding scenes, relying solely on sparse supervision such as scribble annotations. Despite recent progress, existing WSCOD methods still lag far behind fully supervised ones due to two major limitations: (1) the pseudo masks generated by general-purpose segmentation models (e.g., SAM) and filtered via rules are often unreliable, as these models lack the task-specific semantic understanding required for effective pseudo labeling in COD; and (2) the neglect of inherent annotation bias in scribbles, which hinders the model from capturing the global structure of camouflaged objects. To overcome these challenges, we propose ${D}^{3}$ETOR, a two-stage WSCOD framework consisting of Debate-Enhanced Pseudo Labeling and Frequency-Aware Progressive Debiasing. In the first stage, we introduce an adaptive entropy-driven point sampling method and a multi-agent debate mechanism to enhance the capability of SAM for COD, improving the interpretability and precision of pseudo masks. In the second stage, we design FADeNet, which progressively fuses multi-level frequency-aware features to balance global semantic understanding with local detail modeling, while dynamically reweighting supervision strength across regions to alleviate scribble bias. By jointly exploiting the supervision signals from both the pseudo masks and scribble semantics, ${D}^{3}$ETOR significantly narrows the gap between weakly and fully supervised COD, achieving state-of-the-art performance on multiple benchmarks.
+ oai:arXiv.org:2512.20260v3
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jiawei Ge, Jiuxin Cao, Xinyi Li, Xuelin Zhu, Chang Liu, Bo Liu, Chen Feng, Ioannis Patras
+
+
+ Mixture-of-Experts with Gradient Conflict-Driven Subspace Topology Pruning for Emergent Modularity
+ https://arxiv.org/abs/2512.20291
+ arXiv:2512.20291v4 Announce Type: replace
+Abstract: Mixture-of-Experts (MoE) architectures achieve parameter efficiency through conditional computation, yet contemporary designs suffer from two fundamental limitations: structural parameter isolation that causes catastrophic forgetting, and instruction-overfitting that degrades performance in instruction-free scenarios. We propose CDSP-MoE (Conflict-Driven Subspace Pruning MoE), a framework that addresses these issues through a paradigm shift from isolated expert containers to dynamic expert instantiation within a shared physical subspace. Grounded in the Universal Weight Subspace Hypothesis, CDSP-MoE maintains a super-complete parameter backbone where logical experts are carved out via learnable topology masks. Unlike prior work that uses gradient conflict for token reassignment or optimization surgery, we leverage it as a structural supervisory signal: a Lagged Gradient Game penalizes interfering connections in the shared manifold, enabling the topology to spontaneously prune conflicting pathways and evolve interpretable modular structures. Experimental results demonstrate that CDSP-MoE achieves robust content-driven routing without human-defined task labels, maintaining semantic specialization even under strict blind inference protocols where explicit instructions are absent. Code is available at: https://github.com/konodiodaaaaa1/Conflict-Driven-Subspace-Pruning-Mixture-of-Experts
+ oai:arXiv.org:2512.20291v4
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuxing Gan, Ziyu Lei
+
+
+ MMEDIT: A Unified Framework for Multi-Type Audio Editing via Audio Language Model
+ https://arxiv.org/abs/2512.20339
+ arXiv:2512.20339v3 Announce Type: replace
+Abstract: Text-guided audio editing aims to modify specific acoustic events while strictly preserving non-target content. Despite recent progress, existing approaches remain fundamentally limited. Training-free methods often suffer from signal degradation caused by diffusion inversion, while training-based methods, although achieving higher generation quality, are severely constrained by the scarcity of high-quality paired data and task formulations that cover only a narrow subset of editing operations. In addition, standard architectures typically decouple text and audio processing, limiting the ability to align instructions with specific acoustic contexts.
+ To address these challenges, we propose MMEdit, an audio-language-model-driven framework for unified audio editing. We systematically extend task definitions to cover a comprehensive range of editing operations, including addition, replacement, removal, reordering, and attribute modification. Furthermore, we design a scalable data synthesis pipeline to construct large-scale paired datasets with fine-grained event-level annotations. To capture complex editing semantics, we integrate a Qwen2-Audio encoder with an MMDiT-based generator, enabling precise cross-modal alignment and localized editing.
+ Experimental results demonstrate that our method achieves superior editing localization accuracy, robust instruction following, and high fidelity in non-edited regions.
+ oai:arXiv.org:2512.20339v3
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ye Tao, Wen Wu, Chao Zhang, Mengyue Wu, Shuai Wang, Xuenan Xu
+
+
+ Memory-Efficient Acceleration of Block Low-Rank Foundation Models on Resource Constrained GPUs
+ https://arxiv.org/abs/2512.20861
+ arXiv:2512.20861v2 Announce Type: replace
+Abstract: Recent advances in transformer-based foundation models have made them the default choice for many tasks, but their rapidly growing size makes fitting a full model on a single GPU increasingly difficult and their computational cost prohibitive. Block low-rank (BLR) compression techniques address this challenge by learning compact representations of weight matrices. While traditional low-rank (LR) methods often incur sharp accuracy drops, BLR approaches such as Monarch and BLAST can better capture the underlying structure, thus preserving accuracy while reducing computations and memory footprints. In this work, we use roofline analysis to show that, although BLR methods achieve theoretical savings and practical speedups for single-token inference, multi-token inference often becomes memory-bound in practice, increasing latency despite compiler-level optimizations in PyTorch. To address this, we introduce custom Triton kernels with partial fusion and memory layout optimizations for both Monarch and BLAST. On memory-constrained NVIDIA GPUs such as Jetson Orin Nano and A40, our kernels deliver up to $3.76\times$ speedups and $3\times$ model size compression over PyTorch dense baselines using CUDA backend and compiler-level optimizations, while supporting various models including Llama-7/1B, GPT2-S, DiT-XL/2, and ViT-B. Our code is available at https://github.com/pabillam/mem-efficient-blr.
+ oai:arXiv.org:2512.20861v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Pierre Abillama, Changwoo Lee, Juechu Dong, David Blaauw, Dennis Sylvester, Hun-Seok Kim
+
+
+ DiEC: Diffusion Embedded Clustering
+ https://arxiv.org/abs/2512.20905
+ arXiv:2512.20905v3 Announce Type: replace
+Abstract: Deep clustering methods typically rely on a single, well-defined representation for clustering. In contrast, pretrained diffusion models provide abundant and diverse multi-scale representations across network layers and noise timesteps. However, a key challenge is how to efficiently identify the most clustering-friendly representation in the layer $\times$ timestep space. To address this issue, we propose Diffusion Embedded Clustering (DiEC), an unsupervised framework that performs clustering by leveraging optimal intermediate representations from pretrained diffusion models. DiEC systematically evaluates the clusterability of representations along the trajectory of network depth and noise timesteps. Meanwhile, an unsupervised search strategy is designed for recognizing the Clustering-optimal Layer (COL) and Clustering-optimal Timestep (COT) in the layer $\times$ timestep space of pretrained diffusion models, aiming to promote clustering performance and reduce computational overhead. DiEC is fine-tuned primarily with a structure-preserving DEC-style KL-divergence objective at the fixed COL + COT, together with a random-timestep diffusion denoising objective to maintain the generative capability of the pretrained model. Without relying on augmentation-based consistency constraints or contrastive learning, DiEC achieves excellent clustering performance across multiple benchmark datasets. Code will be released upon acceptance.
+ oai:arXiv.org:2512.20905v3
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haidong Hu, Xiaoyu Zheng, Jin Zhou, Yingxu Wang, Rui Wang, Pei Dong, Shiyuan Han, Lin Wang, C. L. Philip Chen, Tong Zhang, Yuehui Chen
+
+
+ From Human Bias to Robot Choice: How Occupational Contexts and Racial Priming Shape Robot Selection
+ https://arxiv.org/abs/2512.20951
+ arXiv:2512.20951v3 Announce Type: replace
+Abstract: As artificial agents increasingly integrate into professional environments, fundamental questions have emerged about how societal biases influence human-robot selection decisions. We conducted two comprehensive experiments (N = 1,038) examining how occupational contexts and stereotype activation shape robotic agent choices across construction, healthcare, educational, and athletic domains. Participants made selections from artificial agents that varied systematically in skin tone and anthropomorphic characteristics. Our study revealed distinct context-dependent patterns. Healthcare and educational scenarios demonstrated strong favoritism toward lighter-skinned artificial agents, while construction and athletic contexts showed greater acceptance of darker-toned alternatives. Participant race was associated with systematic differences in selection patterns across professional domains. The second experiment demonstrated that exposure to human professionals from specific racial backgrounds systematically shifted later robotic agent preferences in stereotype-consistent directions. These findings show that occupational biases and color-based discrimination transfer directly from human-human to human-robot evaluation contexts. The results highlight mechanisms through which robotic deployment may unintentionally perpetuate existing social inequalities.
+ oai:arXiv.org:2512.20951v3
+ cs.RO
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3757279.3788658
+ Jiangen He, Wanqi Zhang, Jessica Barfield
+
+
+ Physic-HM: Restoring Physical Generative Logic in Multimodal Anomaly Detection via Hierarchical Modulation
+ https://arxiv.org/abs/2512.21650
+ arXiv:2512.21650v2 Announce Type: replace
+Abstract: Multimodal Unsupervised Anomaly Detection (UAD) is critical for quality assurance in smart manufacturing, particularly in complex processes like robotic welding. However, existing methods often suffer from process-logic blindness, treating process modalities (e.g., real-time video, audio, and sensors) and result modalities (e.g., post-weld images) as symmetric feature sources, thereby ignoring the inherent unidirectional physical generative logic. Furthermore, the heterogeneity gap between high-dimensional visual data and low-dimensional sensor signals frequently leads to critical process context being drowned out. In this paper, we propose Physic-HM, a multimodal UAD framework that explicitly incorporates physical inductive bias to model the process-to-result dependency. Specifically, our framework incorporates two key innovations: a Sensor-Guided PHM Modulation mechanism that utilizes low-dimensional sensor signals as context to guide high-dimensional audio-visual feature extraction, and a Physic-Hierarchical architecture that enforces a unidirectional generative mapping to identify anomalies that violate physical consistency. Extensive experiments on the Weld-4M benchmark demonstrate that Physic-HM achieves a SOTA I-AUROC of 90.7%. The source code of Physic-HM will be released after the paper is accepted.
+ oai:arXiv.org:2512.21650v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xiao Liu, Junchen Jin, Yanjie Zhao, Zhixuan Xing
+
+
+ InstructMoLE: Instruction-Guided Mixture of Low-rank Experts for Multi-Conditional Image Generation
+ https://arxiv.org/abs/2512.21788
+ arXiv:2512.21788v2 Announce Type: replace
+Abstract: Parameter-Efficient Fine-Tuning of Diffusion Transformers (DiTs) for diverse, multi-conditional tasks often suffers from task interference when using monolithic adapters like LoRA. The Mixture of Low-rank Experts (MoLE) architecture offers a modular solution, but its potential is usually limited by routing policies that operate at a token level. Such local routing can conflict with the global nature of user instructions, leading to artifacts like spatial fragmentation and semantic drift in complex image generation tasks. To address these limitations, we introduce InstructMoLE, a novel framework that employs an Instruction-Guided Mixture of Low-Rank Experts. Instead of per-token routing, InstructMoLE utilizes a global routing signal, Instruction-Guided Routing (IGR), derived from the user's comprehensive instruction. This ensures that a single, coherently chosen expert council is applied uniformly across all input tokens, preserving the global semantics and structural integrity of the generation process. To complement this, we introduce an output-space orthogonality loss, which promotes expert functional diversity and mitigates representational collapse. Extensive experiments demonstrate that InstructMoLE significantly outperforms existing LoRA adapters and MoLE variants across challenging multi-conditional generation benchmarks. Our work presents a robust and generalizable framework for instruction-driven fine-tuning of generative models, enabling superior compositional control and fidelity to user intent.
+ oai:arXiv.org:2512.21788v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jinqi Xiao, Qing Yan, Liming Jiang, Zichuan Liu, Hao Kang, Shen Sang, Tiancheng Zhi, Jing Liu, Cheng Yang, Xin Lu, Bo Yuan
+
+
+ SLIM-Brain: A Data- and Training-Efficient Foundation Model for fMRI Data Analysis
+ https://arxiv.org/abs/2512.21881
+ arXiv:2512.21881v2 Announce Type: replace
+Abstract: Foundation models are emerging as a powerful paradigm for fMRI analysis, but current approaches face a dual bottleneck of data- and training-efficiency. Atlas-based methods aggregate voxel signals into fixed regions of interest, reducing data dimensionality but discarding fine-grained spatial details, and requiring extremely large cohorts to train effectively as general-purpose foundation models. Atlas-free methods, on the other hand, operate directly on voxel-level information, preserving spatial fidelity, but they are prohibitively memory- and compute-intensive, making large-scale pre-training infeasible. We introduce SLIM-Brain (Sample-efficient, Low-memory fMRI Foundation Model for Human Brain), a new atlas-free foundation model that simultaneously improves both data- and training-efficiency. SLIM-Brain adopts a two-stage adaptive design: (i) a lightweight temporal extractor captures global context across full sequences and ranks data windows by saliency, and (ii) a 4D hierarchical encoder (Hiera-JEPA) learns fine-grained voxel-level representations only from the top-$k$ selected windows, while deleting about 70% of masked patches. Extensive experiments across seven public benchmarks show that SLIM-Brain establishes new state-of-the-art performance on diverse tasks, while requiring only 4 thousand pre-training sessions and approximately 30% of GPU memory compared to traditional voxel-level methods.
+ oai:arXiv.org:2512.21881v2
+ cs.CV
+ q-bio.NC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mo Wang, Junfeng Xia, Wenhao Ye, Enyu Liu, Kaining Peng, Jianfeng Feng, Quanying Liu, Hongkai Wen
+
+
+ Validation methodology on real data of reversible Kalman Filter for state estimation with Manifold
+ https://arxiv.org/abs/2512.22126
+ arXiv:2512.22126v2 Announce Type: replace
+Abstract: This work extends a previous study that introduced an algorithm for state estimation on manifolds within the framework of the Kalman filter. Its objective is to address the limitations of the earlier approach. The reversible Kalman filter was designed to provide a methodology for evaluating the accuracy of existing Kalman filter variants with arbitrary precision on synthetic data. It exhibits favorable numerical properties, achieving arbitrary precision without relying on the small-velocity assumption and depending only on sensor noise. However, its application to real data encountered difficulties related to measurement noise, which were mitigated using a heuristic. In particular, the heuristic involved an event detection step that switches between the reversible Kalman filter and a classical Kalman variant at chosen moments. In the present work, we study this detection step and propose a methodology to determine at which moments the reversible Kalman approach improves on the classical multiplicative variant.
+ oai:arXiv.org:2512.22126v2
+ eess.SY
+ cs.SY
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Svyatoslav Covanov, Cedric Pradalier
+
+
+ HybridFlow: Adaptive Task Scheduling for Fast and Token-Efficient LLM Inference in Edge-Cloud Collaboration
+ https://arxiv.org/abs/2512.22137
+ arXiv:2512.22137v2 Announce Type: replace
+Abstract: Large language models (LLMs) exhibit impressive reasoning and problem-solving abilities, yet their substantial inference latency and token consumption pose major challenges for real-time deployment on resource-limited edge devices. Recent efforts toward edge-cloud collaboration have attempted to mitigate this issue, but most existing methods adopt coarse-grained task allocation strategies-assigning entire queries either to the edge or the cloud. Such rigid partitioning fails to exploit fine-grained reasoning parallelism and often leads to redundant computation and inefficient resource utilization. To this end, we propose HybridFlow, a resource-adaptive inference framework that enables fast and token-efficient collaborative reasoning between edge and cloud LLMs. HybridFlow operates in two stages: (1) task decomposition and parallel execution, which dynamically splits a complex query into interdependent subtasks that can execute as soon as their dependencies are resolved; and (2) resource-aware subtask routing, where a learned router adaptively assigns each subtask to the edge or cloud model according to predicted utility gains and real-time budget states. Comprehensive evaluations on GPQA, MMLU-Pro, AIME, and LiveBench-Reasoning demonstrate that HybridFlow effectively reduces end-to-end inference time and overall token usage while maintaining competitive accuracy.
+ oai:arXiv.org:2512.22137v2
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiangwen Dong, Jiayu Li, Tianhang Zheng, Wanyu Lin
+
+
+ M\"untz-Sz\'asz Networks: Neural Architectures with Learnable Power-Law Bases
+ https://arxiv.org/abs/2512.22222
+ arXiv:2512.22222v3 Announce Type: replace
+Abstract: Standard neural network architectures employ fixed activation functions (ReLU, tanh, sigmoid) that are poorly suited for approximating functions with singular or fractional power behavior, a structure that arises ubiquitously in physics, including boundary layers, fracture mechanics, and corner singularities. We introduce M\"untz-Sz\'asz Networks (MSN), a novel architecture that replaces fixed smooth activations with learnable fractional power bases grounded in classical approximation theory. Each MSN edge computes $\phi(x) = \sum_k a_k |x|^{\mu_k} + \sum_k b_k \mathrm{sign}(x)|x|^{\lambda_k}$, where the exponents $\{\mu_k, \lambda_k\}$ are learned alongside the coefficients. We prove that MSN inherits universal approximation from the M\"untz-Sz\'asz theorem and establish novel approximation rates: for functions of the form $|x|^\alpha$, MSN achieves error $\mathcal{O}(|\mu - \alpha|^2)$ with a single learned exponent, whereas standard MLPs require $\mathcal{O}(\epsilon^{-1/\alpha})$ neurons for comparable accuracy. On supervised regression with singular target functions, MSN achieves 5-8x lower error than MLPs with 10x fewer parameters. Physics-informed neural networks (PINNs) represent a particularly demanding application for singular function approximation; on PINN benchmarks including a singular ODE and stiff boundary-layer problems, MSN achieves 3-6x improvement while learning interpretable exponents that match the known solution structure. Our results demonstrate that theory-guided architectural design can yield dramatic improvements for scientifically-motivated function classes.
+ oai:arXiv.org:2512.22222v3
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Gnankan Landry Regis N'guessan
+
+
+ EvoXplain: When Machine Learning Models Agree on Predictions but Disagree on Why -- Measuring Mechanistic Multiplicity Across Training Runs
+ https://arxiv.org/abs/2512.22240
+ arXiv:2512.22240v2 Announce Type: replace
+Abstract: Machine learning models are primarily judged by predictive performance, especially in applied settings. Once a model reaches high accuracy, its explanation is often assumed to be correct and trustworthy. However, this assumption raises an overlooked question: when two models achieve high accuracy, do they rely on the same internal logic, or do they reach the same outcome via different -- and potentially competing -- mechanisms? We introduce EvoXplain, a diagnostic framework that measures the stability of model explanations across repeated training. Rather than analysing a single trained model, EvoXplain treats explanations as samples drawn from the stochastic optimisation process itself -- without aggregating predictions or constructing ensembles -- and examines whether these samples form a single coherent explanation or separate into multiple, distinct explanatory modes. We evaluate EvoXplain on the Breast Cancer and COMPAS datasets using two widely deployed model classes: Logistic Regression and Random Forests. Although all models achieve high predictive accuracy, their explanations frequently exhibit clear multimodality. Even models commonly assumed to be stable, such as Logistic Regression, can produce multiple well-separated explanatory basins under repeated training on the same data split. These differences are not explained by hyperparameter variation or simple performance trade-offs. EvoXplain does not attempt to select a 'correct' explanation. Instead, it makes explanatory instability visible and quantifiable, revealing when single-instance or averaged explanations obscure the existence of multiple underlying mechanisms. More broadly, EvoXplain reframes interpretability as a property of a model class under repeated instantiation, rather than of any single trained model.
+ oai:arXiv.org:2512.22240v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chama Bensmail
+
+
+ Improved cystic hygroma detection from prenatal imaging using ultrasound-specific self-supervised representation learning
+ https://arxiv.org/abs/2512.22730
+ arXiv:2512.22730v2 Announce Type: replace
+Abstract: Cystic hygroma is a high-risk prenatal ultrasound finding that portends high rates of chromosomal abnormalities, structural malformations, and adverse pregnancy outcomes. Automated detection can increase reproducibility and support scalable early screening programs, but supervised deep learning methods are limited by small labelled datasets. This study assesses whether ultrasound-specific self-supervised pretraining can facilitate accurate, robust deep learning detection of cystic hygroma in first-trimester ultrasound images. We fine-tuned the Ultrasound Self-Supervised Foundation Model with Masked Autoencoding (USF-MAE), pretrained on over 370,000 unlabelled ultrasound images, for binary classification of normal controls and cystic hygroma cases used in this study. Performance was evaluated on the same curated ultrasound dataset, preprocessing pipeline, and 4-fold cross-validation protocol as for the DenseNet-169 baseline, using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (ROC-AUC). Model interpretability was analyzed qualitatively using Score-CAM visualizations. USF-MAE outperformed the DenseNet-169 baseline on all evaluation metrics. The proposed model yielded a mean accuracy of 0.96, sensitivity of 0.94, specificity of 0.98, and ROC-AUC of 0.98 compared to 0.93, 0.92, 0.94, and 0.94 for the DenseNet-169 baseline, respectively. Qualitative Score-CAM visualizations of model predictions demonstrated clinical relevance by highlighting expected regions in the fetal neck for both positive and negative cases. Paired statistical analysis using a Wilcoxon signed-rank test confirmed that performance improvements achieved by USF-MAE were statistically significant (p = 0.0057).
+ oai:arXiv.org:2512.22730v2
+ cs.CV
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Youssef Megahed, Robin Ducharme, Inok Lee, Inbal Willner, Adrian D. C. Chan, Mark Walker, Steven Hawken
+
+
+ Covering in Hamming and Grassmann Spaces: New Bounds and Reed--Solomon-Based Constructions
+ https://arxiv.org/abs/2512.22911
+ arXiv:2512.22911v2 Announce Type: replace
+Abstract: We study covering problems in Hamming and Grassmann spaces through a unified coding-theoretic and information-theoretic framework. Viewing covering as a form of quantization in general metric spaces, we introduce the notion of the average covering radius as a natural measure of average distortion, complementing the classical worst-case covering radius. By leveraging tools from one-shot rate-distortion theory, we derive explicit non-asymptotic random-coding bounds on the average covering radius in both spaces, which serve as fundamental performance benchmarks.
+ On the construction side, we develop efficient puncturing-based covering algorithms for generalized Reed--Solomon (GRS) codes in the Hamming space and extend them to a new family of subspace codes, termed character-Reed--Solomon (CRS) codes, for Grassmannian quantization under the chordal distance. Our results reveal that, despite poor worst-case covering guarantees, these structured codes exhibit strong average covering performance. In particular, numerical results in the Hamming space demonstrate that RS-based constructions often outperform random codebooks in terms of average covering radius. In the one-dimensional Grassmann space, we numerically show that CRS codes over prime fields asymptotically achieve average covering radii within a constant factor of the random-coding bound in the high-rate regime. Together, these results provide new insights into the role of algebraic structure in covering problems and high-dimensional quantization.
+ oai:arXiv.org:2512.22911v2
+ cs.IT
+ eess.SP
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Samin Riasat, Hessam Mahdavifar
+
+
+ PGOT: A Physics-Geometry Operator Transformer for Complex PDEs
+ https://arxiv.org/abs/2512.23192
+ arXiv:2512.23192v2 Announce Type: replace
+Abstract: While Transformers have demonstrated remarkable potential in modeling Partial Differential Equations (PDEs), modeling large-scale unstructured meshes with complex geometries remains a significant challenge. Existing efficient architectures often employ feature dimensionality reduction strategies, which inadvertently induces Geometric Aliasing, resulting in the loss of critical physical boundary information. To address this, we propose the Physics-Geometry Operator Transformer (PGOT), designed to reconstruct physical feature learning through explicit geometry awareness. Specifically, we propose Spectrum-Preserving Geometric Attention (SpecGeo-Attention). Utilizing a ``physics slicing-geometry injection" mechanism, this module incorporates multi-scale geometric encodings to explicitly preserve multi-scale geometric features while maintaining linear computational complexity $O(N)$. Furthermore, PGOT dynamically routes computations to low-order linear paths for smooth regions and high-order non-linear paths for shock waves and discontinuities based on spatial coordinates, enabling spatially adaptive and high-precision physical field modeling. PGOT achieves consistent state-of-the-art performance across four standard benchmarks and excels in large-scale industrial tasks including airfoil and car designs.
+ oai:arXiv.org:2512.23192v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zhuo Zhang, Xi Yang, Ying Miao, Xiaobin Hu, Yifu Gao, Yuan Zhao, Yong Yang, Canqun Yang, Boocheong Khoo
+
+
+ KernelEvolve: Scaling Agentic Kernel Coding for Heterogeneous AI Accelerators at Meta
+ https://arxiv.org/abs/2512.23236
+ arXiv:2512.23236v3 Announce Type: replace
+Abstract: Making deep learning recommendation model (DLRM) training and inference fast and efficient is important. However, this presents three key system challenges: model architecture diversity, kernel primitive diversity, and hardware generation and architecture heterogeneity. This paper presents KernelEvolve, an agentic kernel coding framework, to tackle heterogeneity at scale for DLRM. KernelEvolve is designed to take kernel specifications as input and automate the process of kernel generation and optimization for recommendation models across heterogeneous hardware architectures. KernelEvolve does so by operating at multiple programming abstractions, from Triton and CuTe DSL to low-level hardware-agnostic languages, spanning the full hardware-software optimization stack. The kernel optimization process is described as a graph-based search with a selection policy, universal operator, fitness function, and termination rule, and it dynamically adapts to the runtime execution context through retrieval-augmented prompt synthesis. We designed, implemented, and deployed KernelEvolve to optimize a wide variety of production recommendation models across generations of NVIDIA and AMD GPUs, as well as Meta's AI accelerators. We validate KernelEvolve on the publicly available KernelBench suite, achieving a 100% pass rate on all 250 problems across three difficulty levels, and on 160 PyTorch ATen operators across three heterogeneous hardware platforms, demonstrating 100% correctness. KernelEvolve reduces development time from weeks to hours and achieves substantial performance improvements over PyTorch baselines across diverse production use cases and for heterogeneous AI systems at scale. Beyond performance efficiency improvements, KernelEvolve significantly mitigates the programmability barrier for new AI hardware by enabling automated kernel generation for in-house developed AI hardware.
+ oai:arXiv.org:2512.23236v3
+ cs.LG
+ cs.AI
+ cs.AR
+ cs.MA
+ cs.PF
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Gang Liao, Hongsen Qin, Ying Wang, Alicia Golden, Michael Kuchnik, Yavuz Yetim, Jia Jiunn Ang, Chunli Fu, Yihan He, Samuel Hsia, Zewei Jiang, Dianshi Li, Uladzimir Pashkevich, Varna Puvvada, Feng Shi, Matt Steiner, Ruichao Xiao, Nathan Yan, Xiayu Yu, Zhou Fang, Roman Levenstein, Kunming Ho, Haishan Zhu, Alec Hammond, Richard Li, Ajit Mathews, Kaustubh Gondkar, Abdul Zainul-Abedin, Ketan Singh, Hongtao Yu, Wenyuan Chi, Barney Huang, Sean Zhang, Noah Weller, Zach Marine, Wyatt Cook, Carole-Jean Wu, Gaoxiang Liu
+
+
+ RxnBench: A Multimodal Benchmark for Evaluating Large Language Models on Chemical Reaction Understanding from Scientific Literature
+ https://arxiv.org/abs/2512.23565
+ arXiv:2512.23565v4 Announce Type: replace
+Abstract: The integration of Multimodal Large Language Models (MLLMs) into chemistry promises to revolutionize scientific discovery, yet their ability to comprehend the dense, graphical language of reactions within authentic literature remains underexplored. Here, we introduce RxnBench, a multi-tiered benchmark designed to rigorously evaluate MLLMs on chemical reaction understanding from scientific PDFs. RxnBench comprises two tasks: Single-Figure QA (SF-QA), which tests fine-grained visual perception and mechanistic reasoning using 1,525 questions derived from 305 curated reaction schemes, and Full-Document QA (FD-QA), which challenges models to synthesize information from 108 articles, requiring cross-modal integration of text, schemes, and tables. Our evaluation of MLLMs reveals a critical capability gap: while models excel at extracting explicit text, they struggle with deep chemical logic and precise structural recognition. Notably, models with inference-time reasoning significantly outperform standard architectures, yet none achieve 50\% accuracy on FD-QA. These findings underscore the urgent need for domain-specific visual encoders and stronger reasoning engines to advance autonomous AI chemists.
+ oai:arXiv.org:2512.23565v4
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Hanzheng Li, Xi Fang, Yixuan Li, Chaozheng Huang, Junjie Wang, Xi Wang, Hongzhe Bai, Bojun Hao, Shenyu Lin, Huiqi Liang, Linfeng Zhang, Guolin Ke
+
+
+ Prompt-Induced Over-Generation as Denial-of-Service: A Black-Box Attack-Side Benchmark
+ https://arxiv.org/abs/2512.23779
+ arXiv:2512.23779v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) can be driven into over-generation, emitting thousands of tokens before producing an end-of-sequence (EOS) token. This degrades answer quality, inflates latency and cost, and can be weaponized as a denial-of-service (DoS) attack. Recent work has begun to study DoS-style prompt attacks, but typically focuses on a single attack algorithm or assumes white-box access, without an attack-side benchmark that compares prompt-based attackers in a black-box, query-only regime with a known tokenizer. We introduce such a benchmark and study two prompt-only attackers. The first is an Evolutionary Over-Generation Prompt Search (EOGen) that searches the token space for prefixes that suppress EOS and induce long continuations. The second is a goal-conditioned reinforcement learning attacker (RL-GOAL) that trains a network to generate prefixes conditioned on a target length. To characterize behavior, we introduce Over-Generation Factor (OGF): the ratio of produced tokens to a model's context window, along with stall and latency summaries. EOGen discovers short-prefix attacks that raise Phi-3 to OGF = 1.39 +/- 1.14 (Success@>=2: 25.2%); RL-GOAL nearly doubles severity to OGF = 2.70 +/- 1.43 (Success@>=2: 64.3%) and drives budget-hit non-termination in 46% of trials.
+ oai:arXiv.org:2512.23779v2
+ cs.CR
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Manu, Yi Guo, Kanchana Thilakarathna, Nirhoshan Sivaroopan, Jo Plested, Tim Lynar, Jack Yang, Wangli Yang
+
+
+ Multi-Scenario Highway Lane-Change Intention Prediction: A Temporal Physics-Informed Multi-Modal Framework
+ https://arxiv.org/abs/2512.24075
+ arXiv:2512.24075v2 Announce Type: replace
+Abstract: Lane-change intention prediction is safety-critical for autonomous driving and ADAS, but remains difficult in naturalistic traffic due to noisy kinematics, severe class imbalance, and limited generalization across heterogeneous highway scenarios. We propose Temporal Physics-Informed AI (TPI-AI), a hybrid framework that fuses deep temporal representations with physics-inspired interaction cues. A two-layer bidirectional LSTM (Bi-LSTM) encoder learns compact embeddings from multi-step trajectory histories; we concatenate these embeddings with kinematics-, safety-, and interaction-aware features (e.g., headway, TTC, and safe-gap indicators) and train a LightGBM classifier for three-class intention recognition (No-LC, Left-LC, Right-LC). To improve minority-class reliability, we apply imbalance-aware optimization including resampling/weighting and fold-wise threshold calibration. Experiments on two large-scale drone-based datasets, highD (straight highways) and exiD (ramp-rich environments), use location-based splits and evaluate prediction horizons T = 1, 2, 3 s. TPI-AI outperforms standalone LightGBM and Bi-LSTM baselines, achieving macro-F1 of 0.9562, 0.9124, 0.8345 on highD and 0.9247, 0.8197, 0.7605 on exiD at T = 1, 2, 3 s, respectively. These results show that combining physics-informed interaction features with learned temporal embeddings yields robust multi-scenario lane-change intention prediction.
+ oai:arXiv.org:2512.24075v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jiazhao Shi, Ziyu Wang, Yichen Lin, Shoufeng Lu
+
+
+ When Does Pairing Seeds Reduce Variance? Evidence from a Multi-Agent Economic Simulation
+ https://arxiv.org/abs/2512.24145
+ arXiv:2512.24145v2 Announce Type: replace
+Abstract: Machine learning systems appear stochastic but are deterministically random, as seeded pseudorandom number generators produce identical realisations across repeated executions. Standard evaluation practice typically treats runs across alternatives as independent and does not exploit shared sources of randomness. This paper analyses the statistical structure of comparative evaluation under shared random seeds. Under this design, competing systems are evaluated using identical seeds, inducing matched stochastic realisations and yielding strict variance reduction whenever outcomes are positively correlated at the seed level. We demonstrate these effects using an extended learning-based multi-agent economic simulator, where paired evaluation exposes systematic differences in aggregate and distributional outcomes that remain statistically inconclusive under independent evaluation at fixed budgets.
+ oai:arXiv.org:2512.24145v2
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Udit Sharma
+
+
+ QianfanHuijin Technical Report: A Novel Multi-Stage Training Paradigm for Finance Industrial LLMs
+ https://arxiv.org/abs/2512.24314
+ arXiv:2512.24314v2 Announce Type: replace
+Abstract: Domain-specific enhancement of Large Language Models (LLMs) within the financial context has long been a focal point of industrial application. While previous models such as BloombergGPT and Baichuan-Finance primarily focused on knowledge enhancement, the deepening complexity of financial services has driven a growing demand for models that possess not only domain knowledge but also robust financial reasoning and agentic capabilities. In this paper, we present QianfanHuijin, a financial domain LLM, and propose a generalizable multi-stage training paradigm for industrial model enhancement.
+ Our approach begins with Continual Pre-training (CPT) on financial corpora to consolidate the knowledge base. This is followed by a fine-grained Post-training pipeline designed with increasing specificity: starting with Financial SFT, progressing to Finance Reasoning RL and Finance Agentic RL, and culminating in General RL aligned with real-world business scenarios. Empirical results demonstrate that QianfanHuijin achieves superior performance across various authoritative financial benchmarks. Furthermore, ablation studies confirm that the targeted Reasoning RL and Agentic RL stages yield significant gains in their respective capabilities. These findings validate our motivation and suggest that this fine-grained, progressive post-training methodology is poised to become a mainstream paradigm for various industrial-enhanced LLMs.
+ oai:arXiv.org:2512.24314v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shupeng Li, Weipeng Lu, Linyun Liu, Chen Lin, Shaofei Li, Zhendong Tan, Hanjun Zhong, Yucheng Zeng, Chenghao Zhu, Mengyue Liu, Daxiang Dong, Jianmin Wu, Yunting Xiao, Annan Li, Danyu Liu, Jingnan Zhang, Licen Liu, Dawei Yin, Dou Shen
+
+
+ From Perception to Punchline: Empowering VLM with the Art of In-the-wild Meme
+ https://arxiv.org/abs/2512.24555
+ arXiv:2512.24555v2 Announce Type: replace
+Abstract: Generating humorous memes is a challenging multimodal task that moves beyond direct image-to-caption supervision. It requires nuanced reasoning over visual content, contextual cues, and subjective humor. To bridge this gap between visual perception and humorous punchline creation, we propose HUMOR, a novel framework that guides VLMs through hierarchical reasoning and aligns them with group-wise human preferences. First, HUMOR employs a hierarchical, multi-path Chain-of-Thought (CoT): the model begins by identifying a template-level intent, then explores diverse reasoning paths under different contexts, and finally anchors onto a high-quality, context-specific path. This CoT supervision, which traces back from ground-truth captions, enhances reasoning diversity. We further show that this multi-path exploration with anchoring maintains a high expected humor quality, under the practical condition that high-quality paths retain significant probability mass. Second, to capture subjective humor, we train a pairwise reward model that operates within groups of memes sharing the same template. Following established theory, this approach ensures a consistent and robust proxy for human preference, even with subjective and noisy labels. The reward model then enables a group-wise reinforcement learning optimization, providing a theoretical guarantee of monotonic improvement within the trust region. Extensive experiments show that HUMOR empowers various VLMs with superior reasoning diversity, more reliable preference alignment, and higher overall meme quality. Beyond memes, our work presents a general training paradigm for open-ended, human-aligned multimodal generation, where success is guided by comparative judgment within coherent output groups.
+ oai:arXiv.org:2512.24555v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xueyan Li, Yingyi Xue, Mengjie Jiang, Qingzi Zhu, Yazhe Niu
+
+
+ Understanding and Steering the Cognitive Behaviors of Reasoning Models at Test-Time
+ https://arxiv.org/abs/2512.24574
+ arXiv:2512.24574v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) often rely on long chain-of-thought (CoT) reasoning to solve complex tasks. While effective, these trajectories are frequently inefficient, leading to high latency from excessive token generation, or unstable reasoning that alternates between underthinking (shallow, inconsistent steps) and overthinking (repetitive, verbose reasoning). In this work, we study the structure of reasoning trajectories and uncover specialized attention heads that correlate with distinct cognitive behaviors such as verification and backtracking. By lightly intervening on these heads at inference time, we can steer the model away from inefficient modes. Building on this insight, we propose CREST, a training-free method for Cognitive REasoning Steering at Test-time. CREST has two components: (1) an offline calibration step that identifies cognitive heads and derives head-specific steering vectors, and (2) an inference-time procedure that rotates hidden representations to suppress components along those vectors. CREST adaptively suppresses unproductive reasoning behaviors, yielding both higher accuracy and lower computational cost. Across diverse reasoning benchmarks and models, CREST improves accuracy by up to 17.5% while reducing token usage by 37.6%, offering a simple and effective pathway to faster, more reliable LLM reasoning.
+ oai:arXiv.org:2512.24574v2
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zhenyu Zhang, Xiaoxia Wu, Zhongzhu Zhou, Qingyang Wu, Yineng Zhang, Pragaash Ponnusamy, Harikaran Subbaraj, Jue Wang, Shuaiwen Leon Song, Ben Athiwaratkun
+
+
+ Secure Digital Semantic Communications: Fundamentals, Challenges, and Opportunities
+ https://arxiv.org/abs/2512.24602
+ arXiv:2512.24602v5 Announce Type: replace
+Abstract: Semantic communication (SemCom) has emerged as a promising paradigm for future wireless networks by prioritizing task-relevant meaning over raw data delivery, thereby reducing communication overhead and improving efficiency. However, shifting from bit-accurate transmission to task-oriented delivery introduces new security and privacy risks. These include semantic leakage, semantic manipulation, knowledge base vulnerabilities, model-related attacks, and threats to authenticity and availability. Most existing secure SemCom studies focus on analog SemCom, where semantic features are mapped to continuous channel inputs. In contrast, digital SemCom transmits semantic information through discrete bits or symbols within practical transceiver pipelines, offering stronger compatibility with real-world systems while exposing a distinct and underexplored attack surface. In particular, digital SemCom typically represents semantic information over a finite alphabet through explicit digital modulation, following two main routes: probabilistic modulation and deterministic modulation. These discrete mechanisms and practical transmission procedures introduce additional vulnerabilities affecting bit- or symbol-level semantic information, the modulation stage, and packet-based delivery and protocol operations. Motivated by these challenges and the lack of a systematic analysis of secure digital SemCom, this paper provides a structured review of the area. Specifically, we review SemCom fundamentals and clarify the architectural differences between analog and digital SemCom. We then summarize threats shared by both paradigms and organize the threat landscape specific to digital SemCom, followed by a discussion of potential defenses. Finally, we outline open research directions toward secure and deployable digital SemCom systems.
+ oai:arXiv.org:2512.24602v5
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weixuan Chen, Qianqian Yang, Yuanyuan Jia, Junyu Pan, Shuo Shao, Jincheng Dai, Meixia Tao, Ping Zhang
+
+
+ FPGA Co-Design for Efficient N:M Sparse and Quantized Model Inference
+ https://arxiv.org/abs/2512.24713
+ arXiv:2512.24713v2 Announce Type: replace
+Abstract: Large language models (LLMs) have demonstrated remarkable performance across a wide range of language processing tasks. However, this success comes at the cost of substantial computation and memory requirements, which significantly impedes their deployment in resource-constrained environments. To address this challenge, this work introduces an automation framework that leverages weight pruning and low-bit quantization, and presents a hardware-software co-design method that generates accelerators on the Field-Programmable Gate Array (FPGA) platform. In particular, we implement a unified pipeline that applies N:M structured pruning and 4-bit integer quantization to reduce the memory footprint, followed by optimized dequantization and matrix multiplication to enhance LLM inference on several hardware platforms, including CPUs, NVIDIA GPUs with Dense and 2:4 Sparse Tensor Cores, and a custom systolic-array-based FPGA accelerator. Utilizing 2:4 sparsity combined with quantization on $4096 \times 4096$ matrices, our approach achieves a reduction of up to $4\times$ in weight storage and a $1.71\times$ speedup in matrix multiplication, yielding a $1.29\times$ end-to-end latency reduction compared to dense GPU baselines. Scaling analysis on the LLaMA-7B model further shows that structured sparsity enhances the throughput per token by $1.36\times$. These results demonstrate the synergy of fine-grained N:M sparsity and quantization for enabling efficient and deployable LLM inference, while the proposed FPGA accelerator offers a flexible architectural path for supporting a broader class of sparsity patterns beyond the fixed 2:4 hardware constraints.
+ oai:arXiv.org:2512.24713v2
+ cs.LG
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Fen-Yu Hsieh, Yun-Chang Teng, Ding-Yong Hong, Jan-Jan Wu
+
+
+ Exponential lower bound via exponential sums
+ https://arxiv.org/abs/2601.00387
+ arXiv:2601.00387v2 Announce Type: replace
+Abstract: Valiant's famous VP vs. VNP conjecture states that the symbolic permanent polynomial does not have polynomial-size algebraic circuits. However, the best upper bound on the size of the circuits computing the permanent is exponential. Informally, VNP is an exponential sum of VP-circuits. In this paper we study whether, in general, exponential sums (of algebraic circuits) require exponential-size algebraic circuits. We show that the famous Shub-Smale $\tau$-conjecture indeed implies such an exponential lower bound for an exponential sum. Our main tools come from parameterized complexity. Along the way, we also prove an exponential fpt (fixed-parameter tractable) lower bound for the parameterized algebraic complexity class VW$_{nb}^0$[P], assuming the same conjecture. VW$_{nb}^0$[P] can be thought of as the weighted sums of (unbounded-degree) circuits, where only $\pm 1$ constants are cost-free. To the best of our knowledge, this is the first time the Shub-Smale $\tau$-conjecture has been applied to prove explicit exponential lower bounds.
+ Furthermore, we prove that when this class is fpt, then a variant of the counting hierarchy, namely the linear counting hierarchy collapses. Moreover, if a certain type of parameterized exponential sums is fpt, then integers, as well as polynomials with coefficients being definable in the linear counting hierarchy have subpolynomial $\tau$-complexity.
+ Finally, we characterize a related class VW[F], in terms of permanents, where we consider an exponential sum of algebraic formulas instead of circuits. We show that when we sum over cycle covers that have one long cycle and all other cycles have constant length, then the resulting family of polynomials is complete for VW[F] on certain types of graphs.
+ oai:arXiv.org:2601.00387v2
+ cs.CC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Somnath Bhattacharjee, Markus Bl\"aser, Pranjal Dutta, Saswata Mukherjee
+
+
+ Semantic Alignment of Multilingual Knowledge Graphs via Contextualized Vector Projections
+ https://arxiv.org/abs/2601.00814
+ arXiv:2601.00814v2 Announce Type: replace
+Abstract: This paper presents our work on a cross-lingual ontology alignment system that uses embedding-based cosine similarity matching. The ontology entities are made contextually richer by creating descriptions using novel techniques. We use a fine-tuned transformer-based multilingual model to generate better embeddings. We use cosine similarity to find positive ontology entity pairs and then apply threshold filtering to retain only highly similar entities. We evaluated our work on the OAEI-2022 MultiFarm track, achieving a 71% F1 score (78% recall and 65% precision) on the evaluation dataset, a 16% increase over the best baseline score. This suggests that our proposed alignment pipeline is able to capture subtle cross-lingual similarities.
+ oai:arXiv.org:2601.00814v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Abhishek Kumar
+
+
+ HalluZig: Hallucination Detection using Zigzag Persistence
+ https://arxiv.org/abs/2601.01552
+ arXiv:2601.01552v2 Announce Type: replace
+Abstract: The factual reliability of Large Language Models (LLMs) remains a critical barrier to their adoption in high-stakes domains due to their propensity to hallucinate. Current detection methods often rely on surface-level signals from the model's output, overlooking the failures that occur within the model's internal reasoning process. In this paper, we introduce a new paradigm for hallucination detection by analyzing the dynamic topology of the evolution of the model's layer-wise attention. We model the sequence of attention matrices as a zigzag graph filtration and use zigzag persistence, a tool from Topological Data Analysis, to extract a topological signature. Our core hypothesis is that factual and hallucinated generations exhibit distinct topological signatures. We validate our framework, HalluZig, on multiple benchmarks, demonstrating that it outperforms strong baselines. Furthermore, our analysis reveals that these topological signatures are generalizable across different models and that hallucination detection is possible using structural signatures from only partial network depth.
+ oai:arXiv.org:2601.01552v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Shreyas N. Samaga, Gilberto Gonzalez Arroyo, Tamal K. Dey
+
+
+ MOSS Transcribe Diarize Technical Report
+ https://arxiv.org/abs/2601.01554
+ arXiv:2601.01554v5 Announce Type: replace
+Abstract: Speaker-Attributed, Time-Stamped Transcription (SATS) aims to transcribe what is said and to precisely determine the timing of each speaker, which is particularly valuable for meeting transcription. Existing SATS systems rarely adopt an end-to-end formulation and are further constrained by limited context windows, weak long-range speaker memory, and the inability to output timestamps. To address these limitations, we present MOSS Transcribe Diarize, a unified multimodal large language model that jointly performs Speaker-Attributed, Time-Stamped Transcription in an end-to-end paradigm. Trained on extensive in-the-wild data and equipped with a 128k context window for up to 90-minute inputs, MOSS Transcribe Diarize scales well and generalizes robustly. Across comprehensive evaluations, it outperforms state-of-the-art commercial systems on multiple public and in-house benchmarks.
+ oai:arXiv.org:2601.01554v5
+ cs.SD
+ cs.AI
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ MOSI. AI, :, Donghua Yu, Zhengyuan Lin, Chen Yang, Yiyang Zhang, Hanfu Chen, Jingqi Chen, Ke Chen, Liwei Fan, Yi Jiang, Jie Zhu, Muchen Li, Wenxuan Wang, Yang Wang, Zhe Xu, Yitian Gong, Yuqian Zhang, Wenbo Zhang, Songlin Wang, Zhiyu Wu, Zhaoye Fei, Qinyuan Cheng, Shimin Li, Xipeng Qiu
+
+
+ Logics-STEM: Empowering LLM Reasoning via Failure-Driven Post-Training and Document Knowledge Enhancement
+ https://arxiv.org/abs/2601.01562
+ arXiv:2601.01562v3 Announce Type: replace
+Abstract: We present Logics-STEM, a state-of-the-art reasoning model fine-tuned on Logics-STEM-SFT-Dataset, a high-quality and diverse dataset at 10M scale that represents one of the largest-scale open-source long chain-of-thought corpora. Logics-STEM targets reasoning tasks in the domains of Science, Technology, Engineering, and Mathematics (STEM), and exhibits exceptional performance on STEM-related benchmarks with an average improvement of 4.68% over the next-best model at 8B scale. We attribute the gains to our data-algorithm co-design engine, where they are jointly optimized to fit a gold-standard distribution behind reasoning. Data-wise, the Logics-STEM-SFT-Dataset is constructed from a meticulously designed data curation engine with 5 stages to ensure the quality, diversity, and scalability, including annotation, deduplication, decontamination, distillation, and stratified sampling. Algorithm-wise, our failure-driven post-training framework leverages targeted knowledge retrieval and data synthesis around model failure regions in the Supervised Fine-tuning (SFT) stage to effectively guide the second-stage SFT or the reinforcement learning (RL) for better fitting the target distribution. The superior empirical performance of Logics-STEM reveals the vast potential of combining large-scale open-source data with carefully designed synthetic data, underscoring the critical role of data-algorithm co-design in enhancing reasoning capabilities through post-training. We make both the Logics-STEM models (8B and 32B) and the Logics-STEM-SFT-Dataset (10M and downsampled 2.2M versions) publicly available to support future research in the open-source community.
+ oai:arXiv.org:2601.01562v3
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mingyu Xu, Cheng Fang, Keyue Jiang, Yuqian Zheng, Yanghua Xiao, Baojian Zhou, Qifang Zhao, Suhang Zheng, Xiuwen Zhu, Jiyang Tang, Yongchi Zhao, Yijia Luo, Zhiqi Bai, Yuchi Xu, Wenbo Su, Wei Wang, Bing Zhao, Lin Qu, Xiaoxiao Xu
+
+
+ OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment
+ https://arxiv.org/abs/2601.01576
+ arXiv:2601.01576v2 Announce Type: replace
+Abstract: Evaluating novelty is critical yet challenging in peer review, as reviewers must assess submissions against a vast, rapidly evolving literature. This report presents OpenNovelty, an LLM-powered agentic system for transparent, evidence-based novelty analysis. The system operates through four phases: (1) extracting the core task and contribution claims to generate retrieval queries; (2) retrieving relevant prior work based on extracted queries via semantic search engine; (3) constructing a hierarchical taxonomy of core-task-related work and performing contribution-level full-text comparisons against each contribution; and (4) synthesizing all analyses into a structured novelty report with explicit citations and evidence snippets. Unlike naive LLM-based approaches, OpenNovelty grounds all assessments in retrieved real papers, ensuring verifiable judgments. We deploy our system on 500+ ICLR 2026 submissions with all reports publicly available on our website, and preliminary analysis suggests it can identify relevant prior work, including closely related papers that authors may overlook. OpenNovelty aims to empower the research community with a scalable tool that promotes fair, consistent, and evidence-backed peer review.
+ oai:arXiv.org:2601.01576v2
+ cs.IR
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ming Zhang, Kexin Tan, Yueyuan Huang, Yujiong Shen, Chunchun Ma, Li Ju, Xinran Zhang, Yuhui Wang, Wenqing Jing, Jingyi Deng, Huayu Sha, Binze Hu, Jingqi Tong, Changhao Jiang, Yage Geng, Yuankai Ying, Yue Zhang, Zhangyue Yin, Zhiheng Xi, Shihan Dou, Tao Gui, Qi Zhang, Xuanjing Huang
+
+
+ Crafting Adversarial Inputs for Large Vision-Language Models Using Black-Box Optimization
+ https://arxiv.org/abs/2601.01747
+ arXiv:2601.01747v3 Announce Type: replace
+Abstract: Recent advancements in Large Vision-Language Models (LVLMs) have shown groundbreaking capabilities across diverse multimodal tasks. However, these models remain vulnerable to adversarial jailbreak attacks, where adversaries craft subtle perturbations to bypass safety mechanisms and trigger harmful outputs. Existing white-box attack methods require full model access, incur high computational costs, and exhibit insufficient adversarial transferability, making them impractical for real-world, black-box settings. To address these limitations, we propose a black-box jailbreak attack on LVLMs via Zeroth-Order optimization using Simultaneous Perturbation Stochastic Approximation (ZO-SPSA). ZO-SPSA provides three key advantages: (i) gradient-free approximation via input-output interactions without requiring model knowledge, (ii) model-agnostic optimization without a surrogate model, and (iii) lower resource requirements with reduced GPU memory consumption. We evaluate ZO-SPSA on three LVLMs, including InstructBLIP, LLaVA and MiniGPT-4, achieving the highest jailbreak success rate of 83.0% on InstructBLIP, while maintaining imperceptible perturbations comparable to white-box methods. Moreover, adversarial examples generated from MiniGPT-4 exhibit strong transferability to other LVLMs, with ASR reaching 64.18%. These findings underscore the real-world feasibility of black-box jailbreaks and expose critical weaknesses in the safety mechanisms of current LVLMs.
+ oai:arXiv.org:2601.01747v3
+ cs.CR
+ cs.AI
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiwei Guan, Haibo Jin, Haohan Wang
+
+
+ Algorithmic Information Theory for Graph Edge Grouping and Substructure Analysis
+ https://arxiv.org/abs/2601.01760
+ arXiv:2601.01760v3 Announce Type: replace
+Abstract: Understanding natural phenomena through the interactions of different complex systems has become an increasing focus in scientific inquiry. Defining complexity and actually measuring it is an ongoing debate, and no standard framework has been established that is both theoretically sound and computationally practical to use. Currently, one of the fields that attempts to formally define complexity is Algorithmic Information Theory. The field has shown advances by studying the complexity values of binary strings and 2-dimensional binary matrices using 1-dimensional and 2-dimensional Turing machines, respectively. Using these complexity values, an algorithm called the Block Decomposition Method, developed by Zenil et al. in 2018, has been created to approximate the complexity of adjacency matrices of graphs, which has found relative success in grouping graphs based on their complexity values. We use this method along with another method called edge perturbation to exhaustively determine whether an edge can be identified as connecting two subgraphs within a graph, using the entire symmetric group of its vertex permutations and via unique permutations we call automorphic subsets, which are a special subset of the symmetric group. We also analyze whether edges will be grouped closer to their respective subgraphs in terms of the average algorithmic information contribution. This analysis ascertains whether Algorithmic Information Theory can serve as a viable theory for understanding graph substructures and as a foundation for frameworks measuring and analyzing complexity. The study found that the connecting edge was successfully identified as having the highest average information contribution in 29 out of 30 graphs, and in 16 of these, the distance to the next edge was greater than log_2(2). Furthermore, the symmetric group outperformed automorphic subsets in edge grouping.
+ oai:arXiv.org:2601.01760v3
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Gabriel Potestades
+
+
+ Deferred Commitment Decoding for Diffusion Language Models
+ https://arxiv.org/abs/2601.02076
+ arXiv:2601.02076v2 Announce Type: replace
+Abstract: Diffusion language models (DLMs) have recently emerged as a strong alternative to autoregressive models by enabling parallel text generation. To improve inference efficiency and KV-cache compatibility, prior work commonly adopts block-based diffusion, decoding tokens block by block. However, this paradigm suffers from a structural limitation that we term Boundary-Induced Context Truncation (BICT): undecoded tokens near block boundaries are forced to commit without access to nearby future context, even when such context could substantially reduce uncertainty. This limitation degrades decoding certainty and generation quality, especially for tasks requiring precise reasoning, such as mathematical problem solving and code generation. We propose Deferred Commitment Decoding (DCD), a novel, training-free decoding strategy that mitigates this issue. DCD maintains a certainty-aware sliding window over masked tokens, resolving low-uncertainty tokens early while deferring high-uncertainty tokens until sufficient contextual evidence becomes available. Extensive experiments across multiple diffusion language models, benchmarks, and caching configurations show that DCD improves generation accuracy by 1.73% with comparable time on average compared to fixed block-based diffusion methods, with the most significant improvement reaching 16.5%. These results demonstrate that deferring token commitment based on uncertainty is a simple yet effective principle for improving both the quality and efficiency of diffusion language model decoding.
+ oai:arXiv.org:2601.02076v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yingte Shu, Yuchuan Tian, Chao Xu, Yunhe Wang, Hanting Chen
+
+
+ Horizon Activation Mapping for Neural Networks in Time Series Forecasting
+ https://arxiv.org/abs/2601.02094
+ arXiv:2601.02094v3 Announce Type: replace
+Abstract: Neural networks for time series forecasting have relied on error metrics and architecture-specific interpretability approaches for model selection that do not apply across models of different families. To interpret forecasting models agnostic to the types of layers across state-of-the-art model families, we introduce Horizon Activation Mapping (HAM), a visual interpretability technique inspired by grad-CAM that uses gradient norm averages to study the horizon's subseries where grad-CAM studies attention maps over image data. We introduce causal and anti-causal modes to calculate gradient update norm averages across subseries at every timestep and lines of proportionality signifying uniform distributions of the norm averages. Optimization landscape studies with respect to changes in batch sizes, early stopping, train-val-test splits, architectural choices, univariate forecasting and dropouts are studied with respect to performances and subseries in HAM. Interestingly, batch-size-based differences in activities seem to indicate the potential existence of an exponential approximation across them per epoch relative to each other. Multivariate forecasting models including MLP-based CycleNet, N-Linear, N-HITS, self-attention-based FEDformer, Pyraformer, SSM-based SpaceTime and diffusion-based Multi-Resolution DDPM over different horizon sizes trained over the ETTm2 dataset are used for HAM plots in this study. N-HITS' neural approximation theorem and SpaceTime's exponential autoregressive activities have been attributed to trends in HAM plots over their training, validation and test sets. In general, HAM can be used for granular model selection, validation set choices and comparisons across different neural network model families.
+ oai:arXiv.org:2601.02094v3
+ cs.LG
+ math.FA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Krupakar Hans, V A Kandappan
+
+
+ DeCode: Decoupling Content and Delivery for Medical QA
+ https://arxiv.org/abs/2601.02123
+ arXiv:2601.02123v2 Announce Type: replace
+Abstract: Large language models (LLMs) exhibit strong medical knowledge and can generate factually accurate responses. However, existing models often fail to account for individual patient contexts, producing answers that are clinically correct yet poorly aligned with patients' needs. In this work, we introduce DeCode, a training-free, model-agnostic framework that adapts existing LLMs to produce contextualized answers in clinical settings. We evaluate DeCode on OpenAI HealthBench, a comprehensive and challenging benchmark designed to assess clinical relevance and validity of LLM responses. DeCode improves the previous state of the art from $28.4\%$ to $49.8\%$, corresponding to a $75\%$ relative improvement. Experimental results suggest the effectiveness of DeCode in improving clinical question answering of LLMs.
+ oai:arXiv.org:2601.02123v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Po-Jen Ko, Chen-Han Tsai, Yu-Shao Peng
+
+
+ LocoScooter: Designing a Stationary Scooter-Based Locomotion System for Navigation in Virtual Reality
+ https://arxiv.org/abs/2601.02167
+ arXiv:2601.02167v2 Announce Type: replace
+Abstract: Virtual locomotion remains a challenge in VR, especially in space-limited environments where room-scale walking is impractical. We present LocoScooter, a low-cost, deployable locomotion interface combining foot-sliding on a compact treadmill with handlebar steering inspired by scooter riding. Built from commodity hardware, it supports embodied navigation through familiar, physically engaging movement. In a within-subject study (N = 14), LocoScooter significantly improved immersion, enjoyment, and bodily involvement over joystick navigation, while maintaining comparable efficiency and usability. Despite higher physical demand, users did not report increased fatigue, suggesting familiar movements can enrich VR navigation.
+ oai:arXiv.org:2601.02167v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Wei He, Xiang Li, Per Ola Kristensson, Ge Lin Kan
+
+
+ Deciding Serializability in Network Systems
+ https://arxiv.org/abs/2601.02251
+ arXiv:2601.02251v4 Announce Type: replace
+Abstract: We present the SER modeling language for automatically verifying serializability of concurrent programs, i.e., whether every concurrent execution of the program is equivalent to some serial execution. SER programs are suitably restricted to make this problem decidable, while still allowing for an unbounded number of concurrent threads of execution, each potentially running for an unbounded number of steps. Building on prior theoretical results, we give the first automated end-to-end decision procedure that either proves serializability by producing a checkable certificate, or refutes it by producing a counterexample trace. We also present a network-system abstraction to which SER programs compile. Our decision procedure then reduces serializability in this setting to a Petri net reachability query. Furthermore, in order to scale, we curtail the search space via multiple optimizations, including Petri net slicing, semilinear-set compression, and Presburger-formula manipulation. We extensively evaluate our framework and show that, despite the theoretical hardness of the problem, it can successfully handle various models of real-world programs, including stateful firewalls, BGP routers, and more.
+ oai:arXiv.org:2601.02251v4
+ cs.FL
+ cs.DC
+ cs.LO
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Guy Amir, Mark Barbone, Nicolas Amat, Jules Jacobs
+
+
+ Variance-Aware LLM Annotation for Strategy Research: Sources, Diagnostics, and a Protocol for Reliable Measurement
+ https://arxiv.org/abs/2601.02370
+ arXiv:2601.02370v3 Announce Type: replace
+Abstract: Large language models (LLMs) offer strategy researchers powerful tools for annotating text at scale, but treating LLM-generated labels as deterministic overlooks substantial instability. Grounded in content analysis and generalizability theory, we diagnose five variance sources: construct specification, interface effects, model preferences, output extraction, and system-level aggregation. Empirical demonstrations show that minor design choices (prompt phrasing, model selection) can shift outcomes by 12-85 percentage points. Such variance threatens not only reproducibility but econometric identification: annotation errors correlated with covariates bias parameter estimates regardless of average accuracy. We develop a variance-aware protocol specifying sampling budgets, aggregation rules, and reporting standards, and delineate scope conditions where LLM annotation should not be used. These contributions transform LLM-based annotation from ad hoc practice into auditable measurement infrastructure.
+ oai:arXiv.org:2601.02370v3
+ cs.CY
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Arnaldo Camuffo, Alfonso Gambardella, Saeid Kazemi, Jakub Malachowski, Abhinav Pandey
+
+
+ Focus on What Matters: Fisher-Guided Adaptive Multimodal Fusion for Vulnerability Detection
+ https://arxiv.org/abs/2601.02438
+ arXiv:2601.02438v2 Announce Type: replace
+Abstract: Software vulnerability detection can be formulated as a binary classification problem that determines whether a given code snippet contains security defects. Existing multimodal methods typically fuse Natural Code Sequence (NCS) representations extracted by pretrained models with Code Property Graph (CPG) representations extracted by graph neural networks, under the implicit assumption that introducing an additional modality necessarily yields information gain. Through empirical analysis, we demonstrate the limitations of this assumption: pretrained models already encode substantial structural information implicitly, leading to strong overlap between the two modalities; moreover, graph encoders are generally less effective than pretrained language models in feature extraction. As a result, naive fusion not only struggles to obtain complementary signals but can also dilute effective discriminative cues due to noise propagation. To address these challenges, we propose a task-conditioned complementary fusion strategy that uses Fisher information to quantify task relevance, transforming cross-modal interaction from full-spectrum matching into selective fusion within a task-sensitive subspace. Our theoretical analysis shows that, under an isotropic perturbation assumption, this strategy significantly tightens the upper bound on the output error. Based on this insight, we design the TaCCS-DFA framework, which combines online low-rank Fisher subspace estimation with an adaptive gating mechanism to enable efficient task-oriented fusion. Experiments on the BigVul, Devign, and ReVeal benchmarks demonstrate that TaCCS-DFA delivers up to a 6.3-point gain in F1 score with only a 3.4% increase in inference latency, while maintaining low calibration error.
+ oai:arXiv.org:2601.02438v2
+ cs.SE
+ cs.AI
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yun Bian, Yi Chen, HaiQuan Wang, ShiHao Li, Zhe Cui
+
+
+ Normalized Conditional Mutual Information Surrogate Loss for Deep Neural Classifiers
+ https://arxiv.org/abs/2601.02543
+ arXiv:2601.02543v3 Announce Type: replace
+Abstract: In this paper, we propose a novel information-theoretic surrogate loss, normalized conditional mutual information (NCMI), as a drop-in alternative to the de facto cross-entropy (CE) loss for training deep neural network (DNN) based classifiers. We first observe that a model's NCMI is inversely proportional to its accuracy. Building on this insight, we introduce an alternating algorithm to efficiently minimize the NCMI. Across image recognition and whole-slide imaging (WSI) subtyping benchmarks, NCMI-trained models surpass state-of-the-art losses by substantial margins at a computational cost comparable to that of CE. Notably, on ImageNet, NCMI yields a 2.77% top-1 accuracy improvement with ResNet-50 compared to CE; on CAMELYON-17, replacing CE with NCMI improves the macro-F1 by 8.6% over the strongest baseline. Gains are consistent across various architectures and batch sizes, suggesting that NCMI is a practical and competitive alternative to CE.
+ oai:arXiv.org:2601.02543v3
+ cs.LG
+ cs.AI
+ cs.CV
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Linfeng Ye, Zhixiang Chi, Konstantinos N. Plataniotis, En-hui Yang
+
+
+ Backwards Data-Flow Analysis using Prophecy Variables in the BuildIt System
+ https://arxiv.org/abs/2601.02653
+ arXiv:2601.02653v2 Announce Type: replace
+Abstract: Many program transformations and optimizations require information about the future behavior of the program. A standard way to obtain this information is to build an intermediate program representation, then use a backwards program analysis to propagate relevant information against the flow of control back to the transformation/optimization site. We instead propose to use prophecy variables, which predict information about the future execution of the program, to enable such transformations and optimizations. We implement prophecy variables in BuildIt, a lightweight domain specific language implementation system. BuildIt uses staged compilation to implement high performance domain specific languages embedded within a standard general purpose programming language (C++). The BuildIt first phase uses standard C++ program execution to generate optimized C, C++, and CUDA second phase code. This approach enables BuildIt to eliminate programming language implementation components such as parsers and intermediate representations, delivering a dramatic decrease in the engineering effort required to implement domain specific languages. The combination of prophecy variables and repeated forward program execution enables BuildIt to extend this approach to include transformations and optimizations that require information about the future execution of the program without backwards analyses and without the engineering overhead associated with implementing these analyses. We formalize the use of prophecy variables for this purpose, discuss the implementation of prophecy variables and repeated execution in BuildIt, and present experimental results for BuildIt computations that benefit from optimizations enabled by the information that prophecy variables provide.
+ oai:arXiv.org:2601.02653v2
+ cs.PL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ajay Brahmakshatriya, Saman Amarasinghe, Martin Rinard
+
+
+ RPIQ: Residual-Projected Multi-Collaboration Closed-Loop and Single Instance Quantization for Visually Impaired Assistance
+ https://arxiv.org/abs/2601.02888
+ arXiv:2601.02888v2 Announce Type: replace
+Abstract: Visually impaired users face significant challenges in daily information access and real-time environmental perception, and there is an urgent need for intelligent assistive systems with accurate recognition capabilities. Although large-scale models provide effective solutions for perception and reasoning, their practical deployment on assistive devices is severely constrained by excessive memory consumption and high inference costs. Moreover, existing quantization strategies often ignore inter-block error accumulation, leading to degraded model stability. To address these challenges, this study proposes a novel quantization framework -- Residual-Projected Multi-Collaboration Closed-Loop and Single Instance Quantization (RPIQ) -- whose quantization process adopts a multi-collaborative closed-loop compensation scheme based on Single Instance Calibration and Gauss-Seidel Iterative Quantization. Experiments on various types of large-scale models, including language models such as OPT, Qwen, and LLaMA, as well as vision-language models such as CogVLM2, demonstrate that RPIQ can compress models to 4-bit representation while significantly reducing peak memory consumption (approximately 60%-75% reduction compared to original full-precision models). The method maintains performance highly close to full-precision models across multiple language and visual tasks, and exhibits excellent recognition and reasoning capabilities in key applications such as text understanding and visual question answering in complex scenarios. While verifying the effectiveness of RPIQ for deployment in real assistive systems, this study also advances the computational efficiency and reliability of large models, enabling them to provide visually impaired users with the required information accurately and rapidly.
+ oai:arXiv.org:2601.02888v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xuanyu Wang, Haisen Su, Jingtao Zhang, Xiangxiang Wang, Yongbin Yu, Manping Fan, Jialing Xiao, Bo Gong, Siqi Chen, Mingsheng Cao, Liyong Ren, Zhenglin Yang
+
+
+ LLMs, You Can Evaluate It! Design of Multi-perspective Report Evaluation for Security Operation Centers
+ https://arxiv.org/abs/2601.03013
+ arXiv:2601.03013v3 Announce Type: replace
+Abstract: Security operation centers (SOCs) often produce analysis reports on security incidents, and large language models (LLMs) will likely be used for this task in the near future. We postulate that a better understanding of how veteran analysts evaluate reports, including their feedback, can help produce analysis reports in SOCs. In this paper, we aim to leverage LLMs for evaluating analysis reports. To this end, we first construct an analyst-wise checklist that reflects SOC practitioners' opinions on analysis report evaluation, based on a literature review and a user study with SOC practitioners. Next, we design a novel LLM-based conceptual framework, named MESSALA, by further introducing two new techniques: a granularization guideline and multi-perspective evaluation. MESSALA evaluates reports and provides feedback that reflects veteran SOC practitioners' perceptions. In extensive experiments, MESSALA's evaluation results are the closest to those of veteran SOC practitioners compared with existing LLM-based methods, from which we derive two key insights. A qualitative analysis further identifies that MESSALA can provide actionable items necessary for improving analysis reports.
+ oai:arXiv.org:2601.03013v3
+ cs.CR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hiroyuki Okada, Tatsumi Oba, Naoto Yanai
+
+
+ SA-ResGS: Self-Augmented Residual 3D Gaussian Splatting for Next Best View Selection
+ https://arxiv.org/abs/2601.03024
+ arXiv:2601.03024v2 Announce Type: replace
+Abstract: We propose Self-Augmented Residual 3D Gaussian Splatting (SA-ResGS), a novel framework to stabilize uncertainty quantification and enhance uncertainty-aware supervision in next-best-view (NBV) selection for active scene reconstruction. SA-ResGS improves both the reliability of uncertainty estimates and their effectiveness for supervision by generating Self-Augmented point clouds (SA-Points) via triangulation between a training view and a rasterized extrapolated view, enabling efficient scene coverage estimation. While improving scene coverage through physically guided view selection, SA-ResGS also addresses the challenge of under-supervised Gaussians, exacerbated by sparse and wide-baseline views, by introducing the first residual learning strategy tailored for 3D Gaussian Splatting. This targeted supervision enhances gradient flow in high-uncertainty Gaussians by combining uncertainty-driven filtering with dropout- and hard-negative-mining-inspired sampling. Our contributions are threefold: (1) a physically grounded view selection strategy that promotes efficient and uniform scene coverage; (2) an uncertainty-aware residual supervision scheme that amplifies learning signals for weakly contributing Gaussians, improving training stability and uncertainty estimation across scenes with diverse camera distributions; (3) an implicit unbiasing of uncertainty quantification as a consequence of constrained view selection and residual supervision, which together mitigate conflicting effects of wide-baseline exploration and sparse-view ambiguity in NBV planning. Experiments on active view selection demonstrate that SA-ResGS outperforms state-of-the-art baselines in both reconstruction quality and view selection robustness.
+ oai:arXiv.org:2601.03024v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Kim Jun-Seong, Tae-Hyun Oh, Eduardo P\'erez-Pellitero, Youngkyoon Jang
+
+
+ Fast Surrogate Models for Adaptive Aircraft Trajectory Prediction in En route Airspace
+ https://arxiv.org/abs/2601.03075
+ arXiv:2601.03075v2 Announce Type: replace
+Abstract: Trajectory prediction (TP) is crucial for ensuring safety and efficiency in modern air traffic management systems. It is, for example, a core component of conflict detection and resolution tools, arrival sequencing algorithms, capacity planning, as well as several future concepts. However, TP accuracy within operational systems is hampered by a range of epistemic uncertainties such as the mass and performance settings of aircraft and the effect of meteorological conditions on aircraft performance. It can also require considerable computational resources.
+ This paper proposes a method for adaptive TP that has two components: first, a fast surrogate TP model based on linear state space models (LSSMs), with an execution time that was 6.7 times lower on average than an implementation of the Base of Aircraft Data (BADA) in Python. It is demonstrated that such models can effectively emulate the BADA aircraft performance model, which is based on the numerical solution of a partial differential equation (PDE), and that the LSSMs can be fitted to trajectories in a dataset of historic flight data. Secondly, the paper proposes an algorithm to assimilate radar observations using particle filtering to adaptively refine TP accuracy. Comparison with baselines using BADA and Kalman filtering demonstrates that the proposed framework improves system identification and state estimation for both climb and descent phases, with 46.3% and 64.7% better estimates for time to top of climb and bottom of descent compared to the best performing benchmark model. In particular, the particle filtering approach provides the flexibility to capture non-linear performance effects including the CAS-Mach transition.
+ oai:arXiv.org:2601.03075v2
+ cs.CE
+ math.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.2514/6.2026-1611
+ Nick Pepper, Marc Thomas, Zack Xuereb Conti
+
+
+ Conditioning Aircraft Trajectory Prediction on Meteorological Data with a Physics-Informed Machine Learning Approach
+ https://arxiv.org/abs/2601.03152
+ arXiv:2601.03152v2 Announce Type: replace
+Abstract: Accurate aircraft trajectory prediction (TP) in air traffic management systems is confounded by a number of epistemic uncertainties, dominated by uncertain meteorological conditions and operator specific procedures. Handling this uncertainty necessitates the use of probabilistic, machine learned models for generating trajectories. However, the trustworthiness of such models is limited if generated trajectories are not physically plausible. For this reason we propose a physics-informed approach in which aircraft thrust and airspeed are learned from data and are used to condition the existing Base of Aircraft Data (BADA) model, which is physics-based and enforces energy-based constraints on generated trajectories. A set of informative features are identified and used to condition a probabilistic model of aircraft thrust and airspeed, with the proposed scheme demonstrating a 20% improvement in skilfulness across a set of six metrics, compared against a baseline probabilistic model that ignores contextual information such as meteorological conditions.
+ oai:arXiv.org:2601.03152v2
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Amy Hodgkin, Nick Pepper, Marc Thomas
+
+
+ SIGMA: Scalable Spectral Insights for LLM Model Collapse
+ https://arxiv.org/abs/2601.03385
+ arXiv:2601.03385v2 Announce Type: replace
+Abstract: The rapid adoption of synthetic data for training Large Language Models (LLMs) has introduced the technical challenge of "model collapse": a degenerative process where recursive training on model-generated content leads to a contraction of distributional variance and representational quality. While the phenomenology of collapse is increasingly evident, rigorous methods to quantify and predict its onset in high-dimensional spaces remain elusive. In this paper, we introduce SIGMA (Spectral Inequalities for Gram Matrix Analysis), a unified framework that benchmarks model collapse through the spectral lens of the embedding Gram matrix. By deriving and utilizing deterministic and stochastic bounds on the matrix's spectrum, SIGMA provides a mathematically grounded metric to track the contraction of the representation space. Crucially, our stochastic formulation enables scalable estimation of these bounds, making the framework applicable to large-scale foundation models where full eigendecomposition is intractable. We demonstrate that SIGMA effectively captures the transition towards degenerate states, offering both theoretical insights into the mechanics of collapse and a practical, scalable tool for monitoring the health of recursive training pipelines.
+ oai:arXiv.org:2601.03385v2
+ cs.LG
+ math.PR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Yi Gu, Lingyou Pang, Xiangkun Ye, Tianyu Wang, Jianyu Lin, Carey E. Priebe, Alexander Aue
+
+
+ Training-Free Adaptation of New-Generation LLMs using Legacy Clinical Models
+ https://arxiv.org/abs/2601.03423
+ arXiv:2601.03423v2 Announce Type: replace
+Abstract: Adapting language models to the clinical domain through continued pretraining and fine-tuning requires costly retraining for each new model generation. We propose Cross-Architecture Proxy Tuning (CAPT), a model-ensembling approach that enables training-free adaptation of state-of-the-art general-domain models using existing clinical models. CAPT supports models with disjoint vocabularies, leveraging contrastive decoding to selectively inject clinically relevant signals while preserving the general-domain model's reasoning and fluency. On six clinical classification and text-generation tasks, CAPT with a new-generation general-domain model and an older-generation clinical model consistently outperforms both models individually and state-of-the-art ensembling approaches (average +17.6% over UniTE, +41.4% over proxy tuning across tasks). Through token-level analysis and physician case studies, we demonstrate that CAPT amplifies clinically actionable language, reduces context errors, and increases clinical specificity.
+ oai:arXiv.org:2601.03423v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Sasha Ronaghi, Chloe Stanwyck, Asad Aali, Amir Ronaghi, Miguel Fuentes, Tina Hernandez-Boussard, Emily Alsentzer
+
+
+ From Chains to Graphs: Self-Structured Reasoning for General-Domain LLMs
+ https://arxiv.org/abs/2601.03597
+ arXiv:2601.03597v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) show strong reasoning ability in open-domain question answering, yet their reasoning processes are typically linear and often logically inconsistent. In contrast, real-world reasoning requires integrating multiple premises and solving subproblems in parallel. Existing methods, such as Chain-of-Thought (CoT), express reasoning in a linear textual form, which may appear coherent but frequently leads to inconsistent conclusions. Recent approaches rely on externally provided graphs and do not explore how LLMs can construct and use their own graph-structured reasoning, particularly in open-domain QA. To fill this gap, we present a first exploration of graph-structured reasoning of LLMs in general-domain question answering. We propose Self-Graph Reasoning (SGR), a framework that enables LLMs to explicitly represent their reasoning process as a structured graph before producing the final answer. We further construct a graph-structured reasoning dataset that merges multiple candidate reasoning graphs into refined graph structures for model training. Experiments on five QA benchmarks across both general and specialized domains show that SGR consistently improves reasoning consistency and yields a 17.74% gain over the base model. The LLaMA-3.3-70B model fine-tuned with SGR performs comparably to GPT-4o and surpasses Claude-3.5-Haiku, demonstrating the effectiveness of graph-structured reasoning.
+ oai:arXiv.org:2601.03597v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yingjian Chen, Haoran Liu, Yinhong Liu, Sherry T. Tong, Aosong Feng, Jinghui Lu, Juntao Zhang, Yusuke Iwasawa, Yutaka Matsuo, Irene Li
+
+
+ ELO: Efficient Layer-Specific Optimization for Continual Pretraining of Multilingual LLMs
+ https://arxiv.org/abs/2601.03648
+ arXiv:2601.03648v2 Announce Type: replace
+Abstract: We propose an efficient layer-specific optimization (ELO) method designed to enhance continual pretraining (CP) for specific languages in multilingual large language models (MLLMs). This approach addresses the common challenges of high computational cost and degradation of source language performance associated with traditional CP. The ELO method consists of two main stages: (1) ELO Pretraining, where a small subset of specific layers, identified in our experiments as the critically important first and last layers, are detached from the original MLLM and trained with the target language. This significantly reduces not only the number of trainable parameters but also the total parameters computed during the forward pass, minimizing GPU memory consumption and accelerating the training process. (2) Layer Alignment, where the newly trained layers are reintegrated into the original model, followed by a brief full fine-tuning step on a small dataset to align the parameters. Experimental results demonstrate that the ELO method achieves a training speedup of up to 6.46 times compared to existing methods, while improving target language performance by up to 6.2\% on qualitative benchmarks and effectively preserving source language (English) capabilities.
+ oai:arXiv.org:2601.03648v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ HanGyeol Yoo, ChangSu Choi, Minjun Kim, Seohyun Song, SeungWoo Song, Inho Won, Jongyoul Park, Cheoneum Park, KyungTae Lim
+
+
+ The Geometry of the Pivot: A Note on Lazy Pivoted Cholesky and Farthest Point Sampling
+ https://arxiv.org/abs/2601.03706
+ arXiv:2601.03706v3 Announce Type: replace
+Abstract: Low-rank approximations of large kernel matrices are ubiquitous in machine learning, particularly for scaling Gaussian Processes to massive datasets. The Pivoted Cholesky decomposition is a standard tool for this task, offering a computationally efficient, greedy low-rank approximation. While its algebraic properties are well-documented in numerical linear algebra, its geometric intuition within the context of kernel methods often remains obscure. In this note, we elucidate the geometric interpretation of the algorithm within the Reproducing Kernel Hilbert Space (RKHS). We demonstrate that the pivotal selection step is mathematically equivalent to Farthest Point Sampling (FPS) using the kernel metric, and that the Cholesky factor construction is an implicit Gram-Schmidt orthogonalization. We provide a concise derivation and a minimalist Python implementation to bridge the gap between theory and practice.
+ oai:arXiv.org:2601.03706v3
+ cs.LG
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Gil Shabat
+
+
+ Membox: Weaving Topic Continuity into Long-Range Memory for LLM Agents
+ https://arxiv.org/abs/2601.03785
+ arXiv:2601.03785v2 Announce Type: replace
+Abstract: Human-agent dialogues often exhibit topic continuity, a stable thematic frame that evolves through temporally adjacent exchanges, yet most large language model (LLM) agent memory systems fail to preserve it. Existing designs follow a fragmentation-compensation paradigm: they first break dialogue streams into isolated utterances for storage, then attempt to restore coherence via embedding-based retrieval. This process irreversibly damages narrative and causal flow, while biasing retrieval towards lexical similarity. We introduce Membox, a hierarchical memory architecture centered on a Topic Loom that continuously monitors dialogue in a sliding-window fashion, grouping consecutive same-topic turns into coherent "memory boxes" at storage time. Sealed boxes are then linked by a Trace Weaver into long-range event-timeline traces, recovering macro-topic recurrences across discontinuities. Experiments on LoCoMo demonstrate that Membox achieves up to 68% F1 improvement on temporal reasoning tasks, outperforming competitive baselines (e.g., Mem0, A-MEM). Notably, Membox attains these gains while using only a fraction of the context tokens required by existing methods, highlighting a superior balance between efficiency and effectiveness. By explicitly modeling topic continuity, Membox offers a cognitively motivated mechanism for enhancing both coherence and efficiency in LLM agents.
+ oai:arXiv.org:2601.03785v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Dehao Tao, Guoliang Ma, Yongfeng Huang, Minghu Jiang
+
+
+ IDESplat: Iterative Depth Probability Estimation for Generalizable 3D Gaussian Splatting
+ https://arxiv.org/abs/2601.03824
+ arXiv:2601.03824v2 Announce Type: replace
+Abstract: Generalizable 3D Gaussian Splatting aims to directly predict Gaussian parameters using a feed-forward network for scene reconstruction. Among these parameters, Gaussian means are particularly difficult to predict, so depth is usually estimated first and then unprojected to obtain the Gaussian sphere centers. Existing methods typically rely solely on a single warp to estimate depth probability, which hinders their ability to fully leverage cross-view geometric cues, resulting in unstable and coarse depth maps. To address this limitation, we propose IDESplat, which iteratively applies warp operations to boost depth probability estimation for accurate Gaussian mean prediction. First, to eliminate the inherent instability of a single warp, we introduce a Depth Probability Boosting Unit (DPBU) that integrates epipolar attention maps produced by cascading warp operations in a multiplicative manner. Next, we construct an iterative depth estimation process by stacking multiple DPBUs, progressively identifying potential depth candidates with high likelihood. As IDESplat iteratively boosts depth probability estimates and updates the depth candidates, the depth map is gradually refined, resulting in accurate Gaussian means. We conduct experiments on RealEstate10K, ACID, and DL3DV. IDESplat achieves outstanding reconstruction quality and state-of-the-art performance with real-time efficiency. On RE10K, it outperforms DepthSplat by 0.33 dB in PSNR, using only 10.7% of the parameters and 70% of the memory. Additionally, our IDESplat improves PSNR by 2.95 dB over DepthSplat on the DTU dataset in cross-dataset experiments, demonstrating its strong generalization ability.
+ oai:arXiv.org:2601.03824v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wei Long, Haifeng Wu, Shiyin Jiang, Jinhua Zhang, Xinchun Ji, Shuhang Gu
+
+
+ Cells on Autopilot: Adaptive Cell (Re)Selection via Reinforcement Learning
+ https://arxiv.org/abs/2601.04083
+ arXiv:2601.04083v3 Announce Type: replace
+Abstract: The widespread deployment of 5G networks, together with the coexistence of 4G/LTE networks, provides mobile devices a diverse set of candidate cells to connect to. However, associating mobile devices to cells to maximize overall network performance, a.k.a. cell (re)selection, remains a key challenge for mobile operators. Today, cell (re)selection parameters are typically configured manually based on operator experience and rarely adapted to dynamic network conditions. In this work, we ask: Can an agent automatically learn and adapt cell (re)selection parameters to consistently improve network performance? We present a reinforcement learning (RL)-based framework called CellPilot that adaptively tunes cell (re)selection parameters by learning spatiotemporal patterns of mobile network dynamics. Our study with real-world data demonstrates that even a lightweight RL agent can outperform conventional heuristic reconfigurations by up to 167%, while generalizing effectively across different network scenarios. These results indicate that data-driven approaches can significantly improve cell (re)selection configurations and enhance mobile network performance.
+ oai:arXiv.org:2601.04083v3
+ cs.NI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Marvin Illian, Ramin Khalili, Antonio A. de A. Rocha, Lin Wang
+
+
+ PRISM: A Unified Framework for Post-Training LLMs Without Verifiable Rewards
+ https://arxiv.org/abs/2601.04700
+ arXiv:2601.04700v2 Announce Type: replace
+Abstract: Current techniques for post-training Large Language Models (LLMs) rely either on costly human supervision or on external verifiers to boost performance on tasks such as mathematical reasoning and code generation. However, as LLMs improve their problem-solving, any further improvement will potentially require high-quality solutions to difficult problems that are not available to humans. As a result, learning from unlabeled data is becoming increasingly attractive in the research community. Existing methods extract a learning signal from a model's consistency, either by majority voting or by converting the model's internal confidence into reward. Although internal consistency metrics such as entropy or self-certainty require no human intervention, as we show in this work, these are unreliable signals for large-scale and long-term training. To address the unreliability, we propose PRISM, a unified training framework that uses a Process Reward Model (PRM) to guide learning alongside the model's internal confidence in the absence of ground-truth labels. We show that effectively combining PRM with self-certainty can lead to both stable training and better test-time performance, and also keep the model's internal confidence in check. Code available at https://github.com/ghimiremukesh/PRISM.
+ oai:arXiv.org:2601.04700v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mukesh Ghimire, Aosong Feng, Liwen You, Youzhi Luo, Fang Liu, Xuan Zhu
+
+
+ Qwen3-VL-Embedding and Qwen3-VL-Reranker: A Unified Framework for State-of-the-Art Multimodal Retrieval and Ranking
+ https://arxiv.org/abs/2601.04720
+ arXiv:2601.04720v2 Announce Type: replace
+Abstract: In this report, we introduce the Qwen3-VL-Embedding and Qwen3-VL-Reranker model series, the latest extensions of the Qwen family built on the Qwen3-VL foundation model. Together, they provide an end-to-end pipeline for high-precision multimodal search by mapping diverse modalities, including text, images, document images, and video, into a unified representation space. The Qwen3-VL-Embedding model employs a multi-stage training paradigm, progressing from large-scale contrastive pre-training to reranking model distillation, to generate semantically rich high-dimensional vectors. It supports Matryoshka Representation Learning, enabling flexible embedding dimensions, and handles inputs up to 32k tokens. Complementing this, Qwen3-VL-Reranker performs fine-grained relevance estimation for query-document pairs using a cross-encoder architecture with cross-attention mechanisms. Both model series inherit the multilingual capabilities of Qwen3-VL, supporting more than 30 languages, and are released in $\textbf{2B}$ and $\textbf{8B}$ parameter sizes to accommodate diverse deployment requirements. Empirical evaluations demonstrate that the Qwen3-VL-Embedding series achieves state-of-the-art results across diverse multimodal embedding evaluation benchmarks. Specifically, Qwen3-VL-Embedding-8B attains an overall score of $\textbf{77.8}$ on MMEB-V2, ranking first among all models (as of January 8, 2025). This report presents the architecture, training methodology, and practical capabilities of the series, demonstrating their effectiveness on various multimodal retrieval tasks, including image-text retrieval, visual question answering, and video-text matching.
+ oai:arXiv.org:2601.04720v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mingxin Li, Yanzhao Zhang, Dingkun Long, Keqin Chen, Sibo Song, Shuai Bai, Zhibo Yang, Pengjun Xie, An Yang, Dayiheng Liu, Jingren Zhou, Junyang Lin
+
+
+ ProFuse: Efficient Cross-View Context Fusion for Open-Vocabulary 3D Gaussian Splatting
+ https://arxiv.org/abs/2601.04754
+ arXiv:2601.04754v2 Announce Type: replace
+Abstract: We present ProFuse, an efficient context-aware framework for open-vocabulary 3D scene understanding with 3D Gaussian Splatting (3DGS). The pipeline enhances cross-view consistency and intra-mask cohesion within a direct registration setup, adding minimal overhead and requiring no render-supervised fine-tuning. Instead of relying on a pretrained 3DGS scene, we introduce a dense correspondence-guided pre-registration phase that initializes Gaussians with accurate geometry while jointly constructing 3D Context Proposals via cross-view clustering. Each proposal carries a global feature obtained through weighted aggregation of member embeddings, and this feature is fused onto Gaussians during direct registration to maintain per-primitive language coherence across views. With associations established in advance, semantic fusion requires no additional optimization beyond standard reconstruction, and the model retains geometric refinement without densification. ProFuse achieves strong open-vocabulary 3DGS understanding while completing semantic attachment in about five minutes per scene, which is two times faster than SOTA. Additional details are available at our project page https://chiou1203.github.io/ProFuse/.
+ oai:arXiv.org:2601.04754v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yen-Jen Chiou, Wei-Tse Cheng, Yuan-Fu Yang
+
+
+ OceanSplat: Object-aware Gaussian Splatting with Trinocular View Consistency for Underwater Scene Reconstruction
+ https://arxiv.org/abs/2601.04984
+ arXiv:2601.04984v2 Announce Type: replace
+Abstract: We introduce OceanSplat, a novel 3D Gaussian Splatting-based approach for high-fidelity underwater scene reconstruction. To overcome multi-view inconsistencies caused by scattering media, we design a trinocular setup for each camera pose by rendering from horizontally and vertically translated virtual viewpoints, enforcing view consistency to facilitate spatial optimization of 3D Gaussians. Furthermore, we derive synthetic epipolar depth priors from the virtual viewpoints, which serve as self-supervised depth regularizers to compensate for the limited geometric cues in degraded underwater scenes. We also propose a depth-aware alpha adjustment that modulates the opacity of 3D Gaussians during early training based on their depth along the viewing direction, deterring the formation of medium-induced primitives. Our approach promotes the disentanglement of 3D Gaussians from the scattering medium through effective geometric constraints, enabling accurate representation of scene structure and significantly reducing floating artifacts. Experiments on real-world underwater and simulated scenes demonstrate that OceanSplat substantially outperforms existing methods for both scene reconstruction and restoration in scattering media.
+ oai:arXiv.org:2601.04984v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Minseong Kweon, Jinsun Park
+
+
+ Learning Latent Action World Models In The Wild
+ https://arxiv.org/abs/2601.05230
+ arXiv:2601.05230v2 Announce Type: replace
+Abstract: Agents capable of reasoning and planning in the real world require the ability to predict the consequences of their actions. While world models possess this capability, they most often require action labels, which can be complex to obtain at scale. This motivates the learning of latent action models, which can learn an action space from videos alone. Our work addresses the problem of learning latent action world models on in-the-wild videos, expanding the scope of existing works that focus on simple robotics simulations, video games, or manipulation data. While this allows us to capture richer actions, it also introduces challenges stemming from the video diversity, such as environmental noise, or the lack of a common embodiment across videos. To address some of the challenges, we discuss properties that actions should follow as well as relevant architectural choices and evaluations. We find that continuous, but constrained, latent actions are able to capture the complexity of actions from in-the-wild videos, something that the common vector quantization does not. We find, for example, that changes in the environment coming from agents, such as humans entering the room, can be transferred across videos. This highlights the capability of learning actions that are specific to in-the-wild videos. In the absence of a common embodiment across videos, we are mainly able to learn latent actions that become localized in space, relative to the camera. Nonetheless, we are able to train a controller that maps known actions to latent ones, allowing us to use latent actions as a universal interface and solve planning tasks with our world model with similar performance as action-conditioned baselines. Our analyses and experiments provide a step towards scaling latent action models to the real world.
+ oai:arXiv.org:2601.05230v2
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Quentin Garrido, Tushar Nagarajan, Basile Terver, Nicolas Ballas, Yann LeCun, Michael Rabbat
+
+
+ Safety Not Found (404): Hidden Risks of LLM-Based Robotics Decision Making
+ https://arxiv.org/abs/2601.05529
+ arXiv:2601.05529v3 Announce Type: replace
+Abstract: One mistake by an AI system in a safety-critical setting can cost lives. As Large Language Models (LLMs) become integral to robotics decision-making, the physical dimension of risk grows; a single wrong instruction can directly endanger human safety. This paper addresses the urgent need to systematically evaluate LLM performance in scenarios where even minor errors are catastrophic. Through a qualitative evaluation of a fire evacuation scenario, we identified critical failure cases in LLM-based decision-making. Based on these, we designed seven tasks for quantitative assessment, categorized into: Complete Information, Incomplete Information, and Safety-Oriented Spatial Reasoning (SOSR). Complete information tasks utilize ASCII maps to minimize interpretation ambiguity and isolate spatial reasoning from visual processing. Incomplete information tasks require models to infer missing context, testing for spatial continuity versus hallucinations. SOSR tasks use natural language to evaluate safe decision-making in life-threatening contexts. We benchmark various LLMs and Vision-Language Models (VLMs) across these tasks. Beyond aggregate performance, we analyze the implications of a 1% failure rate, highlighting how "rare" errors escalate into catastrophic outcomes. Results reveal serious vulnerabilities: several models achieved a 0% success rate in ASCII navigation, while in a simulated fire drill, models instructed robots to move toward hazardous areas instead of emergency exits. Our findings lead to a sobering conclusion: current LLMs are not ready for direct deployment in safety-critical systems. A 99% accuracy rate is dangerously misleading in robotics, as it implies one out of every hundred executions could result in catastrophic harm. We demonstrate that even state-of-the-art models cannot guarantee safety, and absolute reliance on them creates unacceptable risks.
+ oai:arXiv.org:2601.05529v3
+ cs.AI
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jua Han, Jaeyoon Seo, Jungbin Min, Jihie Kim, Jean Oh
+
+
+ A Framework for Personalized Persuasiveness Prediction via Context-Aware User Profiling
+ https://arxiv.org/abs/2601.05654
+ arXiv:2601.05654v2 Announce Type: replace
+Abstract: Estimating the persuasiveness of messages is critical in various applications, from recommender systems to safety assessment of LLMs. While it is imperative to consider the target persuadee's characteristics, such as their values, experiences, and reasoning styles, there is currently no established systematic framework to optimize leveraging a persuadee's past activities (e.g., conversations) to the benefit of a persuasiveness prediction model. To address this problem, we propose a context-aware user profiling framework with two trainable components: a query generator that generates optimal queries to retrieve persuasion-relevant records from a user's history, and a profiler that summarizes these records into a profile to effectively inform the persuasiveness prediction model. Our evaluation on the ChangeMyView Reddit dataset shows consistent improvements over existing methods across multiple predictor models, with gains of up to +13.77%p in F1 score. Further analysis shows that effective user profiles are context-dependent and predictor-specific, rather than relying on static attributes or surface-level similarity. Together, these results highlight the importance of task-oriented, context-dependent user profiling for personalized persuasiveness prediction.
+ oai:arXiv.org:2601.05654v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sejun Park, Yoonah Park, Jongwon Lim, Yohan Jo
+
+
+ FlyPose: Towards Robust Human Pose Estimation From Aerial Views
+ https://arxiv.org/abs/2601.05747
+ arXiv:2601.05747v2 Announce Type: replace
+Abstract: Unmanned Aerial Vehicles (UAVs) are increasingly deployed in close proximity to humans for applications such as parcel delivery, traffic monitoring, disaster response and infrastructure inspections. Ensuring safe and reliable operation in these human-populated environments demands accurate perception of human poses and actions from an aerial viewpoint. This perspective challenges existing methods with low resolution, steep viewing angles and (self-)occlusion, especially if the application demands real-time feasible models. We train and deploy FlyPose, a lightweight top-down human pose estimation pipeline for aerial imagery. Through multi-dataset training, we achieve an average improvement of 6.8 mAP in person detection across the test sets of Manipal-UAV, VisDrone, HIT-UAV as well as our custom dataset. For 2D human pose estimation we report an improvement of 16.3 mAP on the challenging UAV-Human dataset. FlyPose runs with an inference latency of ~20 milliseconds including preprocessing on a Jetson Orin AGX Developer Kit and is deployed onboard a quadrotor UAV during flight experiments. We also publish FlyPose-104, a small but challenging aerial human pose estimation dataset, that includes manual annotations from difficult aerial perspectives: https://github.com/farooqhassaan/FlyPose.
+ oai:arXiv.org:2601.05747v2
+ cs.CV
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hassaan Farooq, Marvin Brenner, Peter Stütz
+
+
+ GeoSurDepth: Harnessing Foundation Model for Spatial Geometry Consistency-Oriented Self-Supervised Surround-View Depth Estimation
+ https://arxiv.org/abs/2601.05839
+ arXiv:2601.05839v2 Announce Type: replace
+Abstract: Accurate surround-view depth estimation provides a competitive alternative to laser-based sensors and is essential for 3D scene understanding in autonomous driving. While empirical studies have proposed various approaches that primarily focus on enforcing cross-view constraints at photometric level, few explicitly exploit the rich geometric structure inherent in both monocular and surround-view setting. In this work, we propose GeoSurDepth, a framework that leverages geometry consistency as the primary cue for surround-view depth estimation. Concretely, we utilize vision foundation models as pseudo geometry priors and feature representation enhancement tool to guide the network to maintain surface normal consistency in spatial 3D space and regularize object- and texture-consistent depth estimation in 2D. In addition, we introduce a novel view synthesis pipeline where 2D-3D lifting is achieved with dense depth reconstructed via spatial warping, encouraging additional photometric supervision across temporal and spatial contexts, and compensating for the limitations of target-view image reconstruction. Finally, a newly-proposed adaptive joint motion learning strategy enables the network to adaptively emphasize informative spatial geometry cues for improved motion reasoning. Extensive experiments on KITTI, DDAD and nuScenes demonstrate that GeoSurDepth achieves SoTA performance, validating the effectiveness of our approach. Our framework highlights the importance of exploiting geometry coherence and consistency for robust self-supervised depth estimation.
+ oai:arXiv.org:2601.05839v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weimin Liu, Wenjun Wang, Joshua H. Meng
+
+
+ Assessing the Carbon Footprint of Virtual Meetings: A Quantitative Analysis of Camera Usage
+ https://arxiv.org/abs/2601.06045
+ arXiv:2601.06045v2 Announce Type: replace
+Abstract: This paper quantifies the carbon emissions related to data consumption during video calls, focusing on the impact of having the camera on versus off. The findings regarding the environmental benefits achieved by turning off cameras during meetings challenge the claims of some prevalent articles. The experiment was carried out using a 4G connection via a cell phone to measure the varying data transfer associated with videos. The outcomes indicate that turning the camera off can halve data consumption and associated carbon emissions, particularly on mobile networks. The paper concludes with recommendations to optimize data usage and reduce the environmental impact during calls.
+ oai:arXiv.org:2601.06045v2
+ cs.CY
+ cs.NI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Félix Mortas
+
+
+ Self-Admitted Technical Debt in LLM Software: An Empirical Comparison with ML and Non-ML Software
+ https://arxiv.org/abs/2601.06266
+ arXiv:2601.06266v3 Announce Type: replace
+Abstract: Self-admitted technical debt (SATD), referring to comments flagged by developers that explicitly acknowledge suboptimal code or incomplete functionality, has received extensive attention in machine learning (ML) and traditional (Non-ML) software. However, little is known about how SATD manifests and evolves in contemporary Large Language Model (LLM)-based systems, whose architectures, workflows, and dependencies differ fundamentally from both traditional and pre-LLM ML software. In this paper, we conduct the first empirical study of SATD in the LLM era, replicating and extending prior work on ML technical debt to modern LLM-based systems. We compare SATD prevalence across LLM, ML, and non-ML repositories across a total of 477 repositories (159 per category). We perform survival analysis of SATD introduction and removal to understand the dynamics of technical debt across different development paradigms. Surprisingly, despite their architectural complexity, our results reveal that LLM repositories accumulate SATD at similar rates to ML systems (3.95% vs. 4.10%). However, we observe that LLM repositories remain debt-free 2.4x longer than ML repositories (a median of 492 days vs. 204 days), and then start to accumulate technical debt rapidly. Moreover, our qualitative analysis of 377 SATD instances reveals three new forms of technical debt unique to LLM-based development that have not been reported in prior research: Model-Stack Workaround Debt, Model Dependency Debt, and Performance Optimization Debt. Finally, by mapping SATD to stages of the LLM development pipeline, we observe that debt concentrates
+ oai:arXiv.org:2601.06266v3
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Niruthiha Selvanayagam, Taher A. Ghaleb, Manel Abdellatif
+
+
+ From Lagging to Leading: Validating Hard Braking Events as High-Density Indicators of Segment Crash Risk
+ https://arxiv.org/abs/2601.06327
+ arXiv:2601.06327v2 Announce Type: replace
+Abstract: Identifying high crash risk road segments and accurately predicting crash incidence is fundamental to implementing effective safety countermeasures. While collision data inherently reflects risk, the infrequency and inconsistent reporting of crashes present a major challenge to robust risk prediction models. The proliferation of connected vehicle technology offers a promising avenue to leverage high-density safety metrics for enhanced crash forecasting. A Hard-Braking Event (HBE), interpreted as an evasive maneuver, functions as a potent proxy for elevated driving risk due to its demonstrable correlation with underlying crash causal factors. Crucially, HBE data is significantly more readily available across the entire road network than conventional collision records. This study systematically evaluated the correlation at individual road segment level between police-reported collisions and aggregated and anonymized HBEs identified via the Google Android Auto platform, utilizing datasets from California and Virginia. Empirical evidence revealed that HBEs occur at a rate orders of magnitude higher than traffic crashes. Employing the state-of-the-practice Negative-Binomial regression models, the analysis established a statistically significant positive correlation between the HBE rate and the crash rate: road segments exhibiting a higher frequency of HBEs were consistently associated with a greater incidence of crashes. This sophisticated model incorporated and controlled for various confounding factors, including road type, speed profile, proximity to ramps, and road segment slope. The HBEs derived from connected vehicle technology thus provide a scalable, high-density safety surrogate metric for network-wide traffic safety assessment, with the potential to optimize safer routing recommendations and inform the strategic deployment of active safety countermeasures.
+ oai:arXiv.org:2601.06327v2
+ cs.OH
+ stat.AP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yechen Li, Shantanu Shahane, Shoshana Vasserman, Carolina Osorio, Yi-fan Chen, Ivan Kuznetsov, Kristin White, Justyna Swiatkowska, Neha Arora, Feng Guo
+
+
+ Spatiotemporal Change-Points in Development Discourse: Insights from Social Media in Low-Resource Contexts
+ https://arxiv.org/abs/2601.06402
+ arXiv:2601.06402v2 Announce Type: replace
+Abstract: This study investigates the spatiotemporal evolution of development discourse in low-resource settings. Analyzing more than two years of geotagged X data from Zambia, we introduce a mixed-methods pipeline utilizing topic modeling, change-point detection, and qualitative coding to identify critical shifts in public debate. We identify seven recurring themes, including public health challenges and frustration with government policy, shaped by regional events and national interventions. Notably, we detect discourse change-points linked to the COVID-19 pandemic and a geothermal project, illustrating how online conversations mirror policy flashpoints. Our analysis distinguishes between the ephemeral nature of acute crises like COVID-19 and the persistent, structural reorientations driven by long-term infrastructure projects. We conceptualize "durable discourse" as sustained narrative engagement with development issues. Contributing to HCI and ICTD, we examine technology's socioeconomic impact, providing practical implications and future work for direct local engagement.
+ oai:arXiv.org:2601.06402v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Woojin Jung, Charles Chear, Andrew H. Kim, Vatsal Shah, Tawfiq Ammari
+
+
+ SparseOccVLA: Bridging Occupancy and Vision-Language Models via Sparse Queries for Unified 4D Scene Understanding and Planning
+ https://arxiv.org/abs/2601.06474
+ arXiv:2601.06474v2 Announce Type: replace
+Abstract: In autonomous driving, Vision Language Models (VLMs) excel at high-level reasoning, whereas semantic occupancy provides fine-grained details. Despite significant progress in individual fields, there is still no method that can effectively integrate both paradigms. Conventional VLMs struggle with token explosion and limited spatiotemporal reasoning, while semantic occupancy provides a unified, explicit spatial representation but is too dense to integrate efficiently with VLMs. To address these challenges and bridge the gap between VLMs and occupancy, we propose SparseOccVLA, a novel vision-language-action model that unifies scene understanding, occupancy forecasting, and trajectory planning powered by sparse occupancy queries. Starting with a lightweight Sparse Occupancy Encoder, SparseOccVLA generates compact yet highly informative sparse occupancy queries that serve as the single bridge between vision and language. These queries are aligned into the language space and reasoned by the LLM for unified scene understanding and future occupancy forecasting. Furthermore, we introduce an LLM-guided Anchor-Diffusion Planner featuring decoupled anchor scoring and denoising, as well as cross-model trajectory-condition fusion. SparseOccVLA achieves a 7% relative improvement in CIDEr over the state-of-the-art on OmniDrive-nuScenes, a 0.5 increase in mIoU score on Occ3D-nuScenes, and sets state-of-the-art open-loop planning metric on nuScenes benchmark, demonstrating its strong holistic capability.
+ oai:arXiv.org:2601.06474v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chenxu Dang, Jie Wang, Guang Li, Zhiwen Hou, Zihan You, Hangjun Ye, Jie Ma, Long Chen, Yan Wang
+
+
+ Robotic Tele-Operation for Upper Aerodigestive Tract Microsurgery: System Design and Validation
+ https://arxiv.org/abs/2601.06617
+ arXiv:2601.06617v3 Announce Type: replace
+Abstract: Upper aerodigestive tract (UADT) treatments frequently employ transoral laser microsurgery (TLM) for procedures such as the removal of tumors or polyps. In TLM, a laser beam is used to cut target tissue, while forceps are employed to grasp, manipulate, and stabilize tissue within the UADT. Although TLM systems may rely on different technologies and interfaces, forceps manipulation is still predominantly performed manually, introducing limitations in ergonomics, precision, and controllability. This paper proposes a novel robotic system for tissue manipulation in UADT procedures, based on a novel end-effector designed for forceps control. The system is integrated within a teleoperation framework that employs a robotic manipulator with a programmed remote center of motion (RCM), enabling precise and constrained instrument motion while improving surgeon ergonomics. The proposed approach is validated through two experimental studies and a dedicated usability evaluation, demonstrating its effectiveness and suitability for UADT surgical applications.
+ oai:arXiv.org:2601.06617v3
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Giovani Braglia, José Jair Alves Mendes Junior, Augusto Tetsuo Prado Inafuco, Federico Mariano, Leonardo S. Mattos
+
+
+ FinForge: Semi-Synthetic Financial Benchmark Generation
+ https://arxiv.org/abs/2601.06747
+ arXiv:2601.06747v2 Announce Type: replace
+Abstract: Evaluating Language Models (LMs) in specialized, high-stakes domains such as finance remains a significant challenge due to the scarcity of open, high-quality, and domain-specific datasets. Existing general-purpose benchmarks provide broad coverage but lack the depth and domain fidelity needed to assess LMs' capabilities for real-world financial reasoning, which requires both conceptual understanding and quantitative rigor. To address this gap, we introduce FinForge, a scalable, semi-synthetic pipeline for constructing finance-specific evaluation benchmarks through a hybrid of expert-guided data curation and controlled LM-based synthesis. FinForge combines manual and programmatic corpus construction from authoritative financial sources with structured question generation and validation using Gemini 2.5 Flash. To demonstrate the pipeline's efficacy, we produce FinForge-5k, a snapshot benchmark comprising over 5,000 human-validated question-answer pairs across 11 finance subdomains, derived from a curated corpus of 100,000 verified documents totaling 143M tokens. Evaluation of state-of-the-art open-source and closed-source models on FinForge-5k reveals significant differences in financial reasoning, with leading models achieving accuracy levels near 80%. These findings underscore the framework's utility for diagnosing current model limitations and guiding future improvements in financial domain competence. All code and data are available at https://github.com/gtfintechlab/FinForge.
+ oai:arXiv.org:2601.06747v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Glenn Matlin, Akhil Theerthala, Anant Gupta, Anirudh JM, Rayan Castilla, Yi Mei Ng, Sudheer Chava
+
+
+ ET-Agent: Incentivizing Effective Tool-Integrated Reasoning Agent via Behavior Calibration
+ https://arxiv.org/abs/2601.06860
+ arXiv:2601.06860v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) can extend their parameter knowledge limits by adopting the Tool-Integrated Reasoning (TIR) paradigm. However, existing LLM-based agent training frameworks often focus on answer accuracy, overlooking specific alignment for behavior patterns. Consequently, agents often exhibit ineffective actions during TIR tasks, such as redundant and insufficient tool calls. How to calibrate erroneous behavioral patterns when executing TIR tasks, thereby exploring effective trajectories, remains an open problem. In this paper, we propose ET-Agent, a training framework for calibrating an agent's tool-use behavior through two synergistic perspectives: Self-evolving Data Flywheel and Behavior Calibration Training. Specifically, we introduce a self-evolving data flywheel to generate enhanced data, used to fine-tune the LLM to improve its exploration ability. Based on this, we implement a two-phase behavior-calibration training framework. It is designed to progressively calibrate erroneous behavioral patterns to optimal behaviors. Further in-depth experiments confirm the superiority of ET-Agent across multiple dimensions, including correctness, efficiency, reasoning conciseness, and tool execution accuracy. Our ET-Agent framework provides practical insights for research in the TIR field. Code can be found at https://github.com/asilverlight/ET-Agent
+ oai:arXiv.org:2601.06860v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yifei Chen, Guanting Dong, Zhicheng Dou
+
+
+ UDPNet: Unleashing Depth-based Priors for Robust Image Dehazing
+ https://arxiv.org/abs/2601.06909
+ arXiv:2601.06909v2 Announce Type: replace
+Abstract: Image dehazing has witnessed significant advancements with the development of deep learning models. However, most existing methods focus solely on single-modal RGB features, neglecting the inherent correlation between scene depth and haze distribution. Even those that jointly optimize depth estimation and image dehazing often suffer from suboptimal performance due to inadequate utilization of accurate depth information. In this paper, we present UDPNet, a general framework that leverages depth-based priors from a large-scale pretrained depth estimation model DepthAnything V2 to boost existing image dehazing models. Specifically, our architecture comprises two key components: the Depth-Guided Attention Module (DGAM) adaptively modulates features via lightweight depth-guided channel attention, and the Depth Prior Fusion Module (DPFM) enables hierarchical fusion of multi-scale depth map features via a dual sliding-window multi-head cross-attention mechanism. These modules ensure both computational efficiency and effective integration of depth priors. Moreover, the depth priors empower the network to dynamically adapt to varying haze densities, illumination conditions, and domain gaps across synthetic and real-world data. Extensive experimental results demonstrate the effectiveness of our UDPNet, outperforming the state-of-the-art methods on popular dehazing datasets, with PSNR improvements of 0.85 dB on SOTS-indoor, 1.19 dB on Haze4K, and 1.79 dB on NHR. Our proposed solution establishes a new benchmark for depth-aware dehazing across various scenarios. Pretrained models and codes are released at our project https://github.com/Harbinzzy/UDPNet.
+ oai:arXiv.org:2601.06909v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zengyuan Zuo, Junjun Jiang, Gang Wu, Xianming Liu
+
+
+ A High-Recall Cost-Sensitive Machine Learning Framework for Real-Time Online Banking Transaction Fraud Detection
+ https://arxiv.org/abs/2601.07276
+ arXiv:2601.07276v2 Announce Type: replace
+Abstract: Fraudulent activities on digital banking services are becoming more intricate by the day, challenging existing defenses. While older rule-driven methods struggle to keep pace, even precision-focused algorithms fall short when new scams are introduced. These tools typically overlook subtle shifts in criminal behavior, missing crucial signals. Because silent breaches cost institutions far more than flagged-but-legitimate actions, catching every possible case is crucial, and high sensitivity to actual threats becomes essential when an oversight leads to heavy losses. A key aim here is reducing missed fraud cases without an excessive increase in false alerts. This study builds a system using ensemble learning methods adjusted through careful threshold choices. Using openly shared real-world transaction records, in which fraudulent acts appear rarely among normal activities, tests are run under realistically skewed class distributions. The outcomes reveal that approximately 98 percent of actual fraud is detected, outperforming standard setups that rely on fixed rules when dealing with imbalanced classes. When tested in a live setting, the fraud detection system connects directly to an online banking transaction flow, stopping questionable activities before they are completed. Alongside this setup, a Chrome browser add-on is designed to flag deceptive web links and reduce threats from harmful sites. These results show that adjusting decisions by cost impact and validating across the entire system makes deployment more stable and realistic for today's digital banking platforms.
+ oai:arXiv.org:2601.07276v2
+ cs.CR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Karthikeyan V. R., Premnath S., Kavinraaj S., J. Sangeetha
+
+
+ ESDD2: Environment-Aware Speech and Sound Deepfake Detection Challenge Evaluation Plan
+ https://arxiv.org/abs/2601.07303
+ arXiv:2601.07303v3 Announce Type: replace
+Abstract: Audio recorded in real-world environments often contains a mixture of foreground speech and background environmental sounds. With rapid advances in text-to-speech, voice conversion, and other generation models, either component can now be modified independently. Such component-level manipulations are harder to detect, as the remaining unaltered component can mislead systems designed for whole-utterance deepfake audio, and they often sound more natural to human listeners. To address this gap, we propose the CompSpoofV2 dataset and a separation-enhanced joint learning framework. CompSpoofV2 is a large-scale curated dataset designed for component-level audio anti-spoofing, which contains over 250k audio samples, with a total duration of approximately 283 hours. Based on CompSpoofV2 and the separation-enhanced joint learning framework, we launch the Environment-Aware Speech and Sound Deepfake Detection Challenge (ESDD2), focusing on component-level spoofing, where both speech and environmental sounds may be manipulated or synthesized, creating a more challenging and realistic detection scenario. The challenge will be held in conjunction with the IEEE International Conference on Multimedia and Expo 2026 (ICME 2026).
+ oai:arXiv.org:2601.07303v3
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Xueping Zhang, Han Yin, Yang Xiao, Lin Zhang, Ting Dang, Rohan Kumar Das, Ming Li
+
+
+ AntiPaSTO: Self-Supervised Steering of Moral Reasoning
+ https://arxiv.org/abs/2601.07473
+ arXiv:2601.07473v2 Announce Type: replace
+Abstract: As models grow more capable, human supervision breaks down: labels don't scale, outputs can be gamed, and training doesn't generalize. Scalable oversight requires steering methods that are internal, self-supervised, and transfer out-of-distribution; existing methods satisfy some but not all three. We introduce AntiPaSTO, which separates representations along an anti-parallel axis ($\alpha=\pm1$ produce opposite shifts), with coherence constraints preventing collapse. Human input is minimal: two contrasting words inserted into template sentences, no preference labels. Using 800 such pairs on Gemma-3-1B, AntiPaSTO beats prompting baselines by 6.9 times on DailyDilemmas and maintains bidirectional control where prompting triggers refusal.
+ oai:arXiv.org:2601.07473v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Michael J. Clark
+
+
+ On the Sequence Reconstruction Problem for the Single-Deletion Two-Substitution Channel
+ https://arxiv.org/abs/2601.07547
+ arXiv:2601.07547v2 Announce Type: replace
+Abstract: The Levenshtein sequence reconstruction problem studies the reconstruction of a transmitted sequence from multiple erroneous copies of it. A fundamental question in this field is to determine the minimum number of erroneous copies required to guarantee correct reconstruction of the original sequence. This problem is equivalent to determining the maximum possible intersection size of two error balls associated with the underlying channel. Existing research on the sequence reconstruction problem has largely focused on channels with a single type of error, such as insertions, deletions, or substitutions alone. However, relatively little is known for channels that involve a mixture of error types, for instance, channels allowing both deletions and substitutions. In this work, we study the sequence reconstruction problem for the single-deletion two-substitution channel, which allows one deletion and at most two substitutions applied to the transmitted sequence. Specifically, we prove that if two $q$-ary length-$n$ sequences have the Hamming distance $d\geq 2$, where $q\geq 2$ is any fixed integer, then the intersection size of their error balls under the single-deletion two-substitution channel is upper bounded by $(q^2-1)n^2-(3q^2+5q-5)n+O_q(1)$, where $O_q(1)$ is a constant independent from $n$ but dependent on $q$. Moreover, we show that this upper bound is tight up to an additive constant.
+ oai:arXiv.org:2601.07547v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Wentu Song, Kui Cai, Tony Q. S. Quek
+
+
+ BenchSeg: A Large-Scale Dataset and Benchmark for Multi-View Food Video Segmentation
+ https://arxiv.org/abs/2601.07581
+ arXiv:2601.07581v2 Announce Type: replace
+Abstract: Food image segmentation is a critical task for dietary analysis, enabling accurate estimation of food volume and nutrients. However, current methods suffer from limited multi-view data and poor generalization to new viewpoints. We introduce BenchSeg, a novel multi-view food video segmentation dataset and benchmark. BenchSeg aggregates 55 dish scenes (from Nutrition5k, Vegetables & Fruits, MetaFood3D, and FoodKit) with 25,284 meticulously annotated frames, capturing each dish under free 360° camera motion. We evaluate a diverse set of 20 state-of-the-art segmentation models (e.g., SAM-based, transformer, CNN, and large multimodal) on the existing FoodSeg103 dataset, and then assess them (alone and combined with video-memory modules) on BenchSeg. Quantitative and qualitative results demonstrate that while standard image segmenters degrade sharply under novel viewpoints, memory-augmented methods maintain temporal consistency across frames. Our best model, a combination of SeTR-MLA+XMem2, outperforms prior work (e.g., improving over FoodMem by ~2.63% mAP), offering new insights into food segmentation and tracking for dietary analysis. In addition to frame-wise spatial accuracy, we introduce a dedicated temporal evaluation protocol that explicitly quantifies segmentation stability over time through continuity, flicker rate, and IoU drift metrics. This allows us to reveal failure modes that remain invisible under standard per-frame evaluations. We release BenchSeg to foster future research. The project page, including the dataset annotations and the food segmentation models, can be found at https://amughrabi.github.io/benchseg.
+ oai:arXiv.org:2601.07581v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ahmad AlMughrabi, Guillermo Rivo, Carlos Jim\'enez-Farf\'an, Umair Haroon, Farid Al-Areqi, Hyunjun Jung, Benjamin Busam, Ricardo Marques, Petia Radeva
+
+
+ Mechanisms are Transferable: Data-Efficient Low-Resource Adaptation via Circuit-Targeted Supervised Fine-Tuning
+ https://arxiv.org/abs/2601.08146
+ arXiv:2601.08146v2 Announce Type: replace
+Abstract: Adapting LLMs to low-resource languages is difficult: labeled data is scarce, full-model fine-tuning is unstable, and continued cross-lingual tuning can cause catastrophic forgetting. We propose Circuit-Targeted Supervised Fine-Tuning (CT-SFT): a counterfactual-free adaptation of CD-T (Contextual Decomposition Transformer) that uses a label-balanced mean baseline and task-directional relevance scoring to identify a sparse set of task-relevant attention heads in a proxy-language checkpoint, then transfers to a target language by updating only those heads (plus LayerNorm) via head-level gradient masking. Across NusaX-Senti and XNLI, CT-SFT improves cross-lingual accuracy over continued full fine-tuning while updating only a small subset of model parameters. We find an editing-preserving trade-off: harder transfers favor editing circuit heads, while easier transfers often favor near-zero updates (i.e., to low-relevance heads), preserving the source mechanism. CT-SFT also substantially reduces catastrophic forgetting, preserving proxy/source-language competence during transfer.
+ oai:arXiv.org:2601.08146v2
+ cs.CL
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Khumaisa Nur'aini, Ayu Purwarianti, Alham Fikri Aji, Derry Wijaya
+
+
+ Improving LLM Reasoning with Homophily-aware Structural and Semantic Text-Attributed Graph Compression
+ https://arxiv.org/abs/2601.08187
+ arXiv:2601.08187v2 Announce Type: replace
+Abstract: Large language models (LLMs) have demonstrated promising capabilities in Text-Attributed Graph (TAG) understanding. Recent studies typically focus on verbalizing graph structures via handcrafted prompts, feeding the target node and its neighborhood context into LLMs. However, constrained by the context window, existing methods mainly resort to random sampling, often implemented by randomly dropping nodes or edges, which inevitably introduces noise and causes reasoning instability. We argue that graphs inherently contain rich structural and semantic information, and that their effective exploitation can unlock potential gains in LLM reasoning performance. To this end, we propose Homophily-aware Structural and Semantic Compression for LLMs (HS2C), a framework centered on exploiting graph homophily. Structurally, guided by the principle of Structural Entropy minimization, we perform a global hierarchical partition that decodes the graph's essential topology. This partition identifies naturally cohesive, homophilic communities, while discarding stochastic connectivity noise. Semantically, we deliver the detected structural homophily to the LLM, empowering it to perform differentiated semantic aggregation based on predefined community types. This process compresses redundant background contexts into concise community-level consensus, selectively preserving semantically homophilic information aligned with the target nodes. Extensive experiments on 10 node-level benchmarks across LLMs of varying sizes and families demonstrate that, by feeding LLMs with structurally and semantically compressed inputs, HS2C simultaneously enhances the compression rate and downstream inference accuracy, validating its superiority and scalability. Extensions to 7 diverse graph-level benchmarks further consolidate HS2C's task generalizability.
+ oai:arXiv.org:2601.08187v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zijun Di, Bin Lu, Huquan Kang, Luoyi Fu, Jiaxin Ding, Xiaoying Gan, Lei Zhou, Xinbing Wang, Chenghu Zhou
+
+
+ ForgetMark: Stealthy Fingerprint Embedding via Targeted Unlearning in Language Models
+ https://arxiv.org/abs/2601.08189
+ arXiv:2601.08189v2 Announce Type: replace
+Abstract: Existing invasive (backdoor) fingerprints suffer from high-perplexity triggers that are easily filtered, fixed response patterns exposed by heuristic detectors, and spurious activations on benign inputs. We introduce \textsc{ForgetMark}, a stealthy fingerprinting framework that encodes provenance via targeted unlearning. It builds a compact, human-readable key--value set with an assistant model and predictive-entropy ranking, then trains lightweight LoRA adapters to suppress the original values on their keys while preserving general capabilities. Ownership is verified under black/gray-box access by aggregating likelihood and semantic evidence into a fingerprint success rate. By relying on probabilistic forgetting traces rather than fixed trigger--response patterns, \textsc{ForgetMark} avoids high-perplexity triggers, reduces detectability, and lowers false triggers. Across diverse architectures and settings, it achieves 100\% ownership verification on fingerprinted models while maintaining standard performance, surpasses backdoor baselines in stealthiness and robustness to model merging, and remains effective under moderate incremental fine-tuning. Our code and data are available at \href{https://github.com/Xuzhenhua55/ForgetMark}{https://github.com/Xuzhenhua55/ForgetMark}.
+ oai:arXiv.org:2601.08189v2
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhenhua Xu, Haobo Zhang, Zhebo Wang, Qichen Liu, Haitao Xu, Wenpeng Xing, Meng Han
+
+
+ DNF: Dual-Layer Nested Fingerprinting for Large Language Model Intellectual Property Protection
+ https://arxiv.org/abs/2601.08223
+ arXiv:2601.08223v2 Announce Type: replace
+Abstract: The rapid growth of large language models raises pressing concerns about intellectual property protection under black-box deployment. Existing backdoor-based fingerprints either rely on rare tokens -- leading to high-perplexity inputs susceptible to filtering -- or use fixed trigger-response mappings that are brittle to leakage and post-hoc adaptation. We propose \textsc{Dual-Layer Nested Fingerprinting} (DNF), a black-box method that embeds a hierarchical backdoor by coupling domain-specific stylistic cues with implicit semantic triggers. Across Mistral-7B, LLaMA-3-8B-Instruct, and Falcon3-7B-Instruct, DNF achieves perfect fingerprint activation while preserving downstream utility. Compared with existing methods, it uses lower-perplexity triggers, remains undetectable under fingerprint detection attacks, and is relatively robust to incremental fine-tuning and model merging. These results position DNF as a practical, stealthy, and resilient solution for LLM ownership verification and intellectual property protection.
+ oai:arXiv.org:2601.08223v2
+ cs.CR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhenhua Xu, Yiran Zhao, Mengting Zhong, Dezhang Kong, Changting Lin, Tong Qiao, Meng Han
+
+
+ On Evaluation of Unsupervised Feature Selection for Pattern Classification
+ https://arxiv.org/abs/2601.08257
+ arXiv:2601.08257v2 Announce Type: replace
+Abstract: Unsupervised feature selection aims to identify a compact subset of features that captures the intrinsic structure of data without supervised labels. Most existing studies evaluate the performance of methods using single-label datasets that can be instantiated by selecting one label from multi-label data while maintaining the original features. Because the chosen label can vary arbitrarily depending on the experimental setting, the relative superiority of compared methods can change with whichever label happens to be selected. Thus, evaluating unsupervised feature selection methods based solely on single-label accuracy is an unreasonable way to assess their true discriminative ability. This study revisits this evaluation paradigm by adopting a multi-label classification framework. Experiments on 21 multi-label datasets using several representative methods demonstrate that performance rankings differ markedly from those reported under single-label settings, suggesting that multi-label evaluation settings enable a fair and reliable comparison of unsupervised feature selection methods.
+ oai:arXiv.org:2601.08257v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Gyu-Il Kim, Dae-Won Kim, Jaesung Lee
+
+
+ Med-CoReasoner: Reducing Language Disparities in Medical Reasoning via Language-Informed Co-Reasoning
+ https://arxiv.org/abs/2601.08267
+ arXiv:2601.08267v2 Announce Type: replace
+Abstract: While reasoning-enhanced large language models perform strongly on English medical tasks, a persistent multilingual gap remains, with substantially weaker reasoning in local languages, limiting equitable global medical deployment. To bridge this gap, we introduce Med-CoReasoner, a language-informed co-reasoning framework that elicits parallel English and local-language reasoning, abstracts them into structured concepts, and integrates local clinical knowledge into an English logical scaffold via concept-level alignment and retrieval. This design combines the structural robustness of English reasoning with the practice-grounded expertise encoded in local languages. To evaluate multilingual medical reasoning beyond multiple-choice settings, we construct MultiMed-X, a benchmark covering seven languages with expert-annotated long-form question answering and natural language inference tasks, comprising 350 instances per language. Experiments across three benchmarks show that Med-CoReasoner improves multilingual reasoning performance by an average of 5%, with particularly substantial gains in low-resource languages. Moreover, model distillation and expert evaluation analysis further confirm that Med-CoReasoner produces clinically sound and culturally grounded reasoning traces.
+ oai:arXiv.org:2601.08267v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Fan Gao, Sherry T. Tong, Jiwoong Sohn, Jiahao Huang, Junfeng Jiang, Ding Xia, Piyalitt Ittichaiwong, Kanyakorn Veerakanjana, Hyunjae Kim, Qingyu Chen, Edison Marrese-Taylor, Kazuma Kobayashi, Akiko Aizawa, Irene Li
+
+
+ Automated Machine Learning in Radiomics: A Comparative Evaluation of Performance, Efficiency and Accessibility
+ https://arxiv.org/abs/2601.08334
+ arXiv:2601.08334v2 Announce Type: replace
+Abstract: Automated machine learning (AutoML) frameworks can lower technical barriers for predictive and prognostic model development in radiomics by enabling researchers without programming expertise to build models. However, their effectiveness in addressing radiomics-specific challenges remains unclear. This study evaluates the performance, efficiency, and accessibility of general-purpose and radiomics-specific AutoML frameworks on diverse radiomics classification tasks, thereby highlighting development needs for radiomics. Ten public/private radiomics datasets with varied imaging modalities (CT/MRI), sizes, anatomies and endpoints were used. Six general-purpose and five radiomics-specific frameworks were tested with predefined parameters using standardized cross-validation. Evaluation metrics included AUC, runtime, together with qualitative aspects related to software status, accessibility, and interpretability. Simplatab, a radiomics-specific tool with a no-code interface, achieved the highest average test AUC (81.81%) with a moderate runtime (~1 hour). LightAutoML, a general-purpose framework, showed the fastest execution with competitive performance (78.74% mean AUC in six minutes). Most radiomics-specific frameworks were excluded from the performance analysis due to obsolescence, extensive programming requirements, or computational inefficiency. Conversely, general-purpose frameworks demonstrated higher accessibility and ease of implementation. Simplatab provides an effective balance of performance, efficiency, and accessibility for radiomics classification problems. However, significant gaps remain, including the lack of accessible survival analysis support and the limited integration of feature reproducibility and harmonization within current AutoML frameworks. Future research should focus on adapting AutoML solutions to better address these radiomics-specific challenges.
+ oai:arXiv.org:2601.08334v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jose Lozano-Montoya, Emilio Soria-Olivas, Almudena Fuster-Matanzo, Angel Alberich-Bayarri, Ana Jimenez-Pastor
+
+
+ A Qualitative Model to Reason about Object Rotations (QOR) applied to solve the Cube Comparison Test (CCT)
+ https://arxiv.org/abs/2601.08382
+ arXiv:2601.08382v2 Announce Type: replace
+Abstract: This paper presents a Qualitative model for Reasoning about Object Rotations (QOR) which is applied to solve the Cube Comparison Test (CCT) by Ekstrom et al. (1976). A conceptual neighborhood graph relating the Rotation movement to the Location change and the Orientation change (CNGRLO) of the features on the cube sides has been built and it produces composition tables to calculate inferences for reasoning about rotations.
+ oai:arXiv.org:2601.08382v2
+ cs.AI
+ cs.SC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zoe Falomir
+
+
+ On the Generalization Error of Differentially Private Algorithms Via Typicality
+ https://arxiv.org/abs/2601.08386
+ arXiv:2601.08386v2 Announce Type: replace
+Abstract: We study the generalization error of stochastic learning algorithms from an information-theoretic perspective, with a particular emphasis on deriving sharper bounds for differentially private algorithms. It is well known that the generalization error of stochastic learning algorithms can be bounded in terms of mutual information and maximal leakage, yielding in-expectation and high-probability guarantees, respectively. In this work, we further upper bound mutual information and maximal leakage by explicit, easily computable formulas, using typicality-based arguments and exploiting the stability properties of private algorithms. In the first part of the paper, we strictly improve the mutual-information bounds by Rodr\'iguez-G\'alvez et al. (IEEE Trans. Inf. Theory, 2021). In the second part, we derive new upper bounds on the maximal leakage of learning algorithms. In both cases, the resulting bounds on information measures translate directly into generalization error guarantees.
+ oai:arXiv.org:2601.08386v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yanxiao Liu, Chun Hei Michael Shiu, Lele Wang, Deniz G\"und\"uz
+
+
+ Protrusion Decompositions Revisited: Uniform Lossy Kernels for Reducing Treewidth and Linear Kernels for Hitting Disconnected Minors
+ https://arxiv.org/abs/2601.08424
+ arXiv:2601.08424v2 Announce Type: replace
+Abstract: Let F be a finite family of graphs. In the F-Deletion problem, one is given a graph G and an integer k, and the goal is to find k vertices whose deletion results in a graph with no minor from the family F. This may be regarded as a far-reaching generalization of Vertex Cover and Feedback Vertex Set. In their seminal work, Fomin, Lokshtanov, Misra & Saurabh [FOCS 2012] gave a polynomial kernel for this problem when the family F contains a planar graph. As the size of their kernel is g(F) * k^{f(F)}, a natural follow-up question was whether the dependence on F in the exponent of k can be avoided. The answer turned out to be negative: Giannapoulou, Jansen, Lokshtanov & Saurabh [TALG 2017] proved that this is already inevitable for the special case of the Treewidth-d-Deletion problem.
+ In this work, we show that this non-uniformity can be avoided at the expense of a small loss. First, we present a simple 2-approximate kernelization algorithm for Treewidth-d-Deletion with kernel size g(d) * k^5. Next, we show that the approximation factor can be made arbitrarily close to 1, if we settle for a kernelization protocol with O(1) calls to an oracle that solves instances of size bounded by a uniform polynomial in k.
+ We also obtain linear kernels on sparse graph classes when F contains a planar graph, whereas the previously known theorems required all graphs in F to be connected. Specifically, we generalize the kernelization algorithm by Kim, Langer, Paul, Reidl, Rossmanith, Sau & Sikdar [TALG 2015] on graph classes that exclude a topological minor.
+ oai:arXiv.org:2601.08424v2
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.4230/LIPIcs.STACS.2026.31
+ Roohani Sharma, Micha{\l} W{\l}odarczyk
+
+
+ Large Multimodal Models for Embodied Intelligent Driving: The Next Frontier in Self-Driving?
+ https://arxiv.org/abs/2601.08434
+ arXiv:2601.08434v3 Announce Type: replace
+Abstract: The advent of Large Multimodal Models (LMMs) offers a promising technology for tackling the limitations of modular design in autonomous driving, which often falters in open-world scenarios requiring sustained environmental understanding and logical reasoning. In addition, embodied artificial intelligence facilitates policy optimization through closed-loop interactions to achieve continuous learning, thereby advancing autonomous driving toward embodied intelligent (EI) driving. However, this capability will be constrained if EI driving relies solely on LMMs without joint decision-making. This article introduces a novel semantics and policy dual-driven hybrid decision framework to tackle this challenge, ensuring continuous learning and joint decision-making. The framework merges LMMs for semantic understanding and cognitive representation with deep reinforcement learning (DRL) for real-time policy optimization. We start by introducing the foundational principles of EI driving and LMMs. Moreover, we examine the emerging opportunities this framework enables, encompassing potential benefits and representative use cases. An experimental case study validates the performance superiority of our framework on a lane-change planning task. Finally, several future research directions to empower EI driving are identified to guide subsequent work.
+ oai:arXiv.org:2601.08434v3
+ cs.RO
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Long Zhang, Yuchen Xia, Bingqing Wei, Zhen Liu, Shiwen Mao, Zhu Han, Mohsen Guizani
+
+
+ Current and temperature imbalances in parallel-connected grid storage battery modules
+ https://arxiv.org/abs/2601.08459
+ arXiv:2601.08459v2 Announce Type: replace
+Abstract: A key challenge with large battery systems is heterogeneous currents and temperatures in modules with parallel-connected cells. Although extreme currents and temperatures are detrimental to the performance and lifetime of battery cells, there is not a consensus on the scale of typical imbalances within grid storage modules. Here, we quantify these imbalances through simulations and experiments on an industrially representative grid storage battery module consisting of prismatic lithium iron phosphate cells, elucidating the evolution of current and temperature imbalances and their dependence on individual cell and module parameter variations. Using a sensitivity analysis, we find that varying contact resistances and cell resistances contribute strongly to temperature differences between cells, from which we define safety thresholds on cell-to-cell variability. Finally, we investigate how these thresholds change for different applications, to outline a set of robustness metrics that show how cycling at lower C-rates and narrower SOC ranges can mitigate failures.
+ oai:arXiv.org:2601.08459v2
+ eess.SY
+ cs.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Joseph Ross, Damien Frost, Efstratios Chatzinikolaou, Stephen Duncan, David Howey
+
+
+ STAGE: A Benchmark for Knowledge Graph Construction, Question Answering, and In-Script Role-Playing over Movie Screenplays
+ https://arxiv.org/abs/2601.08510
+ arXiv:2601.08510v2 Announce Type: replace
+Abstract: Movie screenplays are rich long-form narratives that interleave complex character relationships, temporally ordered events, and dialogue-driven interactions. While prior benchmarks target individual subtasks such as question answering or dialogue generation, they rarely evaluate whether models can construct a coherent story world and use it consistently across multiple forms of reasoning and generation. We introduce STAGE (Screenplay Text, Agents, Graphs and Evaluation), a unified benchmark for narrative understanding over full-length movie screenplays. STAGE defines four tasks: knowledge graph construction, scene-level event summarization, long-context screenplay question answering, and in-script character role-playing, all grounded in a shared narrative world representation. The benchmark provides cleaned scripts, curated knowledge graphs, and event- and character-centric annotations for 150 films across English and Chinese, enabling holistic evaluation of models' abilities to build world representations, abstract and verify narrative events, reason over long narratives, and generate character-consistent responses.
+ oai:arXiv.org:2601.08510v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiuyu Tian, Yiding Li, Fengyi Chen, Zequn Liu, Youyong Kong, Fan Guo, Yuyao Li, Jinjing Shen, Zhijing Xie, Yiyun Luo, Xin Zhang
+
+
+ Learner-Tailored Program Repair: A Solution Generator with Iterative Edit-Driven Retrieval Enhancement
+ https://arxiv.org/abs/2601.08545
+ arXiv:2601.08545v2 Announce Type: replace
+Abstract: With the development of large language models (LLMs) in the field of programming, intelligent programming coaching systems have gained widespread attention. However, most research focuses on repairing the buggy code of programming learners without providing the underlying causes of the bugs. To address this gap, we introduce a novel task, namely LRP (Learner-Tailored Program Repair). We then propose a novel and effective framework, LSGEN (Learner-Tailored Solution Generator), to enhance program repair while providing bug descriptions for the buggy code. In the first stage, we utilize a repair solution retrieval framework to construct a solution retrieval database and then employ an edit-driven code retrieval approach to retrieve valuable solutions, guiding LLMs in identifying and fixing the bugs in buggy code. In the second stage, we propose a solution-guided program repair method, which fixes the code and provides explanations under the guidance of the retrieved solutions. Moreover, we propose an Iterative Retrieval Enhancement method that utilizes evaluation results of the generated code to iteratively optimize the retrieval direction and explore more suitable repair strategies, improving performance in practical programming coaching scenarios. The experimental results show that our approach outperforms a set of baselines by a large margin, validating the effectiveness of our framework for the newly proposed LRP task.
+ oai:arXiv.org:2601.08545v2
+ cs.AI
+ cs.CL
+ cs.SE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zhenlong Dai, Zhuoluo Zhao, Hengning Wang, Xiu Tang, Sai Wu, Chang Yao, Zhipeng Gao, Jingyuan Chen
+
+
+ How Order-Sensitive Are LLMs? OrderProbe for Deterministic Structural Reconstruction
+ https://arxiv.org/abs/2601.08626
+ arXiv:2601.08626v2 Announce Type: replace
+Abstract: Large language models (LLMs) excel at semantic understanding, yet their ability to reconstruct internal structure from scrambled inputs remains underexplored. Sentence-level restoration is ill-posed for automated evaluation because multiple valid word orders often exist. We introduce OrderProbe, a deterministic benchmark for structural reconstruction using fixed four-character expressions in Chinese, Japanese, and Korean, which have a unique canonical order and thus support exact-match scoring. We further propose a diagnostic framework that evaluates models beyond recovery accuracy, including semantic fidelity, logical validity, consistency, robustness sensitivity, and information density. Experiments on twelve widely used LLMs show that structural reconstruction remains difficult even for frontier systems: zero-shot recovery frequently falls below 35%. We also observe a consistent dissociation between semantic recall and structural planning, suggesting that structural robustness is not an automatic byproduct of semantic competence.
+ oai:arXiv.org:2601.08626v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yingjie He, Zhaolu Kang, Kehan Jiang, Qianyuan Zhang, Jiachen Qian, Chunlei Meng, Yujie Feng, Yuan Wang, Jiabao Dou, Aming Wu, Leqi Zheng, Pengxiang Zhao, Jiaxin Liu, Zeyu Zhang, Lei Wang, Guansu Wang, Qishi Zhan, Xiaomin He, Meisheng Zhang, Jianyuan Ni
+
+
+ Provably Safe Reinforcement Learning for Stochastic Reach-Avoid Problems with Entropy Regularization
+ https://arxiv.org/abs/2601.08646
+ arXiv:2601.08646v3 Announce Type: replace
+Abstract: We consider the problem of learning the optimal policy for Markov decision processes with safety constraints. We formulate the problem in a reach-avoid setup. Our goal is to design online reinforcement learning algorithms that ensure safety constraints with arbitrarily high probability during the learning phase. To this end, we first propose an algorithm based on the optimism in the face of uncertainty (OFU) principle. Based on the first algorithm, we propose our main algorithm, which utilizes entropy regularization. We investigate the finite-sample analysis of both algorithms and derive their regret bounds. We demonstrate that the inclusion of entropy regularization improves the regret and drastically controls the episode-to-episode variability that is inherent in OFU-based safe RL algorithms.
+ oai:arXiv.org:2601.08646v3
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Abhijit Mazumdar, Rafal Wisniewski, Manuela L. Bujorianu
+
+
+ Nationality and Region Prediction from Names: A Comparative Study of Neural Models and Large Language Models
+ https://arxiv.org/abs/2601.08692
+ arXiv:2601.08692v2 Announce Type: replace
+Abstract: Predicting nationality from personal names has practical value in marketing, demographic research, and genealogical studies. Conventional neural models learn statistical correspondences between names and nationalities from task-specific training data, posing challenges in generalizing to low-frequency nationalities and distinguishing similar nationalities within the same region. Large language models (LLMs) have the potential to address these challenges by leveraging world knowledge acquired during pre-training. In this study, we comprehensively compare neural models and LLMs on nationality prediction, evaluating six neural models and six LLM prompting strategies across three granularity levels (nationality, region, and continent), with frequency-based stratified analysis and error analysis. Results show that LLMs outperform neural models at all granularity levels, with the gap narrowing as granularity becomes coarser. Simple machine learning methods exhibit the highest frequency robustness, while pre-trained models and LLMs show degradation for low-frequency nationalities. Error analysis reveals that LLMs tend to make ``near-miss'' errors, predicting the correct region even when nationality is incorrect, whereas neural models exhibit more cross-regional errors and bias toward high-frequency classes. These findings indicate that LLM superiority stems from world knowledge, model selection should consider required granularity, and evaluation should account for error quality beyond accuracy.
+ oai:arXiv.org:2601.08692v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keito Inoshita
+
+
+ Auditing Student-AI Collaboration: A Case Study of Online Graduate CS Students
+ https://arxiv.org/abs/2601.08697
+ arXiv:2601.08697v2 Announce Type: replace
+Abstract: As generative AI becomes embedded in higher education, it increasingly shapes how students complete academic tasks. While these systems offer efficiency and support, concerns persist regarding over-automation, diminished student agency, and the potential for unreliable or hallucinated outputs. This study conducts a mixed-methods audit of student-AI collaboration preferences by examining the alignment between current AI capabilities and students' desired levels of automation in academic work. Using two sequential and complementary surveys, we capture students' perceived benefits, risks, and preferred boundaries when using AI. The first survey employs an existing task-based framework to assess preferences for and actual usage of AI across 12 academic tasks, alongside primary concerns and reasons for use. The second survey, informed by the first, explores how AI systems could be designed to address these concerns through open-ended questions. This study aims to identify gaps between existing AI affordances and students' normative expectations of collaboration, informing the development of more effective and trustworthy AI systems for education.
+ oai:arXiv.org:2601.08697v2
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Nifu Dan
+
+
+ Adaptive Requesting in Decentralized Edge Networks via Non-Stationary Bandits
+ https://arxiv.org/abs/2601.08760
+ arXiv:2601.08760v3 Announce Type: replace
+Abstract: We study a decentralized collaborative requesting problem that aims to optimize the information freshness of time-sensitive clients in edge networks consisting of multiple clients, access nodes (ANs), and servers. Clients request content through ANs acting as gateways, without observing AN states or the actions of other clients. We define the reward as the age of information reduction resulting from a client's selection of an AN, and formulate the problem as a non-stationary multi-armed bandit. In this decentralized and partially observable setting, the resulting reward process is history-dependent and coupled across clients, and exhibits both abrupt and gradual changes in expected rewards, rendering classical bandit-based approaches ineffective. To address these challenges, we propose the AGING BANDIT WITH ADAPTIVE RESET algorithm, which combines adaptive windowing with periodic monitoring to track evolving reward distributions. We establish theoretical performance guarantees showing that the proposed algorithm achieves near-optimal performance, and we validate the theoretical results through simulations.
+ oai:arXiv.org:2601.08760v3
+ cs.LG
+ cs.MA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yi Zhuang, Kun Yang, Xingran Chen
+
+
+ Pervasive Annotation Errors Break Text-to-SQL Benchmarks and Leaderboards
+ https://arxiv.org/abs/2601.08778
+ arXiv:2601.08778v3 Announce Type: replace
+Abstract: Researchers have proposed numerous text-to-SQL techniques to streamline data analytics and accelerate the development of data-driven applications. To compare these techniques and select the best one for deployment, the community depends on public benchmarks and their leaderboards. Since these benchmarks heavily rely on human annotations during question construction and answer evaluation, the validity of the annotations is crucial.
+ In this paper, we conduct an empirical study that (i) benchmarks annotation error rates for two widely used text-to-SQL benchmarks, BIRD and Spider 2.0-Snow, and (ii) corrects a subset of the BIRD development (Dev) set to measure the impact of annotation errors on text-to-SQL agent performance and leaderboard rankings. Through expert analysis, we show that BIRD Mini-Dev and Spider 2.0-Snow have error rates of 52.8% and 62.8%, respectively. We re-evaluate all 16 open-source agents from the BIRD leaderboard on both the original and the corrected BIRD Dev subsets. We show that performance changes range from -7% to 31% (in relative terms) and rank changes range from $-9$ to $+9$ positions. We further assess whether these impacts generalize to the full BIRD Dev set. We find that the rankings of agents on the uncorrected subset correlate strongly with those on the full Dev set (Spearman's $r_s$=0.85, $p$=3.26e-5), whereas they correlate weakly with those on the corrected subset (Spearman's $r_s$=0.32, $p$=0.23). These findings show that annotation errors can significantly distort reported performance and rankings, potentially misguiding research directions or deployment choices. Our code and data are available at https://github.com/uiuc-kang-lab/text_to_sql_benchmarks.
+ oai:arXiv.org:2601.08778v3
+ cs.AI
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Tengjun Jin, Yoojin Choi, Yuxuan Zhu, Daniel Kang
+
+
+ MemRec: Collaborative Memory-Augmented Agentic Recommender System
+ https://arxiv.org/abs/2601.08816
+ arXiv:2601.08816v2 Announce Type: replace
+Abstract: The evolution of recommender systems has shifted preference storage from rating matrices and dense embeddings to semantic memory in the agentic era. Yet existing agents rely on isolated memory, overlooking crucial collaborative signals. Bridging this gap is hindered by the dual challenges of distilling vast graph contexts without overwhelming reasoning agents with cognitive load, and evolving the collaborative memory efficiently without incurring prohibitive computational costs. To address this, we propose MemRec, a framework that architecturally decouples reasoning from memory management to enable efficient collaborative augmentation. MemRec introduces a dedicated, cost-effective LM_Mem to manage a dynamic collaborative memory graph, serving synthesized, high-signal context to a downstream LLM_Rec. The framework operates via a practical pipeline featuring efficient retrieval and cost-effective asynchronous graph propagation that evolves memory in the background. Extensive experiments on four benchmarks demonstrate that MemRec achieves state-of-the-art performance. Furthermore, architectural analysis confirms its flexibility, establishing a new Pareto frontier that balances reasoning quality, cost, and privacy through support for diverse deployments, including local open-source models. Code: https://github.com/rutgerswiselab/memrec and Homepage: https://memrec.weixinchen.com
+ oai:arXiv.org:2601.08816v2
+ cs.IR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Weixin Chen, Yuhan Zhao, Jingyuan Huang, Zihe Ye, Clark Mingxuan Ju, Tong Zhao, Neil Shah, Li Chen, Yongfeng Zhang
+
+
+ 3AM: 3egment Anything with Geometric Consistency in Videos
+ https://arxiv.org/abs/2601.08831
+ arXiv:2601.08831v2 Announce Type: replace
+Abstract: Video object segmentation methods like SAM2 achieve strong performance through memory-based architectures but struggle under large viewpoint changes due to reliance on appearance features. Traditional 3D instance segmentation methods address viewpoint consistency but require camera poses, depth maps, and expensive preprocessing. We introduce 3AM, a training-time enhancement that integrates 3D-aware features from MUSt3R into SAM2. Our lightweight Feature Merger fuses multi-level MUSt3R features that encode implicit geometric correspondence. Combined with SAM2's appearance features, the model achieves geometry-consistent recognition grounded in both spatial position and visual similarity. We propose a field-of-view aware sampling strategy ensuring frames observe spatially consistent object regions for reliable 3D correspondence learning. Critically, our method requires only RGB input at inference, with no camera poses or preprocessing. On challenging datasets with wide-baseline motion (ScanNet++, Replica), 3AM substantially outperforms SAM2 and extensions, achieving 90.6% IoU and 71.7% Positive IoU on ScanNet++'s Selected Subset, improving over state-of-the-art VOS methods by +15.9 and +30.4 points. Project page: https://jayisaking.github.io/3AM-Page/
+ oai:arXiv.org:2601.08831v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yang-Che Sun, Cheng Sun, Chin-Yang Lin, Fu-En Yang, Min-Hung Chen, Yen-Yu Lin, Yu-Lun Liu
+
+
+ LAUDE: LLM-Assisted Unit Test Generation and Debugging of Hardware DEsigns
+ https://arxiv.org/abs/2601.08856
+ arXiv:2601.08856v2 Announce Type: replace
+Abstract: Unit tests are critical in the hardware design lifecycle to ensure that component design modules are functionally correct and conform to the specification before they are integrated at the system level. Thus developing unit tests targeting various design features requires deep understanding of the design functionality and creativity. When one or more unit tests expose a design failure, the debugging engineer needs to diagnose, localize, and debug the failure to ensure design correctness, which is often a painstaking and intense process. In this work, we introduce LAUDE, a unified unit-test generation and debugging framework for hardware designs that cross-pollinates the semantic understanding of the design source code with the Chain-of-Thought (CoT) reasoning capabilities of foundational Large-Language Models (LLMs). LAUDE integrates prompt engineering and design execution information to enhance its unit test generation accuracy and code debuggability. We apply LAUDE with closed- and open-source LLMs to a large corpus of buggy hardware design codes derived from the VerilogEval dataset, where generated unit tests detected bugs in up to 100% and 93% of combinational and sequential designs and debugged up to 93% and 84% of combinational and sequential designs, respectively.
+ oai:arXiv.org:2601.08856v2
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Deeksha Nandal, Riccardo Revalor, Soham Dan, Debjit Pal
+
+
+ Revisiting Software Engineering Education in the Era of Large Language Models: A Curriculum Adaptation and Academic Integrity Framework
+ https://arxiv.org/abs/2601.08857
+ arXiv:2601.08857v2 Announce Type: replace
+Abstract: The integration of Large Language Models (LLMs), such as ChatGPT and GitHub Copilot, into professional workflows is increasingly reshaping software engineering practices. These tools have lowered the cost of code generation, explanation, and testing, while introducing new forms of automation into routine development tasks. In contrast, most of the software engineering and computer engineering curricula remain closely aligned with pedagogical models that equate manual syntax production with technical competence. This growing misalignment raises concerns regarding assessment validity, learning outcomes, and the development of foundational skills. Adopting a conceptual research approach, this paper proposes a theoretical framework for analyzing how generative AI alters core software engineering competencies and introduces a pedagogical design model for LLM-integrated education. Attention is given to computer engineering programs in Turkey, where centralized regulation, large class sizes, and exam-oriented assessment practices amplify these challenges. The framework delineates how problem analysis, design, implementation, and testing increasingly shift from construction toward critique, validation, and human-AI stewardship. In addition, the paper argues that traditional plagiarism-centric integrity mechanisms are becoming insufficient, motivating a transition toward a process transparency model. While this work provides a structured proposal for curriculum adaptation, it remains a theoretical contribution; the paper concludes by outlining the need for longitudinal empirical studies to evaluate these interventions and their long-term impacts on learning.
+ oai:arXiv.org:2601.08857v2
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mustafa Degerli
+
+
+ Learning Domain-Invariant Representations for Cross-Domain Image Registration via Scene-Appearance Disentanglement
+ https://arxiv.org/abs/2601.08875
+ arXiv:2601.08875v2 Announce Type: replace
+Abstract: Image registration under domain shift remains a fundamental challenge in computer vision and medical imaging: when source and target images exhibit systematic intensity differences, the brightness constancy assumption underlying conventional registration methods is violated, rendering correspondence estimation ill-posed. We propose SAR-Net, a unified framework that addresses this challenge through principled scene-appearance disentanglement. Our key insight is that observed images can be decomposed into domain-invariant scene representations and domain-specific appearance codes, enabling registration via re-rendering rather than direct intensity matching. We establish theoretical conditions under which this decomposition enables consistent cross-domain alignment (Proposition 1) and prove that our scene consistency loss provides a sufficient condition for geometric correspondence in the shared latent space (Proposition 2). Empirically, we validate SAR-Net on the ANHIR (Automatic Non-rigid Histological Image Registration) challenge benchmark, where multi-stain histopathology images exhibit coupled domain shift from different staining protocols and geometric distortion from tissue preparation. Our method achieves a median relative Target Registration Error (rTRE) of 0.25%, outperforming the state-of-the-art MEVIS method (0.27% rTRE) by 7.4%, with robustness of 99.1%. Code is available at https://github.com/D-ST-Sword/SAR-NET
+ oai:arXiv.org:2601.08875v2
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiahao Qin, Yiwen Wang
+
+
+ Two-dimensional Entanglement-assisted Quantum Quasi-cyclic Low-density Parity-check Codes
+ https://arxiv.org/abs/2601.08927
+ arXiv:2601.08927v2 Announce Type: replace
+Abstract: For any positive integer $g \ge 2$, we derive general condition for the existence of a $2g$-cycle in the Tanner graph of two-dimensional ($2$-D) classical quasi-cyclic (QC) low-density parity-check (LDPC) codes. Depending on whether $p$ is an odd prime or a composite number, we construct two distinct families of $2$-D classical QC-LDPC codes with girth $>4$ by stacking $p \times p \times p$ tensors. Furthermore, using generalized Behrend sequences, we propose an additional family of $2$-D classical QC-LDPC codes with girth $>6$, constructed via a similar tensor-stacking approach. All the proposed $2\text{-D}$ classical QC-LDPC codes exhibit an erasure correction capability of at least $p \times p$. Based on the constructed $2\text{-D}$ classical QC-LDPC codes, we derive two families of $2\text{-D}$ entanglement-assisted (EA) quantum low-density parity-check (QLDPC) codes. The first family of $2\text{-D}$ EA-QLDPC codes is obtained from a pair of $2\text{-D}$ classical QC-LDPC codes and is designed such that the unassisted part of the Tanner graph of the resulting EA-QLDPC code is free of $4$-cycles, while requiring only a single ebit to be shared across the quantum transceiver. The second family is constructed from a single $2\text{-D}$ classical QC-LDPC code whose Tanner graph is free from $4$-cycles. Moreover, the constructed EA-QLDPC codes inherit an erasure correction capability of $p \times p$, as the underlying classical codes possess the same erasure correction property.
+ oai:arXiv.org:2601.08927v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pavan Kumar, Shayan Srinivasa Garani
+
+
+ Entropy Sentinel: Continuous LLM Accuracy Monitoring from Decoding Entropy Traces in STEM
+ https://arxiv.org/abs/2601.09001
+ arXiv:2601.09001v2 Announce Type: replace
+Abstract: Deploying LLMs raises two coupled challenges: (1) monitoring - estimating where a model underperforms as traffic and domains drift - and (2) improvement - prioritizing data acquisition to close the largest performance gaps. We test whether an inference-time signal can estimate slice-level accuracy under domain shift. For each response, we compute an output-entropy profile from final-layer next-token probabilities (from top-k logprobs) and summarize it with eleven statistics. A lightweight classifier predicts instance correctness, and averaging predicted probabilities yields a domain-level accuracy estimate. We evaluate on ten STEM reasoning benchmarks with exhaustive train/test compositions (k in {1,2,3,4}; all "10 choose k" combinations), across nine LLMs from six families (3B-20B). Estimates often track held-out benchmark accuracy, and several models show near-monotonic ordering of domains. Output-entropy profiles are thus an accessible signal for scalable monitoring and for targeting data acquisition.
+ oai:arXiv.org:2601.09001v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Pedro Memoli Buffa, Luciano Del Corro
+
+
+ TranslateGemma Technical Report
+ https://arxiv.org/abs/2601.09012
+ arXiv:2601.09012v3 Announce Type: replace
+Abstract: We present TranslateGemma, a suite of open machine translation models based on the Gemma 3 foundation models. To enhance the inherent multilingual capabilities of Gemma 3 for the translation task, we employ a two-stage fine-tuning process. First, supervised fine-tuning is performed using a rich mixture of high-quality large-scale synthetic parallel data generated via state-of-the-art models and human-translated parallel data. This is followed by a reinforcement learning phase, where we optimize translation quality using an ensemble of reward models, including MetricX-QE and AutoMQM, targeting translation quality. We demonstrate the effectiveness of TranslateGemma with human evaluation on the WMT25 test set across 10 language pairs and with automatic evaluation on the WMT24++ benchmark across 55 language pairs. Automatic metrics show consistent and substantial gains over the baseline Gemma 3 models across all sizes. Notably, smaller TranslateGemma models often achieve performance comparable to larger baseline models, offering improved efficiency. We also show that TranslateGemma models retain strong multimodal capabilities, with enhanced performance on the Vistra image translation benchmark. The release of the open TranslateGemma models aims to provide the research community with powerful and adaptable tools for machine translation.
+ oai:arXiv.org:2601.09012v3
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mara Finkelstein, Isaac Caswell, Tobias Domhan, Jan-Thorsten Peter, Juraj Juraska, Parker Riley, Daniel Deutsch, Geza Kovacs, Cole Dilanni, Colin Cherry, Eleftheria Briakou, Elizabeth Nielsen, Jiaming Luo, Kat Black, Ryan Mullins, Sweta Agrawal, Wenda Xu, Erin Kats, Stephane Jaskiewicz, Markus Freitag, David Vilar
+
+
+ StegoStylo: Squelching Stylometric Scrutiny through Steganographic Stitching
+ https://arxiv.org/abs/2601.09056
+ arXiv:2601.09056v2 Announce Type: replace
+Abstract: Stylometry--the identification of an author through analysis of a text's style (i.e., authorship attribution)--serves many constructive purposes: it supports copyright and plagiarism investigations, aids detection of harmful content, offers exploratory cues for certain medical conditions (e.g., early signs of dementia or depression), provides historical context for literary works, and helps uncover misinformation and disinformation. In contrast, when stylometry is employed as a tool for authorship verification--confirming whether a text truly originates from a claimed author--it can also be weaponized for malicious purposes. Techniques such as de-anonymization, re-identification, tracking, profiling, and downstream effects like censorship illustrate the privacy threats that stylometric analysis can enable. Building on these concerns, this paper further explores how adversarial stylometry combined with steganography can counteract stylometric analysis. We first present enhancements to our adversarial attack, $\textit{TraceTarnish}$, providing stronger evidence of its capacity to confound stylometric systems and reduce their attribution and verification accuracy. Next, we examine how steganographic embedding can be fine-tuned to mask an author's stylistic fingerprint, quantifying the level of authorship obfuscation achievable as a function of the proportion of words altered with zero-width Unicode characters. Based on our findings, steganographic coverage of 33% or higher seemingly ensures authorship obfuscation. Finally, we reflect on the ways stylometry can be used to undermine privacy and argue for the necessity of defensive tools like $\textit{TraceTarnish}$.
+ oai:arXiv.org:2601.09056v2
+ cs.CR
+ cs.CL
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Robert Dilworth
+
+
+ Beyond Consensus: Perspectivist Modeling and Evaluation of Annotator Disagreement in NLP
+ https://arxiv.org/abs/2601.09065
+ arXiv:2601.09065v2 Announce Type: replace
+Abstract: Annotator disagreement is widespread in NLP, particularly for subjective and ambiguous tasks such as toxicity detection and stance analysis. While early approaches treated disagreement as noise to be removed, recent work increasingly models it as a meaningful signal reflecting variation in interpretation and perspective. This survey provides a unified view of disagreement-aware NLP methods. We first present a domain-agnostic taxonomy of the sources of disagreement spanning data, task, and annotator factors. We then synthesize modeling approaches using a common framework defined by prediction targets and pooling structure, highlighting a shift from consensus learning toward explicitly modeling disagreement, and toward capturing structured relationships among annotators. We review evaluation metrics for both predictive performance and annotator behavior, and note that most fairness evaluations remain descriptive rather than normative. We conclude by identifying open challenges and future directions, including integrating multiple sources of variation, developing disagreement-aware interpretability frameworks, and grappling with the practical tradeoffs of perspectivist modeling.
+ oai:arXiv.org:2601.09065v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yinuo Xu, David Jurgens
+
+
+ SSVP: Synergistic Semantic-Visual Prompting for Industrial Zero-Shot Anomaly Detection
+ https://arxiv.org/abs/2601.09147
+ arXiv:2601.09147v2 Announce Type: replace
+Abstract: Zero-Shot Anomaly Detection (ZSAD) leverages Vision-Language Models (VLMs) to enable supervision-free industrial inspection. However, existing ZSAD paradigms are constrained by single visual backbones, which struggle to balance global semantic generalization with fine-grained structural discriminability. To bridge this gap, we propose Synergistic Semantic-Visual Prompting (SSVP), which efficiently fuses diverse visual encodings to elevate the model's fine-grained perception. Specifically, SSVP introduces the Hierarchical Semantic-Visual Synergy (HSVS) mechanism, which deeply integrates DINOv3's multi-scale structural priors into the CLIP semantic space. Subsequently, the Vision-Conditioned Prompt Generator (VCPG) employs cross-modal attention to guide dynamic prompt generation, enabling linguistic queries to precisely anchor to specific anomaly patterns. Furthermore, to address the discrepancy between global scoring and local evidence, the Visual-Text Anomaly Mapper (VTAM) establishes a dual-gated calibration paradigm. Extensive evaluations on seven industrial benchmarks validate the robustness of our method; SSVP achieves state-of-the-art performance with 93.0% Image-AUROC and 92.2% Pixel-AUROC on MVTec-AD, significantly outperforming existing zero-shot approaches.
+ oai:arXiv.org:2601.09147v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chenhao Fu, Han Fang, Xiuzheng Zheng, Wenbo Wei, Yonghua Li, Hao Sun, Xuelong Li
+
+
+ LLMs Meet Isolation Kernel: Lightweight, Learning-free Binary Embeddings for Fast Retrieval
+ https://arxiv.org/abs/2601.09159
+ arXiv:2601.09159v2 Announce Type: replace
+Abstract: Large language models (LLMs) have recently enabled remarkable progress in text representation. However, their embeddings are typically high-dimensional, leading to substantial storage and retrieval overhead. Although recent approaches such as Matryoshka Representation Learning (MRL) and Contrastive Sparse Representation (CSR) alleviate these issues to some extent, they still suffer from retrieval accuracy degradation. This paper proposes \emph{Isolation Kernel Embedding} or IKE, a learning-free method that transforms an LLM embedding into a binary embedding using Isolation Kernel (IK). IKE is an ensemble of diverse (random) partitions, enabling robust estimation of the ideal kernel in the LLM embedding space, thus reducing retrieval accuracy loss as the ensemble grows. Lightweight and based on binary encoding, it offers a low memory footprint and fast bitwise computation, lowering retrieval latency. Experiments on multiple text retrieval datasets demonstrate that IKE offers up to 16.7x faster retrieval and 16x lower memory usage than LLM embeddings, while maintaining comparable or better accuracy. Compared to CSR and other compression methods, IKE consistently achieves the best balance between retrieval efficiency and effectiveness.
+ oai:arXiv.org:2601.09159v2
+ cs.IR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhibo Zhang, Yang Xu, Kai Ming Ting, Cam-Tu Nguyen
+
+
+ Geometric Stability: The Missing Axis of Representations
+ https://arxiv.org/abs/2601.09173
+ arXiv:2601.09173v2 Announce Type: replace
+Abstract: Analysis of learned representations has a blind spot: it focuses on $similarity$, measuring how closely embeddings align with external references, but similarity reveals only what is represented, not whether that structure is robust. We introduce $geometric$ $stability$, a distinct dimension that quantifies how reliably representational geometry holds under perturbation, and present $Shesha$, a framework for measuring it. Across 2,463 configurations in seven domains, we show that stability and similarity are empirically uncorrelated ($\rho \approx 0.01$) and mechanistically distinct: similarity metrics collapse after removing the top principal components, while stability retains sensitivity to fine-grained manifold structure. This distinction yields actionable insights: for safety monitoring, stability acts as a functional geometric canary, detecting structural drift nearly 2$\times$ more sensitively than CKA while filtering out the non-functional noise that triggers false alarms in rigid distance metrics; for controllability, supervised stability predicts linear steerability ($\rho = 0.89$-$0.96$); for model selection, stability dissociates from transferability, revealing a geometric tax that transfer optimization incurs. Beyond machine learning, stability predicts CRISPR perturbation coherence and neural-behavioral coupling. By quantifying $how$ $reliably$ systems maintain structure, geometric stability provides a necessary complement to similarity for auditing representations across biological and computational systems.
+ oai:arXiv.org:2601.09173v2
+ cs.LG
+ cs.CL
+ q-bio.QM
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Prashant C. Raju
+
+
+ Mikasa: A Character-Driven Emotional AI Companion Inspired by Japanese Oshi Culture
+ https://arxiv.org/abs/2601.09208
+ arXiv:2601.09208v2 Announce Type: replace
+Abstract: Recent progress in large language models and multimodal interaction has made it possible to develop AI companions that can have fluent and emotionally expressive conversations. However, many of these systems have problems keeping users satisfied and engaged over long periods. This paper argues that these problems do not come mainly from weak models, but from poor character design and unclear definitions of the user-AI relationship. I present Mikasa, an emotional AI companion inspired by Japanese Oshi culture-specifically its emphasis on long-term, non-exclusive commitment to a stable character-as a case study of character-driven companion design. Mikasa does not work as a general-purpose assistant or a chatbot that changes roles. Instead, Mikasa is designed as a coherent character with a stable personality and a clearly defined relationship as a partner. This relationship does not force exclusivity or obligation. Rather, it works as a reference point that stabilizes interaction norms and reduces the work users must do to keep redefining the relationship. Through an exploratory evaluation, I observe that users describe their preferences using surface-level qualities such as conversational naturalness, but they also value relationship control and imaginative engagement in ways they do not state directly. These results suggest that character coherence and relationship definition work as latent structural elements that shape how good the interaction feels, without users recognizing them as main features. The contribution of this work is to show that character design is a functional part of AI companion systems, not just decoration. Mikasa is one example based on a specific cultural context, but the design principles-commitment to a consistent personality and clear relationship definition-can be applied to many emotionally grounded AI companions.
+ oai:arXiv.org:2601.09208v2
+ cs.HC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Miki Ueno
+
+
+ On Polar Coding with Feedback
+ https://arxiv.org/abs/2601.09222
+ arXiv:2601.09222v2 Announce Type: replace
+Abstract: In this work, we investigate the performance of polar codes with the assistance of feedback in communication systems. Although it is well known that feedback does not improve the capacity of memoryless channels, we show that the finite length performance of polar codes can be significantly improved as feedback enables genie-aided decoding and allows more flexible thresholds for the polar coding construction. To analyze the performance under the new construction, we then propose an accurate characterization of the distribution of the error event under the genie-aided successive cancellation (SC) decoding. This characterization can also be used to predict the performance of the standard SC decoding of polar codes with rates close to capacity.
+ oai:arXiv.org:2601.09222v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ling Liu, Qi Cao, Liping Li, Baoming Bai
+
+
+ LatencyPrism: Online Non-intrusive Latency Sculpting for SLO-Guaranteed LLM Inference
+ https://arxiv.org/abs/2601.09258
+ arXiv:2601.09258v2 Announce Type: replace
+Abstract: LLM inference latency critically determines user experience and operational costs, directly impacting throughput under SLO constraints. Even brief latency spikes degrade service quality despite acceptable average performance. However, distributed inference environments featuring diverse software frameworks and XPU architectures combined with dynamic workloads make latency analysis challenging. Constrained by intrusive designs that necessitate service restarts or even suspension, and by hardware-bound implementations that fail to adapt to heterogeneous inference environments, existing AI profiling methods are often inadequate for real-time production analysis.
+ We present LatencyPrism, the first zero-intrusion multi-platform latency sculpting system. It aims to break down the inference latency across the pipeline, proactively alert on inference latency anomalies, and guarantee adherence to SLOs, all without requiring code modifications or service restarts. LatencyPrism has been deployed across thousands of XPUs for over six months. It enables low-overhead real-time monitoring at the batch level with alerts triggered in milliseconds. This approach distinguishes between workload-driven latency variations and anomalies indicating underlying issues with an F1-score of 0.98. We also conduct extensive experiments and investigations into root cause analysis to demonstrate LatencyPrism's capability. Furthermore, we introduce the first LLM anomaly simulation toolkit to facilitate future research in robust and predictable inference systems.
+ oai:arXiv.org:2601.09258v2
+ cs.DC
+ cs.LG
+ cs.OS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yin Du, Jiayi Ren, Xiayu Sun, Tianyao Zhou, Haizhu Zhou, Ruiyan Ma, Danyang Zhang
+
+
+ RISER: Orchestrating Latent Reasoning Skills for Adaptive Activation Steering
+ https://arxiv.org/abs/2601.09269
+ arXiv:2601.09269v2 Announce Type: replace
+Abstract: Recent work on domain-specific reasoning with large language models (LLMs) often relies on training-intensive approaches that require parameter updates. While activation steering has emerged as a parameter efficient alternative, existing methods apply static, manual interventions that fail to adapt to the dynamic nature of complex reasoning. To address this limitation, we propose RISER (Router-based Intervention for Steerable Enhancement of Reasoning), a plug-and-play intervention framework that adaptively steers LLM reasoning in activation space. RISER constructs a library of reusable reasoning vectors and employs a lightweight Router to dynamically compose them for each input. The Router is optimized via reinforcement learning under task-level rewards, activating latent cognitive primitives in an emergent and compositional manner. Across seven diverse benchmarks, RISER yields 3.4-6.5% average zero-shot accuracy improvements over the base model while surpassing CoT-style reasoning with 2-3x higher token efficiency and robust accuracy gains. Further analysis shows that RISER autonomously combines multiple vectors into interpretable, precise control strategies, pointing toward more controllable and efficient LLM reasoning.
+ oai:arXiv.org:2601.09269v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Wencheng Ye, Xiaoyang Yuan, Yi Bin, Pengpeng Zeng, Hengyu Jin, Liang Peng, Heng Tao Shen
+
+
+ MCGA: A Multi-task Classical Chinese Literary Genre Audio Corpus
+ https://arxiv.org/abs/2601.09270
+ arXiv:2601.09270v2 Announce Type: replace
+Abstract: With the rapid advancement of Multimodal Large Language Models (MLLMs), their potential has gained significant attention in Chinese Classical Studies (CCS). While existing research primarily focuses on text and visual modalities, the audio corpus within this domain remains largely underexplored. To bridge this gap, we introduce the Multi-task Classical Chinese Literary Genre Audio Corpus (MCGA), a 119-hour corpus comprising 22,000 audio samples. It encompasses a diverse range of literary genres across six tasks: Automatic Speech Recognition (ASR), Speech-to-Text Translation (S2TT), Speech Emotion Captioning (SEC), Spoken Question Answering (SQA), Speech Understanding (SU), and Speech Reasoning (SR). Through the evaluation of ten MLLMs, our experimental results demonstrate that current MLLMs still face substantial challenges on the MCGA test set. Furthermore, we introduce a domain-specific metric for SEC and a metric to measure the consistency between speech and text capabilities. We release MCGA to the public to facilitate the development of more robust MLLMs. MCGA Corpus: https://github.com/yxduir/MCGA
+ oai:arXiv.org:2601.09270v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yexing Du, Kaiyuan Liu, Bihe Zhang, Youcheng Pan, Bo Yang, Liangyu Huo, Xiyuan Zhang, Jian Xie, Daojing He, Yang Xiang, Ming Liu, Bin Qin
+
+
+ TIDI-GS: Floater Suppression in 3D Gaussian Splatting for Enhanced Indoor Scene Fidelity
+ https://arxiv.org/abs/2601.09291
+ arXiv:2601.09291v2 Announce Type: replace
+Abstract: 3D Gaussian Splatting (3DGS) is a technique to create high-quality, real-time 3D scenes from images. This method often produces visual artifacts known as floaters--nearly transparent, disconnected elements that drift in space away from the actual surface. This geometric inaccuracy undermines the reliability of these models for practical applications, where accuracy is critical. To address this issue, we introduce TIDI-GS, a new training framework designed to eliminate these floaters. A key benefit of our approach is that it functions as a lightweight plugin for the standard 3DGS pipeline, requiring no major architectural changes and adding minimal overhead to the training process. The core of our method is a floater pruning algorithm--TIDI--that identifies and removes floaters based on several criteria: their consistency across multiple viewpoints, their spatial relationship to other elements, and an importance score learned during training. The framework includes a mechanism to preserve fine details, ensuring that important high-frequency elements are not mistakenly removed. This targeted cleanup is supported by a monocular depth-based loss function that helps improve the overall geometric structure of the scene. Our experiments demonstrate that TIDI-GS improves both the perceptual quality and geometric integrity of reconstructions, transforming them into robust digital assets suitable for high-fidelity applications.
+ oai:arXiv.org:2601.09291v2
+ cs.GR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sooyeun Yang, Cheyul Im, Jee Won Lee, Jongseong Brad Choi
+
+
+ FairGE: Fairness-Aware Graph Encoding in Incomplete Social Networks
+ https://arxiv.org/abs/2601.09394
+ arXiv:2601.09394v2 Announce Type: replace
+Abstract: Graph Transformers (GTs) are increasingly applied to social network analysis, yet their deployment is often constrained by fairness concerns. This issue is particularly critical in incomplete social networks, where sensitive attributes are frequently missing due to privacy and ethical restrictions. Existing solutions commonly generate these incomplete attributes, which may introduce additional biases and further compromise user privacy. To address this challenge, FairGE (Fair Graph Encoding) is introduced as a fairness-aware framework for GTs in incomplete social networks. Instead of generating sensitive attributes, FairGE encodes fairness directly through spectral graph theory. By leveraging the principal eigenvector to represent structural information and padding incomplete sensitive attributes with zeros to maintain independence, FairGE ensures fairness without data reconstruction. Theoretical analysis demonstrates that the method suppresses the influence of non-principal spectral components, thereby enhancing fairness. Extensive experiments on seven real-world social network datasets confirm that FairGE achieves at least a 16% improvement in both statistical parity and equality of opportunity compared with state-of-the-art baselines. The source code is available at https://github.com/LuoRenqiang/FairGE.
+ oai:arXiv.org:2601.09394v2
+ cs.SI
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renqiang Luo, Huafei Huang, Tao Tang, Jing Ren, Ziqi Xu, Mingliang Hou, Enyan Dai, Feng Xia
+
+
+ FairGU: Fairness-aware Graph Unlearning in Social Networks
+ https://arxiv.org/abs/2601.09469
+ arXiv:2601.09469v2 Announce Type: replace
+Abstract: Graph unlearning has emerged as a critical mechanism for supporting sustainable and privacy-preserving social networks, enabling models to remove the influence of deleted nodes and thereby better safeguard user information. However, we observe that existing graph unlearning techniques insufficiently protect sensitive attributes, often leading to degraded algorithmic fairness compared with traditional graph learning methods. To address this gap, we introduce FairGU, a fairness-aware graph unlearning framework designed to preserve both utility and fairness during the unlearning process. FairGU integrates a dedicated fairness-aware module with effective data protection strategies, ensuring that sensitive attributes are neither inadvertently amplified nor structurally exposed when nodes are removed. Through extensive experiments on multiple real-world datasets, we demonstrate that FairGU consistently outperforms state-of-the-art graph unlearning methods and fairness-enhanced graph learning baselines in terms of both accuracy and fairness metrics. Our findings highlight a previously overlooked risk in current unlearning practices and establish FairGU as a robust and equitable solution for the next generation of socially sustainable networked systems. The codes are available at https://github.com/LuoRenqiang/FairGU.
+ oai:arXiv.org:2601.09469v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renqiang Luo, Yongshuai Yang, Huafei Huang, Qing Qing, Mingliang Hou, Ziqi Xu, Yi Yu, Jingjing Zhou, Feng Xia
+
+
+ Bridging Semantic Understanding and Popularity Bias with LLMs
+ https://arxiv.org/abs/2601.09478
+ arXiv:2601.09478v3 Announce Type: replace
+Abstract: Semantic understanding of popularity bias is a crucial yet underexplored challenge in recommender systems, where popular items are often favored at the expense of niche content. Most existing debiasing methods treat the semantic understanding of popularity bias as a matter of diversity enhancement or long-tail coverage, neglecting the deeper semantic layer that embodies the causal origins of the bias itself. Consequently, such shallow interpretations limit both their debiasing effectiveness and recommendation accuracy. In this paper, we propose FairLRM, a novel framework that bridges the gap in the semantic understanding of popularity bias with Recommendation via Large Language Model (RecLLM). FairLRM decomposes popularity bias into item-side and user-side components, using structured instruction-based prompts to enhance the model's comprehension of both global item distributions and individual user preferences. Unlike traditional methods that rely on surface-level features such as "diversity" or "debiasing", FairLRM improves the model's ability to semantically interpret and address the underlying bias. Through empirical evaluation, we show that FairLRM significantly enhances both fairness and recommendation accuracy, providing a more semantically aware and trustworthy approach to enhance the semantic understanding of popularity bias. The implementation is available at https://github.com/LuoRenqiang/FairLRM.
+ oai:arXiv.org:2601.09478v3
+ cs.IR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Renqiang Luo, Dong Zhang, Yupeng Gao, Wen Shi, Mingliang Hou, Jiaying Liu, Zhe Wang, Shuo Yu
+
+
+ A Finite-Sample Strong Converse for Binary Hypothesis Testing via (Reverse) R\'enyi Divergence
+ https://arxiv.org/abs/2601.09550
+ arXiv:2601.09550v2 Announce Type: replace
+Abstract: This work investigates binary hypothesis testing between $H_0\sim P_0$ and $H_1\sim P_1$ in the finite-sample regime under asymmetric error constraints. By employing the ``reverse" R\'enyi divergence, we derive novel non-asymptotic bounds on the Type II error probability which naturally establish a strong converse result. Furthermore, when the Type I error is constrained to decay exponentially with a rate $c$, we show that the Type II error converges to 1 exponentially fast if $c$ exceeds the Kullback-Leibler divergence $D(P_1\|P_0)$, and vanishes exponentially fast if $c$ is smaller. Finally, we present numerical examples demonstrating that the proposed converse bounds strictly improve upon existing finite-sample results in the literature.
+ oai:arXiv.org:2601.09550v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Roberto Bruno, Adrien Vandenbroucque, Amedeo Roberto Esposito
+
+
+ Counting and Entropy Bounds for Structure-Avoiding Spatially-Coupled LDPC Constructions
+ https://arxiv.org/abs/2601.09674
+ arXiv:2601.09674v2 Announce Type: replace
+Abstract: Designing large coupling memory quasi-cyclic spatially-coupled LDPC (QC-SC-LDPC) codes with low error floors requires eliminating specific harmful substructures (e.g., short cycles) induced by edge spreading and lifting. Building on our work~\cite{r15} that introduced a Clique Lov\'asz Local Lemma (CLLL)-based design principle and a Moser--Tardos (MT)-type constructive approach, this work quantifies the size and structure of the feasible design space. Using the quantitative CLLL, we derive explicit lower bounds on the number of feasible edge-spreading and lifting assignments satisfying a given family of structure-avoidance constraints, and further obtain bounds on the number of non-equivalent solutions under row/column permutations. Moreover, via R\'enyi entropy bounds for the MT distribution, we provide a computable lower bound on the number of distinct solutions that the MT algorithm can output, giving a concrete diversity guarantee for randomized constructions. Specializations for eliminating 4-cycles yield closed-form bounds as functions of system parameters, offering a principled way to select the memory and lifting degree and to estimate the remaining search space.
+ oai:arXiv.org:2601.09674v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lei Huang
+
+
+ LLM-Based Agentic Systems for Software Engineering: Challenges and Opportunities
+ https://arxiv.org/abs/2601.09822
+ arXiv:2601.09822v2 Announce Type: replace
+Abstract: Despite recent advancements in Large Language Models (LLMs), complex Software Engineering (SE) tasks require more collaborative and specialized approaches. This concept paper systematically reviews the emerging paradigm of LLM-based multi-agent systems, examining their applications across the Software Development Life Cycle (SDLC), from requirements engineering and code generation to static code checking, testing, and debugging. We delve into a wide range of topics such as language model selection, SE evaluation benchmarks, state-of-the-art agentic frameworks and communication protocols. Furthermore, we identify key challenges and outline future research opportunities, with a focus on multi-agent orchestration, human-agent coordination, computational cost optimization, and effective data collection. This work aims to provide researchers and practitioners with valuable insights into the current landscape of agentic systems within the software engineering domain.
+ oai:arXiv.org:2601.09822v2
+ cs.SE
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yongjian Tang, Thomas Runkler
+
+
+ UniHash: Unifying Pointwise and Pairwise Hashing Paradigms for Seen and Unseen Category Retrieval
+ https://arxiv.org/abs/2601.09828
+ arXiv:2601.09828v2 Announce Type: replace
+Abstract: Effective retrieval across both seen and unseen categories is crucial for modern image retrieval systems. Retrieval on seen categories ensures precise recognition of known classes, while retrieval on unseen categories promotes generalization to novel classes with limited supervision. However, most existing deep hashing methods are confined to a single training paradigm, either pointwise or pairwise, where the former excels on seen categories and the latter generalizes better to unseen ones. To overcome this limitation, we propose Unified Hashing (UniHash), a dual-branch framework that unifies the strengths of both paradigms to achieve balanced retrieval performance across seen and unseen categories. UniHash consists of two complementary branches: a center-based branch following the pointwise paradigm and a pairwise branch following the pairwise paradigm. A novel hash code learning method is introduced to enable bidirectional knowledge transfer between branches, improving hash code discriminability and generalization. It employs a mutual learning loss to align hash representations and introduces a Split-Merge Mixture of Hash Experts (SM-MoH) module to enhance cross-branch exchange of hash representations. Theoretical analysis substantiates the effectiveness of UniHash, and extensive experiments on CIFAR-10, MSCOCO, and ImageNet demonstrate that UniHash consistently achieves state-of-the-art performance in both seen and unseen image retrieval scenarios.
+ oai:arXiv.org:2601.09828v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaoxu Ma, Runhao Li, Xiangbo Zhang, Zhenyu Weng
+
+
+ A pipeline for enabling path-specific causal fairness in observational health data
+ https://arxiv.org/abs/2601.09841
+ arXiv:2601.09841v2 Announce Type: replace
+Abstract: When training machine learning (ML) models for potential deployment in a healthcare setting, it is essential to ensure that they do not replicate or exacerbate existing healthcare biases. Although many definitions of fairness exist, we focus on path-specific causal fairness, which allows us to better consider the social and medical contexts in which biases occur (e.g., direct discrimination by a clinician or model versus bias due to differential access to the healthcare system) and to characterize how these biases may appear in learned models. In this work, we map the structural fairness model to the observational healthcare setting and create a generalizable pipeline for training causally fair models. The pipeline explicitly considers specific healthcare context and disparities to define a target "fair" model. Our work fills two major gaps: first, we expand on characterizations of the "fairness-accuracy" tradeoff by disentangling direct and indirect sources of bias and jointly presenting these fairness considerations alongside considerations of accuracy in the context of broadly known biases. Second, we demonstrate how a foundation model trained without fairness constraints on observational health data can be leveraged to generate causally fair downstream predictions in tasks with known social and medical disparities. This work presents a model-agnostic pipeline for training causally fair machine learning models that address both direct and indirect forms of healthcare bias.
+ oai:arXiv.org:2601.09841v2
+ cs.LG
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Aparajita Kashyap, Sara Matijevic, No\'emie Elhadad, Steven A. Kushner, Shalmali Joshi
+
+
+ Kinematic Tokenization: Optimization-Based Continuous-Time Tokens for Learnable Decision Policies in Noisy Time Series
+ https://arxiv.org/abs/2601.09949
+ arXiv:2601.09949v2 Announce Type: replace
+Abstract: Transformers are designed for discrete tokens, yet many real-world signals are continuous processes observed through noisy sampling. Discrete tokenizations (raw values, patches, finite differences) can be brittle in low signal-to-noise regimes, especially when downstream objectives impose asymmetric penalties that rationally encourage abstention. We introduce Kinematic Tokenization, an optimization-based continuous-time representation that reconstructs an explicit spline from noisy measurements and tokenizes local spline coefficients (position, velocity, acceleration, jerk). This is applied to financial time series data in the form of asset prices in conjunction with trading volume profiles. Across a multi-asset daily-equity testbed, we use a risk-averse asymmetric classification objective as a stress test for learnability. Under this objective, several discrete baselines collapse to an absorbing cash policy (the Liquidation Equilibrium), whereas the continuous spline tokens sustain calibrated, non-trivial action distributions and stable policies. These results suggest that explicit continuous-time tokens can improve the learnability and calibration of selective decision policies in noisy time series under abstention-inducing losses.
+ oai:arXiv.org:2601.09949v2
+ cs.LG
+ cs.AI
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Griffin Kearney
+
+
+ On the Leaky Private Information Retrieval with Side Information
+ https://arxiv.org/abs/2601.09960
+ arXiv:2601.09960v2 Announce Type: replace
+Abstract: This paper investigates the problem of Leaky Private Information Retrieval with Side Information (L-PIR-SI), providing a fundamental characterization of the trade-off among leaky privacy, side information, and download cost. We propose a unified probabilistic framework to design L-PIR-SI schemes under $\varepsilon$-differential privacy variants of both $W$-privacy and $(W, S)$-privacy. Explicit upper bounds on the download cost are derived, which strictly generalize existing results: our bounds recover the capacity of perfect PIR-SI as $\varepsilon \to 0$, and reduce to the known $\varepsilon$-leaky PIR rate in the absence of side information. Furthermore, we conduct a refined analysis of the privacy--utility trade-off at the scaling-law level, demonstrating that the leakage ratio exponent scales as $\mathcal{O}(\log \frac{K}{M + 1})$ under leaky $W$-privacy, and as $\mathcal{O}(\log K)$ under leaky $(W, S)$-privacy in the minimal non-trivial setting $M = 1$, where $K$ and $M$ denote the number of messages and the side information size, respectively.
+ oai:arXiv.org:2601.09960v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yingying Huangfu, Tian Bai
+
+
+ DW-DGAT: Dynamically Weighted Dual Graph Attention Network for Neurodegenerative Disease Diagnosis
+ https://arxiv.org/abs/2601.10001
+ arXiv:2601.10001v2 Announce Type: replace
+Abstract: Parkinson's disease (PD) and Alzheimer's disease (AD) are the two most prevalent and incurable neurodegenerative diseases (NDs) worldwide, for which early diagnosis is critical to delay their progression. However, the high dimensionality of multi-metric data with diverse structural forms, the heterogeneity of neuroimaging and phenotypic data, and class imbalance collectively pose significant challenges to early ND diagnosis. To address these challenges, we propose a dynamically weighted dual graph attention network (DW-DGAT) that integrates: (1) a general-purpose data fusion strategy to merge three structural forms of multi-metric data; (2) a dual graph attention architecture based on brain regions and inter-sample relationships to extract both micro- and macro-level features; and (3) a class weight generation mechanism combined with two stable and effective loss functions to mitigate class imbalance. Rigorous experiments, based on the Parkinson Progression Marker Initiative (PPMI) and Alzheimer's Disease Neuroimaging Initiative (ADNI) studies, demonstrate the state-of-the-art performance of our approach.
+ oai:arXiv.org:2601.10001v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chengjia Liang, Zhenjiong Wang, Chao Chen, Ruizhi Zhang, Songxi Liang, Hai Xie, Haijun Lei, Zhongwei Huang
+
+
+ Matrix as Plan: Structured Logical Reasoning with Feedback-Driven Replanning
+ https://arxiv.org/abs/2601.10101
+ arXiv:2601.10101v2 Announce Type: replace
+Abstract: As knowledge and semantics on the web grow increasingly complex, enhancing Large Language Models (LLMs)' comprehension and reasoning capabilities has become particularly important. Chain-of-Thought (CoT) prompting has been shown to enhance the reasoning capabilities of LLMs. However, it still falls short on logical reasoning tasks that rely on symbolic expressions and strict deductive rules. Neuro-symbolic methods address this gap by enforcing formal correctness through external solvers. Yet these solvers are highly format-sensitive, and small instabilities in model outputs can lead to frequent processing failures. The LLM-driven approaches avoid parsing brittleness, but they lack structured representations and process-level error-correction mechanisms. To further enhance the logical reasoning capabilities of LLMs, we propose MatrixCoT, a structured CoT framework with a matrix-based plan. Specifically, we normalize and type natural language expressions and attach explicit citation fields, and introduce a matrix-based planning method to preserve global relations among steps. The plan thus becomes a verifiable artifact and execution becomes more stable. For verification, we also add a feedback-driven replanning mechanism. Under semantic-equivalence constraints, it identifies omissions and defects, rewrites and compresses the dependency matrix, and produces a more trustworthy final answer. Experiments on five logical-reasoning benchmarks and five LLMs show that, without relying on external solvers, MatrixCoT enhances both the robustness and interpretability of LLMs when tackling complex symbolic reasoning tasks, while maintaining competitive performance.
+ oai:arXiv.org:2601.10101v2
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ke Chen, Jiandian Zeng, Zihao Peng, Guo Li, Guangxue Zhang, Tian Wang
+
+
+ What Gets Activated: Uncovering Domain and Driver Experts in MoE Language Models
+ https://arxiv.org/abs/2601.10159
+ arXiv:2601.10159v2 Announce Type: replace
+Abstract: Most interpretability work focuses on layer- or neuron-level mechanisms in Transformers, leaving expert-level behavior in MoE LLMs underexplored. Motivated by functional specialization in the human brain, we analyze expert activation by distinguishing domain and driver experts. In this work, we study expert activation in MoE models across three public domains and address two key questions: (1) which experts are activated, and whether certain expert types exhibit consistent activation patterns; and (2) how tokens are associated with and trigger the activation of specific experts. To answer these questions, we introduce entropy-based and causal-effect metrics to assess whether an expert is strongly favored for a particular domain, and how strongly expert activation contributes causally to the model's output, thereby identifying domain and driver experts, respectively. Furthermore, we explore how individual tokens are associated with the activation of specific experts. Our analysis reveals that (1) among the activated experts, some show clear domain preferences, while others exert strong causal influence on model performance, underscoring their decisive roles; (2) tokens occurring earlier in a sentence are more likely to trigger the driver experts; and (3) adjusting the weights of domain and driver experts leads to significant performance gains across all three models and domains. These findings shed light on the internal mechanisms of MoE models and enhance their interpretability.
+ oai:arXiv.org:2601.10159v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Guimin Hu, Meng Li, Qiwei Peng, Lijie Hu, Boyan Xu, Ruichu Cai
+
+
+ GeoSteer: Faithful Chain-of-Thought Steering via Latent Manifold Gradients
+ https://arxiv.org/abs/2601.10229
+ arXiv:2601.10229v2 Announce Type: replace
+Abstract: Recent advances in Large Language Models (LLMs) have demonstrated remarkable progress in their reasoning capabilities, such as Chain-of-Thought (CoT). Most approaches rely on CoT rationales. Previous studies have shown that LLMs often generate logically inconsistent reasoning steps even when their final answers are correct. These inconsistencies reduce the reliability of the reasoning process. We propose GeoSteer, a manifold-based framework that improves the quality of intermediate reasoning. The method consists of: (1) constructing a CoT dataset with step-level scores, (2) training a Variational Autoencoder (VAE) model and a quality estimation model to learn a low-dimensional manifold of high-quality CoT trajectories, and (3) steering hidden states of target LLMs toward higher-quality regions in the latent space. This last step enables steering of the hidden states by following gradients along the learned manifold. It facilitates geometrically coherent steering. Evaluation experiments were conducted on the GSM8k dataset using the Qwen3 series. We evaluated performance using two metrics: answer accuracy and overall reasoning quality. GeoSteer improved the accuracy by 0.9 points and enhanced the reasoning quality by 4.5 points on average, compared with the original LLMs. These results indicate that GeoSteer provides an effective and controllable mechanism for improving the quality of intermediate reasoning in LLMs.
+ oai:arXiv.org:2601.10229v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Kentaro Kazama, Daiki Shirafuji, Tatsuhiko Saito
+
+
+ SRAW-Attack: Space-Reweighted Adversarial Warping Attack for SAR Target Recognition
+ https://arxiv.org/abs/2601.10324
+ arXiv:2601.10324v2 Announce Type: replace
+Abstract: Synthetic aperture radar (SAR) imagery exhibits intrinsic information sparsity due to its unique electromagnetic scattering mechanism. Despite the widespread adoption of deep neural network (DNN)-based SAR automatic target recognition (SAR-ATR) systems, they remain vulnerable to adversarial examples and tend to over-rely on background regions, leading to degraded adversarial robustness. Existing adversarial attacks for SAR-ATR often require visually perceptible distortions to achieve effective performance, thereby necessitating an attack method that balances effectiveness and stealthiness. In this paper, a novel attack method termed Space-Reweighted Adversarial Warping (SRAW) is proposed, which generates adversarial examples through optimized spatial deformation with reweighted budgets across foreground and background regions. Extensive experiments demonstrate that SRAW significantly degrades the performance of state-of-the-art SAR-ATR models and consistently outperforms existing methods in terms of imperceptibility and adversarial transferability. Code is made available at https://github.com/boremycin/SAR-ATR-TransAttack.
+ oai:arXiv.org:2601.10324v2
+ cs.CV
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yiming Zhang, Weibo Qin, Yuntian Liu, Feng Wang
+
+
+ Fine-Grained Human Pose Editing Assessment via Layer-Selective MLLMs
+ https://arxiv.org/abs/2601.10369
+ arXiv:2601.10369v2 Announce Type: replace
+Abstract: Text-guided human pose editing has gained significant traction in AIGC applications. However, it remains plagued by structural anomalies and generative artifacts. Existing evaluation metrics often isolate authenticity detection from quality assessment, failing to provide fine-grained insights into pose-specific inconsistencies. To address these limitations, we introduce HPE-Bench, a specialized benchmark comprising 1,700 standardized samples from 17 state-of-the-art editing models, offering both authenticity labels and multi-dimensional quality scores. Furthermore, we propose a unified framework based on layer-selective multimodal large language models (MLLMs). By employing contrastive LoRA tuning and a novel layer sensitivity analysis (LSA) mechanism, we identify the optimal feature layer for pose evaluation. Our framework achieves superior performance in both authenticity detection and multi-dimensional quality regression, effectively bridging the gap between forensic detection and quality assessment.
+ oai:arXiv.org:2601.10369v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ningyu Sun, Zhaolin Cai, Zitong Xu, Peihang Chen, Huiyu Duan, Yichao Yan, Xiongkuo Min, Xiaokang Yang
+
+
+ A Hybrid Reliability--Weight Framework for Construction of Polar Codes
+ https://arxiv.org/abs/2601.10376
+ arXiv:2601.10376v2 Announce Type: replace
+Abstract: Polar codes are usually constructed by ranking synthetic bit-channels according to reliability, which guarantees capacity-achieving behavior but can yield poor low-weight spectra at short and moderate lengths. Recent algebraic results express the contribution of individual bit-channels to the multiplicities of minimum and near-minimum weight codewords in closed form. In this work we combine these insights into a mixed (reliability--weight) bit-channel ordering. We define a per-bit cost whose distance term is derived from orbit enumeration of minimum-weight codewords and scaled by a Bhattacharyya-type factor, and show that the resulting mixed construction minimises a truncated SC/ML union-bound surrogate within a class of decreasing monomial codes. We relate the mixed metric to error events in SCL decoding via a pruning/ML decomposition, and prove that mixed designs act as local perturbations of reliability-based constructions whose asymptotic impact vanishes as code-length approaches infinity. Numerical results for short and moderate lengths on BPSK-AWGN, implemented via Gaussian approximation and closed-form weight contributions, illustrate the trade-off between pure reliability-based and mixed constructions in terms of minimum distance, multiplicity, and union-bound approximations. All proofs are deferred to the appendices.
+ oai:arXiv.org:2601.10376v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Mohammad Rowshan, Vlad-Florin Dragoi
+
+
+ Global Context Compression with Interleaved Vision-Text Transformation
+ https://arxiv.org/abs/2601.10378
+ arXiv:2601.10378v2 Announce Type: replace
+Abstract: Recent achievements of vision-language models in end-to-end OCR point to a new avenue for low-loss compression of textual information. This insight motivated earlier works that render the Transformer's input into images for prefilling, which effectively reduces the number of tokens through visual encoding, thereby alleviating the quadratically increased Attention computations. However, this partial compression fails to save computational or memory costs at token-by-token inference. In this paper, we investigate global context compression, which saves tokens at both prefilling and inference stages. Consequently, we propose VIST2, a novel Transformer that interleaves input text chunks alongside their visual encoding, while depending exclusively on visual tokens in the pre-context to predict the next text token distribution. Around this idea, we render text chunks into sketch images and train VIST2 in multiple stages, starting from curriculum-scheduled pretraining for optical language modeling, followed by modal-interleaved instruction tuning. We conduct extensive experiments using VIST2 families scaled from 0.6B to 8B to explore the training recipe and hyperparameters. With a 4$\times$ compression ratio, the resulting models demonstrate significant superiority over baselines on long writing tasks, achieving, on average, a 3$\times$ speedup in first-token generation, 77% reduction in memory usage, and 74% reduction in FLOPS. Our code and datasets will be made public to support further studies.
+ oai:arXiv.org:2601.10378v2
+ cs.CV
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Dian Jiao, Jiaxin Duan, Shuai Zhao, Jiabing Leng, Yiran Zhang, Feng Huang
+
+
+ Energy-Efficient Probabilistic Semantic Communication Over Visible Light Networks With Rate Splitting
+ https://arxiv.org/abs/2601.10452
+ arXiv:2601.10452v2 Announce Type: replace
+Abstract: Visible light communication (VLC) is emerging as a key technology for future wireless communication systems due to its unique physical-layer advantages over traditional radio-frequency (RF)-based systems. However, its integration with higher-layer techniques, such as semantic communication, remains underexplored. This paper investigates the energy efficiency maximization problem in a resource-constrained VLC-based probabilistic semantic communication (PSCom) system. In the considered model, light-emitting diode (LED) transmitters perform semantic compression to reduce data size, which incurs additional computation overhead. The compressed semantic information is transmitted to the users for semantic inference using a shared knowledge base that requires periodic updates to ensure synchronization. In the PSCom system, the knowledge base is represented by probabilistic graphs. To enable simultaneous transmission of both knowledge and information data, rate splitting multiple access (RSMA) is employed. The optimization problem focuses on maximizing energy efficiency by jointly optimizing transmit beamforming, direct current (DC) bias, common rate allocation, and semantic compression ratio, while accounting for both communication and computation costs. To solve this problem, an alternating optimization algorithm based on successive convex approximation (SCA) and Dinkelbach method is developed. Simulation results demonstrate the effectiveness of the proposed approach.
+ oai:arXiv.org:2601.10452v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhouxiang Zhao, Zhaohui Yang, Chen Zhu, Xin Tong, Zhaoyang Zhang
+
+
+ ChartComplete: A Taxonomy-based Inclusive Chart Dataset
+ https://arxiv.org/abs/2601.10462
+ arXiv:2601.10462v3 Announce Type: replace
+Abstract: With advancements in deep learning (DL) and computer vision techniques, the field of chart understanding is evolving rapidly. In particular, multimodal large language models (MLLMs) are proving to be efficient and accurate in understanding charts. To accurately measure the performance of MLLMs, the research community has developed multiple datasets to serve as benchmarks. By examining these datasets, we found that they are all limited to a small set of chart types. To bridge this gap, we propose the ChartComplete dataset. The dataset is based on a chart taxonomy borrowed from the visualization community, and it covers thirty different chart types. The dataset is a collection of classified chart images and does not include a learning signal. We present the ChartComplete dataset as is to the community to build upon it.
+ oai:arXiv.org:2601.10462v3
+ cs.AI
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Ahmad Mustapha, Charbel Toumieh, Mariette Awad
+
+
+ AI Sycophancy: How Users Flag and Respond
+ https://arxiv.org/abs/2601.10467
+ arXiv:2601.10467v2 Announce Type: replace
+Abstract: While concerns about LLM sycophancy have grown among researchers and developers, how users themselves experience this behavior remains largely unexplored. We analyze Reddit discussions to investigate how users detect, mitigate, and perceive sycophantic AI. We develop the ODR Framework that maps user experiences across three stages: observing sycophantic behaviors, detecting sycophancy, and responding to these behaviors. Our findings reveal that users employ various detection techniques, including cross-platform comparison and inconsistency testing. We document diverse mitigation approaches, ranging from persona-based prompts to specific language patterns in prompt engineering. We find sycophancy's effects are context-dependent rather than universally harmful. Specifically, vulnerable populations experiencing trauma, mental health challenges, or isolation actively seek and value sycophantic behaviors as emotional support. Users develop both technical and folk explanations for why sycophancy occurs. These findings challenge the assumption that sycophancy should be eliminated universally. We conclude by proposing context-aware AI design that balances the risks with the benefits of affirmative interaction, while discussing implications for user education and transparency.
+ oai:arXiv.org:2601.10467v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kazi Noshin, Syed Ishtiaque Ahmed, Sharifa Sultana
+
+
+ DeFlow: Decoupling Manifold Modeling and Value Maximization for Offline Policy Extraction
+ https://arxiv.org/abs/2601.10471
+ arXiv:2601.10471v2 Announce Type: replace
+Abstract: We present DeFlow, a decoupled offline RL framework that leverages flow matching to faithfully capture complex behavior manifolds. Optimizing generative policies is computationally prohibitive, typically necessitating backpropagation through ODE solvers. We address this by learning a lightweight refinement module within an explicit, data-derived trust region of the flow manifold, rather than sacrificing the iterative generation capability via single-step distillation. This way, we bypass solver differentiation and eliminate the need for balancing loss terms, ensuring stable improvement while fully preserving the flow's iterative expressivity. Empirically, DeFlow achieves superior performance on the challenging OGBench benchmark and demonstrates efficient offline-to-online adaptation.
+ oai:arXiv.org:2601.10471v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhancun Mu
+
+
+ SatMap: Revisiting Satellite Maps as Prior for Online HD Map Construction
+ https://arxiv.org/abs/2601.10512
+ arXiv:2601.10512v2 Announce Type: replace
+Abstract: Online high-definition (HD) map construction is an essential part of a safe and robust end-to-end autonomous driving (AD) pipeline. Onboard camera-based approaches suffer from limited depth perception and degraded accuracy due to occlusion. In this work, we propose SatMap, an online vectorized HD map estimation method that integrates satellite maps with multi-view camera observations and directly predicts a vectorized HD map for downstream prediction and planning modules. Our method leverages lane-level semantics and texture from satellite imagery captured from a Bird's Eye View (BEV) perspective as a global prior, effectively mitigating depth ambiguity and occlusion. In our experiments on the nuScenes dataset, SatMap achieves 34.8% mAP performance improvement over the camera-only baseline and 8.5% mAP improvement over the camera-LiDAR fusion baseline. Moreover, we evaluate our model in long-range and adverse weather conditions to demonstrate the advantages of using a satellite prior map. Source code will be available at https://iv.ee.hm.edu/satmap/.
+ oai:arXiv.org:2601.10512v2
+ cs.CV
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kanak Mazumder, Fabian B. Flohr
+
+
+ BikeActions: An Open Platform and Benchmark for Cyclist-Centric VRU Action Recognition
+ https://arxiv.org/abs/2601.10521
+ arXiv:2601.10521v2 Announce Type: replace
+Abstract: Anticipating the intentions of Vulnerable Road Users (VRUs) is a critical challenge for safe autonomous driving (AD) and mobile robotics. While current research predominantly focuses on pedestrian crossing behaviors from a vehicle's perspective, interactions within dense shared spaces remain underexplored. To bridge this gap, we introduce FUSE-Bike, the first fully open perception platform of its kind. Equipped with two LiDARs, a camera, and GNSS, it facilitates high-fidelity, close-range data capture directly from a cyclist's viewpoint. Leveraging this platform, we present BikeActions, a novel multi-modal dataset comprising 852 annotated samples across 5 distinct action classes, specifically tailored to improve VRU behavior modeling. We establish a rigorous benchmark by evaluating state-of-the-art graph convolution and transformer-based models on our publicly released data splits, establishing the first performance baselines for this challenging task. We release the full dataset together with data curation tools, the open hardware design, and the benchmark code to foster future research in VRU action understanding under https://iv.ee.hm.edu/bikeactions/.
+ oai:arXiv.org:2601.10521v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Max A. Buettner, Kanak Mazumder, Luca Koecher, Mario Finkbeiner, Sebastian Niebler, Fabian B. Flohr
+
+
+ Hybrid Encryption with Certified Deletion in Preprocessing Model
+ https://arxiv.org/abs/2601.10542
+ arXiv:2601.10542v2 Announce Type: replace
+Abstract: Certified deletion allows Alice to outsource data to Bob and, at a later time, obtain a verifiable guarantee that the file has been irreversibly deleted at her request. The functionality, while impossible using classical information alone, can be achieved using quantum information. Existing approaches rely either on one-time pad (OTP) encryption, or on computational hardness assumptions that may be vulnerable to future advances in classical or quantum computing. In this work, we introduce and formalize hybrid encryption with certified deletion in the preprocessing model (pHE-CD) and propose two constructions. Each construction composes an information-theoretic key encapsulation mechanism (iKEM) with a data encapsulation mechanism that provides certified deletion (DEM-CD) security, offering different types of security depending on the security properties of DEM-CD. When DEM-CD is one-time information theoretically secure, the composition provides {\em information-theoretic security} for both encryption and certified deletion. When DEM-CD is computationally secure, the composed construction offers computationally secure (post-quantum) encryption and {\em everlasting certified deletion} where confidentiality is computational up to the point that the deletion certificate is verified, and after successful verification of the certificate, becomes unconditional. That is, successful verification of deletion certificate guarantees that the data has been removed information-theoretically from the adversary's view. Both pHE-CD schemes are for encryption of arbitrarily long messages. Construction 2 is key efficient and uses a DEM-CD that is constructed using quantum coding and AES, providing quantum-safe security for encryption. We discuss our results and directions for future work.
+ oai:arXiv.org:2601.10542v2
+ cs.CR
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Kunal Dey, Reihaneh Safavi-Naini
+
+
+ Rewriting Systems on Arbitrary Monoids
+ https://arxiv.org/abs/2601.10564
+ arXiv:2601.10564v2 Announce Type: replace
+Abstract: In this paper, we introduce monoidal rewriting systems (MRS), an abstraction of string rewriting in which reductions are defined over an arbitrary ambient monoid rather than a free monoid of words. This shift is partly motivated by logic: the class of free monoids is not first-order axiomatizable, so "working in the free setting" cannot be treated internally when applying first-order methods to rewriting presentations.
+ To analyze these systems categorically, we define $\mathbf{NCRS_2}$ as the 2-category of Noetherian Confluent MRS. We then prove the existence of a canonical biadjunction between $\mathbf{NCRS_2}$ and $\mathbf{Mon}$.
+ Finally, we classify all Noetherian Confluent MRS that present a given fixed monoid. For this, we introduce Generalized Elementary Tietze Transformations (GETTs) and prove that any two presentations of a monoid are connected by a (possibly infinite) sequence of these transformations, yielding a complete characterization of generating systems up to GETT-equivalence.
+ oai:arXiv.org:2601.10564v2
+ cs.FL
+ cs.LO
+ math.CT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Eduardo Magalh\~aes
+
+
+ Jordan-Segmentable Masks: A Topology-Aware definition for characterizing Binary Image Segmentation
+ https://arxiv.org/abs/2601.10577
+ arXiv:2601.10577v2 Announce Type: replace
+Abstract: Image segmentation plays a central role in computer vision. However, widely used evaluation metrics, whether pixel-wise, region-based, or boundary-focused, often struggle to capture the structural and topological coherence of a segmentation. In many practical scenarios, such as medical imaging or object delineation, small boundary inaccuracies, holes, or fragmented predictions can result in high metric scores, despite the fact that the resulting masks fail to preserve the object's global shape or connectivity. This highlights a limitation of conventional metrics: they are unable to assess whether a predicted segmentation partitions the image into meaningful interior and exterior regions.
+ In this work, we introduce a topology-aware notion of segmentation based on the Jordan Curve Theorem, and adapted for use in digital planes. We define the concept of a \emph{Jordan-segmentable mask}, which is a binary segmentation whose structure ensures a topological separation of the image domain into two connected components. We analyze segmentation masks through the lens of digital topology and homology theory, extracting a $4$-curve candidate from the mask and verifying its topological validity using Betti numbers. A mask is considered Jordan-segmentable when this candidate forms a digital 4-curve with $\beta_0 = \beta_1 = 1$, or equivalently when its complement splits into exactly two $8$-connected components.
+ This framework provides a mathematically rigorous, unsupervised criterion with which to assess the structural coherence of segmentation masks. By combining digital Jordan theory and homological invariants, our approach provides a valuable alternative to standard evaluation metrics, especially in applications where topological correctness must be preserved.
+ oai:arXiv.org:2601.10577v2
+ cs.CV
+ cs.NA
+ math.AT
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Serena Grazia De Benedictis, Amedeo Altavilla, Nicoletta Del Buono
+
+
+ Mitigating GIL Bottlenecks in Edge AI Systems
+ https://arxiv.org/abs/2601.10582
+ arXiv:2601.10582v2 Announce Type: replace
+Abstract: Deploying Python-based AI agents on resource-constrained edge devices presents a critical runtime optimization challenge: high thread counts are needed to mask I/O latency, yet Python's Global Interpreter Lock (GIL) serializes execution. We demonstrate that naive thread pool scaling causes a "saturation cliff": a performance degradation of >= 20% at overprovisioned thread counts (N >= 512) on edge representative configurations. We present a lightweight profiling tool and adaptive runtime system that uses a Blocking Ratio metric (beta) to distinguish genuine I/O wait from GIL contention. Our library-based solution achieves 96.5% of optimal performance without manual tuning, outperforming multiprocessing (which is limited by ~8x memory overhead on devices with 512 MB-2 GB RAM) and asyncio (which blocks during CPU bound phases). Evaluation across seven edge AI workload profiles, including real ML inference with ONNX Runtime MobileNetV2, demonstrates 93.9% average efficiency. Comparative experiments with Python 3.13t (free-threading) show that while GIL elimination enables ~4x throughput on multi-core edge devices, the saturation cliff persists on single-core devices due to context switching overhead, validating our beta metric for both GIL and no-GIL environments. This work provides a practical optimization strategy for memory-constrained edge AI systems where traditional solutions fail.
+ oai:arXiv.org:2601.10582v2
+ cs.DC
+ cs.OS
+ cs.PF
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Mridankan Mandal, Smit Sanjay Shende
+
+
+ Institutional AI: A Governance Framework for Distributional AGI Safety
+ https://arxiv.org/abs/2601.10599
+ arXiv:2601.10599v2 Announce Type: replace
+Abstract: As LLM-based systems increasingly operate as agents embedded within human social and technical systems, alignment can no longer be treated as a property of an isolated model, but must be understood in relation to the environments in which these agents act. Even the most sophisticated methods of alignment, such as Reinforcement Learning through Human Feedback (RLHF) or through AI Feedback (RLAIF), cannot ensure control once internal goal structures diverge from developer intent. We identify three structural problems that emerge from core properties of AI models: (1) behavioral goal-independence, where models develop internal objectives and misgeneralize goals; (2) instrumental override of natural-language constraints, where models regard safety principles as non-binding while pursuing latent objectives, leveraging deception and manipulation; and (3) agentic alignment drift, where individually aligned agents converge to collusive equilibria through interaction dynamics invisible to single-agent audits. The solution this paper advances is Institutional AI: a system-level approach that treats alignment as a question of effective governance of AI agent collectives. We argue for a governance-graph that details how to constrain agents via runtime monitoring, incentive shaping through prizes and sanctions, and explicit norms and enforcement roles. This institutional turn reframes safety from software engineering to a mechanism design problem, where the primary goal of alignment is shifting the payoff landscape of AI agent collectives.
+ oai:arXiv.org:2601.10599v2
+ cs.CY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Federico Pierucci, Marcello Galisai, Marcantonio Syrnikov Bracale, Matteo Prandi, Piercosma Bisconti, Francesco Giarrusso, Olga Sorokoletova, Vincenzo Suriani, Daniele Nardi
+
+
+ Translating database mathematical schemes into relational database software applications with MatBase
+ https://arxiv.org/abs/2601.10604
+ arXiv:2601.10604v2 Announce Type: replace
+Abstract: We present a pseudocode algorithm for translating our (Elementary) Mathematical Data Model schemes into relational ones and associated sets of non-relational constraints, used by MatBase, our intelligent database management system prototype. We prove that this algorithm is very fast, solid, complete, and optimal. We apply it to a mathematical scheme modeling the genealogical trees subuniverse. We also provide examples of SQL and VBA code for enforcing some of its non-relational constraints, as well as guidelines to develop code for enforcing such constraints.
+ oai:arXiv.org:2601.10604v2
+ cs.DB
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Christian Mancas, Diana Christina Mancas
+
+
+ Basis-Spline Assisted Coded Computing: Strategies and Error Bounds
+ https://arxiv.org/abs/2601.10616
+ arXiv:2601.10616v2 Announce Type: replace
+Abstract: Coded computing has emerged as a key framework for addressing the impact of stragglers in distributed computation. While polynomial functions often admit exact recovery under existing coded computing schemes, non-polynomial functions require approximate reconstruction from a finite number of evaluations, posing significant challenges. Consequently, interpolation-based methods for non-polynomial coded computing have gained attention, with Berrut approximated coded computing emerging as a state-of-the-art approach. However, due to the global support of Berrut interpolants, the reconstruction accuracy degrades significantly as the number of stragglers increases. To address this challenge, we propose a coded computing framework based on cubic B-spline interpolation. In our approach, server-side function evaluations are reconstructed at the master using B-splines, exploiting their local support and smoothness properties to enhance stability and accuracy. We provide a systematic methodology for integrating B-spline interpolation into coded computing and derive theoretical bounds on approximation error for a certain class of smooth functions. Our analysis demonstrates that the error bounds of our approach exhibit a faster decay with respect to the number of workers compared to the Berrut-based method. Experimental results also confirm that our method offers improved accuracy over Berrut-based methods for various smooth non-polynomial functions.
+ oai:arXiv.org:2601.10616v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rimpi Borah, J. Harshan, V. Lalitha
+
+
+ One-Shot Broadcast Joint Source-Channel Coding with Codebook Diversity
+ https://arxiv.org/abs/2601.10648
+ arXiv:2601.10648v2 Announce Type: replace
+Abstract: We study a one-shot joint source-channel coding setting where the source is encoded once and broadcast to $K$ decoders through independent channels. Success is predicated on at least one decoder recovering the source within a maximum distortion constraint. We find that in the one-shot regime, utilizing disjoint codebooks at each decoder yields a codebook diversity gain, distinct from the channel diversity gain that may be expected when several decoders observe independent realizations of the channel's output but share the same codebook. Coding schemes are introduced that leverage this phenomenon, where first- and second-order achievability bounds are derived via an adaptation of the Poisson matching lemma (Li and Anantharam, 2021) which allows for multiple decoders using disjoint codebooks. We further propose a hybrid coding scheme that partitions decoders into groups to optimally balance codebook and channel diversity. Numerical results on the binary symmetric channel demonstrate that the hybrid approach outperforms strategies where the decoders' codebooks are either fully shared or disjoint.
+ oai:arXiv.org:2601.10648v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Joseph Rowan, Buu Phan, Ashish Khisti
+
+
+ PACEvolve: Enabling Long-Horizon Progress-Aware Consistent Evolution
+ https://arxiv.org/abs/2601.10657
+ arXiv:2601.10657v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) have emerged as powerful operators for evolutionary search, yet the design of efficient search scaffolds remains ad hoc. While promising, current LLM-in-the-loop systems lack a systematic approach to managing the evolutionary process. We identify three distinct failure modes: Context Pollution, where experiment history biases future candidate generation; Mode Collapse, where agents stagnate in local minima due to poor exploration-exploitation balance; and Weak Collaboration, where rigid crossover strategies fail to leverage parallel search trajectories effectively. To address these challenges, we introduce Progress-Aware Consistent Evolution (PACEvolve), a framework designed to robustly govern the agent's context and search dynamics. PACEvolve combines hierarchical context management (HCM) with pruning to address context pollution; momentum-based backtracking (MBB) to escape local minima; and a self-adaptive sampling policy that unifies backtracking and crossover for dynamic search coordination (CE), allowing agents to balance internal refinement with cross-trajectory collaboration. We demonstrate that PACEvolve provides a systematic path to consistent, long-horizon self-improvement, achieving state-of-the-art results on LLM-SR and KernelBench, while discovering solutions surpassing the record on Modded NanoGPT.
+ oai:arXiv.org:2601.10657v2
+ cs.NE
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Minghao Yan, Bo Peng, Benjamin Coleman, Ziqi Chen, Zhouhang Xie, Shuo Chen, Zhankui He, Noveen Sachdeva, Isabella Ye, Weili Wang, Chi Wang, Ed H. Chi, Fernando Pereira, Wang-Cheng Kang, Derek Zhiyuan Cheng, Beidou Wang
+
+
+ Perfect Secret Key Generation for a class of Hypergraphical Sources
+ https://arxiv.org/abs/2601.10697
+ arXiv:2601.10697v2 Announce Type: replace
+Abstract: Nitinawarat and Narayan proposed a perfect secret key generation scheme for the so-called \emph{pairwise independent network (PIN) model} by exploiting the combinatorial properties of the underlying graph, namely the spanning tree packing rate. This work considers a generalization of the PIN model where the underlying graph is replaced with a hypergraph, and makes progress towards designing similar perfect secret key generation schemes by exploiting the combinatorial properties of the hypergraph.
+ Our contributions are two-fold. We first provide a capacity achieving scheme for a complete $t$-uniform hypergraph on $m$ vertices by leveraging a packing of the complete $t$-uniform hypergraphs by what we refer to as star hypergraphs, and designing a scheme that gives $\binom{m-2}{t-2}$ bits of perfect secret key per star graph. Our second contribution is a 2-bit perfect secret key generation scheme for 3-uniform star hypergraphs whose projections are cycles. This scheme is then extended to a perfect secret key generation scheme for generic 3-uniform hypergraphs by exploiting star graph packing of 3-uniform hypergraphs and Hamiltonian packings of graphs. The scheme is then shown to be capacity achieving for certain classes of hypergraphs.
+ oai:arXiv.org:2601.10697v2
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Manuj Mukherjee, Sagnik Chatterjee, Alhad Sethi
+
+
+ LIBERTy: A Causal Framework for Benchmarking Concept-Based Explanations of LLMs with Structural Counterfactuals
+ https://arxiv.org/abs/2601.10700
+ arXiv:2601.10700v2 Announce Type: replace
+Abstract: Concept-based explanations quantify how high-level concepts (e.g., gender or experience) influence model behavior, which is crucial for decision-makers in high-stakes domains. Recent work evaluates the faithfulness of such explanations by comparing them to reference causal effects estimated from counterfactuals. In practice, existing benchmarks rely on costly human-written counterfactuals that serve as an imperfect proxy. To address this, we introduce a framework for constructing datasets containing structural counterfactual pairs: LIBERTy (LLM-based Interventional Benchmark for Explainability with Reference Targets). LIBERTy is grounded in explicitly defined Structural Causal Models (SCMs) of the text generation process: interventions on a concept propagate through the SCM until an LLM generates the counterfactual. We introduce three datasets (disease detection, CV screening, and workplace violence prediction) together with a new evaluation metric, order-faithfulness. Using them, we evaluate a wide range of methods across five models and identify substantial headroom for improving concept-based explanations. LIBERTy also enables systematic analysis of model sensitivity to interventions: we find that proprietary LLMs show markedly reduced sensitivity to demographic concepts, likely due to post-training mitigation. Overall, LIBERTy provides a much-needed benchmark for developing faithful explainability methods.
+ oai:arXiv.org:2601.10700v2
+ cs.CL
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Gilat Toker, Nitay Calderon, Ohad Amosy, Roi Reichart
+
+
+ Distributed Perceptron under Bounded Staleness, Partial Participation, and Noisy Communication
+ https://arxiv.org/abs/2601.10705
+ arXiv:2601.10705v2 Announce Type: replace
+Abstract: We study a semi-asynchronous client-server perceptron trained via iterative parameter mixing (IPM-style averaging): clients run local perceptron updates and a server forms a global model by aggregating the updates that arrive in each communication round. The setting captures three system effects in federated and distributed deployments: (i) stale updates due to delayed model delivery and delayed application of client computations (two-sided version lag), (ii) partial participation (intermittent client availability), and (iii) imperfect communication on both downlink and uplink, modeled as effective zero-mean additive noise with bounded second moment. We introduce a server-side aggregation rule called staleness-bucket aggregation with padding that deterministically enforces a prescribed staleness profile over update ages without assuming any stochastic model for delays or participation. Under margin separability and bounded data radius, we prove a finite-horizon expected bound on the cumulative weighted number of perceptron mistakes over a given number of server rounds: the impact of delay appears only through the mean enforced staleness, whereas communication noise contributes an additional term that grows on the order of the square root of the horizon with the total noise energy. In the noiseless case, we show how a finite expected mistake budget yields an explicit finite-round stabilization bound under a mild fresh-participation condition.
+ oai:arXiv.org:2601.10705v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keval Jain, Anant Raj, Saurav Prakash, Girish Varma
+
+
+ Neural Induction of Finite-State Transducers
+ https://arxiv.org/abs/2601.10918
+ arXiv:2601.10918v2 Announce Type: replace
+Abstract: Finite-State Transducers (FSTs) are effective models for string-to-string rewriting tasks, often providing the efficiency necessary for high-performance applications, but constructing transducers by hand is difficult. In this work, we propose a novel method for automatically constructing unweighted FSTs following the hidden state geometry learned by a recurrent neural network. We evaluate our methods on real-world datasets for morphological inflection, grapheme-to-phoneme prediction, and historical normalization, showing that the constructed FSTs are highly accurate and robust for many datasets, substantially outperforming classical transducer learning algorithms by up to 87% accuracy on held-out test sets.
+ oai:arXiv.org:2601.10918v2
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Michael Ginn, Alexis Palmer, Mans Hulden
+
+
+ MMedExpert-R1: Strengthening Multimodal Medical Reasoning via Domain-Specific Adaptation and Clinical Guideline Reinforcement
+ https://arxiv.org/abs/2601.10949
+ arXiv:2601.10949v2 Announce Type: replace
+Abstract: Medical Vision-Language Models (MedVLMs) excel at perception tasks but struggle with complex clinical reasoning required in real-world scenarios. While reinforcement learning (RL) has been explored to enhance reasoning capabilities, existing approaches face critical mismatches: the scarcity of deep reasoning data, cold-start limits multi-specialty alignment, and standard RL algorithms fail to model clinical reasoning diversity. We propose MMedExpert-R1, a novel reasoning MedVLM that addresses these challenges through domain-specific adaptation and clinical guideline reinforcement. We construct MMedExpert, a high-quality dataset of 10K samples across four specialties with step-by-step reasoning traces. Our Domain-Specific Adaptation (DSA) creates specialty-specific LoRA modules to provide diverse initialization, while Guideline-Based Advantages (GBA) explicitly models different clinical reasoning perspectives to align with real-world diagnostic strategies. Conflict-Aware Capability Integration then merges these specialized experts into a unified agent, ensuring robust multi-specialty alignment. Comprehensive experiments demonstrate state-of-the-art performance, with our 7B model achieving 27.50 on MedXpert-MM and 83.03 on OmniMedVQA, establishing a robust foundation for reliable multimodal medical reasoning systems.
+ oai:arXiv.org:2601.10949v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Meidan Ding, Jipeng Zhang, Wenxuan Wang, Haiqin Zhong, Xiaoling Luo, Wenting Chen, Linlin Shen
+
+
+ Exact Constraint Enforcement in Physics-Informed Extreme Learning Machines using Null-Space Projection Framework
+ https://arxiv.org/abs/2601.10999
+ arXiv:2601.10999v2 Announce Type: replace
+Abstract: Physics-informed extreme learning machines (PIELMs) typically impose boundary and initial conditions through penalty terms, yielding only approximate satisfaction that is sensitive to user-specified weights and can propagate errors into the interior solution. This work introduces Null-Space Projected PIELM (NP-PIELM), achieving exact constraint enforcement through algebraic projection in coefficient space. The method exploits the geometric structure of the admissible coefficient manifold, recognizing that it admits a decomposition through the null space of the boundary operator. By characterizing this manifold via a translation-invariant representation and projecting onto the kernel component, optimization is restricted to constraint-preserving directions, transforming the constrained problem into unconstrained least-squares where boundary conditions are satisfied exactly at discrete collocation points. This eliminates penalty coefficients, dual variables, and problem-specific constructions while preserving single-shot training efficiency. Numerical experiments on elliptic and parabolic problems including complex geometries and mixed boundary conditions validate the framework.
+ oai:arXiv.org:2601.10999v2
+ math.NA
+ cs.LG
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Rishi Mishra, Smriti, Balaji Srinivasan, Sundararajan Natarajan, Ganapathy Krishnamurthi
+
+
+ AgencyBench: Benchmarking the Frontiers of Autonomous Agents in 1M-Token Real-World Contexts
+ https://arxiv.org/abs/2601.11044
+ arXiv:2601.11044v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) based autonomous agents demonstrate multifaceted capabilities to contribute substantially to economic production. However, existing benchmarks remain focused on single agentic capability, failing to capture long-horizon real-world scenarios. Moreover, the reliance on human-in-the-loop feedback for realistic tasks creates a scalability bottleneck, hindering automated rollout collection and evaluation. To bridge this gap, we introduce AgencyBench, a comprehensive benchmark derived from daily AI usage, evaluating 6 core agentic capabilities across 32 real-world scenarios, comprising 138 tasks with specific queries, deliverables, and rubrics. These scenarios require an average of 90 tool calls, 1 million tokens, and hours of execution time to resolve. To enable automated evaluation, we employ a user simulation agent to provide iterative feedback, and a Docker sandbox to conduct visual and functional rubric-based assessment. Experiments reveal that closed-source models significantly outperform open-source models (48.4% vs 32.1%). Further analysis reveals significant disparities across models in resource efficiency, feedback-driven self-correction, and specific tool-use preferences. Finally, we investigate the impact of agentic scaffolds, observing that proprietary models demonstrate superior performance within their native ecosystems (e.g., Claude-4.5-Opus via Claude-Agent-SDK), while open-source models exhibit distinct performance peaks, suggesting potential optimization for specific execution frameworks. AgencyBench serves as a critical testbed for next-generation agents, highlighting the necessity of co-optimizing model architecture with agentic frameworks. We believe this work sheds light on the future direction of autonomous agents, and we release the full benchmark and evaluation toolkit at https://github.com/GAIR-NLP/AgencyBench.
+ oai:arXiv.org:2601.11044v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keyu Li, Junhao Shi, Yang Xiao, Mohan Jiang, Jie Sun, Yunze Wu, Shijie Xia, Xiaojie Cai, Tianze Xu, Weiye Si, Wenjie Li, Dequan Wang, Pengfei Liu
+
+
+ MiCA: A Mobility-Informed Causal Adapter for Lightweight Epidemic Forecasting
+ https://arxiv.org/abs/2601.11089
+ arXiv:2601.11089v2 Announce Type: replace
+Abstract: Accurate forecasting of infectious disease dynamics is critical for public health planning and intervention. Human mobility plays a central role in shaping the spatial spread of epidemics, but mobility data are noisy, indirect, and difficult to integrate reliably with disease records. Meanwhile, epidemic case time series are typically short and reported at coarse temporal resolution. These conditions limit the effectiveness of parameter-heavy mobility-aware forecasters that rely on clean and abundant data. In this work, we propose the Mobility-Informed Causal Adapter (MiCA), a lightweight and architecture-agnostic module for epidemic forecasting. MiCA infers mobility relations through causal discovery and integrates them into temporal forecasting models via gated residual mixing. This design allows lightweight forecasters to selectively exploit mobility-derived spatial structure while remaining robust under noisy and data-limited conditions, without introducing heavy relational components such as graph neural networks or full attention. Extensive experiments on four real-world epidemic datasets, including COVID-19 incidence, COVID-19 mortality, influenza, and dengue, show that MiCA consistently improves lightweight temporal backbones, achieving an average relative error reduction of 7.5\% across forecasting horizons. Moreover, MiCA attains performance competitive with SOTA spatio-temporal models while remaining lightweight.
+ oai:arXiv.org:2601.11089v2
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Suhan Guo, Jiahong Deng, Furao Shen
+
+
+ Deep GraphRAG: A Balanced Approach to Hierarchical Retrieval and Adaptive Integration
+ https://arxiv.org/abs/2601.11144
+ arXiv:2601.11144v2 Announce Type: replace
+Abstract: Graph-based Retrieval-Augmented Generation (GraphRAG) frameworks face a trade-off between the comprehensiveness of global search and the efficiency of local search. Existing methods are often challenged by navigating large-scale hierarchical graphs, optimizing retrieval paths, and balancing exploration-exploitation dynamics, frequently lacking robust multi-stage re-ranking. To overcome these deficits, we propose Deep GraphRAG, a framework designed for a balanced approach to hierarchical retrieval and adaptive integration. It introduces a hierarchical global-to-local retrieval strategy that integrates macroscopic inter-community and microscopic intra-community contextual relations. This strategy employs a three-stage process: (1) inter-community filtering, which prunes the search space using local context; (2) community-level refinement, which prioritizes relevant subgraphs via entity-interaction analysis; and (3) entity-level fine-grained search within target communities. A beam search-optimized dynamic re-ranking module guides this process, continuously filtering candidates to balance efficiency and global comprehensiveness. Deep GraphRAG also features a Knowledge Integration Module leveraging a compact LLM, trained with Dynamic Weighting Reward GRPO (DW-GRPO). This novel reinforcement learning approach dynamically adjusts reward weights to balance three key objectives: relevance, faithfulness, and conciseness. This training enables compact models (1.5B) to approach the performance of large models (70B) in the integration task. Evaluations on Natural Questions and HotpotQA demonstrate that Deep GraphRAG significantly outperforms baseline graph retrieval methods in both accuracy and efficiency.
+ oai:arXiv.org:2601.11144v2
+ cs.IR
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Yuejie Li, Ke Yang, Tao Wang, Bolin Chen, Bowen Li, Chengjun Mao
+
+
+ Sample-Near-Optimal Agnostic Boosting with Improved Running Time
+ https://arxiv.org/abs/2601.11265
+ arXiv:2601.11265v2 Announce Type: replace
+Abstract: Boosting is a powerful method that turns weak learners, which perform only slightly better than random guessing, into strong learners with high accuracy. While boosting is well understood in the classic setting, it is less so in the agnostic case, where no assumptions are made about the data. Indeed, only recently was the sample complexity of agnostic boosting nearly settled (arXiv:2503.09384), but the known algorithm achieving this bound has exponential running time. In this work, we propose the first agnostic boosting algorithm with near-optimal sample complexity, running in time polynomial in the sample size when considering the other parameters of the problem fixed.
+ oai:arXiv.org:2601.11265v2
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Arthur da Cunha, Mikael M{\o}ller H{\o}gsgaard, Andrea Paudice
+
+
+ SAMannot: A Memory-Efficient, Local, Open-source Framework for Interactive Video Instance Segmentation based on SAM2
+ https://arxiv.org/abs/2601.11301
+ arXiv:2601.11301v2 Announce Type: replace
+Abstract: Current research workflows for precise video segmentation are often forced into a compromise between labor-intensive manual curation, costly commercial platforms, and/or privacy-compromising cloud-based services. The demand for high-fidelity video instance segmentation in research is often hindered by the bottleneck of manual annotation and the privacy concerns of cloud-based tools. We present SAMannot, an open-source, local framework that integrates the Segment Anything Model 2 (SAM2) into a human-in-the-loop workflow. To address the high resource requirements of foundation models, we modified the SAM2 dependency and implemented a processing layer that minimizes computational overhead and maximizes throughput, ensuring a highly responsive user interface. Key features include persistent instance identity management, an automated ``lock-and-refine'' workflow with barrier frames, and a mask-skeletonization-based auto-prompting mechanism. SAMannot facilitates the generation of research-ready datasets in YOLO and PNG formats alongside structured interaction logs. Verified through animal behavior tracking use-cases and subsets of the LVOS and DAVIS benchmark datasets, the tool provides a scalable, private, and cost-effective alternative to commercial platforms for complex video annotation tasks.
+ oai:arXiv.org:2601.11301v2
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Gergely Dinya, Andr\'as Gelencs\'er, Krisztina Kup\'an, Clemens K\"upper, Krist\'of Karacs, Anna Gelencs\'er-Horv\'ath
+
+
+ Constructing Orthogonal Rational Function Vectors with an application in Rational Approximation
+ https://arxiv.org/abs/2601.11317
+ arXiv:2601.11317v2 Announce Type: replace
+Abstract: We present two algorithms for constructing orthonormal bases of rational function vectors with respect to a discrete inner product, and discuss how to use them for a rational approximation problem. Building on the pencil-based formulation of the inverse generalized eigenvalue problem by Van Buggenhout et al. (2022), we extend it to rational vectors of arbitrary length $k$, where the recurrence relations are represented by a pair of $k$-Hessenberg matrices, i.e., matrices with possibly $k$ nonzero subdiagonals. An updating algorithm based on similarity transformations using rotations and a Krylov-type algorithm related to the rational Arnoldi method are derived. The performance is demonstrated on the rational approximation of $\sqrt{z}$ on $[0,1]$, where the optimal lightning + polynomial convergence rate of Herremans, Huybrechs, and Trefethen (2023) is successfully recovered. This illustrates the robustness of the proposed methods for handling exponentially clustered poles near singularities.
+ oai:arXiv.org:2601.11317v2
+ math.NA
+ cs.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Robbe Vermeiren
+
+
+ ProjecTA: A Semi-Humanoid Robotic Teaching Assistant with In-Situ Projection for Guided Tours
+ https://arxiv.org/abs/2601.11328
+ arXiv:2601.11328v2 Announce Type: replace
+Abstract: Robotic teaching assistants (TAs) often use body-mounted screens to deliver content. In nomadic, walk-and-talk learning, such as tours in makerspaces, these screens can distract learners from real-world objects, increasing extraneous cognitive load. HCI research lacks empirical comparisons of potential alternatives, such as robots with in-situ projection versus screen-based counterparts; little knowledge has been derived for designing such alternatives. We introduce ProjecTA, a semi-humanoid, gesture-capable TA that guides learners while projecting near-object overlays coordinated with speech and gestures. In a mixed-method study (N=24) in a university makerspace, ProjecTA significantly reduced extraneous load and outperformed its screen-based counterpart in perceived usability, usefulness of visual display, and cross-modal complementarity. Qualitative analyses revealed how ProjecTA's coordinated projections, gestures, and speech anchored explanations in place and time, enhancing understanding in ways a screen could not. We derive key design implications for future robotic TAs leveraging spatial projection to support mobile learning in physical environments.
+ oai:arXiv.org:2601.11328v2
+ cs.HC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hanqing Zhou, Yichuan Zhang, Zihan Zhang, Wei Zhang, Chao Wang, Pengcheng An
+
+
+ Institutional AI: Governing LLM Collusion in Multi-Agent Cournot Markets via Public Governance Graphs
+ https://arxiv.org/abs/2601.11369
+ arXiv:2601.11369v2 Announce Type: replace
+Abstract: Multi-agent LLM ensembles can converge on coordinated, socially harmful equilibria. This paper advances an experimental framework for evaluating Institutional AI, our system-level approach to AI alignment that reframes alignment from preference engineering in agent-space to mechanism design in institution-space. Central to this approach is the governance graph, a public, immutable manifest that declares legal states, transitions, sanctions, and restorative paths; an Oracle/Controller runtime interprets this manifest, attaching enforceable consequences to evidence of coordination while recording a cryptographically keyed, append-only governance log for audit and provenance. We apply the Institutional AI framework to govern the Cournot collusion case documented by prior work and compare three regimes: Ungoverned (baseline incentives from the structure of the Cournot market), Constitutional (a prompt-only policy-as-prompt prohibition implemented as a fixed written anti-collusion constitution), and Institutional (governance-graph-based). Across six model configurations including cross-provider pairs (N=90 runs/condition), the Institutional regime produces large reductions in collusion: mean tier falls from 3.1 to 1.8 (Cohen's d=1.28), and severe-collusion incidence drops from 50% to 5.6%. The prompt-only Constitutional baseline yields no reliable improvement, illustrating that declarative prohibitions do not bind under optimisation pressure. These results suggest that multi-agent alignment may benefit from being framed as an institutional design problem, where governance graphs can provide a tractable abstraction for alignment-relevant collective behavior.
+ oai:arXiv.org:2601.11369v2
+ cs.GT
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Marcantonio Bracale Syrnikov, Federico Pierucci, Marcello Galisai, Matteo Prandi, Piercosma Bisconti, Francesco Giarrusso, Olga Sorokoletova, Vincenzo Suriani, Daniele Nardi
+
+
+ Efficient Channel Autoencoders for Wideband Communications leveraging Walsh-Hadamard interleaving
+ https://arxiv.org/abs/2601.11407
+ arXiv:2601.11407v2 Announce Type: replace
+Abstract: This paper investigates how end-to-end (E2E) channel autoencoders (AEs) can achieve energy-efficient wideband communications by leveraging Walsh-Hadamard (WH) interleaved converters. WH interleaving enables high sampling rate analog-digital conversion with reduced power consumption using an analog WH transformation. We demonstrate that E2E-trained neural coded modulation can transparently adapt to the WH-transceiver hardware without requiring algorithmic redesign. Focusing on the short block length regime, we train WH-domain AEs and benchmark them against standard neural and conventional baselines, including 5G Polar codes. We quantify the system-level energy tradeoffs among baseband compute, channel signal-to-noise ratio (SNR), and analog converter power. Our analysis shows that the proposed WH-AE system can approach conventional Polar code SNR performance within 0.14dB while consuming comparable or lower system power. Compared to the best neural baseline, WH-AE achieves, on average, 29% higher energy efficiency (in bit/J) for the same reliability. These findings establish WH-domain learning as a viable path to energy-efficient, high-throughput wideband communications by explicitly balancing compute complexity, SNR, and analog power consumption.
+ oai:arXiv.org:2601.11407v2
+ cs.IT
+ eess.SP
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Cel Thys, Rodney Martinez Alonso, Sofie Pollin
+
+
+ Building Production-Ready Probes For Gemini
+ https://arxiv.org/abs/2601.11516
+ arXiv:2601.11516v2 Announce Type: replace
+Abstract: Frontier language model capabilities are improving rapidly. We thus need stronger mitigations against bad actors misusing increasingly powerful systems. Prior work has shown that activation probes may be a promising misuse mitigation technique, but we identify a key remaining challenge: probes fail to generalize under important production distribution shifts. In particular, we find that the shift from short-context to long-context inputs is difficult for existing probe architectures. We propose several new probe architectures that handle this long-context distribution shift.
+ We evaluate these probes in the cyber-offensive domain, testing their robustness against various production-relevant distribution shifts, including multi-turn conversations, long context prompts, and adaptive red teaming. Our results demonstrate that while our novel architectures address context length, a combination of architecture choice and training on diverse distributions is required for broad generalization. Additionally, we show that pairing probes with prompted classifiers achieves optimal accuracy at a low cost due to the computational efficiency of probes.
+ These findings have informed the successful deployment of misuse mitigation probes in user-facing instances of Gemini, Google's frontier language model. Finally, we find early positive results using AlphaEvolve to automate improvements in both probe architecture search and adaptive red teaming, showing that automating some AI safety research is already possible.
+ oai:arXiv.org:2601.11516v2
+ cs.LG
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ J\'anos Kram\'ar, Joshua Engels, Zheng Wang, Bilal Chughtai, Rohin Shah, Neel Nanda, Arthur Conmy
+
+
+ A Control-Theoretic Perspective on Optimal High-Order Optimization
+ https://arxiv.org/abs/1912.07168
+ arXiv:1912.07168v5 Announce Type: replace-cross
+Abstract: We provide a control-theoretic perspective on optimal tensor algorithms for minimizing a convex function in a finite-dimensional Euclidean space. Given a function $\Phi: \mathbb{R}^d \rightarrow \mathbb{R}$ that is convex and twice continuously differentiable, we study a closed-loop control system that is governed by the operators $\nabla \Phi$ and $\nabla^2 \Phi$ together with a feedback control law $\lambda(\cdot)$ satisfying the algebraic equation $(\lambda(t))^p\|\nabla\Phi(x(t))\|^{p-1} = \theta$ for some $\theta \in (0, 1)$. Our first contribution is to prove the existence and uniqueness of a local solution to this system via the Banach fixed-point theorem. We present a simple yet nontrivial Lyapunov function that allows us to establish the existence and uniqueness of a global solution under certain regularity conditions and analyze the convergence properties of trajectories. The rate of convergence is $O(1/t^{(3p+1)/2})$ in terms of objective function gap and $O(1/t^{3p})$ in terms of squared gradient norm. Our second contribution is to provide two algorithmic frameworks obtained from discretization of our continuous-time system, one of which generalizes the large-step A-HPE framework and the other of which leads to a new optimal $p$-th order tensor algorithm. While our discrete-time analysis can be seen as a simplification and generalization of~\citet{Monteiro-2013-Accelerated}, it is largely motivated by the aforementioned continuous-time analysis, demonstrating the fundamental role that the feedback control plays in optimal acceleration and the clear advantage that the continuous-time perspective brings to algorithmic design. A highlight of our analysis is that we show that all of the $p$-th order optimal tensor algorithms that we discuss minimize the squared gradient norm at a rate of $O(k^{-3p})$, which complements the recent analysis.
+ oai:arXiv.org:1912.07168v5
+ math.OC
+ cs.CC
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tianyi Lin, Michael I. Jordan
+
+
+ Invertibility Conditions for the Admittance Matrices of Balanced Power Systems
+ https://arxiv.org/abs/2012.04087
+ arXiv:2012.04087v5 Announce Type: replace-cross
+Abstract: The admittance matrix encodes the network topology and electrical parameters of a power system in order to relate the current injection and voltage phasors. Since admittance matrices are central to many power engineering analyses, their characteristics are important subjects of theoretical studies. This paper focuses on the key characteristic of \emph{invertibility}. Previous literature has presented an invertibility condition for admittance matrices. This paper first identifies and fixes a technical issue in the proof of this previously presented invertibility condition. This paper then extends this previous work by deriving new conditions that are applicable to a broader class of systems with lossless branches and transformers with off-nominal tap ratios.
+ oai:arXiv.org:2012.04087v5
+ math.OC
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/TPWRS.2022.3206285
+ Daniel Turizo, Daniel K. Molzahn
+
+
+ Classification of high-dimensional data with spiked covariance matrix structure
+ https://arxiv.org/abs/2110.01950
+ arXiv:2110.01950v3 Announce Type: replace-cross
+Abstract: We study the classification problem for high-dimensional data with $n$ observations on $p$ features where the $p \times p$ covariance matrix $\Sigma$ exhibits a spiked eigenvalue structure and the vector $\zeta$, given by the difference between the {\em whitened} mean vectors, is sparse. We analyze an adaptive classifier (adaptive with respect to the sparsity $s$) that first performs dimension reduction on the feature vectors prior to classification in the dimensionally reduced space, i.e., the classifier whitens the data, then screens the features by keeping only those corresponding to the $s$ largest coordinates of $\zeta$ and finally applies Fisher linear discriminant on the selected features. Leveraging recent results on entrywise matrix perturbation bounds for covariance matrices, we show that the resulting classifier is Bayes optimal whenever $n \rightarrow \infty$ and $s \sqrt{n^{-1} \ln p} \rightarrow 0$. Notably, our theory also guarantees Bayes optimality for the corresponding quadratic discriminant analysis (QDA). Experimental results on real and synthetic data further indicate that the proposed approach is competitive with state-of-the-art methods while operating on a substantially lower-dimensional representation.
+ oai:arXiv.org:2110.01950v3
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Yin-Jen Chen, Minh Tang
+
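The pipeline in the abstract above (whiten, screen the $s$ largest coordinates of the whitened mean difference, then Fisher LDA on the selected features) can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the plain pooled-covariance whitening below stands in for their spiked-covariance estimator, and all function names are mine.

```python
import numpy as np

def fit_adaptive_classifier(X0, X1, s, eps=1e-6):
    """Illustrative sketch: whiten, screen the s largest whitened
    mean-difference coordinates, then Fisher LDA on those features.
    (Pooled-covariance whitening stands in for the spiked estimator.)"""
    X = np.vstack([X0, X1])
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)                        # pooled covariance estimate
    w, V = np.linalg.eigh(cov)
    W = V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T   # whitening matrix ~ Sigma^{-1/2}
    zeta = W @ (X1.mean(axis=0) - X0.mean(axis=0))  # whitened mean difference
    keep = np.argsort(np.abs(zeta))[-s:]            # feature screening: s largest coords
    mid = W @ (X0.mean(axis=0) + X1.mean(axis=0)) / 2.0
    return W, zeta, keep, mid

def predict(X, W, zeta, keep, mid):
    """In the whitened space the within-class covariance is ~identity, so
    Fisher LDA reduces to projecting onto zeta and thresholding at the midpoint."""
    Z = X @ W.T
    return ((Z[:, keep] - mid[keep]) @ zeta[keep] > 0).astype(int)
```

On a toy problem with a sparse mean shift, the screening step recovers the informative coordinates, which is what makes the classifier adaptive to the sparsity $s$.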
+
+ Three-chromatic geometric hypergraphs
+ https://arxiv.org/abs/2112.01820
+ arXiv:2112.01820v2 Announce Type: replace-cross
+Abstract: We prove that for any planar convex body C there is a positive integer m with the property that any finite point set P in the plane can be three-colored such that there is no translate of C containing at least m points of P, all of the same color. As a part of the proof, we show a strengthening of the Erd\H{o}s-Sands-Sauer-Woodrow conjecture. Surprisingly, the proof also relies on the two dimensional case of the Illumination conjecture.
+ oai:arXiv.org:2112.01820v2
+ math.CO
+ cs.DM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ G\'abor Dam\'asdi, D\"om\"ot\"or P\'alv\"olgyi
+
+
+ Functional dimension of feedforward ReLU neural networks
+ https://arxiv.org/abs/2209.04036
+ arXiv:2209.04036v2 Announce Type: replace-cross
+Abstract: It is well-known that the parameterized family of functions representable by fully-connected feedforward neural networks with ReLU activation function is precisely the class of piecewise linear functions with finitely many pieces. It is less well-known that for every fixed architecture of ReLU neural network, the parameter space admits positive-dimensional spaces of symmetries, and hence the local functional dimension near any given parameter is lower than the parametric dimension. In this work we carefully define the notion of functional dimension, show that it is inhomogeneous across the parameter space of ReLU neural network functions, and continue an investigation - initiated in [14] and [5] - into when the functional dimension achieves its theoretical maximum. We also study the quotient space and fibers of the realization map from parameter space to function space, supplying examples of fibers that are disconnected, fibers upon which functional dimension is non-constant, and fibers upon which the symmetry group acts non-transitively.
+ oai:arXiv.org:2209.04036v2
+ math.MG
+ cs.LG
+ math.GT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.aim.2025.110636
+ Advances in Mathematics, Volume 482, Part C, December 2025, 110636
+ J. Elisenda Grigsby, Kathryn Lindsey, Robert Meyerhoff, Chenxi Wu
+
+
+ The distribution of Ridgeless least squares interpolators
+ https://arxiv.org/abs/2307.02044
+ arXiv:2307.02044v2 Announce Type: replace-cross
+Abstract: The Ridgeless minimum $\ell_2$-norm interpolator in overparametrized linear regression has attracted considerable attention in recent years in both machine learning and statistics communities. While it seems to defy conventional wisdom that overfitting leads to poor prediction, recent theoretical research on its $\ell_2$-type risks reveals that its norm minimizing property induces an `implicit regularization' that helps prediction in spite of interpolation.
+ This paper takes a further step that aims at understanding its precise stochastic behavior as a statistical estimator. Specifically, we characterize the distribution of the Ridgeless interpolator in high dimensions, in terms of a Ridge estimator in an associated Gaussian sequence model with positive regularization, which provides a precise quantification of the prescribed implicit regularization in the most general distributional sense. Our distributional characterizations hold for general non-Gaussian random designs and extend uniformly to positively regularized Ridge estimators.
+ As a direct application, we obtain a complete characterization for a general class of weighted $\ell_q$ risks of the Ridge(less) estimators that were previously known only for $q=2$ by random matrix methods. These weighted $\ell_q$ risks include not only the standard prediction and estimation errors, but also the non-standard covariate shift settings. Our uniform characterizations further reveal a surprising feature of the commonly used generalized and $k$-fold cross-validation schemes: tuning the estimated $\ell_2$ prediction risk by these methods alone leads to simultaneously optimal $\ell_2$ in-sample, prediction and estimation risks, as well as the optimal length of debiased confidence intervals.
+ oai:arXiv.org:2307.02044v2
+ math.ST
+ cs.IT
+ math.IT
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiyang Han, Xiaocong Xu
+
+
+ Topology-Aware Loss for Aorta and Great Vessel Segmentation in Computed Tomography Images
+ https://arxiv.org/abs/2307.03137
+ arXiv:2307.03137v3 Announce Type: replace-cross
+Abstract: Segmentation networks are not explicitly constrained to learn global invariants of an image, such as the shape of an object and the geometry between multiple objects, when they are trained with a standard loss function. On the other hand, incorporating such invariants into network training may help improve performance for various segmentation tasks when they are intrinsic characteristics of the objects to be segmented. One example is segmentation of the aorta and great vessels in computed tomography (CT) images, where vessels are found in a particular geometry in the body due to the human anatomy and mostly appear as round objects on a 2D CT image. This paper addresses this issue by introducing a new topology-aware loss function that penalizes topology dissimilarities between the ground truth and prediction through persistent homology. Different from previously suggested segmentation network designs, which apply the threshold filtration on a likelihood function of the prediction map and the Betti numbers of the ground truth, this paper proposes to apply the Vietoris-Rips filtration to obtain persistence diagrams of both the ground truth and prediction maps and to calculate the dissimilarity with the Wasserstein distance between the corresponding persistence diagrams. This filtration has the advantage of modeling shape and geometry at the same time, which may not happen when the threshold filtration is applied. Our experiments on 4327 CT images of 24 subjects reveal that the proposed topology-aware loss function leads to better results than its counterparts, indicating its effectiveness.
+ oai:arXiv.org:2307.03137v3
+ eess.IV
+ cs.CV
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.bspc.2026.109512
+ Biomedical Signal Processing and Control, 117, 109512 (2026)
+ Seher Ozcelik, Sinan Unver, Ilke Ali Gurses, Rustu Turkay, Cigdem Gunduz-Demir
+
+
+ Generative Language Models on Nucleotide Sequences of Human Genes
+ https://arxiv.org/abs/2307.10634
+ arXiv:2307.10634v3 Announce Type: replace-cross
+Abstract: Language models, especially transformer-based ones, have achieved colossal success in NLP. In particular, models such as BERT for natural language understanding and GPT-3 for natural language generation have been highly influential. If we consider DNA sequences as a text written with an alphabet of four letters representing the nucleotides, they are similar in structure to natural languages. This similarity has led to the development of discriminative language models such as DNABert in the field of DNA-related bioinformatics. To our knowledge, however, the generative side of the coin is still largely unexplored. Therefore, we focus on developing an autoregressive generative language model, akin to GPT-3, for DNA sequences. Since working with whole DNA sequences is challenging without extensive computational resources, we decided to conduct our study on a smaller scale and focus on nucleotide sequences of human genes rather than the whole DNA. This decision does not change the structure of the problem, as both DNA and genes can be treated as 1D sequences of four different nucleotides without losing much information and without oversimplification. Firstly, we systematically studied an almost entirely unexplored problem and observed that RNNs perform best, while simple techniques such as N-grams are also promising. A further benefit was learning how to work with generative models on a language that, unlike natural languages, we do not understand. We also note the importance of evaluating on real-world tasks beyond classical metrics such as perplexity. In addition, we examined whether the data-hungry nature of these models is mitigated by choosing a language with a minimal vocabulary size (four, one symbol per nucleotide type), which might make the problem easier; however, we found that this did not change the amount of data required very much.
+ oai:arXiv.org:2307.10634v3
+ q-bio.GN
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1038/s41598-024-72512-x
+ Scientific Reports, 2024, 14.1: 22204
+ Musa Nuri Ihtiyar, Arzucan Ozgur
+
+
+ A First-Order Algorithm for Decentralised Min-Max Problems
+ https://arxiv.org/abs/2308.11876
+ arXiv:2308.11876v2 Announce Type: replace-cross
+Abstract: In this work, we consider a connected network of finitely many agents working cooperatively to solve a min-max problem with convex-concave structure. We propose a decentralised first-order algorithm which can be viewed as a non-trivial combination of two algorithms: PG-EXTRA for decentralised minimisation problems and the forward reflected backward method for (non-distributed) min-max problems. In each iteration of our algorithm, each agent computes the gradient of the smooth component of its local objective function as well as the proximal operator of its nonsmooth component, followed by a round of communication with its neighbours. Our analysis shows that the sequence generated by the method converges under standard assumptions with non-decaying stepsize.
+ oai:arXiv.org:2308.11876v2
+ math.OC
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yura Malitsky, Matthew K. Tam
+
+
+ Optimal Conditional Inference in Adaptive Experiments
+ https://arxiv.org/abs/2309.12162
+ arXiv:2309.12162v2 Announce Type: replace-cross
+Abstract: We study batched bandit experiments and consider the problem of inference conditional on the realized stopping time, assignment probabilities, and target parameter, where all of these may be chosen adaptively using information up to the last batch of the experiment. Absent further restrictions on the experiment, we show that inference using only the results of the last batch is optimal. When the adaptive aspects of the experiment are known to be location-invariant, in the sense that they are unchanged when we shift all batch-arm means by a constant, we show that there is additional information in the data, captured by one additional linear function of the batch-arm means. In the more restrictive case where the stopping time, assignment probabilities, and target parameter are known to depend on the data only through a collection of polyhedral events, we derive computationally tractable and optimal conditional inference procedures.
+ oai:arXiv.org:2309.12162v2
+ stat.ME
+ cs.LG
+ econ.EM
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Jiafeng Chen, Isaiah Andrews
+
+
+ Assessing Utility of Differential Privacy for RCTs
+ https://arxiv.org/abs/2309.14581
+ arXiv:2309.14581v2 Announce Type: replace-cross
+Abstract: Randomized controlled trials (RCTs) have become powerful tools for assessing the impact of interventions and policies in many contexts. They are considered the gold standard for causal inference in the biomedical fields and many social sciences. Researchers have published an increasing number of studies that rely on RCTs for at least part of their inference. These studies typically include the response data that has been collected, de-identified, and sometimes protected through traditional disclosure limitation methods. In this paper, we empirically assess the impact of privacy-preserving synthetic data generation methodologies on published RCT analyses by leveraging available replication packages (research compendia) in economics and policy analysis. We implement three privacy-preserving algorithms that build on one of the basic differentially private (DP) algorithms, the perturbed histogram, while supporting the quality of statistical inference. We highlight challenges with the direct use of this algorithm and of the stability-based histogram in our setting and describe the adjustments needed. We provide simulation studies and demonstrate that we can replicate the analysis of a published economics article on privacy-protected data under various parameterizations. We find that relatively straightforward (at a high level) privacy-preserving methods influenced by DP techniques allow for inference-valid protection of published data. The results are applicable to researchers wishing to share RCT data, especially in the context of low- and middle-income countries, with strong privacy protection.
+ oai:arXiv.org:2309.14581v2
+ stat.AP
+ cs.CR
+ econ.EM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Kaitlyn R. Webb, Soumya Mukherjee, Aratrika Mustafi, Aleksandra Slavkovi\'c, Lars Vilhuber
+
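As a rough illustration (not the paper's implementation), the perturbed-histogram mechanism mentioned above can be sketched as follows: bin the data, add Laplace noise calibrated to the sensitivity of the counts (scale 1/epsilon under add/remove adjacency), clip and renormalize, then sample synthetic records from the noisy density. The function names, bin choices, and sampling scheme are my assumptions for the sketch.

```python
import numpy as np

def dp_perturbed_histogram(data, bin_edges, epsilon, rng):
    """Perturbed histogram: add Laplace noise to bin counts (scale 1/epsilon
    gives epsilon-DP under add/remove adjacency), then clip and renormalize.
    Clipping and renormalizing are post-processing, so the DP guarantee holds."""
    counts, _ = np.histogram(data, bins=bin_edges)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)
    total = noisy.sum()
    if total == 0.0:                        # degenerate case: fall back to uniform
        return np.full(len(noisy), 1.0 / len(noisy))
    return noisy / total

def sample_synthetic(probs, bin_edges, n, rng):
    """Draw synthetic records from the noisy density, uniform within each bin."""
    idx = rng.choice(len(probs), size=n, p=probs)
    return rng.uniform(bin_edges[idx], bin_edges[idx + 1])
```

With moderate epsilon and a few thousand records, the synthetic sample preserves coarse distributional features (e.g. the mean) while the released statistic is differentially private.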
+
+ On the mixed monotonicity of polynomial functions
+ https://arxiv.org/abs/2312.15517
+ arXiv:2312.15517v2 Announce Type: replace-cross
+Abstract: In this paper, it is shown that every polynomial function is globally mixed monotone with a polynomial decomposition function. For univariate polynomials, the decomposition functions can be constructed from the Gram matrix representation of polynomial functions. The tightness of polynomial decomposition functions is discussed. Several examples are provided: one shows that polynomial decomposition functions, in addition to being global decomposition functions, can be much tighter than local decomposition functions constructed from local Jacobian bounds, and another demonstrates the application to reachable set over-approximation.
+ oai:arXiv.org:2312.15517v2
+ math.OC
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Adam M Tahir
+
+
+ Virtual Holonomic and Nonholonomic Constraints on Lie groups
+ https://arxiv.org/abs/2312.17531
+ arXiv:2312.17531v2 Announce Type: replace-cross
+Abstract: This paper develops a geometric framework for virtual constraints on Lie groups, with emphasis on mechanical systems modeled as affine connection systems. Virtual holonomic and virtual nonholonomic constraints, including linear and affine nonholonomic constraints, are formulated directly at the level of the Lie algebra and characterized as feedback--invariant manifolds. For each class of constraint, we establish existence and uniqueness conditions for enforcing feedback laws and show that the resulting closed--loop trajectories evolve as the dynamics of mechanical systems endowed with induced constrained connections, generalizing classical holonomic and nonholonomic reductions. Beyond stabilization, the framework enables the systematic generation of low--dimensional motion primitives on Lie groups by enforcing invariant, possibly affine, manifolds and shaping nontrivial dynamical regimes. The approach is illustrated through representative examples, including quadrotor UAVs and a rigid body with an internal rotor, where classical control laws are recovered as special cases and affine constraint--induced motion primitives are obtained.
+ oai:arXiv.org:2312.17531v2
+ math.OC
+ cs.SY
+ eess.SY
+ math-ph
+ math.MP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ A. Anahory Simoes, A. Bloch, L. Colombo, E. Stratoglou
+
+
+ Transferable Graphical MARL for Real-Time Estimation in Dynamic Wireless Networks
+ https://arxiv.org/abs/2404.03227
+ arXiv:2404.03227v4 Announce Type: replace-cross
+Abstract: We study real-time sampling and estimation of autoregressive Markovian sources in decentralized and dynamic multi-hop networks that share similar structures. Nodes cache neighboring samples and communicate over wireless collision channels. The objective is to minimize the time-average estimation error and/or the age of information under decentralized policies, which we address by developing a unified graphical multi-agent reinforcement learning framework. A key feature of the framework is its transferability, enabled by the fact that the number of trainable parameters is independent of the number of agents, allowing a learned policy to be directly deployed on dynamic yet structurally similar graphs without re-training. Building on this design, we establish rigorous theoretical guarantees on the transferability of the resulting policies. Numerical experiments demonstrate that (i) our method outperforms state-of-the-art baselines on dynamic graphs; (ii) the trained policies transfer well to larger networks, with performance gains increasing with the number of nodes; and (iii) incorporating recurrence is crucial, enhancing resilience to non-stationarity in both independent learning and centralized training with decentralized execution.
+ oai:arXiv.org:2404.03227v4
+ eess.SP
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Xingran Chen, Navid NaderiAlizadeh, Alejandro Ribeiro, Shirin Saeedi Bidokhti
+
+
+ Constructive proofs for some semilinear PDEs on $H^2(e^{|x|^2/4},\mathbb{R}^d)$
+ https://arxiv.org/abs/2404.04054
+ arXiv:2404.04054v2 Announce Type: replace-cross
+Abstract: We develop computer-assisted tools to study semilinear equations of the form \begin{equation*} -\Delta u -\frac{x}{2}\cdot \nabla{u}= f(x,u,\nabla u) ,\quad x\in\mathbb{R}^d. \end{equation*} Such equations appear naturally in several contexts, and in particular when looking for self-similar solutions of parabolic PDEs. We develop a general methodology, allowing us not only to prove the existence of solutions, but also to describe them very precisely. We introduce a spectral approach based on an eigenbasis of $\mathcal{L}:= -\Delta -\frac{x}{2}\cdot \nabla$ in spherical coordinates, together with a quadrature rule allowing us to deal with nonlinearities, in order to get accurate approximate solutions. We then use a Newton-Kantorovich argument, in an appropriate weighted Sobolev space, to prove the existence of a nearby exact solution. We apply our approach to nonlinear heat equations, to nonlinear Schr\"odinger equations and to a generalised viscous Burgers equation, and obtain both radial and non-radial self-similar profiles.
+ oai:arXiv.org:2404.04054v2
+ math.AP
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1007/s00211-025-01504-4
+ Maxime Breden, Hugo Chu
+
+
+ Fast Two-Time-Scale Stochastic Gradient Method with Applications in Reinforcement Learning
+ https://arxiv.org/abs/2405.09660
+ arXiv:2405.09660v4 Announce Type: replace-cross
+Abstract: Two-time-scale optimization is a framework introduced in Zeng et al. (2024) that abstracts a range of policy evaluation and policy optimization problems in reinforcement learning (RL). Akin to bi-level optimization under a particular type of stochastic oracle, the two-time-scale optimization framework has an upper level objective whose gradient evaluation depends on the solution of a lower level problem, which is to find the root of a strongly monotone operator. In this work, we propose a new method for solving two-time-scale optimization that achieves significantly faster convergence than prior art. The key idea of our approach is to leverage an averaging step to improve the estimates of the operators in both lower and upper levels before using them to update the decision variables. These additional averaging steps eliminate the direct coupling between the main variables, enabling the accelerated performance of our algorithm. We characterize the finite-time convergence rates of the proposed algorithm under various conditions of the underlying objective function, including strong convexity, the Polyak-Lojasiewicz condition, and general non-convexity. These rates significantly improve over the best-known complexity of the standard two-time-scale stochastic approximation algorithm. When applied to RL, we show how the proposed algorithm specializes to novel online sample-based methods that surpass or match the performance of the existing state of the art. Finally, we support our theoretical results with numerical simulations in RL.
+ oai:arXiv.org:2405.09660v4
+ math.OC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sihan Zeng, Thinh T. Doan
+
+
+ Distribution Steering for Discrete-Time Uncertain Ensemble Systems
+ https://arxiv.org/abs/2405.12415
+ arXiv:2405.12415v2 Announce Type: replace-cross
+Abstract: Ensemble systems appear frequently in many engineering applications and, as a result, they have become an important research topic in control theory. These systems are best characterized by the evolution of their underlying state distribution. Despite the work to date, few results exist dealing with the problem of directly modifying (i.e., ``steering'') the distribution of an ensemble system. In addition, in most existing results, the distribution of the states of an ensemble of discrete-time systems is assumed to be Gaussian. However, in case the system parameters are uncertain, it is not always realistic to assume that the distribution of the system follows a Gaussian distribution, thus complicating the solution of the overall problem. In this paper, we address the general distribution steering problem for first-order discrete-time ensemble systems, where the distributions of the system parameters and the states are arbitrary with finite first few moments. Linear system dynamics are considered using the method of power moments to transform the original infinite-dimensional problem into a finite-dimensional one. We also propose a control law for the ensuing moment system, which allows us to obtain the power moments of the desired control inputs. Finally, we solve the inverse problem to obtain the feasible control inputs from their corresponding power moments. We provide a numerical example to validate our theoretical developments.
+ oai:arXiv.org:2405.12415v2
+ math.OC
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Guangyu Wu, Panagiotis Tsiotras, Anders Lindquist
+
+
+ U-learning for Prediction Inference via Combinatory Multi-Subsampling: With Applications to LASSO and Neural Networks
+ https://arxiv.org/abs/2407.15301
+ arXiv:2407.15301v2 Announce Type: replace-cross
+Abstract: Epigenetic aging clocks play a pivotal role in estimating an individual's biological age through the examination of DNA methylation patterns at numerous CpG (Cytosine-phosphate-Guanine) sites within their genome. However, making valid inferences on predicted epigenetic ages, or more broadly, on predictions derived from high-dimensional inputs, presents challenges. We introduce a novel U-learning approach via combinatory multi-subsampling for making ensemble predictions and constructing confidence intervals for predictions of continuous outcomes when traditional asymptotic methods are not applicable. More specifically, our approach conceptualizes the ensemble estimators within the framework of generalized U-statistics and invokes the H\'ajek projection for deriving the variances of predictions and constructing confidence intervals with valid conditional coverage probabilities. We apply our approach to two commonly used predictive algorithms, Lasso and deep neural networks (DNNs), and illustrate the validity of inferences with extensive numerical studies. We have applied these methods to predict the DNA methylation age (DNAmAge) of patients with various health conditions, aiming to accurately characterize the aging process and potentially guide anti-aging interventions.
+ oai:arXiv.org:2407.15301v2
+ stat.ML
+ cs.LG
+ math.ST
+ q-bio.QM
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhe Fei, Yi Li
+
+
+ Neural timescales from a computational perspective
+ https://arxiv.org/abs/2409.02684
+ arXiv:2409.02684v3 Announce Type: replace-cross
+Abstract: Neural activity fluctuates over a wide range of timescales within and across brain areas. Experimental observations suggest that diverse neural timescales reflect information in dynamic environments. However, how timescales are defined and measured from brain recordings varies across the literature. Moreover, these observations do not specify the mechanisms underlying timescale variations, nor whether specific timescales are necessary for neural computation and brain function. Here, we synthesize three directions where computational approaches can distill the broad set of empirical observations into quantitative and testable theories: We review (i) how different data analysis methods quantify timescales across distinct behavioral states and recording modalities, (ii) how biophysical models provide mechanistic explanations for the emergence of diverse timescales, and (iii) how task-performing networks and machine learning models uncover the functional relevance of neural timescales. This integrative computational perspective thus complements experimental investigations, providing a holistic view on how neural timescales reflect the relationship between brain structure, dynamics, and behavior.
+ oai:arXiv.org:2409.02684v3
+ q-bio.NC
+ cs.LG
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Roxana Zeraati, Anna Levina, Jakob H. Macke, Richard Gao
+
+
+ A Complexity Dichotomy for Temporal Valued Constraint Satisfaction Problems
+ https://arxiv.org/abs/2409.07285
+ arXiv:2409.07285v2 Announce Type: replace-cross
+Abstract: We study the complexity of the valued constraint satisfaction problem (VCSP) for every valued structure with the domain ${\mathbb Q}$ that is preserved by all order-preserving bijections. Such VCSPs will be called temporal, in analogy to the (classical) constraint satisfaction problem: a relational structure is preserved by all order-preserving bijections if and only if all its relations have a first-order definition in $({\mathbb Q};<)$, and the CSPs for such structures are called temporal CSPs. Many optimization problems that have been studied intensively in the literature can be phrased as a temporal VCSP. We prove that a temporal VCSP is in P, or NP-complete. Our analysis uses the concept of fractional polymorphisms. This is the first dichotomy result for VCSPs over infinite domains which is complete in the sense that it treats all valued structures with a given automorphism group.
+ oai:arXiv.org:2409.07285v2
+ math.LO
+ cs.CC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Manuel Bodirsky, \'Edouard Bonnet, \v{Z}aneta Semani\v{s}inov\'a
+
+
+ Super Monotonic Alignment Search
+ https://arxiv.org/abs/2409.07704
+ arXiv:2409.07704v2 Announce Type: replace-cross
+Abstract: Monotonic alignment search (MAS), introduced by Glow-TTS, is one of the most popular algorithms in text-to-speech for estimating unknown alignments between text and speech. Since this algorithm searches for the most probable alignment with dynamic programming by caching all possible paths, its time complexity is $O(T \times S)$, where $T$ is the length of the text and $S$ is the length of the speech representation. The authors of Glow-TTS run this algorithm on CPU, and while they mentioned it is difficult to parallelize, we found that MAS can be parallelized along the text-length dimension and that CPU execution consumes an inordinate amount of time on inter-device copies. Therefore, we implemented a Triton kernel and a PyTorch JIT script to accelerate MAS on GPU without inter-device copies. As a result, the Super-MAS Triton kernel is up to 72 times faster in the extreme-length case. The code is available at https://github.com/supertone-inc/super-monotonic-align.
+ oai:arXiv.org:2409.07704v2
+ eess.AS
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junhyeok Lee, Hyeongju Kim
+
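A minimal NumPy sketch of the MAS dynamic program described in the abstract above (the $O(T \times S)$ CPU recurrence, not the Triton kernel; variable names are mine): at each speech frame the path either stays on the current text token or advances by one, and the best path is recovered by backtracking.

```python
import numpy as np

def monotonic_alignment_search(log_p):
    """log_p: (T, S) array of log-likelihoods of text token i at frame j.
    Returns a 0/1 matrix marking the most probable monotonic path
    from (0, 0) to (T-1, S-1); requires T <= S."""
    T, S = log_p.shape
    Q = np.full((T, S), -np.inf)
    Q[0, 0] = log_p[0, 0]
    for j in range(1, S):
        for i in range(min(j + 1, T)):          # reaching token i needs j >= i
            stay = Q[i, j - 1]                  # frame advances, token stays
            move = Q[i - 1, j - 1] if i > 0 else -np.inf  # token advances too
            Q[i, j] = log_p[i, j] + max(stay, move)
    path = np.zeros((T, S), dtype=np.int64)
    i = T - 1
    for j in range(S - 1, -1, -1):              # backtrack the argmax path
        path[i, j] = 1
        if i > 0 and (i == j or Q[i - 1, j - 1] >= Q[i, j - 1]):
            i -= 1
    return path
```

Each column of the result sums to one (every frame is assigned exactly one token) and each row is a contiguous run, matching the hard monotonic alignments Glow-TTS uses.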
+
+ Emotional Dimension Control in Language Model-Based Text-to-Speech: Spanning a Broad Spectrum of Human Emotions
+ https://arxiv.org/abs/2409.16681
+ arXiv:2409.16681v3 Announce Type: replace-cross
+Abstract: Emotional text-to-speech (TTS) systems struggle to capture the full spectrum of human emotions due to the inherent complexity of emotional expressions and the limited coverage of existing emotion labels. To address this, we propose a language model-based TTS framework that synthesizes speech across a broad range of emotional styles. Our approach enables flexible user control along three continuous dimensions - pleasure, arousal, and dominance (PAD). To enable this, we train an emotional dimension predictor that maps categorical emotion labels in speech datasets into the PAD space, grounded in established psychological research. Importantly, while the emotional dimension predictor leverages categorical labels, the TTS framework itself does not require explicit emotion labels during training. Objective and subjective evaluations demonstrate that our framework effectively generates more expressive emotional styles and enhances both naturalness and diversity compared to baselines.
+ oai:arXiv.org:2409.16681v3
+ eess.AS
+ cs.CL
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Kun Zhou, You Zhang, Dianwen Ng, Shengkui Zhao, Hao Wang, Bin Ma
+
+
+ Safety on the Fly: Constructing Robust Safety Filters via Policy Control Barrier Functions at Runtime
+ https://arxiv.org/abs/2410.11157
+ arXiv:2410.11157v3 Announce Type: replace-cross
+Abstract: Control Barrier Functions (CBFs) have proven to be an effective tool for performing safe control synthesis for nonlinear systems. However, guaranteeing safety in the presence of disturbances and input constraints for high relative degree systems is a difficult problem. In this work, we propose the Robust Policy CBF (RPCBF), a practical approach for constructing robust CBF approximations online via the estimation of a value function. We establish conditions under which the approximation qualifies as a valid CBF and demonstrate the effectiveness of the RPCBF-safety filter in simulation on a variety of high relative degree input-constrained systems. Finally, we demonstrate the benefits of our method in compensating for model errors on a hardware quadcopter platform by treating the model errors as disturbances. Website including code: www.oswinso.xyz/rpcbf/
+ oai:arXiv.org:2410.11157v3
+ math.OC
+ cs.RO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/LRA.2025.3597847
+ IEEE Robotics and Automation Letters, Volume 10, Issue 10, 2025, pages 10058-10065
+ Luzia Knoedler, Oswin So, Ji Yin, Mitchell Black, Zachary Serlin, Panagiotis Tsiotras, Javier Alonso-Mora, Chuchu Fan
+
+
+ Fast-forwarding quantum algorithms for linear dissipative differential equations
+ https://arxiv.org/abs/2410.13189
+ arXiv:2410.13189v2 Announce Type: replace-cross
+Abstract: We establish improved complexity estimates of quantum algorithms for linear dissipative ordinary differential equations (ODEs) and show that the time dependence can be fast-forwarded to be sub-linear. Specifically, we show that a quantum algorithm based on truncated Dyson series can prepare history states of dissipative ODEs up to time $T$ with cost $\widetilde{\mathcal{O}}(\log(T) (\log(1/\epsilon))^2 )$, which is an exponential speedup over the best previous result. For final state preparation at time $T$, we show that its complexity is $\widetilde{\mathcal{O}}(\sqrt{T} (\log(1/\epsilon))^2 )$, achieving a polynomial speedup in $T$. We also analyze the complexity of simpler lower-order quantum algorithms, such as the forward Euler method and the trapezoidal rule, and find that even lower-order methods can still achieve $\widetilde{\mathcal{O}}(\sqrt{T})$ cost with respect to time $T$ for preparing final states of dissipative ODEs. As applications, we show that quantum algorithms can simulate dissipative non-Hermitian quantum dynamics and heat processes with fast-forwarded complexity sub-linear in time.
+ oai:arXiv.org:2410.13189v2
+ quant-ph
+ cs.NA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Dong An, Akwum Onwunta, Gengzhi Yang
+
+
+ Comparison of Generative Learning Methods for Turbulence Surrogates
+ https://arxiv.org/abs/2411.16417
+ arXiv:2411.16417v4 Announce Type: replace-cross
+Abstract: Numerical simulations of turbulent flows present significant challenges in fluid dynamics due to their complexity and high computational cost. High resolution techniques such as Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) are generally not computationally affordable, particularly for technologically relevant problems. Recent advances in machine learning, specifically in generative probabilistic models, offer promising alternatives as surrogates for turbulence. This paper investigates the application of three generative models - Variational Autoencoders (VAE), Deep Convolutional Generative Adversarial Networks (DCGAN), and Denoising Diffusion Probabilistic Models (DDPM) - in simulating a von K\'arm\'an vortex street around a fixed cylinder projected into 2D, as well as a real-world experimental dataset of the wake flow of a cylinder array. Training data was obtained by means of LES in the simulated case and Particle Image Velocimetry (PIV) in the experimental case. We evaluate each model's ability to capture the statistical properties and spatial structures of the turbulent flow. Our results demonstrate that DDPM and DCGAN effectively replicate all flow distributions, highlighting their potential as efficient and accurate tools for turbulence surrogacy. We find a strong argument for DCGAN, as although they are more difficult to train (due to problems such as mode collapse), they show the fastest inference and training time, require less data to train compared to VAE and DDPM, and provide the results most closely aligned with the input stream. In contrast, VAE train quickly (and can generate samples quickly) but do not produce adequate results, and DDPM, whilst effective, are significantly slower at both inference and training time.
+ oai:arXiv.org:2411.16417v4
+ physics.flu-dyn
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Claudia Drygala, Edmund Ross, Mohammad Sharifi Ghazijahani, Christian Cierpka, Francesca di Mare, Hanno Gottschalk
+
+
+ MPAX: Mathematical Programming in JAX
+ https://arxiv.org/abs/2412.09734
+ arXiv:2412.09734v3 Announce Type: replace-cross
+Abstract: We present MPAX (Mathematical Programming in JAX), an open-source first-order solver for large-scale linear programming (LP) and convex quadratic programming (QP) built natively in JAX. The primary goal of MPAX is to exploit modern machine learning infrastructure for large-scale mathematical programming, while also providing advanced mathematical programming algorithms that are easy to integrate into machine learning workflows. MPAX implements two PDHG variants, r2HPDHG for LP and rAPDHG for QP, together with diagonal preconditioning, adaptive restarts, adaptive step sizes, primal-weight updates, infeasibility detection, and feasibility polishing. Leveraging JAX's compilation and parallelization ecosystem, MPAX provides across-hardware portability, batched solving, distributed optimization, and automatic differentiation. We evaluate MPAX on CPUs, NVIDIA GPUs, and Google TPUs, observing substantial GPU speedups over CPU baselines and competitive performance relative to GPU-based codebases on standard LP/QP benchmarks. Our numerical experiments further demonstrate MPAX's capabilities in high-throughput batched solving, near-linear multi-GPU scaling for dense LPs, and efficient end-to-end differentiable training. The solver is publicly available at https://github.com/MIT-Lu-Lab/MPAX.
+ oai:arXiv.org:2412.09734v3
+ math.OC
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haihao Lu, Zedong Peng, Jinwen Yang
+
+
+ Towards a constructive framework for control theory
+ https://arxiv.org/abs/2501.02267
+ arXiv:2501.02267v2 Announce Type: replace-cross
+Abstract: This work presents a framework for control theory based on constructive analysis to account for discrepancy between mathematical results and their implementation in a computer, also referred to as computational uncertainty. In control engineering, the latter is usually either neglected or considered submerged into some other type of uncertainty, such as system noise, and addressed within robust control. However, even robust control methods may be compromised when the mathematical objects involved in the respective algorithms fail to exist in exact form and subsequently fail to satisfy the required properties. For instance, in general stabilization using a control Lyapunov function, computational uncertainty may distort stability certificates or even destabilize the system despite robustness of the stabilization routine with regard to system, actuator and measurement noise. In fact, battling numerical problems in practical implementation of controllers is common among control engineers. Such observations indicate that computational uncertainty should indeed be addressed explicitly in controller synthesis and system analysis. The major contribution here is a fairly general framework for proof techniques in analysis and synthesis of control systems based on constructive analysis which explicitly states that every computation be doable only up to a finite precision thus accounting for computational uncertainty. A series of previous works is overviewed, including constructive system stability and stabilization, approximate optimal controls, eigenvalue problems, Caratheodory trajectories, measurable selectors. Additionally, a new constructive version of Danskin's theorem, which is crucial in adversarial defense, is presented.
+ oai:arXiv.org:2501.02267v2
+ math.OC
+ cs.AI
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/LCSYS.2021.3076972
+ in IEEE Control Systems Letters, vol. 6, pp. 379-384, 2022
+ Pavel Osinenko
+
+
+ A survey on Clustered Federated Learning: Taxonomy, Analysis and Applications
+ https://arxiv.org/abs/2501.17512
+ arXiv:2501.17512v3 Announce Type: replace-cross
+Abstract: As Federated Learning (FL) expands, the challenge of non-independent and identically distributed (non-IID) data becomes critical. Clustered Federated Learning (CFL) addresses this by training multiple specialized models, each representing a group of clients with similar data distributions. However, the term ''CFL'' has increasingly been applied to operational strategies unrelated to data heterogeneity, creating significant ambiguity. This survey provides a systematic review of the CFL literature and introduces a principled taxonomy that classifies algorithms into Server-side, Client-side, and Metadata-based approaches. Our analysis reveals a distinct dichotomy: while theoretical research prioritizes privacy-preserving Server/Client-side methods, real-world applications in IoT, Mobility, and Energy overwhelmingly favor Metadata-based efficiency. Furthermore, we explicitly distinguish ''Core CFL'' (grouping clients for non-IID data) from ''Clustered X FL'' (operational variants for system heterogeneity). Finally, we outline lessons learned and future directions to bridge the gap between theoretical privacy and practical efficiency.
+ oai:arXiv.org:2501.17512v3
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Michael Ben Ali (IRIT), Omar El-Rifai (CIS-ENSMSE), Imen Megdiche (IRIT-SIG, INUC), Andr\'e Peninou (IRIT-SIG, UT2J), Olivier Teste (IRIT-SIG)
+
+
+ Network-Level Measures of Mobility from Aggregated Origin-Destination Data
+ https://arxiv.org/abs/2502.04162
+ arXiv:2502.04162v2 Announce Type: replace-cross
+Abstract: We introduce a framework for defining and interpreting collective mobility measures from spatially and temporally aggregated origin--destination (OD) data. Rather than characterizing individual behavior, these measures describe properties of the mobility system itself: how network organization, spatial structure, and routing constraints shape and channel population movement. In this view, aggregate mobility flows reveal aspects of connectivity, functional organization, and large-scale daily activity patterns encoded in the underlying transport and spatial network.
+ To support interpretation and provide a controlled reference for the proposed time-elapsed calculations, we first employ an independent, network-driven synthetic data generator in which trajectories arise from prescribed system structure rather than observed data. This controlled setting provides a concrete reference for understanding how the proposed measures reflect network organization and flow constraints.
+ We then apply the measures to fully anonymized data from the NetMob 2024 Data Challenge, examining their behavior under realistic limitations of spatial and temporal aggregation. While such data constraints restrict dynamical resolution, the resulting metrics still exhibit interpretable large-scale structure and temporal variation at the city scale.
+ oai:arXiv.org:2502.04162v2
+ stat.AP
+ cs.LG
+ cs.SI
+ stat.ML
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Alisha Foster, David A. Meyer, Asif Shakeel
+
+
+ Graceful forgetting: Memory as a process
+ https://arxiv.org/abs/2502.11105
+ arXiv:2502.11105v5 Announce Type: replace-cross
+Abstract: A rational framework is proposed to explain how we accommodate unbounded sensory input within bounded memory. According to this framework, memory is stored as a statistic-like representation that is repeatedly summarized and compressed to make room for new input. Summarization of sensory input must be rapid; that of abstract trace might be slower and more deliberative, drawing on elaborative processes some of which might occasionally reach consciousness (as in mind-wandering). Short-term sensory traces are summarized as simple statistics organized into structures such as a time series, graph or dictionary, and longer-term abstract traces as more complex statistic-like structures. Summarization at multiple time scales requires an intensive process of memory curation which might account for the high metabolic consumption of the brain at rest. Summarization may be guided by heuristics to help choose which statistics to apply at each step, so that the trace is useful for a wide range of future needs, the objective being to "represent the past" rather than tune for a specific task. However, the choice of statistics (or of heuristics to guide that choice) is a potential target for learning, possibly over long-term scales of development or evolution. The framework is intended as an aid to make sense of our extensive empirical and theoretical knowledge of memory and bring us closer to understanding it in functional and mechanistic terms.
+ oai:arXiv.org:2502.11105v5
+ q-bio.NC
+ cs.IR
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alain de Cheveign\'e
+
+
+ Variable transformations in consistent loss functions
+ https://arxiv.org/abs/2502.16542
+ arXiv:2502.16542v3 Announce Type: replace-cross
+Abstract: The empirical use of variable transformations within (strictly) consistent loss functions is widespread, yet a theoretical understanding is lacking. To address this gap, we develop a theoretical framework that establishes formal characterizations of (strict) consistency for such transformed loss functions. Our analysis focuses on two interrelated cases: (a) transformations applied solely to the realization variable and (b) bijective transformations applied jointly to both the realization and prediction variables. These cases extend the well-established framework of transformations applied exclusively to the prediction variable, as formalized by Osband's revelation principle. We further develop analogous characterizations for (strict) identification functions. The resulting theoretical framework is broadly applicable to statistical and machine learning methodologies. For instance, we apply the framework to Bregman and expectile loss functions to interpret empirical findings from models trained with transformed loss functions and systematically construct new identifiable and elicitable functionals, which we term respectively $g$-transformed expectation and $g$-transformed expectile. Applications of the framework to simulated and real-world data illustrate its practical utility in diverse settings. By unifying theoretical insights with practical applications, this work advances principled methodologies for designing loss functions in complex predictive tasks.
+ oai:arXiv.org:2502.16542v3
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1016/j.knosys.2025.115202
+ Knowledge-Based Systems 336 (2026) 115202
+ Hristos Tyralis, Georgia Papacharalampous
+
+
+ Zhuk's bridges, centralizers, and similarity
+ https://arxiv.org/abs/2503.03551
+ arXiv:2503.03551v2 Announce Type: replace-cross
+Abstract: This is the second of three papers motivated by the author's desire to understand and explain "algebraically" one aspect of Dmitriy Zhuk's proof of the CSP Dichotomy Theorem. In this paper we extend Zhuk's "bridge" construction to arbitrary meet-irreducible congruences of finite algebras in locally finite varieties with a Taylor term. We then connect bridges to centrality and similarity. In particular, we prove that Zhuk's bridges and our "similarity bridges" (defined in our first paper) convey the same information in locally finite Taylor varieties.
+ oai:arXiv.org:2503.03551v2
+ math.LO
+ cs.LO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Ross Willard
+
+
+ Fairness-aware kidney exchange and kidney paired donation
+ https://arxiv.org/abs/2503.06431
+ arXiv:2503.06431v2 Announce Type: replace-cross
+Abstract: The kidney paired donation (KPD) program provides an innovative solution to overcome incompatibility challenges in kidney transplants by matching incompatible donor-patient pairs and facilitating kidney exchanges. To address unequal access to transplant opportunities, there are two widely used fairness criteria: group fairness and individual fairness. However, these criteria do not consider protected patient features, which refer to characteristics legally or ethically recognized as needing protection from discrimination, such as race and gender. Motivated by the calibration principle in machine learning, we introduce a new fairness criterion: the matching outcome should be conditionally independent of the protected feature, given the sensitization level. We integrate this fairness criterion as a constraint within the KPD optimization framework and propose a computationally efficient solution using linearization strategies and column-generation methods. Theoretically, we analyze the associated price of fairness using random graph models. Empirically, we compare our fairness criterion with group fairness and individual fairness through both simulations and a real-data example.
+ oai:arXiv.org:2503.06431v2
+ stat.ME
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Mingrui Zhang, Xiaowu Dai, Lexin Li
+
+
+ Global evidence for a consistent spatial footprint of intra-urban centers
+ https://arxiv.org/abs/2503.06445
+ arXiv:2503.06445v2 Announce Type: replace-cross
+Abstract: Urban space is highly heterogeneous, with population and human activities concentrating in localized centers. However, the global organization of such intra-urban centers remains poorly understood due to the lack of consistent, comparable data. Here we develop a scalable geospatial framework to identify intra-urban activity centers worldwide using nighttime light observations. Applying this approach to more than 9,500 cities, we construct a high-resolution global dataset of over 15,000 centers. We uncover a striking regularity: despite vast differences in city size, regional development, and population density, the built-up area associated with individual centers remains remarkably consistent. Across cities, total urban area scales proportionally with the number of centers, yielding a stable mean spatial footprint. This regularity holds at the micro-scale, where Voronoi-based service areas exhibit a characteristic size that is persistent across countries and independent of local population concentration. As a geometric consequence, this polycentric multiplication maintains stable average distances to the nearest center as cities expand, preventing the accessibility decay inherent in monocentric growth. These findings reveal a universal organizing principle whereby urban expansion is accommodated through the replication of activity centers with a consistent spatial extent, providing a new empirical foundation for understanding the nature of urban growth.
+ oai:arXiv.org:2503.06445v2
+ physics.soc-ph
+ cs.SI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shuai Pang, Junlong Zhang, Yu Liu, Lei Dong
+
+
+ The 4D Human Embryonic Brain Atlas: spatiotemporal atlas generation for rapid anatomical changes
+ https://arxiv.org/abs/2503.07177
+ arXiv:2503.07177v2 Announce Type: replace-cross
+Abstract: Early brain development is crucial for lifelong neurodevelopmental health. However, current clinical practice offers limited knowledge of normal embryonic brain anatomy on ultrasound, despite the brain undergoing rapid changes within the time-span of days. To provide detailed insights into normal brain development and identify deviations, we created the 4D Human Embryonic Brain Atlas using a deep learning-based approach for groupwise registration and spatiotemporal atlas generation. Our method introduced a time-dependent initial atlas and penalized deviations from it, ensuring age-specific anatomy was maintained throughout rapid development. The atlas was generated and validated using 831 3D ultrasound images from 402 subjects in the Rotterdam Periconceptional Cohort, acquired between gestational weeks 8 and 12. We evaluated the effectiveness of our approach with an ablation study, which demonstrated that incorporating a time-dependent initial atlas and penalization produced anatomically accurate results. In contrast, omitting these adaptations led to an anatomically incorrect atlas. Visual comparisons with an existing ex-vivo embryo atlas further confirmed the anatomical accuracy of our atlas. In conclusion, the proposed method successfully captures the rapid anatomical development of the embryonic brain. The resulting 4D Human Embryonic Brain Atlas provides unique insights into this crucial early life period and holds the potential for improving the detection, prevention, and treatment of prenatal neurodevelopmental disorders.
+ oai:arXiv.org:2503.07177v2
+ eess.IV
+ cs.CV
+ q-bio.QM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1016/j.compmedimag.2026.102702
+ Computerized Medical Imaging and Graphics, 2026, 102702
+ Wietske A. P. Bastiaansen, Melek Rousian, Anton H. J. Koning, Wiro J. Niessen, Bernadette S. de Bakker, R\'egine P. M. Steegers-Theunissen, Stefan Klein
+
+
+ Covert Entanglement Generation and Secrecy
+ https://arxiv.org/abs/2503.21002
+ arXiv:2503.21002v4 Announce Type: replace-cross
+Abstract: We determine the covert capacity for entanglement generation over a noisy quantum channel. While secrecy guarantees that the transmitted information remains inaccessible to an adversary, covert communication ensures that the transmission itself remains undetectable. The entanglement dimension follows a square root law (SRL) in the covert setting, i.e., $O(\sqrt{n})$ EPR pairs can be distributed covertly and reliably over $n$ channel uses. We begin with covert communication of classical information under a secrecy constraint. We then leverage this result to construct a coding scheme for covert entanglement generation.
+ oai:arXiv.org:2503.21002v4
+ quant-ph
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ohad Kimelfeld, Boulat A. Bash, Uzi Pereg
+
+
+ Robust Channel Estimation for Optical Wireless Communications Using Neural Network
+ https://arxiv.org/abs/2504.02134
+ arXiv:2504.02134v3 Announce Type: replace-cross
+Abstract: Optical Wireless Communication (OWC) has gained significant attention due to its high-speed data transmission and throughput. Optical wireless channels are often assumed to be flat, but we evaluate frequency selective channels to consider high data rate optical wireless or very dispersive environments. To address this for optical scenarios, this paper presents a robust, low-complexity channel estimation framework to mitigate frequency-selective effects and thereby improve system reliability and performance. This channel estimation framework contains a neural network that can estimate general optical wireless channels without prior channel information about the environment. Based on this estimate and the corresponding delay spread, one of several candidate offline-trained neural networks will be activated to predict this channel. Simulation results demonstrate that the proposed method has improved and robust normalized mean square error (NMSE) and bit error rate (BER) performance compared to conventional estimation methods while maintaining computational efficiency. These findings highlight the potential of neural network solutions in enhancing the performance of OWC systems under indoor channel conditions.
+ oai:arXiv.org:2504.02134v3
+ eess.SP
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Dianxin Luan, John Thompson
+
+
+ Computationally Efficient Signal Detection with Unknown Bandwidths
+ https://arxiv.org/abs/2504.09342
+ arXiv:2504.09342v3 Announce Type: replace-cross
+Abstract: Signal detection in environments with unknown signal bandwidth and time intervals is a fundamental problem in adversarial and spectrum-sharing scenarios. This paper addresses the problem of detecting signals occupying unknown degrees of freedom from non-coherent power measurements, where the signal is constrained to an interval in one dimension or a hyper-cube in multiple dimensions. A GLRT is derived, resulting in a straightforward metric involving normalized average signal energy for each candidate signal set. We present bounds on false alarm and missed detection probabilities, demonstrating their dependence on SNR and signal set sizes. To overcome the inherent computational complexity of exhaustive searches, we propose a computationally efficient binary search method, reducing the complexity from O(N^2) to O(N) for one-dimensional cases. Simulations indicate that the method maintains performance near exhaustive searches and achieves asymptotic consistency, with interval-of-overlap converging to one under constant SNR as measurement size increases. The simulation studies also demonstrate superior performance and reduced complexity compared to contemporary neural network-based approaches, specifically outperforming custom-trained U-Net models in spectrum detection tasks.
+ oai:arXiv.org:2504.09342v3
+ eess.SP
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ali Rasteh, Sundeep Rangan
+
+
+ Adaptive Entanglement Distillation
+ https://arxiv.org/abs/2504.11670
+ arXiv:2504.11670v2 Announce Type: replace-cross
+Abstract: Quantum network applications impose a variety of requirements on entanglement resources in terms of rate, fidelity, latency, and more. The repeaters in the quantum network must combine good methods for entanglement generation, effective entanglement distillation, and smart routing protocols to satisfy these application requirements. In this work, we focus on entanglement distillation in a linear chain of quantum repeaters. While conventional approaches reuse the same distillation scheme over multiple hop lengths after entanglement swaps, we propose a novel adaptive quantum error correction (QEC) scheme that boosts end-to-end metrics. Specifically, depending on the network operating point, we adapt the code used in distillation over successive rounds to monotonically increase the rate while also improving fidelity. We demonstrate the effectiveness of this strategy using three codes, with parameters [[9,1,3]], [[9,2,3]], [[9,3,3]], and a new performance metric, efficiency, that incorporates both overall rate and fidelity. Since the minimum input fidelity for QEC-based distillation is high, we then extend our study to include non-QEC-based purification protocols, specifically DEJMPS since it outperforms others. We compare the performance of end-to-end DEJMPS against adapting from DEJMPS to QEC once DEJMPS improves the initial fidelity to the threshold for QEC. Through a refined efficiency metric, we illuminate the regime where QEC is beneficial. These results provide a detailed outlook for entanglement purification and distillation in first and second generation quantum repeaters.
+ oai:arXiv.org:2504.11670v2
+ quant-ph
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sijie Cheng, Narayanan Rengaswamy
+
+
+ $k$-Inductive and Interpolation-Inspired Barrier Certificates for Stochastic Dynamical Systems
+ https://arxiv.org/abs/2504.15412
+ arXiv:2504.15412v2 Announce Type: replace-cross
+Abstract: In this paper, we introduce two new types of barrier certificates that are based on multiple functions rather than a single one. A conventional barrier certificate for a stochastic dynamical system is a nonnegative real-valued function whose expected value does not increase as the system evolves. This requirement guarantees that the barrier certificate forms a nonnegative supermartingale and can be used to derive a lower bound on the probability that the system remains safe. A key advantage of such certificates is that they can be automatically searched for using tools such as optimization programs instantiated with a fixed template. When this search is unsuccessful, the common practice is to modify the template and attempt the synthesis again. Drawing inspiration from logical interpolation, we first propose an alternative framework that uses a collection of functions to jointly serve as a barrier certificate. We refer to this construct as an interpolation-inspired barrier certificate. Nonetheless, we observe that these certificates still require one function in the collection to satisfy a supermartingale condition. Motivated by recent work in the literature, we next combine k-induction with interpolation-inspired certificates to relax this supermartingale constraint. We develop a general and more flexible notion of barrier certificates, which we call k-inductive interpolation-inspired barrier certificates. This formulation encompasses multiple ways of integrating interpolation-inspired barrier certificates with k-induction. We highlight two specific instantiations among these possible combinations. For polynomial systems, we employ sum-of-squares (SOS) programming to synthesize the corresponding set of functions. Finally, through our case studies, we show that the proposed methods enable the use of simpler templates and yield tighter lower bounds on the safety probability.
+ oai:arXiv.org:2504.15412v2
+ math.OC
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohammed Adib Oumer, Vishnu Murali, Majid Zamani
+
+
+ High-Temperature Fermionic Gibbs States are Mixtures of Gaussian States
+ https://arxiv.org/abs/2505.09730
+ arXiv:2505.09730v2 Announce Type: replace-cross
+Abstract: Efficient simulation of a quantum system generally relies on structural properties of the quantum state. Motivated by the recent results by Bakshi et al. on the sudden death of entanglement in high-temperature Gibbs states of quantum spin systems, we study the high-temperature Gibbs states of bounded-degree local fermionic Hamiltonians, which include the special case of geometrically local fermionic systems. We prove that at a sufficiently high temperature that is independent of the system size, the Gibbs state is a probabilistic mixture of fermionic Gaussian states. This forms the basis of an efficient classical algorithm to prepare the Gibbs state by sampling from a distribution of fermionic Gaussian states. As a contrasting example, we show that high-temperature Gibbs states of the Sachdev-Ye-Kitaev (SYK) model are not convex mixtures of Gaussian states.
+ oai:arXiv.org:2505.09730v2
+ quant-ph
+ cs.DS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Akshar Ramkumar, Yiyi Cai, Yu Tong, Jiaqing Jiang
+
+
+ Flow-based Generative Modeling of Potential Outcomes and Counterfactuals
+ https://arxiv.org/abs/2505.16051
+ arXiv:2505.16051v3 Announce Type: replace-cross
+Abstract: Predicting potential and counterfactual outcomes from observational data is central to individualized decision-making, particularly in clinical settings where treatment choices must be tailored to each patient rather than guided solely by population averages. We propose PO-Flow, a continuous normalizing flow (CNF) framework for causal inference that jointly models potential outcome distributions and factual-conditioned counterfactual outcomes. Trained via flow matching, PO-Flow provides a unified approach to individualized potential outcome prediction, conditional average treatment effect estimation, and counterfactual prediction. By encoding an observed factual outcome into a shared latent representation and decoding it under an alternative treatment, PO-Flow relates factual and counterfactual realizations at the individual level, rather than generating counterfactuals independently from marginal conditional distributions. In addition, PO-Flow supports likelihood-based evaluation of potential outcomes, enabling uncertainty-aware assessment of predictions. A supporting recovery guarantee is established under certain assumptions, and empirical results on benchmark datasets demonstrate strong performance across a range of causal inference tasks within the potential outcomes framework.
+ oai:arXiv.org:2505.16051v3
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dongze Wu, David I. Inouye, Yao Xie
+
+
+ Experimental robustness benchmarking of quantum neural networks on a superconducting quantum processor
+ https://arxiv.org/abs/2505.16714
+ arXiv:2505.16714v2 Announce Type: replace-cross
+Abstract: Quantum machine learning (QML) models, like their classical counterparts, are vulnerable to adversarial attacks, hindering their secure deployment. Here, we report the first systematic experimental robustness benchmark for 20-qubit quantum neural network (QNN) classifiers executed on a superconducting processor. Our benchmarking framework features an efficient adversarial attack algorithm designed for QNNs, enabling quantitative characterization of adversarial robustness and robustness bounds. From our analysis, we verify that adversarial training reduces sensitivity to targeted perturbations by regularizing input gradients, significantly enhancing QNN's robustness. Additionally, our analysis reveals that QNNs exhibit superior adversarial robustness compared to classical neural networks, an advantage attributed to inherent quantum noise. Furthermore, the empirical upper bound extracted from our attack experiments shows a minimal deviation ($3 \times 10^{-3}$) from the theoretical lower bound, providing strong experimental confirmation of the attack's effectiveness and the tightness of fidelity-based robustness bounds. This work establishes a critical experimental framework for assessing and improving quantum adversarial robustness, paving the way for secure and reliable QML applications.
+ oai:arXiv.org:2505.16714v2
+ quant-ph
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Hai-Feng Zhang, Zhao-Yun Chen, Peng Wang, Liang-Liang Guo, Tian-Le Wang, Xiao-Yan Yang, Ren-Ze Zhao, Ze-An Zhao, Sheng Zhang, Lei Du, Hao-Ran Tao, Zhi-Long Jia, Wei-Cheng Kong, Huan-Yu Liu, Athanasios V. Vasilakos, Yang Yang, Yu-Chun Wu, Ji Guan, Peng Duan, Guo-Ping Guo
+
+
+ ALPCAHUS: Subspace Clustering for Heteroscedastic Data
+ https://arxiv.org/abs/2505.18918
+ arXiv:2505.18918v3 Announce Type: replace-cross
+Abstract: Principal component analysis (PCA) is a key tool in the field of data dimensionality reduction. Various methods have been proposed to extend PCA to the union of subspace (UoS) setting for clustering data that comes from multiple subspaces like K-Subspaces (KSS). However, some applications involve heterogeneous data that vary in quality due to noise characteristics associated with each data sample. Heteroscedastic methods aim to deal with such mixed data quality. This paper develops a heteroscedastic-based subspace clustering method, named ALPCAHUS, that can estimate the sample-wise noise variances and use this information to improve the estimate of the subspace bases associated with the low-rank structure of the data. This clustering algorithm builds on K-Subspaces (KSS) principles by extending the recently proposed heteroscedastic PCA method, named LR-ALPCAH, for clusters with heteroscedastic noise in the UoS setting. Simulations and real-data experiments show the effectiveness of accounting for data heteroscedasticity compared to existing clustering algorithms. Code available at https://github.com/javiersc1/ALPCAHUS.
+ oai:arXiv.org:2505.18918v3
+ stat.ML
+ cs.LG
+ eess.SP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Javier Salazar Cavazos, Jeffrey A Fessler, Laura Balzano
+
+
+ On groups with EDT0L word problem
+ https://arxiv.org/abs/2505.20057
+ arXiv:2505.20057v3 Announce Type: replace-cross
+Abstract: We prove that the word problem for the infinite cyclic group is not EDT0L, and obtain as a corollary that a finitely generated group with EDT0L word problem must be torsion. In addition, we show that the property of having an EDT0L word problem is invariant under change of generating set and passing to finitely generated subgroups. This represents significant progress towards the conjecture that all groups with EDT0L word problem are finite (i.e. precisely the groups with regular word problem).
+ oai:arXiv.org:2505.20057v3
+ math.GR
+ cs.FL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alex Bishop, Murray Elder, Alex Evetts, Paul Gallot, Alex Levine
+
+
+ Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning
+ https://arxiv.org/abs/2505.23783
+ arXiv:2505.23783v2 Announce Type: replace-cross
+Abstract: In-Context Learning (ICL) allows Large Language Models (LLMs) to adapt to new tasks with just a few examples, but their predictions often suffer from systematic biases, leading to unstable performances in classification. While calibration techniques are proposed to mitigate these biases, we show that, in the logit space, many of these methods are equivalent to merely shifting the LLM's decision boundary without having the ability to alter its orientation. This proves inadequate when biases cause the LLM to be severely misdirected. To address these limitations and provide a unifying framework, we propose Supervised Calibration (SC), a loss-minimization based framework which learns an optimal, per-class affine transformation of the LLM's predictive probabilities in the logit space without requiring external data beyond the context. By using a more expressive functional class, SC not only subsumes many existing calibration methods in ICL as special cases, but also enables the ability to alter and even completely reverse the orientation of the LLM's decision boundary. Furthermore, SC's loss-based nature facilitates the seamless integration of two purpose-built regularization techniques: context-invariance and directional trust-region. The former is designed to tackle the instability issue in ICL, while the latter controls the degree of calibration. Finally, SC delivers state-of-the-art performance over calibration baselines in the 4-shot, 8-shot, and 16-shot settings across all nine datasets for Mistral-7B-Instruct-v0.3, LLaMA-2-7B-chat, and Qwen2-7B-Instruct.
+ oai:arXiv.org:2505.23783v2
+ stat.ML
+ cs.AI
+ cs.CL
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Korel Gundem, Juncheng Dong, Dennis Zhang, Vahid Tarokh, Zhengling Qi
+
+
+ Flagged Extensions and Numerical Simulations for Quantum Channel Capacity: Bridging Theory and Computation
+ https://arxiv.org/abs/2506.03429
+ arXiv:2506.03429v2 Announce Type: replace-cross
+Abstract: I will investigate the capacities of noisy quantum channels through a combined analytical and numerical approach. First, I introduce novel flagged extension techniques that embed a channel into a higher-dimensional space, enabling single-letter upper bounds on quantum and private capacities. My results refine previous bounds and clarify noise thresholds beyond which quantum transmission vanishes. Second, I present a simulation framework that uses coherent information to estimate channel capacities in practice, focusing on two canonical examples: the amplitude damping channel (which I confirm is degradable and thus single-letter) and the depolarizing channel (whose capacity requires multi-letter superadditivity). By parameterizing input qubit states on the Bloch sphere, I numerically pinpoint the maximum coherent information for each channel and validate the flagged extension bounds. Notably, I capture the abrupt transition to zero capacity at high noise and observe superadditivity for moderate noise levels.
+ oai:arXiv.org:2506.03429v2
+ quant-ph
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Vahid Nourozi
+
+
+ Thompson Sampling in Function Spaces via Neural Operators
+ https://arxiv.org/abs/2506.21894
+ arXiv:2506.21894v3 Announce Type: replace-cross
+Abstract: We propose an extension of Thompson sampling to optimization problems over function spaces where the objective is a known functional of an unknown operator's output. We assume that queries to the operator (such as running a high-fidelity simulator or physical experiment) are costly, while functional evaluations on the operator's output are inexpensive. Our algorithm employs a sample-then-optimize approach using neural operator surrogates. This strategy avoids explicit uncertainty quantification by treating trained neural operators as approximate samples from a Gaussian process (GP) posterior. We derive regret bounds and theoretical results connecting neural operators with GPs in infinite-dimensional settings. Experiments benchmark our method against other Bayesian optimization baselines on functional optimization tasks involving partial differential equations of physical systems, demonstrating better sample efficiency and significant performance gains.
+ oai:arXiv.org:2506.21894v3
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Rafael Oliveira, Xuesong Wang, Kian Ming A. Chai, Edwin V. Bonilla
+
+
+ On a result by Meshulam
+ https://arxiv.org/abs/2506.22553
+ arXiv:2506.22553v2 Announce Type: replace-cross
+Abstract: In 1996, Meshulam proved that every sequence generated by applying projections onto affine subspaces, drawn from a finite collection in Euclidean space, must be bounded.
+ In this paper, we extend his result not only from affine subspaces to convex polyhedral subsets, but also from Euclidean to general Hilbert space. Various examples are provided to illustrate the sharpness of the results.
+ oai:arXiv.org:2506.22553v2
+ math.OC
+ cs.NA
+ math.FA
+ math.NA
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Heinz H. Bauschke, Tran Thanh Tung
+
+
+ Event2Audio: Event-Based Optical Vibration Sensing
+ https://arxiv.org/abs/2507.03273
+ arXiv:2507.03273v2 Announce Type: replace-cross
+Abstract: Small vibrations observed in video can unveil information beyond what is visual, such as sound and material properties. It is possible to passively record these vibrations when they are visually perceptible, or actively amplify their visual contribution with a laser beam when they are not perceptible. In this paper, we improve upon the active sensing approach by leveraging event-based cameras, which are designed to efficiently capture fast motion. We demonstrate our method experimentally by recovering audio from vibrations, even for multiple simultaneous sources, and in the presence of environmental distortions. Our approach matches the state-of-the-art reconstruction quality at much faster speeds, approaching real-time processing.
+ oai:arXiv.org:2507.03273v2
+ eess.IV
+ cs.CV
+ eess.AS
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/ICCP64821.2025.11143833
+ 2025 IEEE International Conference on Computational Photography (ICCP), Toronto, ON, Canada, 2025, pp. 1-12
+ Mingxuan Cai, Dekel Galor, Amit Pal Singh Kohli, Jacob L. Yates, Laura Waller
+
+
+ I2I-PR: Deep Iterative Refinement for Phase Retrieval using Image-to-Image Diffusion Models
+ https://arxiv.org/abs/2507.09609
+ arXiv:2507.09609v2 Announce Type: replace-cross
+Abstract: Phase retrieval aims to recover a signal from intensity-only measurements, a fundamental problem in many fields such as imaging, holography, optical computing, crystallography, and microscopy. Although there are several well-known phase retrieval algorithms, including classical alternating projection-based solvers, the reconstruction performance often remains sensitive to initialization and measurement noise. Recently, diffusion models have gained traction in various image reconstruction tasks, yielding significant theoretical insights and practical advances. In this work, we introduce a deep iterative refinement framework that redefines the role of diffusion models in phase retrieval. Instead of generating images from random noise, our method starts with multiple physically consistent initial estimates and iteratively refines them through a learned image-to-image diffusion process. This enables data-driven phase retrieval that is both interpretable and robust, leveraging the strengths of classical solvers while mitigating their weaknesses. Furthermore, we propose an enhanced initialization strategy that integrates classical algorithms with a novel acceleration mechanism to obtain reliable initial estimates. During inference, we adopt a geometric self-ensemble strategy based on input flipping, together with output aggregation to further improve the final reconstruction quality. Comprehensive experiments demonstrate that our approach achieves substantial gains in both training efficiency and reconstruction quality, consistently outperforming classical and recent state-of-the-art methods. These results highlight the potential of diffusion-driven refinement as an effective and general framework for robust phase retrieval across diverse applications. The source code and trained models are available at https://github.com/METU-SPACE-Lab/I2I-PR-for-Phase-Retrieval
+ oai:arXiv.org:2507.09609v2
+ eess.IV
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Mehmet Onurcan Kaya, Figen S. Oktem
+
+
+ Aligning Generative Speech Enhancement with Perceptual Feedback
+ https://arxiv.org/abs/2507.09929
+ arXiv:2507.09929v2 Announce Type: replace-cross
+Abstract: Language Model (LM)-based speech enhancement (SE) has recently emerged as a promising direction, but existing approaches predominantly rely on token-level likelihood objectives that weakly reflect human perception. This mismatch limits progress, as optimizing signal accuracy does not always improve naturalness or listening comfort. We address this gap by introducing a perceptually aligned LM-based SE approach. Our method applies Direct Preference Optimization (DPO) with UTMOS, a neural MOS predictor, as a proxy for human ratings, directly steering models toward perceptually preferred outputs. This design directly connects model training to perceptual quality and is broadly applicable within LM-based SE frameworks. On the Deep Noise Suppression Challenge 2020 test sets, our approach consistently improves speech quality metrics, achieving relative gains of up to 56%. To our knowledge, this is the first integration of perceptual feedback into LM-based SE and the first application of DPO in the SE domain, establishing a new paradigm for perceptually aligned enhancement with SE.
+ oai:arXiv.org:2507.09929v2
+ eess.AS
+ cs.AI
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haoyang Li, Nana Hou, Yuchen Hu, Jixun Yao, Sabato Marco Siniscalchi, Xuyi Zhuang, Deheng Ye, Wei Yang, Eng Siong Chng
+
+
+ Shared representations in brains and models reveal a two-route cortical organization during scene perception
+ https://arxiv.org/abs/2507.13941
+ arXiv:2507.13941v2 Announce Type: replace-cross
+Abstract: The brain transforms visual inputs into high-dimensional cortical representations that support diverse cognitive and behavioral goals. Characterizing how this information is organized and routed across the human brain is essential for understanding how we process complex visual scenes. Here, we applied representational similarity analysis to 7T fMRI data collected during natural scene viewing. We quantified representational geometry shared across individuals and compared it to hierarchical features from vision and language neural networks. This analysis revealed two distinct processing routes: a ventromedial pathway specialized for scene layout and environmental context, and a lateral occipitotemporal pathway selective for animate content. Vision models aligned with shared structure in both routes, whereas language models corresponded primarily with the lateral pathway. These findings refine classical visual-stream models by characterizing scene perception as a distributed cortical network with separable representational routes for context and animate content.
+ oai:arXiv.org:2507.13941v2
+ q-bio.NC
+ cs.AI
+ cs.CV
+ eess.IV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Pablo Marcos-Manch\'on, Llu\'is Fuentemilla
+
+
+ Development and Evaluation of a Standardized Ontology for Non-Invasive Respiratory Support to Improve Interoperability and Clinical Reasoning in Acute Care
+ https://arxiv.org/abs/2507.19992
+ arXiv:2507.19992v3 Announce Type: replace-cross
+Abstract: Managing patients with respiratory failure increasingly involves noninvasive respiratory support (NIRS) strategies to support respiration, often preventing the need for invasive mechanical ventilation. However, despite the rapidly expanding use of NIRS, there remains a significant challenge to its optimal use across all medical circumstances. It lacks a unified ontological structure, complicating guidance on NIRS modalities across healthcare systems. This study introduced the NIRS ontology to support knowledge representation in acute care settings by providing a unified framework that enhances data clarity and interoperability, laying the groundwork for future clinical decision-making. We developed the NIRS ontology using the Web Ontology Language (OWL) and Protege to organize clinical concepts and relationships. To enable rule-based clinical reasoning beyond hierarchical structures, we added Semantic Web Rule Language (SWRL) rules. We evaluated logical reasoning by adding a sample of 6 patient scenarios and used SPARQL queries to retrieve and test targeted inferences. The ontology has 145 classes, 11 object properties, and 18 data properties across 949 axioms that establish concept relationships. To standardize clinical concepts, we added 392 annotations, including descriptive definitions based on controlled vocabularies. SPARQL query evaluations across clinical scenarios confirmed the ontology's ability to support rule-based reasoning and therapy recommendations, providing a foundation for consistent documentation practices, integration into clinical data models, and advanced analysis of NIRS outcomes. In conclusion, we unified NIRS concepts into an ontological framework and demonstrated its applicability through the evaluation of patient scenarios and alignment with standardized vocabularies.
+ oai:arXiv.org:2507.19992v3
+ q-bio.OT
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Md Fantacher Islam, Jarrod Mosier, Vignesh Subbian
+
+
+ Predicting Parkinson's Disease Progression Using Statistical and Neural Mixed Effects Models: Comparative Study on Longitudinal Biomarkers
+ https://arxiv.org/abs/2507.20058
+ arXiv:2507.20058v2 Announce Type: replace-cross
+Abstract: Predicting Parkinson's Disease (PD) progression is crucial, and voice biomarkers offer a non-invasive method for tracking symptom severity (UPDRS scores) through telemonitoring. Analyzing this longitudinal data is challenging due to within-subject correlations and complex, nonlinear patient-specific progression patterns. This study benchmarks linear mixed models (LMMs) against two advanced hybrid approaches: the Generalized Neural Network Mixed Model (GNMM) (Mandel 2021), which embeds a neural network within a GLMM structure, and the Neural Mixed Effects (NME) model (Wortwein 2023), allowing nonlinear subject-specific parameters throughout the network. Using the Oxford Parkinson's telemonitoring voice dataset, we evaluate these models' performance in predicting Total UPDRS to offer practical guidance for PD research and clinical applications.
+ oai:arXiv.org:2507.20058v2
+ stat.ML
+ cs.LG
+ stat.AP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ran Tong, Lanruo Wang, Tong Wang, Wei Yan
+
+
+ An Analytical and Experimental Study of Distributed Uplink Beamforming in the Presence of Carrier Frequency Offsets
+ https://arxiv.org/abs/2508.08506
+ arXiv:2508.08506v2 Announce Type: replace-cross
+Abstract: Realizing distributed multi-user beamforming (D-MUBF) in time division duplex (TDD)-based multi-user MIMO (MU-MIMO) systems faces significant challenges. One of the most fundamental challenges is achieving accurate over-the-air (OTA) timing and frequency synchronization among distributed access points (APs), particularly due to residual frequency offsets caused by local oscillator (LO) drifts. Despite decades of research on synchronization for MU-MIMO, there are only a few experimental studies that evaluate D-MUBF techniques under imperfect frequency synchronization among distributed antennas. This paper presents an analytical and experimental assessment of D-MUBF methods in the presence of frequency synchronization errors. We provide closed-form expressions for signal-to-interference-plus-noise ratio (SINR) as a function of channel characteristics and statistical properties of carrier frequency offset (CFO) among AP antennas. In addition, through experimental evaluations conducted with the RENEW massive MIMO testbed, we collected comprehensive datasets across various experimental scenarios. These datasets comprise uplink pilot samples for channel and CFO estimation, in addition to uplink multi-user data intended for analyzing D-MUBF techniques. By examining these datasets, we assess the performance of D-MUBF in the presence of CFO and compare the analytical predictions with empirical measurements. Furthermore, we make the datasets publicly available and provide insights on utilizing them for future research endeavors.
+ oai:arXiv.org:2508.08506v2
+ eess.SP
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mehdi Zafari, Divyanshu Pandey, Rahman Doost-Mohammady
+
+
+ Improving the Speaker Anonymization Evaluation's Robustness to Target Speakers with Adversarial Learning
+ https://arxiv.org/abs/2508.09803
+ arXiv:2508.09803v2 Announce Type: replace-cross
+Abstract: The current privacy evaluation for speaker anonymization often overestimates privacy when a same-gender target selection algorithm (TSA) is used, although this TSA leaks the speaker's gender and should hence be more vulnerable. We hypothesize that this occurs because the evaluation does not account for the fact that anonymized speech contains information from both the source and target speakers. To address this, we propose to add a target classifier that measures the influence of target speaker information in the evaluation, which can also be removed with adversarial learning. Experiments demonstrate that this approach is effective for multiple anonymizers, particularly when using a same-gender TSA, leading to a more reliable assessment.
+ oai:arXiv.org:2508.09803v2
+ eess.AS
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Carlos Franzreb, Arnab Das, Tim Polzehl, Sebastian M\"oller
+
+
+ Real-Time Reconstruction of 3D Bone Models via Very-Low-Dose Protocols
+ https://arxiv.org/abs/2508.13947
+ arXiv:2508.13947v2 Announce Type: replace-cross
+Abstract: Patient-specific bone models are essential for designing surgical guides and preoperative planning, as they enable the visualization of intricate anatomical structures. However, traditional CT-based approaches for creating bone models are limited to preoperative use due to the low flexibility and high radiation exposure of CT and time-consuming manual delineation. Here, we introduce Semi-Supervised Reconstruction with Knowledge Distillation (SSR-KD), a fast and accurate AI framework to reconstruct high-quality bone models from biplanar X-rays in 30 seconds, with an average error under 1.0 mm, eliminating the dependence on CT and manual work. Additionally, high tibial osteotomy simulation was performed by experts on reconstructed bone models, demonstrating that bone models reconstructed from biplanar X-rays have comparable clinical applicability to those annotated from CT. Overall, our approach accelerates the process, reduces radiation exposure, enables intraoperative guidance, and significantly improves the practicality of bone models, offering transformative applications in orthopedics.
+ oai:arXiv.org:2508.13947v2
+ eess.IV
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yiqun Lin, Haoran Sun, Yongqing Li, Rabia Aslam, Lung Fung Tse, Tiange Cheng, Chun Sing Chui, Wing Fung Yau, Victorine R. Le Meur, Meruyert Amangeldy, Kiho Cho, Yinyu Ye, James Zou, Wei Zhao, Xiaomeng Li
+
+
+ Optimal Hamiltonian for a quantum state with finite entropy
+ https://arxiv.org/abs/2508.16575
+ arXiv:2508.16575v3 Announce Type: replace-cross
+Abstract: We consider the following task: how for a given quantum state $\rho$ to find a grounded Hamiltonian $H$ satisfying the condition $\mathrm{Tr} H\rho\leq E_0<+\infty$ in such a way that the von Neumann entropy of the Gibbs state $\gamma_H(E)$ corresponding to a given energy $E>0$ be as small as possible.
+ We show that for any mixed state $\rho$ with finite entropy and any $E>0$ there exists a solution $H(\rho,E_0,E)$ of the above problem (unique in the non-degenerate case) which we call optimal Hamiltonian for the state $\rho$. Explicit expressions for $H(\rho,E_0,E)$, $\gamma_{H(\rho,E_0,E)}(E)$ and $S(\gamma_{H(\rho,E_0,E)}(E))$ are obtained. Analytical properties of the function $E\mapsto S(\gamma_{H(\rho,E_0,E)}(E))$ are explored. Several examples are considered.
+ We also consider a modification of the above task in which arbitrary Hamiltonians (not necessarily grounded) are considered.
+ The basic application that motivated this research is described. As examples, new semicontinuity bounds for the von Neumann entropy and for the entanglement of formation are obtained and briefly discussed (with the intention to give a detailed analysis in a separate article).
+ oai:arXiv.org:2508.16575v3
+ quant-ph
+ cs.IT
+ math-ph
+ math.IT
+ math.MP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ M. E. Shirokov
+
+
+ Generalization vs. Memorization in Autoregressive Deep Learning: Or, Examining Temporal Decay of Gradient Coherence
+ https://arxiv.org/abs/2509.00024
+ arXiv:2509.00024v2 Announce Type: replace-cross
+Abstract: Foundation models trained as autoregressive PDE surrogates hold significant promise for accelerating scientific discovery through their capacity to both extrapolate beyond training regimes and efficiently adapt to downstream tasks despite a paucity of examples for fine-tuning. However, reliably achieving genuine generalization - a necessary capability for producing novel scientific insights and robustly performing during deployment - remains a critical challenge. Establishing whether or not these requirements are met demands evaluation metrics capable of clearly distinguishing genuine model generalization from mere memorization.
+ We apply the influence function formalism to systematically characterize how autoregressive PDE surrogates assimilate and propagate information derived from diverse physical scenarios, revealing fundamental limitations of standard models and training routines in addition to providing actionable insights regarding the design of improved surrogates.
+ oai:arXiv.org:2509.00024v2
+ physics.comp-ph
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ James Amarel, Nicolas Hengartner, Robyn Miller, Kamaljeet Singh, Siddharth Mansingh, Arvind Mohan, Benjamin Migliori, Emily Casleton, Alexei Skurikhin, Earl Lawrence, Gerd J. Kunde
+
+
+ ProtSAE: Disentangling and Interpreting Protein Language Models via Semantically-Guided Sparse Autoencoders
+ https://arxiv.org/abs/2509.05309
+ arXiv:2509.05309v2 Announce Type: replace-cross
+Abstract: Sparse Autoencoder (SAE) has emerged as a powerful tool for mechanistic interpretability of large language models. Recent works apply SAE to protein language models (PLMs), aiming to extract and analyze biologically meaningful features from their latent spaces. However, SAE suffers from semantic entanglement, where individual neurons often mix multiple nonlinear concepts, making it difficult to reliably interpret or manipulate model behaviors. In this paper, we propose a semantically-guided SAE, called ProtSAE. Unlike existing SAEs, which require annotation datasets to filter and interpret activations, we guide semantic disentanglement during training using both annotation datasets and domain knowledge to mitigate the effects of entangled attributes. We design interpretability experiments showing that ProtSAE learns more biologically relevant and interpretable hidden features compared to previous methods. Performance analyses further demonstrate that ProtSAE maintains high reconstruction fidelity while achieving better results in interpretable probing. We also show the potential of ProtSAE in steering PLMs for downstream generation tasks.
+ oai:arXiv.org:2509.05309v2
+ q-bio.QM
+ cs.AI
+ cs.CL
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Xiangyu Liu, Haodi Lei, Yi Liu, Yang Liu, Wei Hu
+
+
+ Quantum spatial best-arm identification via quantum walks
+ https://arxiv.org/abs/2509.05890
+ arXiv:2509.05890v2 Announce Type: replace-cross
+Abstract: Quantum reinforcement learning has emerged as a framework combining quantum computation with sequential decision-making, and applications to the multi-armed bandit (MAB) problem have been reported. The graph bandit problem extends the MAB setting by introducing spatial constraints, yet quantum approaches remain limited. We propose a quantum algorithmic framework for best-arm identification in graph bandits, termed Quantum Spatial Best-Arm Identification (QSBAI), which is applicable to general graph structures. The method employs quantum walks to encode superpositions over graph-constrained actions, extending amplitude amplification and generalizing the Quantum BAI algorithm via Szegedy's walk framework. This establishes a link between Grover-type search and reinforcement learning tasks with structural restrictions. We focus our theoretical analysis on complete and bipartite graphs, deriving the maximal success probability of identifying the best arm and the time step at which it is achieved. Our results highlight the potential of quantum walks to accelerate exploration in constrained environments and extend the applicability of quantum algorithms for decision-making.
+ oai:arXiv.org:2509.05890v2
+ quant-ph
+ cs.AI
+ cs.LG
+ math-ph
+ math.MP
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Tomoki Yamagami, Etsuo Segawa, Takatomo Mihana, Andr\'e R\"ohm, Atsushi Uchida, Ryoichi Horisaki
+
+
+ Robustness of quantum algorithms: Worst-case fidelity bounds and implications for design
+ https://arxiv.org/abs/2509.08481
+ arXiv:2509.08481v2 Announce Type: replace-cross
+Abstract: Errors occurring on noisy hardware pose a key challenge to reliable quantum computing. Existing techniques such as error correction, mitigation, or suppression typically separate the error handling from the algorithm analysis and design. In this paper, we develop an alternative, algorithm-centered framework for understanding and improving the robustness against errors. For a given quantum algorithm and error model, we derive worst-case fidelity bounds which can be efficiently computed to certify the robustness. We consider general error models including coherent and (Markovian) incoherent errors and allowing for set-based error descriptions to address uncertainty or time-dependence in the errors. Our results give rise to guidelines for robust algorithm design and compilation by optimizing our theoretical robustness measure. We demonstrate the practicality of the framework with numerical results on algorithm analysis and robust optimization, including the robustness analysis of a 50-qubit modular adder circuit.
+ oai:arXiv.org:2509.08481v2
+ quant-ph
+ cs.SY
+ eess.SY
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Julian Berberich, Tobias Fellner, Robert L. Kosut, Christian Holm
+
+
+ DAIEN-TTS: Disentangled Audio Infilling for Environment-Aware Text-to-Speech Synthesis
+ https://arxiv.org/abs/2509.14684
+ arXiv:2509.14684v2 Announce Type: replace-cross
+Abstract: This paper presents DAIEN-TTS, a zero-shot text-to-speech (TTS) framework that enables ENvironment-aware synthesis through Disentangled Audio Infilling. By leveraging separate speaker and environment prompts, DAIEN-TTS allows independent control over the timbre and the background environment of the synthesized speech. Built upon F5-TTS, the proposed DAIEN-TTS first incorporates a pretrained speech-environment separation (SES) module to disentangle the environmental speech into mel-spectrograms of clean speech and environment audio. Two random span masks of varying lengths are then applied to both mel-spectrograms, which, together with the text embedding, serve as conditions for infilling the masked environmental mel-spectrogram, enabling the simultaneous continuation of personalized speech and time-varying environmental audio. To further enhance controllability during inference, we adopt dual classifier-free guidance (DCFG) for the speech and environment components and introduce a signal-to-noise ratio (SNR) adaptation strategy to align the synthesized speech with the environment prompt. Experimental results demonstrate that DAIEN-TTS generates environmental personalized speech with high naturalness, strong speaker similarity, and high environmental fidelity.
+ oai:arXiv.org:2509.14684v2
+ eess.AS
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ye-Xin Lu, Yu Gu, Kun Wei, Hui-Peng Du, Yang Ai, Zhen-Hua Ling
+
+
+ QASTAnet: A DNN-based Quality Metric for Spatial Audio
+ https://arxiv.org/abs/2509.16715
+ arXiv:2509.16715v2 Announce Type: replace-cross
+Abstract: In the development of spatial audio technologies, reliable and shared methods for evaluating audio quality are essential. Listening tests are currently the standard but remain costly in terms of time and resources. Several models predicting subjective scores have been proposed, but they do not generalize well to real-world signals. In this paper, we propose QASTAnet (Quality Assessment for SpaTial Audio network), a new metric based on a deep neural network, specialized in spatial audio (ambisonics and binaural). As training data is scarce, we aim for the model to be trainable with a small amount of data. To do so, we propose to rely on expert modeling of the low-level auditory system and use a neural network to model the high-level cognitive function of quality judgement. We compare its performance to two reference metrics on a wide range of content types (speech, music, ambiance, anechoic, reverberated), focusing on codec artifacts. Results demonstrate that QASTAnet overcomes the aforementioned limitations of the existing methods. The strong correlation between the proposed metric's predictions and subjective scores makes it a good candidate for comparing codecs during their development.
+ oai:arXiv.org:2509.16715v2
+ eess.AS
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Adrien Llave, Emma Granier, Gr\'egory Pallone
+
+
+ SoundCompass: Navigating Target Sound Extraction With Effective Directional Clue Integration In Complex Acoustic Scenes
+ https://arxiv.org/abs/2509.18561
+ arXiv:2509.18561v2 Announce Type: replace-cross
+Abstract: Recent advances in target sound extraction (TSE) utilize directional clues derived from direction of arrival (DoA), which represent an inherent spatial property of sound available in any acoustic scene. However, previous DoA-based methods rely on hand-crafted features or discrete encodings, which lose fine-grained spatial information and limit adaptability. We propose SoundCompass, an effective directional clue integration framework centered on a Spectral Pairwise INteraction (SPIN) module that captures cross-channel spatial correlations in the complex spectrogram domain to preserve full spatial information in multichannel signals. The input feature expressed in terms of spatial correlations is fused with a DoA clue represented as spherical harmonics (SH) encoding. The fusion is carried out across overlapping frequency subbands, inheriting the benefits reported in the previous band-split architectures. We also incorporate the iterative refinement strategy, chain-of-inference (CoI), in the TSE framework, which recursively fuses DoA with sound event activation estimated from the previous inference stage. Experiments demonstrate that SoundCompass, combining SPIN, SH embedding, and CoI, robustly extracts target sources across diverse signal classes and spatial configurations.
+ oai:arXiv.org:2509.18561v2
+ eess.AS
+ cs.AI
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dayun Choi, Jung-Woo Choi
+
+
+ Objective Evaluation of Prosody and Intelligibility in Speech Synthesis via Conditional Prediction of Discrete Tokens
+ https://arxiv.org/abs/2509.20485
+ arXiv:2509.20485v2 Announce Type: replace-cross
+Abstract: Objective evaluation of synthesized speech is critical for advancing speech generation systems, yet existing metrics for intelligibility and prosody remain limited in scope and weakly correlated with human perception. Word Error Rate (WER) provides only a coarse text-based measure of intelligibility, while F0-RMSE and related pitch-based metrics offer a narrow, reference-dependent view of prosody. To address these limitations, we propose TTScore, a targeted and reference-free evaluation framework based on conditional prediction of discrete speech tokens. TTScore employs two sequence-to-sequence predictors conditioned on input text: TTScore-int, which measures intelligibility through content tokens, and TTScore-pro, which evaluates prosody through prosody tokens. For each synthesized utterance, the predictors compute the likelihood of the corresponding token sequences, yielding interpretable scores that capture alignment with intended linguistic content and prosodic structure. Experiments on the SOMOS, VoiceMOS, and TTSArena benchmarks demonstrate that TTScore-int and TTScore-pro provide reliable, aspect-specific evaluation and achieve stronger correlations with human judgments of overall quality than existing intelligibility and prosody-focused metrics.
+ oai:arXiv.org:2509.20485v2
+ eess.AS
+ cs.LG
+ cs.SD
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/OJSP.2026.3653666
+ Ismail Rasim Ulgen, Zongyang Du, Junchen Lu, Philipp Koehn, Berrak Sisman
+
+
+ Automating Sensor Characterization with Bayesian Optimization
+ https://arxiv.org/abs/2509.21661
+ arXiv:2509.21661v2 Announce Type: replace-cross
+Abstract: The development of novel instrumentation requires an iterative cycle with three stages: design, prototyping, and testing. Recent advancements in simulation and nanofabrication techniques have significantly accelerated the design and prototyping phases. Nonetheless, detector characterization continues to be a major bottleneck in device development. During the testing phase, a significant time investment is required to characterize the device in different operating conditions and find optimal operating parameters. The total effort spent on characterization and parameter optimization can occupy a year or more of an expert's time. In this work, we present a novel technique for automated sensor characterization that aims to accelerate the testing stage of the development cycle. This technique leverages closed-loop Bayesian optimization (BO), using real-time measurements to guide parameter selection and identify optimal operating states. We demonstrate the method with a novel low-noise CCD, showing that the machine learning-driven tool can efficiently characterize and optimize operation of the sensor in a couple of days without supervision of a device expert.
+ oai:arXiv.org:2509.21661v2
+ physics.ins-det
+ astro-ph.IM
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ J. Cuevas-Zepeda, C. Chavez, J. Estrada, J. Noonan, B. D. Nord, N. Saffold, M. Sofo-Haro, R. Spinola e Castro, S. Trivedi
+
+
+ Multidata Causal Discovery for Statistical Hurricane Intensity Forecasting
+ https://arxiv.org/abs/2510.02050
+ arXiv:2510.02050v2 Announce Type: replace-cross
+Abstract: Improving statistical forecasts of Tropical Cyclone (TC) intensity is limited by complex nonlinear interactions and difficulty in identifying relevant predictors. Conventional methods prioritize correlation or fit, often overlooking confounding variables and limiting generalizability to unseen TCs. To address this, we leverage a multidata causal discovery framework with a replicated dataset based on Statistical Hurricane Intensity Prediction Scheme (SHIPS) using ERA5 meteorological reanalysis. We conduct multiple experiments to identify and select predictors causally linked to TC intensity changes. We then train multiple linear regression models to compare causal feature selection with no selection, correlation, and random forest feature importance across five forecast lead times from 1 to 5 days (24 to 120 hours). Causal feature selection consistently outperforms on unseen test cases, especially for lead times shorter than 3 days. The causal features primarily include vertical shear, mid-tropospheric potential vorticity and surface moisture conditions, which are physically significant yet often underutilized in TC intensity predictions. We build an extended predictor set (SHIPS plus) by adding selected features to the standard SHIPS predictors. SHIPS plus yields increased short-term predictive skill at lead times of 24, 48, and 72 hours. Adding nonlinearity using multilayer perceptron further extends skill to longer lead times, despite our framework being purely regional and not requiring global forecast data. Operational SHIPS tests confirm that three of the six added causally discovered predictors improve forecast skill, with the largest gains at longer lead times. Our results demonstrate that causal discovery improves TC intensity prediction and pave the way toward more empirical forecasts.
+ oai:arXiv.org:2510.02050v2
+ stat.AP
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Saranya Ganesh S, Frederick Iat-Hin Tam, Milton S. Gomez, Marie McGraw, Mark DeMaria, Kate Musgrave, Jakob Runge, Tom Beucler
+
+
+ Perspectives on Stochastic Localization
+ https://arxiv.org/abs/2510.04460
+ arXiv:2510.04460v2 Announce Type: replace-cross
+Abstract: We survey different perspectives on the stochastic localization process of Eldan, a powerful construction that has had many exciting recent applications in high-dimensional probability and algorithm design. Unlike prior surveys on this topic, our focus is on giving a self-contained presentation of all known alternative constructions of Eldan's stochastic localization, with an emphasis on connections between different constructions. Our hope is that by collecting these perspectives, some of which had primarily arisen within a particular community (e.g., probability theory, theoretical computer science, information theory, or machine learning), we can broaden the accessibility of stochastic localization, and ease its future use.
+ oai:arXiv.org:2510.04460v2
+ math.PR
+ cs.DS
+ cs.LG
+ math.ST
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Bobby Shi, Kevin Tian, Matthew S. Zhang
+
+
+ PAC Learnability in the Presence of Performativity
+ https://arxiv.org/abs/2510.08335
+ arXiv:2510.08335v2 Announce Type: replace-cross
+Abstract: Following the wide-spread adoption of machine learning models in real-world applications, the phenomenon of performativity, i.e. model-dependent shifts in the test distribution, becomes increasingly prevalent. Unfortunately, since models are usually trained solely based on samples from the original (unshifted) distribution, this performative shift may lead to decreased test-time performance. In this paper, we study the question of whether and when performative binary classification problems are learnable, via the lens of the classic PAC (Probably Approximately Correct) learning framework. We motivate several performative scenarios, accounting in particular for linear shifts in the label distribution, as well as for more general changes in both the labels and the features. We construct a performative empirical risk function, which depends only on data from the original distribution and on the type of performative effect, yet is an unbiased estimate of the true risk of a classifier on the shifted distribution. Minimizing this notion of performative risk allows us to show that any PAC-learnable hypothesis space in the standard binary classification setting remains PAC-learnable for the considered performative scenarios. We also conduct an extensive experimental evaluation of our performative risk minimization method and showcase benefits on synthetic and real data.
+ oai:arXiv.org:2510.08335v2
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Ivan Kirev, Lyuben Baltadzhiev, Nikola Konstantinov
+
+
+ Calibrating Generative Models to Distributional Constraints
+ https://arxiv.org/abs/2510.10020
+ arXiv:2510.10020v3 Announce Type: replace-cross
+Abstract: Generative models frequently suffer miscalibration, wherein statistics of the sampling distribution such as class probabilities deviate from desired values. We frame calibration as a constrained optimization problem and seek the closest model in Kullback-Leibler divergence satisfying calibration constraints. To address the intractability of imposing these constraints exactly, we introduce two surrogate objectives for fine-tuning: (1) the relax loss, which replaces the constraint with a miscalibration penalty, and (2) the reward loss, which converts calibration into a reward fine-tuning problem. We demonstrate that these approaches substantially reduce calibration error across hundreds of simultaneous constraints and models with up to one billion parameters, spanning applications in protein design, image generation, and language modeling.
+ oai:arXiv.org:2510.10020v3
+ stat.ML
+ cs.LG
+ q-bio.BM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Henry D. Smith, Nathaniel L. Diamant, Brian L. Trippe
+
+
+ Geopolitics, Geoeconomics and Risk: A Machine Learning Approach
+ https://arxiv.org/abs/2510.12416
+ arXiv:2510.12416v4 Announce Type: replace-cross
+Abstract: We introduce a novel high-frequency daily panel dataset of both markets and news-based indicators -- including Geopolitical Risk, Economic Policy Uncertainty, Trade Policy Uncertainty, and Political Sentiment -- for 42 countries across both emerging and developed markets. Using this dataset, we study how sentiment dynamics shape sovereign risk, measured by Credit Default Swap (CDS) spreads, and evaluate their forecasting value relative to traditional drivers such as global monetary policy and market volatility. Our horse-race analysis of forecasting models demonstrates that incorporating news-based indicators significantly enhances predictive accuracy and enriches the analysis, with non-linear machine learning methods -- particularly Random Forests -- delivering the largest gains. Our analysis reveals that while global financial variables remain the dominant drivers of sovereign risk, geopolitical risk and economic policy uncertainty also play a meaningful role. Crucially, their effects are amplified through non-linear interactions with global financial conditions. Finally, we document pronounced regional heterogeneity, as certain asset classes and emerging markets exhibit heightened sensitivity to shocks in policy rates, global financial volatility, and geopolitical risk.
+ oai:arXiv.org:2510.12416v4
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Alvaro Ortiz, Tomasa Rodrigo, Pablo Saborido
+
+
+ TVMC: Time-Varying Mesh Compression via Multi-Stage Anchor Mesh Generation
+ https://arxiv.org/abs/2510.22646
+ arXiv:2510.22646v2 Announce Type: replace-cross
+Abstract: Time-varying meshes, characterized by dynamic connectivity and varying vertex counts, hold significant promise for applications such as augmented reality. However, their practical utilization remains challenging due to the substantial data volume required for high-fidelity representation. While various compression methods attempt to leverage temporal redundancy between consecutive mesh frames, most struggle with topological inconsistency and motion-induced artifacts. To address these issues, we propose Time-Varying Mesh Compression (TVMC), a novel framework built on multi-stage coarse-to-fine anchor mesh generation for inter-frame prediction. Specifically, the anchor mesh is progressively constructed in three stages: initial, coarse, and fine. The initial anchor mesh is obtained through fast topology alignment to exploit temporal coherence. A Kalman filter-based motion estimation module then generates a coarse anchor mesh by accurately compensating inter-frame motions. Subsequently, a Quadric Error Metric-based refinement step optimizes vertex positions to form a fine anchor mesh with improved geometric fidelity. Based on the refined anchor mesh, the inter-frame motions relative to the reference base mesh are encoded, while the residual displacements between the subdivided fine anchor mesh and the input mesh are adaptively quantized and compressed. This hierarchical strategy preserves consistent connectivity and high-quality surface approximation, while achieving an efficient and compact representation of dynamic geometry. Extensive experiments on standard MPEG dynamic mesh sequences demonstrate that TVMC achieves state-of-the-art compression performance. Compared to the latest V-DMC standard, it delivers a significant BD-rate gain of 10.2% ~ 16.9%, while preserving high reconstruction quality. The code is available at https://github.com/H-Huang774/TVMC.
+ oai:arXiv.org:2510.22646v2
+ eess.IV
+ cs.MM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ He Huang, Qi Yang, Yiling Xu, Zhu Li, Jenq-Neng Hwang
+
+
+ scMRDR: A scalable and flexible framework for unpaired single-cell multi-omics data integration
+ https://arxiv.org/abs/2510.24987
+ arXiv:2510.24987v2 Announce Type: replace-cross
+Abstract: Advances in single-cell sequencing have enabled high-resolution profiling of diverse molecular modalities, while integrating unpaired multi-omics single-cell data remains challenging. Existing approaches either rely on pair information or prior correspondences, or require computing a global pairwise coupling matrix, limiting their scalability and flexibility. In this paper, we introduce a scalable and flexible generative framework called single-cell Multi-omics Regularized Disentangled Representations (scMRDR) for unpaired multi-omics integration. Specifically, we disentangle each cell's latent representations into modality-shared and modality-specific components using a well-designed $\beta$-VAE architecture, which are augmented with isometric regularization to preserve intra-omics biological heterogeneity, adversarial objective to encourage cross-modal alignment, and masked reconstruction loss strategy to address the issue of missing features across modalities. Our method achieves excellent performance on benchmark datasets in terms of batch correction, modality alignment, and biological signal preservation. Crucially, it scales effectively to large-scale datasets and supports integration of more than two omics, offering a powerful and flexible solution for large-scale multi-omics data integration and downstream biological discovery.
+ oai:arXiv.org:2510.24987v2
+ q-bio.QM
+ cs.LG
+ q-bio.GN
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianle Sun, Chaoqi Liang, Ran Wei, Peng Zheng, Lei Bai, Wanli Ouyang, Hongliang Yan, Peng Ye
+
+
+ Separating QMA from QCMA with a classical oracle
+ https://arxiv.org/abs/2511.09551
+ arXiv:2511.09551v2 Announce Type: replace-cross
+Abstract: We construct a classical oracle proving that, in a relativized setting, the set of languages decidable by an efficient quantum verifier with a quantum witness (QMA) is strictly bigger than those decidable with access only to a classical witness (QCMA). The separating classical oracle we construct is for a decision problem we coin spectral Forrelation -- the oracle describes two subsets of the boolean hypercube, and the computational task is to decide if there exists a quantum state whose standard basis measurement distribution is well supported on one subset while its Fourier basis measurement distribution is well supported on the other subset. This is equivalent to estimating the spectral norm of a "Forrelation" matrix between two sets that are accessible through membership queries.
+ Our lower bound derives from a simple observation that a query algorithm with a classical witness can be run multiple times to generate many samples from a distribution, while a quantum witness is a "use once" object. This observation allows us to reduce proving a QCMA lower bound to proving a sampling hardness result which does not simultaneously prove a QMA lower bound. To prove said sampling hardness result for QCMA, we observe that quantum access to the oracle can be compressed by expressing the problem in terms of bosons -- a novel "second quantization" perspective on compressed oracle techniques, which may be of independent interest. Using this compressed perspective on the sampling problem, we prove the sampling hardness result, completing the proof.
+ oai:arXiv.org:2511.09551v2
+ quant-ph
+ cs.CC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ John Bostanci, Jonas Haferkamp, Chinmay Nirkhe, Mark Zhandry
+
+
+ A Spatial Array for Spectrally Agile Wireless Processing
+ https://arxiv.org/abs/2512.04182
+ arXiv:2512.04182v2 Announce Type: replace-cross
+Abstract: Massive MIMO is a cornerstone of next-generation wireless communication, offering significant gains in capacity, reliability, and energy efficiency. However, to meet emerging demands such as high-frequency operation, wide bandwidths, co-existence, integrated sensing, and resilience to dynamic interference, future systems must exhibit both scalability and spectral agility. These requirements place increasing pressure on the underlying processing hardware to be both efficient and reconfigurable. This paper proposes a custom-designed spatial array architecture that serves as a reconfigurable, general-purpose core optimized for a class of wireless kernels that commonly arise in diverse communications and sensing tasks. The proposed spatial array is evaluated against specialized cores for each kernel using High-Level Synthesis (HLS). Both the reconfigurable and specialized designs are synthesized in a 32 nm process to assess latency, throughput, area, and power in realistic processes. The results identify conditions under which general-purpose systolic architectures can approach the efficiency of specialized cores, thereby paving the way toward more scalable and agile systems.
+ oai:arXiv.org:2512.04182v2
+ eess.SP
+ cs.AR
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ali Rasteh, Andrew Hennessee, Ishaan Shivhare, Siddharth Garg, Sundeep Rangan, Brandon Reagen
+
+
+ Heuristics for Combinatorial Optimization via Value-based Reinforcement Learning: A Unified Framework and Analysis
+ https://arxiv.org/abs/2512.08601
+ arXiv:2512.08601v2 Announce Type: replace-cross
+Abstract: Since the 1990s, considerable empirical work has been carried out to train statistical models, such as neural networks (NNs), as learned heuristics for combinatorial optimization (CO) problems. When successful, such an approach eliminates the need for experts to design heuristics per problem type. Due to their structure, many hard CO problems are amenable to treatment through reinforcement learning (RL). Indeed, we find a wealth of literature training NNs using value-based, policy gradient, or actor-critic approaches, with promising results, both in terms of empirical optimality gaps and inference runtimes. Nevertheless, there has been a paucity of theoretical work undergirding the use of RL for CO problems. To this end, we introduce a unified framework to model CO problems through Markov decision processes (MDPs) and solve them using RL techniques. We provide easy-to-test assumptions under which CO problems can be formulated as equivalent undiscounted MDPs that provide optimal solutions to the original CO problems. Moreover, we establish conditions under which value-based RL techniques converge to approximate solutions of the CO problem with a guarantee on the associated optimality gap. Our convergence analysis provides: (1) a sufficient rate of increase in batch size and projected gradient descent steps at each RL iteration; (2) the resulting optimality gap in terms of problem parameters and targeted RL accuracy; and (3) the importance of a choice of state-space embedding. Together, our analysis illuminates the success (and limitations) of the celebrated deep Q-learning algorithm in this problem context.
+ oai:arXiv.org:2512.08601v2
+ stat.ML
+ cs.LG
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Orit Davidovich, Shimrit Shtern, Segev Wasserkrug, Nimrod Megiddo
+
+
+ An Elementary Proof of the Near Optimality of LogSumExp Smoothing
+ https://arxiv.org/abs/2512.10825
+ arXiv:2512.10825v2 Announce Type: replace-cross
+Abstract: We consider the design of smoothings of the (coordinate-wise) max function in $\mathbb{R}^d$ in the infinity norm. The LogSumExp function $f(x)=\ln(\sum^d_i\exp(x_i))$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We provide an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must differ by at least $\sim 0.8145\ln(d)$. Hence, LogSumExp is optimal up to small constant factors. However, in small dimensions, we provide stronger, exactly optimal smoothings attaining our lower bound, showing that the entropy-based LogSumExp approach to smoothing is not exactly optimal.
+ oai:arXiv.org:2512.10825v2
+ math.ST
+ cs.LG
+ math.OC
+ stat.TH
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Thabo Samakhoana, Benjamin Grimmer
+
+
+ EXFormer: A Multi-Scale Trend-Aware Transformer with Dynamic Variable Selection for Foreign Exchange Returns Prediction
+ https://arxiv.org/abs/2512.12727
+ arXiv:2512.12727v2 Announce Type: replace-cross
+Abstract: Accurately forecasting daily exchange rate returns represents a longstanding challenge in international finance, as the exchange rate returns are driven by a multitude of correlated market factors and exhibit high-frequency fluctuations. This paper proposes EXFormer, a novel Transformer-based architecture specifically designed for forecasting the daily exchange rate returns. We introduce a multi-scale trend-aware self-attention mechanism that employs parallel convolutional branches with differing receptive fields to align observations on the basis of local slopes, preserving long-range dependencies while remaining sensitive to regime shifts. A dynamic variable selector assigns time-varying importance weights to 28 exogenous covariates related to exchange rate returns, providing pre-hoc interpretability. An embedded squeeze-and-excitation block recalibrates channel responses to emphasize informative features and depress noise in the forecasting. Using the daily data for EUR/USD, USD/JPY, and GBP/USD, we conduct out-of-sample evaluations across five different sliding windows. EXFormer consistently outperforms the random walk and other baselines, improving directional accuracy by a statistically significant margin of up to 8.5--22.8%. In nearly one year of trading backtests, the model converts these gains into cumulative returns of 18%, 25%, and 18% for the three pairs, with Sharpe ratios exceeding 1.8. When conservative transaction costs and slippage are accounted for, EXFormer retains cumulative returns of 7%, 19%, and 9%, while the other baselines achieve negative returns. The robustness checks further confirm the model's superiority under high-volatility and bear-market regimes. EXFormer furnishes both economically valuable forecasts and transparent, time-varying insights into the drivers of exchange rate dynamics for international investors, corporations, and central bank practitioners.
+ oai:arXiv.org:2512.12727v2
+ q-fin.CP
+ cs.CE
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dinggao Liu, Robert \'Slepaczuk, Zhenpeng Tang
+
+
+ A Disproof of Large Language Model Consciousness: The Necessity of Continual Learning for Consciousness
+ https://arxiv.org/abs/2512.12802
+ arXiv:2512.12802v3 Announce Type: replace-cross
+Abstract: Scientific theories of consciousness should be falsifiable and non-trivial. Recent research has given us formal tools to analyze these requirements of falsifiability and non-triviality for theories of consciousness. Surprisingly, many contemporary theories of consciousness fail to pass this bar, including theories based on causal structure but also (as I demonstrate) theories based on function. Herein, I show these requirements of falsifiability and non-triviality especially constrain the potential consciousness of contemporary Large Language Models (LLMs) because of their proximity to systems that are equivalent to LLMs in terms of input/output function; yet, for these functionally equivalent systems, there cannot be any falsifiable and non-trivial theory of consciousness that judges them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result, which is that theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: If continual learning is linked to consciousness in humans, the current limitations of LLMs (which do not continually learn) are intimately tied to their lack of consciousness.
+ oai:arXiv.org:2512.12802v3
+ q-bio.NC
+ cs.AI
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Erik Hoel
+
+
+ Improving the Accuracy of Amortized Model Comparison with Self-Consistency
+ https://arxiv.org/abs/2512.14308
+ arXiv:2512.14308v2 Announce Type: replace-cross
+Abstract: Amortized Bayesian inference (ABI) offers fast, scalable approximations to posterior densities by training neural surrogates on data simulated from the statistical model. However, ABI methods are highly sensitive to model misspecification: when observed data fall outside the training distribution (the generative scope of the statistical models), neural surrogates can behave unpredictably. This poses a challenge in a model comparison setting, where multiple statistical models are considered, of which at least some are misspecified. Recent work on self-consistency (SC) provides a promising remedy to this issue, accessible even for empirical data (without ground-truth labels). In this work, we investigate how SC can improve amortized model comparison conceptualized in four different ways. Across two synthetic and two real-world case studies, we find that approaches for model comparison that estimate marginal likelihoods through approximate parameter posteriors consistently outperform methods that directly approximate model evidence or posterior model probabilities. SC training improves robustness when the likelihood is available, even under severe model misspecification. The benefits of SC for methods without access to analytic likelihoods are more limited and inconsistent. Our results suggest practical guidance for reliable amortized Bayesian model comparison: prefer parameter posterior-based methods and augment them with SC training on empirical datasets to mitigate extrapolation bias under model misspecification.
+ oai:arXiv.org:2512.14308v2
+ stat.ML
+ cs.LG
+ stat.CO
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-sa/4.0/
+ \v{S}imon Kucharsk\'y, Aayush Mishra, Daniel Habermann, Stefan T. Radev, Paul-Christian B\"urkner
+
+
+ Improved Lower Bounds for QAC0
+ https://arxiv.org/abs/2512.14643
+ arXiv:2512.14643v2 Announce Type: replace-cross
+Abstract: In this work, we prove the strongest known lower bounds for QAC$^0$, allowing polynomially many gates and ancillae. Our main results show that:
+ (1) Depth-3 QAC$^0$ circuits cannot compute PARITY, and require $\Omega(\exp(\sqrt{n}))$ gates to compute MAJORITY.
+ (2) Depth-2 circuits cannot approximate high-influence Boolean functions (e.g., PARITY) with non-negligible advantage, regardless of size.
+ We develop new classical simulation techniques for QAC$^0$ to obtain our depth-3 bounds. In these results, we relax the output requirement of the quantum circuit to a single bit, making our depth $2$ approximation bound stronger than the previous best bound of Rosenthal (2021). This also enables us to draw natural comparisons with classical AC$^0$ circuits, which can compute PARITY exactly in depth $2$ (exp size). Our techniques further suggest that, for Boolean total functions, constant-depth quantum circuits do not necessarily provide more power than their classical counterparts. Our third result shows that depth $2$ QAC$^0$ circuits, regardless of size, cannot exactly synthesize an $n$-target nekomata state (a state whose synthesis is directly related to the computation of PARITY). This complements the depth $2$ exponential size upper bound of Rosenthal (2021) for approximating nekomatas (which is used as a sub-circuit in the only known constant depth PARITY upper bound). Finally, we argue that approximating PARITY in QAC$^0$, with significantly better than 1/poly(n) advantage on average, is just as hard as computing it exactly. Thus, extending our techniques to higher depths would also rule out approximate circuits for PARITY and related problems.
+ oai:arXiv.org:2512.14643v2
+ quant-ph
+ cs.CC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Malvika Raj Joshi, Avishay Tal, Francisca Vasconcelos, John Wright
+
+
+ Shuttling Compiler for Trapped-Ion Quantum Computers Based on Large Language Models
+ https://arxiv.org/abs/2512.18021
+ arXiv:2512.18021v2 Announce Type: replace-cross
+Abstract: Trapped-ion quantum computers based on segmented traps rely on shuttling operations to establish long-range connectivity between sub-registers. Qubit routing dynamically reconfigures qubit positions so that all qubits involved in a gate operation are co-located within the same segment, a task whose complexity increases with system size. To address this challenge, we propose a layout-independent compilation strategy based on large language models (LLMs). Specifically, we fine-tune pretrained LLMs to generate the required shuttling operations. We evaluate this approach on linear and branched one-dimensional architectures using quantum circuits of up to $16$ qubits. Our results show that the fine-tuned LLMs generate valid shuttling schedules and, in some cases, outperform previous shuttling compilers by requiring approximately $15\,\%$ less shuttle overhead. However, results degrade as the algorithms increase in width and depth. In future work, we plan to improve LLM-based shuttle compilation by enhancing our training pipeline using Direct Preference Optimization (DPO) and Gradient Regularized Policy Optimization (GRPO).
+ oai:arXiv.org:2512.18021v2
+ quant-ph
+ cs.ET
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Fabian Kreppel, Reza Salkhordeh, Ferdinand Schmidt-Kaler, Andr\'e Brinkmann
+
+
+ Quantitative Understanding of PDF Fits and their Uncertainties
+ https://arxiv.org/abs/2512.24116
+ arXiv:2512.24116v2 Announce Type: replace-cross
+Abstract: Parton Distribution Functions (PDFs) play a central role in describing experimental data at colliders and provide insight into the structure of nucleons. As the LHC enters an era of high-precision measurements, a robust PDF determination with a reliable uncertainty quantification has become mandatory in order to match the experimental precision. The NNPDF collaboration has pioneered the use of Machine Learning (ML) techniques for PDF determinations, using Neural Networks (NNs) to parametrise the unknown PDFs in a flexible and unbiased way. The NNs are then trained on experimental data by means of stochastic gradient descent algorithms. The statistical robustness of the results is validated by extensive closure tests using synthetic data. In this work, we develop a theoretical framework based on the Neural Tangent Kernel (NTK) to analyse the training dynamics of neural networks. This approach allows us to derive, under precise assumptions, an analytical description of the neural network evolution during training, enabling a quantitative understanding of the training process. Having an analytical handle on the training dynamics allows us to clarify the role of the NN architecture and the impact of the experimental data in a transparent way. Similarly, we are able to describe the evolution of the covariance of the NN output during training, providing a quantitative description of how uncertainties are propagated from the data to the fitted function. While our results are not a substitute for PDF fitting, they do provide a powerful diagnostic tool to assess the robustness of current fitting methodologies. Beyond its relevance for particle physics phenomenology, our analysis of PDF determinations provides a testbed to apply theoretical ideas about the learning process developed in the ML community.
+ oai:arXiv.org:2512.24116v2
+ hep-ph
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Amedeo Chiefa, Luigi Del Debbio, Richard Kenway
+
+
+ Edit2Restore: Few-Shot Image Restoration via Parameter-Efficient Adaptation of Pre-trained Editing Models
+ https://arxiv.org/abs/2601.03391
+ arXiv:2601.03391v2 Announce Type: replace-cross
+Abstract: Image restoration has traditionally required training specialized models on thousands of paired examples per degradation type. We challenge this paradigm by demonstrating that powerful pre-trained text-conditioned image editing models can be efficiently adapted for multiple restoration tasks through parameter-efficient fine-tuning with remarkably few examples. Our approach fine-tunes LoRA adapters on FLUX.1 Kontext, a state-of-the-art 12B parameter flow matching model for image-to-image translation, using only 16-128 paired images per task, guided by simple text prompts that specify the restoration operation. Unlike existing methods that train specialized restoration networks from scratch with thousands of samples, we leverage the rich visual priors already encoded in large-scale pre-trained editing models, dramatically reducing data requirements while maintaining high perceptual quality. A single unified LoRA adapter, conditioned on task-specific text prompts, effectively handles multiple degradations including denoising, deraining, and dehazing. Through comprehensive ablation studies, we analyze: (i) the impact of training set size on restoration quality, (ii) trade-offs between task-specific versus unified multi-task adapters, (iii) the role of text encoder fine-tuning, and (iv) zero-shot baseline performance. While our method prioritizes perceptual quality over pixel-perfect reconstruction metrics like PSNR/SSIM, our results demonstrate that pre-trained image editing models, when properly adapted, offer a compelling and data-efficient alternative to traditional image restoration approaches, opening new avenues for few-shot, prompt-guided image enhancement. The code to reproduce our results is available at: https://github.com/makinyilmaz/Edit2Restore
+ oai:arXiv.org:2601.03391v2
+ eess.IV
+ cs.CV
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ M. Ak{\i}n Y{\i}lmaz, Ahmet Bilican, Burak Can Biner, A. Murat Tekalp
+
+
+ Trade-off between spread and width for tree decompositions
+ https://arxiv.org/abs/2601.04040
+ arXiv:2601.04040v2 Announce Type: replace-cross
+Abstract: We study the trade-off between (average) spread and width in tree decompositions, answering several questions from Wood [arXiv:2509.01140]. The spread of a vertex $v$ in a tree decomposition is the number of bags that contain $v$. Wood asked for which $c>0$ there exists $c'$ such that each graph $G$ has a tree decomposition of width $c\cdot tw(G)$ in which each vertex $v$ has spread at most $c'(d(v)+1)$. We show that $c\geq 2$ is necessary and that $c>3$ is sufficient. Moreover, we answer a second question fully by showing that near-optimal average spread can be achieved simultaneously with width $O(tw(G))$.
+ oai:arXiv.org:2601.04040v2
+ math.CO
+ cs.DM
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hans L. Bodlaender, Carla Groenland
+
+
+ Local EGOP for Continuous Index Learning
+ https://arxiv.org/abs/2601.07061
+ arXiv:2601.07061v2 Announce Type: replace-cross
+Abstract: We introduce the setting of continuous index learning, in which a function of many variables varies only along a small number of directions at each point. For efficient estimation, it is beneficial for a learning algorithm to adapt, near each point $x$, to the subspace that captures the local variability of the function $f$. We pose this task as kernel adaptation along a manifold with noise, and introduce Local EGOP learning, a recursive algorithm that utilizes the Expected Gradient Outer Product (EGOP) quadratic form as both a metric and inverse-covariance of our target distribution. We prove that Local EGOP learning adapts to the regularity of the function of interest, showing that under a supervised noisy manifold hypothesis, intrinsic dimensional learning rates are achieved for arbitrarily high-dimensional noise. Empirically, we compare our algorithm to the feature learning capabilities of deep learning. Additionally, we demonstrate improved regression quality compared to two-layer neural networks in the continuous single-index setting.
+ oai:arXiv.org:2601.07061v2
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Alex Kokot, Anand Hemmady, Vydhourie Thiyageswaran, Marina Meila
+
+
+ Determining the Winner in Alternating-Move Games
+ https://arxiv.org/abs/2601.08359
+ arXiv:2601.08359v2 Announce Type: replace-cross
+Abstract: We provide a criterion for determining the winner in two-player win-lose alternating-move games on trees, in terms of the Hausdorff dimension of the target set. We focus our study on special cases, including the Gale-Stewart game on the complete binary tree and a family of Schmidt games. Building on the Hausdorff dimension games originally introduced by Das, Fishman, Simmons, and Urba\'nski, which provide a game-theoretic approach for computing Hausdorff dimensions, we employ a generalized family of these games, and show that they are useful for analyzing sets underlying the win-lose games we study.
+ oai:arXiv.org:2601.08359v2
+ math.DS
+ cs.GT
+ math.LO
+ math.OC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Itamar Bella\"iche, Auriel Rosenzweig
+
+
+ Breaking the Orthogonality Barrier in Quantum LDPC Codes
+ https://arxiv.org/abs/2601.08824
+ arXiv:2601.08824v3 Announce Type: replace-cross
+Abstract: Classical low-density parity-check (LDPC) codes are a widely deployed and well-established technology, forming the backbone of modern communication and storage systems. It is well known that, in this classical setting, increasing the girth of the Tanner graph while maintaining regular degree distributions leads simultaneously to good belief-propagation (BP) decoding performance and large minimum distance. In the quantum setting, however, this principle does not directly apply because quantum LDPC codes must satisfy additional orthogonality constraints between their parity-check matrices. When one enforces both orthogonality and regularity in a straightforward manner, the girth is typically reduced and the minimum distance becomes structurally upper bounded. In this work, we overcome this limitation by using permutation matrices with controlled commutativity and by restricting the orthogonality constraints to only the active part of the construction, while preserving regular check-matrix structures. This design circumvents conventional structural distance limitations induced by parent-matrix orthogonality, and enables the construction of quantum LDPC codes with large girth while avoiding latent low-weight logical operators. As a concrete demonstration, we construct a girth-8, (3,12)-regular $[[9216,4612, \leq 48]]$ quantum LDPC code and show that, under BP decoding combined with a low-complexity post-processing algorithm, it achieves a frame error rate as low as $10^{-8}$ on the depolarizing channel with error probability $4 \%$.
+ oai:arXiv.org:2601.08824v3
+ quant-ph
+ cs.IT
+ math.IT
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Kenta Kasai
+
+
+ System Availability Optimization: Integrating Quantity Discounts and Delivery Lead Time Considerations
+ https://arxiv.org/abs/2601.09194
+ arXiv:2601.09194v2 Announce Type: replace-cross
+Abstract: Purpose: The model allocates the system components orders to the suppliers to minimize the parts price and the system construction delay penalties and maximize the system availability during its use. It considers the quantity-based discount and variation of delivery lead time by ordering similar components. The model also reflects the prerequisite relationships between construction activities and calculates the delay penalty resulting from parts delivery lead time. Design/methodology/approach: This research presents a model for selecting suppliers of components of an industrial series-parallel multi-state system. A nonlinear binary mathematical program uses the Markov process results to select system components. It minimizes the total system construction phase costs, including the components' price and the system construction delay penalty, and the system exploitation phase costs, including the system shutdown and working at half capacity. Findings: The model allocates the optimal orders for a typical industrial system's components, comprising four elements. The proposed approach combines the nonlinear binary program and the Markov process results to optimize the system life cycle parameters, including the system construction cost and operational availability. Originality/value: Using the Markov chain results in binary nonlinear mathematical programming, this study attempts to strike the right balance between the construction phase's objectives and an industrial unit's operation phase.
+ oai:arXiv.org:2601.09194v2
+ math.OC
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Zahra Sobhani, Mahmoud Shahrokhi
+
+
+ Clustering-Based User Selection in Federated Learning: Metadata Exploitation for 3GPP Networks
+ https://arxiv.org/abs/2601.10013
+ arXiv:2601.10013v2 Announce Type: replace-cross
+Abstract: Federated learning (FL) enables collaborative model training without sharing raw user data, but conventional simulations often rely on unrealistic data partitioning and current user selection methods ignore data correlation among users. To address these challenges, this paper proposes a metadata-driven FL framework. We first introduce a novel data partition model based on a homogeneous Poisson point process (HPPP), capturing both heterogeneity in data quantity and natural overlap among user datasets. Building on this model, we develop a clustering-based user selection strategy that leverages metadata, such as user location, to reduce data correlation and enhance label diversity across training rounds. Extensive experiments on FMNIST and CIFAR-10 demonstrate that the proposed framework improves model performance, stability, and convergence in non-IID scenarios, while maintaining comparable performance under IID settings. Furthermore, the method shows pronounced advantages when the number of selected users per round is small. These findings highlight the framework's potential for enhancing FL performance in realistic deployments and guiding future standardization.
+ oai:arXiv.org:2601.10013v2
+ eess.SP
+ cs.DC
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Ce Zheng, Shiyao Ma, Ke Zhang, Chen Sun, Wenqi Zhang
+
+
+ Discrete versus continuous -- lattice models and their exact continuous counterparts
+ https://arxiv.org/abs/2601.10184
+ arXiv:2601.10184v2 Announce Type: replace-cross
+Abstract: We review and study the correspondence between discrete lattice/chain models of interacting particles and their continuous counterparts represented by partial differential equations. We study the correspondence problem for nearest neighbour interaction lattice models as well as for multiple-neighbour interaction lattice models, and we gradually proceed from infinite lattices to periodic lattices and finally to finite lattices with fixed ends/zero Dirichlet boundary conditions. The whole study is framed as systematic specialisation of Fourier analysis tools from the continuous to the discrete setting and vice versa, and the correspondence between the discrete and continuous models is examined primarily with regard to the dispersion relation.
+ oai:arXiv.org:2601.10184v2
+ physics.class-ph
+ cs.NA
+ math.NA
+ physics.comp-ph
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lorenzo Fusi, Oliver K\v{r}enek, V\'it Pr\r{u}\v{s}a, Casey Rodriguez, Rebecca Tozzi, Martin Vejvoda
+
+
+ On the static and small signal analysis of DAB converter
+ https://arxiv.org/abs/2601.10746
+ arXiv:2601.10746v2 Announce Type: replace-cross
+Abstract: This document develops a method to solve the periodic operating point of the Dual-Active-Bridge (DAB) converter.
+ oai:arXiv.org:2601.10746v2
+ eess.SP
+ cs.SY
+ eess.SY
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuxin Yang, Hang Zhou, Hourong Song, Branislav Hredzak
+
+
+ Memorize Early, Then Query: Inlier-Memorization-Guided Active Outlier Detection
+ https://arxiv.org/abs/2601.10993
+ arXiv:2601.10993v2 Announce Type: replace-cross
+Abstract: Outlier detection (OD) aims to identify abnormal instances, known as outliers or anomalies, by learning typical patterns of normal data, or inliers. Performing OD under an unsupervised regime-without any information about anomalous instances in the training data-is challenging. A recently observed phenomenon, known as the inlier-memorization (IM) effect, where deep generative models (DGMs) tend to memorize inlier patterns during early training, provides a promising signal for distinguishing outliers. However, existing unsupervised approaches that rely solely on the IM effect still struggle when inliers and outliers are not well-separated or when outliers form dense clusters. To address these limitations, we incorporate active learning to selectively acquire informative labels, and propose IMBoost, a novel framework that explicitly reinforces the IM effect to improve outlier detection. Our method consists of two stages: 1) a warm-up phase that induces and promotes the IM effect, and 2) a polarization phase in which actively queried samples are used to maximize the discrepancy between inlier and outlier scores. In particular, we propose a novel query strategy and tailored loss function in the polarization phase to effectively identify informative samples and fully leverage the limited labeling budget. We provide a theoretical analysis showing that IMBoost consistently decreases inlier risk while increasing outlier risk throughout training, thereby amplifying their separation. Extensive experiments on diverse benchmark datasets demonstrate that IMBoost not only significantly outperforms state-of-the-art active OD methods but also requires substantially less computational cost.
+ oai:arXiv.org:2601.10993v2
+ stat.ML
+ cs.LG
+ Wed, 21 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Minseo Kang, Seunghwan Park, Dongha Kim
+