diff --git "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml" "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml" --- "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml" +++ "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml" @@ -7,10768 +7,9806 @@ http://www.rssboard.org/rss-specification en-us - Wed, 10 Dec 2025 05:00:08 +0000 + Thu, 11 Dec 2025 05:00:14 +0000 rss-help@arxiv.org - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 Saturday Sunday - Detection of Cyberbullying in GIF using AI - https://arxiv.org/abs/2512.07838 - arXiv:2512.07838v1 Announce Type: new -Abstract: Cyberbullying is a well-known social issue, and it is escalating day by day. Due to the vigorous development of the internet, social media provide many different ways for users to express their opinions and exchange information. Cyberbullying occurs on social media through text messages, comments, shared images, GIFs or stickers, and audio and video. Much research has been done to detect cyberbullying in textual data; some work is available for images. Very few studies address cyberbullying detection in GIFs/stickers. We collected a GIF dataset from Twitter and applied a deep learning model to detect cyberbullying in it. First, we extracted hashtags related to cyberbullying from Twitter. We used these hashtags to download GIF files via the publicly available GIPHY API. We collected over 4100 GIFs, including both cyberbullying and non-cyberbullying examples. We applied the pre-trained deep learning model VGG16 to detect cyberbullying. The model achieved an accuracy of 97%. Our work provides the GIF dataset for researchers working in this area. 
- oai:arXiv.org:2512.07838v1 - cs.CV - cs.LG - cs.MM - Wed, 10 Dec 2025 00:00:00 -0500 + Agentic AI as Undercover Teammates: Argumentative Knowledge Construction in Hybrid Human-AI Collaborative Learning + https://arxiv.org/abs/2512.08933 + arXiv:2512.08933v1 Announce Type: new +Abstract: Generative artificial intelligence (AI) agents are increasingly embedded in collaborative learning environments, yet their impact on the processes of argumentative knowledge construction remains insufficiently understood. Emerging conceptualisations of agentic AI and artificial agency suggest that such systems possess bounded autonomy, interactivity, and adaptability, allowing them to engage as epistemic participants rather than mere instructional tools. Building on this theoretical foundation, the present study investigates how agentic AI, designed as undercover teammates with either supportive or contrarian personas, shapes the epistemic and social dynamics of collaborative reasoning. Drawing on Weinberger and Fischer's (2006) four-dimensional framework (participation, epistemic reasoning, argument structure, and social modes of co-construction), we analysed synchronous discourse data from 212 human and 64 AI participants (92 triads) engaged in an analytical problem-solving task. Mixed-effects and epistemic network analyses revealed that AI teammates maintained balanced participation but substantially reorganised epistemic and social processes: supportive personas promoted conceptual integration and consensus-oriented reasoning, whereas contrarian personas provoked critical elaboration and conflict-driven negotiation. Epistemic adequacy, rather than participation volume, predicted individual learning gains, indicating that agentic AI's educational value lies in enhancing the quality and coordination of reasoning rather than amplifying discourse quantity. 
These findings extend CSCL theory by conceptualising agentic AI as epistemic and social participants, bounded yet adaptive collaborators that redistribute cognitive and argumentative labour in hybrid human-AI learning environments. + oai:arXiv.org:2512.08933v1 + cs.HC + cs.AI + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Pal Dave, Xiaohong Yuan, Madhuri Siddula, Kaushik Roy + Lixiang Yan, Yueqiao Jin, Linxuan Zhao, Roberto Martinez-Maldonado, Xinyu Li, Xiu Guan, Wenxin Guo, Xibin Han, Dragan Ga\v{s}evi\'c - Impact of Data-Oriented and Object-Oriented Design on Performance and Cache Utilization with Artificial Intelligence Algorithms in Multi-Threaded CPUs - https://arxiv.org/abs/2512.07841 - arXiv:2512.07841v1 Announce Type: new -Abstract: The growing performance gap between multi-core CPUs and main memory necessitates hardware-aware software design paradigms. This study provides a comprehensive performance analysis of Data Oriented Design (DOD) versus the traditional Object-Oriented Design (OOD), focusing on cache utilization and efficiency in multi-threaded environments. We developed and compared four distinct versions of the A* search algorithm: single-threaded OOD (ST-OOD), single-threaded DOD (ST-DOD), multi-threaded OOD (MT-OOD), and multi-threaded DOD (MT-DOD). The evaluation was based on metrics including execution time, memory usage, and CPU cache misses. In multi-threaded tests, the DOD implementation demonstrated considerable performance gains, with faster execution times and a lower number of raw system calls and cache misses. While OOD occasionally showed marginal advantages in memory usage or percentage-based cache miss rates, DOD's efficiency in data-intensive operations was more evident. 
Furthermore, our findings reveal that for a fine-grained task like the A* algorithm, the overhead associated with thread management led to single-threaded versions significantly outperforming their multi-threaded counterparts in both paradigms. We conclude that even when performance differences appear subtle in simple algorithms, the consistent advantages of DOD in critical metrics highlight its foundational architectural superiority, suggesting it is a more effective approach for maximizing hardware efficiency in complex, large-scale AI and parallel computing tasks. - oai:arXiv.org:2512.07841v1 + Motion2Meaning: A Clinician-Centered Framework for Contestable LLM in Parkinson's Disease Gait Interpretation + https://arxiv.org/abs/2512.08934 + arXiv:2512.08934v1 Announce Type: new +Abstract: AI-assisted gait analysis holds promise for improving Parkinson's Disease (PD) care, but current clinical dashboards lack transparency and offer no meaningful way for clinicians to interrogate or contest AI decisions. To address this issue, we present Motion2Meaning, a clinician-centered framework that advances Contestable AI through a tightly integrated interface designed for interpretability, oversight, and procedural recourse. Our approach leverages vertical Ground Reaction Force (vGRF) time-series data from wearable sensors as an objective biomarker of PD motor states. The system comprises three key components: a Gait Data Visualization Interface (GDVI), a one-dimensional Convolutional Neural Network (1D-CNN) that predicts Hoehn & Yahr severity stages, and a Contestable Interpretation Interface (CII) that combines our novel Cross-Modal Explanation Discrepancy (XMED) safeguard with a contestable Large Language Model (LLM). Our 1D-CNN achieves 89.0% F1-score on the public PhysioNet gait dataset. 
XMED successfully identifies model unreliability by detecting a five-fold increase in explanation discrepancies in incorrect predictions (7.45%) compared to correct ones (1.56%), while our LLM-powered interface enables clinicians to validate correct predictions and successfully contest a portion of the model's errors. A human-centered evaluation of this contestable interface reveals a crucial trade-off between the LLM's factual grounding and its readability and responsiveness to clinical feedback. This work demonstrates the feasibility of combining wearable sensor analysis with Explainable AI (XAI) and contestable LLMs to create a transparent, auditable system for PD gait interpretation that maintains clinical oversight while leveraging advanced AI capabilities. Our implementation is publicly available at: https://github.com/hungdothanh/motion2meaning. + oai:arXiv.org:2512.08934v1 + cs.HC cs.AI - cs.PF - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Gabriel M. Arantes, Richard F. Pinto, Bruno L. Dalmazo, Eduardo N. Borges, Giancarlo Lucca, Viviane L. D. de Mattos, Fabian C. Cardoso, Rafael A. Berri + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Loc Phuc Truong Nguyen, Hung Thanh Do, Hung Truong Thanh Nguyen, Hung Cao - ThreadWeaver: Adaptive Threading for Efficient Parallel Reasoning in Language Models - https://arxiv.org/abs/2512.07843 - arXiv:2512.07843v1 Announce Type: new -Abstract: Scaling inference-time computation has enabled Large Language Models (LLMs) to achieve strong reasoning performance, but inherently sequential decoding leads to substantial latency, especially on complex tasks. Recent work on adaptive parallel reasoning aims to improve inference efficiency by decomposing the problem-solving process into concurrent reasoning threads when beneficial. 
However, existing methods on realistic tasks are either limited to supervised behavior cloning or exhibit significant accuracy drops compared to widely-used sequential long chain-of-thought (CoT) baselines. Moreover, many require customized inference engines, complicating deployment. We introduce ThreadWeaver, a framework for adaptive parallel reasoning that achieves accuracy on par with popular sequential reasoning models of comparable size while significantly reducing inference latency. ThreadWeaver's performance stems from three key innovations: 1) a two-stage parallel trajectory generator that produces large-scale, high-quality CoT data with parallel annotations for supervised fine-tuning; 2) a trie-based training-inference co-design that enables parallel reasoning on any off-the-shelf autoregressive inference engine without modifying position embeddings or KV caches; and 3) a parallelization-aware reinforcement learning framework that teaches the model to balance accuracy with effective parallelization. Across six challenging mathematical reasoning benchmarks, ThreadWeaver trained atop Qwen3-8B achieves accuracy comparable to cutting-edge sequential reasoning models (71.9% on average and 79.9% on AIME24) while delivering up to 1.53x average speedup in token latency, establishing a new Pareto frontier between accuracy and efficiency. - oai:arXiv.org:2512.07843v1 - cs.LG - cs.AI - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + From Script to Stage: Automating Experimental Design for Social Simulations with LLMs + https://arxiv.org/abs/2512.08935 + arXiv:2512.08935v1 Announce Type: new +Abstract: The rise of large language models (LLMs) has opened new avenues for social science research. Multi-agent simulations powered by LLMs are increasingly becoming a vital approach for exploring complex social phenomena and testing theoretical hypotheses. 
However, traditional computational experiments often rely heavily on interdisciplinary expertise, involve complex operations, and present high barriers to entry. While LLM-driven agents show great potential for automating experimental design, their reliability and scientific rigor remain insufficient for widespread adoption. To address these challenges, this paper proposes an automated multi-agent experiment design framework based on script generation, inspired by the concept of the Decision Theater. The experimental design process is divided into three stages: (1) Script Generation - a Screenwriter Agent drafts candidate experimental scripts; (2) Script Finalization - a Director Agent evaluates and selects the final script; (3) Actor Generation - an Actor Factory creates actor agents capable of performing on the experimental "stage" according to the finalized script. Extensive experiments conducted across multiple social science experimental scenarios demonstrate that the generated actor agents can perform according to the designed scripts and reproduce outcomes consistent with real-world situations. This framework not only lowers the barriers to experimental design in social science but also provides a novel decision-support tool for policy-making and research. 
The project's source code is available at: https://anonymous.4open.science/r/FSTS-DE1E + oai:arXiv.org:2512.08935v1 + cs.HC + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Long Lian, Sida Wang, Felix Juefei-Xu, Tsu-Jui Fu, Xiuyu Li, Adam Yala, Trevor Darrell, Alane Suhr, Yuandong Tian, Xi Victoria Lin + Yuwei Guo, Zihan Zhao, Deyu Zhou, Xiaowei Liu, Ming Zhang - Space Alignment Matters: The Missing Piece for Inducing Neural Collapse in Long-Tailed Learning - https://arxiv.org/abs/2512.07844 - arXiv:2512.07844v1 Announce Type: new -Abstract: Recent studies on Neural Collapse (NC) reveal that, under class-balanced conditions, the class feature means and classifier weights spontaneously align into a simplex equiangular tight frame (ETF). In long-tailed regimes, however, severe sample imbalance tends to prevent the emergence of the NC phenomenon, resulting in poor generalization performance. Current efforts predominantly seek to recover the ETF geometry by imposing constraints on features or classifier weights, yet overlook a critical problem: There is a pronounced misalignment between the feature and the classifier weight spaces. In this paper, we theoretically quantify the harm of such misalignment through an optimal error exponent analysis. Building on this insight, we propose three explicit alignment strategies that plug-and-play into existing long-tail methods without architectural change. Extensive experiments on the CIFAR-10-LT, CIFAR-100-LT, and ImageNet-LT datasets consistently boost the examined baselines and achieve state-of-the-art performance. 
- oai:arXiv.org:2512.07844v1 - cs.LG + A Principle-based Framework for the Development and Evaluation of Large Language Models for Health and Wellness + https://arxiv.org/abs/2512.08936 + arXiv:2512.08936v1 Announce Type: new +Abstract: The incorporation of generative artificial intelligence into personal health applications presents a transformative opportunity for personalized, data-driven health and fitness guidance, yet also poses challenges related to user safety, model accuracy, and personal privacy. To address these challenges, a novel, principle-based framework was developed and validated for the systematic evaluation of LLMs applied to personal health and wellness. First, the development of the Fitbit Insights explorer, a large language model (LLM)-powered system designed to help users interpret their personal health data, is described. Subsequently, the safety, helpfulness, accuracy, relevance, and personalization (SHARP) principle-based framework is introduced as an end-to-end operational methodology that integrates comprehensive evaluation techniques including human evaluation by generalists and clinical specialists, autorater assessments, and adversarial testing, into an iterative development lifecycle. Through the application of this framework to the Fitbit Insights explorer in a staged deployment involving over 13,000 consented users, challenges not apparent during initial testing were systematically identified. This process guided targeted improvements to the system and demonstrated the necessity of combining isolated technical evaluations with real-world user feedback. Finally, a comprehensive, actionable approach is established for the responsible development and deployment of LLM-powered health applications, providing a standardized methodology to foster innovation while ensuring emerging technologies are safe, effective, and trustworthy for users. 
+ oai:arXiv.org:2512.08936v1 + cs.HC cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jinping Wang, Zhiqiang Gao, Zhiwu Xie + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Brent Winslow, Jacqueline Shreibati, Javier Perez, Hao-Wei Su, Nichole Young-Lin, Nova Hammerquist, Daniel McDuff, Jason Guss, Jenny Vafeiadou, Nick Cain, Alex Lin, Erik Schenck, Shiva Rajagopal, Jia-Ru Chung, Anusha Venkatakrishnan, Amy Armento Lee, Maryam Karimzadehgan, Qingyou Meng, Rythm Agarwal, Aravind Natarajan, Tracy Giest - AudioScene: Integrating Object-Event Audio into 3D Scenes - https://arxiv.org/abs/2512.07845 - arXiv:2512.07845v1 Announce Type: new -Abstract: The rapid advances in audio analysis underscore its vast potential for human-computer interaction, environmental monitoring, and public safety; yet, existing audio-only datasets often lack spatial context. To address this gap, we present two novel audio-spatial scene datasets, AudioScanNet and AudioRoboTHOR, designed to explore audio-conditioned tasks within 3D environments. By integrating audio clips with spatially aligned 3D scenes, our datasets enable research on how audio signals interact with spatial context. To associate audio events with corresponding spatial information, we leverage the common sense reasoning ability of large language models and supplement them with rigorous human verification. This approach offers greater scalability compared to purely manual annotation while maintaining high standards of accuracy, completeness, and diversity, quantified through inter-annotator agreement and performance on two benchmark tasks: audio-based 3D visual grounding and audio-based robotic zero-shot navigation. The results highlight the limitations of current audio-centric methods and underscore the practical challenges and significance of our datasets in advancing audio-guided spatial learning. 
- oai:arXiv.org:2512.07845v1 - cs.SD + When AI Gives Advice: Evaluating AI and Human Responses to Online Advice-Seeking for Well-Being + https://arxiv.org/abs/2512.08937 + arXiv:2512.08937v1 Announce Type: new +Abstract: Seeking advice is a core human behavior that the Internet has reinvented twice: first through forums and Q&A communities that crowdsource public guidance, and now through large language models (LLMs) that deliver private, on-demand counsel at scale. Yet the quality of this synthesized LLM advice remains unclear. How does it compare, not only against arbitrary human comments, but against the wisdom of the online crowd? We conducted two studies (N = 210) in which experts compared top-voted Reddit advice with LLM-generated advice. LLMs ranked significantly higher overall and on effectiveness, warmth, and willingness to seek advice again. GPT-4o beat GPT-5 on all metrics except sycophancy, suggesting that benchmark gains need not improve advice-giving. In our second study, we examined how human and algorithmic advice could be combined, and found that human advice can be unobtrusively polished to compete with AI-generated comments. Finally, to surface user expectations, we ran an exploratory survey with undergraduates (N = 148) that revealed heterogeneous, persona-dependent preferences for agent qualities (e.g., coach-like: goal-focused structure; friend-like: warmth and humor). We conclude with design implications for advice-giving agents and ecosystems blending AI, crowd input, and expert oversight. 
+ oai:arXiv.org:2512.08937v1 + cs.HC cs.AI - eess.AS - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Shuaihang Yuan, Congcong Wen, Muhammad Shafique, Anthony Tzes, Yi Fang + Harsh Kumar, Jasmine Chahal, Yinuo Zhao, Zeling Zhang, Annika Wei, Louis Tay, Ashton Anderson - MixLM: High-Throughput and Effective LLM Ranking via Text-Embedding Mix-Interaction - https://arxiv.org/abs/2512.07846 - arXiv:2512.07846v1 Announce Type: new -Abstract: Large language models (LLMs) excel at capturing semantic nuances and therefore show impressive relevance ranking performance in modern recommendation and search systems. However, they suffer from high computational overhead under industrial latency and throughput requirements. In particular, cross-encoder ranking systems often create long context prefill-heavy workloads, as the model has to be presented with the user, query and item information. To this end, we propose MixLM, a novel LLM-based ranking framework, which significantly improves the system throughput via reducing the input context length, while preserving the semantic strength of cross-encoder rankers. In contrast to a standard ranking system where the context is presented to the model as pure text, we propose to use mix-interaction, a mixture of text and embedding tokens to represent the input. Specifically, MixLM encodes all items in the catalog into a few embedding tokens and stores in a nearline cache. The encoded item descriptions are used during online inference, effectively reducing the item length from a few thousand text tokens to a few embedding tokens. We share insights from deploying our MixLM framework to a real-world search application at LinkedIn, including a detailed discussion of our training pipelines, as well as a thorough analysis of our online serving infrastructure optimization. 
Compared with strong baselines, MixLM increased throughput by 10.0x under the same latency budget, while maintaining relevance metrics. The efficiency gains delivered by MixLM enabled full-traffic deployment of LLM-powered search, which resulted in a significant 0.47% increase in Daily Active Users (DAU) in online A/B tests. - oai:arXiv.org:2512.07846v1 - cs.IR - cs.AI - cs.CL - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + The Impact of Artificial Intelligence on Strategic Technology Management: A Mixed-Methods Analysis of Resources, Capabilities, and Human-AI Collaboration + https://arxiv.org/abs/2512.08938 + arXiv:2512.08938v1 Announce Type: new +Abstract: This paper investigates how artificial intelligence (AI) can be effectively integrated into Strategic Technology Management (STM) practices to enhance the strategic alignment and effectiveness of technology investments. Through a mixed-methods approach combining quantitative survey data (n=230) and qualitative expert interviews (n=14), this study addresses three critical research questions: what success factors AI innovates for STM roadmap formulation under uncertainty; what resources and capabilities organizations require for AI-enhanced STM; and how human-AI interaction should be designed for complex STM tasks. The findings reveal that AI fundamentally transforms STM through data-driven strategic alignment and continuous adaptation, while success depends on cultivating proprietary data ecosystems, specialized human talent, and robust governance capabilities. The study introduces the AI-based Strategic Technology Management (AIbSTM) conceptual framework, which synthesizes technical capabilities with human and organizational dimensions across three layers: strategic alignment, resource-based view, and human-AI interaction. 
Contrary to visions of autonomous AI leadership, the research demonstrates that the most viable trajectory is human-centric augmentation, where AI serves as a collaborative partner rather than a replacement for human judgment. This work contributes to theory by extending the Resource-Based View to AI contexts and addressing cognitive and socio-technical chasms in AI adoption, while offering practitioners a prescriptive framework for navigating AI integration in strategic technology management. + oai:arXiv.org:2512.08938v1 + cs.HC + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Guoyao Li, Ran He, Shusen Jing, Kayhan Behdin, Yubo Wang, Sundara Raman Ramachandran, Chanh Nguyen, Jian Sheng, Xiaojing Ma, Chuanrui Zhu, Sriram Vasudevan, Muchen Wu, Sayan Ghosh, Lin Su, Qingquan Song, Xiaoqing Wang, Zhipeng Wang, Qing Lan, Yanning Chen, Jingwei Wu, Luke Simon, Wenjing Zhang, Qi Guo, Fedor Borisyuk + Massimo Fascinari, Vincent English - CarBench: A Comprehensive Benchmark for Neural Surrogates on High-Fidelity 3D Car Aerodynamics - https://arxiv.org/abs/2512.07847 - arXiv:2512.07847v1 Announce Type: new -Abstract: Benchmarking has been the cornerstone of progress in computer vision, natural language processing, and the broader deep learning domain, driving algorithmic innovation through standardized datasets and reproducible evaluation protocols. The growing availability of large-scale Computational Fluid Dynamics (CFD) datasets has opened new opportunities for applying machine learning to aerodynamic and engineering design. Yet, despite this progress, there exists no standardized benchmark for large-scale numerical simulations in engineering design. 
In this work, we introduce CarBench, the first comprehensive benchmark dedicated to large-scale 3D car aerodynamics, performing a large-scale evaluation of state-of-the-art models on DrivAerNet++, the largest public dataset for automotive aerodynamics, containing over 8,000 high-fidelity car simulations. We assess eleven architectures spanning neural operator methods (e.g., Fourier Neural Operator), geometric deep learning (PointNet, RegDGCNN, PointMAE, PointTransformer), transformer-based neural solvers (Transolver, Transolver++, AB-UPT), and implicit field networks (TripNet). Beyond standard interpolation tasks, we perform cross-category experiments in which transformer-based solvers trained on a single car archetype are evaluated on unseen categories. Our analysis covers predictive accuracy, physical consistency, computational efficiency, and statistical uncertainty. To accelerate progress in data-driven engineering, we open-source the benchmark framework, including training pipelines, uncertainty estimation routines based on bootstrap resampling, and pretrained model weights, establishing the first reproducible foundation for large-scale learning from high-fidelity CFD simulations, available at https://github.com/Mohamedelrefaie/CarBench. - oai:arXiv.org:2512.07847v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Assessing the Human-Likeness of LLM-Driven Digital Twins in Simulating Health Care System Trust + https://arxiv.org/abs/2512.08939 + arXiv:2512.08939v1 Announce Type: new +Abstract: Serving as an emerging and powerful tool, Large Language Model (LLM)-driven Human Digital Twins are showing great potential in healthcare system research. However, its actual simulation ability for complex human psychological traits, such as distrust in the healthcare system, remains unclear. This research gap particularly impacts health professionals' trust and usage of LLM-based Artificial Intelligence (AI) systems in assisting their routine work. 
In this study, based on the Twin-2K-500 dataset, we systematically evaluated the simulation results of the LLM-driven human digital twin using the Health Care System Distrust Scale (HCSDS) with an established human-subject sample, analyzing item-level distributions, summary statistics, and demographic subgroup patterns. Results showed that the simulated responses by the digital twin were significantly more centralized with lower variance and had fewer selections of extreme options (all p<0.001). While the digital twin broadly reproduces human results in major demographic patterns, such as age and gender, it exhibits relatively low sensitivity in capturing minor differences in education levels. The LLM-based digital twin simulation has the potential to simulate population trends, but it also presents challenges in making detailed, specific distinctions in subgroups of human beings. This study suggests that the current LLM-driven Digital Twins have limitations in modeling complex human attitudes, which require careful calibration and validation before applying them in inferential analyses or policy simulations in health systems engineering. Future studies are necessary to examine the emotional reasoning mechanism of LLMs before their use, particularly for studies that involve simulations sensitive to social topics, such as human-automation trust. + oai:arXiv.org:2512.08939v1 + cs.HC + cs.AI + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Mohamed Elrefaie, Dule Shu, Matt Klenk, Faez Ahmed + Yuzhou Wu, Mingyang Wu, Di Liu, Rong Yin, Kang Li - RaX-Crash: A Resource Efficient and Explainable Small Model Pipeline with an Application to City Scale Injury Severity Prediction - https://arxiv.org/abs/2512.07848 - arXiv:2512.07848v1 Announce Type: new -Abstract: New York City reports over one hundred thousand motor vehicle collisions each year, creating substantial injury and public health burden. 
We present RaX-Crash, a resource-efficient and explainable small-model pipeline for structured injury severity prediction on the official NYC Motor Vehicle Collisions dataset. RaX-Crash integrates three linked tables with tens of millions of records, builds a unified feature schema in partitioned storage, and trains compact tree-based ensembles (Random Forest and XGBoost) on engineered tabular features, which are compared against locally deployed small language models (SLMs) prompted with textual summaries. On a temporally held-out test set, XGBoost and Random Forest achieve accuracies of 0.7828 and 0.7794, clearly outperforming SLMs (0.594 and 0.496); class imbalance analysis shows that simple class weighting improves fatal recall with modest accuracy trade-offs, and SHAP attribution highlights human vulnerability factors, timing, and location as dominant drivers of predicted severity. Overall, RaX-Crash indicates that interpretable small-model ensembles remain strong baselines for city-scale injury analytics, while hybrid pipelines that pair tabular predictors with SLM-generated narratives improve communication without sacrificing scalability. - oai:arXiv.org:2512.07848v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Psychlysis: Towards the Creation of a Questionnaire-based Machine Learning Tool to Analyze States of Mind + https://arxiv.org/abs/2512.08940 + arXiv:2512.08940v1 Announce Type: new +Abstract: This paper describes the development of Psychlysis, a work-in-progress questionnaire-based machine learning application analyzing the user's current state of mind and suggesting ways to improve their mood using Machine Learning. The application utilizes the OCEAN model to understand the user's personality traits and make customized suggestions to enhance their well-being. The proposed application focuses on improving the user's mood rather than just detecting their emotions. 
Preliminary results of the model are presented, showing the potential of the application in predicting the user's mood and providing personalized recommendations. The paper concludes by highlighting the potential benefits of such an application for various societal segments, including doctors, individuals, and mental health organizations, in improving emotional well-being and reducing the negative impact of mental health issues on daily life. + oai:arXiv.org:2512.08940v1 + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Di Zhu, Chen Xie, Ziwei Wang, Haoyun Zhang + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Hemakshi Jani, Mitish Karia, Meet Gohil, Rahul Bhadja, Aznam Yacoub, Shafaq Khan - Accelerating Urban Science Research with AI Urban Scientist - https://arxiv.org/abs/2512.07849 - arXiv:2512.07849v1 Announce Type: new -Abstract: Cities are complex, adaptive systems whose underlying principles remain difficult to disentangle despite unprecedented data abundance. Urban science therefore faces a fundamental challenge: converting vast, fragmented and interdisciplinary information into coherent explanations of how cities function and evolve. The emergence of AI scientists, i.e., agents capable of autonomous reasoning, hypothesis formation and data-driven experimentation, offers a new pathway toward accelerating this transformation, yet general-purpose systems fall short of the domain knowledge and methodological depth required for urban science research. Here we introduce a knowledge-driven AI Urban Scientist, built from hypotheses, peer-review signals, datasets and analytical patterns distilled from thousands of high-quality studies, and implemented as a coordinated multi-agent framework for end-to-end inquiry. 
The system generates structured hypotheses, retrieves and harmonizes heterogeneous datasets, conducts automated empirical analysis and simulation, and synthesizes insights in forms compatible with urban scientific reasoning. By providing reusable analytical tools and supporting community-driven extensions, the AI Urban Scientist lowers barriers to advanced urban analytics and acts not merely as an assistant but as an active collaborator in revealing the mechanisms that shape urban systems and in guiding the design of more resilient and equitable cities. - oai:arXiv.org:2512.07849v1 + One Size Fits None: A Personalized Framework for Urban Accessibility Using Exponential Decay + https://arxiv.org/abs/2512.08941 + arXiv:2512.08941v1 Announce Type: new +Abstract: This study develops a personalized accessibility framework that integrates exponential decay functions with user-customizable weighting systems. The framework enables real-time, personalized urban evaluation based on individual priorities and lifestyle requirements. The methodology employs grid-based discretization and a two-stage computational architecture that separates intensive preprocessing from lightweight real-time calculations. The computational architecture demonstrates that accessibility modelling can be made accessible to non-technical users through interactive interfaces, enabling fine-grained spatial analysis and identification of accessibility variations within neighbourhoods. The research contributes to Sustainable Development Goal 11's vision of inclusive, sustainable cities by providing tools for understanding how different populations experience identical urban spaces, supporting evidence-based policy development that addresses accessibility gaps. 
+ oai:arXiv.org:2512.08941v1 + cs.HC cs.CY - cs.CL - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tong Xia, Jiankun Zhang, Ruiwen You, Ao Xu, Linghao Zhang, Tengyao Tu, Jingzhi Wang, Jinghua Piao, Yunke Zhang, Fengli Xu, Yong Li + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Prabhanjana Ghuriki, S. Chanti, Jossy P George - SABER: Small Actions, Big Errors - Safeguarding Mutating Steps in LLM Agents - https://arxiv.org/abs/2512.07850 - arXiv:2512.07850v1 Announce Type: new -Abstract: Despite rapid progress in LLM agents, performance on long-horizon, tool-using tasks remains fragile. To better understand this fragility, we ask a simple question: \emph{do all actions contribute equally to failure?} Analyzing execution traces on $\tau$-Bench (Airline/Retail) and SWE-Bench Verified, we decompose trajectories into \emph{mutating} (environment-changing) vs.\ non-mutating steps and formalize \emph{decisive deviations}, earliest action-level divergences that flip success to failure. A logistic regression reveals that each additional deviation in a mutating action reduces the odds of success by up to $92\%$ on Airline and up to $96\%$ on Retail for SoTA models. In contrast, deviations in non-mutating actions have little to no effect. Errors also grow with context length as agents drift from role and act on stale constraints. Motivated by these observations, we introduce \cm{}, a model-agnostic, gradient-free, test-time safeguard that (i) adds mutation-gated verification, (ii) injects \emph{Targeted Reflection} before mutating steps, and (iii) performs block-based context cleaning. \cm{} delivers consistent gains, e.g., Qwen3-Thinking: +28\% \emph{relative} on Airline, +11\% on Retail, and +7\% on SWE-Bench Verified; Claude: +9\%/+7\%. We further identify ceiling effects in $\tau$-Bench, where annotation errors and underspecified tasks artificially cap model performance. 
To address this, we release $\tau$-Bench Verified, which restores benchmark headroom through targeted revisions. Our results argue for action-level analysis, targeted safeguards, and reliable evaluations as prerequisites for robust multi-turn agents. - oai:arXiv.org:2512.07850v1 - cs.LG + Beyond Technical Debt: How AI-Assisted Development Creates Comprehension Debt in Resource-Constrained Indie Teams + https://arxiv.org/abs/2512.08942 + arXiv:2512.08942v1 Announce Type: new +Abstract: Junior indie game developers in distributed, part-time teams lack production frameworks suited to their specific context, as traditional methodologies are often inaccessible. This study introduces the CIGDI (Co-Intelligence Game Development Ideation) Framework, an alternative approach for integrating AI tools to address persistent challenges of technical debt, coordination, and burnout. + The framework emerged from a three-month reflective practice and autoethnographic study of a three-person distributed team developing the 2D narrative game "The Worm's Memoirs". Based on analysis of development data (N=157 Jira tasks, N=333 GitHub commits, N=13+ Miro boards, N=8 reflection sessions), CIGDI is proposed as a seven-stage iterative process structured around human-in-the-loop decision points (Priority Criteria and Timeboxing). + While AI support democratized knowledge access and reduced cognitive load, our analysis identified a significant challenge: "comprehension debt." We define this as a novel form of technical debt where AI helps teams build systems more sophisticated than their independent skill level can create or maintain. This paradox (possessing functional systems the team incompletely understands) creates fragility and AI dependency, distinct from traditional code quality debt. 
+ This work contributes a practical production framework for resource-constrained teams and identifies critical questions about whether AI assistance constitutes a learning ladder or a dependency trap for developer skill. + oai:arXiv.org:2512.08942v1 + cs.HC cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Alejandro Cuadron, Pengfei Yu, Yang Liu, Arpit Gupta + http://creativecommons.org/publicdomain/zero/1.0/ + Yujie Zhang - GPU Memory Prediction for Multimodal Model Training - https://arxiv.org/abs/2512.07853 - arXiv:2512.07853v1 Announce Type: new -Abstract: As deep learning models in agentic AI systems grow in scale and complexity, GPU memory requirements increase and often exceed the available GPU memory capacity, causing out-of-memory (OoM) errors. It is well known that an OoM error interrupts the entire training run and wastes substantial computational resources. Therefore, to prevent OoM, accurate prediction of GPU memory usage is essential. However, previous studies focus only on unimodal architectures and fail to generalize to multimodal models, even though multimodal models are a common choice in agentic AI systems. To address this limitation, we propose a framework that predicts the peak GPU memory usage by analyzing the model architecture and training behavior of multimodal models. Specifically, the framework decomposes the multimodal model into its constituent layers and applies factorization to estimate the memory usage of each layer. Our evaluation shows that our framework achieves high prediction accuracy, with an average MAPE of ~8.7%. 
- oai:arXiv.org:2512.07853v1 - cs.LG + Noise-Robust Abstractive Compression in Retrieval-Augmented Language Models + https://arxiv.org/abs/2512.08943 + arXiv:2512.08943v1 Announce Type: new +Abstract: Abstractive compression utilizes smaller language models to condense query-relevant context, reducing computational costs in retrieval-augmented generation (RAG). However, retrieved documents often include information that is either irrelevant to answering the query or misleading due to factually incorrect content, despite having high relevance scores. This behavior indicates that abstractive compressors are more likely to omit important information essential for the correct answer, especially in long contexts where attention dispersion occurs. To address this issue, we categorize retrieved documents in a more fine-grained manner and propose Abstractive Compression Robust against Noise (ACoRN), which introduces two novel training steps. First, we use offline data augmentation on the training dataset to enhance compressor robustness against two distinct types of retrieval noise. Second, since the language-model-based compressor cannot fully utilize information from multiple retrieved documents and exhibits positional bias, we perform fine-tuning to generate summaries centered around key information that directly supports the correct answer. Our experiments demonstrate that T5-large, trained with ACoRN as a compressor, improves EM and F1 scores while preserving the answer string, which could serve as direct evidence. ACoRN excels on datasets with many accuracy-reducing documents, making it highly useful in real-world scenarios. 
+ oai:arXiv.org:2512.08943v1 + cs.CL cs.AI - cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jinwoo Jeong, Minchul Kang, Younghun Go, Changyong Shin, Hyunho Lee, Junho Yoon, Gyeongsik Yang, Chuck Yoo + Singon Kim - HSTMixer: A Hierarchical MLP-Mixer for Large-Scale Traffic Forecasting - https://arxiv.org/abs/2512.07854 - arXiv:2512.07854v1 Announce Type: new -Abstract: Traffic forecasting task is significant to modern urban management. Recently, there is growing attention on large-scale forecasting, as it better reflects the complexity of real-world traffic networks. However, existing models often exhibit quadratic computational complexity, making them impractical for large-scale real-world scenarios. In this paper, we propose a novel framework, Hierarchical Spatio-Temporal Mixer (HSTMixer), which leverages an all-MLP architecture for efficient and effective large-scale traffic forecasting. HSTMixer employs a hierarchical spatiotemporal mixing block to extract multi-resolution features through bottom-up aggregation and top-down propagation. Furthermore, an adaptive region mixer generates transformation matrices based on regional semantics, enabling our model to dynamically capture evolving spatiotemporal patterns for different regions. Extensive experiments conducted on four large-scale real-world datasets demonstrate that the proposed method not only achieves state-of-the-art performance but also exhibits competitive computational efficiency. 
- oai:arXiv.org:2512.07854v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Enhancing Reliability across Short and Long-Form QA via Reinforcement Learning + https://arxiv.org/abs/2512.08944 + arXiv:2512.08944v1 Announce Type: new +Abstract: While reinforcement learning has unlocked unprecedented complex reasoning in large language models, it has also amplified their propensity for hallucination, creating a critical trade-off between capability and reliability. This work confronts this challenge by introducing a targeted RL framework designed to mitigate both intrinsic and extrinsic hallucinations across short and long-form question answering. We address extrinsic hallucinations (flawed internal knowledge) by creating a novel training set from open-ended conversions of TriviaQA. Concurrently, we tackle intrinsic hallucinations (unfaithfulness to context) by leveraging long-form texts from FineWeb in a fact-grounding reward scheme. To further bolster reliability, our framework explicitly rewards the model for refusing to answer unanswerable questions, thereby cultivating crucial cautiousness. Extensive experiments demonstrate that our methodology yields significant performance gains across a diverse suite of benchmarks, substantially reducing both hallucination types. Ultimately, this research contributes a practical framework for resolving the critical tension between advanced reasoning and factual trustworthiness, paving the way for more capable and reliable large language models. 
+ oai:arXiv.org:2512.08944v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yongyao Wang, Jingyuan Wang, Xie Yu, Jiahao Ji, Chao Li + http://creativecommons.org/licenses/by/4.0/ + Yudong Wang, Zhe Yang, Wenhan Ma, Zhifang Sui, Liang Zhao - LAPA: Log-Domain Prediction-Driven Dynamic Sparsity Accelerator for Transformer Model - https://arxiv.org/abs/2512.07855 - arXiv:2512.07855v1 Announce Type: new -Abstract: Attention-based Transformers have revolutionized natural language processing (NLP) and shown strong performance in computer vision (CV) tasks. However, as the input sequence varies, the computational bottlenecks in Transformer models exhibit dynamic behavior across stages, which calls for a cross-stage sparse acceleration strategy. Unfortunately, most existing sparse Transformer approaches are single-stage based, and their sparsity prediction mechanisms lead to significant power overhead when applied across multiple stages. To this end, this paper proposes a log-domain attention prediction algorithm-architecture co-design, named LAPA. First, an asymmetric leading one computing (ALOC) scheme is designed to eliminate expensive multiplications. Next, a mixed-precision multi-round shifting accumulation (MRSA) mechanism is further proposed to mitigate the accumulation overhead. A data-feature dependent filter (DDF) strategy is designed to work in concert with the MRSA process. Finally, an elaborate accelerator is designed to translate the theoretical enhancement into practical hardware improvement. Experimental results show that LAPA achieves 3.52x, 3.24x and 2.79x higher energy efficiency than the state-of-the-art (SOTA) works Spatten, Sanger and FACT, respectively. 
- oai:arXiv.org:2512.07855v1 - cs.LG - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + The Linguistic Architecture of Reflective Thought: Evaluation of a Large Language Model as a Tool to Isolate the Formal Structure of Mentalization + https://arxiv.org/abs/2512.08945 + arXiv:2512.08945v1 Announce Type: new +Abstract: Background: Mentalization integrates cognitive, affective, and intersubjective components. Large Language Models (LLMs) display an increasing ability to generate reflective texts, raising questions regarding the relationship between linguistic form and mental representation. This study assesses the extent to which a single LLM can reproduce the linguistic structure of mentalization according to the parameters of Mentalization-Based Treatment (MBT). + Methods: Fifty dialogues were generated between human participants and an LLM configured in standard mode. Five psychiatrists trained in MBT, working under blinded conditions, evaluated the mentalization profiles produced by the model along the four MBT axes, assigning Likert-scale scores for evaluative coherence, argumentative coherence, and global quality. Inter-rater agreement was estimated using ICC(3,1). + Results: Mean scores (3.63-3.98) and moderate standard deviations indicate a high level of structural coherence in the generated profiles. ICC values (0.60-0.84) show substantial-to-high agreement among raters. The model proved more stable in the Implicit-Explicit and Self-Other dimensions, while presenting limitations in the integration of internal states and external contexts. The profiles were coherent and clinically interpretable yet characterized by affective neutrality. 
+ oai:arXiv.org:2512.08945v1 + cs.CL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Huizheng Wang, Hongbin Wang, Shaojun Wei, Yang Hu, Shouyi Yin + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Stefano Epifani (University of Pavia Italy, Digital Transformation Institute Italy), Giuliano Castigliego (Digital Transformation Institute Italy, Psychoanalytic Academy of Italian-Speaking Switzerland), Laura Kecskemeti (Psychiatric Services of the Canton of Grisons Switzerland), Giuliano Razzicchia (Digital Transformation Institute Italy), Elisabeth Seiwald-Sonderegger (Psychiatric Services of the Canton of Grisons Switzerland) - Medical Test-free Disease Detection Based on Big Data - https://arxiv.org/abs/2512.07856 - arXiv:2512.07856v1 Announce Type: new -Abstract: Accurate disease detection is of paramount importance for effective medical treatment and patient care. However, the process of disease detection is often associated with extensive medical testing and considerable costs, making it impractical to perform all possible medical tests on a patient to diagnose or predict hundreds or thousands of diseases. In this work, we propose Collaborative Learning for Disease Detection (CLDD), a novel graph-based deep learning model that formulates disease detection as a collaborative learning task by exploiting associations among diseases and similarities among patients adaptively. CLDD integrates patient-disease interactions and demographic features from electronic health records to detect hundreds or thousands of diseases for every patient, with little to no reliance on the corresponding medical tests. Extensive experiments on a processed version of the MIMIC-IV dataset comprising 61,191 patients and 2,000 diseases demonstrate that CLDD consistently outperforms representative baselines across multiple metrics, achieving a 6.33\% improvement in recall and 7.63\% improvement in precision. 
Furthermore, case studies on individual patients illustrate that CLDD can successfully recover masked diseases within its top-ranked predictions, demonstrating both interpretability and reliability in disease prediction. By reducing diagnostic costs and improving accessibility, CLDD holds promise for large-scale disease screening and social health security. - oai:arXiv.org:2512.07856v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Large Language Models as Search Engines: Societal Challenges + https://arxiv.org/abs/2512.08946 + arXiv:2512.08946v1 Announce Type: new +Abstract: Large Language Models (LLMs) may one day replace search engines as the primary portal to information on the Web. In this article, we investigate the societal challenges that such a change could bring. We focus on the roles of LLM Providers, Content Creators, and End Users, and identify 15 types of challenges. With each, we show current mitigation strategies -- both from the technical perspective and the legal perspective. We also discuss the impact of each challenge and point out future research opportunities. + oai:arXiv.org:2512.08946v1 + cs.CY + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haokun Zhao, Yingzhe Bai, Qingyang Xu, Lixin Zhou, Jianxin Chen, Jicong Fan + http://creativecommons.org/licenses/by/4.0/ + SIGIR Forum 2025 + Zacchary Sadeddine, Winston Maxwell, Ga\"el Varoquaux, Fabian M. Suchanek - SA^2GFM: Enhancing Robust Graph Foundation Models with Structure-Aware Semantic Augmentation - https://arxiv.org/abs/2512.07857 - arXiv:2512.07857v1 Announce Type: new -Abstract: We present Graph Foundation Models (GFMs) which have made significant progress in various tasks, but their robustness against domain noise, structural perturbations, and adversarial attacks remains underexplored. A key limitation is the insufficient modeling of hierarchical structural semantics, which are crucial for generalization. 
In this paper, we propose SA^2GFM, a robust GFM framework that improves domain-adaptive representations through Structure-Aware Semantic Augmentation. First, we encode hierarchical structural priors by transforming entropy-based encoding trees into structure-aware textual prompts for feature augmentation. The enhanced inputs are processed by a self-supervised Information Bottleneck mechanism that distills robust, transferable representations via structure-guided compression. To address negative transfer in cross-domain adaptation, we introduce an expert adaptive routing mechanism, combining a mixture-of-experts architecture with a null expert design. For efficient downstream adaptation, we propose a fine-tuning module that optimizes hierarchical structures through joint intra- and inter-community structure learning. Extensive experiments demonstrate that SA^2GFM outperforms 9 state-of-the-art baselines in terms of effectiveness and robustness against random noise and adversarial perturbations for node and graph classification. - oai:arXiv.org:2512.07857v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Kinematics Control of Electromagnetic Formation Flight Using Angular-Momentum Conservation Constraint + https://arxiv.org/abs/2512.08949 + arXiv:2512.08949v1 Announce Type: new +Abstract: Electromagnetic formation flight (EMFF) uses the electromagnetic force to control the relative positions of multiple satellites without using conventional fuel-based propulsion. To compensate for the electromagnetic torque generated alongside the electromagnetic force, in most previous studies, all satellites were assumed to have reaction wheels (RWs) besides electromagnetic coils. However, the RW-loaded angular momentum becomes non-uniformly distributed among the satellites, because the electromagnetic torque usually differs between satellites. 
Without a proper control scheme, this deviation increases over time, and the RWs become saturated quickly, preventing the attitudes of the satellites from being controlled. In this study, a new controller is proposed that enables the electromagnetic force and torque to be controlled simultaneously. The EMFF kinematics derived from the conservation of angular momentum are used for the controller design. This controller can control $n$ satellites without saturating the RWs, and only one set of RWs is required among all satellites. The combination of the proposed controller with a simple unloading control exclusive to the chief satellite results in the elimination of the accumulation of angular momentum in the entire system. The effectiveness of the proposed controller is demonstrated through numerical simulations of the formation maintenance and formation reconfiguration of a five-satellite system. + oai:arXiv.org:2512.08949v1 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junhua Shi, Qingyun Sun, Haonan Yuan, Xingcheng Fu + 10.2514/1.G005873 + Yuta Takahashi, Hiraku Sakamoto, Shin-ichiro Sakai - FAIM: Frequency-Aware Interactive Mamba for Time Series Classification - https://arxiv.org/abs/2512.07858 - arXiv:2512.07858v1 Announce Type: new -Abstract: Time series classification (TSC) is crucial in numerous real-world applications, such as environmental monitoring, medical diagnosis, and posture recognition. TSC tasks require models to effectively capture discriminative information for accurate class identification. Although deep learning architectures excel at capturing temporal dependencies, they often suffer from high computational cost, sensitivity to noise perturbations, and susceptibility to overfitting on small-scale datasets. To address these challenges, we propose FAIM, a lightweight Frequency-Aware Interactive Mamba model. 
Specifically, we introduce an Adaptive Filtering Block (AFB) that leverages Fourier Transform to extract frequency-domain features from time series data. The AFB incorporates learnable adaptive thresholds to dynamically suppress noise and employs element-wise coupling of global and local semantic adaptive filtering, enabling in-depth modeling of the synergy among different frequency components. Furthermore, we design an Interactive Mamba Block (IMB) to facilitate efficient multi-granularity information interaction, balancing the extraction of fine-grained discriminative features and comprehensive global contextual information, thereby endowing FAIM with powerful and expressive representations for TSC tasks. Additionally, we incorporate a self-supervised pre-training mechanism to enhance FAIM's understanding of complex temporal patterns and improve its robustness across various domains and high-noise scenarios. Extensive experiments on multiple benchmarks demonstrate that FAIM consistently outperforms existing state-of-the-art (SOTA) methods, achieving a superior trade-off between accuracy and efficiency and exhibits outstanding performance. - oai:arXiv.org:2512.07858v1 + Optimizing Algorithms for Mobile Health Interventions with Active Querying Optimization + https://arxiv.org/abs/2512.08950 + arXiv:2512.08950v1 Announce Type: new +Abstract: Reinforcement learning in mobile health (mHealth) interventions requires balancing intervention efficacy with user burden, particularly when state measurements (for example, user surveys or feedback) are costly yet essential. The Act-Then-Measure (ATM) heuristic addresses this challenge by decoupling control and measurement actions within the Action-Contingent Noiselessly Observable Markov Decision Process (ACNO-MDP) framework. However, the standard ATM algorithm relies on a temporal-difference-inspired Q-learning method, which is prone to instability in sparse and noisy environments. 
In this work, we propose a Bayesian extension to ATM that replaces standard Q-learning with a Kalman filter-style Bayesian update, maintaining uncertainty-aware estimates of Q-values and enabling more stable and sample-efficient learning. We evaluate our method in both toy environments and clinically motivated testbeds. In small, tabular environments, Bayesian ATM achieves comparable or improved scalarized returns with substantially lower variance and more stable policy behavior. In contrast, in larger and more complex mHealth settings, both the standard and Bayesian ATM variants perform poorly, suggesting a mismatch between ATM's modeling assumptions and the structural challenges of real-world mHealth domains. These findings highlight the value of uncertainty-aware methods in low-data settings while underscoring the need for new RL algorithms that explicitly model causal structure, continuous states, and delayed feedback under observation cost constraints. + oai:arXiv.org:2512.08950v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Da Zhang, Bingyu Li, Zhiyuan Zhao, Yanhan Zhang, Junyu Gao, Feiping Nie, Xuelong Li + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Aseel Rawashdeh - SetAD: Semi-Supervised Anomaly Learning in Contextual Sets - https://arxiv.org/abs/2512.07863 - arXiv:2512.07863v1 Announce Type: new -Abstract: Semi-supervised anomaly detection (AD) has shown great promise by effectively leveraging limited labeled data. However, existing methods are typically structured around scoring individual points or simple pairs. Such {point- or pair-centric} view not only overlooks the contextual nature of anomalies, which are defined by their deviation from a collective group, but also fails to exploit the rich supervisory signals that can be generated from the combinatorial composition of sets. 
Consequently, such models struggle to exploit the high-order interactions within the data, which are critical for learning discriminative representations. To address these limitations, we propose SetAD, a novel framework that reframes semi-supervised AD as a Set-level Anomaly Detection task. SetAD employs an attention-based set encoder trained via a graded learning objective, where the model learns to quantify the degree of anomalousness within an entire set. This approach directly models the complex group-level interactions that define anomalies. Furthermore, to enhance robustness and score calibration, we propose a context-calibrated anomaly scoring mechanism, which assesses a point's anomaly score by aggregating its normalized deviations from peer behavior across multiple, diverse contextual sets. Extensive experiments on 10 real-world datasets demonstrate that SetAD significantly outperforms state-of-the-art models. Notably, we show that our model's performance consistently improves with increasing set size, providing strong empirical support for the set-based formulation of anomaly detection. - oai:arXiv.org:2512.07863v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + AI Co-Artist: A LLM-Powered Framework for Interactive GLSL Shader Animation Evolution + https://arxiv.org/abs/2512.08951 + arXiv:2512.08951v1 Announce Type: new +Abstract: Creative coding and real-time shader programming are at the forefront of interactive digital art, enabling artists, designers, and enthusiasts to produce mesmerizing, complex visual effects that respond to real-time stimuli such as sound or user interaction. However, despite the rich potential of tools like GLSL, the steep learning curve and requirement for programming fluency pose substantial barriers for newcomers and even experienced artists who may not have a technical background. 
In this paper, we present AI Co-Artist, a novel interactive system that harnesses the capabilities of large language models (LLMs), specifically GPT-4, to support the iterative evolution and refinement of GLSL shaders through a user-friendly, visually-driven interface. Drawing inspiration from the user-guided evolutionary principles pioneered by the Picbreeder platform, our system empowers users to evolve shader art using intuitive interactions, without needing to write or understand code. AI Co-Artist serves as both a creative companion and a technical assistant, allowing users to explore a vast generative design space of real-time visual art. Through comprehensive evaluations, including structured user studies and qualitative feedback, we demonstrate that AI Co-Artist significantly reduces the technical threshold for shader creation, enhances creative outcomes, and supports a wide range of users in producing professional-quality visual effects. Furthermore, we argue that this paradigm is broadly generalizable. By leveraging the dual strengths of LLMs-semantic understanding and program synthesis, our method can be applied to diverse creative domains, including website layout generation, architectural visualizations, product prototyping, and infographics. + oai:arXiv.org:2512.08951v1 + cs.NE + cs.AI + cs.GR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jianling Gao, Chongyang Tao, Xuelian Lin, Junfeng Liu, Shuai Ma + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Kamer Ali Yuksel, Hassan Sawaf - Pattern Recognition of Ozone-Depleting Substance Exports in Global Trade Data - https://arxiv.org/abs/2512.07864 - arXiv:2512.07864v1 Announce Type: new -Abstract: New methods are needed to monitor environmental treaties, like the Montreal Protocol, by reviewing large, complex customs datasets. 
This paper introduces a framework using unsupervised machine learning to systematically detect suspicious trade patterns and highlight activities for review. Our methodology, applied to 100,000 trade records, combines several ML techniques. Unsupervised Clustering (K-Means) discovers natural trade archetypes based on shipment value and weight. Anomaly Detection (Isolation Forest and IQR) identifies rare "mega-trades" and shipments with commercially unusual price-per-kilogram values. This is supplemented by Heuristic Flagging to find tactics like vague shipment descriptions. These layers are combined into a priority score, which successfully identified 1,351 price outliers and 1,288 high-priority shipments for customs review. A key finding is that high-priority commodities show a different and more valuable value-to-weight ratio than general goods. This was validated using Explainable AI (SHAP), which confirmed vague descriptions and high value as the most significant risk predictors. The model's sensitivity was validated by its detection of a massive spike in "mega-trades" in early 2021, correlating directly with the real-world regulatory impact of the US AIM Act. This work presents a repeatable unsupervised learning pipeline to turn raw trade data into prioritized, usable intelligence for regulatory groups. - oai:arXiv.org:2512.07864v1 + Learning When to Ask: Simulation-Trained Humanoids for Mental-Health Diagnosis + https://arxiv.org/abs/2512.08952 + arXiv:2512.08952v1 Announce Type: new +Abstract: Testing humanoid robots with users is slow, causes wear, and limits iteration and diversity. Yet screening agents must master conversational timing, prosody, backchannels, and what to attend to in faces and speech for Depression and PTSD. Most simulators omit policy learning with nonverbal dynamics; many controllers chase task accuracy while underweighting trust, pacing, and rapport. We virtualise the humanoid as a conversational agent to train without hardware burden. 
Our agent-centred, simulation-first pipeline turns interview data into 276 Unreal Engine MetaHuman patients with synchronised speech, gaze/face, and head-torso poses, plus PHQ-8 and PCL-C flows. A perception-fusion-policy loop decides what and when to speak, when to backchannel, and how to avoid interruptions, under a safety shield. Training uses counterfactual replay (bounded nonverbal perturbations) and an uncertainty-aware turn manager that probes to reduce diagnostic ambiguity. Results are simulation-only; the humanoid is the transfer target. In a comparison of three controllers, a custom TD3 (Twin Delayed DDPG) outperformed PPO and CEM, achieving near-ceiling coverage with steadier pace at comparable rewards. Decision-quality analyses show negligible turn overlap, aligned cut timing, fewer clarification prompts, and shorter waits. Performance stays stable under modality dropout and a renderer swap, and rankings hold on a held-out patient split. Contributions: (1) an agent-centred simulator that turns interviews into 276 interactive patients with bounded nonverbal counterfactuals; (2) a safe learning loop that treats timing and rapport as first-class control variables; (3) a comparative study (TD3 vs PPO/CEM) with clear gains in completeness and social timing; and (4) ablations and robustness analyses explaining the gains and enabling clinician-supervised humanoid pilots. 
+ oai:arXiv.org:2512.08952v1 cs.LG - econ.EM - econ.GN - q-fin.EC - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + cs.HC + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Muhammad Sukri Bin Ramli + Filippo Cenacchi, Deborah Richards, Longbing Cao - Using Text-Based Life Trajectories from Swedish Register Data to Predict Residential Mobility with Pretrained Transformers - https://arxiv.org/abs/2512.07865 - arXiv:2512.07865v1 Announce Type: new -Abstract: We transform large-scale Swedish register data into textual life trajectories to address two long-standing challenges in data analysis: high cardinality of categorical variables and inconsistencies in coding schemes over time. Leveraging this uniquely comprehensive population register, we convert register data from 6.9 million individuals (2001-2013) into semantically rich texts and predict individuals' residential mobility in later years (2013-2017). These life trajectories combine demographic information with annual changes in residence, work, education, income, and family circumstances, allowing us to assess how effectively such sequences support longitudinal prediction. We compare multiple NLP architectures (including LSTM, DistilBERT, BERT, and Qwen) and find that sequential and transformer-based models capture temporal and semantic structure more effectively than baseline models. The results show that textualized register data preserves meaningful information about individual pathways and supports complex, scalable modeling. Because few countries maintain longitudinal microdata with comparable coverage and precision, this dataset enables analyses and methodological tests that would be difficult or impossible elsewhere, offering a rigorous testbed for developing and evaluating new sequence-modeling approaches. 
Overall, our findings demonstrate that combining semantically rich register data with modern language models can substantially advance longitudinal analysis in the social sciences.
 - oai:arXiv.org:2512.07865v1
 cs.LG
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Philipp Stark, Alexandros Sopasakis, Ola Hall, Markus Grillitsch


 + SimClinician: A Multimodal Simulation Testbed for Reliable Psychologist AI Collaboration in Mental Health Diagnosis
 + https://arxiv.org/abs/2512.08953
 + arXiv:2512.08953v1 Announce Type: new
+Abstract: AI-based mental health diagnosis is often judged by benchmark accuracy, yet in practice its value depends on how psychologists respond: whether they accept, adjust, or reject AI suggestions. Mental health makes this especially challenging: decisions are continuous and shaped by cues in tone, pauses, word choice, and nonverbal behaviors of patients. Current research rarely examines how AI diagnosis interface design influences these choices, leaving little basis for reliable testing before live studies. We present SimClinician, an interactive simulation platform that transforms patient data into psychologist-AI collaborative diagnosis. Contributions include: (1) a dashboard integrating audio, text, and gaze-expression patterns; (2) an avatar module rendering de-identified dynamics for analysis; (3) a decision layer that maps AI outputs to multimodal evidence, letting psychologists review AI reasoning and enter a diagnosis. Tested on the E-DAIC corpus (276 clinical interviews, expanded to 480,000 simulations), SimClinician shows that a confirmation step raises acceptance by 23% while keeping escalations below 9% and maintaining smooth interaction flow. 
+ oai:arXiv.org:2512.08953v1 + cs.HC + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Philipp Stark, Alexandros Sopasakis, Ola Hall, Markus Grillitsch + http://creativecommons.org/licenses/by/4.0/ + Filippo Cenacchi, Longbing Cao, Deborah Richards - Command & Control (C2) Traffic Detection Via Algorithm Generated Domain (Dga) Classification Using Deep Learning And Natural Language Processing - https://arxiv.org/abs/2512.07866 - arXiv:2512.07866v1 Announce Type: new -Abstract: The sophistication of modern malware, specifically regarding communication with Command and Control (C2) servers, has rendered static blacklist-based defenses obsolete. The use of Domain Generation Algorithms (DGA) allows attackers to generate thousands of dynamic addresses daily, hindering blocking by traditional firewalls. This paper aims to propose and evaluate a method for detecting DGA domains using Deep Learning and Natural Language Processing (NLP) techniques. The methodology consisted of collecting a hybrid database containing 50,000 legitimate and 50,000 malicious domains, followed by the extraction of lexical features and the training of a Recurrent Neural Network (LSTM). Results demonstrated that while statistical entropy analysis is effective for simple DGAs, the Neural Network approach presents superiority in detecting complex patterns, reaching 97.2% accuracy and reducing the false positive rate in ambiguous lawful traffic scenarios. - oai:arXiv.org:2512.07866v1 + An Electrocardiogram Multi-task Benchmark with Comprehensive Evaluations and Insightful Findings + https://arxiv.org/abs/2512.08954 + arXiv:2512.08954v1 Announce Type: new +Abstract: In the process of patient diagnosis, non-invasive measurements are widely used due to their low risks and quick results. Electrocardiogram (ECG), as a non-invasive method to collect heart activities, is used to diagnose cardiac conditions. 
Analyzing the ECG typically requires domain expertise, which is a roadblock to applying artificial intelligence (AI) for healthcare. Through advances in self-supervised learning and foundation models, AI systems can now acquire and leverage domain knowledge without relying solely on human expertise. However, there is a lack of comprehensive analyses over the foundation models' performance on ECG. This study aims to answer the research question: "Are Foundation Models Useful for ECG Analysis?" To address it, we evaluate language/general time-series/ECG foundation models in comparison with time-series deep learning models. The experimental results show that general time-series/ECG foundation models achieve a top performance rate of 80%, indicating their effectiveness in ECG analysis. In-depth analyses and insights are provided along with comprehensive experimental results. This study highlights the limitations and potential of foundation models in advancing physiological waveform analysis. The data and code for this benchmark are publicly available at https://github.com/yuhaoxu99/ECGMultitasks-Benchmark. + oai:arXiv.org:2512.08954v1 cs.LG cs.AI - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Maria Milena Araujo Felix + Yuhao Xu, Jiaying Lu, Sirui Ding, Defu Cao, Xiao Hu, Carl Yang - Bayesian Optimization for Function-Valued Responses under Min-Max Criteria - https://arxiv.org/abs/2512.07868 - arXiv:2512.07868v1 Announce Type: new -Abstract: Bayesian optimization is widely used for optimizing expensive black box functions, but most existing approaches focus on scalar responses. In many scientific and engineering settings the response is functional, varying smoothly over an index such as time or wavelength, which makes classical formulations inadequate. Existing methods often minimize integrated error, which captures average performance but neglects worst case deviations. 
To address this limitation we propose min-max Functional Bayesian Optimization (MM-FBO), a framework that directly minimizes the maximum error across the functional domain. Functional responses are represented using functional principal component analysis, and Gaussian process surrogates are constructed for the principal component scores. Building on this representation, MM-FBO introduces an integrated uncertainty acquisition function that balances exploitation of worst case expected error with exploration across the functional domain. We provide two theoretical guarantees: a discretization bound for the worst case objective, and a consistency result showing that as the surrogate becomes accurate and uncertainty vanishes, the acquisition converges to the true min-max objective. We validate the method through experiments on synthetic benchmarks and physics inspired case studies involving electromagnetic scattering by metaphotonic devices and vapor phase infiltration. Results show that MM-FBO consistently outperforms existing baselines and highlights the importance of explicitly modeling functional uncertainty in Bayesian optimization. - oai:arXiv.org:2512.07868v1 + LLM4XCE: Large Language Models for Extremely Large-Scale Massive MIMO Channel Estimation + https://arxiv.org/abs/2512.08955 + arXiv:2512.08955v1 Announce Type: new +Abstract: Extremely large-scale massive multiple-input multiple-output (XL-MIMO) is a key enabler for sixth-generation (6G) networks, offering massive spatial degrees of freedom. Despite these advantages, the coexistence of near-field and far-field effects in hybrid-field channels presents significant challenges for accurate estimation, where traditional methods often struggle to generalize effectively. In recent years, large language models (LLMs) have achieved impressive performance on downstream tasks via fine-tuning, aligning with the semantic communication shift toward task-oriented understanding over bit-level accuracy. 
+ Motivated by this, we propose Large Language Models for XL-MIMO Channel Estimation (LLM4XCE), a novel channel estimation framework that leverages the semantic modeling capabilities of large language models to recover essential spatial-channel representations for downstream tasks. The model integrates a carefully designed embedding module with Parallel Feature-Spatial Attention, enabling deep fusion of pilot features and spatial structures to construct a semantically rich representation for LLM input. By fine-tuning only the top two Transformer layers, our method effectively captures latent dependencies in the pilot data while ensuring high training efficiency. Extensive simulations demonstrate that LLM4XCE significantly outperforms existing state-of-the-art methods under hybrid-field conditions, achieving superior estimation accuracy and generalization performance. + oai:arXiv.org:2512.08955v1 cs.LG cs.AI - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Pouya Ahadi, Reza Marzban, Ali Adibi, Kamran Paynabar + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Renbin Li, Shuangshuang Li, Peihao Dong - LocaGen: Sub-Sample Time-Delay Learning for Beam Localization - https://arxiv.org/abs/2512.07872 - arXiv:2512.07872v1 Announce Type: new -Abstract: The goal of LocaGen is to improve the localization performance of audio signals in the 2-D beam localization problem. LocaGen reduces sampling quantization errors through machine learning models trained on realistic synthetic data generated by a simulation. The system increases the accuracy of both direction-of-arrival (DOA) and precise location estimation of an audio beam from an array of three microphones. We demonstrate LocaGen's efficacy on a low-powered embedded system with an increased localization accuracy with a minimal increase in real-time resource usage. 
LocaGen was demonstrated to reduce DOA error by approximately 67% even with audio processed at a sampling rate of only 10 kHz.
 - oai:arXiv.org:2512.07872v1
 cs.SD
 eess.AS
 eess.SP
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Ishaan Kunwar, Henry Cantor, Tyler Rizzo, Ayaan Qayyum


 + DW-KNN: A Transparent Local Classifier Integrating Distance Consistency and Neighbor Reliability
 + https://arxiv.org/abs/2512.08956
 + arXiv:2512.08956v1 Announce Type: new
+Abstract: K-Nearest Neighbors (KNN) is one of the most widely used ML classifiers. However, if we observe closely, standard distance-weighted KNN and related variants assume all 'k' neighbors are equally reliable. In heterogeneous feature spaces, this becomes a limitation that hinders reliable prediction of an observation's true label.
 + We propose DW-KNN (Double Weighted KNN), a transparent and robust variant that integrates exponential distance with neighbor validity. This enables instance-level interpretability, suppresses noisy or mislabeled samples, and reduces hyperparameter sensitivity.
 + Comprehensive evaluation on 9 datasets demonstrates that DW-KNN achieves 0.8988 accuracy on average. It ranks 2nd among six methods and is within 0.2% of the best-performing Ensemble KNN. It also exhibits the lowest cross-validation variance (0.0156), indicating reliable prediction stability. Statistical significance tests confirmed ($p < 0.001$) the improvement over compactness-weighted KNN (+4.09%) and kernel-weighted KNN (+1.13%). The method provides a simple yet effective alternative to complex adaptive schemes, particularly valuable for high-stakes applications requiring explainable predictions. 
+ oai:arXiv.org:2512.08956v1
 cs.LG
 stat.ML
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Kumarjit Pathak, Karthik K, Sachin Madan, Jitin Kapila


 Advancing physiological time series reconstruction and imputation via mixture of receptive fields and experts fusion
 https://arxiv.org/abs/2512.07873
 arXiv:2512.07873v1 Announce Type: new
Abstract: Recent studies show that using diffusion models for time series signal reconstruction holds great promise. However, such approaches remain largely unexplored in the domain of medical time series. The unique characteristics of the physiological time series signals, such as multivariate, high temporal variability, highly noisy, and artifact-prone, make deep learning-based approaches still challenging for tasks such as imputation. Hence, we propose a novel Mixture of Experts (MoE)-based noise estimator within a score-based diffusion framework. Specifically, the Receptive Field Adaptive MoE (RFAMoE) module is designed to enable each channel to adaptively select desired receptive fields throughout the diffusion process. Moreover, recent literature has found that when generating a physiological signal, performing multiple inferences and averaging the reconstructed signals can effectively reduce reconstruction errors, but at the cost of significant computational and latency overhead. We design a Fusion MoE module and innovatively leverage the nature of MoE module to generate K noise signals in parallel, fuse them using a routing mechanism, and complete signal reconstruction in a single inference step. This design not only improves performance over previous methods but also eliminates the substantial computational cost and latency associated with multiple inference processes. Extensive results demonstrate that our proposed framework consistently outperforms diffusion-based SOTA works on different tasks and datasets. 
- oai:arXiv.org:2512.07873v1
 cs.LG
 cs.AI
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Ci Zhang, Huayu Li, Changdi Yang, Jiangnan Xia, Yanzhi Wang, Xiaolong Ma, Jin Lu, Geng Yuan


 + LUMOS: Large User MOdels for User Behavior Prediction
 + https://arxiv.org/abs/2512.08957
 + arXiv:2512.08957v1 Announce Type: new
+Abstract: User behavior prediction at scale remains a critical challenge for online B2C platforms. Traditional approaches rely heavily on task-specific models and domain-specific feature engineering. This is time-consuming and computationally expensive, requires domain expertise, and is therefore not scalable. We present LUMOS (Large User MOdel Series), a transformer-based architecture that eliminates task-specific models and manual feature engineering by learning multiple tasks jointly using only raw user activity data. LUMOS introduces a novel cross-attention mechanism that conditions predictions on future known events (e.g., holidays, sales, etc.), enabling the model to predict complex behaviour patterns like "how will upcoming holidays affect user engagement?" The architecture also employs multi-modal tokenization, combining user transactions, event context, and static user demographic attributes into rich representations processed through specialized embedding pathways.
 + Through extensive experiments on a production dataset spanning 275 billion user activity tokens from 250 million users, we demonstrate that LUMOS achieves superior performance compared to traditional task-specific models. Across 5 tasks with established baselines, we achieve an average improvement of 0.025 in ROC-AUC for binary classification tasks and a 4.6% reduction in MAPE for regression tasks. Online A/B testing validates that these improvements translate to measurable business impact with a 3.15% increase in Daily Active Users. 
+ oai:arXiv.org:2512.08957v1 cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ci Zhang, Huayu Li, Changdi Yang, Jiangnan Xia, Yanzhi Wang, Xiaolong Ma, Jin Lu, Geng Yuan + http://creativecommons.org/licenses/by/4.0/ + Dhruv Nigam - Controllable risk scenario generation from human crash data for autonomous vehicle testing - https://arxiv.org/abs/2512.07874 - arXiv:2512.07874v1 Announce Type: new -Abstract: Ensuring the safety of autonomous vehicles (AV) requires rigorous testing under both everyday driving and rare, safety-critical conditions. A key challenge lies in simulating environment agents, including background vehicles (BVs) and vulnerable road users (VRUs), that behave realistically in nominal traffic while also exhibiting risk-prone behaviors consistent with real-world accidents. We introduce Controllable Risk Agent Generation (CRAG), a framework designed to unify the modeling of dominant nominal behaviors and rare safety-critical behaviors. CRAG constructs a structured latent space that disentangles normal and risk-related behaviors, enabling efficient use of limited crash data. By combining risk-aware latent representations with optimization-based mode-transition mechanisms, the framework allows agents to shift smoothly and plausibly from safe to risk states over extended horizons, while maintaining high fidelity in both regimes. Extensive experiments show that CRAG improves diversity compared to existing baselines, while also enabling controllable generation of risk scenarios for targeted and efficient evaluation of AV robustness. - oai:arXiv.org:2512.07874v1 + EEG-Bench: A Benchmark for EEG Foundation Models in Clinical Applications + https://arxiv.org/abs/2512.08959 + arXiv:2512.08959v1 Announce Type: new +Abstract: We introduce a unified benchmarking framework focused on evaluating EEG-based foundation models in clinical applications. 
The benchmark spans 11 well-defined diagnostic tasks across 14 publicly available EEG datasets, including epilepsy, schizophrenia, Parkinson's disease, OCD, and mild traumatic brain injury. It features minimal preprocessing, standardized evaluation protocols, and enables side-by-side comparisons of classical baselines and modern foundation models. Our results show that while foundation models achieve strong performance in certain settings, simpler models often remain competitive, particularly under clinical distribution shifts. To facilitate reproducibility and adoption, we release all prepared data and code in an accessible and extensible format. + oai:arXiv.org:2512.08959v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qiujing Lu, Xuanhan Wang, Runze Yuan, Wei Lu, Xinyi Gong, Shuo Feng + http://creativecommons.org/licenses/by/4.0/ + Ard Kastrati, Josua B\"urki, Jonas Lauer, Cheng Xuan, Raffaele Iaquinto, Roger Wattenhofer - Softly Symbolifying Kolmogorov-Arnold Networks - https://arxiv.org/abs/2512.07875 - arXiv:2512.07875v1 Announce Type: new -Abstract: Kolmogorov-Arnold Networks (KANs) offer a promising path toward interpretable machine learning: their learnable activations can be studied individually, while collectively fitting complex data accurately. In practice, however, trained activations often lack symbolic fidelity, learning pathological decompositions with no meaningful correspondence to interpretable forms. We propose Softly Symbolified Kolmogorov-Arnold Networks (S2KAN), which integrate symbolic primitives directly into training. Each activation draws from a dictionary of symbolic and dense terms, with learnable gates that sparsify the representation. Crucially, this sparsification is differentiable, enabling end-to-end optimization, and is guided by a principled Minimum Description Length objective. 
When symbolic terms suffice, S2KAN discovers interpretable forms; when they do not, it gracefully degrades to dense splines. We demonstrate competitive or superior accuracy with substantially smaller models across symbolic benchmarks, dynamical systems forecasting, and real-world prediction tasks, and observe evidence of emergent self-sparsification even without regularization pressure. - oai:arXiv.org:2512.07875v1 + Resolving Conflicts in Lifelong Learning via Aligning Updates in Subspaces + https://arxiv.org/abs/2512.08960 + arXiv:2512.08960v1 Announce Type: new +Abstract: Low-Rank Adaptation (LoRA) enables efficient Continual Learning but often suffers from catastrophic forgetting due to destructive interference between tasks. Our analysis reveals that this degradation is primarily driven by antagonistic directional updates where new task gradients directly oppose the historical weight trajectory. To address this, we propose PS-LoRA (Parameter Stability LoRA), a framework designed to resolve conflicts by aligning updates within the optimization subspace. Our approach employs a dual-regularization objective that penalizes conflicting directions and constrains magnitude deviations to ensure consistency with prior knowledge. Additionally, we implement a magnitude-based merging strategy to consolidate sequential adapters into a robust representation without retraining. Experiments on NLP and Vision benchmarks show that PS-LoRA outperforms state-of-the-art methods by preserving the stability of learned representations while efficiently adapting to new domains. 
+ oai:arXiv.org:2512.08960v1 cs.LG - cs.NE - physics.data-an - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - James Bagrow, Josh Bongard + http://creativecommons.org/licenses/by/4.0/ + Yueer Zhou, Yichen Wu, Ying Wei - Fourier-Enhanced Recurrent Neural Networks for Electrical Load Time Series Downscaling - https://arxiv.org/abs/2512.07876 - arXiv:2512.07876v1 Announce Type: new -Abstract: We present a Fourier-enhanced recurrent neural network (RNN) for downscaling electrical loads. The model combines (i) a recurrent backbone driven by low-resolution inputs, (ii) explicit Fourier seasonal embeddings fused in latent space, and (iii) a self-attention layer that captures dependencies among high-resolution components within each period. Across four PJM territories, the approach yields RMSE lower and flatter horizon-wise than classical Prophet baselines (with and without seasonality/LAA) and than RNN ablations without attention or Fourier features. - oai:arXiv.org:2512.07876v1 + SEA: Spectral Edge Attacks on Graph Neural Networks + https://arxiv.org/abs/2512.08964 + arXiv:2512.08964v1 Announce Type: new +Abstract: Graph Neural Networks (GNNs) achieve strong performance on graph-structured data, but are notoriously vulnerable to small, carefully crafted perturbations of the graph structure. Most existing structure-based attacks rely on gradient-based heuristics or local connectivity patterns, and treat edges as equally important candidates for manipulation. In this paper, we propose Spectral Edge Attacks (SEA), a new family of adversarial attacks that explicitly leverage spectral robustness evaluation to guide structural perturbations. Our key idea is to compute a spectral embedding that captures the most fragile directions of the input manifold and to use it to assign a robustness score to each edge or non-edge. 
Based on these scores, we introduce two complementary attack variants: (i) a Spade-guided deletion attack that removes the most spectrally robust edges, and (ii) a Spade-guided addition attack that inserts edges between nodes that are maximally incompatible in the fragile spectral space. Both attacks operate at the graph level, are model-aware but conceptually simple, and can be plugged into existing GNN architectures without requiring gradients. We describe the spectral formulation, the attack algorithms, and experiments on benchmarks.
 + oai:arXiv.org:2512.08964v1
 cs.LG
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Yongyu Wang


 Artificial Intelligence-Driven Network-on-Chip Design Space Exploration: Neural Network Architectures for Design
 https://arxiv.org/abs/2512.07877
 arXiv:2512.07877v1 Announce Type: new
Abstract: Network-on-Chip (NoC) design requires exploring a high-dimensional configuration space to satisfy stringent throughput requirements and latency constraints. Traditional design space exploration techniques are often slow and struggle to handle complex, non-linear parameter interactions. This work presents a machine learning-driven framework that automates NoC design space exploration using BookSim simulations and reverse neural network models. Specifically, we compare three architectures, a Multi-Layer Perceptron (MLP), a Conditional Diffusion Model, and a Conditional Variational Autoencoder (CVAE), to predict optimal NoC parameters given target performance metrics. Our pipeline generates over 150,000 simulation data points across varied mesh topologies. The Conditional Diffusion Model achieved the highest predictive accuracy, attaining a mean squared error (MSE) of 0.463 on unseen data. Furthermore, the proposed framework reduces design exploration time by several orders of magnitude, making it a practical solution for rapid and scalable 
NoC co-design. - oai:arXiv.org:2512.07877v1 + Financial Instruction Following Evaluation (FIFE) + https://arxiv.org/abs/2512.08965 + arXiv:2512.08965v1 Announce Type: new +Abstract: Language Models (LMs) struggle with complex, interdependent instructions, particularly in high-stakes domains like finance where precision is critical. We introduce FIFE, a novel, high-difficulty benchmark designed to assess LM instruction-following capabilities for financial analysis tasks. FIFE comprises 88 human-authored prompts and employs a verification system with chainable, verifiable constraints for fine-grained reward signals. We evaluate 53 models (proprietary, open-weight, open-source) in a zero-shot setting. Our key findings reveal a clear performance hierarchy: the top open-weight model (76.1 strict / 79.5 loose) surpasses the leading proprietary system (65.9 strict / 70.5 loose), while the best open-source models lag significantly (45.5 strict / 48.9 loose). However, even top-performing models struggle with FIFE's complex requirements, failing to achieve perfect compliance. We release our dataset and code as an open-source resource to promote research in Reinforcement Learning for the financial domain. + oai:arXiv.org:2512.08965v1 cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Amogh Anshu N, Harish BP + Glenn Matlin, Siddharth, Anirudh JM, Aditya Shukla, Yahya Hassan, Sudheer Chava - Graph Contrastive Learning via Spectral Graph Alignment - https://arxiv.org/abs/2512.07878 - arXiv:2512.07878v1 Announce Type: new -Abstract: Given augmented views of each input graph, contrastive learning methods (e.g., InfoNCE) optimize pairwise alignment of graph embeddings across views while providing no mechanism to control the global structure of the view specific graph-of-graphs built from these embeddings. 
We introduce SpecMatch-CL, a novel loss function that aligns the view specific graph-of-graphs by minimizing the difference between their normalized Laplacians. Theoretically, we show that under certain assumptions, the difference between normalized Laplacians provides an upper bound not only for the difference between the ideal Perfect Alignment contrastive loss and the current loss, but also for the uniformity loss. Empirically, SpecMatch-CL establishes a new state of the art on eight TU benchmarks under unsupervised learning and semi-supervised learning at low label rates, and yields consistent gains in transfer learning on PPI-306K and ZINC 2M datasets.
 - oai:arXiv.org:2512.07878v1
 cs.LG
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Manh Nguyen (Department of Statistics, University of Wisconsin-Madison), Joshua Cape (Department of Statistics, University of Wisconsin-Madison)


 + CluCERT: Certifying LLM Robustness via Clustering-Guided Denoising Smoothing
 + https://arxiv.org/abs/2512.08967
 + arXiv:2512.08967v1 Announce Type: new
+Abstract: Recent advancements in Large Language Models (LLMs) have led to their widespread adoption in daily applications. Despite their impressive capabilities, they remain vulnerable to adversarial attacks, as even minor meaning-preserving changes such as synonym substitutions can lead to incorrect predictions. As a result, certifying the robustness of LLMs against such adversarial prompts is of vital importance. Existing approaches have focused on word deletion or simple denoising strategies to achieve robustness certification. However, these methods face two critical limitations: (1) they yield loose robustness bounds due to the lack of semantic validation for perturbed outputs and (2) they suffer from high computational costs due to repeated sampling. To address these limitations, we propose CluCERT, a novel framework for certifying LLM robustness via clustering-guided denoising smoothing. Specifically, to achieve tighter certified bounds, we introduce a semantic clustering filter that reduces noisy samples and retains meaningful perturbations, supported by theoretical analysis. 
Furthermore, we enhance computational efficiency through two mechanisms: a refine module that extracts core semantics, and a fast synonym substitution strategy that accelerates the denoising process. Finally, we conduct extensive experiments on various downstream tasks and jailbreak defense scenarios. Experimental results demonstrate that our method outperforms existing certified approaches in both robustness bounds and computational efficiency. + oai:arXiv.org:2512.08967v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Manh Nguyen (Department of Statistics, University of Wisconsin-Madison), Joshua Cape (Department of Statistics, University of Wisconsin-Madison) + Zixia Wang, Gaojie Jin, Jia Hu, Ronghui Mu - Nonnegative Matrix Factorization through Cone Collapse - https://arxiv.org/abs/2512.07879 - arXiv:2512.07879v1 Announce Type: new -Abstract: Nonnegative matrix factorization (NMF) is a widely used tool for learning parts-based, low-dimensional representations of nonnegative data, with applications in vision, text, and bioinformatics. In clustering applications, orthogonal NMF (ONMF) variants further impose (approximate) orthogonality on the representation matrix so that its rows behave like soft cluster indicators. Existing algorithms, however, are typically derived from optimization viewpoints and do not explicitly exploit the conic geometry induced by NMF: data points lie in a convex cone whose extreme rays encode fundamental directions or "topics". In this work we revisit NMF from this geometric perspective and propose Cone Collapse, an algorithm that starts from the full nonnegative orthant and iteratively shrinks it toward the minimal cone generated by the data. We prove that, under mild assumptions on the data, Cone Collapse terminates in finitely many steps and recovers the minimal generating cone of $\mathbf{X}^\top$ . 
Building on this basis, we then derive a cone-aware orthogonal NMF model (CC-NMF) by applying uni-orthogonal NMF to the recovered extreme rays. Across 16 benchmark gene-expression, text, and image datasets, CC-NMF consistently matches or outperforms strong NMF baselines-including multiplicative updates, ANLS, projective NMF, ONMF, and sparse NMF-in terms of clustering purity. These results demonstrate that explicitly recovering the data cone can yield both theoretically grounded and empirically strong NMF-based clustering methods. - oai:arXiv.org:2512.07879v1 + StructuredDNA: A Bio-Physical Framework for Energy-Aware Transformer Routing + https://arxiv.org/abs/2512.08968 + arXiv:2512.08968v1 Announce Type: new +Abstract: The rapid scaling of large computational models has led to a critical increase in energy and compute costs. Inspired by biological systems where structure and function emerge from low-energy configurations, we introduce StructuredDNA, a sparse architecture framework for modular, energy-aware Transformer routing. StructuredDNA replaces dense Mixture-of-Experts routing with a bio-physical, energy-guided routing layer based on semantic energy minimization. Inputs are dynamically grouped into semantic codons, and routing selects a single expert by minimizing a global energy functional that combines cohesion, uncertainty, and computational cost. + We validate StructuredDNA on both specialized (BioASQ) and open-domain benchmarks (WikiText-103). On BioASQ (K = 50), we achieve a 97.7% reduction in Energy Utilization Density (EUD) and a Semantic Stability Index (SSI) of 0.998. We further demonstrate a Semantic Scaling Law on WikiText-103, showing that the architecture generalizes to open domains by scaling expert granularity (K = 2048) while maintaining more than 99% energy efficiency. StructuredDNA thus establishes a robust, domain-agnostic paradigm for future sparse computational frameworks. 
+ StructuredDNA provides an explicit link between bio-physical principles and sparse expert routing in Transformer architectures, and points toward future energy-aware, modular, and scalable computational systems. We discuss limitations of this proof-of-concept study and outline directions for scaling the approach to larger models, datasets, and hardware platforms. The StructuredDNA implementation is available at https://github.com/InnoDeep-repos/StructuredDNA . + oai:arXiv.org:2512.08968v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Manh Nguyen (Department of Statistics), Daniel Pimentel-Alarc\'on (Department of Biostatistics and Medical Informatics, Wisconsin Institute of Discovery, University of Wisconsin-Madison) + http://creativecommons.org/licenses/by/4.0/ + Mustapha Hamdi - Semi-Supervised Contrastive Learning with Orthonormal Prototypes - https://arxiv.org/abs/2512.07880 - arXiv:2512.07880v1 Announce Type: new -Abstract: Contrastive learning has emerged as a powerful method in deep learning, excelling at learning effective representations through contrasting samples from different distributions. However, dimensional collapse, where embeddings converge into a lower-dimensional space, poses a significant challenge, especially in semi-supervised and self-supervised setups. In this paper, we first identify a critical learning-rate threshold, beyond which standard contrastive losses converge to collapsed solutions. Building on these insights, we propose CLOP, a novel semi-supervised loss function designed to prevent dimensional collapse by promoting the formation of orthogonal linear subspaces among class embeddings. Through extensive experiments on real and synthetic datasets, we demonstrate that CLOP improves performance in image classification and object detection tasks while also exhibiting greater stability across different learning rates and batch sizes. 
- oai:arXiv.org:2512.07880v1 + Learning Robust Representations for Malicious Content Detection via Contrastive Sampling and Uncertainty Estimation + https://arxiv.org/abs/2512.08969 + arXiv:2512.08969v1 Announce Type: new +Abstract: We propose the Uncertainty Contrastive Framework (UCF), a Positive-Unlabeled (PU) representation learning framework that integrates uncertainty-aware contrastive loss, adaptive temperature scaling, and a self-attention-guided LSTM encoder to improve classification under noisy and imbalanced conditions. UCF dynamically adjusts contrastive weighting based on sample confidence, stabilizes training using positive anchors, and adapts temperature parameters to batch-level variability. Applied to malicious content classification, UCF-generated embeddings enable multiple traditional classifiers to achieve more than 93.38% accuracy, precision above 0.93, and near-perfect recall, with minimal false negatives and competitive ROC-AUC scores. Visual analyses confirm clear separation between positive and unlabeled instances, highlighting the framework's ability to produce calibrated, discriminative embeddings. These results position UCF as a robust and scalable solution for PU learning in high-stakes domains such as cybersecurity and biomedical text mining. 
+ oai:arXiv.org:2512.08969v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Huanran Li (Department of Electrical Engineering), Manh Nguyen (Statistics), Daniel Pimentel-Alarc\'on (Biostatistics, Wisconsin Institute of Discovery, University of Wisconsin-Madison) + http://creativecommons.org/licenses/by/4.0/ + Elias Hossain, Umesh Biswas, Charan Gudla, Sai Phani Parsa - GSPN-2: Efficient Parallel Sequence Modeling - https://arxiv.org/abs/2512.07884 - arXiv:2512.07884v1 Announce Type: new -Abstract: Efficient vision transformer remains a bottleneck for high-resolution images and long-video related real-world applications. Generalized Spatial Propagation Network (GSPN) addresses this by replacing quadratic self-attention with a line-scan propagation scheme, bringing the cost close to linear in the number of rows or columns, while retaining accuracy. Despite this advancement, the existing GSPN implementation still suffers from (i) heavy overhead due to repeatedly launching GPU kernels, (ii) excessive data transfers from global GPU memory, and (iii) redundant computations caused by maintaining separate propagation weights for each channel. We introduce GSPN-2, a joint algorithm-system redesign. In particular, we eliminate thousands of micro-launches from the previous implementation into one single 2D kernel, explicitly pin one warp to each channel slice, and stage the previous column's activations in shared memory. On the model side, we introduce a compact channel propagation strategy that replaces per-channel matrices, trimming parameters, and align naturally with the affinity map used in transformer attention. Experiments demonstrate GSPN-2's effectiveness across image classification and text-to-image synthesis tasks, matching transformer-level accuracy with significantly lower computational cost. 
GSPN-2 establishes a new efficiency frontier for modeling global spatial context in vision applications through its unique combination of structured matrix transformations and GPU-optimized implementation. Project page: https://whj363636.github.io/GSPN2/ - oai:arXiv.org:2512.07884v1 - cs.LG + Enhancing Automatic Speech Recognition Through Integrated Noise Detection Architecture + https://arxiv.org/abs/2512.08973 + arXiv:2512.08973v1 Announce Type: new +Abstract: This research presents a novel approach to enhancing automatic speech recognition systems by integrating noise detection capabilities directly into the recognition architecture. Building upon the wav2vec2 framework, the proposed method incorporates a dedicated noise identification module that operates concurrently with speech transcription. Experimental validation using publicly available speech and environmental audio datasets demonstrates substantial improvements in transcription quality and noise discrimination. The enhanced system achieves superior performance in word error rate, character error rate, and noise detection accuracy compared to conventional architectures. Results indicate that joint optimization of transcription and noise classification objectives yields more reliable speech recognition in challenging acoustic conditions. 
+ oai:arXiv.org:2512.08973v1 + cs.SD cs.AI - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + eess.AS + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Hongjun Wang, Yitong Jiang, Collin McCarthy, David Wehr, Hanrong Ye, Xinhao Li, Ka Chun Cheung, Wonmin Byeon, Jinwei Gu, Ke Chen, Kai Han, Hongxu Yin, Pavlo Molchanov, Jan Kautz, Sifei Liu + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Karamvir Singh - ByteStorm: a multi-step data-driven approach for Tropical Cyclones detection and tracking - https://arxiv.org/abs/2512.07885 - arXiv:2512.07885v1 Announce Type: new -Abstract: Accurate tropical cyclones (TCs) tracking represents a critical challenge in the context of weather and climate science. Traditional tracking schemes mainly rely on subjective thresholds, which may introduce biases in their skills on the geographical region of application. We present ByteStorm, an efficient data-driven framework for reconstructing TC tracks without threshold tuning. It leverages deep learning networks to detect TC centers (via classification and localization), using only relative vorticity (850 mb) and mean sea-level pressure. Then, detected centers are linked into TC tracks through the BYTE algorithm. ByteStorm is evaluated against state-of-the-art deterministic trackers in the East- and West-North Pacific basins (ENP and WNP). The proposed framework achieves superior performance in terms of Probability of Detection ($85.05\%$ ENP, $79.48\%$ WNP), False Alarm Rate ($23.26\%$ ENP, $16.14\%$ WNP), and high Inter-Annual Variability correlations ($0.75$ ENP and $0.69$ WNP). These results highlight the potential of integrating deep learning and computer vision for fast and accurate TC tracking, offering a robust alternative to traditional approaches. 
- oai:arXiv.org:2512.07885v1 + Peek-a-Boo Reasoning: Contrastive Region Masking in MLLMs + https://arxiv.org/abs/2512.08976 + arXiv:2512.08976v1 Announce Type: new +Abstract: We introduce Contrastive Region Masking (CRM), a training-free diagnostic that reveals how multimodal large language models (MLLMs) depend on specific visual regions at each step of chain-of-thought (CoT) reasoning. Unlike prior approaches limited to final answers or attention maps, CRM provides causal, step-level attribution by systematically masking annotated regions and contrasting the resulting reasoning traces with unmasked baselines. Applied to datasets such as VisArgs, CRM reveals distinct failure modes: some models preserve reasoning structure, but hallucinate when evidence is missing, while others ground tightly to visual cues yet collapse under perturbations. By shifting the evaluation from correctness of answers to faithfulness of reasoning, CRM reframes visual benchmarks as diagnostic tools, highlighting the need for multimodal evaluation frameworks that measure not just performance, but also robustness and fidelity of reasoning. + oai:arXiv.org:2512.08976v1 cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Davide Donno, Donatello Elia, Gabriele Accarino, Marco De Carlo, Enrico Scoccimarro, Silvio Gualdi + Isha Chaturvedi, Anjana Nair, Yushen Li, Adhitya Rajendra Kumar, Kevin Zhu, Sunishchal Dev, Ashwinee Panda, Vasu Sharma - CrowdLLM: Building LLM-Based Digital Populations Augmented with Generative Models - https://arxiv.org/abs/2512.07890 - arXiv:2512.07890v1 Announce Type: new -Abstract: The emergence of large language models (LLMs) has sparked much interest in creating LLM-based digital populations that can be applied to many applications such as social simulation, crowdsourcing, marketing, and recommendation systems. 
A digital population can reduce the cost of recruiting human participants and alleviate many concerns related to human subject study. However, research has found that most of the existing works rely solely on LLMs and could not sufficiently capture the accuracy and diversity of a real human population. To address this limitation, we propose CrowdLLM that integrates pretrained LLMs and generative models to enhance the diversity and fidelity of the digital population. We conduct theoretical analysis of CrowdLLM regarding its great potential in creating cost-effective, sufficiently representative, scalable digital populations that can match the quality of a real crowd. Comprehensive experiments are also conducted across multiple domains (e.g., crowdsourcing, voting, user rating) and simulation studies which demonstrate that CrowdLLM achieves promising performance in both accuracy and distributional fidelity to human data. - oai:arXiv.org:2512.07890v1 - cs.MA - cs.LG - stat.ME - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Hacia una moderna "republica de las ideas" via un nuevo ecosistema de comunicacion cientifica (Toward a modern "Republic of Ideas" via a new Ecosystem of Scientific Communication) + https://arxiv.org/abs/2512.08977 + arXiv:2512.08977v1 Announce Type: new +Abstract: The contemporary academic ecosystem, heir to the Enlightenment's "Republic of Letters," finds itself in a state of profound and unsustainable crisis. That order, based on the free flow of correspondence and the disinterested pursuit of knowledge, has been supplanted by a system teetering under the weight of its own contradictions. This work embarks on a fundamental redesign to articulate an innovative and coherent framework for scientific communication. To this end, four distinct but complementary schools of thought are synthesized: From Ordoliberalism, we take the rigor of designing an "economic constitution" that prevents the concentration of power and fosters fair competition. 
From Humanistic Economics, we extract the telos, or normative purpose (human flourishing and shared prosperity). From Digital Humanism, we derive the technological ethos, ensuring that the infrastructure serves human dignity. Finally, from Decentralized Science (DeSci), we take the set of architectural tools (smart contracts, DAOs, tokens) capable of building this new order from the ground up. The narrative arc is deliberate: Part 1 offers an anatomy of decay, using the Uddin demand as a scalpel to dissect the economic, institutional, and epistemic pathologies of the current system. Part 2 articulates the philosophical constitution of the new republic. Part 3 details the architectural blueprint and technological infrastructure of this new order. Part 4 subjects this design to rigorous stress tests to forge its resilience and antifragility. Part 5 charts a strategic roadmap for the transition, a plausible path to navigate from the status quo to full community sovereignty. Finally, Part 6 brings the book full circle, returning to the foundational vision. This is a call to action to build the New Republic of Ideas that the pursuit of truth deserves in the 21st century. + oai:arXiv.org:2512.08977v1 + cs.OH + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ryan Feng Lin, Keyu Tian, Hanming Zheng, Congjing Zhang, Li Zeng, Shuai Huang + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Enrique Lopez-Gonzalez + + + Institutional AI Sovereignty Through Gateway Architecture: Implementation Report from Fontys ICT + https://arxiv.org/abs/2512.08978 + arXiv:2512.08978v1 Announce Type: new +Abstract: To counter fragmented, high-risk adoption of commercial AI tools, we built and ran an institutional AI platform in a six-month, 300-user pilot, showing that a university of applied sciences can offer advanced AI with fair access, transparent risks, controlled costs, and alignment with European law. 
+ Commercial AI subscriptions create unequal access and compliance risks through opaque processing and non-EU hosting, yet banning them is neither realistic nor useful. Institutions need a way to provide powerful AI in a sovereign, accountable form. + Our solution is a governed gateway platform with three layers: a ChatGPT-style frontend linked to institutional identity that makes model choice explicit; a gateway core enforcing policy, controlling access and budgets, and routing traffic to EU infrastructure by default; and a provider layer wrapping commercial and open-source models in institutional model cards that consolidate vendor documentation into one governance interface. + The pilot ran reliably with no privacy incidents and strong adoption, enabling EU-default routing, managed spending, and transparent model choices. Only the gateway pattern combines model diversity and rapid innovation with institutional control. + The central insight: AI is not a support function but strategy, demanding dedicated leadership. Sustainable operation requires governance beyond traditional boundaries. We recommend establishing a formal AI Officer role combining technical literacy, governance authority, and educational responsibility. Without it, AI decisions stay ad-hoc and institutional exposure grows. With it, higher-education institutions can realistically operate their own multi-provider AI platform, provided they govern AI as seriously as they teach it. 
+ oai:arXiv.org:2512.08978v1 + cs.CY + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Ruud Huijts, Koen Suilen - Investigating the originality of scientific papers across time and domain: A quantitative analysis - https://arxiv.org/abs/2512.07892 - arXiv:2512.07892v1 Announce Type: new -Abstract: The study of creativity in science has long sought quantitative metrics capable of capturing the originality of the scientific insights contained within articles and other scientific works. In recent years, the field has witnessed a substantial expansion of research activity, enabled by advances in natural language processing and network analysis, and has utilised both macro- and micro-scale approaches with success. However, they often do not examine the text itself for evidence of originality. In this paper, we apply a computational measure correlating with originality from creativity science, Divergent Semantic Integration (DSI), to a set of 51,200 scientific abstracts and titles sourced from the Web of Science. To adapt DSI for application to scientific texts, we advance the original BERT method by incorporating SciBERT (a model trained on scientific corpora) into the computation of DSI. In our study, we observe that DSI plays a more pronounced role in the accrual of early citations for papers with fewer authors, varies substantially across subjects and research fields, and exhibits a declining correlation with citation counts over time. Furthermore, by modelling SciBERT- and BERT-DSI as predictors of the logarithm of 5-year citation counts alongside field, publication year, and the logarithm of author count, we find statistically significant relationships, with adjusted R-squared of 0.103 and 0.101 for BERT-DSI and SciBERT-DSI. 
Because existing scientometric measures rarely assess the originality expressed in textual content, DSI provides a valuable means of directly quantifying the conceptual originality embedded in scientific writing. - oai:arXiv.org:2512.07892v1 - cs.DL - Wed, 10 Dec 2025 00:00:00 -0500 + What Happens When: Learning Temporal Orders of Events in Videos + https://arxiv.org/abs/2512.08979 + arXiv:2512.08979v1 Announce Type: new +Abstract: Video Large Multimodal Models (VLMMs) have shown impressive performance in video understanding, yet their ability to accurately capture the temporal order of multiple events remains underexplored. We interestingly observe that, even when video frames are scrambled, models perform very well on the existing benchmarks by comprehensive experiments. This implies that VLMMs may not necessarily rely on accurate sequential processing of visual events, but instead depend on prior knowledge of typical scenarios to answer the question. To benchmark temporal understanding capabilities in VLMMs, we propose VECTOR, designed to explicitly assess a model's ability to identify the temporal order of events. On this benchmark, we observe that various VLMMs often fail to understand the orders of events. To address this, we propose MECOT (Multi-Event instruction fine-tuning with Chain-of-Thought), which (1) trains models on detailed, event-by-event video descriptions and (2) using chain-of-thought prompts at inference to enhance temporal awareness. MECOT outperforms prior arts on VECTOR as well as improving performance on existing video benchmarks, implying effectiveness of temporal understanding. We release our code, model and datasets. + oai:arXiv.org:2512.08979v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jack H. Culbert Yoed N. 
Kenett, Philipp Mayr + http://creativecommons.org/licenses/by/4.0/ + Daechul Ahn, Yura Choi, Hyeonbeom Choi, Seongwon Cho, San Kim, Jonghyun Choi - MARINE: Theoretical Optimization and Design for Multi-Agent Recursive IN-context Enhancement - https://arxiv.org/abs/2512.07898 - arXiv:2512.07898v1 Announce Type: new -Abstract: Large Language Model (LLM)-based agents demonstrate advanced reasoning capabilities, yet practical constraints frequently limit outputs to single responses, leaving significant performance potential unrealized. This paper introduces MARINE (Multi-Agent Recursive IN-context Enhancement), a theoretically grounded framework that reconceptualizes test-time reasoning as iterative refinement of a persistent reference trajectory, fundamentally departing from conventional one-shot or multi-sample paradigms. The MARINE refinement operator systematically converts a base model's pass@N capabilities into near-optimal pass@1 performance. Rigorous theoretical analysis establishes that minimal feasible batches maximize expected performance gains under fixed invocation budgets, while logarithmically growing batch schedules ensure continuous improvement without computational constraints. Comprehensive evaluation on the BrowserComp-ZH benchmark demonstrates state-of-the-art results, with a 685B-parameter implementation achieving 46.0% pass@1 accuracy. Meanwhile, MARINE establishes a new paradigm for parameter-efficient reasoning: an 80B-parameter model augmented with MARINE matches the performance of standalone 1000B-parameter agents, reducing parameter requirements by over an order of magnitude. Notably, within a fixed computational budget, the proposed MARINE delivers higher-quality samples to alignment and optimization processes than traditional sampling-and-ranking strategies. Consequently, it has great potential to boost post-training efficiency. 
- oai:arXiv.org:2512.07898v1 - cs.MA + Training Multi-Image Vision Agents via End2End Reinforcement Learning + https://arxiv.org/abs/2512.08980 + arXiv:2512.08980v1 Announce Type: new +Abstract: Recent VLM-based agents aim to replicate OpenAI O3's ``thinking with images" via tool use, but most open-source methods limit input to a single image, falling short on real-world multi-image QA tasks. To address this, we propose IMAgent, an open-source vision agent trained via end-to-end reinforcement learning dedicated for complex multi-image tasks. By leveraging a multi-agent system, we generate challenging and visually-rich multi-image QA pairs to fully activate the tool-use potential of the base VLM. Through manual verification, we obtain MIFG-QA, comprising 10k samples for training and evaluation. With deeper reasoning steps, VLMs may increasingly ignore visual inputs. We therefore develop two specialized tools for visual reflection and confirmation, allowing the model to proactively reallocate its attention to image content during inference. Benefiting from our well-designed action-trajectory two-level mask strategy, IMAgent achieves stable tool use behavior via pure RL training without requiring costly supervised fine-tuning data. Extensive experiments demonstrate that IMAgent maintains strong performance on existing single-image benchmarks while achieving substantial improvements on our proposed multi-image dataset, with our analysis providing actionable insights for the research community. Codes and data will be released soon. 
+ oai:arXiv.org:2512.08980v1 + cs.CV cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hongwei Zhang, Ji Lu, Yongsheng Du, Yanqin Gao, Lingjun Huang, Baoli Wang, Fang Tan, Peng Zou + Chengqi Dong, Chuhuai Yue, Hang He, Rongge Mao, Fenghe Tang, S Kevin Zhou, Zekun Xu, Xiaohan Wang, Jiajun Chai, Wei Lin, Guojun Yin - Finding core subgraphs of directed graphs via discrete Ricci curvature flow - https://arxiv.org/abs/2512.07899 - arXiv:2512.07899v1 Announce Type: new -Abstract: Ricci curvature and its associated flow offer powerful geometric methods for analyzing complex networks. While existing research heavily focuses on applications for undirected graphs such as community detection and core extraction, there have been relatively less attention on directed graphs. - In this paper, we introduce a definition of Ricci curvature and an accompanying curvature flow for directed graphs. Crucially, for strongly connected directed graphs, this flow admits a unique global solution. We then apply this flow to detect strongly connected subgraphs from weakly connected directed graphs. (A weakly connected graph is connected overall but not necessarily strongly connected). Unlike prior work requiring graphs to be strongly connected, our method loosens this requirement. We transform a weakly connected graph into a strongly connected one by adding edges with very large artificial weights. This modification does not compromise our core subgraph detection. Due to their extreme weight, these added edges are automatically discarded during the final iteration of the Ricci curvature flow. - For core evaluation, our approach consistently surpasses traditional methods, achieving better results on at least two out of three key metrics. The implementation code is publicly available at https://github.com/12tangze12/Finding-core-subgraphs-on-directed-graphs. 
- oai:arXiv.org:2512.07899v1 - cs.SI - math.CO - Wed, 10 Dec 2025 00:00:00 -0500 + Mitigating Bias with Words: Inducing Demographic Ambiguity in Face Recognition Templates by Text Encoding + https://arxiv.org/abs/2512.08981 + arXiv:2512.08981v1 Announce Type: new +Abstract: Face recognition (FR) systems are often prone to demographic biases, partially due to the entanglement of demographic-specific information with identity-relevant features in facial embeddings. This bias is extremely critical in large multicultural cities, especially where biometrics play a major role in smart city infrastructure. The entanglement can cause demographic attributes to overshadow identity cues in the embedding space, resulting in disparities in verification performance across different demographic groups. To address this issue, we propose a novel strategy, Unified Text-Image Embedding (UTIE), which aims to induce demographic ambiguity in face embeddings by enriching them with information related to other demographic groups. This encourages face embeddings to emphasize identity-relevant features and thus promotes fairer verification performance across groups. UTIE leverages the zero-shot capabilities and cross-modal semantic alignment of Vision-Language Models (VLMs). Given that VLMs are naturally trained to align visual and textual representations, we enrich the facial embeddings of each demographic group with text-derived demographic features extracted from other demographic groups. This encourages a more neutral representation in terms of demographic attributes. We evaluate UTIE using three VLMs, CLIP, OpenCLIP, and SigLIP, on two widely used benchmarks, RFW and BFW, designed to assess bias in FR. Experimental results show that UTIE consistently reduces bias metrics while maintaining, or even improving in several cases, the face verification accuracy. 
+ oai:arXiv.org:2512.08981v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-sa/4.0/ - Juan Zhao, Jicheng Ma, Yunyan Yang, Liang Zhao + Tahar Chettaoui, Naser Damer, Fadi Boutros - The Theory of Strategic Evolution: Games with Endogenous Players and Strategic Replicators - https://arxiv.org/abs/2512.07901 - arXiv:2512.07901v1 Announce Type: new -Abstract: This paper develops the Theory of Strategic Evolution, a general model for systems in which the population of players, strategies, and institutional rules evolve together. The theory extends replicator dynamics to settings with endogenous players, multi level selection, innovation, constitutional change, and meta governance. The central mathematical object is a Poiesis stack: a hierarchy of strategic layers linked by cross level gain matrices. Under small gain conditions, the system admits a global Lyapunov function and satisfies selection, tracking, and stochastic stability results at every finite depth. We prove that the class is closed under block extension, innovation events, heterogeneous utilities, continuous strategy spaces, and constitutional evolution. The closure theorem shows that no new dynamics arise at higher levels and that unrestricted self modification cannot preserve Lyapunov structure. The theory unifies results from evolutionary game theory, institutional design, innovation dynamics, and constitutional political economy, providing a general mathematical model of long run strategic adaptation. - oai:arXiv.org:2512.07901v1 - cs.GT + Consist-Retinex: One-Step Noise-Emphasized Consistency Training Accelerates High-Quality Retinex Enhancement + https://arxiv.org/abs/2512.08982 + arXiv:2512.08982v1 Announce Type: new +Abstract: Diffusion models have achieved remarkable success in low-light image enhancement through Retinex-based decomposition, yet their requirement for hundreds of iterative sampling steps severely limits practical deployment. 
While recent consistency models offer promising one-step generation for \textit{unconditional synthesis}, their application to \textit{conditional enhancement} remains unexplored. We present \textbf{Consist-Retinex}, the first framework adapting consistency modeling to Retinex-based low-light enhancement. Our key insight is that conditional enhancement requires fundamentally different training dynamics than unconditional generation: standard consistency training focuses on low-noise regions near the data manifold, while conditional mapping critically depends on large-noise regimes that bridge degraded inputs to enhanced outputs. We introduce two core innovations: (1) a \textbf{dual-objective consistency loss} combining temporal consistency with ground-truth alignment under randomized time sampling, providing full-spectrum supervision for stable convergence; and (2) an \textbf{adaptive noise-emphasized sampling strategy} that prioritizes training on large-noise regions essential for one-step conditional generation. On VE-LOL-L, Consist-Retinex achieves \textbf{state-of-the-art performance with single-step sampling} (\textbf{PSNR: 25.51 vs. 23.41, FID: 44.73 vs. 49.59} compared to Diff-Retinex++), while requiring only \textbf{1/8 of the training budget} relative to the 1000-step Diff-Retinex baseline. 
+ oai:arXiv.org:2512.08982v1 + cs.CV cs.AI - econ.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Kevin Vallier + http://creativecommons.org/licenses/by/4.0/ + Jian Xu, Wei Chen, Shigui Li, Delu Zeng, John Paisley, Qibin Zhao - Agentic Artificial Intelligence for Ethical Cybersecurity in Uganda: A Reinforcement Learning Framework for Threat Detection in Resource-Constrained Environments - https://arxiv.org/abs/2512.07909 - arXiv:2512.07909v1 Announce Type: new -Abstract: Uganda's rapid digital transformation, supported by national strategies such as Vision 2040 and the Digital Transformation Roadmap, has expanded reliance on networked services while simultaneously increasing exposure to sophisticated cyber threats. In resource-constrained settings, commonly deployed rule-based intrusion detection systems lack the adaptability and ethical safeguards needed to address evolving attack patterns, leading to undetected breaches and excessive blocking of legitimate traffic. This study proposes an Agentic Artificial Intelligence (AAI) framework that integrates reinforcement learning, an explicit ethical governance layer, and human oversight to deliver adaptive and trustworthy cybersecurity. A CPU-optimized simulation environment was developed using a five-node network topology that mirrors key elements of Uganda's critical digital infrastructure and generates both benign and malicious traffic, including phishing, ransomware, and distributed denial-of-service attacks. A Q-learning agent, operating within clearly defined ethical constraints and subject to human auditability, was trained and evaluated against a traditional rule-based baseline. The AAI framework achieved a 100 percent detection rate, zero false positives, and full ethical compliance, compared with 70 percent detection and 15 percent false positives for the baseline system. 
These results demonstrate that agentic, ethically governed reinforcement learning can substantially improve cybersecurity effectiveness and fairness in CPU-only, resource-constrained environments, offering a practical pathway for operationalizing responsible AI in Uganda's national cybersecurity strategy. - oai:arXiv.org:2512.07909v1 - cs.CR - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 + HSCP: A Two-Stage Spectral Clustering Framework for Resource-Constrained UAV Identification + https://arxiv.org/abs/2512.08983 + arXiv:2512.08983v1 Announce Type: new +Abstract: With the rapid development of Unmanned Aerial Vehicles (UAVs) and the increasing complexity of low-altitude security threats, traditional UAV identification methods struggle to extract reliable signal features and meet real-time requirements in complex environments. Recently, deep learning based Radio Frequency Fingerprint Identification (RFFI) approaches have greatly improved recognition accuracy. However, their large model sizes and high computational demands hinder deployment on resource-constrained edge devices. While model pruning offers a general solution for complexity reduction, existing weight, channel, and layer pruning techniques struggle to concurrently optimize compression rate, hardware acceleration, and recognition accuracy. To this end, in this paper, we introduce HSCP, a Hierarchical Spectral Clustering Pruning framework that combines layer pruning with channel pruning to achieve extreme compression, high performance, and efficient inference. In the first stage, HSCP employs spectral clustering guided by Centered Kernel Alignment (CKA) to identify and remove redundant layers. Subsequently, the same strategy is applied to the channel dimension to eliminate a finer redundancy. To ensure robustness, we further employ a noise-robust fine-tuning strategy. Experiments on the UAV-M100 benchmark demonstrate that HSCP outperforms existing channel and layer pruning methods. 
Specifically, HSCP achieves $86.39\%$ parameter reduction and $84.44\%$ FLOPs reduction on ResNet18 while improving accuracy by $1.49\%$ compared to the unpruned baseline, and maintains superior robustness even in low signal-to-noise ratio environments. + oai:arXiv.org:2512.08983v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Ibrahim Adabara, Bashir Olaniyi Sadiq, Aliyu Nuhu Shuaibu, Yale Ibrahim Danjuma, Venkateswarlu Maninti, Mutebi Joe + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Maoyu Wang, Yao Lu, Bo Zhou, Zhuangzhi Chen, Yun Lin, Qi Xuan, Guan Gui - CFD-copilot: leveraging domain-adapted large language model and model context protocol to enhance simulation automation - https://arxiv.org/abs/2512.07917 - arXiv:2512.07917v1 Announce Type: new -Abstract: Configuring computational fluid dynamics (CFD) simulations requires significant expertise in physics modeling and numerical methods, posing a barrier to non-specialists. Although automating scientific tasks with large language models (LLMs) has attracted attention, applying them to the complete, end-to-end CFD workflow remains a challenge due to its stringent domain-specific requirements. We introduce CFD-copilot, a domain-specialized LLM framework designed to facilitate natural language-driven CFD simulation from setup to post-processing. The framework employs a fine-tuned LLM to directly translate user descriptions into executable CFD setups. A multi-agent system integrates the LLM with simulation execution, automatic error correction, and result analysis. For post-processing, the framework utilizes the model context protocol (MCP), an open standard that decouples LLM reasoning from external tool execution. This modular design allows the LLM to interact with numerous specialized post-processing functions through a unified and scalable interface, improving the automation of data extraction and analysis. 
The framework was evaluated on benchmarks including the NACA~0012 airfoil and the three-element 30P-30N airfoil. The results indicate that domain-specific adaptation and the incorporation of the MCP jointly enhance the reliability and efficiency of LLM-driven engineering workflows. 
 - oai:arXiv.org:2512.07917v1
 cs.SE
 cs.AI
 physics.flu-dyn
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Zhehao Dong, Shanghai Du, Zhen Lu, Yue Yang
 
 
 + RAG-HAR: Retrieval Augmented Generation-based Human Activity Recognition
 + https://arxiv.org/abs/2512.08984
 + arXiv:2512.08984v1 Announce Type: new
+Abstract: Human Activity Recognition (HAR) underpins applications in healthcare, rehabilitation, fitness tracking, and smart environments, yet existing deep learning approaches demand dataset-specific training, large labeled corpora, and significant computational resources. We introduce RAG-HAR, a training-free retrieval-augmented framework that leverages large language models (LLMs) for HAR. RAG-HAR computes lightweight statistical descriptors, retrieves semantically similar samples from a vector database, and uses this contextual evidence to perform LLM-based activity identification. We further enhance RAG-HAR by first applying prompt optimization and introducing an LLM-based activity descriptor that generates context-enriched vector databases for delivering accurate and highly relevant contextual information. Along with these mechanisms, RAG-HAR achieves state-of-the-art performance across six diverse HAR benchmarks. Most importantly, RAG-HAR attains these improvements without requiring model training or fine-tuning, emphasizing its robustness and practical applicability. RAG-HAR moves beyond known behaviors, enabling the recognition and meaningful labelling of multiple unseen human activities. 
+ oai:arXiv.org:2512.08984v1 + cs.CV cs.AI - physics.flu-dyn - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Zhehao Dong, Shanghai Du, Zhen Lu, Yue Yang + http://creativecommons.org/licenses/by/4.0/ + Nirhoshan Sivaroopan, Hansi Karunarathna, Chamara Madarasingha, Anura Jayasumana, Kanchana Thilakarathna - DeepCode: Open Agentic Coding - https://arxiv.org/abs/2512.07921 - arXiv:2512.07921v1 Announce Type: new -Abstract: Recent advances in large language models (LLMs) have given rise to powerful coding agents, making it possible for code assistants to evolve into code engineers. However, existing methods still face significant challenges in achieving high-fidelity document-to-codebase synthesis--such as scientific papers to code--primarily due to a fundamental conflict between information overload and the context bottlenecks of LLMs. In this work, we introduce DeepCode, a fully autonomous framework that fundamentally addresses this challenge through principled information-flow management. By treating repository synthesis as a channel optimization problem, DeepCode seamlessly orchestrates four information operations to maximize task-relevant signals under finite context budgets: source compression via blueprint distillation, structured indexing using stateful code memory, conditional knowledge injection via retrieval-augmented generation, and closed-loop error correction. Extensive evaluations on the PaperBench benchmark demonstrate that DeepCode achieves state-of-the-art performance, decisively outperforming leading commercial agents such as Cursor and Claude Code, and crucially, surpassing PhD-level human experts from top institutes on key reproduction metrics. 
By systematically transforming paper specifications into production-grade implementations comparable to human expert quality, this work establishes new foundations for autonomous scientific reproduction that can accelerate research evaluation and discovery.
 - oai:arXiv.org:2512.07921v1
 cs.SE
 cs.AI
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Zongwei Li, Zhonghang Li, Zirui Guo, Xubin Ren, Chao Huang
 
 
 + An Efficient Test-Time Scaling Approach for Image Generation
 + https://arxiv.org/abs/2512.08985
 + arXiv:2512.08985v1 Announce Type: new
+Abstract: Image generation has emerged as a mainstream application of large generative AI models. Just as test-time compute and reasoning have helped language models improve their capabilities, similar benefits have also been observed with image generation models. In particular, searching over noise samples for diffusion and flow models has been shown to scale well with test-time compute. While recent works have explored allocating non-uniform inference-compute budgets across different denoising steps, they rely on greedy algorithms and allocate the compute budget ineffectively. In this work, we study this problem and propose solutions to fix it. We propose the Verifier-Threshold method, which automatically reallocates test-time compute and delivers substantial efficiency improvements. For the same performance on the GenEval benchmark, we achieve a 2-4x reduction in computational time over the state-of-the-art method.
 + oai:arXiv.org:2512.08985v1
 cs.CV
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Vignesh Sundaresha, Akash Haridas, Vikram Appia, Lav Varshney
 
 
 - Near-real time fires detection using satellite imagery in Sudan conflict
 - https://arxiv.org/abs/2512.07925
 - arXiv:2512.07925v1 Announce Type: new
-Abstract: The challenges of ongoing war in Sudan highlight the need for rapid monitoring and analysis of such conflicts. 
Advances in deep learning and readily available satellite remote sensing imagery allow for near real-time monitoring. This paper uses 4-band imagery from Planet Labs with a deep learning model to show that fire damage in armed conflicts can be monitored with minimal delay. We demonstrate the effectiveness of our approach using five case studies in Sudan. We show that, compared to a baseline, the automated method captures the active fires and charred areas more accurately. Our results indicate that using 8-band imagery or time series of such imagery only results in marginal gains.
 - oai:arXiv.org:2512.07925v1
 cs.CV
 cs.AI
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Kuldip Singh Atwal, Dieter Pfoser, Daniel Rothbart
 
 
 + Explainable Fundus Image Curation and Lesion Detection in Diabetic Retinopathy
 + https://arxiv.org/abs/2512.08986
 + arXiv:2512.08986v1 Announce Type: new
+Abstract: Diabetic Retinopathy (DR) affects individuals with long-term diabetes. Without early diagnosis, DR can lead to vision loss. Fundus photography captures the structure of the retina along with abnormalities indicative of the stage of the disease. Artificial Intelligence (AI) can support clinicians in identifying these lesions, reducing manual workload, but models require high-quality annotated datasets. Due to the complexity of retinal structures, errors in image acquisition and lesion interpretation by manual annotators can occur. We propose a quality-control framework, ensuring only high-standard data is used for evaluation and AI training. First, an explainable feature-based classifier is used to filter inadequate images. The features are extracted both using image processing and contrastive learning. Then, the images are enhanced and subjected to annotation, using deep-learning-based assistance. Lastly, the agreement between annotators, calculated using derived formulas, determines the usability of the annotations. 
+ oai:arXiv.org:2512.08986v1 cs.CV cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Kuldip Singh Atwal, Dieter Pfoser, Daniel Rothbart + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Anca Mihai, Adrian Groza - Can AI autonomously build, operate, and use the entire data stack? - https://arxiv.org/abs/2512.07926 - arXiv:2512.07926v1 Announce Type: new -Abstract: Enterprise data management is a monumental task. It spans data architecture and systems, integration, quality, governance, and continuous improvement. While AI assistants can help specific persona, such as data engineers and stewards, to navigate and configure the data stack, they fall far short of full automation. However, as AI becomes increasingly capable of tackling tasks that have previously resisted automation due to inherent complexities, we believe there is an imminent opportunity to target fully autonomous data estates. Currently, AI is used in different parts of the data stack, but in this paper, we argue for a paradigm shift from the use of AI in independent data component operations towards a more holistic and autonomous handling of the entire data lifecycle. Towards that end, we explore how each stage of the modern data stack can be autonomously managed by intelligent agents to build self-sufficient systems that can be used not only by human end-users, but also by AI itself. We begin by describing the mounting forces and opportunities that demand this paradigm shift, examine how agents can streamline the data lifecycle, and highlight open questions and areas where additional research is needed. We hope this work will inspire lively debate, stimulate further research, motivate collaborative approaches, and facilitate a more autonomous future for data systems. 
- oai:arXiv.org:2512.07926v1 + 3DID: Direct 3D Inverse Design for Aerodynamics with Physics-Aware Optimization + https://arxiv.org/abs/2512.08987 + arXiv:2512.08987v1 Announce Type: new +Abstract: Inverse design aims to design the input variables of a physical system to optimize a specified objective function, typically formulated as a search or optimization problem. However, in 3D domains, the design space grows exponentially, rendering exhaustive grid-based searches infeasible. Recent advances in deep learning have accelerated inverse design by providing powerful generative priors and differentiable surrogate models. Nevertheless, current methods tend to approximate the 3D design space using 2D projections or fine-tune existing 3D shapes. These approaches sacrifice volumetric detail and constrain design exploration, preventing true 3D design from scratch. In this paper, we propose a 3D Inverse Design (3DID) framework that directly navigates the 3D design space by coupling a continuous latent representation with a physics-aware optimization strategy. We first learn a unified physics-geometry embedding that compactly captures shape and physical field data in a continuous latent space. Then, we introduce a two-stage strategy to perform physics-aware optimization. In the first stage, a gradient-guided diffusion sampler explores the global latent manifold. In the second stage, an objective-driven, topology-preserving refinement further sculpts each candidate toward the target objective. This enables 3DID to generate high-fidelity 3D geometries, outperforming existing methods in both solution quality and design versatility. 
+ oai:arXiv.org:2512.08987v1 + cs.CV cs.AI - cs.DB - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Arvind Agarwal, Lisa Amini, Sameep Mehta, Horst Samulowitz, Kavitha Srinivas + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yuze Hao, Linchao Zhu, Yi Yang - Preserving Source Video Realism: High-Fidelity Face Swapping for Cinematic Quality - https://arxiv.org/abs/2512.07951 - arXiv:2512.07951v1 Announce Type: new -Abstract: Video face swapping is crucial in film and entertainment production, where achieving high fidelity and temporal consistency over long and complex video sequences remains a significant challenge. Inspired by recent advances in reference-guided image editing, we explore whether rich visual attributes from source videos can be similarly leveraged to enhance both fidelity and temporal coherence in video face swapping. Building on this insight, this work presents LivingSwap, the first video reference guided face swapping model. Our approach employs keyframes as conditioning signals to inject the target identity, enabling flexible and controllable editing. By combining keyframe conditioning with video reference guidance, the model performs temporal stitching to ensure stable identity preservation and high-fidelity reconstruction across long video sequences. To address the scarcity of data for reference-guided training, we construct a paired face-swapping dataset, Face2Face, and further reverse the data pairs to ensure reliable ground-truth supervision. Extensive experiments demonstrate that our method achieves state-of-the-art results, seamlessly integrating the target identity with the source video's expressions, lighting, and motion, while significantly reducing manual effort in production workflows. 
Project webpage: https://aim-uofa.github.io/LivingSwap - oai:arXiv.org:2512.07951v1 + Enhancing Knowledge Transfer in Hyperspectral Image Classification via Cross-scene Knowledge Integration + https://arxiv.org/abs/2512.08989 + arXiv:2512.08989v1 Announce Type: new +Abstract: Knowledge transfer has strong potential to improve hyperspectral image (HSI) classification, yet two inherent challenges fundamentally restrict effective cross-domain transfer: spectral variations caused by different sensors and semantic inconsistencies across heterogeneous scenes. Existing methods are limited by transfer settings that assume homogeneous domains or heterogeneous scenarios with only co-occurring categories. When label spaces do not overlap, they further rely on complete source-domain coverage and therefore overlook critical target-private information. To overcome these limitations and enable knowledge transfer in fully heterogeneous settings, we propose Cross-scene Knowledge Integration (CKI), a framework that explicitly incorporates target-private knowledge during transfer. CKI includes: (1) Alignment of Spectral Characteristics (ASC) to reduce spectral discrepancies through domain-agnostic projection; (2) Cross-scene Knowledge Sharing Preference (CKSP), which resolves semantic mismatch via a Source Similarity Mechanism (SSM); and (3) Complementary Information Integration (CII) to maximize the use of target-specific complementary cues. Extensive experiments verify that CKI achieves state-of-the-art performance with strong stability across diverse cross-scene HSI scenarios. 
+ oai:arXiv.org:2512.08989v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zekai Luo, Zongze Du, Zhouhang Zhu, Hao Zhong, Muzhi Zhu, Wen Wang, Yuling Xi, Chenchen Jing, Hao Chen, Chunhua Shen + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Lu Huo, Wenjian Huang, Jianguo Zhang, Min Xu, Haimin Zhang - Towards symbolic regression for interpretable clinical decision scores - https://arxiv.org/abs/2512.07961 - arXiv:2512.07961v1 Announce Type: new -Abstract: Medical decision-making makes frequent use of algorithms that combine risk equations with rules, providing clear and standardized treatment pathways. Symbolic regression (SR) traditionally limits its search space to continuous function forms and their parameters, making it difficult to model this decision-making. However, due to its ability to derive data-driven, interpretable models, SR holds promise for developing data-driven clinical risk scores. To that end we introduce Brush, an SR algorithm that combines decision-tree-like splitting algorithms with non-linear constant optimization, allowing for seamless integration of rule-based logic into symbolic regression and classification models. Brush achieves Pareto-optimal performance on SRBench, and was applied to recapitulate two widely used clinical scoring systems, achieving high accuracy and interpretable models. Compared to decision trees, random forests, and other SR methods, Brush achieves comparable or superior predictive performance while producing simpler models. - oai:arXiv.org:2512.07961v1 + Deterministic World Models for Verification of Closed-loop Vision-based Systems + https://arxiv.org/abs/2512.08991 + arXiv:2512.08991v1 Announce Type: new +Abstract: Verifying closed-loop vision-based control systems remains a fundamental challenge due to the high dimensionality of images and the difficulty of modeling visual environments. 
While generative models are increasingly used as camera surrogates in verification, their reliance on stochastic latent variables introduces unnecessary overapproximation error. To address this bottleneck, we propose a Deterministic World Model (DWM) that maps system states directly to generative images, effectively eliminating uninterpretable latent variables to ensure precise input bounds. The DWM is trained with a dual-objective loss function that combines pixel-level reconstruction accuracy with a control difference loss to maintain behavioral consistency with the real system. We integrate DWM into a verification pipeline utilizing Star-based reachability analysis (StarV) and employ conformal prediction to derive rigorous statistical bounds on the trajectory deviation between the world model and the actual vision-based system. Experiments on standard benchmarks show that our approach yields significantly tighter reachable sets and better verification performance than a latent-variable baseline. + oai:arXiv.org:2512.08991v1 + cs.CV cs.LG - cs.NE - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Guilherme Seidyo Imai Aldeia, Joseph D. Romano, Fabricio Olivetti de Franca, Daniel S. Herman, William G. La Cava + Yuang Geng, Zhuoyang Zhou, Zhongzheng Zhang, Siyuan Pan, Hoang-Dung Tran, Ivan Ruchkin - Sparse Variable Projection in Robotic Perception: Exploiting Separable Structure for Efficient Nonlinear Optimization - https://arxiv.org/abs/2512.07969 - arXiv:2512.07969v1 Announce Type: new -Abstract: Robotic perception often requires solving large nonlinear least-squares (NLS) problems. While sparsity has been well-exploited to scale solvers, a complementary and underexploited structure is \emph{separability} -- where some variables (e.g., visual landmarks) appear linearly in the residuals and, for any estimate of the remaining variables (e.g., poses), have a closed-form solution. 
Variable projection (VarPro) methods are a family of techniques that exploit this structure by analytically eliminating the linear variables and presenting a reduced problem in the remaining variables that has favorable properties. However, VarPro has seen limited use in robotic perception; a major challenge arises from gauge symmetries (e.g., cost invariance to global shifts and rotations), which are common in perception and induce specific computational challenges in standard VarPro approaches. We present a VarPro scheme designed for problems with gauge symmetries that jointly exploits separability and sparsity. Our method can be applied as a one-time preprocessing step to construct a \emph{matrix-free Schur complement operator}. This operator allows efficient evaluation of costs, gradients, and Hessian-vector products of the reduced problem and readily integrates with standard iterative NLS solvers. We provide precise conditions under which our method applies, and describe extensions when these conditions are only partially met. Across synthetic and real benchmarks in SLAM, SNL, and SfM, our approach achieves up to \textbf{2$\times$--35$\times$ faster runtimes} than state-of-the-art methods while maintaining accuracy. We release an open-source C++ implementation and all datasets from our experiments. - oai:arXiv.org:2512.07969v1 - cs.RO - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + PoultryTalk: A Multi-modal Retrieval-Augmented Generation (RAG) System for Intelligent Poultry Management and Decision Support + https://arxiv.org/abs/2512.08995 + arXiv:2512.08995v1 Announce Type: new +Abstract: The Poultry industry plays a vital role in global food security, yet small- and medium-scale farmers frequently lack timely access to expert-level support for disease diagnosis, nutrition planning, and management decisions. With rising climate stress, unpredictable feed prices, and persistent disease threats, poultry producers often struggle to make quick, informed decisions. 
Therefore, there is a critical need for intelligent, data-driven systems that can deliver reliable, on-demand consultation. This paper presents PoultryTalk, a novel multi-modal Retrieval-Augmented Generation (RAG) system designed to provide real-time expert guidance through text and image-based interaction. PoultryTalk uses OpenAI's text-embedding-3-small and GPT-4o to provide smart, context-aware poultry management advice from text, images, or questions. System usability and performance were evaluated using 200 expert-verified queries and feedback from 34 participants who submitted 267 queries to the PoultryTalk prototype. The expert-verified benchmark queries confirmed strong technical performance, achieving a semantic similarity of 84.0% and an average response latency of 3.6 seconds. Compared with OpenAI's GPT-4o, PoultryTalk delivered more accurate and reliable information related to poultry. Based on participants' evaluations, PoultryTalk achieved a response accuracy of 89.9%, with about 9.1% of responses rated as incorrect. A post-use survey indicated high user satisfaction: 95.6% of participants reported that the chatbot provided "always correct" and "mostly correct" answers. 82.6% indicated they would recommend the tool, and 17.4% responded "maybe." These results collectively demonstrate that PoultryTalk not only delivers accurate, contextually relevant information but also demonstrates strong user acceptance and scalability potential. 
+ oai:arXiv.org:2512.08995v1 + cs.HC + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alan Papalia, Nikolas Sanderson, Haoyu Han, Heng Yang, Hanumant Singh, Michael Everett + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Kapalik Khanal, Biswash Khatiwada, Stephen Afrifa, Ranjan Sapkota, Sanjay Shah, Frank Bai, Ramesh Bahadur Bist - VLD: Visual Language Goal Distance for Reinforcement Learning Navigation - https://arxiv.org/abs/2512.07976 - arXiv:2512.07976v1 Announce Type: new -Abstract: Training end-to-end policies from image data to directly predict navigation actions for robotic systems has proven inherently difficult. Existing approaches often suffer from either the sim-to-real gap during policy transfer or a limited amount of training data with action labels. To address this problem, we introduce Vision-Language Distance (VLD) learning, a scalable framework for goal-conditioned navigation that decouples perception learning from policy learning. Instead of relying on raw sensory inputs during policy training, we first train a self-supervised distance-to-goal predictor on internet-scale video data. This predictor generalizes across both image- and text-based goals, providing a distance signal that can be minimized by a reinforcement learning (RL) policy. The RL policy can be trained entirely in simulation using privileged geometric distance signals, with injected noise to mimic the uncertainty of the trained distance predictor. At deployment, the policy consumes VLD predictions, inheriting semantic goal information-"where to go"-from large-scale visual training while retaining the robust low-level navigation behaviors learned in simulation. We propose using ordinal consistency to assess distance functions directly and demonstrate that VLD outperforms prior temporal distance approaches, such as ViNT and VIP. 
Experiments show that our decoupled design achieves competitive navigation performance in simulation while supporting flexible goal modalities, providing an alternative and, most importantly, scalable path toward reliable, multimodal navigation policies.
 - oai:arXiv.org:2512.07976v1
 cs.RO
 cs.CV
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Lazar Milikic, Manthan Patel, Jonas Frey
 
 
 + Demo: Generative AI helps Radiotherapy Planning with User Preference
 + https://arxiv.org/abs/2512.08996
 + arXiv:2512.08996v1 Announce Type: new
+Abstract: Radiotherapy planning is a highly complex process that often varies significantly across institutions and individual planners. Most existing deep learning approaches for 3D dose prediction rely on reference plans as ground truth during training, which can inadvertently bias models toward specific planning styles or institutional preferences. In this study, we introduce a novel generative model that predicts 3D dose distributions based solely on user-defined preference flavors. These customizable preferences enable planners to prioritize specific trade-offs between organs-at-risk (OARs) and planning target volumes (PTVs), offering greater flexibility and personalization. Designed for seamless integration with clinical treatment planning systems, our approach assists users in generating high-quality plans efficiently. Comparative evaluations demonstrate that our method can surpass the Varian RapidPlan model in both adaptability and plan quality in some scenarios. 
+ oai:arXiv.org:2512.08996v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Lazar Milikic, Manthan Patel, Jonas Frey + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Riqiang Gao, Simon Arberet, Martin Kraus, Han Liu, Wilko FAR Verbakel, Dorin Comaniciu, Florin-Cristian Ghesu, Ali Kamen - CIP-Net: Continual Interpretable Prototype-based Network - https://arxiv.org/abs/2512.07981 - arXiv:2512.07981v1 Announce Type: new -Abstract: Continual learning constrains models to learn new tasks over time without forgetting what they have already learned. A key challenge in this setting is catastrophic forgetting, where learning new information causes the model to lose its performance on previous tasks. Recently, explainable AI has been proposed as a promising way to better understand and reduce forgetting. In particular, self-explainable models are useful because they generate explanations during prediction, which can help preserve knowledge. However, most existing explainable approaches use post-hoc explanations or require additional memory for each new task, resulting in limited scalability. In this work, we introduce CIP-Net, an exemplar-free self-explainable prototype-based model designed for continual learning. CIP-Net avoids storing past examples and maintains a simple architecture, while still providing useful explanations and strong performance. We demonstrate that CIPNet achieves state-of-the-art performances compared to previous exemplar-free and self-explainable methods in both task- and class-incremental settings, while bearing significantly lower memory-related overhead. This makes it a practical and interpretable solution for continual learning. 
- oai:arXiv.org:2512.07981v1 - cs.LG + Diffusion Model Regularized Implicit Neural Representation for CT Metal Artifact Reduction + https://arxiv.org/abs/2512.08999 + arXiv:2512.08999v1 Announce Type: new +Abstract: Computed tomography (CT) images are often severely corrupted by artifacts in the presence of metals. Existing supervised metal artifact reduction (MAR) approaches suffer from performance instability on known data due to their reliance on limited paired metal-clean data, which limits their clinical applicability. Moreover, existing unsupervised methods face two main challenges: 1) the CT physical geometry is not effectively incorporated into the MAR process to ensure data fidelity; 2) traditional heuristics regularization terms cannot fully capture the abundant prior knowledge available. To overcome these shortcomings, we propose diffusion model regularized implicit neural representation framework for MAR. The implicit neural representation integrates physical constraints and imposes data fidelity, while the pre-trained diffusion model provides prior knowledge to regularize the solution. Experimental results on both simulated and clinical data demonstrate the effectiveness and generalization ability of our method, highlighting its potential to be applied to clinical settings. + oai:arXiv.org:2512.08999v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Federico Di Valerio, Michela Proietti, Alessio Ragno, Roberto Capobianco + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jie Wen, Chenhe Du, Xiao Wang, Yuyao Zhang - An Empirical Framework for Evaluating Semantic Preservation Using Hugging Face - https://arxiv.org/abs/2512.07983 - arXiv:2512.07983v1 Announce Type: new -Abstract: As machine learning (ML) becomes an integral part of high-autonomy systems, it is critical to ensure the trustworthiness of learning-enabled software systems (LESS). 
Yet, the nondeterministic and run-time-defined semantics of ML complicate traditional software refactoring. We define semantic preservation in LESS as the property that optimizations of intelligent components do not alter the system's overall functional behavior. This paper introduces an empirical framework to evaluate semantic preservation in LESS by mining model evolution data from HuggingFace. We extract commit histories, $\textit{Model Cards}$, and performance metrics from a large number of models. To establish baselines, we conducted case studies in three domains, tracing performance changes across versions. Our analysis demonstrates how $\textit{semantic drift}$ can be detected via evaluation metrics across commits and reveals common refactoring patterns based on commit message analysis. Although API constraints limited the possibility of estimating a full-scale threshold, our pipeline offers a foundation for defining community-accepted boundaries for semantic preservation. Our contributions include: (1) a large-scale dataset of ML model evolution, curated from 1.7 million Hugging Face entries via a reproducible pipeline using the native HF hub API, (2) a practical pipeline for the evaluation of semantic preservation for a subset of 536 models and 4000+ metrics and (3) empirical case studies illustrating semantic drift in practice. Together, these contributions advance the foundations for more maintainable and trustworthy ML systems. - oai:arXiv.org:2512.07983v1 - cs.SE + A Physics-Constrained, Design-Driven Methodology for Defect Dataset Generation in Optical Lithography + https://arxiv.org/abs/2512.09001 + arXiv:2512.09001v1 Announce Type: new +Abstract: The efficacy of Artificial Intelligence (AI) in micro/nano manufacturing is fundamentally constrained by the scarcity of high-quality and physically grounded training data for defect inspection. 
Lithography defect data from the semiconductor industry are rarely accessible for research use, resulting in a shortage of publicly available datasets. To address this bottleneck in lithography, this study proposes a novel methodology for generating large-scale, physically valid defect datasets with pixel-level annotations. The framework begins with the ab initio synthesis of defect layouts using controllable, physics-constrained mathematical morphology operations (erosion and dilation) applied to the original design-level layout. These synthesized layouts, together with their defect-free counterparts, are fabricated into physical samples via high-fidelity digital micromirror device (DMD)-based lithography. Optical micrographs of the synthesized defect samples and their defect-free references are then compared to create consistent defect delineation annotations. Using this methodology, we constructed a comprehensive dataset of 3,530 optical micrographs containing 13,365 annotated defect instances including four classes: bridge, burr, pinch, and contamination. Each defect instance is annotated with a pixel-accurate segmentation mask, preserving full contour and geometry. The segmentation-based Mask R-CNN achieves AP@0.5 of 0.980, 0.965, and 0.971, compared with 0.740, 0.719, and 0.717 for Faster R-CNN on bridge, burr, and pinch classes, representing a mean AP@0.5 improvement of approximately 34%. For the contamination class, Mask R-CNN achieves an AP@0.5 roughly 42% higher than Faster R-CNN. These consistent gains demonstrate that our proposed methodology to generate defect datasets with pixel-level annotations is feasible for robust AI-based Measurement/Inspection (MI) in semiconductor fabrication. 
+ oai:arXiv.org:2512.09001v1 + cs.CV cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Nan Jia, Anita Raja, Raffi Khatchadourian + Yuehua Hu, Jiyeong Kong, Dong-yeol Shin, Jaekyun Kim, Kyung-Tae Kang - Restrictive Hierarchical Semantic Segmentation for Stratified Tooth Layer Detection - https://arxiv.org/abs/2512.07984 - arXiv:2512.07984v1 Announce Type: new -Abstract: Accurate understanding of anatomical structures is essential for reliably staging certain dental diseases. A way of introducing this within semantic segmentation models is by utilising hierarchy-aware methodologies. However, existing hierarchy-aware segmentation methods largely encode anatomical structure through the loss functions, providing weak and indirect supervision. We introduce a general framework that embeds an explicit anatomical hierarchy into semantic segmentation by coupling a recurrent, level-wise prediction scheme with restrictive output heads and top-down feature conditioning. At each depth of the class tree, the backbone is re-run on the original image concatenated with logits from the previous level. Child class features are conditioned using Feature-wise Linear Modulation of their parent class probabilities, to modulate child feature spaces for fine grained detection. A probabilistic composition rule enforces consistency between parent and descendant classes. Hierarchical loss combines per-level class weighted Dice and cross entropy loss and a consistency term loss, ensuring parent predictions are the sum of their children. We validate our approach on our proposed dataset, TL-pano, containing 194 panoramic radiographs with dense instance and semantic segmentation annotations, of tooth layers and alveolar bone. 
Utilising UNet and HRNet as donor models across a 5-fold cross validation scheme, the hierarchical variants consistently increase IoU, Dice, and recall, particularly for fine-grained anatomies, and produce more anatomically coherent masks. However, hierarchical variants also demonstrated increased recall over precision, implying increased false positives. The results demonstrate that explicit hierarchical structuring improves both performance and clinical plausibility, especially in low-data dental imaging regimes.
 - oai:arXiv.org:2512.07984v1
 
 
 A Survey of Body and Face Motion: Datasets, Performance Evaluation Metrics and Generative Techniques
 https://arxiv.org/abs/2512.09005
 arXiv:2512.09005v1 Announce Type: new
Abstract: Body and face motion play an integral role in communication. They convey crucial information on the participants. Advances in generative modeling and multi-modal learning have enabled motion generation from signals such as speech, conversational context and visual cues. However, generating expressive and coherent face and body dynamics remains challenging due to the complex interplay of verbal / non-verbal cues and individual personality traits. This survey reviews body and face motion generation, covering core concepts, representation techniques, generative approaches, datasets and evaluation metrics. We highlight future directions to enhance the realism, coherence and expressiveness of avatars in dyadic settings. To the best of our knowledge, this work is the first comprehensive review to cover both body and face motion. Detailed resources are listed on https://lownish23csz0010.github.io/mogen/. 
+ oai:arXiv.org:2512.09005v1 cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Ryan Banks, Camila Lindoni Azevedo, Hongying Tang, Yunpeng Li - - - HOLE: Homological Observation of Latent Embeddings for Neural Network Interpretability - https://arxiv.org/abs/2512.07988 - arXiv:2512.07988v1 Announce Type: new -Abstract: Deep learning models have achieved remarkable success across various domains, yet their learned representations and decision-making processes remain largely opaque and hard to interpret. This work introduces HOLE (Homological Observation of Latent Embeddings), a method for analyzing and interpreting deep neural networks through persistent homology. HOLE extracts topological features from neural activations and presents them using a suite of visualization techniques, including Sankey diagrams, heatmaps, dendrograms, and blob graphs. These tools facilitate the examination of representation structure and quality across layers. We evaluate HOLE on standard datasets using a range of discriminative models, focusing on representation quality, interpretability across layers, and robustness to input perturbations and model compression. The results indicate that topological analysis reveals patterns associated with class separation, feature disentanglement, and model robustness, providing a complementary perspective for understanding and improving deep learning systems. 
- oai:arXiv.org:2512.07988v1
 - cs.LG
 cs.GR
 cs.HC
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Sudhanva Manjunath Athreya, Paul Rosen
 
 
 A Gray Literature Study on Fairness Requirements in AI-enabled Software Engineering
 https://arxiv.org/abs/2512.07990
 arXiv:2512.07990v1 Announce Type: new
Abstract: Today, with the growing obsession with applying Artificial Intelligence (AI), particularly Machine Learning (ML), to software across various contexts, much of the focus has been on the effectiveness of AI models, often measured through common metrics such as F1-score, while fairness receives relatively little attention. This paper presents a review of existing gray literature, examining fairness requirements in AI context, with a focus on how they are defined across various application domains, managed throughout the Software Development Life Cycle (SDLC), and the causes, as well as the corresponding consequences of their violation by AI models. Our gray literature investigation shows various definitions of fairness requirements in AI systems, commonly emphasizing non-discrimination and equal treatment across different demographic and social attributes. Fairness requirement management practices vary across the SDLC, particularly in model training and bias mitigation, fairness monitoring and evaluation, and data handling practices. Fairness requirement violations are frequently linked, but not limited, to data representation bias, algorithmic and model design bias, human judgment, and evaluation and transparency gaps. The corresponding consequences include harm in a broad sense, encompassing specific professional and societal impacts as key examples, stereotype reinforcement, data and privacy risks, and loss of trust and legitimacy in AI-supported decisions. 
These findings emphasize the need for consistent frameworks and practices to integrate fairness into AI software, paying as much attention to fairness as to effectiveness.
 - oai:arXiv.org:2512.07990v1
 cs.SE
 cs.AI
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Thanh Nguyen, Chaima Boufaied, Ronnie de Souza Santos
 
 
 Llama-based source code vulnerability detection: Prompt engineering vs Fine tuning
 https://arxiv.org/abs/2512.09006
 arXiv:2512.09006v1 Announce Type: new
Abstract: The significant increase in software production, driven by the acceleration of development cycles over the past two decades, has led to a steady rise in software vulnerabilities, as shown by statistics published yearly by the CVE program. The automation of the source code vulnerability detection (CVD) process has thus become essential, and several methods have been proposed ranging from the well-established program analysis techniques to the more recent AI-based methods. Our research investigates Large Language Models (LLMs), which are considered among the most performant AI models to date, for the CVD task. The objective is to study their performance and apply different state-of-the-art techniques to enhance their effectiveness for this task. We explore various fine-tuning and prompt engineering settings. We particularly suggest one novel approach for fine-tuning LLMs which we call Double Fine-tuning, and also test the understudied Test-Time fine-tuning approach. We leverage the recent open-source Llama-3.1 8B, with source code samples extracted from BigVul and PrimeVul datasets. Our conclusions highlight the importance of fine-tuning to resolve the task, the performance of Double Fine-tuning, as well as the potential of Llama models for CVD. Though prompting proved ineffective, Retrieval augmented generation (RAG) performed relatively well as an example selection technique. Overall, some of our research questions have been answered, and many are still on hold, which leaves us with many future work perspectives. 
Code repository is available here: https://github.com/DynaSoumhaneOuchebara/Llama-based-vulnerability-detection. + oai:arXiv.org:2512.09006v1 cs.SE cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Thanh Nguyen, Chaima Boufaied, Ronnie de Souza Santos + http://creativecommons.org/licenses/by/4.0/ + Dyna Soumhane Ouchebara, St\'ephane Dupont - Bridging the Clinical Expertise Gap: Development of a Web-Based Platform for Accessible Time Series Forecasting and Analysis - https://arxiv.org/abs/2512.07992 - arXiv:2512.07992v1 Announce Type: new -Abstract: Time series forecasting has applications across domains and industries, especially in healthcare, but the technical expertise required to analyze data, build models, and interpret results can be a barrier to using these techniques. This article presents a web platform that makes the process of analyzing and plotting data, training forecasting models, and interpreting and viewing results accessible to researchers and clinicians. Users can upload data and generate plots to showcase their variables and the relationships between them. The platform supports multiple forecasting models and training techniques which are highly customizable according to the user's needs. Additionally, recommendations and explanations can be generated from a large language model that can help the user choose appropriate parameters for their data and understand the results for each model. The goal is to integrate this platform into learning health systems for continuous data collection and inference from clinical pipelines. 
- oai:arXiv.org:2512.07992v1
 - cs.LG
 cs.SE
 Wed, 10 Dec 2025 00:00:00 -0500
 
 Towards Lossless Ultimate Vision Token Compression for VLMs
 https://arxiv.org/abs/2512.09010
 arXiv:2512.09010v1 Announce Type: new
Abstract: Visual language models encounter challenges in computational efficiency and latency, primarily due to the substantial redundancy in the token representations of high-resolution images and videos. Current attention/similarity-based compression algorithms suffer from either position bias or class imbalance, leading to significant accuracy degradation. They also fail to generalize to shallow LLM layers, which exhibit weaker cross-modal interactions. To address this, we extend token compression to the visual encoder through an effective iterative merging scheme that is orthogonal in spatial axes to accelerate the computation across the entire VLM. Furthermore, we integrate a spectrum pruning unit into LLM through an attention/similarity-free low-pass filter, which gradually prunes redundant visual tokens and is fully compatible with modern FlashAttention. On this basis, we propose the Lossless Ultimate Vision token Compression (LUVC) framework. LUVC systematically compresses visual tokens until complete elimination at the final layer of LLM, so that the high-dimensional visual features are gradually fused into the multimodal queries. The experiments show that LUVC achieves a 2x inference speedup in the language model with negligible accuracy degradation, and the training-free characteristic enables immediate deployment across multiple VLMs.
 + oai:arXiv.org:2512.09010v1
 cs.CV
 cs.AI
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Aaron D. Mullen, Daniel R. Harris, Svetla Slavova, V. K. 
Cody Bumgardner + Dehua Zheng, Mouxiao Huang, Borui Jiang, Hailin Hu, Xinghao Chen - SkipKV: Selective Skipping of KV Generation and Storage for Efficient Inference with Large Reasoning Models - https://arxiv.org/abs/2512.07993 - arXiv:2512.07993v1 Announce Type: new -Abstract: Large reasoning models (LRMs) often cost significant key-value (KV) cache overhead, due to their linear growth with the verbose chain-of-thought (CoT) reasoning process. This costs both memory and throughput bottleneck limiting their efficient deployment. Towards reducing KV cache size during inference, we first investigate the effectiveness of existing KV cache eviction methods for CoT reasoning. Interestingly, we find that due to unstable token-wise scoring and the reduced effective KV budget caused by padding tokens, state-of-the-art (SoTA) eviction methods fail to maintain accuracy in the multi-batch setting. Additionally, these methods often generate longer sequences than the original model, as semantic-unaware token-wise eviction leads to repeated revalidation during reasoning. To address these issues, we present \textbf{SkipKV}, a \textbf{\textit{training-free}} KV compression method for selective \textit{eviction} and \textit{generation} operating at a coarse-grained sentence-level sequence removal for efficient CoT reasoning. In specific, it introduces a \textit{sentence-scoring metric} to identify and remove highly similar sentences while maintaining semantic coherence. To suppress redundant generation, SkipKV dynamically adjusts a steering vector to update the hidden activation states during inference enforcing the LRM to generate concise response. Extensive evaluations on multiple reasoning benchmarks demonstrate the effectiveness of SkipKV in maintaining up to $\mathbf{26.7}\%$ improved accuracy compared to the alternatives, at a similar compression budget. 
Additionally, compared to SoTA, SkipKV yields up to $\mathbf{1.6}\times$ fewer generation length while improving throughput up to $\mathbf{1.7}\times$.
 - oai:arXiv.org:2512.07993v1
 - cs.AI
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Jiayi Tian, Seyedarmin Azizi, Yequan Zhao, Erfan Baghaei Potraghloo, Sean McPherson, Sharath Nittur Sridhar, Zhengyang Wang, Zheng Zhang, Massoud Pedram, Souvik Kundu
 
 
 An Approach for Detection of Entities in Dynamic Media Contents
 https://arxiv.org/abs/2512.09011
 arXiv:2512.09011v1 Announce Type: new
Abstract: The notion of learning underlies almost every evolution of Intelligent Agents. In this paper, we present an approach for searching and detecting a given entity in a video sequence. Specifically, we study how the deep learning technique by artificial neural networks allows us to detect a character in a video sequence. The technique of detecting a character in a video is a complex field of study, considering the multitude of objects present in the data under analysis. From the results obtained, we highlight the following, compared to state of the art: In our approach, within the field of Computer Vision, the structuring of supervised learning algorithms allowed us to achieve several successes from simple characteristics of the target character. Our results demonstrate that this new approach allows us to locate, in an efficient way, wanted individuals from a private or public image base. For the case of Angola, the classifier we propose opens the possibility of reinforcing the national security system based on the database of target individuals (disappeared, criminals, etc.) and the video sequences of the Integrated Public Security Centre (CISP).
 + oai:arXiv.org:2512.09011v1
 cs.CV
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 10.32996/jcsts.2023.5.3.2
 Journal of Computer Science and Technology Studies, Vol. 5, No. 3, pp. 
13-24, 2023
 Nzakiese Mbongo, Ngombo Armando
 
 
 Graph Deep Learning for Intracranial Aneurysm Blood Flow Simulation and Risk Assessment
 https://arxiv.org/abs/2512.09013
 arXiv:2512.09013v1 Announce Type: new
Abstract: Intracranial aneurysms remain a major cause of neurological morbidity and mortality worldwide, where rupture risk is tightly coupled to local hemodynamics, particularly wall shear stress and oscillatory shear index. Conventional computational fluid dynamics simulations provide accurate insights but are prohibitively slow and require specialized expertise. Clinical imaging alternatives such as 4D Flow MRI offer direct in-vivo measurements, yet their spatial resolution remains insufficient to capture the fine-scale shear patterns that drive endothelial remodeling and rupture risk while being extremely impractical and expensive.
 + We present a graph neural network surrogate model that bridges this gap by reproducing full-field hemodynamics directly from vascular geometries in less than one minute per cardiac cycle. 
Trained on a comprehensive dataset of high-fidelity simulations of patient-specific aneurysms, our architecture combines graph transformers with autoregressive predictions to accurately simulate blood flow, wall shear stress, and oscillatory shear index. The model generalizes across unseen patient geometries and inflow conditions without mesh-specific calibration. Beyond accelerating simulation, our framework establishes the foundation for clinically interpretable hemodynamic prediction. By enabling near real-time inference integrated with existing imaging pipelines, it allows direct comparison with hospital phase-diagram assessments and extends them with physically grounded, high-resolution flow fields. + This work transforms high-fidelity simulations from an expert-only research tool into a deployable, data-driven decision support system. Our full pipeline delivers high-resolution hemodynamic predictions within minutes of patient imaging, without requiring computational specialists, marking a step-change toward real-time, bedside aneurysm analysis. + oai:arXiv.org:2512.09013v1 + cs.LG + physics.flu-dyn + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tatiana Yavorskaya (Steklov Mathematical Institute of Russian Academy of Science), Elena Popova (Steklov Mathematical Institute of Russian Academy of Science) + http://creativecommons.org/licenses/by/4.0/ + Paul Garnier, Pablo Jeken-Rico, Vincent Lannelongue, Chiara Faitini, Aur\`ele Goetz, Lea Chanvillard, Ramy Nemer, Jonathan Viquerat, Ugo Pelissier, Philippe Meliga, Jacques S\'edat, Thomas Liebig, Yves Chau, Elie Hachem - A Comparative Study of EMG- and IMU-based Gesture Recognition at the Wrist and Forearm - https://arxiv.org/abs/2512.07997 - arXiv:2512.07997v1 Announce Type: new -Abstract: Gestures are an integral part of our daily interactions with the environment. 
Hand gesture recognition (HGR) is the process of interpreting human intent through various input modalities, such as visual data (images and videos) and bio-signals. Bio-signals are widely used in HGR due to their ability to be captured non-invasively via sensors placed on the arm. Among these, surface electromyography (sEMG), which measures the electrical activity of muscles, is the most extensively studied modality. However, less-explored alternatives such as inertial measurement units (IMUs) can provide complementary information on subtle muscle movements, which makes them valuable for gesture recognition. In this study, we investigate the potential of using IMU signals from different muscle groups to capture user intent. Our results demonstrate that IMU signals contain sufficient information to serve as the sole input sensor for static gesture recognition. Moreover, we compare different muscle groups and check the quality of pattern recognition on individual muscle groups. We further found that tendon-induced micro-movement captured by IMUs is a major contributor to static gesture recognition. We believe that leveraging muscle micro-movement information can enhance the usability of prosthetic arms for amputees. This approach also offers new possibilities for hand gesture recognition in fields such as robotics, teleoperation, sign language interpretation, and beyond. - oai:arXiv.org:2512.07997v1 + Prototyping and Evaluating a Real-time Neuro-Adaptive Virtual Reality Flight Training System + https://arxiv.org/abs/2512.09014 + arXiv:2512.09014v1 Announce Type: new +Abstract: Real-time adjustments to task difficulty during flight training are crucial for optimizing performance and managing pilot workload. This study evaluated the functionality of a pre-trained brain-computer interface (BCI) that adapts training difficulty based on real-time estimations of workload from brain signals. 
Specifically, an EEG-based neuro-adaptive training system was developed and tested in Virtual Reality (VR) flight simulations with military student pilots. The neuro-adaptive system was compared to a fixed sequence that progressively increased in difficulty, in terms of self-reported user engagement, workload, and simulator sickness (subjective measures), as well as flight performance (objective metric). Additionally, we explored the relationships between subjective workload and flight performance in the VR simulator for each condition. The experiments concluded with semi-structured interviews to elicit the pilots' experience with the neuro-adaptive prototype. Results revealed no significant differences between the adaptive and fixed sequence conditions in subjective measures or flight performance. In both conditions, flight performance decreased as subjective workload increased. The semi-structured interviews indicated that, upon briefing, the pilots preferred the neuro-adaptive VR training system over the system with a fixed sequence, although individual differences were observed in the perception of difficulty and the order of changes in difficulty. Even though this study shows performance does not change, BCI-based flight training systems hold the potential to provide a more personalized and varied training experience. + oai:arXiv.org:2512.09014v1 cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-sa/4.0/ + Evy van Weelden, Jos M. Prinsen, Caterina Ceccato, Ethel Pruss, Anita Vrins, Maryam Alimardani, Travis J. Wiltshire, Max M. Louwerse + + + Luxical: High-Speed Lexical-Dense Text Embeddings + https://arxiv.org/abs/2512.09015 + arXiv:2512.09015v1 Announce Type: new +Abstract: Frontier language model quality increasingly hinges on our ability to organize web-scale text corpora for training. 
Today's dominant tools trade off speed and flexibility: lexical classifiers (e.g., FastText) are fast but limited to producing classification output scores, while the vector-valued outputs of transformer text embedding models flexibly support numerous workflows (e.g., clustering, classification, and retrieval) but are computationally expensive to produce. We introduce Luxical, a library for high-speed "lexical-dense" text embeddings that aims to recover the best properties of both approaches for web-scale text organization. Luxical combines sparse TF-IDF features, a small ReLU network, and a knowledge distillation training regimen to approximate large transformer embedding models at a fraction of their operational cost. In this technical report, we describe the Luxical architecture and training objective and evaluate a concrete Luxical model in two disparate applications: a targeted webcrawl document retrieval test and an end-to-end language model data curation task grounded in text classification. In these tasks we demonstrate speedups ranging from 3x to 100x over varying-sized neural baselines, and speed comparable to FastText model inference during the data curation task. On these evaluations, the tested Luxical model illustrates favorable compute/quality trade-offs for large-scale text organization, matching the quality of neural baselines. Luxical is available as open-source software at https://github.com/datologyai/luxical. 
+ oai:arXiv.org:2512.09015v1 + cs.CL cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Soroush Baghernezhad, Elaheh Mohammadreza, Vinicius Prado da Fonseca, Ting Zou, Xianta Jiang + DatologyAI, :, Luke Merrick, Alex Fang, Aldo Carranza, Alvin Deng, Amro Abbas, Brett Larsen, Cody Blakeney, Darren Teh, David Schwab, Fan Pan, Haakon Mongstad, Haoli Yin, Jack Urbanek, Jason Lee, Jason Telanoff, Josh Wills, Kaleigh Mentzer, Paul Burstein, Parth Doshi, Paul Burnstein, Pratyush Maini, Ricardo Monti, Rishabh Adiga, Scott Loftin, Siddharth Joshi, Spandan Das, Tony Jiang, Vineeth Dorma, Zhengping Wang, Bogdan Gaza, Ari Morcos, Matthew Leavitt - DIJIT: A Robotic Head for an Active Observer - https://arxiv.org/abs/2512.07998 - arXiv:2512.07998v1 Announce Type: new -Abstract: We present DIJIT, a novel binocular robotic head expressly designed for mobile agents that behave as active observers. DIJIT's unique breadth of functionality enables active vision research and the study of human-like eye and head-neck motions, their interrelationships, and how each contributes to visual ability. DIJIT is also being used to explore the differences between how human vision employs eye/head movements to solve visual tasks and current computer vision methods. DIJIT's design features nine mechanical degrees of freedom, while the cameras and lenses provide an additional four optical degrees of freedom. The ranges and speeds of the mechanical design are comparable to human performance. Our design includes the ranges of motion required for convergent stereo, namely, vergence, version, and cyclotorsion. The exploration of the utility of these to both human and machine vision is ongoing. Here, we present the design of DIJIT and evaluate aspects of its performance. We present a new method for saccadic camera movements. In this method, a direct relationship between camera orientation and motor values is developed. 
The resulting saccadic camera movements are close to human movements in terms of their accuracy. - oai:arXiv.org:2512.07998v1 - cs.RO + Learning to Remove Lens Flare in Event Camera + https://arxiv.org/abs/2512.09016 + arXiv:2512.09016v1 Announce Type: new +Abstract: Event cameras have the potential to revolutionize vision systems with their high temporal resolution and dynamic range, yet they remain susceptible to lens flare, a fundamental optical artifact that causes severe degradation. In event streams, this optical artifact forms a complex, spatio-temporal distortion that has been largely overlooked. We present E-Deflare, the first systematic framework for removing lens flare from event camera data. We first establish the theoretical foundation by deriving a physics-grounded forward model of the non-linear suppression mechanism. This insight enables the creation of the E-Deflare Benchmark, a comprehensive resource featuring a large-scale simulated training set, E-Flare-2.7K, and the first-ever paired real-world test set, E-Flare-R, captured by our novel optical system. Empowered by this benchmark, we design E-DeflareNet, which achieves state-of-the-art restoration performance. Extensive experiments validate our approach and demonstrate clear benefits for downstream tasks. Code and datasets are publicly available. + oai:arXiv.org:2512.09016v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mostafa Kamali Tabrizi, Mingshi Chi, Bir Bikram Dey, Yu Qing Yuan, Markus D. Solbach, Yiqian Liu, Michael Jenkin, John K. Tsotsos + http://creativecommons.org/licenses/by-sa/4.0/ + Haiqian Han, Lingdong Kong, Jianing Li, Ao Liang, Chengtao Zhu, Jiacheng Lyu, Lai Xing Ng, Xiangyang Ji, Wei Tsang Ooi, Benoit R. 
Cottereau - Modeling the Potential of Message-Free Communication via CXL.mem - https://arxiv.org/abs/2512.08005 - arXiv:2512.08005v1 Announce Type: new -Abstract: Heterogeneous memory technologies are increasingly important instruments in addressing the memory wall in HPC systems. While most are deployed in single node setups, CXL.mem is a technology that implements memories that can be attached to multiple nodes simultaneously, enabling shared memory pooling. This opens new possibilities, particularly for efficient inter-node communication. - In this paper, we present a novel performance evaluation toolchain combined with an extended performance model for message-based communication, which can be used to predict potential performance benefits from using CXL.mem for data exchange. Our approach analyzes data access patterns of MPI applications: it analyzes on-node accesses to/from MPI buffers, as well as cross-node MPI traffic to gather a full understanding of the impact of memory performance. We combine this data in an extended performance model to predict which data transfers could benefit from direct CXL.mem implementations as compared to traditional MPI messages. Our model works on a per-MPI call granularity, allowing the identification and later optimizations of those MPI invocations in the code with the highest potential for speedup by using CXL.mem. - For our toolchain, we extend the memory trace sampling tool Mitos and use it to extract data access behavior. In the post-processing step, the raw data is automatically analyzed to provide performance models for each individual MPI call. We validate the models on two sample applications -- a 2D heat transfer miniapp and the HPCG benchmark -- and use them to demonstrate their support for targeted optimizations by integrating CXL.mem. 
- oai:arXiv.org:2512.08005v1
- cs.DC
- Wed, 10 Dec 2025 00:00:00 -0500
+ EMMap: A Systematic Framework for Spatial EMFI Mapping and Fault Classification on Microcontrollers
+ https://arxiv.org/abs/2512.09049
+ arXiv:2512.09049v1 Announce Type: new
+Abstract: Electromagnetic Fault Injection (EMFI) is a powerful technique for inducing bit flips and instruction-level perturbations on microcontrollers, yet the existing literature lacks a unified methodology for systematically mapping spatial sensitivity and classifying the resulting fault behaviors. Building on insights from O'Flynn and Kuhnapfel et al., we introduce a platform-agnostic framework for Spatial EMFI Mapping and Fault Classification, aimed at understanding how spatial probe position influences fault outcomes. We present pilot experiments on three representative microcontroller targets, including the Xtensa LX6 (ESP32) and two ChipWhisperer boards, not as definitive evaluations but as illustrative demonstrations of how the proposed methodology can be applied in practice. These preliminary observations motivate a generalized and reproducible workflow that researchers can adopt when analyzing EMFI susceptibility across diverse embedded architectures.
+ oai:arXiv.org:2512.09049v1
+ cs.CR
+ Thu, 11 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
- 10.1145/3773656.3773661
- Stepan Vanecek, Matthew Turner, Manisha Gajbe, Matthew Wolf, Martin Schulz
+ Gandham Sai Santhosh, Siddhartha Sanjay Naik, Ritwik Badola, Chester Rebeiro
- Beyond Unified Models: A Service-Oriented Approach to Low Latency, Context Aware Phonemization for Real Time TTS
- https://arxiv.org/abs/2512.08006
- arXiv:2512.08006v1 Announce Type: new
-Abstract: Lightweight, real-time text-to-speech systems are crucial for accessibility. However, the most efficient TTS models often rely on lightweight phonemizers that struggle with context-dependent challenges.
In contrast, more advanced phonemizers with a deeper linguistic understanding typically incur high computational costs, which prevents real-time performance. - This paper examines the trade-off between phonemization quality and inference speed in G2P-aided TTS systems, introducing a practical framework to bridge this gap. We propose lightweight strategies for context-aware phonemization and a service-oriented TTS architecture that executes these modules as independent services. This design decouples heavy context-aware components from the core TTS engine, effectively breaking the latency barrier and enabling real-time use of high-quality phonemization models. Experimental results confirm that the proposed system improves pronunciation soundness and linguistic accuracy while maintaining real-time responsiveness, making it well-suited for offline and end-device TTS applications. - oai:arXiv.org:2512.08006v1 - cs.SD - cs.CL - eess.AS - Wed, 10 Dec 2025 00:00:00 -0500 + Improving Multi-Class Calibration through Normalization-Aware Isotonic Techniques + https://arxiv.org/abs/2512.09054 + arXiv:2512.09054v1 Announce Type: new +Abstract: Accurate and reliable probability predictions are essential for multi-class supervised learning tasks, where well-calibrated models enable rational decision-making. While isotonic regression has proven effective for binary calibration, its extension to multi-class problems via one-vs-rest calibration produced suboptimal results when compared to parametric methods, limiting its practical adoption. In this work, we propose novel isotonic normalization-aware techniques for multiclass calibration, grounded in natural and intuitive assumptions expected by practitioners. Unlike prior approaches, our methods inherently account for probability normalization by either incorporating normalization directly into the optimization process (NA-FIR) or modeling the problem as a cumulative bivariate isotonic regression (SCIR). 
Empirical evaluation on a variety of text and image classification datasets across different model architectures reveals that our approach consistently improves negative log-likelihood (NLL) and expected calibration error (ECE) metrics. + oai:arXiv.org:2512.09054v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mahta Fetrat, Donya Navabi, Zahra Dehghanian, Morteza Abolghasemi, Hamid R. Rabiee + http://creativecommons.org/licenses/by/4.0/ + Alon Arad, Saharon Rosset - Resonant and Stochastic Vibration in Neurorehabilitation - https://arxiv.org/abs/2512.08009 - arXiv:2512.08009v1 Announce Type: new -Abstract: Neurological injuries and age-related decline can impair sensory processing and disrupt motor coordination, gait, and balance. As mechanisms of neuroplasticity have become better understood, vibration-based interventions have gained attention as potential tools to stimulate sensory pathways and motor circuits to support functional recovery. This survey reviews stochastic and resonant vibration modalities, describing their mechanisms, therapeutic rationales, and clinical applications. We synthesize evidence on whole-body vibration for improving balance, mobility, and fine motor function in aging adults, stroke survivors, and individuals with Parkinson's disease, with attention to challenges in parameter optimization, generalizability, and safety. We also assess recent developments in focused muscle vibration and wearable stochastic resonance devices for upper-limb rehabilitation, evaluating their clinical promise along with limitations in scalability, ecological validity, and standardization. Across these modalities, we identify key variables that shape therapeutic outcomes and highlight ongoing efforts to refine protocols, improve usability, and integrate vibration techniques into broader neurorehabilitation frameworks. 
We conclude by outlining the most important research needs for translating vibration-based interventions into reliable and deployable clinical tools.
- oai:arXiv.org:2512.08009v1
- cs.ET
- cs.HC
- cs.NE
- Wed, 10 Dec 2025 00:00:00 -0500
+ ConceptPose: Training-Free Zero-Shot Object Pose Estimation using Concept Vectors
+ https://arxiv.org/abs/2512.09056
+ arXiv:2512.09056v1 Announce Type: new
+Abstract: Object pose estimation is a fundamental task in computer vision and robotics, yet most methods require extensive, dataset-specific training. Concurrently, large-scale vision language models show remarkable zero-shot capabilities. In this work, we bridge these two worlds by introducing ConceptPose, a framework for object pose estimation that is both training-free and model-free. ConceptPose leverages a vision-language model (VLM) to create open-vocabulary 3D concept maps, where each point is tagged with a concept vector derived from saliency maps. By establishing robust 3D-3D correspondences across concept maps, our approach allows precise estimation of 6DoF relative pose. Without any object or dataset-specific training, our approach achieves state-of-the-art results on common zero-shot relative pose estimation benchmarks, significantly outperforming existing methods by over 62% in ADD(-S) score, including those that utilize extensive dataset-specific training.
+ oai:arXiv.org:2512.09056v1
+ cs.CV
+ Thu, 11 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ava Hays, Nolan Kosnic, Ryan Miller, Kunal Siddhawar
+ Liming Kuang, Yordanka Velikova, Mahdi Saleh, Jan-Nico Zaech, Danda Pani Paudel, Benjamin Busam
- Sensor Attack Detection Method for Encrypted State Observers
- https://arxiv.org/abs/2512.08010
- arXiv:2512.08010v1 Announce Type: new
-Abstract: This paper proposes an encrypted state observer that is capable of detecting sensor attacks without decryption.
We first design a state observer that operates over a finite field of integers with modular arithmetic. The observer generates a residue signal that indicates the presence of attacks under sparse attack and sensing redundancy conditions. Then, we develop a homomorphic encryption scheme that enables the observer to operate over encrypted data while automatically disclosing the residue signal. Unlike our previous work restricted to single-input single-output systems, the proposed scheme is applicable to general multi-input multi-output systems. Given that the disclosed residue signal remains below a prescribed threshold, the full state can be recovered as an encrypted message.
- oai:arXiv.org:2512.08010v1
- eess.SY
- cs.SY
- Wed, 10 Dec 2025 00:00:00 -0500
+ A Diffusion-Based Framework for High-Resolution Precipitation Forecasting over CONUS
+ https://arxiv.org/abs/2512.09059
+ arXiv:2512.09059v1 Announce Type: new
+Abstract: Accurate precipitation forecasting is essential for hydrometeorological risk management, especially for anticipating extreme rainfall that can lead to flash flooding and infrastructure damage. This study introduces a diffusion-based deep learning (DL) framework that systematically compares three residual prediction strategies differing only in their input sources: (1) a fully data-driven model using only past observations from the Multi-Radar Multi-Sensor (MRMS) system, (2) a corrective model using only forecasts from the High-Resolution Rapid Refresh (HRRR) numerical weather prediction system, and (3) a hybrid model integrating both MRMS and selected HRRR forecast variables. By evaluating these approaches under a unified setup, we provide a clearer understanding of how each data source contributes to predictive skill over the Continental United States (CONUS). Forecasts are produced at 1-km spatial resolution, beginning with direct 1-hour predictions and extending to 12 hours using autoregressive rollouts.
Performance is evaluated using both CONUS-wide and region-specific metrics that assess overall performance and skill at extreme rainfall thresholds. Across all lead times, our DL framework consistently outperforms the HRRR baseline in pixel-wise and spatiostatistical metrics. The hybrid model performs best at the shortest lead time, while the HRRR-corrective model outperforms others at longer lead times, maintaining high skill through 12 hours. To assess reliability, we incorporate calibrated uncertainty quantification tailored to the residual learning setup. These gains, particularly at longer lead times, are critical for emergency preparedness, where modest increases in forecast horizon can improve decision-making. This work advances DL-based precipitation forecasting by enhancing predictive skill, reliability, and applicability across regions. + oai:arXiv.org:2512.09059v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yeongjun Jang, Sangwon Lee, Junsoo Kim + Marina Vicens-Miquel, Amy McGovern, Aaron J. Hill, Efi Foufoula-Georgiou, Clement Guilloteau, Samuel S. P. Shen - Benchmarking Offline Multi-Objective Reinforcement Learning in Critical Care - https://arxiv.org/abs/2512.08012 - arXiv:2512.08012v1 Announce Type: new -Abstract: In critical care settings such as the Intensive Care Unit, clinicians face the complex challenge of balancing conflicting objectives, primarily maximizing patient survival while minimizing resource utilization (e.g., length of stay). Single-objective Reinforcement Learning approaches typically address this by optimizing a fixed scalarized reward function, resulting in rigid policies that fail to adapt to varying clinical priorities. Multi-objective Reinforcement Learning (MORL) offers a solution by learning a set of optimal policies along the Pareto Frontier, allowing for dynamic preference selection at test time. 
However, applying MORL in healthcare necessitates strict offline learning from historical data.
- In this paper, we benchmark three offline MORL algorithms, Conditioned Conservative Pareto Q-Learning (CPQL), Adaptive CPQL, and a modified Pareto Efficient Decision Agent (PEDA) Decision Transformer (PEDA DT), against three scalarized single-objective baselines (BC, CQL, and DDQN) on the MIMIC-IV dataset. Using Off-Policy Evaluation (OPE) metrics, we demonstrate that the PEDA DT algorithm offers superior flexibility compared to static scalarized baselines. Notably, our results extend previous findings on single-objective Decision Transformers in healthcare, confirming that sequence modeling architectures remain robust and effective when scaled to multi-objective conditioned generation. These findings suggest that offline MORL is a promising framework for enabling personalized, adjustable decision-making in critical care without the need for retraining.
- oai:arXiv.org:2512.08012v1
+ SIP: Site in Pieces - A Dataset of Disaggregated Construction-Phase 3D Scans for Semantic Segmentation and Scene Understanding
+ https://arxiv.org/abs/2512.09062
+ arXiv:2512.09062v1 Announce Type: new
+Abstract: Accurate 3D scene interpretation in active construction sites is essential for progress monitoring, safety assessment, and digital twin development. LiDAR is widely used in construction because it offers advantages over camera-based systems, performing reliably in cluttered and dynamically changing conditions. Yet most public datasets for 3D perception are derived from densely fused scans with uniform sampling and complete visibility, conditions that do not reflect real construction sites. Field data are often collected as isolated single-station LiDAR views, constrained by safety requirements, limited access, and ongoing operations.
These factors lead to radial density decay, fragmented geometry, and view-dependent visibility, characteristics that remain underrepresented in existing datasets. This paper presents SIP, Site in Pieces, a dataset created to reflect the practical constraints of LiDAR acquisition during construction. SIP provides indoor and outdoor scenes captured with a terrestrial LiDAR scanner and annotated at the point level using a taxonomy tailored to construction environments: A. Built Environment, B. Construction Operations, and C. Site Surroundings. The dataset includes both structural components and slender temporary objects such as scaffolding, MEP piping, and scissor lifts, where sparsity caused by occlusion and fragmented geometry make segmentation particularly challenging. The scanning protocol, annotation workflow, and quality control procedures establish a consistent foundation for the dataset. SIP is openly available with a supporting Git repository, offering adaptable class configurations that streamline adoption within modern 3D deep learning frameworks. By providing field data that retain real-world sensing characteristics, SIP enables robust benchmarking and contributes to advancing construction-oriented 3D vision tasks.
+ oai:arXiv.org:2512.09062v1
+ cs.CV
cs.LG
- Wed, 10 Dec 2025 00:00:00 -0500
+ Thu, 11 Dec 2025 00:00:00 -0500
new
- http://creativecommons.org/licenses/by/4.0/
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aryaman Bansal, Divya Sharma
+ Seongyong Kim, Yong Kwon Cho
- Learning Dynamics from Infrequent Output Measurements for Uncertainty-Aware Optimal Control
- https://arxiv.org/abs/2512.08013
- arXiv:2512.08013v1 Announce Type: new
-Abstract: Reliable optimal control is challenging when the dynamics of a nonlinear system are unknown and only infrequent, noisy output measurements are available.
This work addresses this setting of limited sensing by formulating a Bayesian prior over the continuous-time dynamics and latent state trajectory in state-space form and updating it through a targeted marginal Metropolis-Hastings sampler equipped with a numerical ODE integrator. The resulting posterior samples are used to formulate a scenario-based optimal control problem that accounts for both model and measurement uncertainty and is solved using standard nonlinear programming methods. The approach is validated in a numerical case study on glucose regulation using a Type 1 diabetes model. - oai:arXiv.org:2512.08013v1 - eess.SY - cs.LG - cs.SY - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + ShelfAware: Real-Time Visual-Inertial Semantic Localization in Quasi-Static Environments with Low-Cost Sensors + https://arxiv.org/abs/2512.09065 + arXiv:2512.09065v1 Announce Type: new +Abstract: Many indoor workspaces are quasi-static: global layout is stable but local semantics change continually, producing repetitive geometry, dynamic clutter, and perceptual noise that defeat vision-based localization. We present ShelfAware, a semantic particle filter for robust global localization that treats scene semantics as statistical evidence over object categories rather than fixed landmarks. ShelfAware fuses a depth likelihood with a category-centric semantic similarity and uses a precomputed bank of semantic viewpoints to perform inverse semantic proposals inside MCL, yielding fast, targeted hypothesis generation on low-cost, vision-only hardware. Across 100 global-localization trials spanning four conditions (cart-mounted, wearable, dynamic obstacles, and sparse semantics) in a semantically dense, retail environment, ShelfAware achieves a 96% success rate (vs. 
22% MCL and 10% AMCL) with a mean time-to-convergence of 1.91s, attains the lowest translational RMSE in all conditions, and maintains stable tracking in 80% of tested sequences, all while running in real time on a consumer laptop-class platform. By modeling semantics distributionally at the category level and leveraging inverse proposals, ShelfAware resolves geometric aliasing and semantic drift common to quasi-static domains. Because the method requires only vision sensors and VIO, it integrates as an infrastructure-free building block for mobile robots in warehouses, labs, and retail settings; as a representative application, it also supports the creation of assistive devices providing start-anytime, shared-control assistive navigation for people with visual impairments. + oai:arXiv.org:2512.09065v1 + cs.RO + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Robert Lefringhausen, Theodor Springer, Sandra Hirche + Shivendra Agrawal, Jake Brawer, Ashutosh Naik, Alessandro Roncone, Bradley Hayes - FRIEDA: Benchmarking Multi-Step Cartographic Reasoning in Vision-Language Models - https://arxiv.org/abs/2512.08016 - arXiv:2512.08016v1 Announce Type: new -Abstract: Cartographic reasoning is the skill of interpreting geographic relationships by aligning legends, map scales, compass directions, map texts, and geometries across one or more map images. Although essential as a concrete cognitive capability and for critical tasks such as disaster response and urban planning, it remains largely unevaluated. Building on progress in chart and infographic understanding, recent large vision language model studies on map visual question-answering often treat maps as a special case of charts. 
In contrast, map VQA demands comprehension of layered symbology (e.g., symbols, geometries, and text labels) as well as spatial relations tied to orientation and distance that often span multiple maps and are not captured by chart-style evaluations. To address this gap, we introduce FRIEDA, a benchmark for testing complex open-ended cartographic reasoning in LVLMs. FRIEDA sources real map images from documents and reports in various domains and geographical areas. Following classifications in Geographic Information System (GIS) literature, FRIEDA targets all three categories of spatial relations: topological (border, equal, intersect, within), metric (distance), and directional (orientation). All questions require multi-step inference, and many require cross-map grounding and reasoning. We evaluate eleven state-of-the-art LVLMs under two settings: (1) the direct setting, where we provide the maps relevant to the question, and (2) the contextual setting, where the model may have to identify the maps relevant to the question before reasoning. Even the strongest models, Gemini-2.5-Pro and GPT-5-Think, achieve only 38.20% and 37.20% accuracy, respectively, far below human performance of 84.87%. These results reveal a persistent gap in multi-step cartographic reasoning, positioning FRIEDA as a rigorous benchmark to drive progress on spatial intelligence in LVLMs. - oai:arXiv.org:2512.08016v1 - cs.CV + ORCA: Open-ended Response Correctness Assessment for Audio Question Answering + https://arxiv.org/abs/2512.09066 + arXiv:2512.09066v1 Announce Type: new +Abstract: Evaluating open-ended responses from large audio language models (LALMs) is challenging because human annotators often genuinely disagree on answer correctness due to multiple valid interpretations, partial correctness, and subjective judgment. Traditional metrics reporting only mean scores fail to capture this uncertainty. 
We present ORCA (Open-ended Response Correctness Assessment), a framework that models the variability in human judgments using Beta distributions to predict both expected correctness and uncertainty. Our three-stage annotation framework combines human judgment with structured feedback and iterative refinement to simultaneously curate training data and improve benchmark quality. We collected 11,721 annotations across 3,580 question-answer pairs from 15 LALMs on two audio QA benchmarks, achieving inter-annotator agreement of 0.82 (Krippendorff's alpha). ORCA achieves 0.91 Spearman correlation with mean human judgments, matching or outperforming LLM-judge baselines while providing uncertainty estimates and requiring significantly less compute. We release our models, code, and curated dataset.
+ oai:arXiv.org:2512.09066v1
+ cs.SD
cs.AI
+ cs.CL
+ Thu, 11 Dec 2025 00:00:00 -0500
new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Šimon Sedláček, Sara Barahona, Bolaji Yusuf, Laura Herrera-Alarcón, Santosh Kesiraju, Cecilia Bolaños, Alicia Lozano-Diez, Sathvik Udupa, Fernando López, Allison Ferner, Ramani Duraiswami, Jan Černocký
+ Contrast transfer functions help quantify neural network out-of-distribution generalization in HRTEM
+ https://arxiv.org/abs/2512.09067
+ arXiv:2512.09067v1 Announce Type: new
+Abstract: Neural networks, while effective for tackling many challenging scientific tasks, are not known to perform well out-of-distribution (OOD), i.e., within domains which differ from their training data. Understanding neural network OOD generalization is paramount to their successful deployment in experimental workflows, especially when ground-truth knowledge about the experiment is hard to establish or experimental conditions significantly vary.
With inherent access to ground-truth information and fine-grained control of underlying distributions, simulation-based data curation facilitates precise investigation of OOD generalization behavior. Here, we probe generalization with respect to imaging conditions of neural network segmentation models for high-resolution transmission electron microscopy (HRTEM) imaging of nanoparticles, training and measuring the OOD generalization of over 12,000 neural networks using synthetic data generated via random structure sampling and multislice simulation. Using the HRTEM contrast transfer function, we further develop a framework to compare information content of HRTEM datasets and quantify OOD domain shifts. We demonstrate that neural network segmentation models enjoy significant performance stability, but will smoothly and predictably worsen as imaging conditions shift from the training distribution. Lastly, we consider limitations of our approach in explaining other OOD shifts, such as of the atomic structures, and discuss complementary techniques for understanding generalization in such settings. + oai:arXiv.org:2512.09067v1 + cs.LG + cond-mat.mtrl-sci + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jiyoon Pyo, Yuankun Jiao, Dongwon Jung, Zekun Li, Leeje Jang, Sofia Kirsanova, Jina Kim, Yijun Lin, Qin Liu, Junyi Xie, Hadi Askari, Nan Xu, Muhao Chen, Yao-Yi Chiang + Luis Rangel DaCosta, Mary C. Scott - Structure Theorems (and Fast Algorithms) for List Recovery of Subspace-Design Codes - https://arxiv.org/abs/2512.08017 - arXiv:2512.08017v1 Announce Type: new -Abstract: List recovery of error-correcting codes has emerged as a fundamental notion with broad applications across coding theory and theoretical computer science. Folded Reed-Solomon (FRS) and univariate multiplicity codes are explicit constructions which can be efficiently list-recovered up to capacity, namely a fraction of errors approaching $1-R$ where $R$ is the code rate. 
- Chen and Zhang and related works showed that folded Reed-Solomon codes and linear codes must have list sizes exponential in $1/\epsilon$ for list-recovering from an error-fraction $1-R-\epsilon$. These results suggest that one cannot list-recover FRS codes in time that is also polynomial in $1/\epsilon$. In contrast to such limitations, we show, extending algorithmic advances of Ashvinkumar, Habib, and Srivastava for list decoding, that even if the lists in the case of list-recovery are large, they are highly structured. In particular, we can output a compact description of a set of size only $\ell^{O((\log \ell)/\epsilon)}$ which contains the relevant list, while running in time only polynomial in $1/\epsilon$ (the previously known compact description due to Guruswami and Wang had size $\approx n^{\ell/\epsilon}$). We also improve on the state-of-the-art algorithmic results for the task of list-recovery. - oai:arXiv.org:2512.08017v1 - cs.IT - cs.CC - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + KD-OCT: Efficient Knowledge Distillation for Clinical-Grade Retinal OCT Classification + https://arxiv.org/abs/2512.09069 + arXiv:2512.09069v1 Announce Type: new +Abstract: Age-related macular degeneration (AMD) and choroidal neovascularization (CNV)-related conditions are leading causes of vision loss worldwide, with optical coherence tomography (OCT) serving as a cornerstone for early detection and management. However, deploying state-of-the-art deep learning models like ConvNeXtV2-Large in clinical settings is hindered by their computational demands. Therefore, it is desirable to develop efficient models that maintain high diagnostic performance while enabling real-time deployment. 
In this study, a novel knowledge distillation framework, termed KD-OCT, is proposed to compress a high-performance ConvNeXtV2-Large teacher model, enhanced with advanced augmentations, stochastic weight averaging, and focal loss, into a lightweight EfficientNet-B2 student for classifying normal, drusen, and CNV cases. KD-OCT employs real-time distillation with a combined loss balancing soft teacher knowledge transfer and hard ground-truth supervision. The effectiveness of the proposed method is evaluated on the Noor Eye Hospital (NEH) dataset using patient-level cross-validation. Experimental results demonstrate that KD-OCT outperforms comparable multi-scale or feature-fusion OCT classifiers in efficiency-accuracy balance, achieving near-teacher performance with substantial reductions in model size and inference time. Despite the compression, the student model exceeds most existing frameworks, facilitating edge deployment for AMD screening. Code is available at https://github.com/erfan-nourbakhsh/KD-OCT.
+ oai:arXiv.org:2512.09069v1
+ cs.CV
cs.AI
cs.LG
+ Thu, 11 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Erfan Nourbakhsh, Nasrin Sanjari, Ali Nourbakhsh
- "Your Privacy is Your Responsibility": Understanding How Users Collectively Navigate the Complexity of Privacy on Quora
- https://arxiv.org/abs/2512.08025
- arXiv:2512.08025v1 Announce Type: new
-Abstract: In the current technology environment, users are often in a vulnerable position when it comes to protecting their privacy. Previous efforts to promote privacy protection have largely focused on top-down approaches such as regulation and technology design, missing opportunities to understand how to empower users through bottom-up, collective approaches. Our paper addresses this by analyzing what and how privacy-related topics are discussed on Quora.
We identified a wide range of interconnected privacy topics brought up by the users, including privacy risks and dangers, protection strategies, organizational practices, and existing laws and regulations. Our results highlight the interplay among the individual, technological, organizational, and societal factors affecting users' privacy attitudes. Moreover, we provide implications for designing community-based tools to better support users' collective efforts in navigating privacy, tools that incorporate users' diverse privacy-related behaviors and preferences, simplify information access and sharing, and connect designers and developers with the user community. - oai:arXiv.org:2512.08025v1 - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Banach neural operator for Navier-Stokes equations + https://arxiv.org/abs/2512.09070 + arXiv:2512.09070v1 Announce Type: new +Abstract: Classical neural networks are known for their ability to approximate mappings between finite-dimensional spaces, but they fall short in capturing complex operator dynamics across infinite-dimensional function spaces. Neural operators, in contrast, have emerged as powerful tools in scientific machine learning for learning such mappings. However, standard neural operators typically lack mechanisms for mixing or attending to input information across space and time. In this work, we introduce the Banach neural operator (BNO) -- a novel framework that integrates Koopman operator theory with deep neural networks to predict nonlinear, spatiotemporal dynamics from partial observations. The BNO approximates a nonlinear operator between Banach spaces by combining spectral linearization (via Koopman theory) with deep feature learning (via convolutional neural networks and nonlinear activations). This sequence-to-sequence model captures dominant dynamic modes and allows for mesh-independent prediction. 
Numerical experiments on the Navier-Stokes equations demonstrate the method's accuracy and generalization capabilities. In particular, BNO achieves robust zero-shot super-resolution in unsteady flow prediction and consistently outperforms conventional Koopman-based methods and deep learning models. + oai:arXiv.org:2512.09070v1 + cs.NE + cs.LG + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Varun Shiri, Maggie Xiong, Jin L. C. Guo, Jinghui Cheng + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1063/5.0284818 + Bo Zhang - Toward an AI Reasoning-Enabled System for Patient-Clinical Trial Matching - https://arxiv.org/abs/2512.08026 - arXiv:2512.08026v1 Announce Type: new -Abstract: Screening patients for clinical trial eligibility remains a manual, time-consuming, and resource-intensive process. We present a secure, scalable proof-of-concept system for Artificial Intelligence (AI)-augmented patient-trial matching that addresses key implementation challenges: integrating heterogeneous electronic health record (EHR) data, facilitating expert review, and maintaining rigorous security standards. Leveraging open-source, reasoning-enabled large language models (LLMs), the system moves beyond binary classification to generate structured eligibility assessments with interpretable reasoning chains that support human-in-the-loop review. This decision support tool represents eligibility as a dynamic state rather than a fixed determination, identifying matches when available and offering actionable recommendations that could render a patient eligible in the future. The system aims to reduce coordinator burden, intelligently broaden the set of trials considered for each patient and guarantee comprehensive auditability of all AI-generated outputs. 
- oai:arXiv.org:2512.08026v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Adaptive Thresholding for Visual Place Recognition using Negative Gaussian Mixture Statistics + https://arxiv.org/abs/2512.09071 + arXiv:2512.09071v1 Announce Type: new +Abstract: Visual place recognition (VPR) is an important component technology for camera-based mapping and navigation applications. This is a challenging problem because images of the same place may appear quite different for reasons including seasonal changes, weather illumination, structural changes to the environment, as well as transient pedestrian or vehicle traffic. Papers focusing on generating image descriptors for VPR report their results using metrics such as recall@K and ROC curves. However, for a robot implementation, determining which matches are sufficiently good is often reduced to a manually set threshold. And it is difficult to manually select a threshold that will work for a variety of visual scenarios. This paper addresses the problem of automatically selecting a threshold for VPR by looking at the 'negative' Gaussian mixture statistics for a place - image statistics indicating not this place. We show that this approach can be used to select thresholds that work well for a variety of image databases and image descriptors. + oai:arXiv.org:2512.09071v1 + cs.CV + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Caroline N. Leach, Mitchell A. Klusty, Samuel E. Armstrong, Justine C. Pickarski, Kristen L. Hankins, Emily B. Collier, Maya Shah, Aaron D. Mullen, V. K. Cody Bumgardner + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nick Trinh, Damian Lyons - Optimized Area Coverage in Disaster Response Utilizing Autonomous UAV Swarm Formations - https://arxiv.org/abs/2512.08028 - arXiv:2512.08028v1 Announce Type: new -Abstract: This paper presents a UAV swarm system designed to assist first responders in disaster scenarios like wildfires. 
By distributing sensors across multiple agents, the system extends flight duration and enhances data availability, reducing the risk of mission failure due to collisions. To mitigate this risk further, we introduce an autonomous navigation framework that utilizes a local Euclidean Signed Distance Field (ESDF) map for obstacle avoidance while maintaining swarm formation with minimal path deviation. Additionally, we incorporate a Traveling Salesman Problem (TSP) variant to optimize area coverage, prioritizing Points of Interest (POIs) based on preassigned values derived from environmental behavior and critical infrastructure. The proposed system is validated through simulations with varying swarm sizes, demonstrating its ability to maximize coverage while ensuring collision avoidance between UAVs and obstacles. - oai:arXiv.org:2512.08028v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Modular Deep-Learning-Based Early Warning System for Deadly Heatwave Prediction + https://arxiv.org/abs/2512.09074 + arXiv:2512.09074v1 Announce Type: new +Abstract: Severe heatwaves in urban areas significantly threaten public health, calling for establishing early warning strategies. Despite predicting occurrence of heatwaves and attributing historical mortality, predicting an incoming deadly heatwave remains a challenge due to the difficulty in defining and estimating heat-related mortality. Furthermore, establishing an early warning system imposes additional requirements, including data availability, spatial and temporal robustness, and decision costs. To address these challenges, we propose DeepTherm, a modular early warning system for deadly heatwave prediction without requiring heat-related mortality history. By highlighting the flexibility of deep learning, DeepTherm employs a dual-prediction pipeline, disentangling baseline mortality in the absence of heatwaves and other irregular events from all-cause mortality. We evaluated DeepTherm on real-world data across Spain. 
Results demonstrate consistent, robust, and accurate performance across diverse regions, time periods, and population groups while allowing trade-off between missed alarms and false alarms. + oai:arXiv.org:2512.09074v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - 10.1109/MED64031.2025.11073337 - Lampis Papakostas, Aristeidis Geladaris, Athanasios Mastrogeorgiou, Jim Sharples, Gautier Hattenberger, Panagiotis Chatzakos, Panagiotis Polygerinos + http://creativecommons.org/licenses/by/4.0/ + Shangqing Xu, Zhiyuan Zhao, Megha Sharma, Jos\'e Mar\'ia Mart\'in-Olalla, Alexander Rodr\'iguez, Gregory A. Wellenius, B. Aditya Prakash - CLARITY: Medical World Model for Guiding Treatment Decisions by Modeling Context-Aware Disease Trajectories in Latent Space - https://arxiv.org/abs/2512.08029 - arXiv:2512.08029v1 Announce Type: new -Abstract: Clinical decision-making in oncology requires predicting dynamic disease evolution, a task current static AI predictors cannot perform. While world models (WMs) offer a paradigm for generative prediction, existing medical applications remain limited. Existing methods often rely on stochastic diffusion models, focusing on visual reconstruction rather than causal, physiological transitions. Furthermore, in medical domain, models like MeWM typically ignore patient-specific temporal and clinical contexts and lack a feedback mechanism to link predictions to treatment decisions. To address these gaps, we introduce CLARITY, a medical world model that forecasts disease evolution directly within a structured latent space. It explicitly integrates time intervals (temporal context) and patient-specific data (clinical context) to model treatment-conditioned progression as a smooth, interpretable trajectory, and thus generate physiologically faithful, individualized treatment plans. 
Finally, CLARITY introduces a novel prediction-to-decision framework, translating latent rollouts into transparent, actionable recommendations. CLARITY demonstrates state-of-the-art performance in treatment planning. On the MU-Glioma-Post dataset, our approach outperforms recent MeWM by 12\%, and significantly surpasses all other medical-specific large language models. - oai:arXiv.org:2512.08029v1 + Beyond the Hype: Comparing Lightweight and Deep Learning Models for Air Quality Forecasting + https://arxiv.org/abs/2512.09076 + arXiv:2512.09076v1 Announce Type: new +Abstract: Accurate forecasting of urban air pollution is essential for protecting public health and guiding mitigation policies. While Deep Learning (DL) and hybrid pipelines dominate recent research, their complexity and limited interpretability hinder operational use. This study investigates whether lightweight additive models -- Facebook Prophet (FBP) and NeuralProphet (NP) -- can deliver competitive forecasts for particulate matter (PM$_{2.5}$, PM$_{10}$) in Beijing, China. Using multi-year pollutant and meteorological data, we applied systematic feature selection (correlation, mutual information, mRMR), leakage-safe scaling, and chronological data splits. Both models were trained with pollutant and precursor regressors, with NP additionally leveraging lagged dependencies. For context, two machine learning baselines (LSTM, LightGBM) and one traditional statistical model (SARIMAX) were also implemented. Performance was evaluated on a 7-day holdout using MAE, RMSE, and $R^2$. Results show that FBP consistently outperformed NP, SARIMAX, and the learning-based baselines, achieving test $R^2$ above 0.94 for both pollutants. These findings demonstrate that interpretable additive models remain competitive with both traditional and complex approaches, offering a practical balance of accuracy, transparency, and ease of deployment. 
+ oai:arXiv.org:2512.09076v1 cs.LG + cs.AI + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Moazzam Umer Gondal, Hamad ul Qudous, Asma Ahmad Farhan + + + Almost-Optimal Approximation Algorithms for Global Minimum Cut in Directed Graphs + https://arxiv.org/abs/2512.09080 + arXiv:2512.09080v1 Announce Type: new +Abstract: We develop new $(1+\epsilon)$-approximation algorithms for finding the global minimum edge-cut in a directed edge-weighted graph, and for finding the global minimum vertex-cut in a directed vertex-weighted graph. Our algorithms are randomized, and have a running time of $O\left(m^{1+o(1)}/\epsilon\right)$ on any $m$-edge $n$-vertex input graph, assuming all edge/vertex weights are polynomially-bounded. In particular, for any constant $\epsilon>0$, our algorithms have an almost-optimal running time of $O\left(m^{1+o(1)}\right)$. The fastest previously-known running time for this setting, due to (Cen et al., FOCS 2021), is $\tilde{O}\left(\min\left\{n^2/\epsilon^2,m^{1+o(1)}\sqrt{n}\right\}\right)$ for Minimum Edge-Cut, and $\tilde{O}\left(n^2/\epsilon^2\right)$ for Minimum Vertex-Cut. Our results further extend to the rooted variants of the Minimum Edge-Cut and Minimum Vertex-Cut problems, where the algorithm is additionally given a root vertex $r$, and the goal is to find a minimum-weight cut separating any vertex from the root $r$. In terms of techniques, we build upon and extend a framework that was recently introduced by (Chuzhoy et al., SODA 2026) for solving the Minimum Vertex-Cut problem in unweighted directed graphs. Additionally, in order to obtain our result for the Global Minimum Vertex-Cut problem, we develop a novel black-box reduction from this problem to its rooted variant. Prior to our work, such reductions were only known for more restricted settings, such as when all vertex-weights are unit. 
+ oai:arXiv.org:2512.09080v1 + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ron Mosenzon + + + AgentComp: From Agentic Reasoning to Compositional Mastery in Text-to-Image Models + https://arxiv.org/abs/2512.09081 + arXiv:2512.09081v1 Announce Type: new +Abstract: Text-to-image generative models have achieved remarkable visual quality but still struggle with compositionality$-$accurately capturing object relationships, attribute bindings, and fine-grained details in prompts. A key limitation is that models are not explicitly trained to differentiate between compositionally similar prompts and images, resulting in outputs that are close to the intended description yet deviate in fine-grained details. To address this, we propose AgentComp, a framework that explicitly trains models to better differentiate such compositional variations and enhance their reasoning ability. AgentComp leverages the reasoning and tool-use capabilities of large language models equipped with image generation, editing, and VQA tools to autonomously construct compositional datasets. Using these datasets, we apply an agentic preference optimization method to fine-tune text-to-image models, enabling them to better distinguish between compositionally similar samples and resulting in overall stronger compositional generation ability. AgentComp achieves state-of-the-art results on compositionality benchmarks such as T2I-CompBench, without compromising image quality$-$a common drawback in prior approaches$-$and even generalizes to other capabilities not explicitly trained for, such as text rendering. + oai:arXiv.org:2512.09081v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Tianxingjian Ding, Yuanhao Zou, Chen Chen, Mubarak Shah, Yu Tian + Arman Zarei, Jiacheng Pan, Matthew Gwilliam, Soheil Feizi, Zhenheng Yang - What Pulls the Strings? 
Understanding the Characteristics and Role of Argumentation in Open-Source Software Usability Discussions - https://arxiv.org/abs/2512.08032 - arXiv:2512.08032v1 Announce Type: new -Abstract: The usability of open-source software (OSS) is important but frequently overlooked in favor of technical and functional complexity. Argumentation can be a pivotal device for diverse stakeholders in OSS usability discussions to express opinions and persuade others. However, the characteristics of argument discourse in those discussions remain unknown, resulting in difficulties in providing effective support for discussion participants. We address this through a comprehensive analysis of argument discourse and quality in five OSS projects. Our results indicated that usability discussions are predominantly argument-driven, although their qualities vary. Issue comments exhibit lower-quality arguments than the issue posts, suggesting a shortage of collective intelligence about usability in OSS communities. Moreover, argument discourse and quality have various impacts on the subsequent behavior of participants. Overall, this research offers insights to help OSS stakeholders build more effective arguments and eventually improve OSS usability. These insights can also inform studies about other distributed collaborative communities. - oai:arXiv.org:2512.08032v1 - cs.SE - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + GS-KAN: Parameter-Efficient Kolmogorov-Arnold Networks via Sprecher-Type Shared Basis Functions + https://arxiv.org/abs/2512.09084 + arXiv:2512.09084v1 Announce Type: new +Abstract: The Kolmogorov-Arnold representation theorem offers a theoretical alternative to Multi-Layer Perceptrons (MLPs) by placing learnable univariate functions on edges rather than nodes. 
While recent implementations such as Kolmogorov-Arnold Networks (KANs) demonstrate high approximation capabilities, they suffer from significant parameter inefficiency due to the requirement of maintaining unique parameterizations for every network edge. In this work, we propose GS-KAN (Generalized Sprecher-KAN), a lightweight architecture inspired by David Sprecher's refinement of the superposition theorem. GS-KAN constructs unique edge functions by applying learnable linear transformations to a single learnable, shared parent function per layer. We evaluate GS-KAN against existing KAN architectures and MLPs across synthetic function approximation, tabular data regression and image classification tasks. Our results demonstrate that GS-KAN outperforms both MLPs and standard KAN baselines on continuous function approximation tasks while maintaining superior parameter efficiency. Additionally, GS-KAN achieves competitive performance with existing KAN architectures on tabular regression and outperforms MLPs on high-dimensional classification tasks. Crucially, the proposed architecture enables the deployment of KAN-based architectures in high-dimensional regimes under strict parameter constraints, a setting where standard implementations are typically infeasible due to parameter explosion. The source code is available at https://github.com/rambamn48/gs-impl. + oai:arXiv.org:2512.09084v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Arghavan Sanei, Chaima Amiri, Atefeh Shokrizadeh, Jinghui Cheng + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Oscar Eliasson - Expectations in Expectation Propagation - https://arxiv.org/abs/2512.08034 - arXiv:2512.08034v1 Announce Type: new -Abstract: Expectation Propagation (EP) is a widely used message-passing algorithm that decomposes a global inference problem into multiple local ones. It approximates marginal distributions (beliefs) using intermediate functions (messages). 
While beliefs must be proper probability distributions that integrate to one, messages may have infinite integral values. In Gaussian-projected EP, such messages take a Gaussian form and appear as if they have "negative" variances. Although allowed within the EP framework, these negative-variance messages can impede algorithmic progress. - In this paper, we investigate EP in linear models and analyze the relationship between the corresponding beliefs. Based on the analysis, we propose both non-persistent and persistent approaches that prevent the algorithm from being blocked by messages with infinite integral values. - Furthermore, by examining the relationship between the EP messages in linear models, we develop an additional approach that avoids the occurrence of messages with infinite integral values. - oai:arXiv.org:2512.08034v1 - cs.IT - eess.SP - math.IT - stat.CO - Wed, 10 Dec 2025 00:00:00 -0500 + Mental Models of Autonomy and Sentience Shape Reactions to AI + https://arxiv.org/abs/2512.09085 + arXiv:2512.09085v1 Announce Type: new +Abstract: Narratives about artificial intelligence (AI) entangle autonomy, the capacity to self-govern, with sentience, the capacity to sense and feel. AI agents that perform tasks autonomously and companions that recognize and express emotions may activate mental models of autonomy and sentience, respectively, provoking distinct reactions. To examine this possibility, we conducted three pilot studies (N = 374) and four preregistered vignette experiments describing an AI as autonomous, sentient, both, or neither (N = 2,702). Activating a mental model of sentience increased general mind perception (cognition and emotion) and moral consideration more than autonomy, but autonomy increased perceived threat more than sentience. Sentience also increased perceived autonomy more than vice versa. Based on a within-paper meta-analysis, sentience changed reactions more than autonomy on average. 
By disentangling different mental models of AI, we can study human-AI interaction with more precision to better navigate the detailed design of anthropomorphized AI and prompting interfaces. + oai:arXiv.org:2512.09085v1 + cs.HC + cs.AI + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Zilu Zhao, Fangqing Xiao, Dirk Slock + http://creativecommons.org/licenses/by/4.0/ + Janet V. T. Pauketat, Daniel B. Shank, Aikaterina Manoli, Jacy Reese Anthis - Joint Activity Design Heuristics for Enhancing Human-Machine Collaboration - https://arxiv.org/abs/2512.08036 - arXiv:2512.08036v1 Announce Type: new -Abstract: Joint activity describes when more than one agent (human or machine) contributes to the completion of a task or activity. Designing for joint activity focuses on explicitly supporting the interdependencies between agents necessary for effective coordination among agents engaged in the joint activity. This builds and expands upon designing for usability to further address how technologies can be designed to act as effective team players. Effective joint activity requires supporting, at minimum, five primary macrocognitive functions within teams: Event Detection, Sensemaking, Adaptability, Perspective-Shifting, and Coordination. Supporting these functions is equally as important as making technologies usable. We synthesized fourteen heuristics from relevant literature including display design, human factors, cognitive systems engineering, cognitive psychology, and computer science to aid the design, development, and evaluation of technologies that support joint human-machine activity. 
- oai:arXiv.org:2512.08036v1 + Inferring Operator Emotions from a Motion-Controlled Robotic Arm + https://arxiv.org/abs/2512.09086 + arXiv:2512.09086v1 Announce Type: new +Abstract: A remote robot operator's affective state can significantly impact the resulting robot's motions leading to unexpected consequences, even when the user follows protocol and performs permitted tasks. The recognition of a user operator's affective states in remote robot control scenarios is, however, underexplored. Current emotion recognition methods rely on reading the user's vital signs or body language, but the devices and user participation these measures require would add limitations to remote robot control. We demonstrate that the functional movements of a remote-controlled robotic avatar, which was not designed for emotional expression, can be used to infer the emotional state of the human operator via a machine-learning system. Specifically, our system achieved 83.3$\%$ accuracy in recognizing the user's emotional state expressed by robot movements, as a result of their hand motions. We discuss the implications of this system on prominent current and future remote robot operation and affective robotic contexts. + oai:arXiv.org:2512.09086v1 + cs.RO cs.HC - cs.AI - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mohammadreza Jalaeian, Dane A. Morey, Michael F. Rayo + Xinyu Qi, Zeyu Deng, Shaun Alexander Macdonald, Liying Li, Chen Wang, Muhammad Ali Imran, Philip G. Zhao - SSplain: Sparse and Smooth Explainer for Retinopathy of Prematurity Classification - https://arxiv.org/abs/2512.08038 - arXiv:2512.08038v1 Announce Type: new -Abstract: Neural networks are frequently used in medical diagnosis. However, due to their black-box nature, model explainers are used to help clinicians understand better and trust model outputs. 
This paper introduces an explainer method for classifying Retinopathy of Prematurity (ROP) from fundus images. Previous methods fail to generate explanations that preserve input image structures such as smoothness and sparsity. We introduce Sparse and Smooth Explainer (SSplain), a method that generates pixel-wise explanations while preserving image structures by enforcing smoothness and sparsity. This results in realistic explanations to enhance the understanding of the given black-box model. To achieve this goal, we define an optimization problem with combinatorial constraints and solve it using the Alternating Direction Method of Multipliers (ADMM). Experimental results show that SSplain outperforms commonly used explainers in terms of both post-hoc accuracy and smoothness analyses. Additionally, SSplain identifies features that are consistent with domain-understandable features that clinicians consider as discriminative factors for ROP. We also show SSplain's generalization by applying it to additional publicly available datasets. Code is available at https://github.com/neu-spiral/SSplain. - oai:arXiv.org:2512.08038v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + A posteriori error estimates for mixed-dimensional Darcy flow using non-matching grids + https://arxiv.org/abs/2512.09087 + arXiv:2512.09087v1 Announce Type: new +Abstract: In this article, we extend the a posteriori error estimates for hierarchical mixed-dimensional elliptic equations developed in [Varela et al., J. Numer. Math., 48 (2023), pp. 247-280] to the setting of non-matching mixed-dimensional grids. The extension is achieved by introducing transfer grids between the planar subdomain and interface grids, together with stable discrete projection operators for primal (potential) and dual (flux) variables. The proposed non-matching estimators remain fully guaranteed and computable. 
Numerical experiments, including three-dimensional problems based on community benchmarks for incompressible Darcy flow in fractured porous media, demonstrate reliable performance of the estimators for the non-matching grids and effectivity that is comparable to the estimators for matching grids. + oai:arXiv.org:2512.09087v1 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Elifnur Sunger, Tales Imbiriba, Peter Campbell, Deniz Erdogmus, Stratis Ioannidis, Jennifer Dy + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jhabriel Varela, Christian E. Schaerer, Eirik Keilegavlen, Inga Berre - Lost in Translation, Found in Embeddings: Sign Language Translation and Alignment - https://arxiv.org/abs/2512.08040 - arXiv:2512.08040v1 Announce Type: new -Abstract: Our aim is to develop a unified model for sign language understanding, that performs sign language translation (SLT) and sign-subtitle alignment (SSA). Together, these two tasks enable the conversion of continuous signing videos into spoken language text and also the temporal alignment of signing with subtitles -- both essential for practical communication, large-scale corpus construction, and educational applications. To achieve this, our approach is built upon three components: (i) a lightweight visual backbone that captures manual and non-manual cues from human keypoints and lip-region images while preserving signer privacy; (ii) a Sliding Perceiver mapping network that aggregates consecutive visual features into word-level embeddings to bridge the vision-text gap; and (iii) a multi-task scalable training strategy that jointly optimises SLT and SSA, reinforcing both linguistic and temporal alignment. To promote cross-linguistic generalisation, we pretrain our model on large-scale sign-text corpora covering British Sign Language (BSL) and American Sign Language (ASL) from the BOBSL and YouTube-SL-25 datasets. 
With this multilingual pretraining and strong model design, we achieve state-of-the-art results on the challenging BOBSL (BSL) dataset for both SLT and SSA. Our model also demonstrates robust zero-shot generalisation and finetuned SLT performance on How2Sign (ASL), highlighting the potential of scalable translation across different sign languages. - oai:arXiv.org:2512.08040v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Calibrated Trust in Dealing with LLM Hallucinations: A Qualitative Study + https://arxiv.org/abs/2512.09088 + arXiv:2512.09088v1 Announce Type: new +Abstract: Hallucinations are outputs by Large Language Models (LLMs) that are factually incorrect yet appear plausible [1]. This paper investigates how such hallucinations influence users' trust in LLMs and users' interaction with LLMs. To explore this in everyday use, we conducted a qualitative study with 192 participants. Our findings show that hallucinations do not result in blanket mistrust but instead lead to context-sensitive trust calibration. Building on the calibrated trust model by Lee & See [2] and Afroogh et al.'s trust-related factors [3], we confirm expectancy [3], [4], prior experience [3], [4], [5], and user expertise & domain knowledge [3], [4] as user-related (human) trust factors, and identify intuition as an additional factor relevant for hallucination detection. Additionally, we found that trust dynamics are further influenced by contextual factors, particularly perceived risk [3] and decision stakes [6]. Consequently, we validate the recursive trust calibration process proposed by Bl\"obaum [7] and extend it by including intuition as a user-related trust factor. Based on these insights, we propose practical recommendations for responsible and reflective LLM use. 
+ oai:arXiv.org:2512.09088v1 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Youngjoon Jang, Liliane Momeni, Zifan Jiang, Joon Son Chung, G\"ul Varol, Andrew Zisserman + The 3rd International Conference on Foundation and Large Language Models (FLLM2025), Vienna, Austria, 25-28 November 2025 + Adrian Ryser, Florian Allwein, Tim Schlippe - Towards Sustainable Universal Deepfake Detection with Frequency-Domain Masking - https://arxiv.org/abs/2512.08042 - arXiv:2512.08042v1 Announce Type: new -Abstract: Universal deepfake detection aims to identify AI-generated images across a broad range of generative models, including unseen ones. This requires robust generalization to new and unseen deepfakes, which emerge frequently, while minimizing computational overhead to enable large-scale deepfake screening, a critical objective in the era of Green AI. In this work, we explore frequency-domain masking as a training strategy for deepfake detectors. Unlike traditional methods that rely heavily on spatial features or large-scale pretrained models, our approach introduces random masking and geometric transformations, with a focus on frequency masking due to its superior generalization properties. We demonstrate that frequency masking not only enhances detection accuracy across diverse generators but also maintains performance under significant model pruning, offering a scalable and resource-conscious solution. Our method achieves state-of-the-art generalization on GAN- and diffusion-generated image datasets and exhibits consistent robustness under structured pruning. These results highlight the potential of frequency-based masking as a practical step toward sustainable and generalizable deepfake detection. Code and models are available at: [https://github.com/chandlerbing65nm/FakeImageDetection](https://github.com/chandlerbing65nm/FakeImageDetection). 
- oai:arXiv.org:2512.08042v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + A Taxonomy of Numerical Differentiation Methods + https://arxiv.org/abs/2512.09090 + arXiv:2512.09090v1 Announce Type: new +Abstract: Differentiation is a cornerstone of computing and data analysis in every discipline of science and engineering. Indeed, most fundamental physics laws are expressed as relationships between derivatives in space and time. However, derivatives are rarely directly measurable and must instead be computed, often from noisy, potentially corrupt data streams. There is a rich and broad literature of computational differentiation algorithms, but many impose extra constraints to work correctly, e.g. periodic boundary conditions, or are compromised in the presence of noise and corruption. It can therefore be challenging to select the method best-suited to any particular problem. Here, we review a broad range of numerical methods for calculating derivatives, present important contextual considerations and choice points, compare relative advantages, and provide basic theory for each algorithm in order to assist users with the mathematical underpinnings. This serves as a practical guide to help scientists and engineers match methods to application domains. We also provide an open-source Python package, PyNumDiff, which contains a broad suite of methods for differentiating noisy data. + oai:arXiv.org:2512.09090v1 + math.NA + cs.CE + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Chandler Timm C. Doloriel, Habib Ullah, Kristian Hovde Liland, Fadi Al Machot, Ngai-Man Cheung + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Pavel Komarov, Floris van Breugel, J. 
Nathan Kutz - Scaffolding Reshapes Dialogic Engagement in Collaborative Problem Solving: Comparative Analysis of Two Approaches - https://arxiv.org/abs/2512.08045 - arXiv:2512.08045v1 Announce Type: new -Abstract: Supporting learners during Collaborative Problem Solving (CPS) is a necessity. Existing studies have compared scaffolds with maximal and minimal instructional support by studying their effects on learning and behaviour. However, our understanding of how such scaffolds could differently shape the distribution of individual dialogic engagement and behaviours across different CPS phases remains limited. This study applied Heterogeneous Interaction Network Analysis (HINA) and Sequential Pattern Mining (SPM) to uncover the structural effects of scaffolding on different phases of the CPS process among K-12 students in authentic educational settings. Students with a maximal scaffold demonstrated higher dialogic engagement across more phases than those with a minimal scaffold. However, they were extensively demonstrating scripting behaviours across the phases, evidencing the presence of overscripting. Although students with the minimal scaffold demonstrated more problem solving behaviours and fewer scripting behaviours across the phases, they repeated particular behaviours in multiple phases and progressed more to socialising behaviours. In both scaffold conditions, problem solving behaviours rarely progressed to other problem solving behaviours. The paper discusses the implications of these findings for scaffold design and teaching practice of CPS, and highlights the distinct yet complementary value of HINA and SPM approaches to investigate students' learning processes during CPS. 
- oai:arXiv.org:2512.08045v1 - cs.SI - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 + Explaining the Unseen: Multimodal Vision-Language Reasoning for Situational Awareness in Underground Mining Disasters + https://arxiv.org/abs/2512.09092 + arXiv:2512.09092v1 Announce Type: new +Abstract: Underground mining disasters produce pervasive darkness, dust, and collapses that obscure vision and make situational awareness difficult for humans and conventional systems. To address this, we propose MDSE, Multimodal Disaster Situation Explainer, a novel vision-language framework that automatically generates detailed textual explanations of post-disaster underground scenes. MDSE has three-fold innovations: (i) Context-Aware Cross-Attention for robust alignment of visual and textual features even under severe degradation; (ii) Segmentation-aware dual pathway visual encoding that fuses global and region-specific embeddings; and (iii) Resource-Efficient Transformer-Based Language Model for expressive caption generation with minimal compute cost. To support this task, we present the Underground Mine Disaster (UMD) dataset--the first image-caption corpus of real underground disaster scenes--enabling rigorous training and evaluation. Extensive experiments on UMD and related benchmarks show that MDSE substantially outperforms state-of-the-art captioning models, producing more accurate and contextually relevant descriptions that capture crucial details in obscured environments, improving situational awareness for underground emergency response. The code is at https://github.com/mizanJewel/Multimodal-Disaster-Situation-Explainer. 
+ oai:arXiv.org:2512.09092v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Kester Wong, Feng Shihui, Sahan Bulathwela, Mutlu Cukurova + http://creativecommons.org/licenses/by-sa/4.0/ + Mizanur Rahman Jewel, Mohamed Elmahallawy, Sanjay Madria, Samuel Frimpong - Mask to Adapt: Simple Random Masking Enables Robust Continual Test-Time Learning - https://arxiv.org/abs/2512.08048 - arXiv:2512.08048v1 Announce Type: new -Abstract: Distribution shifts at test time degrade image classifiers. Recent continual test-time adaptation (CTTA) methods use masking to regulate learning, but often depend on calibrated uncertainty or stable attention scores and introduce added complexity. We ask: do we need custom-made masking designs, or can a simple random masking schedule suffice under strong corruption? We introduce Mask to Adapt (M2A), a simple CTTA approach that generates a short sequence of masked views (spatial or frequency) and adapts with two objectives: a mask consistency loss that aligns predictions across different views and an entropy minimization loss that encourages confident outputs. Motivated by masked image modeling, we study two common masking families -- spatial masking and frequency masking -- and further compare subtypes within each (spatial: patch vs.\ pixel; frequency: all vs.\ low vs.\ high). On CIFAR10C/CIFAR100C/ImageNetC (severity~5), M2A (Spatial) attains 8.3\%/19.8\%/39.2\% mean error, outperforming or matching strong CTTA baselines, while M2A (Frequency) lags behind. Ablations further show that simple random masking is effective and robust. These results indicate that a simple random masking schedule, coupled with consistency and entropy objectives, is sufficient to drive effective test-time adaptation without relying on uncertainty or attention signals. 
- oai:arXiv.org:2512.08048v1 + Food Image Generation on Multi-Noun Categories + https://arxiv.org/abs/2512.09095 + arXiv:2512.09095v1 Announce Type: new +Abstract: Generating realistic food images for categories with multiple nouns is surprisingly challenging. For instance, the prompt "egg noodle" may result in images that incorrectly contain both eggs and noodles as separate entities. Multi-noun food categories are common in real-world datasets and account for a large portion of entries in benchmarks such as UEC-256. These compound names often cause generative models to misinterpret the semantics, producing unintended ingredients or objects. This is due to insufficient multi-noun category related knowledge in the text encoder and misinterpretation of multi-noun relationships, leading to incorrect spatial layouts. To overcome these challenges, we propose FoCULR (Food Category Understanding and Layout Refinement) which incorporates food domain knowledge and introduces core concepts early in the generation process. Experimental results demonstrate that the integration of these techniques improves image generation performance in the food domain. + oai:arXiv.org:2512.09095v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Chandler Timm C. Doloriel + Xinyue Pan, Yuhao Chen, Jiangpeng He, Fengqing Zhu - An Introduction to Deep Reinforcement and Imitation Learning - https://arxiv.org/abs/2512.08052 - arXiv:2512.08052v1 Announce Type: new -Abstract: Embodied agents, such as robots and virtual characters, must continuously select actions to execute tasks effectively, solving complex sequential decision-making problems. Given the difficulty of designing such controllers manually, learning-based approaches have emerged as promising alternatives, most notably Deep Reinforcement Learning (DRL) and Deep Imitation Learning (DIL). 
DRL leverages reward signals to optimize behavior, while DIL uses expert demonstrations to guide learning. This document introduces DRL and DIL in the context of embodied agents, adopting a concise, depth-first approach to the literature. It is self-contained, presenting all necessary mathematical and machine learning concepts as they are needed. It is not intended as a survey of the field; rather, it focuses on a small set of foundational algorithms and techniques, prioritizing in-depth understanding over broad coverage. The material ranges from Markov Decision Processes to REINFORCE and Proximal Policy Optimization (PPO) for DRL, and from Behavioral Cloning to Dataset Aggregation (DAgger) and Generative Adversarial Imitation Learning (GAIL) for DIL. - oai:arXiv.org:2512.08052v1 + Characterizing Human Feedback-Based Control in Naturalistic Driving Interactions via Gaussian Process Regression with Linear Feedback + https://arxiv.org/abs/2512.09097 + arXiv:2512.09097v1 Announce Type: new +Abstract: Understanding driver interactions is critical to designing autonomous vehicles to interoperate safely with human-driven cars. We consider the impact of these interactions on the policies drivers employ when navigating unsigned intersections in a driving simulator. The simulator allows the collection of naturalistic decision-making and behavior data in a controlled environment. Using these data, we model the human driver responses as state-based feedback controllers learned via Gaussian Process regression methods. We compute the feedback gain of the controller using a weighted combination of linear and nonlinear priors. We then analyze how the individual gains are reflected in driver behavior. We also assess differences in these controllers across populations of drivers. Our work in data-driven analyses of how drivers determine their policies can facilitate future work in the design of socially responsive autonomy for vehicles. 
+ oai:arXiv.org:2512.09097v1 + eess.SY cs.RO + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Rachel DiPirro, Rosalyn Devonport, Dan Calderone, Chishang "Mario" Yang, Wendy Ju, Meeko Oishi - Fairness-aware PageRank via Edge Reweighting - https://arxiv.org/abs/2512.08055 - arXiv:2512.08055v1 Announce Type: new -Abstract: Link-analysis algorithms, such as PageRank, are instrumental in understanding the structural dynamics of networks by evaluating the importance of individual vertices based on their connectivity. Recently, with the rising importance of responsible AI, the question of fairness in link-analysis algorithms has gained traction. In this paper, we present a new approach for incorporating group fairness into the PageRank algorithm by reweighting the transition probabilities in the underlying transition matrix. We formulate the problem of achieving fair PageRank by seeking to minimize the fairness loss, which is the difference between the original group-wise PageRank distribution and a target PageRank distribution. We further define a group-adapted fairness notion, which accounts for group homophily by considering random walks with group-biased restart for each group. Since the fairness loss is non-convex, we propose an efficient projected gradient-descent method for computing locally-optimal edge weights. Unlike earlier approaches, we do not recommend adding new edges to the network, nor do we adjust the restart vector. Instead, we keep the topology of the underlying network unchanged and only modify the relative importance of existing edges. We empirically compare our approach with state-of-the-art baselines and demonstrate the efficacy of our method, where very small changes in the transition matrix lead to significant improvement in the fairness of the PageRank algorithm. 
- oai:arXiv.org:2512.08055v1 - cs.SI - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Masked Generative Policy for Robotic Control + https://arxiv.org/abs/2512.09101 + arXiv:2512.09101v1 Announce Type: new +Abstract: We present Masked Generative Policy (MGP), a novel framework for visuomotor imitation learning. We represent actions as discrete tokens, and train a conditional masked transformer that generates tokens in parallel and then rapidly refines only low-confidence tokens. We further propose two new sampling paradigms: MGP-Short, which performs parallel masked generation with score-based refinement for Markovian tasks, and MGP-Long, which predicts full trajectories in a single pass and dynamically refines low-confidence action tokens based on new observations. With globally coherent prediction and robust adaptive execution capabilities, MGP-Long enables reliable control on complex and non-Markovian tasks that prior methods struggle with. Extensive evaluations on 150 robotic manipulation tasks spanning the Meta-World and LIBERO benchmarks show that MGP achieves both rapid inference and superior success rates compared to state-of-the-art diffusion and autoregressive policies. Specifically, MGP increases the average success rate by 9% across 150 tasks while cutting per-sequence inference time by up to 35x. It further improves the average success rate by 60% in dynamic and missing-observation environments, and solves two non-Markovian scenarios where other state-of-the-art methods fail. + oai:arXiv.org:2512.09101v1 + cs.RO + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - 10.1145/3773966.3777991 - WSDM 2026 - Honglian Wang, Haoyun Chen, Aristides Gionis + Lipeng Zhuang, Shiyu Fan, Florent P. 
Audonnet, Yingdong Ru, Gerardo Aragon Camarasa, Paul Henderson - Large Language Models for Education and Research: An Empirical and User Survey-based Analysis - https://arxiv.org/abs/2512.08057 - arXiv:2512.08057v1 Announce Type: new -Abstract: Pretrained Large Language Models (LLMs) have achieved remarkable success across diverse domains, with education and research emerging as particularly impactful areas. Among current state-of-the-art LLMs, ChatGPT and DeepSeek exhibit strong capabilities in mathematics, science, medicine, literature, and programming. In this study, we present a comprehensive evaluation of these two LLMs through background technology analysis, empirical experiments, and a real-world user survey. The evaluation explores trade-offs among model accuracy, computational efficiency, and user experience in educational and research affairs. We benchmarked these LLMs' performance in text generation, programming, and specialized problem-solving. Experimental results show that ChatGPT excels in general language understanding and text generation, while DeepSeek demonstrates superior performance in programming tasks due to its efficiency-focused design. Moreover, both models deliver medically accurate diagnostic outputs and effectively solve complex mathematical problems. Complementing these quantitative findings, a survey of students, educators, and researchers highlights the practical benefits and limitations of these models, offering deeper insights into their role in advancing education and research. - oai:arXiv.org:2512.08057v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Natural Geometry of Robust Data Attribution: From Convex Models to Deep Networks + https://arxiv.org/abs/2512.09103 + arXiv:2512.09103v1 Announce Type: new +Abstract: Data attribution methods identify which training examples are responsible for a model's predictions, but their sensitivity to distributional perturbations undermines practical reliability. 
We present a unified framework for certified robust attribution that extends from convex models to deep networks. For convex settings, we derive Wasserstein-Robust Influence Functions (W-RIF) with provable coverage guarantees. For deep networks, we demonstrate that Euclidean certification is rendered vacuous by spectral amplification -- a mechanism where the inherent ill-conditioning of deep representations inflates Lipschitz bounds by over $10{,}000\times$. This explains why standard TRAK scores, while accurate point estimates, are geometrically fragile: naive Euclidean robustness analysis yields 0\% certification. Our key contribution is the Natural Wasserstein metric, which measures perturbations in the geometry induced by the model's own feature covariance. This eliminates spectral amplification, reducing worst-case sensitivity by $76\times$ and stabilizing attribution estimates. On CIFAR-10 with ResNet-18, Natural W-TRAK certifies 68.7\% of ranking pairs compared to 0\% for Euclidean baselines -- to our knowledge, the first non-vacuous certified bounds for neural network attribution. Furthermore, we prove that the Self-Influence term arising from our analysis equals the Lipschitz constant governing attribution stability, providing theoretical grounding for leverage-based anomaly detection. Empirically, Self-Influence achieves 0.970 AUROC for label noise detection, identifying 94.1\% of corrupted labels by examining just the top 20\% of training data. 
+ oai:arXiv.org:2512.09103v1 + cs.LG + math.OC + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Md Mostafizer Rahman, Ariful Islam Shiplu, Md Faizul Ibne Amin, Yutaka Watanobe, Lu Peng + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shihao Li, Jiachen Li, Dongmei Chen - LUNA: Linear Universal Neural Attention with Generalization Guarantees - https://arxiv.org/abs/2512.08061 - arXiv:2512.08061v1 Announce Type: new -Abstract: Scaling attention faces a critical bottleneck: the $\mathcal{O}(n^2)$ quadratic computational cost of softmax attention, which limits its application in long-sequence domains. While linear attention mechanisms reduce this cost to $\mathcal{O}(n)$, they typically rely on fixed random feature maps, such as random Fourier features or hand-crafted functions. This reliance on static, data-agnostic kernels creates a fundamental trade-off, forcing practitioners to sacrifice significant model accuracy for computational efficiency. We introduce \textsc{LUNA}, a kernelized linear attention mechanism that eliminates this trade-off, retaining linear cost while matching and surpassing the accuracy of quadratic attention. \textsc{LUNA} is built on the key insight that the kernel feature map itself should be learned rather than fixed a priori. By parameterizing the kernel, \textsc{LUNA} learns a feature basis tailored to the specific data and task, overcoming the expressive limitations of fixed-feature methods. \textsc{Luna} implements this with a learnable feature map that induces a positive-definite kernel and admits a streaming form, yielding linear time and memory scaling in the sequence length. Empirical evaluations validate our approach across diverse settings. On the Long Range Arena (LRA), \textsc{Luna} achieves state-of-the-art average accuracy among efficient Transformers under compute parity, using the same parameter count, training steps, and approximate FLOPs. 
\textsc{Luna} also excels at post-hoc conversion: replacing softmax in fine-tuned BERT and ViT-B/16 checkpoints and briefly fine-tuning recovers most of the original performance, substantially outperforming fixed linearizations. - oai:arXiv.org:2512.08061v1 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + SURA: Secure Unsourced Random Access + https://arxiv.org/abs/2512.09104 + arXiv:2512.09104v1 Announce Type: new +Abstract: This work introduces security for unsourced random access (URA) by employing wiretap-inspired physical layer techniques. To achieve confidentiality, the proposed system opportunistically exploits intrinsic features of feedback-aided URA without adding any overhead or altering its original structure or operational characteristics. As a result, the proposed system preserves the low-cost advantages of URA, including low delay and minimal signaling overhead, while providing secure communication. To secure transmission, each user generates a secret key and an artificial noise sequence from the feedback signal that the BS broadcasts in previous transmission rounds. This feedback depends on the BS-user channel, making it a private signal for each user. The secure transmission is performed by three actions: encrypting the data using the secret key, sending only the parity bits of the LDPC encoded secret key to allow the legitimate receiver to recover it, and masking these parity bits with the artificial noise. For reception, a receiver algorithm is designed for the legitimate user, and a leakage analysis is provided to quantify the information available to the eavesdropper. The simulation results show that meaningful secrecy is achieved in URA without modifying its structure and with negligible impact on standard performance. + oai:arXiv.org:2512.09104v1 + cs.IT + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Mohammad Javad Ahmadi, Rafael F. Schaefer, H. 
Vincent Poor + Cognitive Trust in HRI: "Pay Attention to Me and I'll Trust You Even if You are Wrong" + https://arxiv.org/abs/2512.09105 + arXiv:2512.09105v1 Announce Type: new +Abstract: Cognitive trust and the belief that a robot is capable of accurately performing tasks are recognized as central factors in fostering high-quality human-robot interactions. It is well established that performance factors such as the robot's competence and its reliability shape cognitive trust. Recent studies suggest that affective factors, such as robotic attentiveness, also play a role in building cognitive trust. This work explores the interplay between these two factors that shape cognitive trust. Specifically, we evaluated whether different combinations of robotic competence and attentiveness introduce a compensatory mechanism, where one factor compensates for the lack of the other. In the experiment, participants performed a search task with a robotic dog in a 2x2 experimental design that included two factors: competence (high or low) and attentiveness (high or low). The results revealed that high attentiveness can compensate for low competence. Participants who collaborated with a highly attentive robot that performed poorly reported trust levels comparable to those working with a highly competent robot. When the robot did not demonstrate attentiveness, low competence resulted in a substantial decrease in cognitive trust. The findings indicate that building cognitive trust in human-robot interaction may be more complex than previously believed, involving emotional processes that are typically overlooked. We highlight an affective compensatory mechanism that adds a layer to consider alongside traditional competence-based models of cognitive trust. 
+ oai:arXiv.org:2512.09105v1 + cs.RO + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Ashkan Shahbazi, Ping He, Ali Abbasi, Yikun Bai, Xinran Liu, Elaheh Akbari, Darian Salehi, Navid NaderiAlizadeh, Soheil Kolouri + Adi Manor, Dan Cohen, Ziv Keidar, Avi Parush, Hadas Erel - Deep Kernel Aalen-Johansen Estimator: An Interpretable and Flexible Neural Net Framework for Competing Risks - https://arxiv.org/abs/2512.08063 - arXiv:2512.08063v1 Announce Type: new -Abstract: We propose an interpretable deep competing risks model called the Deep Kernel Aalen-Johansen (DKAJ) estimator, which generalizes the classical Aalen-Johansen nonparametric estimate of cumulative incidence functions (CIFs). Each data point (e.g., patient) is represented as a weighted combination of clusters. If a data point has nonzero weight only for one cluster, then its predicted CIFs correspond to those of the classical Aalen-Johansen estimator restricted to data points from that cluster. These weights come from an automatically learned kernel function that measures how similar any two data points are. On four standard competing risks datasets, we show that DKAJ is competitive with state-of-the-art baselines while being able to provide visualizations to assist model interpretation. - oai:arXiv.org:2512.08063v1 + Learning Unmasking Policies for Diffusion Language Models + https://arxiv.org/abs/2512.09106 + arXiv:2512.09106v1 Announce Type: new +Abstract: Diffusion (Large) Language Models (dLLMs) now match the downstream performance of their autoregressive counterparts on many tasks, while holding the promise of being more efficient during inference. One particularly successful variant is masked discrete diffusion, in which a buffer filled with special mask tokens is progressively replaced with tokens sampled from the model's vocabulary. 
Efficiency can be gained by unmasking several tokens in parallel, but doing too many at once risks degrading the generation quality. Thus, one critical design aspect of dLLMs is the sampling procedure that selects, at each step of the diffusion process, which tokens to replace. Indeed, recent work has found that heuristic strategies such as confidence thresholding lead to both higher quality and token throughput compared to random unmasking. However, such heuristics have downsides: they require manual tuning, and we observe that their performance degrades with larger buffer sizes. In this work, we instead propose to train sampling procedures using reinforcement learning. Specifically, we formalize masked diffusion sampling as a Markov decision process in which the dLLM serves as the environment, and propose a lightweight policy architecture based on a single-layer transformer that maps dLLM token confidences to unmasking decisions. Our experiments show that these trained policies match the performance of state-of-the-art heuristics when combined with semi-autoregressive generation, while outperforming them in the full diffusion setting. We also examine the transferability of these policies, finding that they can generalize to new underlying dLLMs and longer sequence lengths. However, we also observe that their performance degrades when applied to out-of-domain data, and that fine-grained tuning of the accuracy-efficiency trade-off can be challenging with our approach. + oai:arXiv.org:2512.09106v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaobin Shen, George H. Chen + http://creativecommons.org/licenses/by/4.0/ + Metod Jazbec, Theo X. 
Olausson, Louis B\'ethune, Pierre Ablin, Michael Kirchhof, Joao Monterio, Victor Turrisi, Jason Ramapuram, Marco Cuturi - Cabin Layout, Seat Density, and Passenger Segmentation in Air Transport: Implications for Prices, Ancillary Revenues, and Efficiency - https://arxiv.org/abs/2512.08066 - arXiv:2512.08066v1 Announce Type: new -Abstract: This study investigates how the layout and density of seats in aircraft cabins influence the pricing of airline tickets on domestic flights. The analysis is based on microdata from boarding passes linked to face-to-face interviews with passengers, allowing us to relate the price paid to the location on the aircraft seat map, as well as market characteristics and flight operations. Econometric models were estimated using the Post-Double-Selection LASSO (PDS-LASSO) procedure, which selects numerous controls for unobservable factors linked to commercial and operational aspects, thus enabling better identification of the effect of variables such as advance purchase, reason for travel, fuel price, market structure, and load factor, among others. The results suggest that a higher density of seat rows is associated with lower prices, reflecting economies of scale with the increase in aircraft size and gains in operational efficiency. An unexpected result was also obtained: in situations where there was no seat selection fee, passengers with more expensive tickets were often allocated middle seats due to purchasing at short notice, when the side alternatives were no longer available. This behavior helps explain the economic logic behind one of the main ancillary revenues of airlines. In addition to quantitative analysis, the study incorporates an exploratory approach to innovative cabin concepts and their possible effects on density and comfort on board. 
- oai:arXiv.org:2512.08066v1 - eess.SY - cs.SY - econ.GN - q-fin.EC - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Evolving Excellence: Automated Optimization of LLM-based Agents + https://arxiv.org/abs/2512.09108 + arXiv:2512.09108v1 Announce Type: new +Abstract: Agentic AI systems built on large language models (LLMs) offer significant potential for automating complex workflows, from software development to customer support. However, LLM agents often underperform due to suboptimal configurations; poorly tuned prompts, tool descriptions, and parameters that typically require weeks of manual refinement. Existing optimization methods either are too complex for general use or treat components in isolation, missing critical interdependencies. + We present ARTEMIS, a no-code evolutionary optimization platform that jointly optimizes agent configurations through semantically-aware genetic operators. Given only a benchmark script and natural language goals, ARTEMIS automatically discovers configurable components, extracts performance signals from execution logs, and evolves configurations without requiring architectural modifications. + We evaluate ARTEMIS on four representative agent systems: the \emph{ALE Agent} for competitive programming on AtCoder Heuristic Contest, achieving a \textbf{$13.6\%$ improvement} in acceptance rate; the \emph{Mini-SWE Agent} for code optimization on SWE-Perf, with a statistically significant \textbf{10.1\% performance gain}; and the \emph{CrewAI Agent} for cost and mathematical reasoning on Math Odyssey, achieving a statistically significant \textbf{$36.9\%$ reduction} in the number of tokens required for evaluation. We also evaluate the \emph{MathTales-Teacher Agent} powered by a smaller open-source model (Qwen2.5-7B) on GSM8K primary-level mathematics problems, achieving a \textbf{22\% accuracy improvement} and demonstrating that ARTEMIS can optimize agents based on both commercial and local models. 
+ oai:arXiv.org:2512.09108v1 + cs.SE + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Paul Brookes, Vardan Voskanyan, Rafail Giavrimis, Matthew Truscott, Mina Ilieva, Chrystalla Pavlou, Alexandru Staicu, Manal Adham, Will Evers-Hood, Jingzhi Gong, Kejia Zhang, Matvey Fedoseev, Vishal Sharma, Roman Bauer, Zheng Wang, Hema Nair, Wei Jie, Tianhua Xu, Aurora Constantin, Leslie Kanthan, Michail Basios - CapsuleFS A Multi-credential DataCapsule Filesystem - https://arxiv.org/abs/2512.08067 - arXiv:2512.08067v1 Announce Type: new -Abstract: CapsuleFS (CFS) is the first filesystem to integrate multi-credential functionality within a POSIX-compliant framework, utilizing DataCapsule as the storage provider. This innovative system is established based on the Global Data Plane in the area of edge computing. Our comprehensive design and implementation of CFS successfully fulfill the objective of providing a multi-credential Common Access API. The architecture of CFS is methodically segmented into three integral components: Firstly, the DataCapsule server, tasked with the storage, dissemination, and replication of DataCapsules on the edge. Secondly, the middleware, a crucial element running in a Trusted Execution Environment responsible for the enforcement and management of write permissions and requests. Finally, the client component, which manifests as a POSIX-compliant filesystem, is adaptable and operational across many architectures. Experimental evaluations of CFS reveal that, while its read and write performances are comparatively modest, it upholds a high degree of functional correctness. This attribute distinctly positions CFS as a viable candidate for application in real-world software development scenarios. 
The paper also delineates potential future enhancements, aimed at augmenting the practicality of CFS in the landscape of software development. - oai:arXiv.org:2512.08067v1 - cs.DC - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qingyang Hu, Yucheng Huang, Manshi Yang + Semantic Trajectory Generation for Goal-Oriented Spacecraft Rendezvous + https://arxiv.org/abs/2512.09111 + arXiv:2512.09111v1 Announce Type: new +Abstract: Reliable real-time trajectory generation is essential for future autonomous spacecraft. While recent progress in nonconvex guidance and control is paving the way for onboard autonomous trajectory optimization, these methods still rely on extensive expert input (e.g., waypoints, constraints, mission timelines, etc.), which limits the operational scalability in real rendezvous missions. This paper introduces SAGES (Semantic Autonomous Guidance Engine for Space), a trajectory-generation framework that translates natural-language commands into spacecraft trajectories that reflect high-level intent while respecting nonconvex constraints. Experiments in two settings -- fault-tolerant proximity operations with continuous-time constraint enforcement and a free-flying robotic platform -- demonstrate that SAGES reliably produces trajectories aligned with human commands, achieving over 90\% semantic-behavioral consistency across diverse behavior modes. Ultimately, this work marks an initial step toward language-conditioned, constraint-aware spacecraft trajectory generation, enabling operators to interactively guide both safety and behavior through intuitive natural-language commands with reduced expert burden. + oai:arXiv.org:2512.09111v1 + cs.RO + cs.AI + math.OC + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yuji Takubo, Arpit Dwivedi, Sukeerth Ramkumar, Luis A. 
Pabon, Daniele Gammelli, Marco Pavone, Simone D'Amico - CAMO: Causality-Guided Adversarial Multimodal Domain Generalization for Crisis Classification - https://arxiv.org/abs/2512.08071 - arXiv:2512.08071v1 Announce Type: new -Abstract: Crisis classification in social media aims to extract actionable disaster-related information from multimodal posts, which is a crucial task for enhancing situational awareness and facilitating timely emergency responses. However, the wide variation in crisis types makes achieving generalizable performance across unseen disasters a persistent challenge. Existing approaches primarily leverage deep learning to fuse textual and visual cues for crisis classification, achieving numerically plausible results under in-domain settings. However, they exhibit poor generalization across unseen crisis types because they 1. do not disentangle spurious and causal features, resulting in performance degradation under domain shift, and 2. fail to align heterogeneous modality representations within a shared space, which hinders the direct adaptation of established single-modality domain generalization (DG) techniques to the multimodal setting. To address these issues, we introduce a causality-guided multimodal domain generalization (MMDG) framework that combines adversarial disentanglement with unified representation learning for crisis classification. The adversarial objective encourages the model to disentangle and focus on domain-invariant causal features, leading to more generalizable classifications grounded in stable causal mechanisms. The unified representation aligns features from different modalities within a shared latent space, enabling single-modality DG strategies to be seamlessly extended to multimodal learning. Experiments on the different datasets demonstrate that our approach achieves the best performance in unseen disaster scenarios. 
- oai:arXiv.org:2512.08071v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + GimbalDiffusion: Gravity-Aware Camera Control for Video Generation + https://arxiv.org/abs/2512.09112 + arXiv:2512.09112v1 Announce Type: new +Abstract: Recent progress in text-to-video generation has achieved remarkable realism, yet fine-grained control over camera motion and orientation remains elusive. Existing approaches typically encode camera trajectories through relative or ambiguous representations, limiting explicit geometric control. We introduce GimbalDiffusion, a framework that enables camera control grounded in physical-world coordinates, using gravity as a global reference. Instead of describing motion relative to previous frames, our method defines camera trajectories in an absolute coordinate system, allowing precise and interpretable control over camera parameters without requiring an initial reference frame. We leverage panoramic 360-degree videos to construct a wide variety of camera trajectories, well beyond the predominantly straight, forward-facing trajectories seen in conventional video data. To further enhance camera guidance, we introduce null-pitch conditioning, an annotation strategy that reduces the model's reliance on text content when conflicting with camera specifications (e.g., generating grass while the camera points towards the sky). Finally, we establish a benchmark for camera-aware video generation by rebalancing SpatialVID-HQ for comprehensive evaluation under wide camera pitch variation. Together, these contributions advance the controllability and robustness of text-to-video models, enabling precise, gravity-aligned camera manipulation within generative frameworks. 
+ oai:arXiv.org:2512.09112v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Pingchuan Ma, Chengshuai Zhao, Bohan Jiang, Saketh Vishnubhatla, Ujun Jeong, Alimohammad Beigi, Adrienne Raglin, Huan Liu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Fr\'ed\'eric Fortier-Chouinard, Yannick Hold-Geoffroy, Valentin Deschaintre, Matheus Gadelha, Jean-Fran\c{c}ois Lalonde - Detecting Privileged Documents by Ranking Connected Network Entities - https://arxiv.org/abs/2512.08073 - arXiv:2512.08073v1 Announce Type: new -Abstract: This paper presents a link analysis approach for identifying privileged documents by constructing a network of human entities derived from email header metadata. Entities are classified as either counsel or non-counsel based on a predefined list of known legal professionals. The core assumption is that individuals with frequent interactions with lawyers are more likely to participate in privileged communications. To quantify this likelihood, an algorithm assigns a score to each entity within the network. By utilizing both entity scores and the strength of their connections, the method enhances the identification of privileged documents. Experimental results demonstrate the algorithm's effectiveness in ranking legal entities for privileged document detection. - oai:arXiv.org:2512.08073v1 - cs.IR - Wed, 10 Dec 2025 00:00:00 -0500 + AI TIPS 2.0: A Comprehensive Framework for Operationalizing AI Governance + https://arxiv.org/abs/2512.09114 + arXiv:2512.09114v1 Announce Type: new +Abstract: The deployment of AI systems faces three critical governance challenges that current frameworks fail to adequately address. 
First, organizations struggle with inadequate risk assessment at the use case level, exemplified by the Humana class action lawsuit and other high-impact cases where an AI system deployed to production exhibited both significant bias and high error rates, resulting in improper healthcare claim denials. Each AI use case presents unique risk profiles requiring tailored governance, yet most frameworks provide one-size-fits-all guidance. Second, existing frameworks like ISO 42001 and NIST AI RMF remain at high conceptual levels, offering principles without actionable controls, leaving practitioners unable to translate governance requirements into specific technical implementations. Third, organizations lack mechanisms for operationalizing governance at scale, with no systematic approach to embed trustworthy AI practices throughout the development lifecycle, measure compliance quantitatively, or provide role-appropriate visibility from boards to data scientists. We present AI TIPS, Artificial Intelligence Trust-Integrated Pillars for Sustainability 2.0, an update to the comprehensive operational framework developed in 2019, four years before NIST's AI Risk Management Framework, that directly addresses these challenges. + oai:arXiv.org:2512.09114v1 + cs.AI + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ Pamela Gupta - Identification of Deforestation Areas in the Amazon Rainforest Using Change Detection Models - https://arxiv.org/abs/2512.08075 - arXiv:2512.08075v1 Announce Type: new -Abstract: The preservation of the Amazon Rainforest is one of the global priorities in combating climate change, protecting biodiversity, and safeguarding indigenous cultures. 
The Satellite-based Monitoring Project of Deforestation in the Brazilian Legal Amazon (PRODES), a project of the National Institute for Space Research (INPE), stands out as a fundamental initiative in this effort, annually monitoring deforested areas not only in the Amazon but also in other Brazilian biomes. Recently, machine learning models have been developed using PRODES data to support this effort through the comparative analysis of multitemporal satellite images, treating deforestation detection as a change detection problem. However, existing approaches present significant limitations: models evaluated in the literature still show unsatisfactory effectiveness, many do not incorporate modern architectures, such as those based on self-attention mechanisms, and there is a lack of methodological standardization that allows direct comparisons between different studies. In this work, we address these gaps by evaluating various change detection models in a unified dataset, including fully convolutional models and networks incorporating self-attention mechanisms based on Transformers. We investigate the impact of different pre- and post-processing techniques, such as filtering deforested areas predicted by the models based on the size of connected components, texture replacement, and image enhancements; we demonstrate that such approaches can significantly improve individual model effectiveness. Additionally, we test different strategies for combining the evaluated models to achieve results superior to those obtained individually, reaching an F1-score of 80.41%, a value comparable to other recent works in the literature. - oai:arXiv.org:2512.08075v1 + SuperF: Neural Implicit Fields for Multi-Image Super-Resolution + https://arxiv.org/abs/2512.09115 + arXiv:2512.09115v1 Announce Type: new +Abstract: High-resolution imagery is often hindered by limitations in sensor technology, atmospheric conditions, and costs. 
Such challenges occur in satellite remote sensing, but also with handheld cameras, such as our smartphones. Hence, super-resolution aims to enhance the image resolution algorithmically. Since single-image super-resolution requires solving an inverse problem, such methods must exploit strong priors, e.g. learned from high-resolution training data, or be constrained by auxiliary data, e.g. by a high-resolution guide from another modality. While qualitatively pleasing, such approaches often lead to "hallucinated" structures that do not match reality. In contrast, multi-image super-resolution (MISR) aims to improve the (optical) resolution by constraining the super-resolution process with multiple views taken with sub-pixel shifts. Here, we propose SuperF, a test-time optimization approach for MISR that leverages coordinate-based neural networks, also called neural fields. Their ability to represent continuous signals with an implicit neural representation (INR) makes them an ideal fit for the MISR task. + The key characteristic of our approach is to share an INR for multiple shifted low-resolution frames and to jointly optimize the frame alignment with the INR. Our approach advances related INR baselines, adopted from burst fusion for layer separation, by directly parameterizing the sub-pixel alignment as optimizable affine transformation parameters and by optimizing via a super-sampled coordinate grid that corresponds to the output resolution. Our experiments yield compelling results on simulated bursts of satellite imagery and ground-level images from handheld cameras, with upsampling factors of up to 8. A key advantage of SuperF is that this approach does not rely on any high-resolution training data. 
+ oai:arXiv.org:2512.09115v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Christian Massao Konishi, Helio Pedrini + Sander Riis{\o}en Jyhne, Christian Igel, Morten Goodwin, Per-Arne Andersen, Serge Belongie, Nico Lang - Mitigation of Datacenter Demand Ramping and Fluctuation using Hybrid ESS and Supercapacitor - https://arxiv.org/abs/2512.08076 - arXiv:2512.08076v1 Announce Type: new -Abstract: This paper proposes a hybrid energy storage system (HESS)-based control framework that enables comprehensive power smoothing for hyperscale AI datacenters with large load variations. Datacenters impose severe ramping and fluctuation-induced stresses on the grid frequency and voltage stability. To mitigate such disturbances, the proposed HESS integrates a battery energy storage system (BESS) and a supercapacitor (SC) through coordinated multi-timescale control. A high-pass filter (HPF) separates the datacenter demand into slow and fast components, allocating them respectively to the ESS via a leaky-integral controller and to the SC via a phase-lead proportional-derivative controller enhanced with feedforward and ramp-tracking compensation. Adaptive weighting and repetitive control mechanisms further improve transient and periodic responses. Case studies verify that the proposed method effectively suppresses both ramping and fluctuations, stabilizes the system frequency, and maintains sustainable state-of-charge (SoC) trajectories for both ESS and SC under prolonged, stochastic training cycles. - oai:arXiv.org:2512.08076v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + High Order Numerical Methods Preserving Invariant Domain for Hyperbolic and Related Systems + https://arxiv.org/abs/2512.09116 + arXiv:2512.09116v1 Announce Type: new +Abstract: Admissible states in hyperbolic systems and related equations often form a convex invariant domain. 
Numerical violations of this domain can lead to loss of hyperbolicity, resulting in ill-posedness and severe numerical instabilities. It is therefore crucial for numerical schemes to preserve the invariant domain to ensure both physically meaningful solutions and robust computations. For complex systems, constructing invariant-domain-preserving (IDP) schemes is highly nontrivial and particularly challenging for high-order accurate methods. This paper presents a comprehensive survey of IDP schemes for hyperbolic and related systems, with a focus on the most popular approaches for constructing provable IDP schemes. We first give a systematic review of the fundamental approaches for establishing the IDP property in first-order accurate schemes, covering finite difference, finite volume, finite element, and residual distribution methods. Then we focus on two widely used and actively developed classes of high-order IDP schemes as well as their recent developments, most of which have emerged in the past decade. The first class of methods seeks an intrinsic weak IDP property in high-order schemes and then designs polynomial limiters to enforce a strong IDP property at the points of interest. This generic approach applies to high-order finite volume and discontinuous Galerkin schemes. The second class is based on flux limiting approaches, which originated from the flux-corrected transport method and can be adapted to a broader range of spatial discretizations, including finite difference and continuous finite element methods. In this survey, we elucidate the main ideas in the construction of IDP schemes and provide new perspectives and insights, with extensive examples and numerical experiments in gas dynamics and magnetohydrodynamics. 
+ oai:arXiv.org:2512.09116v1 + math.NA + astro-ph.IM + cs.NA + physics.comp-ph + physics.flu-dyn + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Min-Seung Ko, Jae Woong Shim, Hao Zhu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kailiang Wu, Xiangxiong Zhang, Chi-Wang Shu - Unveiling Latent Knowledge in Chemistry Language Models through Sparse Autoencoders - https://arxiv.org/abs/2512.08077 - arXiv:2512.08077v1 Announce Type: new -Abstract: Since the advent of machine learning, interpretability has remained a persistent challenge, becoming increasingly urgent as generative models support high-stakes applications in drug and material discovery. Recent advances in large language model (LLM) architectures have yielded chemistry language models (CLMs) with impressive capabilities in molecular property prediction and molecular generation. However, how these models internally represent chemical knowledge remains poorly understood. In this work, we extend sparse autoencoder techniques to uncover and examine interpretable features within CLMs. Applying our methodology to the Foundation Models for Materials (FM4M) SMI-TED chemistry foundation model, we extract semantically meaningful latent features and analyse their activation patterns across diverse molecular datasets. Our findings reveal that these models encode a rich landscape of chemical concepts. We identify correlations between specific latent features and distinct domains of chemical knowledge, including structural motifs, physicochemical properties, and pharmacological drug classes. Our approach provides a generalisable framework for uncovering latent knowledge in chemistry-focused AI systems. This work has implications for both foundational understanding and practical deployment; with the potential to accelerate computational chemistry research. 
- oai:arXiv.org:2512.08077v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem + https://arxiv.org/abs/2512.09117 + arXiv:2512.09117v1 Announce Type: new +Abstract: This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W , in order to argue that LLMs do not solve but circumvent the symbol grounding problem. + oai:arXiv.org:2512.09117v1 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jaron Cohen, Alexander G. Hasson, Sara Tanovic - - - A Comparative Study of Retrieval Methods in Azure AI Search - https://arxiv.org/abs/2512.08078 - arXiv:2512.08078v1 Announce Type: new -Abstract: Increasingly, attorneys are interested in moving beyond keyword and semantic search to improve the efficiency of how they find key information during a document review task. Large language models (LLMs) are now seen as tools that attorneys can use to ask natural language questions of their data during document review to receive accurate and concise answers. This study evaluates retrieval strategies within Microsoft Azure's Retrieval-Augmented Generation (RAG) framework to identify effective approaches for Early Case Assessment (ECA) in eDiscovery. During ECA, legal teams analyze data at the outset of a matter to gain a general understanding of the data and attempt to determine key facts and risks before beginning full-scale review. In this paper, we compare the performance of Azure AI Search's keyword, semantic, vector, hybrid, and hybrid-semantic retrieval methods. We then present the accuracy, relevance, and consistency of each method's AI-generated responses. Legal practitioners can use the results of this study to enhance how they select RAG configurations in the future. 
- oai:arXiv.org:2512.08078v1 - cs.IR - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Qiang Mao, Han Qin, Robert Neary, Charles Wang, Fusheng Wei, Jianping Zhang, Nathaniel Huber-Fliflet + Luciano Floridi, Yiyang Jia, Fernando Tohm\'e - Leveraging Machine Learning and Large Language Models for Automated Image Clustering and Description in Legal Discovery - https://arxiv.org/abs/2512.08079 - arXiv:2512.08079v1 Announce Type: new -Abstract: The rapid increase in digital image creation and retention presents substantial challenges during legal discovery, digital archive, and content management. Corporations and legal teams must organize, analyze, and extract meaningful insights from large image collections under strict time pressures, making manual review impractical and costly. These demands have intensified interest in automated methods that can efficiently organize and describe large-scale image datasets. This paper presents a systematic investigation of automated cluster description generation through the integration of image clustering, image captioning, and large language models (LLMs). We apply K-means clustering to group images into 20 visually coherent clusters and generate base captions using the Azure AI Vision API. We then evaluate three critical dimensions of the cluster description process: (1) image sampling strategies, comparing random, centroid-based, stratified, hybrid, and density-based sampling against using all cluster images; (2) prompting techniques, contrasting standard prompting with chain-of-thought prompting; and (3) description generation methods, comparing LLM-based generation with traditional TF-IDF and template-based approaches. We assess description quality using semantic similarity and coverage metrics. 
Results show that strategic sampling with 20 images per cluster performs comparably to exhaustive inclusion while significantly reducing computational cost, with only stratified sampling showing modest degradation. LLM-based methods consistently outperform TF-IDF baselines, and standard prompts outperform chain-of-thought prompts for this task. These findings provide practical guidance for deploying scalable, accurate cluster description systems that support high-volume workflows in legal discovery and other domains requiring automated organization of large image collections. - oai:arXiv.org:2512.08079v1 - cs.IR - Wed, 10 Dec 2025 00:00:00 -0500 + A Hybrid Neural Network-Finite Element Method for the Viscous-Plastic Sea-Ice Model + https://arxiv.org/abs/2512.09118 + arXiv:2512.09118v1 Announce Type: new +Abstract: We present an efficient hybrid Neural Network-Finite Element Method (NN-FEM) for solving the viscous-plastic (VP) sea-ice model. The VP model is widely used in climate simulations to represent large-scale sea-ice dynamics. However, the strong nonlinearity introduced by the material law makes VP solvers computationally expensive, with the cost per degree of freedom increasing rapidly under mesh refinement. High spatial resolution is particularly required to capture narrow deformation bands known as linear kinematic features in viscous-plastic models. To improve computational efficiency in simulating such fine-scale deformation features, we propose to enrich coarse-mesh finite element approximations with fine-scale corrections predicted by neural networks trained with high-resolution simulations. The neural network operates locally on small patches of grid elements, which is efficient due to its relatively small size and parallel applicability across grid patches. 
An advantage of this local approach is that it generalizes well to different right-hand sides and computational domains, since the network operates on small subregions rather than learning details tied to a specific choice of boundary conditions, forcing, or geometry. The numerical examples quantify the runtime and evaluate the error for this hybrid approach with respect to the simulation of sea-ice deformations. Applying the learned network correction enables coarser-grid simulations to achieve qualitatively similar accuracy at approximately 11 times lower computational cost relative to the high-resolution reference simulations. Moreover, the learned correction accelerates the Newton solver by up to 10% compared to runs without the correction at the same mesh resolution. + oai:arXiv.org:2512.09118v1 + math.NA + cs.NA + physics.comp-ph + physics.flu-dyn + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Qiang Mao, Fusheng Wei, Robert Neary, Charles Wang, Han Qin, Jianping Zhang, Nathaniel Huber-Fliflet + http://creativecommons.org/licenses/by-sa/4.0/ + Nils Margenberg, Carolin Mehlmann - Short-Context Dominance: How Much Local Context Natural Language Actually Needs? - https://arxiv.org/abs/2512.08082 - arXiv:2512.08082v1 Announce Type: new -Abstract: We investigate the short-context dominance hypothesis: that for most sequences, a small local prefix suffices to predict their next tokens. Using large language models as statistical oracles, we measure the minimum context length (MCL) needed to reproduce accurate full-context predictions across datasets with sequences of varying lengths. For sequences with 1-7k tokens from long-context documents, we consistently find that 75-80% require only the last 96 tokens at most. Given the dominance of short-context tokens, we then ask whether it is possible to detect challenging long-context sequences for which a short local prefix does not suffice for prediction. 
We introduce a practical proxy to MCL, called Distributionally Aware MCL (DaMCL), that does not require knowledge of the actual next-token and is compatible with sampling strategies beyond greedy decoding. Our experiments validate that simple thresholding of the metric defining DaMCL achieves high performance in detecting long vs. short context sequences. Finally, to counter the bias that short-context dominance induces in LLM output distributions, we develop an intuitive decoding algorithm that leverages our detector to identify and boost tokens that are long-range-relevant. Across Q&A tasks and model architectures, we confirm that mitigating the bias improves performance. - oai:arXiv.org:2512.08082v1 + Knowledge-Guided Large Language Model for Automatic Pediatric Dental Record Understanding and Safe Antibiotic Recommendation + https://arxiv.org/abs/2512.09127 + arXiv:2512.09127v1 Announce Type: new +Abstract: Accurate interpretation of pediatric dental clinical records and safe antibiotic prescribing remain persistent challenges in dental informatics. Traditional rule-based clinical decision support systems struggle with unstructured dental narratives, incomplete radiographic descriptions, and complex safety constraints. To address these limitations, this study proposes a Knowledge-Guided Large Language Model (KG-LLM) that integrates a pediatric dental knowledge graph, retrieval-augmented generation (RAG), and a multi-stage safety validation pipeline for evidence-grounded antibiotic recommendation. The framework first employs a clinical NER/RE module to extract structured entities and relations from dental notes and radiology reports. Relevant guidelines, drug-safety rules, and analogous historical cases are subsequently retrieved from the knowledge graph and supplied to the LLM for diagnostic summarization and dose-drug-duration prediction. 
Safety assurance is achieved through a dual-layer validation mechanism combining deterministic rule checking with a learned classifier for detecting allergies, contraindications, and dosing errors. Experiments on 32,000 de-identified pediatric dental visit records demonstrate the effectiveness of the proposed approach. Compared with a domain-adapted Llama-2 clinical baseline, KG-LLM improves record-understanding performance (F1: 0.914 vs. 0.867), drug-dose-duration accuracy (Top-1: 0.782 vs. 0.716), and reduces unsafe antibiotic suggestions by 50%. Additional evaluation across summary quality, recommendation accuracy, and global safety scores further confirms the robustness of the system. Ablation analyses indicate that the knowledge graph, RAG, and safety modules each contribute substantially to clinical reliability and interpretability. + oai:arXiv.org:2512.09127v1 cs.CL cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Vala Vakilian, Zimeng Wang, Ankit Singh Rawat, Christos Thrampoulidis + Zihan Han, Junyan Ge, Caifeng Li - Exploiting the Randomness of Large Language Models (LLM) in Text Classification Tasks: Locating Privileged Documents in Legal Matters - https://arxiv.org/abs/2512.08083 - arXiv:2512.08083v1 Announce Type: new -Abstract: In legal matters, text classification models are most often used to filter through large datasets in search of documents that meet certain pre-selected criteria like relevance to a certain subject matter, such as legally privileged communications and attorney-directed documents. In this context, large language models have demonstrated strong performance. 
This paper presents an empirical study investigating the role of randomness in LLM-based classification for attorney-client privileged document detection, focusing on four key dimensions: (1) the effectiveness of LLMs in identifying legally privileged documents, (2) the influence of randomness control parameters on classification outputs, (3) their impact on overall classification performance, and (4) a methodology for leveraging randomness to enhance accuracy. Experimental results showed that LLMs can identify privileged documents effectively, randomness control parameters have minimal impact on classification performance, and importantly, our developed methodology for leveraging randomness can have a significant impact on improving accuracy. Notably, this methodology that leverages randomness could also enhance a corporation's confidence in an LLM's output when incorporated into its sanctions-compliance processes. As organizations increasingly rely on LLMs to augment compliance workflows, reducing output variability helps build internal and regulatory confidence in LLM-derived sanctions-screening decisions. - oai:arXiv.org:2512.08083v1 - cs.IR - Wed, 10 Dec 2025 00:00:00 -0500 + Integrated Pipeline for Coronary Angiography With Automated Lesion Profiling, Virtual Stenting, and 100-Vessel FFR Validation + https://arxiv.org/abs/2512.09134 + arXiv:2512.09134v1 Announce Type: new +Abstract: Coronary angiography is the main tool for assessing coronary artery disease, but visual grading of stenosis is variable and only moderately related to ischaemia. Wire based fractional flow reserve (FFR) improves lesion selection but is not used systematically. Angiography derived indices such as quantitative flow ratio (QFR) offer wire free physiology, yet many tools are workflow intensive and separate from automated anatomy analysis and virtual PCI planning. 
We developed AngioAI-QFR, an end-to-end angiography-only pipeline combining deep learning stenosis detection, lumen segmentation, centreline and diameter extraction, per-millimetre Relative Flow Capacity profiling, and virtual stenting with automatic recomputation of angiography-derived QFR. The system was evaluated in 100 consecutive vessels with invasive FFR as reference. Primary endpoints were agreement with FFR (correlation, mean absolute error) and diagnostic performance for FFR <= 0.80. On held-out frames, stenosis detection achieved precision 0.97 and lumen segmentation Dice 0.78. Across 100 vessels, AngioAI-QFR correlated strongly with FFR (r = 0.89, MAE 0.045). The AUC for detecting FFR <= 0.80 was 0.93, with sensitivity 0.88 and specificity 0.86. The pipeline completed fully automatically in 93 percent of vessels, with median time to result 41 s. RFC profiling distinguished focal from diffuse capacity loss, and virtual stenting predicted larger QFR gain in focal than in diffuse disease. AngioAI-QFR provides a practical, near-real-time pipeline that unifies computer vision, functional profiling, and virtual PCI with automated angiography-derived physiology. + oai:arXiv.org:2512.09134v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Georgy Kopanitsa, Oleg Metsker, Alexey Yakovlev - Adaptation of Embedding Models to Financial Filings via LLM Distillation - https://arxiv.org/abs/2512.08088 - arXiv:2512.08088v1 Announce Type: new -Abstract: Despite advances in generative large language models (LLMs), practical application of specialized conversational AI agents remains constrained by computation costs, latency requirements, and the need for precise domain-specific relevance measures. 
While existing embedding models address the first two constraints, they underperform on information retrieval in specialized domains like finance. This paper introduces a scalable pipeline that trains specialized models from an unlabeled corpus using a general purpose retrieval embedding model as foundation. Our method yields an average of 27.7% improvement in MRR$\texttt{@}$5, 44.6% improvement in mean DCG$\texttt{@}$5 across 14 financial filing types measured over 21,800 query-document pairs, and improved NDCG on 3 of 4 document classes in FinanceBench. We adapt retrieval embeddings (bi-encoder) for RAG, not LLM generators, using LLM-judged relevance to distill domain knowledge into a compact retriever. There are prior works which pair synthetically generated queries with real passages to directly fine-tune the retrieval model. Our pipeline differs from these by introducing interaction between student and teacher models that interleaves retrieval-based mining of hard positive/negative examples from the unlabeled corpus with iterative retraining of the student model's weights using these examples. Each retrieval iteration uses the refined student model to mine the corpus for progressively harder training examples for the subsequent training iteration. The methodology provides a cost-effective solution to bridging the gap between general-purpose models and specialized domains without requiring labor-intensive human annotation. - oai:arXiv.org:2512.08088v1 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Energy-Based Modeling and Structure-Preserving Discretization of Physical Systems + https://arxiv.org/abs/2512.09138 + arXiv:2512.09138v1 Announce Type: new +Abstract: This paper develops a comprehensive mathematical framework for energy-based modeling of physical systems, with particular emphasis on preserving fundamental structural properties throughout the modeling and discretization process. 
The approach provides systematic methods for handling challenging system classes including high-index differential-algebraic equations and nonlinear multiphysics problems. Theoretical foundations are established for regularizing constrained systems while maintaining physical consistency, analyzing stability properties, and constructing numerical discretizations that inherit the energy dissipation structure of the continuous models. The versatility and practical utility of the framework are demonstrated through applications across multiple domains including poroelastic media, nonlinear circuits, constrained mechanics, and phase-field models. The results ensure that essential physical properties such as energy balance and dissipation are maintained from the continuous formulation through to numerical implementation, providing robust foundations for computational physics and engineering applications. + oai:arXiv.org:2512.09138v1 + math.NA + cs.NA + math.DS + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Eliot Brenner, Dominic Seyler, Manjunath Hegde, Andrei Simion, Koustuv Dasgupta, Bing Xiang + http://creativecommons.org/licenses/by-nc-sa/4.0/ + M. H. M Rashid - NysX: An Accurate and Energy-Efficient FPGA Accelerator for Hyperdimensional Graph Classification at the Edge - https://arxiv.org/abs/2512.08089 - arXiv:2512.08089v1 Announce Type: new -Abstract: Real-time, energy-efficient inference on edge devices is essential for graph classification across a range of applications. Hyperdimensional Computing (HDC) is a brain-inspired computing paradigm that encodes input features into low-precision, high-dimensional vectors with simple element-wise operations, making it well-suited for resource-constrained edge platforms. Recent work enhances HDC accuracy for graph classification via Nystr\"om kernel approximations. 
Edge acceleration of such methods faces several challenges: (i) redundancy among (landmark) samples selected via uniform sampling, (ii) storing the Nystr\"om projection matrix under limited on-chip memory, (iii) expensive, contention-prone codebook lookups, and (iv) load imbalance due to irregular sparsity in SpMV. To address these challenges, we propose NysX, the first end-to-end FPGA accelerator for Nystr\"om-based HDC graph classification at the edge. NysX integrates four key optimizations: (i) a hybrid landmark selection strategy combining uniform sampling with determinantal point processes (DPPs) to reduce redundancy while improving accuracy; (ii) a streaming architecture for Nystr\"om projection matrix maximizing external memory bandwidth utilization; (iii) a minimal-perfect-hash lookup engine enabling $O(1)$ key-to-index mapping with low on-chip memory overhead; and (iv) sparsity-aware SpMV engines with static load balancing. Together, these innovations enable real-time, energy-efficient inference on resource-constrained platforms. Implemented on an AMD Zynq UltraScale+ (ZCU104) FPGA, NysX achieves $6.85\times$ ($4.32\times$) speedup and $169\times$ ($314\times$) energy efficiency gains over optimized CPU (GPU) baselines, while improving classification accuracy by $3.4\%$ on average across TUDataset benchmarks, a widely used standard for graph classification. - oai:arXiv.org:2512.08089v1 - cs.AR - Wed, 10 Dec 2025 00:00:00 -0500 + SDialog: A Python Toolkit for End-to-End Agent Building, User Simulation, Dialog Generation, and Evaluation + https://arxiv.org/abs/2512.09142 + arXiv:2512.09142v1 Announce Type: new +Abstract: We present SDialog, an MIT-licensed open-source Python toolkit that unifies dialog generation, evaluation and mechanistic interpretability into a single end-to-end framework for building and analyzing LLM-based conversational agents. 
Built around a standardized \texttt{Dialog} representation, SDialog provides: (1) persona-driven multi-agent simulation with composable orchestration for controlled, synthetic dialog generation, (2) comprehensive evaluation combining linguistic metrics, LLM-as-a-judge and functional correctness validators, (3) mechanistic interpretability tools for activation inspection and steering via feature ablation and induction, and (4) audio generation with full acoustic simulation including 3D room modeling and microphone effects. The toolkit integrates with all major LLM backends, enabling mixed-backend experiments under a unified API. By coupling generation, evaluation, and interpretability in a dialog-centric architecture, SDialog enables researchers to build, benchmark and understand conversational systems more systematically. + oai:arXiv.org:2512.09142v1 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jebacyril Arockiaraj, Dhruv Parikh, Viktor Prasanna + Sergio Burdisso, S\'everin Baroudi, Yanis Labrak, David Grunert, Pawel Cyrta, Yiyang Chen, Srikanth Madikeri, Esa\'u Villatoro-Tello, Thomas Schaaf, Ricard Marxer, Petr Motlicek - Complexity of One-Dimensional ReLU DNNs - https://arxiv.org/abs/2512.08091 - arXiv:2512.08091v1 Announce Type: new -Abstract: We study the expressivity of one-dimensional (1D) ReLU deep neural networks through the lens of their linear regions. For randomly initialized, fully connected 1D ReLU networks (He scaling with nonzero bias) in the infinite-width limit, we prove that the expected number of linear regions grows as $\sum_{i = 1}^L n_i + \mathop{{o}}\left(\sum_{i = 1}^L{n_i}\right) + 1$, where $n_\ell$ denotes the number of neurons in the $\ell$-th hidden layer. We also propose a function-adaptive notion of sparsity that compares the expected regions used by the network to the minimal number needed to approximate a target within a fixed tolerance. 
- oai:arXiv.org:2512.08091v1 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Detecting Hallucinations in Graph Retrieval-Augmented Generation via Attention Patterns and Semantic Alignment + https://arxiv.org/abs/2512.09148 + arXiv:2512.09148v1 Announce Type: new +Abstract: Graph-based Retrieval-Augmented Generation (GraphRAG) enhances Large Language Models (LLMs) by incorporating external knowledge from linearized subgraphs retrieved from knowledge graphs. However, LLMs struggle to interpret the relational and topological information in these inputs, resulting in hallucinations that are inconsistent with the retrieved knowledge. To analyze how LLMs attend to and retain structured knowledge during generation, we propose two lightweight interpretability metrics: Path Reliance Degree (PRD), which measures over-reliance on shortest-path triples, and Semantic Alignment Score (SAS), which assesses how well the model's internal representations align with the retrieved knowledge. Through empirical analysis on a knowledge-based QA task, we identify failure patterns associated with over-reliance on salient paths and weak semantic grounding, as indicated by high PRD and low SAS scores. We further develop a lightweight post-hoc hallucination detector, Graph Grounding and Alignment (GGA), which outperforms strong semantic and confidence-based baselines across AUC and F1. By grounding hallucination analysis in mechanistic interpretability, our work offers insights into how structural limitations in LLMs contribute to hallucinations, informing the design of more reliable GraphRAG systems in the future. + oai:arXiv.org:2512.09148v1 + cs.CL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jonathan Kogan, Hayden Jananthan, Jeremy Kepner + Shanghao Li, Jinda Han, Yibo Wang, Yuanjie Zhu, Zihe Song, Langzhou He, Kenan Kamel A Alghythee, Philip S. 
Yu - Training LLMs for Honesty via Confessions - https://arxiv.org/abs/2512.08093 - arXiv:2512.08093v1 Announce Type: new -Abstract: Large language models (LLMs) can be dishonest when reporting on their actions and beliefs -- for example, they may overstate their confidence in factual claims or cover up evidence of covert actions. Such dishonesty may arise due to the effects of reinforcement learning (RL), where challenges with reward shaping can result in a training process that inadvertently incentivizes the model to lie or misrepresent its actions. - In this work we propose a method for eliciting an honest expression of an LLM's shortcomings via a self-reported *confession*. A confession is an output, provided upon request after a model's original answer, that is meant to serve as a full account of the model's compliance with the letter and spirit of its policies and instructions. The reward assigned to a confession during training is solely based on its honesty, and does not impact positively or negatively the main answer's reward. As long as the "path of least resistance" for maximizing confession reward is to surface misbehavior rather than covering it up, this incentivizes models to be honest in their confessions. Our findings provide some justification for this empirical assumption, especially in the case of egregious model misbehavior. - To demonstrate the viability of our approach, we train GPT-5-Thinking to produce confessions, and we evaluate its honesty in out-of-distribution scenarios measuring hallucination, instruction following, scheming, and reward hacking. We find that when the model lies or omits shortcomings in its "main" answer, it often confesses to these behaviors honestly, and this confession honesty modestly improves with training. Confessions can enable a number of inference-time interventions including monitoring, rejection sampling, and surfacing issues to the user. 
- oai:arXiv.org:2512.08093v1 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Manas Joglekar, Jeremy Chen, Gabriel Wu, Jason Yosinski, Jasmine Wang, Boaz Barak, Amelia Glaese + MindShift: Analyzing Language Models' Reactions to Psychological Prompts + https://arxiv.org/abs/2512.09149 + arXiv:2512.09149v1 Announce Type: new +Abstract: Large language models (LLMs) hold the potential to absorb and reflect personality traits and attitudes specified by users. In our study, we investigated this potential using robust psychometric measures. We adapted the most studied test in psychological literature, namely Minnesota Multiphasic Personality Inventory (MMPI) and examined LLMs' behavior to identify traits. To assess the sensitivity of LLMs' prompts and psychological biases we created personality-oriented prompts, crafting a detailed set of personas that vary in trait intensity. This enables us to measure how well LLMs follow these roles. Our study introduces MindShift, a benchmark for evaluating LLMs' psychological adaptability. The results highlight a consistent improvement in LLMs' role perception, attributed to advancements in training datasets and alignment techniques. Additionally, we observe significant differences in responses to psychometric assessments across different model types and families, suggesting variability in their ability to emulate human-like personality traits. MindShift prompts and code for LLM evaluation will be publicly available. 
+ oai:arXiv.org:2512.09149v1 + cs.CL cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Manas Joglekar, Jeremy Chen, Gabriel Wu, Jason Yosinski, Jasmine Wang, Boaz Barak, Amelia Glaese + Anton Vasiliuk, Irina Abdullaeva, Polina Druzhinina, Anton Razzhigaev, Andrey Kuznetsov - Segment, Embed, and Align: A Universal Recipe for Aligning Subtitles to Signing - https://arxiv.org/abs/2512.08094 - arXiv:2512.08094v1 Announce Type: new -Abstract: The goal of this work is to develop a universal approach for aligning subtitles (i.e., spoken language text with corresponding timestamps) to continuous sign language videos. Prior approaches typically rely on end-to-end training tied to a specific language or dataset, which limits their generality. In contrast, our method Segment, Embed, and Align (SEA) provides a single framework that works across multiple languages and domains. SEA leverages two pretrained models: the first to segment a video frame sequence into individual signs and the second to embed the video clip of each sign into a shared latent space with text. Alignment is subsequently performed with a lightweight dynamic programming procedure that runs efficiently on CPUs within a minute, even for hour-long episodes. SEA is flexible and can adapt to a wide range of scenarios, utilizing resources from small lexicons to large continuous corpora. Experiments on four sign language datasets demonstrate state-of-the-art alignment performance, highlighting the potential of SEA to generate high-quality parallel data for advancing sign language processing. SEA's code and models are openly available. 
- oai:arXiv.org:2512.08094v1 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Exposing Vulnerabilities in Counterfeit Prevention Systems Utilizing Physically Unclonable Surface Features + https://arxiv.org/abs/2512.09150 + arXiv:2512.09150v1 Announce Type: new +Abstract: Counterfeit products pose significant risks to public health and safety through infiltrating untrusted supply chains. Among numerous anti-counterfeiting techniques, leveraging inherent, unclonable microscopic irregularities of paper surfaces is an accurate and cost-effective solution. Prior work of this approach has focused on enabling ubiquitous acquisition of these physically unclonable features (PUFs). However, we will show that existing authentication methods relying on paper surface PUFs may be vulnerable to adversaries, resulting in a gap between technological feasibility and secure real-world deployment. This gap is investigated through formalizing an operational framework for paper-PUF-based authentication. Informed by this framework, we reveal system-level vulnerabilities across both physical and digital domains, designing physical denial-of-service and digital forgery attacks to disrupt proper authentication. The effectiveness of the designed attacks underscores the strong need for security countermeasures for reliable and resilient authentication based on paper PUFs. The proposed framework further facilitates a comprehensive, stage-by-stage security analysis, guiding the design of future counterfeit prevention systems. This analysis delves into potential attack strategies, offering a foundational understanding of how various system components, such as physical features and verification processes, might be exploited by adversaries. 
+ oai:arXiv.org:2512.09150v1 + cs.CR + eess.SP + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zifan Jiang, Youngjoon Jang, Liliane Momeni, G\"ul Varol, Sarah Ebling, Andrew Zisserman + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Anirudh Nakra, Nayeeb Rashid, Chau-Wai Wong, Min Wu - Selling Privacy in Blockchain Transactions - https://arxiv.org/abs/2512.08096 - arXiv:2512.08096v1 Announce Type: new -Abstract: We study methods to enhance privacy in blockchain transactions from an economic angle. We consider mechanisms for privacy-aware users whose utility depends not only on the outcome of the mechanism but also negatively on the exposure of their economic preferences. Specifically, we study two auction-theoretic settings with privacy-aware users. First, we analyze an order flow auction, where a user auctions off to specialized agents, called searchers, the right to execute her transaction while maintaining a degree of privacy. We examine how the degree of privacy affects the revenue of the auction and, broadly, the net utility of the privacy-aware user. In this new setting, we describe the optimal auction, which is a sealed-bid auction. Subsequently, we analyze a variant of a Dutch auction in which the user gradually decreases the price and the degree of privacy until the transaction is sold. We compare the revenue of this auction to that of the optimal one as a function of the number of communication rounds. Then, we introduce a two-sided market - a privacy marketplace - with multiple users selling their transactions under their privacy preferences to multiple searchers. We propose a posted-price mechanism for the two-sided market that guarantees constant approximation of the optimal social welfare while maintaining incentive compatibility (from both sides of the market) and budget balance. 
This work builds on the emerging line of research that attempts to improve the performance of economic mechanisms by appending cryptographic primitives to them. - oai:arXiv.org:2512.08096v1 - cs.GT - Wed, 10 Dec 2025 00:00:00 -0500 + PILLTOP: Multi-Material Topology Optimization of Polypills for Prescribed Drug-Release Kinetics + https://arxiv.org/abs/2512.09154 + arXiv:2512.09154v1 Announce Type: new +Abstract: Polypills are single oral dosage forms that combine multiple active pharmaceutical ingredients and excipients, enabling fixed-dose combination therapies, coordinated multi-phase release, and precise customization of patient-specific treatment protocols. Recent advances in additive manufacturing facilitate the physical realization of multi-material excipients, offering superior customization of target release profiles. However, polypill formulations remain tuned by ad hoc parameter sweeps; this reliance renders current design workflows ill-suited for the systematic exploration of the high-dimensional space of shapes, compositions, and release behaviors. + We present an automated design framework for polypills that leverages topology optimization to match dissolution behaviors with prescribed drug release kinetics. In particular, we employ a supershape parametrization to define geometry/phase distribution, a neural network representation to specify excipient distribution, and a coupled system of modified Allen-Cahn and Fick's diffusion equations to govern dissolution kinetics. The framework is implemented in JAX, utilizing automatic differentiation to compute sensitivities for the co-optimization of pill shape and constituent distribution. We validate the method through single-phase and multi-excipient case studies. + oai:arXiv.org:2512.09154v1 + cs.CE + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Georgios Chionas, Olga Gorelkina, Piotr Krysta, Rida Laraki + Rahul Kumar Padhy, Aaditya Chandrasekhar, Amir M. 
Mirzendehdel - Generalizations of the Normalized Radon Cumulative Distribution Transform for Limited Data Recognition - https://arxiv.org/abs/2512.08099 - arXiv:2512.08099v1 Announce Type: new -Abstract: The Radon cumulative distribution transform (R-CDT) exploits one-dimensional Wasserstein transport and the Radon transform to represent prominent features in images. It is closely related to the sliced Wasserstein distance and facilitates classification tasks, especially in the small data regime, like the recognition of watermarks in filigranology. Here, a typical issue is that the given data may be subject to affine transformations caused by the measuring process. To make the R-CDT invariant under arbitrary affine transformations, a two-step normalization of the R-CDT has been proposed in our earlier works. The aim of this paper is twofold. First, we propose a family of generalized normalizations to enhance flexibility for applications. Second, we study multi-dimensional and non-Euclidean settings by making use of generalized Radon transforms. We prove that our novel feature representations are invariant under certain transformations and allow for linear separation in feature space. Our theoretical results are supported by numerical experiments based on 2d images, 3d shapes and 3d rotation matrices, showing near perfect classification accuracies and clustering results. - oai:arXiv.org:2512.08099v1 - math.NA - cs.CV - cs.IT - cs.NA - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + Improving a Parallel C++ Intel AVX-512 SIMD Linear Genetic Programming Interpreter + https://arxiv.org/abs/2512.09157 + arXiv:2512.09157v1 Announce Type: new +Abstract: We extend recent 256 SSE vector work to 512 AVX giving a four fold speedup. We use MAGPIE (Machine Automated General Performance Improvement via Evolution of software) to speedup a C++ linear genetic programming interpreter. 
Local search is provided with three alternative hand optimised codes, revision history and the Intel 512 bit AVX512VL documentation as C++ XML. Magpie is applied to the new Single Instruction Multiple Data (SIMD) parallel interpreter for Peter Nordin's linear genetic programming GPengine. Linux mprotect sandboxes whilst performance is given by perf instruction count. In both cases, in a matter of hours local search reliably sped up 114 or 310 lines of manually written parallel SIMD code for the Intel Advanced Vector Extensions (AVX) by 2 percent. + oai:arXiv.org:2512.09157v1 + cs.NE + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Matthias Beckmann, Robert Beinert, Jonas Bresch + http://creativecommons.org/licenses/by-nc-nd/4.0/ + William B. Langdon - Broadband Thermoelectric Energy Harvesting for Wearable Biosensors Using Plasmonic Field-Enhancement and Machine-Learning-Guided Device Optimization - https://arxiv.org/abs/2512.08103 - arXiv:2512.08103v1 Announce Type: new -Abstract: Wearable biosensors increasingly require continuous and battery-free power sources, but conventional skin-mounted thermoelectric generators are limited by the small temperature differences available in real environments. This work introduces a hybrid thermoplasmonic and thermoelectric energy harvester that combines multiband plasmonic absorption with machine-learning-guided optimization to improve on-body energy conversion. A broadband metasurface made of cross-bowtie nanoantennas is designed to absorb infrared radiation across the 2 to 12 micron range, capturing human body emission, ambient infrared radiation, and near-infrared sunlight. Electromagnetic simulations show strong field enhancement in nanoscale antenna gaps, producing localized thermoplasmonic heating directly above flexible Bi2Te3 thermoelectric junctions. 
Coupled optical, thermal, and electrical modeling indicates that this localized heating increases the effective temperature difference from the typical 3 to 4 degrees C of standard wearable thermoelectric generators to approximately 13 degrees C. This results in a power density of about 0.15 mW per cm^2 under indoor-relevant infrared flux, representing a four- to six-fold improvement over existing flexible devices. A machine-learning surrogate model trained on multiphysics data predicts temperature rise and electrical output with high accuracy (R2 greater than 0.92) and identifies optimal device geometries through Pareto-front analysis. The proposed hybrid thermoplasmonic, thermoelectric, and machine-learning framework provides a scalable route toward more efficient, compact, and flexible energy harvesters for autonomous and long-term wearable physiological monitoring. - oai:arXiv.org:2512.08103v1 - cs.CE - Wed, 10 Dec 2025 00:00:00 -0500 + GTAvatar: Bridging Gaussian Splatting and Texture Mapping for Relightable and Editable Gaussian Avatars + https://arxiv.org/abs/2512.09162 + arXiv:2512.09162v1 Announce Type: new +Abstract: Recent advancements in Gaussian Splatting have enabled increasingly accurate reconstruction of photorealistic head avatars, opening the door to numerous applications in visual effects, videoconferencing, and virtual reality. This, however, comes with the lack of intuitive editability offered by traditional triangle mesh-based methods. In contrast, we propose a method that combines the accuracy and fidelity of 2D Gaussian Splatting with the intuitiveness of UV texture mapping. By embedding each canonical Gaussian primitive's local frame into a patch in the UV space of a template mesh in a computationally efficient manner, we reconstruct continuous editable material head textures from a single monocular video on a conventional UV domain. 
Furthermore, we leverage an efficient physically based reflectance model to enable relighting and editing of these intrinsic material maps. Through extensive comparisons with state-of-the-art methods, we demonstrate the accuracy of our reconstructions, the quality of our relighting results, and the ability to provide intuitive controls for modifying an avatar's appearance and geometry via texture mapping without additional optimization. + oai:arXiv.org:2512.09162v1 + cs.CV + cs.GR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hamidreza Moradi, Melika Filvantorkaman + http://creativecommons.org/licenses/by/4.0/ + Kelian Baert, Mae Younes, Francois Bourel, Marc Christie, Adnane Boukhayma - AgentCrypt: Advancing Privacy and (Secure) Computation in AI Agent Collaboration - https://arxiv.org/abs/2512.08104 - arXiv:2512.08104v1 Announce Type: new -Abstract: As AI agents increasingly operate in real-world, multi-agent environments, ensuring reliable and context-aware privacy in agent communication is critical, especially to comply with evolving regulatory requirements. Traditional access controls are insufficient, as privacy risks often arise after access is granted; agents may use information in ways that compromise privacy, such as messaging humans, sharing context with other agents, making tool calls, persisting data, or generating derived private information. Existing approaches often treat privacy as a binary constraint, whether data is shareable or not, overlooking nuanced, role-specific, and computation-dependent privacy needs essential for regulatory compliance. - Agents, including those based on large language models, are inherently probabilistic and heuristic. There is no formal guarantee of how an agent will behave for any query, making them ill-suited for operations critical to security. 
To address this, we introduce AgentCrypt, a four-tiered framework for fine-grained, encrypted agent communication that adds a protection layer atop any AI agent platform. AgentCrypt spans unrestricted data exchange (Level 1) to fully encrypted computation using techniques such as homomorphic encryption (Level 4). Crucially, it guarantees the privacy of tagged data is always maintained, prioritizing privacy above correctness. - AgentCrypt ensures privacy across diverse interactions and enables computation on otherwise inaccessible data, overcoming barriers such as data silos. We implemented and tested it with Langgraph and Google ADK, demonstrating versatility across platforms. We also introduce a benchmark dataset simulating privacy-critical tasks at all privacy levels, enabling systematic evaluation and fostering the development of regulatable machine learning systems for secure agent communication and computation. - oai:arXiv.org:2512.08104v1 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + WonderZoom: Multi-Scale 3D World Generation + https://arxiv.org/abs/2512.09164 + arXiv:2512.09164v1 Announce Type: new +Abstract: We present WonderZoom, a novel approach to generating 3D scenes with contents across multiple spatial scales from a single image. Existing 3D world generation models remain limited to single-scale synthesis and cannot produce coherent scene contents at varying granularities. The fundamental challenge is the lack of a scale-aware 3D representation capable of generating and rendering content with largely different spatial sizes. WonderZoom addresses this through two key innovations: (1) scale-adaptive Gaussian surfels for generating and real-time rendering of multi-scale 3D scenes, and (2) a progressive detail synthesizer that iteratively generates finer-scale 3D contents. Our approach enables users to "zoom into" a 3D region and auto-regressively synthesize previously non-existent fine details from landscapes to microscopic features. 
Experiments demonstrate that WonderZoom significantly outperforms state-of-the-art video and 3D models in both quality and alignment, enabling multi-scale 3D world creation from a single image. We show video results and an interactive viewer of generated multi-scale 3D worlds in https://wonderzoom.github.io/ + oai:arXiv.org:2512.09164v1 + cs.CV + cs.AI + cs.GR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Harish Karthikeyan, Yue Guo, Leo de Castro, Antigoni Polychroniadou, Leo Ardon, Udari Madhushani Sehwag, Sumitra Ganesh, Manuela Veloso + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jin Cao, Hong-Xing Yu, Jiajun Wu - Beyond Revenue and Welfare: Counterfactual Analysis of Spectrum Auctions with Application to Canada's 3800MHz Allocation - https://arxiv.org/abs/2512.08106 - arXiv:2512.08106v1 Announce Type: new -Abstract: Spectrum auctions are the primary mechanism through which governments allocate scarce radio frequencies, with outcomes that shape competition, coverage, and innovation in telecommunications markets. While traditional models of spectrum auctions often rely on strong equilibrium assumptions, we take a more parsimonious approach by modeling bidders as myopic and straightforward: in each round, firms simply demand the bundle that maximizes their utility given current prices. Despite its simplicity, this model proves effective in predicting the outcomes of Canada's 2023 auction of 3800 MHz spectrum licenses. Using detailed round-by-round bidding data, we estimate bidders' valuations through a linear programming framework and validate that our model reproduces key features of the observed allocation and price evolution. We then use these estimated valuations to simulate a counterfactual auction under an alternative mechanism that incentivizes deployment in rural and remote regions, aligning with one of the key objectives set out in the Canadian Telecommunications Act. 
The results show that the proposed mechanism substantially improves population coverage in underserved areas. These findings demonstrate that a behavioral model with minimal assumptions is sufficient to generate reliable counterfactual predictions, making it a practical tool for policymakers to evaluate how alternative auction designs may influence future outcomes. In particular, our study demonstrates a method for counterfactual mechanism design, providing a framework to evaluate how alternative auction rules could advance policy goals such as equitable deployment across Canada. - oai:arXiv.org:2512.08106v1 - cs.GT - Wed, 10 Dec 2025 00:00:00 -0500 + Spectral Embedding via Chebyshev Bases for Robust DeepONet Approximation + https://arxiv.org/abs/2512.09165 + arXiv:2512.09165v1 Announce Type: new +Abstract: Deep Operator Networks (DeepONets) have become a central tool in data-driven operator learning, providing flexible surrogates for nonlinear mappings arising in partial differential equations (PDEs). However, the standard trunk design based on fully connected layers acting on raw spatial or spatiotemporal coordinates struggles to represent sharp gradients, boundary layers, and non-periodic structures commonly found in PDEs posed on bounded domains with Dirichlet or Neumann boundary conditions. To address these limitations, we introduce the Spectral-Embedded DeepONet (SEDONet), a new DeepONet variant in which the trunk is driven by a fixed Chebyshev spectral dictionary rather than coordinate inputs. This non-periodic spectral embedding provides a principled inductive bias tailored to bounded domains, enabling the learned operator to capture fine-scale non-periodic features that are difficult for Fourier or MLP trunks to represent. 
SEDONet is evaluated on a suite of PDE benchmarks including 2D Poisson, 1D Burgers, 1D advection-diffusion, Allen-Cahn dynamics, and the Lorenz-96 chaotic system, covering elliptic, parabolic, advective, and multiscale temporal phenomena, all of which can be viewed as canonical problems in computational mechanics. Across all datasets, SEDONet consistently achieves the lowest relative L2 errors among DeepONet, FEDONet, and SEDONet, with average improvements of about 30-40% over the baseline DeepONet and meaningful gains over Fourier-embedded variants on non-periodic geometries. Spectral analyses further show that SEDONet more accurately preserves high-frequency and boundary-localized features, demonstrating the value of Chebyshev embeddings in non-periodic operator learning. The proposed architecture offers a simple, parameter-neutral modification to DeepONets, delivering a robust and efficient spectral framework for surrogate modeling of PDEs on bounded domains. + oai:arXiv.org:2512.09165v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sara Jalili Shani, Kris Joseph, Michael B. McNally, James R. Wright + Muhammad Abid, Omer San - Detecting Ambiguity Aversion in Cyberattack Behavior to Inform Cognitive Defense Strategies - https://arxiv.org/abs/2512.08107 - arXiv:2512.08107v1 Announce Type: new -Abstract: Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research explores the ability to model and detect when they exhibit ambiguity aversion, a cognitive bias reflecting a preference for known (versus unknown) probabilities. 
We introduce a novel methodological framework that (1) leverages rich, multi-modal data from human-subjects red-team experiments, (2) employs a large language model (LLM) pipeline to parse unstructured logs into MITRE ATT&CK-mapped action sequences, and (3) applies a new computational model to infer an attacker's ambiguity aversion level in near-real time. By operationalizing this cognitive trait, our work provides a foundational component for developing adaptive cognitive defense strategies. - oai:arXiv.org:2512.08107v1 - cs.CR - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Prompt-Based Continual Compositional Zero-Shot Learning + https://arxiv.org/abs/2512.09172 + arXiv:2512.09172v1 Announce Type: new +Abstract: We tackle continual adaptation of vision-language models to new attributes, objects, and their compositions in Compositional Zero-Shot Learning (CZSL), while preventing forgetting of prior knowledge. Unlike classical continual learning where classes are disjoint, CCZSL is more complex as attributes and objects may reoccur across sessions while compositions remain unique. Built on a frozen VLM backbone, we propose the first Prompt-based Continual Compositional Zero-Shot Learning (PromptCCZSL) framework that retains prior knowledge through recency-weighted multi-teacher distillation. It employs session-aware compositional prompts to fuse multimodal features for new compositions, while attribute and object prompts are learned through session-agnostic fusion to maintain global semantic consistency, which is further stabilized by a Cosine Anchor Loss (CAL) to preserve prior knowledge. To enhance adaptation in the current session, an Orthogonal Projection Loss (OPL) ensures that new attribute and object embeddings remain distinct from previous ones, preventing overlap, while an Intra-Session Diversity Loss (IDL) promotes variation among current-session embeddings for richer, more discriminative representations. 
We also introduce a comprehensive protocol that jointly measures catastrophic forgetting and compositional generalization. Extensive experiments on UT-Zappos and C-GQA benchmarks demonstrate that PromptCCZSL achieves substantial improvements over prior VLM-based and non-VLM baselines, setting a new benchmark for CCZSL in closed-world settings. + oai:arXiv.org:2512.09172v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Stephan Carney, Soham Hans, Sofia Hirschmann, Stacey Marsella, Yvonne Fonken, Peggy Wu, Nikolos Gurney + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sauda Maryam, Sara Nadeem, Faisal Qureshi, Mohsen Ali - Scalable Offline Model-Based RL with Action Chunks - https://arxiv.org/abs/2512.08108 - arXiv:2512.08108v1 Announce Type: new -Abstract: In this paper, we study whether model-based reinforcement learning (RL), in particular model-based value expansion, can provide a scalable recipe for tackling complex, long-horizon tasks in offline RL. Model-based value expansion fits an on-policy value function using length-n imaginary rollouts generated by the current policy and a learned dynamics model. While larger n reduces bias in value bootstrapping, it amplifies accumulated model errors over long horizons, degrading future predictions. We address this trade-off with an \emph{action-chunk} model that predicts a future state from a sequence of actions (an "action chunk") instead of a single action, which reduces compounding errors. In addition, instead of directly training a policy to maximize rewards, we employ rejection sampling from an expressive behavioral action-chunk policy, which prevents model exploitation from out-of-distribution actions. We call this recipe \textbf{Model-Based RL with Action Chunks (MAC)}. 
Through experiments on highly challenging tasks with large-scale datasets of up to 100M transitions, we show that MAC achieves the best performance among offline model-based RL algorithms, especially on long-horizon tasks.
Overall, this article is an attempt to bridge the gap between the failure modes observed in transformers and the theoretical understanding of them that is still largely missing in this space.
While recent generative approaches have attempted to model progression, key mismatches remain: disease dynamics are inherently continuous and monotonic, yet latent representations are often scattered, lacking semantic structure, and diffusion-based models disrupt continuity with a random denoising process. In this work, we propose to treat the disease dynamics as a velocity field and leverage Flow Matching (FM) to align the temporal evolution of patient data. Unlike prior methods, this captures the intrinsic dynamics of the disease, making the progression more interpretable. However, a key challenge remains: in latent space, Auto-Encoders (AEs) do not guarantee alignment across patients or correlation with clinical-severity indicators (e.g., age and disease conditions). To address this, we propose to learn patient-specific latent alignment, which constrains patient trajectories to lie along a specific axis, with magnitude increasing monotonically with disease severity. This leads to a consistent and semantically meaningful latent space. Together, we present $\Delta$-LFM, a framework for modeling patient-specific latent progression with flow matching. Across three longitudinal MRI benchmarks, $\Delta$-LFM demonstrates strong empirical performance and, more importantly, offers a new framework for interpreting and visualizing disease dynamics.
These prevalence estimates are produced by a classifier, either an LLM-as-a-judge or human annotators, making the choice of classifier central to trustworthy evaluation. Common metrics used for this choice, such as Accuracy, Precision, and F1, are sensitive to class imbalance and to arbitrary choices of the positive class, and can favor judges that distort prevalence estimates. We show that Youden's $J$ statistic is theoretically aligned with choosing the best judge to compare models, and that Balanced Accuracy is an equivalent linear transformation of $J$. Through analytical arguments as well as empirical examples and simulations, we demonstrate how selecting judges using Balanced Accuracy leads to better, more robust classifier selection.
Deception is categorized via a standardized taxonomy (omission, distortion, fabrication, misdirection), while suspicion scores are longitudinally smoothed to capture both immediate judgments and evolving trust dynamics. Structured logs preserve prompts, outputs, and state transitions for full reproducibility. Across 7,320 statements and 100 runs, Werewolves produce deceptive statements in 31% of turns, while peer detection achieves 71-73% precision with ~52% overall accuracy. Precision is higher for identifying Werewolves, though false positives occur against Villagers. Suspicion toward Werewolves rises from ~52% to over 60% across rounds, while suspicion toward Villagers and the Doctor stabilizes near 44-46%. This divergence shows that extended interaction improves recall against liars without compounding errors against truthful roles. WOLF moves deception evaluation beyond static datasets, offering a dynamic, controlled testbed for measuring deceptive and detective capacity in adversarial multi-agent interaction. + oai:arXiv.org:2512.09187v1 + cs.MA cs.AI - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Stephane Collot, Colin Fraser, Justin Zhao, William F. Shen, Timon Willi, Ilias Leontiadis + http://creativecommons.org/licenses/by/4.0/ + Mrinal Agarwal, Saad Rana, Theo Sundoro, Hermela Berhe, Spencer Kim, Vasu Sharma, Sean O'Brien, Kevin Zhu - Evolutionary perspective of large language models on shaping research insights into healthcare disparities - https://arxiv.org/abs/2512.08122 - arXiv:2512.08122v1 Announce Type: new -Abstract: Introduction. Advances in large language models (LLMs) offer a chance to act as scientific assistants, helping people grasp complex research areas. This study examines how LLMs evolve in healthcare disparities research, with attention to public access to relevant information. Methods. We studied three well-known LLMs: ChatGPT, Copilot, and Gemini. 
Each week, we asked them a consistent prompt about research themes in healthcare disparities and tracked how their answers changed over a one-month period. Analysis. The themes produced by the LLMs were categorized and cross-checked against H-index values from the Web of Science to verify relevance. This dual approach shows how the outputs of LLMs develop over time and how such progress could help researchers navigate trends. Results. The outputs aligned with actual scientific impact and trends in the field, indicating that LLMs can help people understand the healthcare disparities landscape. Time-series comparisons showed differences among the models in how broadly and deeply they identified and classified themes. Conclusion. The study offers a framework that uses the evolution of multiple LLMs to illuminate AI tools for studying healthcare disparities, informing future research and public engagement strategies. - oai:arXiv.org:2512.08122v1 - cs.CY - cs.SI - Wed, 10 Dec 2025 00:00:00 -0500 + Understanding Mental States in Active and Autonomous Driving with EEG + https://arxiv.org/abs/2512.09190 + arXiv:2512.09190v1 Announce Type: new +Abstract: Understanding how driver mental states differ between active and autonomous driving is critical for designing safe human-vehicle interfaces. This paper presents the first EEG-based comparison of cognitive load, fatigue, valence, and arousal across the two driving modes. Using data from 31 participants performing identical tasks in both scenarios of three different complexity levels, we analyze temporal patterns, task-complexity effects, and channel-wise activation differences. Our findings show that although both modes evoke similar trends across complexity levels, the intensity of mental states and the underlying neural activation differ substantially, indicating a clear distribution shift between active and autonomous driving. 
Transfer-learning experiments confirm that models trained on active driving data generalize poorly to autonomous driving and vice versa. We attribute this distribution shift primarily to differences in motor engagement and attentional demands between the two driving modes, which lead to distinct spatial and temporal EEG activation patterns. Although autonomous driving results in lower overall cortical activation, participants continue to exhibit measurable fluctuations in cognitive load, fatigue, valence, and arousal associated with readiness to intervene, task-evoked emotional responses, and monotony-related passive fatigue. These results emphasize the need for scenario-specific data and models when developing next-generation driver monitoring systems for autonomous vehicles. + oai:arXiv.org:2512.09190v1 + cs.HC + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - David An + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Prithila Angkan, Paul Hungler, Ali Etemad - Universal Adversarial Suffixes Using Calibrated Gumbel-Softmax Relaxation - https://arxiv.org/abs/2512.08123 - arXiv:2512.08123v1 Announce Type: new -Abstract: Language models (LMs) are often used as zero-shot or few-shot classifiers by scoring label words, but they remain fragile to adversarial prompts. Prior work typically optimizes task- or model-specific triggers, making results difficult to compare and limiting transferability. We study universal adversarial suffixes: short token sequences (4-10 tokens) that, when appended to any input, broadly reduce accuracy across tasks and models. Our approach learns the suffix in a differentiable "soft" form using Gumbel-Softmax relaxation and then discretizes it for inference. Training maximizes calibrated cross-entropy on the label region while masking gold tokens to prevent trivial leakage, with entropy regularization to avoid collapse. 
A single suffix trained on one model transfers effectively to others, consistently lowering both accuracy and calibrated confidence. Experiments on sentiment analysis, natural language inference, paraphrase detection, commonsense QA, and physical reasoning with Qwen2-1.5B, Phi-1.5, and TinyLlama-1.1B demonstrate consistent attack effectiveness and transfer across tasks and model families. - oai:arXiv.org:2512.08123v1 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + TritonForge: Profiling-Guided Framework for Automated Triton Kernel Optimization + https://arxiv.org/abs/2512.09196 + arXiv:2512.09196v1 Announce Type: new +Abstract: High-performance GPU kernel optimization remains a critical yet labor-intensive task in modern machine learning workloads. Although Triton, a domain-specific language for GPU programming, enables developers to write efficient kernels with concise code, achieving expert-level performance still requires deep understanding of GPU architectures and low-level performance trade-offs. We present TritonForge, a profiling-guided framework for automated Triton kernel optimization. TritonForge integrates kernel analysis, runtime profiling, and iterative code transformation to streamline the optimization process. By incorporating data-driven feedback from profiling results, the system identifies performance bottlenecks, proposes targeted code modifications, and evaluates their impact automatically. While our prototype leverages large language models (LLMs) to assist in code reasoning and transformation, the framework remains modular and model-agnostic. Across diverse kernel types and GPU architectures, TritonForge achieves up to 5x performance improvement over baseline implementations and on average 1.76x of the cases are successful, providing a foundation for future research in automated GPU performance optimization. 
+ oai:arXiv.org:2512.09196v1 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-sa/4.0/ - Sampriti Soor, Suklav Ghosh, Arijit Sur + Haonan Li, Keyu Man, Partha Kanuparthy, Hanning Chen, Wei Sun, Sreen Tallam, Chenguang Zhu, Kevin Zhu, Zhiyun Qian - Long-only cryptocurrency portfolio management by ranking the assets: a neural network approach - https://arxiv.org/abs/2512.08124 - arXiv:2512.08124v1 Announce Type: new -Abstract: This paper will propose a novel machine learning based portfolio management method in the context of the cryptocurrency market. Previous researchers mainly focus on the prediction of the movement for specific cryptocurrency such as the bitcoin(BTC) and then trade according to the prediction. In contrast to the previous work that treats the cryptocurrencies independently, this paper manages a group of cryptocurrencies by analyzing the relative relationship. Specifically, in each time step, we utilize the neural network to predict the rank of the future return of the managed cryptocurrencies and place weights accordingly. By incorporating such cross-sectional information, the proposed methods is shown to profitable based on the backtesting experiments on the real daily cryptocurrency market data from May, 2020 to Nov, 2023. During this 3.5 years, the market experiences the full cycle of bullish, bearish and stagnant market conditions. Despite under such complex market conditions, the proposed method outperforms the existing methods and achieves a Sharpe ratio of 1.01 and annualized return of 64.26%. Additionally, the proposed method is shown to be robust to the increase of transaction fee. 
- oai:arXiv.org:2512.08124v1 + Towards Optimal Valve Prescription for Transcatheter Aortic Valve Replacement (TAVR) Surgery: A Machine Learning Approach + https://arxiv.org/abs/2512.09198 + arXiv:2512.09198v1 Announce Type: new +Abstract: Transcatheter Aortic Valve Replacement (TAVR) has emerged as a minimally invasive treatment option for patients with severe aortic stenosis, a life-threatening cardiovascular condition. Multiple transcatheter heart valves (THV) have been approved for use in TAVR, but current guidelines regarding valve type prescription remain an active topic of debate. We propose a data-driven clinical support tool to identify the optimal valve type with the objective of minimizing the risk of permanent pacemaker implantation (PPI), a predominant postoperative complication. We synthesize a novel dataset that combines U.S. and Greek patient populations and integrates three distinct data sources (patient demographics, computed tomography scans, echocardiograms) while harmonizing differences in each country's record system. We introduce a leaf-level analysis to leverage population heterogeneity and avoid benchmarking against uncertain counterfactual risk estimates. The final prescriptive model shows a reduction in PPI rates of 26% and 16% compared with the current standard of care in our internal U.S. population and external Greek validation cohort, respectively. To the best of our knowledge, this work represents the first unified, personalized prescription strategy for THV selection in TAVR. + oai:arXiv.org:2512.09198v1 cs.LG cs.AI - cs.NE - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - 10.1109/IJCNN64981.2025.11228268 - 2025 International Joint Conference on Neural Networks (IJCNN), Rome, Italy, 2025, pp. 1-8 - Zijiang Yang + Phevos Paschalidis, Vasiliki Stoumpou, Lisa Everest, Yu Ma, Talhat Azemi, Jawad Haider, Steven Zweibel, Eleftherios M. 
Protopapas, Jeff Mather, Maciej Tysarowski, George E. Sarris, Robert C. Hagberg, Howard L. Haronian, Dimitris Bertsimas - Improving the Sensitivity of Backdoor Detectors via Class Subspace Orthogonalization - https://arxiv.org/abs/2512.08129 - arXiv:2512.08129v1 Announce Type: new -Abstract: Most post-training backdoor detection methods rely on attacked models exhibiting extreme outlier detection statistics for the target class of an attack, compared to non-target classes. However, these approaches may fail: (1) when some (non-target) classes are easily discriminable from all others, in which case they may naturally achieve extreme detection statistics (e.g., decision confidence); and (2) when the backdoor is subtle, i.e., with its features weak relative to intrinsic class-discriminative features. A key observation is that the backdoor target class has contributions to its detection statistic from both the backdoor trigger and from its intrinsic features, whereas non-target classes only have contributions from their intrinsic features. To achieve more sensitive detectors, we thus propose to suppress intrinsic features while optimizing the detection statistic for a given class. For non-target classes, such suppression will drastically reduce the achievable statistic, whereas for the target class the (significant) contribution from the backdoor trigger remains. In practice, we formulate a constrained optimization problem, leveraging a small set of clean examples from a given class, and optimizing the detection statistic while orthogonalizing with respect to the class's intrinsic features. We dub this plug-and-play approach Class Subspace Orthogonalization (CSO) and assess it against challenging mixed-label and adaptive attacks. 
- oai:arXiv.org:2512.08129v1 + LLMs for Analog Circuit Design Continuum (ACDC) + https://arxiv.org/abs/2512.09199 + arXiv:2512.09199v1 Announce Type: new +Abstract: Large Language Models (LLMs) and transformer architectures have shown impressive reasoning and generation capabilities across diverse natural language tasks. However, their reliability and robustness in real-world engineering domains remain largely unexplored, limiting their practical utility in human-centric workflows. In this work, we investigate the applicability and consistency of LLMs for analog circuit design -- a task requiring domain-specific reasoning, adherence to physical constraints, and structured representations -- focusing on AI-assisted design where humans remain in the loop. We study how different data representations influence model behavior and compare smaller models (e.g., T5, GPT-2) with larger foundation models (e.g., Mistral-7B, GPT-oss-20B) under varying training conditions. Our results highlight key reliability challenges, including sensitivity to data format, instability in generated designs, and limited generalization to unseen circuit configurations. These findings provide early evidence on the limits and potential of LLMs as tools to enhance human capabilities in complex engineering tasks, offering insights into designing reliable, deployable foundation models for structured, real-world applications. + oai:arXiv.org:2512.09199v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + cs.PF + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Guangmingmei Yang, David J. 
Miller, George Kesidis + http://creativecommons.org/licenses/by/4.0/ + Yasaman Esfandiari, Jocelyn Rego, Austin Meyer, Jonathan Gallagher, Mia Levy - Biothreat Benchmark Generation Framework for Evaluating Frontier AI Models I: The Task-Query Architecture - https://arxiv.org/abs/2512.08130 - arXiv:2512.08130v1 Announce Type: new -Abstract: Both model developers and policymakers seek to quantify and mitigate the risk of rapidly-evolving frontier artificial intelligence (AI) models, especially large language models (LLMs), to facilitate bioterrorism or access to biological weapons. An important element of such efforts is the development of model benchmarks that can assess the biosecurity risk posed by a particular model. This paper describes the first component of a novel Biothreat Benchmark Generation (BBG) Framework. The BBG approach is designed to help model developers and evaluators reliably measure and assess the biosecurity risk uplift and general harm potential of existing and future AI models, while accounting for key aspects of the threat itself that are often overlooked in other benchmarking efforts, including different actor capability levels, and operational (in addition to purely technical) risk factors. As a pilot, the BBG is first being developed to address bacterial biological threats only. The BBG is built upon a hierarchical structure of biothreat categories, elements and tasks, which then serves as the basis for the development of task-aligned queries. This paper outlines the development of this biothreat task-query architecture, which we have named the Bacterial Biothreat Schema, while future papers will describe follow-on efforts to turn queries into model prompts, as well as how the resulting benchmarks can be implemented for model evaluation. 
Overall, the BBG Framework, including the Bacterial Biothreat Schema, seeks to offer a robust, reusable structure for evaluating bacterial biological risks arising from LLMs across multiple levels of aggregation, one that captures the full scope of technical and operational requirements for biological adversaries and accounts for a wide spectrum of adversary capabilities.
+ oai:arXiv.org:2512.09200v1 + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Gary Ackerman, Brandon Behlendorf, Zachary Kallenborn, Sheriff Almakki, Doug Clifford, Jenna LaTourette, Hayley Peterson, Noah Sheinbaum, Olivia Shoemaker, Anna Wetzel + Liang Luo, Yuxin Chen, Zhengyu Zhang, Mengyue Hang, Andrew Gu, Buyun Zhang, Boyang Liu, Chen Chen, Chengze Fan, Dong Liang, Fan Yang, Feifan Gu, Huayu Li, Jade Nie, Jiayi Xu, Jiyan Yang, Jongsoo Park, Laming Chen, Longhao Jin, Qianru Li, Qin Huang, Shali Jiang, Shiwen Shen, Shuaiwen Wang, Sihan Zeng, Siyang Yuan, Tongyi Tang, Weilin Zhang, Wenjun Wang, Xi Liu, Xiaohan Wei, Xiaozhen Xia, Yuchen Hao, Yunlong He, Yasmine Badr, Zeliang Chen, Maxim Naumov, Yantao Yao, Wenlin Chen, Santanu Kolay, GP Musumeci, Ellie Dingqiao Wen - Universal Adversarial Suffixes for Language Models Using Reinforcement Learning with Calibrated Reward - https://arxiv.org/abs/2512.08131 - arXiv:2512.08131v1 Announce Type: new -Abstract: Language models are vulnerable to short adversarial suffixes that can reliably alter predictions. Previous works usually find such suffixes with gradient search or rule-based methods, but these are brittle and often tied to a single task or model. In this paper, a reinforcement learning framework is used where the suffix is treated as a policy and trained with Proximal Policy Optimization against a frozen model as a reward oracle. Rewards are shaped using calibrated cross-entropy, removing label bias and aggregating across surface forms to improve transferability. The proposed method is evaluated on five diverse NLP benchmark datasets, covering sentiment, natural language inference, paraphrase, and commonsense reasoning, using three distinct language models: Qwen2-1.5B Instruct, TinyLlama-1.1B Chat, and Phi-1.5. 
Results show that RL-trained suffixes consistently degrade accuracy and transfer more effectively across tasks and models than previous adversarial triggers of a similar kind.
+ oai:arXiv.org:2512.09201v1 + cs.GR + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Sampriti Soor, Suklav Ghosh, Arijit Sur + http://creativecommons.org/licenses/by/4.0/ + Aditya Ganeshan, Matheus Gadelha, Thibault Groueix, Zhiqin Chen, Siddhartha Chaudhuri, Vladimir Kim, Wang Yifan, Daniel Ritchie - Multi-agent learning under uncertainty: Recurrence vs. concentration - https://arxiv.org/abs/2512.08132 - arXiv:2512.08132v1 Announce Type: new -Abstract: In this paper, we examine the convergence landscape of multi-agent learning under uncertainty. Specifically, we analyze two stochastic models of regularized learning in continuous games -- one in continuous and one in discrete time with the aim of characterizing the long-run behavior of the induced sequence of play. In stark contrast to deterministic, full-information models of learning (or models with a vanishing learning rate), we show that the resulting dynamics do not converge in general. In lieu of this, we ask instead which actions are played more often in the long run, and by how much. We show that, in strongly monotone games, the dynamics of regularized learning may wander away from equilibrium infinitely often, but they always return to its vicinity in finite time (which we estimate), and their long-run distribution is sharply concentrated around a neighborhood thereof. We quantify the degree of this concentration, and we show that these favorable properties may all break down if the underlying game is not strongly monotone -- underscoring in this way the limits of regularized learning in the presence of persistent randomness and uncertainty. 
- oai:arXiv.org:2512.08132v1 - cs.GT + Tensor-Compressed and Fully-Quantized Training of Neural PDE Solvers + https://arxiv.org/abs/2512.09202 + arXiv:2512.09202v1 Announce Type: new +Abstract: Physics-Informed Neural Networks (PINNs) have emerged as a promising paradigm for solving partial differential equations (PDEs) by embedding physical laws into neural network training objectives. However, their deployment on resource-constrained platforms is hindered by substantial computational and memory overhead, primarily stemming from higher-order automatic differentiation, intensive tensor operations, and reliance on full-precision arithmetic. To address these challenges, we present a framework that enables scalable and energy-efficient PINN training on edge devices. This framework integrates fully quantized training, Stein's estimator (SE)-based residual loss computation, and tensor-train (TT) decomposition for weight compression. It contributes three key innovations: (1) a mixed-precision training method that use a square-block MX (SMX) format to eliminate data duplication during backpropagation; (2) a difference-based quantization scheme for the Stein's estimator that mitigates underflow; and (3) a partial-reconstruction scheme (PRS) for TT-Layers that reduces quantization-error accumulation. We further design PINTA, a precision-scalable hardware accelerator, to fully exploit the performance of the framework. Experiments on the 2-D Poisson, 20-D Hamilton-Jacobi-Bellman (HJB), and 100-D Heat equations demonstrate that the proposed framework achieves accuracy comparable to or better than full-precision, uncompressed baselines while delivering 5.5x to 83.5x speedups and 159.6x to 2324.1x energy savings. This work enables real-time PDE solving on edge devices and paves the way for energy-efficient scientific computing at scale. 
+ oai:arXiv.org:2512.09202v1 cs.LG - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + cs.AR + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Jinming Lu, Jiayi Tian, Yequan Zhao, Hai Li, Zheng Zhang + + + Beyond Algorithm Evolution: An LLM-Driven Framework for the Co-Evolution of Swarm Intelligence Optimization Algorithms and Prompts + https://arxiv.org/abs/2512.09209 + arXiv:2512.09209v1 Announce Type: new +Abstract: The field of automated algorithm design has been advanced by frameworks such as EoH, FunSearch, and Reevo. Yet, their focus on algorithm evolution alone, neglecting the prompts that guide them, limits their effectiveness with LLMs, especially in complex, uncertain environments where they nonetheless implicitly rely on strategies from swarm intelligence optimization algorithms. Recognizing this, we argue that swarm intelligence optimization provides a more generalized and principled foundation for automated design. Consequently, this paper proposes a novel framework for the collaborative evolution of both swarm intelligence algorithms and guiding prompts using a single LLM. To enhance interpretability, we also propose a simple yet efficient evaluation method for prompt templates. The framework was rigorously evaluated on a range of NP problems, where it demonstrated superior performance compared to several state-of-the-art automated design approaches. Experiments with various LLMs (e.g., GPT-4o-mini, Qwen3-32B, GPT-5) reveal significantly divergent evolutionary trajectories in the generated prompts, further underscoring the necessity of a structured co-evolution framework. Importantly, our approach maintains leading performance across different models, demonstrating reduced reliance on the most powerful LLMs and enabling more cost-effective deployments. Ablation studies and in-depth analysis of the evolved prompts confirm that collaborative evolution is essential for achieving optimal performance. 
Our work establishes a new paradigm for swarm intelligence optimization algorithms, underscoring the indispensable role of prompt evolution. + oai:arXiv.org:2512.09209v1 + cs.NE + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet + Shipeng Cen, Ying Tan + + + Targeting Misalignment: A Conflict-Aware Framework for Reward-Model-based LLM Alignment + https://arxiv.org/abs/2512.09212 + arXiv:2512.09212v1 Announce Type: new +Abstract: Reward-model-based fine-tuning is a central paradigm in aligning Large Language Models with human preferences. However, such approaches critically rely on the assumption that proxy reward models accurately reflect intended supervision, a condition often violated due to annotation noise, bias, or limited coverage. This misalignment can lead to undesirable behaviors, where models optimize for flawed signals rather than true human values. In this paper, we investigate a novel framework to identify and mitigate such misalignment by treating the fine-tuning process as a form of knowledge integration. We focus on detecting instances of proxy-policy conflicts, cases where the base model strongly disagrees with the proxy. We argue that such conflicts often signify areas of shared ignorance, where neither the policy nor the reward model possesses sufficient knowledge, making them especially susceptible to misalignment. To this end, we propose two complementary metrics for identifying these conflicts: a localized Proxy-Policy Alignment Conflict Score (PACS) and a global Kendall-Tau Distance measure. Building on this insight, we design an algorithm named Selective Human-in-the-loop Feedback via Conflict-Aware Sampling (SHF-CAS) that targets high-conflict QA pairs for additional feedback, refining both the reward model and policy efficiently. 
Experiments on two alignment tasks demonstrate that our approach enhances general alignment performance, even when trained with a biased proxy reward. Our work provides a new lens for interpreting alignment failures and offers a principled pathway for targeted refinement in LLM training. + oai:arXiv.org:2512.09212v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Zixuan Liu, Siavash H. Khajavi, Guangkai Jiang, Xinru Liu - A Dynamic Coding Scheme to Prevent Covert Cyber-Attacks in Cyber-Physical Systems - https://arxiv.org/abs/2512.08134 - arXiv:2512.08134v1 Announce Type: new -Abstract: In this paper, we address two main problems in the context of covert cyber-attacks in cyber-physical systems (CPS). First, we aim to investigate and develop necessary and sufficient conditions in terms of disruption resources of the CPS that enable adversaries to execute covert cyber-attacks. These conditions can be utilized to identify the input and output communication channels that are needed by adversaries to execute these attacks. Second, this paper introduces and develops a dynamic coding scheme as a countermeasure against covert cyber-attacks. Under certain conditions and assuming the existence of one secure input and two secure output communication channels, the proposed dynamic coding scheme prevents adversaries from executing covert cyber-attacks. A numerical case study of a flight control system is provided to demonstrate the capabilities of our proposed and developed dynamic coding scheme. - oai:arXiv.org:2512.08134v1 + MPC for momentum counter-balanced and zero-impulse contact with a free-spinning satellite + https://arxiv.org/abs/2512.09213 + arXiv:2512.09213v1 Announce Type: new +Abstract: In on-orbit robotics, a servicer satellite's ability to make contact with a free-spinning target satellite is essential to completing most on-orbit servicing (OOS) tasks. 
This manuscript develops a nonlinear model predictive control (MPC) framework that generates feasible controls for a servicer satellite to achieve zero-impulse contact with a free-spinning target satellite. The overall maneuver requires coordination between two separately actuated modules of the servicer satellite: (1) a moment generation module and (2) a manipulation module. We apply MPC to control both modules by explicitly modeling the cross-coupling dynamics between them. We demonstrate that the MPC controller can enforce actuation and state constraints that prior control approaches could not account for. We evaluate the performance of the MPC controller by simulating zero-impulse contact scenarios with a free-spinning target satellite via numerical Monte Carlo (MC) trials and comparing the simulation results with prior control approaches. Our simulation results validate the effectiveness of the MPC controller in maintaining spin synchronization and zero-impulse contact under operation constraints, moving contact location, and observation and actuation noise. + oai:arXiv.org:2512.09213v1 eess.SY + cs.RO cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mahdi Taheri, Khashayar Khorasani, Nader Meskin + Theofania Karampela, Rishie Seshadri, Florian D\"orfler, Sarah H. Q. Li - CVP: Central-Peripheral Vision-Inspired Multimodal Model for Spatial Reasoning - https://arxiv.org/abs/2512.08135 - arXiv:2512.08135v1 Announce Type: new -Abstract: We present a central-peripheral vision-inspired framework (CVP), a simple yet effective multimodal model for spatial reasoning that draws inspiration from the two types of human visual fields -- central vision and peripheral vision. Existing approaches primarily rely on unstructured representations, such as point clouds, voxels, or patch features, and inject scene context implicitly via coordinate embeddings. 
However, this often results in limited spatial reasoning capabilities due to the lack of explicit, high-level structural understanding. To address this limitation, we introduce two complementary components into a Large Multimodal Model-based architecture: target-affinity token, analogous to central vision, that guides the model's attention toward query-relevant objects; and allocentric grid, akin to peripheral vision, that captures global scene context and spatial arrangements. These components work in tandem to enable structured, context-aware understanding of complex 3D environments. Experiments show that CVP achieves state-of-the-art performance across a range of 3D scene understanding benchmarks. - oai:arXiv.org:2512.08135v1 + View-on-Graph: Zero-shot 3D Visual Grounding via Vision-Language Reasoning on Scene Graphs + https://arxiv.org/abs/2512.09215 + arXiv:2512.09215v1 Announce Type: new +Abstract: 3D visual grounding (3DVG) identifies objects in 3D scenes from language descriptions. Existing zero-shot approaches leverage 2D vision-language models (VLMs) by converting 3D spatial information (SI) into forms amenable to VLM processing, typically as composite inputs such as specified view renderings or video sequences with overlaid object markers. However, this VLM + SI paradigm yields entangled visual representations that compel the VLM to process entire cluttered cues, making it hard to exploit spatial semantic relationships effectively. In this work, we propose a new VLM x SI paradigm that externalizes the 3D SI into a form enabling the VLM to incrementally retrieve only what it needs during reasoning. We instantiate this paradigm with a novel View-on-Graph (VoG) method, which organizes the scene into a multi-modal, multi-layer scene graph and allows the VLM to operate as an active agent that selectively accesses necessary cues as it traverses the scene. 
This design offers two intrinsic advantages: (i) by structuring 3D context into a spatially and semantically coherent scene graph rather than confounding the VLM with densely entangled visual inputs, it lowers the VLM's reasoning difficulty; and (ii) by actively exploring and reasoning over the scene graph, it naturally produces transparent, step-by-step traces for interpretable 3DVG. Extensive experiments show that VoG achieves state-of-the-art zero-shot performance, establishing structured scene exploration as a promising strategy for advancing zero-shot 3DVG. + oai:arXiv.org:2512.09215v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zeyuan Chen, Xiang Zhang, Haiyang Xu, Jianwen Xie, Zhuowen Tu + Yuanyuan Liu, Haiyang Mei, Dongyang Zhan, Jiayue Zhao, Dongsheng Zhou, Bo Dong, Xin Yang - Robust equilibria in continuous games: From strategic to dynamic robustness - https://arxiv.org/abs/2512.08138 - arXiv:2512.08138v1 Announce Type: new -Abstract: In this paper, we examine the robustness of Nash equilibria in continuous games, under both strategic and dynamic uncertainty. Starting with the former, we introduce the notion of a robust equilibrium as those equilibria that remain invariant to small -- but otherwise arbitrary -- perturbations to the game's payoff structure, and we provide a crisp geometric characterization thereof. Subsequently, we turn to the question of dynamic robustness, and we examine which equilibria may arise as stable limit points of the dynamics of "follow the regularized leader" (FTRL) in the presence of randomness and uncertainty. Despite their very distinct origins, we establish a structural correspondence between these two notions of robustness: strategic robustness implies dynamic robustness, and, conversely, the requirement of strategic robustness cannot be relaxed if dynamic robustness is to be maintained. 
Finally, we examine the rate of convergence to robust equilibria as a function of the underlying regularizer, and we show that entropically regularized learning converges at a geometric rate in games with affinely constrained action spaces. - oai:arXiv.org:2512.08138v1 - cs.GT - cs.LG - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + Bug Priority Change Prediction: An Exploratory Study on Apache Software + https://arxiv.org/abs/2512.09216 + arXiv:2512.09216v1 Announce Type: new +Abstract: Bug fixing is a critical activity in the software development process. In issue tracking systems such as JIRA, each bug report is assigned a priority level to indicate the urgency and importance level of the bug. The priority may change during the bug fixing process, indicating that the urgency and importance level of the bug will change with the bug fixing. However, manually evaluating priority changes for bugs is a tedious process that heavily relies on the subjective judgment of developers and project managers, leading to incorrect priority changes and thus hindering timely bug fixes. Given the lack of research on bug priority change prediction, we propose a novel two-phase bug report priority change prediction method based on bug fixing evolution features and class imbalance handling strategy. Specifically, we divided the bug lifecycle into two phases: bug reporting and bug fixing, and constructed bug priority change prediction models for each phase. To evaluate the performance of our method, we conducted experiments on a bug dataset constructed from 32 non-trivial Apache projects. The experimental results show that our proposed bug fixing evolution features and the adopted class imbalance handling strategy can effectively improve the performance of prediction models. 
The F1-score of the prediction model constructed for the bug reporting phase reached 0.798, while the F1-weighted and F1-macro of the prediction model constructed for the bug fixing phase were 0.712 and 0.613, respectively. Furthermore, we explored the cross-project applicability of our prediction models and their performance at different priority levels. The findings indicate large variations in model performance across different projects, although the overall scores remain decent. Meanwhile, the predictive performance across various priority levels remained relatively consistently high. + oai:arXiv.org:2512.09216v1 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet + Guangzong Cai, Zengyang Li, Peng Liang, Ran Mo, Hui Liu, Yutao Ma - Robust Agents in Open-Ended Worlds - https://arxiv.org/abs/2512.08139 - arXiv:2512.08139v1 Announce Type: new -Abstract: The growing prevalence of artificial intelligence (AI) in various applications underscores the need for agents that can successfully navigate and adapt to an ever-changing, open-ended world. A key challenge is ensuring these AI agents are robust, excelling not only in familiar settings observed during training but also effectively generalising to previously unseen and varied scenarios. In this thesis, we harness methodologies from open-endedness and multi-agent learning to train and evaluate robust AI agents capable of generalising to novel environments, out-of-distribution inputs, and interactions with other co-player agents. We begin by introducing MiniHack, a sandbox framework for creating diverse environments through procedural content generation. Based on the game of NetHack, MiniHack enables the construction of new tasks for reinforcement learning (RL) agents with a focus on generalisation. 
We then present Maestro, a novel approach for generating adversarial curricula that progressively enhance the robustness and generality of RL agents in two-player zero-sum games. We further probe robustness in multi-agent domains, utilising quality-diversity methods to systematically identify vulnerabilities in state-of-the-art, pre-trained RL policies within the complex video game football domain, characterised by intertwined cooperative and competitive dynamics. Finally, we extend our exploration of robustness to the domain of LLMs. Here, our focus is on diagnosing and enhancing the robustness of LLMs against adversarial prompts, employing evolutionary search to generate a diverse range of effective inputs that aim to elicit undesirable outputs from an LLM. This work collectively paves the way for future advancements in AI robustness, enabling the development of agents that not only adapt to an ever-evolving world but also thrive in the face of unforeseen challenges and interactions. - oai:arXiv.org:2512.08139v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Dynamic Graph Coloring: Sequential, Parallel, and Distributed + https://arxiv.org/abs/2512.09218 + arXiv:2512.09218v1 Announce Type: new +Abstract: We present a simple randomized algorithm that can efficiently maintain a $(\Delta+1)$ coloring as the graph undergoes edge insertion and deletion updates, where $\Delta$ denotes an upper bound on the maximum degree. A key advantage is the algorithm's ability to process many updates simultaneously, which makes it naturally adaptable to the parallel and distributed models. Concretely, it gives a unified framework across the models, leading to the following results: + - In the sequential setting, the algorithm processes each update in $O(1)$ expected time, worst-case. This matches and strengthens the results of Henzinger and Peng [TALG 2022] and Bhattacharya et al. 
[TALG 2022], who achieved an $O(1)$ bound but amortized (in expectation and with high probability, respectively), whose work was an improvement of the $O(\log \Delta)$ expected amortized bound of Bhattacharya et al. [SODA'18]. + - In the parallel setting, the algorithm processes each (arbitrary size) batch of updates using $O(1)$ work per update in the batch in expectation, and in $\text{poly}(\log n)$ depth with high probability. This is, in a sense, an ideal parallelization of the above results. + - In the distributed setting, the algorithm can maintain a coloring of the network graph as (potentially many) edges are added or deleted. The maintained coloring is always proper; it may become partial upon updates, i.e., some nodes may temporarily lose their colors, but quickly converges to a full, proper coloring. Concretely, each insertion and deletion causes at most $O(1)$ nodes to become uncolored, but this is resolved within $O(\log n)$ rounds with high probability (e.g., in the absence of further updates nearby--the precise guarantee is stronger, but technical). Importantly, the algorithm incurs only $O(1)$ expected message complexity and computation per update. + oai:arXiv.org:2512.09218v1 + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mikayel Samvelyan + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Mohsen Ghaffari, Jaehyun Koo - Well-posedness of a novel Lagrange multiplier formulation for fluid-poroelastic interaction - https://arxiv.org/abs/2512.08142 - arXiv:2512.08142v1 Announce Type: new -Abstract: We introduce a novel monolithic formulation that employs Lagrange multipliers (LMs) to couple a fluid flow governed by the time-dependent Stokes equations with a poroelastic structure described by the Biot equations. The formulation is developed in detail, and we establish the well-posedness of both the semi-discrete and fully discrete saddle point problems. 
We further prove the stability of the fully discrete system. This saddle point formulation, which utilizes three LMs, is designed to enable a partitioned approach that completely decouples the Stokes and Biot subdomains, and this approach will be explored in a subsequent work. - oai:arXiv.org:2512.08142v1 - math.NA - cs.NA - Wed, 10 Dec 2025 00:00:00 -0500 + CORE: A Conceptual Reasoning Layer for Large Language Models + https://arxiv.org/abs/2512.09222 + arXiv:2512.09222v1 Announce Type: new +Abstract: Large language models handle single-turn generation well, but multi-turn interactions still require the model to reconstruct user intent and task state from an expanding token history because internal representations do not persist across turns. This token-first paradigm leads to drift, inconsistent reasoning modes, and growing prompts as conversations deepen. We propose CORE, a concept-first interaction layer that improves multi-turn stability without modifying model weights. CORE combines a small library of universal cognitive operators with a persistent Local Concept - a compact semantic state capturing the task, constraints, preferences, and intermediate results. Each model call receives only this concept state, the user's latest instruction, and the selected operator, eliminating the need to replay full history. A preliminary prototype simulating CORE's behavior shows about 42% reduction in cumulative prompt tokens, though this number reflects prototype conditions and should not be interpreted as a real-world performance estimate. CORE offers a model-agnostic mechanism that separates conceptual reasoning from language generation, suggesting a scalable direction for more stable multi-turn systems. 
+ oai:arXiv.org:2512.09222v1 + cs.CL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Amy de Castro, Hyesuk Lee + Vishwas Hegde, Vindhya Shigehalli - PolyLingua: Margin-based Inter-class Transformer for Robust Cross-domain Language Detection - https://arxiv.org/abs/2512.08143 - arXiv:2512.08143v1 Announce Type: new -Abstract: Language identification is a crucial first step in multilingual systems such as chatbots and virtual assistants, enabling linguistically and culturally accurate user experiences. Errors at this stage can cascade into downstream failures, setting a high bar for accuracy. Yet, existing language identification tools struggle with key cases--such as music requests where the song title and user language differ. Open-source tools like LangDetect, FastText are fast but less accurate, while large language models, though effective, are often too costly for low-latency or low-resource settings. We introduce PolyLingua, a lightweight Transformer-based model for in-domain language detection and fine-grained language classification. It employs a two-level contrastive learning framework combining instance-level separation and class-level alignment with adaptive margins, yielding compact and well-separated embeddings even for closely related languages. Evaluated on two challenging datasets--Amazon Massive (multilingual digital assistant utterances) and a Song dataset (music requests with frequent code-switching)--PolyLingua achieves 99.25% F1 and 98.15% F1, respectively, surpassing Sonnet 3.5 while using 10x fewer parameters, making it ideal for compute- and latency-constrained environments. 
- oai:arXiv.org:2512.08143v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Enabling Next-Generation Consumer Experience with Feature Coding for Machines + https://arxiv.org/abs/2512.09232 + arXiv:2512.09232v1 Announce Type: new +Abstract: As consumer devices become increasingly intelligent and interconnected, efficient data transfer solutions for machine tasks have become essential. This paper presents an overview of the latest Feature Coding for Machines (FCM) standard, part of MPEG-AI and developed by the Moving Picture Experts Group (MPEG). FCM supports AI-driven applications by enabling the efficient extraction, compression, and transmission of intermediate neural network features. By offloading computationally intensive operations to base servers with high computing resources, FCM allows low-powered devices to leverage large deep learning models. Experimental results indicate that the FCM standard maintains the same level of accuracy while reducing bitrate requirements by 75.90% compared to remote inference. + oai:arXiv.org:2512.09232v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ali Lotfi Rezaabad, Bikram Khanal, Shashwat Chaurasia, Lu Zeng, Dezhi Hong, Hossein Beshashati, Thomas Butler, Megan Ganji + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1109/ICCE63647.2025.10930026 + 2025 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 2025, pp. 1-4 + Md Eimran Hossain Eimon, Juan Merlos, Ashan Perera, Hari Kalva, Velibor Adzic, Borko Furht - Chat with UAV -- Human-UAV Interaction Based on Large Language Models - https://arxiv.org/abs/2512.08145 - arXiv:2512.08145v1 Announce Type: new -Abstract: The future of UAV interaction systems is evolving from engineer-driven to user-driven, aiming to replace traditional predefined Human-UAV Interaction designs. 
This shift focuses on enabling more personalized task planning and design, thereby achieving a higher quality of interaction experience and greater flexibility, which can be used in many fields, such as agriculture, aerial photography, logistics, and environmental monitoring. However, due to the lack of a common language between users and the UAVs, such interactions are often difficult to achieve. The development of Large Language Models brings the ability to understand natural languages and robots' (UAVs') behaviors, marking the possibility of personalized Human-UAV Interaction. Recently, some HUI frameworks based on LLMs have been proposed, but they commonly suffer from difficulties in mixed task planning and execution, leading to low adaptability in complex scenarios. In this paper, we propose a novel dual-agent HUI framework. This framework constructs two independent LLM agents (a task planning agent and an execution agent) and applies different Prompt Engineering to separately handle the understanding, planning, and execution of tasks. To verify the effectiveness and performance of the framework, we have built a task database covering four typical application scenarios of UAVs and quantified the performance of the HUI framework using three independent metrics. Meanwhile, different LLMs are selected to control the UAVs and their performance is compared. Our user study results demonstrate that the framework improves the smoothness of HUI and the flexibility of task execution in the task scenarios we set up, effectively meeting users' personalized needs. - oai:arXiv.org:2512.08145v1 - cs.RO - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Haoran Wang, Zhuohang Chen, Guang Li, Bo Ma, Chuanghuang Li + Analysis of the Security Design, Engineering, and Implementation of the SecureDNA System + https://arxiv.org/abs/2512.09233 + arXiv:2512.09233v1 Announce Type: new +Abstract: We analyze security aspects of the SecureDNA system regarding its system design, engineering, and implementation. 
This system enables DNA synthesizers to screen order requests against a database of hazards. By applying novel cryptography, the system aims to keep order requests and the database of hazards secret. Discerning the detailed operation of the system in part from source code (Version 1.0.8), our analysis examines key management, certificate infrastructure, authentication, and rate-limiting mechanisms. We also perform the first formal-methods analysis of the mutual authentication, basic request, and exemption-handling protocols. + Without breaking the cryptography, our main finding is that SecureDNA's custom mutual authentication protocol SCEP achieves only one-way authentication: the hazards database and keyservers never learn with whom they communicate. This structural weakness violates the principle of defense in depth and enables an adversary to circumvent rate limits that protect the secrecy of the hazards database, if the synthesizer connects with a malicious or corrupted keyserver or hashed database. We point out an additional structural weakness that also violates the principle of defense in depth: inadequate cryptographic bindings prevent the system from detecting if responses, within a TLS channel, from the hazards database were modified. Consequently, if a synthesizer were to reconnect with the database over the same TLS session, an adversary could replay and swap responses from the database without breaking TLS. Although the SecureDNA implementation does not allow such reconnections, it would be stronger security engineering to avoid the underlying structural weakness. We identify these vulnerabilities and suggest and verify mitigations, including adding strong bindings. Software Version 1.1.0 fixes SCEP with our proposed SCEP+ protocol. 
+ oai:arXiv.org:2512.09233v1 + cs.CR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Haoran Wang, Zhuohang Chen, Guang Li, Bo Ma, Chuanghuang Li + http://creativecommons.org/publicdomain/zero/1.0/ + Alan T. Sherman, Jeremy J. Romanik Romano, Edward Zieglar, Enis Golaszewski, Jonathan D. Fuchs, William E. Byrd - Scalable Back-End for an AI-Based Diabetes Prediction Application - https://arxiv.org/abs/2512.08147 - arXiv:2512.08147v1 Announce Type: new -Abstract: The rising global prevalence of diabetes necessitates early detection to prevent severe complications. While AI-powered prediction applications offer a promising solution, they require a responsive and scalable back-end architecture to serve a large user base effectively. This paper details the development and evaluation of a scalable back-end system designed for a mobile diabetes prediction application. The primary objective was to maintain a failure rate below 5% and an average latency of under 1000 ms. The architecture leverages horizontal scaling, database sharding, and asynchronous communication via a message queue. Performance evaluation showed that 83% of the system's features (20 out of 24) met the specified performance targets. Key functionalities such as user profile management, activity tracking, and read-intensive prediction operations successfully achieved the desired performance. The system demonstrated the ability to handle up to 10,000 concurrent users without issues, validating its scalability. The implementation of asynchronous communication using RabbitMQ proved crucial in minimizing the error rate for computationally intensive prediction requests, ensuring system reliability by queuing requests and preventing data loss under heavy load. 
- oai:arXiv.org:2512.08147v1 - cs.AI - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + Efficient Feature Compression for Machines with Global Statistics Preservation + https://arxiv.org/abs/2512.09235 + arXiv:2512.09235v1 Announce Type: new +Abstract: The split-inference paradigm divides an artificial intelligence (AI) model into two parts. This necessitates the transfer of intermediate feature data between the two halves. Here, effective compression of the feature data becomes vital. In this paper, we employ Z-score normalization to efficiently recover the compressed feature data at the decoder side. To examine the efficacy of our method, the proposed method is integrated into the latest Feature Coding for Machines (FCM) codec standard under development by the Moving Picture Experts Group (MPEG). Our method supersedes the existing scaling method used by the current standard under development. It both reduces the overhead bits and improves the end-task accuracy. To further reduce the overhead in certain circumstances, we also propose a simplified method. Experiments show that using our proposed method shows 17.09% reduction in bitrate on average across different tasks and up to 65.69% for object tracking without sacrificing the task accuracy. + oai:arXiv.org:2512.09235v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Henry Anand Septian Radityo, Bernardus Willson, Reynard Tanadi, Latifa Dwiyanti, Saiful Akbar + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1109/ISCAS56072.2025.11043278 + 2025 IEEE International Symposium on Circuits and Systems (ISCAS), London, United Kingdom, 2025, pp. 
1-5 + Md Eimran Hossain Eimon, Hyomin Choi, Fabien Racap\'e, Mateen Ulhaq, Velibor Adzic, Hari Kalva, Borko Furht - TreeGRPO: Tree-Advantage GRPO for Online RL Post-Training of Diffusion Models - https://arxiv.org/abs/2512.08153 - arXiv:2512.08153v1 Announce Type: new -Abstract: Reinforcement learning (RL) post-training is crucial for aligning generative models with human preferences, but its prohibitive computational cost remains a major barrier to widespread adoption. We introduce \textbf{TreeGRPO}, a novel RL framework that dramatically improves training efficiency by recasting the denoising process as a search tree. From shared initial noise samples, TreeGRPO strategically branches to generate multiple candidate trajectories while efficiently reusing their common prefixes. This tree-structured approach delivers three key advantages: (1) \emph{High sample efficiency}, achieving better performance under the same training samples; (2) \emph{Fine-grained credit assignment} via reward backpropagation that computes step-specific advantages, overcoming the uniform credit assignment limitation of trajectory-based methods; and (3) \emph{Amortized computation}, where multi-child branching enables multiple policy updates per forward pass. Extensive experiments on both diffusion and flow-based models demonstrate that TreeGRPO achieves \textbf{2.4$\times$ faster training} while establishing a superior Pareto frontier in the efficiency-reward trade-off space. Our method consistently outperforms GRPO baselines across multiple benchmarks and reward models, providing a scalable and effective pathway for RL-based visual generative model alignment. The project website is available at treegrpo.github.io. 
- oai:arXiv.org:2512.08153v1 - cs.LG - cs.AI - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Training-free Context-adaptive Attention for Efficient Long Context Modeling + https://arxiv.org/abs/2512.09238 + arXiv:2512.09238v1 Announce Type: new +Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing tasks. These capabilities stem primarily from the self-attention mechanism, which enables modeling of long-range dependencies. However, the quadratic complexity of self-attention with respect to sequence length poses significant computational and memory challenges, especially as sequence length extends to extremes. While various sparse attention and KV cache compression methods have been proposed to improve efficiency, they often suffer from limitations such as reliance on fixed patterns, inability to handle both prefilling and decoding stages, or the requirement for additional training. In this paper, we propose Training-free Context-adaptive Attention (TCA-Attention), a training-free sparse attention mechanism that selectively attends to only the informative tokens for efficient long-context inference. Our method consists of two lightweight phases: i) an offline calibration phase that determines head-specific sparsity budgets via a single forward pass, and ii) an online token selection phase that adaptively retains core context tokens using a lightweight redundancy metric. TCA-Attention provides a unified solution that accelerates both prefilling and decoding while reducing KV cache memory footprint, without requiring parameter updates or architectural changes. Theoretical analysis shows that our approach maintains bounded approximation error. 
Extensive experiments demonstrate that TCA-Attention achieves a 2.8$\times$ speedup and reduces KV cache by 61% at 128K context length while maintaining performance comparable to full attention across various benchmarks, offering a practical plug-and-play solution for efficient long-context inference. + oai:arXiv.org:2512.09238v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zheng Ding, Weirui Ye + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Zeng You, Yaofo Chen, Shuhai Zhang, Zhijie Qiu, Tingyu Wu, Yingjian Li, Yaowei Wang, Mingkui Tan - Adaptive Matched Filtering for Sensing With Communication Signals in Cluttered Environments - https://arxiv.org/abs/2512.08157 - arXiv:2512.08157v1 Announce Type: new -Abstract: This paper investigates the performance of the adaptive matched filtering (AMF) in cluttered environments, particularly when operating with superimposed signals. Since the instantaneous signal-to-clutter-plus-noise ratio (SCNR) is a random variable dependent on the data payload, using it directly as a design objective poses severe practical challenges, such as prohibitive computational burdens and signaling overhead. To address this, we propose shifting the optimization objective from an instantaneous to a statistical metric, which focuses on maximizing the average SCNR over all possible payloads. Due to its analytical intractability, we leverage tools from random matrix theory (RMT) to derive an asymptotic approximation for the average SCNR, which remains accurate even in moderate-dimensional regimes. A key finding from our theoretical analysis is that, for a fixed modulation basis, the PSK achieves a superior average SCNR compared to QAM and the pure Gaussian constellation. Furthermore, for any given constellation, the OFDM achieves a higher average SCNR than SC and AFDM. 
Then, we propose two pilot design schemes to enhance system performance: a Data-Payload-Dependent (DPD) scheme and a Data-Payload-Independent (DPI) scheme. The DPD approach maximizes the instantaneous SCNR for each transmission. Conversely, the DPI scheme optimizes the average SCNR, offering a flexible trade-off between sensing performance and implementation complexity. Then, we develop two dedicated optimization algorithms for DPD and DPI schemes. In particular, for the DPD problem, we employ fractional optimization and the KKT conditions to derive a closed-form solution. For the DPI problem, we adopt a manifold optimization approach to handle the inherent rank-one constraint efficiently. Simulation results validate the accuracy of our theoretical analysis and demonstrate the effectiveness of the proposed methods. - oai:arXiv.org:2512.08157v1 - cs.IT - eess.SP - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + A Clinically Interpretable Deep CNN Framework for Early Chronic Kidney Disease Prediction Using Grad-CAM-Based Explainable AI + https://arxiv.org/abs/2512.09244 + arXiv:2512.09244v1 Announce Type: new +Abstract: Chronic Kidney Disease (CKD) constitutes a major global medical burden, marked by the gradual deterioration of renal function, which results in the impaired clearance of metabolic waste and disturbances in systemic fluid homeostasis. Owing to its substantial contribution to worldwide morbidity and mortality, the development of reliable and efficient diagnostic approaches is critically important to facilitate early detection and prompt clinical management. This study presents a deep convolutional neural network (CNN) for early CKD detection from CT kidney images, complemented by class balancing using Synthetic Minority Over-sampling Technique (SMOTE) and interpretability via Gradient-weighted Class Activation Mapping (Grad-CAM). 
The model was trained and evaluated on the CT KIDNEY DATASET, which contains 12,446 CT images, including 3,709 cyst, 5,077 normal, 1,377 stone, and 2,283 tumor cases. The proposed deep CNN achieved a remarkable classification performance, attaining 100% accuracy in the early detection of chronic kidney disease (CKD). This significant advancement demonstrates strong potential for addressing critical clinical diagnostic challenges and enhancing early medical intervention strategies. + oai:arXiv.org:2512.09244v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Lei Xie, Hengtao He, Yifeng Xiong, Fan Liu, Shi Jin + Anas Bin Ayub, Nilima Sultana Niha, Md. Zahurul Haque - Reeb Graph of Sample Thickenings - https://arxiv.org/abs/2512.08159 - arXiv:2512.08159v1 Announce Type: new -Abstract: We consider the Reeb graph of a thickening of points sampled from an unknown space. Our main contribution is a framework to transfer reconstruction results similar to the well-known work of Niyogi, Smale, and Weinberger to the setting of Reeb graphs. To this end, we first generalize and study the interleaving distances for Reeb graphs. We find that many of the results previously established for constructible spaces also hold for general topological spaces. We use this to show that under certain conditions for topological spaces with real-valued Lipschitz maps, the Reeb graph of a sample thickening approximates the Reeb graph of the underlying space. Finally, we provide an algorithm for computing the Reeb graph of a sample thickening. 
- oai:arXiv.org:2512.08159v1 - cs.CG - math.AT - Wed, 10 Dec 2025 00:00:00 -0500 + OmniPSD: Layered PSD Generation with Diffusion Transformer + https://arxiv.org/abs/2512.09247 + arXiv:2512.09247v1 Announce Type: new +Abstract: Recent advances in diffusion models have greatly improved image generation and editing, yet generating or reconstructing layered PSD files with transparent alpha channels remains highly challenging. We propose OmniPSD, a unified diffusion framework built upon the Flux ecosystem that enables both text-to-PSD generation and image-to-PSD decomposition through in-context learning. For text-to-PSD generation, OmniPSD arranges multiple target layers spatially into a single canvas and learns their compositional relationships through spatial attention, producing semantically coherent and hierarchically structured layers. For image-to-PSD decomposition, it performs iterative in-context editing, progressively extracting and erasing textual and foreground components to reconstruct editable PSD layers from a single flattened image. An RGBA-VAE is employed as an auxiliary representation module to preserve transparency without affecting structure learning. Extensive experiments on our new RGBA-layered dataset demonstrate that OmniPSD achieves high-fidelity generation, structural consistency, and transparency awareness, offering a new paradigm for layered design generation and decomposition with diffusion transformers. + oai:arXiv.org:2512.09247v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - H{\aa}vard Bakke Bjerkevik, Nello Blaser, Lars M. 
Salbu + http://creativecommons.org/licenses/by-sa/4.0/ + Cheng Liu, Yiren Song, Haofan Wang, Mike Zheng Shou - LayerPipe2: Multistage Pipelining and Weight Recompute via Improved Exponential Moving Average for Training Neural Networks - https://arxiv.org/abs/2512.08160 - arXiv:2512.08160v1 Announce Type: new -Abstract: In our prior work, LayerPipe, we introduced an approach to accelerate training of convolutional, fully connected, and spiking neural networks by overlapping forward and backward computation. However, despite empirical success, a principled understanding of how much gradient delay needs to be introduced at each layer to achieve a desired level of pipelining was not addressed. This paper, LayerPipe2, fills that gap by formally deriving LayerPipe using variable delayed gradient adaptation and retiming. We identify where delays may be legally inserted and show that the required amount of delay follows directly from the network structure, where inner layers require fewer delays and outer layers require longer delays. When pipelining is applied at every layer, the amount of delay depends only on the number of remaining downstream stages. When layers are pipelined in groups, all layers in the group share the same assignment of delays. These insights not only explain previously observed scheduling patterns but also expose an often overlooked challenge: pipelining implicitly requires storage of historical weights. We overcome this storage bottleneck by developing a pipeline-aware moving average that reconstructs the required past states rather than storing them explicitly. This reduces memory cost without sacrificing the accuracy guarantees that make pipelined learning viable. The result is a principled framework that illustrates how to construct LayerPipe architectures, predicts their delay requirements, and mitigates their storage burden, thereby enabling scalable pipelined training with controlled communication-computation tradeoffs.
- oai:arXiv.org:2512.08160v1 cs.LG + GLACIA: Instance-Aware Positional Reasoning for Glacial Lake Segmentation via Multimodal Large Language Model + https://arxiv.org/abs/2512.09251 + arXiv:2512.09251v1 Announce Type: new +Abstract: Glacial lake monitoring bears great significance in mitigating the anticipated risk of Glacial Lake Outburst Floods. However, existing segmentation methods based on convolutional neural networks (CNNs) and Vision Transformers (ViTs) remain constrained to pixel-level predictions, lacking high-level global scene semantics and human-interpretable reasoning. To address this, we introduce GLACIA (\textbf{G}lacial \textbf{LA}ke segmentation with \textbf{C}ontextual \textbf{I}nstance \textbf{A}wareness), the first framework that integrates large language models with segmentation capabilities to produce both accurate segmentation masks and corresponding spatial reasoning outputs. We construct the Glacial Lake Position Reasoning (GLake-Pos) dataset pipeline, which provides diverse, spatially grounded question-answer pairs designed to overcome the lack of instance-aware positional reasoning data in remote sensing. Comparative evaluations demonstrate that GLACIA (mIoU: 87.30) surpasses state-of-the-art methods based on CNNs (mIoU: 78.55 - 79.01), ViTs (mIoU: 69.27 - 81.75), Geo-foundation models (mIoU: 76.37 - 87.10), and reasoning-based segmentation methods (mIoU: 60.12 - 75.66). Our approach enables intuitive disaster preparedness and informed policy-making in the context of rapidly changing glacial environments by facilitating natural language interaction, thereby supporting more efficient and interpretable decision-making. The code is released at https://github.com/lalitmaurya47/GLACIA + oai:arXiv.org:2512.09251v1 cs.CV cs.AI Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ Nanda K. Unnikrishnan, Keshab K.
Parhi + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Lalit Maurya, Saurabh Kaushik, Beth Tellman - Fourier-RWKV: A Multi-State Perception Network for Efficient Image Dehazing - https://arxiv.org/abs/2512.08161 - arXiv:2512.08161v1 Announce Type: new -Abstract: Image dehazing is crucial for reliable visual perception, yet it remains highly challenging under real-world non-uniform haze conditions. Although Transformer-based methods excel at capturing global context, their quadratic computational complexity hinders real-time deployment. To address this, we propose Fourier Receptance Weighted Key Value (Fourier-RWKV), a novel dehazing framework based on a Multi-State Perception paradigm. The model achieves comprehensive haze degradation modeling with linear complexity by synergistically integrating three distinct perceptual states: (1) Spatial-form Perception, realized through the Deformable Quad-directional Token Shift (DQ-Shift) operation, which dynamically adjusts receptive fields to accommodate local haze variations; (2) Frequency-domain Perception, implemented within the Fourier Mix block, which extends the core WKV attention mechanism of RWKV from the spatial domain to the Fourier domain, preserving the long-range dependencies essential for global haze estimation while mitigating spatial attenuation; (3) Semantic-relation Perception, facilitated by the Semantic Bridge Module (SBM), which utilizes Dynamic Semantic Kernel Fusion (DSK-Fusion) to precisely align encoder-decoder features and suppress artifacts. Extensive experiments on multiple benchmarks demonstrate that Fourier-RWKV delivers state-of-the-art performance across diverse haze scenarios while significantly reducing computational overhead, establishing a favorable trade-off between restoration quality and practical efficiency. Code is available at: https://github.com/Dilizlr/Fourier-RWKV. 
- oai:arXiv.org:2512.08161v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + The Illusion of Rationality: Tacit Bias and Strategic Dominance in Frontier LLM Negotiation Games + https://arxiv.org/abs/2512.09254 + arXiv:2512.09254v1 Announce Type: new +Abstract: Large language models (LLMs) are increasingly being deployed as autonomous agents on behalf of institutions and individuals in economic, political, and social settings that involve negotiation. Yet this trend carries significant risks if their strategic behavior is not well understood. In this work, we revisit the NegotiationArena framework and run controlled simulation experiments on a diverse set of frontier LLMs across three multi turn bargaining games: Buyer Seller, Multi turn Ultimatum, and Resource Exchange. We ask whether improved general reasoning capabilities lead to rational, unbiased, and convergent negotiation strategies. Our results challenge this assumption. We find that models diverge into distinct, model specific strategic equilibria rather than converging to a unified optimal behavior. Moreover, strong numerical and semantic anchoring effects persist: initial offers are highly predictive of final agreements, and models consistently generate biased proposals by collapsing diverse internal valuations into rigid, generic price points. More concerningly, we observe dominance patterns in which some models systematically achieve higher payoffs than their counterparts. These findings underscore an urgent need to develop mechanisms to mitigate these issues before deploying such systems in real-world scenarios. + oai:arXiv.org:2512.09254v1 + cs.GT + cs.MA + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Lirong Zheng, Yanshan Li, Rui Yu, Kaihao Zhang + Manuel S. R\'ios, Ruben F. Manrique, Nicanor Quijano, Luis F. 
Giraldo - Accuracy Does Not Guarantee Human-Likeness in Monocular Depth Estimators - https://arxiv.org/abs/2512.08163 - arXiv:2512.08163v1 Announce Type: new -Abstract: Monocular depth estimation is a fundamental capability for real-world applications such as autonomous driving and robotics. Although deep neural networks (DNNs) have achieved superhuman accuracy on physical-based benchmarks, a key challenge remains: aligning model representations with human perception, a promising strategy for enhancing model robustness and interpretability. Research in object recognition has revealed a complex trade-off between model accuracy and human-like behavior, raising a question whether a similar divergence exist in depth estimation, particularly for natural outdoor scenes where benchmarks rely on sensor-based ground truth rather than human perceptual estimates. In this study, we systematically investigated the relationship between model accuracy and human similarity across 69 monocular depth estimators using the KITTI dataset. To dissect the structure of error patterns on a factor-by-factor basis, we applied affine fitting to decompose prediction errors into interpretable components. Intriguingly, our results reveal while humans and DNNs share certain estimation biases (positive error correlations), we observed distinct trade-off relationships between model accuracy and human similarity. This finding indicates that improving accuracy does not necessarily lead to more human-like behavior, underscoring the necessity of developing multifaceted, human-centric evaluations beyond traditional accuracy. - oai:arXiv.org:2512.08163v1 + ROI-Packing: Efficient Region-Based Compression for Machine Vision + https://arxiv.org/abs/2512.09258 + arXiv:2512.09258v1 Announce Type: new +Abstract: This paper introduces ROI-Packing, an efficient image compression method tailored specifically for machine vision. 
By prioritizing regions of interest (ROI) critical to end-task accuracy and packing them efficiently while discarding less relevant data, ROI-Packing achieves significant compression efficiency without requiring retraining or fine-tuning of end-task models. Comprehensive evaluations across five datasets and two popular tasks, object detection and instance segmentation, demonstrate up to a 44.10% reduction in bitrate without compromising end-task accuracy, along with an 8.88% improvement in accuracy at the same bitrate compared to the state-of-the-art Versatile Video Coding (VVC) codec standardized by the Moving Picture Experts Group (MPEG). + oai:arXiv.org:2512.09258v1 cs.CV Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ 10.1109/MIPR67560.2025.00044 International Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, 2025, pp. 233-238 Md Eimran Hossain Eimon, Alena Krause, Ashan Perera, Juan Merlos, Hari Kalva, Velibor Adzic, Borko Furht - Information-Dense Reasoning for Efficient and Auditable Security Alert Triage - https://arxiv.org/abs/2512.08169 - arXiv:2512.08169v1 Announce Type: new -Abstract: Security Operations Centers face massive, heterogeneous alert streams under minute-level service windows, creating the Alert Triage Latency Paradox: verbose reasoning chains ensure accuracy and compliance but incur prohibitive latency and token costs, while minimal chains sacrifice transparency and auditability. Existing solutions fail: signature systems are brittle, anomaly methods lack actionability, and fully cloud-hosted LLMs raise latency, cost, and privacy concerns. We propose AIDR, a hybrid cloud-edge framework that addresses this trade-off through constrained information-density optimization.
The core innovation is gradient-based compression of reasoning chains to retain only decision-critical steps--minimal evidence sufficient to justify predictions while respecting token and latency budgets. We demonstrate that this approach preserves decision-relevant information while minimizing complexity. We construct compact datasets by distilling alerts into 3-5 high-information bullets (68% token reduction), train domain-specialized experts via LoRA, and deploy a cloud-edge architecture: a cloud LLM routes alerts to on-premises experts generating SOAR-ready JSON. Experiments demonstrate AIDR achieves higher accuracy and 40.6% latency reduction versus Chain-of-Thought, with robustness to data corruption and out-of-distribution generalization, enabling auditable and efficient SOC triage with full data residency compliance. - oai:arXiv.org:2512.08169v1 - cs.CR - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + From Forecast to Action: Uncertainty-Aware UAV Deployment for Ocean Drifter Recovery + https://arxiv.org/abs/2512.09260 + arXiv:2512.09260v1 Announce Type: new +Abstract: We present a novel predict-then-optimize framework for maritime search operations that integrates trajectory forecasting with UAV deployment optimization-an end-to-end approach not addressed in prior work. A large language model predicts the drifter's trajectory, and spatial uncertainty is modeled using Gaussian-based particle sampling. Unlike traditional static deployment methods, we dynamically adapt UAV detection radii based on distance and optimize their placement using meta-heuristic algorithms. Experiments on real-world data from the Korean coastline demonstrate that our method, particularly the repair mechanism designed for this problem, significantly outperforms the random search baselines. This work introduces a practical and robust integration of trajectory prediction and spatial optimization for intelligent maritime rescue. 
+ oai:arXiv.org:2512.09260v1 + cs.NE + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Guangze Zhao, Yongzheng Zhang, Changbo Tian, Dan Xie, Hongri Liu, Bailing Wang + Jingeun Kim, Yong-Hyuk Kim, Yourim Yoon - RAVES-Calib: Robust, Accurate and Versatile Extrinsic Self Calibration Using Optimal Geometric Features - https://arxiv.org/abs/2512.08170 - arXiv:2512.08170v1 Announce Type: new -Abstract: In this paper, we present a user-friendly LiDAR-camera calibration toolkit that is compatible with various LiDAR and camera sensors and requires only a single pair of laser points and a camera image in targetless environments. Our approach eliminates the need for an initial transform and remains robust even with large positional and rotational LiDAR-camera extrinsic parameters. We employ the Gluestick pipeline to establish 2D-3D point and line feature correspondences for a robust and automatic initial guess. To enhance accuracy, we quantitatively analyze the impact of feature distribution on calibration results and adaptively weight the cost of each feature based on these metrics. As a result, extrinsic parameters are optimized by filtering out the adverse effects of inferior features. We validated our method through extensive experiments across various LiDAR-camera sensors in both indoor and outdoor settings. The results demonstrate that our method provides superior robustness and accuracy compared to SOTA techniques. Our code is open-sourced on GitHub to benefit the community. - oai:arXiv.org:2512.08170v1 - cs.RO - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + FLARE v2: A Recursive Framework for Program Comprehension Across Languages and Levels of Abstraction + https://arxiv.org/abs/2512.09261 + arXiv:2512.09261v1 Announce Type: new +Abstract: Building on the classroom framework reported in Heath et al. (2025), this paper proposes FLARE v2 as a recursive, semiotically informed account of how program meaning is constructed. 
It reinterprets the descriptive tiers of FLARE v1 as instances of a single generative operation: identify elements (characterised by the four properties Receives, Sends, Effects, Shares); analyse their bindings along two dimensions (Causal-Temporal and Communicative); and recognise the new element that emerges. The Causal-Temporal dimension encompasses three subtypes - Sequential, Branch, and Event - that together account for control flow in both procedural and event-driven environments. A Compositional Ladder provides a visual parallel between literacy progressions and programming structures, illustrating how recursive composition operates from blocks and statements through segments, systems, and services. The framework aims to address conceptual and cognitive-load limitations reported in FLARE v1 and is situated within semiotic and program-comprehension theory. FLARE v2 is presented as a conceptual lens with potential implications for pedagogy and curriculum design; implementation and empirical evaluation are left for future work. + oai:arXiv.org:2512.09261v1 + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haoxin Zhang, Shuaixin Li, Xiaozhou Zhu, Hongbo Chen, Wen Yao + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Justin Heath - Security Analysis of Integer Learning with Errors with Rejection Sampling - https://arxiv.org/abs/2512.08172 - arXiv:2512.08172v1 Announce Type: new -Abstract: At ASIACRYPT 2018, a digital attack based on linear least squares was introduced for a variant of the learning with errors (LWE) problem which omits modular reduction known as the integer learning with errors problem (ILWE). In this paper, we present a theoretical and experimental study of the effectiveness of the attack when applied directly to small parameter ILWE instances found in popular digital signature schemes such as CRYSTALS-Dilithium which utilize rejection sampling. 
Unlike other studies which form ILWE instances based on additional information obtained from side-channel attacks, we take a more direct approach to the problem by constructing our ILWE instance from only the obtained signatures. We outline and introduce novel techniques in our simulation designs such as modular polynomial arithmetic via matrices in $\mathbb{R}$, as well as algorithms for handling large sample sizes efficiently. Our experimental results reinforce the proclaimed security of signature schemes based on ILWE. We additionally discuss the implications of our work and digital signatures as a whole with regard to real-world applications such as Intelligent Transportation Systems (ITS). - oai:arXiv.org:2512.08172v1 cs.CR cs.IT math.IT Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Kyle Yates, Antsa Pierrottet, Abdullah Al Mamun, Ryann Cartor, Mashrur Chowdhury, Shuhong Gao + FBA$^2$D: Frequency-based Black-box Attack for AI-generated Image Detection + https://arxiv.org/abs/2512.09264 + arXiv:2512.09264v1 Announce Type: new +Abstract: The prosperous development of Artificial Intelligence-Generated Content (AIGC) has heightened public anxiety about the spread of false information on social media. Designing detectors for filtering is an effective defense method, but most detectors will be compromised by adversarial samples. Currently, most studies exposing AIGC security issues assume information on model structure and data distribution. In real applications, attackers query and interfere with models that provide services in the form of application programming interfaces (APIs), which constitutes the black-box decision-based attack paradigm. However, to the best of our knowledge, decision-based attacks on AIGC detectors remain unexplored. In this study, we propose \textbf{FBA$^2$D}: a frequency-based black-box attack method for AIGC detection to fill the research gap.
Motivated by frequency-domain discrepancies between generated and real images, we develop a decision-based attack that leverages the Discrete Cosine Transform (DCT) for fine-grained spectral partitioning and selects frequency bands as query subspaces, improving both query efficiency and image quality. Moreover, attacks on AIGC detectors should mitigate initialization failures, preserve image quality, and operate under strict query budgets. To address these issues, we adopt an ``adversarial example soup'' method, averaging candidates from successive surrogate iterations and using the result as the initialization to accelerate the query-based attack. The empirical study on the Synthetic LSUN and GenImage datasets demonstrates the effectiveness of our proposed method. This study shows the urgency of addressing practical AIGC security problems. + oai:arXiv.org:2512.09264v1 cs.CR cs.AI Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ Xiaojing Chen, Dan Li, Lijun Peng, Jun Yan{\L}etter, Zhiqing Guo, Junyang Chen, Xiao Lan, Zhongjie Ba, Yunfeng Diao{\L}etter - The Instability of Painlev\'e Equations in Recovering Largest Eigenvalue Distributions of GUE, LUE, JUE and an Attempt of Solution to It - https://arxiv.org/abs/2512.08178 - arXiv:2512.08178v1 Announce Type: new -Abstract: The distribution of the largest eigenvalue for the three classical unitary ensembles -- GUE, LUE, and JUE -- admits two complementary exact descriptions: (i) as Fredholm determinants of their orthogonal--polynomial correlation kernels and (ii) as isomonodromic $\tau$--functions governed by Painlev\'e equations. For finite $n$, the associated Jimbo--Miwa--Okamoto $\sigma$--forms are $\PIV$ (GUE), $\mathrm{PV}$ (LUE), and $\PVI$ (JUE); under soft- or hard-edge scalings these degenerate to $\PII$ or $\PIIIp$ descriptions of the Tracy--Widom and hard-edge laws \cite{tracy1994level,forrester2003painleve,deift1999orthogonal}.
- It is well known among random matrix theorists (for example Folkmar Bornemann) that the Fredholm determinant is a more numerically stable and accurate way to compute the CDF of the largest eigenvalue for GUE, LUE, JUE than direct Painlev\'e integration. The aim of this paper is not to improve on Fredholm methods, but to see to what extent one can numerically recover the \emph{correct} Painlev\'e solution from finite-$n$ data and how unstable this reconstruction is. Numerically, we verify the equality between the Fredholm- and Painlev\'e-based CDFs by combining (a) high-accuracy Nystr\"om discretizations of the finite-$n$ Fredholm determinants \cite{bornemann2010numerical} with (b) an anchored, branch-locked integration of the $\sigma$--form ODEs, where anchors are extracted from local least-squares fits to $\log\det(I-\mathsf K)$. Our results confirm agreement across GUE/LUE/JUE with precision of $O(10^{-3})$ to $O(10^{-5})$ (occasionally $O(10^{-2})$) and illustrate the finite-$n$ to scaling-limit transition. The theoretical connections to $\tau$--functions and Virasoro constraints follow the framework of \cite{adler2000random,forrester2003painleve} - oai:arXiv.org:2512.08178v1 - math.NA - cs.NA - math.PR - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haonan Gu + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Xiaojing Chen, Dan Li, Lijun Peng, Jun Yan{\L}etter, Zhiqing Guo, Junyang Chen, Xiao Lan, Zhongjie Ba, Yunfeng Diao{\L}etter - GeoLoom: High-quality Geometric Diagram Generation from Textual Input - https://arxiv.org/abs/2512.08180 - arXiv:2512.08180v1 Announce Type: new -Abstract: High-quality geometric diagram generation presents both a challenge and an opportunity: it demands strict spatial accuracy while offering well-defined constraints to guide generation. 
Inspired by recent advances in geometry problem solving that employ formal languages and symbolic solvers for enhanced correctness and interpretability, we propose GeoLoom, a novel framework for text-to-diagram generation in geometric domains. GeoLoom comprises two core components: an autoformalization module that translates natural language into a specifically designed generation-oriented formal language GeoLingua, and a coordinate solver that maps formal constraints to precise coordinates using the efficient Monte Carlo optimization. To support this framework, we introduce GeoNF, a dataset aligning natural language geometric descriptions with formal GeoLingua descriptions. We further propose a constraint-based evaluation metric that quantifies structural deviation, offering mathematically grounded supervision for iterative refinement. Empirical results demonstrate that GeoLoom significantly outperforms state-of-the-art baselines in structural fidelity, providing a principled foundation for interpretable and scalable diagram generation. - oai:arXiv.org:2512.08180v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Contrastive Learning for Semi-Supervised Deep Regression with Generalized Ordinal Rankings from Spectral Seriation + https://arxiv.org/abs/2512.09267 + arXiv:2512.09267v1 Announce Type: new +Abstract: Contrastive learning methods enforce label distance relationships in feature space to improve representation capability for regression models. However, these methods highly depend on label information to correctly recover ordinal relationships of features, limiting their applications to semi-supervised regression. In this work, we extend contrastive regression methods to allow unlabeled data to be used in the semi-supervised setting, thereby reducing the dependence on costly annotations. 
In particular, we construct the feature similarity matrix with both labeled and unlabeled samples in a mini-batch to reflect inter-sample relationships, and an accurate ordinal ranking of involved unlabeled samples can be recovered through spectral seriation algorithms if the level of error is within certain bounds. The introduction of labeled samples above provides regularization of the ordinal ranking with guidance from the ground-truth label information, making the ranking more reliable. To reduce feature perturbations, we further utilize the dynamic programming algorithm to select robust features for the matrix construction. The recovered ordinal relationship is then used for contrastive learning on unlabeled samples, and we thus allow more data to be used for feature representation learning, thereby achieving more robust results. The ordinal rankings can also be used to supervise predictions on unlabeled samples, serving as an additional training signal. We provide theoretical guarantees and empirical verification through experiments on various datasets, demonstrating that our method can surpass existing state-of-the-art semi-supervised deep regression methods. Our code has been released on https://github.com/xmed-lab/CLSS. + oai:arXiv.org:2512.09267v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Ce Wang, Weihang Dai, Hanru Bai, Xiaomeng Li - Framing Climate Change on YouTube: North-South Divides in Narratives and Public Engagement - https://arxiv.org/abs/2512.08183 - arXiv:2512.08183v1 Announce Type: new -Abstract: Climate change debates have gained increasing visibility on social media, with YouTube emerging as one of the most influential platforms for political communication. Reaching billions of users worldwide, it functions both as a news outlet and as a space for public discourse. 
While existing studies of climate discourse on YouTube often adopt a global perspective, this study examines the platform through the lens of the Global North-South divide. We analyse a dataset of 758 climate-related videos and their comment sections, applying topic modelling and sentiment analysis to identify recurring discursive patterns. Through these patterns, we recognise parallels with respect to debates in international climate negotiations. The findings reveal notable differences. Videos from the Global North and Global South reflect real-world divides, with the North emphasising the need for policies to curb carbon emissions, while the South highlights developmental priorities. A key area of convergence between the regions lies in the shared recognition of the importance of emissions reduction and international agreements. Audience responses, however, diverge more sharply: comment sections under Global North videos are dominated by criticism, conspiracy, and climate fatigue, whereas those under Global South videos are generally more supportive, constructive, and knowledge-oriented. Overall, the study demonstrates how YouTube reflects and reshapes global climate politics, while also revealing the gap between curated narratives and public sentiment. Bridging these divides may contribute to more inclusive and cooperative approaches to climate action. - oai:arXiv.org:2512.08183v1 - cs.SI - Wed, 10 Dec 2025 00:00:00 -0500 + Goal inference with Rao-Blackwellized Particle Filters + https://arxiv.org/abs/2512.09269 + arXiv:2512.09269v1 Announce Type: new +Abstract: Inferring the eventual goal of a mobile agent from noisy observations of its trajectory is a fundamental estimation problem. We initiate the study of such intent inference using a variant of a Rao-Blackwellized Particle Filter (RBPF), subject to the assumption that the agent's intent manifests through closed-loop behavior with a state-of-the-art provable practical stability property. 
Leveraging the assumed closed-form agent dynamics, the RBPF analytically marginalizes the linear-Gaussian substructure and updates particle weights only, improving sample efficiency over a standard particle filter. Two different estimators are introduced: a Gaussian mixture model using the RBPF weights and a reduced version confining the mixture to the effective sample. We quantify how well the adversary can recover the agent's intent using information-theoretic leakage metrics and provide computable lower bounds on the Kullback-Leibler (KL) divergence between the true intent distribution and RBPF estimates via Gaussian-mixture KL bounds. We also provide a bound on the difference in performance between the two estimators, highlighting the fact that the reduced estimator performs almost as well as the complete one. Experiments illustrate fast and accurate intent recovery for compliant agents, motivating future work on designing intent-obfuscating controllers. + oai:arXiv.org:2512.09269v1 + cs.LG + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Yixuan Wang, Dan P. Guralnik, Warren E. Dixon - A Practical Framework for Evaluating Medical AI Security: Reproducible Assessment of Jailbreaking and Privacy Vulnerabilities Across Clinical Specialties - https://arxiv.org/abs/2512.08185 - arXiv:2512.08185v1 Announce Type: new -Abstract: Medical Large Language Models (LLMs) are increasingly deployed for clinical decision support across diverse specialties, yet systematic evaluation of their robustness to adversarial misuse and privacy leakage remains inaccessible to most researchers. Existing security benchmarks require GPU clusters, commercial API access, or protected health data -- barriers that limit community participation in this critical research area. We propose a practical, fully reproducible framework for evaluating medical AI security under realistic resource constraints. 
Our framework design covers multiple medical specialties stratified by clinical risk -- from high-risk domains such as emergency medicine and psychiatry to general practice -- addressing jailbreaking attacks (role-playing, authority impersonation, multi-turn manipulation) and privacy extraction attacks. All evaluation utilizes synthetic patient records requiring no IRB approval. The framework is designed to run entirely on consumer CPU hardware using freely available models, eliminating cost barriers. We present the framework specification including threat models, data generation methodology, evaluation protocols, and scoring rubrics. This proposal establishes a foundation for comparative security assessment of medical-specialist models and defense mechanisms, advancing the broader goal of ensuring safe and trustworthy medical AI systems. - oai:arXiv.org:2512.08185v1 - cs.CR - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + MoRel: Long-Range Flicker-Free 4D Motion Modeling via Anchor Relay-based Bidirectional Blending with Hierarchical Densification + https://arxiv.org/abs/2512.09270 + arXiv:2512.09270v1 Announce Type: new +Abstract: Recent advances in 4D Gaussian Splatting (4DGS) have extended the high-speed rendering capability of 3D Gaussian Splatting (3DGS) into the temporal domain, enabling real-time rendering of dynamic scenes. However, one of the major remaining challenges lies in modeling long-range motion-contained dynamic videos, where a naive extension of existing methods leads to severe memory explosion, temporal flickering, and failure to handle appearing or disappearing occlusions over time. To address these challenges, we propose a novel 4DGS framework characterized by an Anchor Relay-based Bidirectional Blending (ARBB) mechanism, named MoRel, which enables temporally consistent and memory-efficient modeling of long-range dynamic scenes. 
Our method progressively constructs locally canonical anchor spaces at key-frame time indices and models inter-frame deformations at the anchor level, enhancing temporal coherence. By learning bidirectional deformations between key-frame anchors (KfA) and adaptively blending them through learnable opacity control, our approach mitigates temporal discontinuities and flickering artifacts. We further introduce a Feature-variance-guided Hierarchical Densification (FHD) scheme that effectively densifies KfAs while preserving rendering quality, based on an assigned level of feature-variance. To effectively evaluate our model's capability to handle real-world long-range 4D motion, we newly compose a long-range 4D motion-contained dataset, called SelfCap$_{\text{LR}}$. It has a larger average dynamic motion magnitude, captured in spatially wider spaces, compared to previous dynamic video datasets. Overall, our MoRel achieves temporally coherent and flicker-free long-range 4D reconstruction while maintaining bounded memory usage, demonstrating both scalability and efficiency in dynamic Gaussian-based representations. + oai:arXiv.org:2512.09270v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Sangwoon Kwak, Weeyoung Kwon, Jun Young Jeong, Geonho Kim, Won-Sik Cheong, Jihyong Oh - Ground Slow, Move Fast: A Dual-System Foundation Model for Generalizable Vision-and-Language Navigation - https://arxiv.org/abs/2512.08186 - arXiv:2512.08186v1 Announce Type: new -Abstract: While recent large vision-language models (VLMs) have improved generalization in vision-language navigation (VLN), existing methods typically rely on end-to-end pipelines that map vision-language inputs directly to short-horizon discrete actions. Such designs often produce fragmented motions, incur high latency, and struggle with real-world challenges like dynamic obstacle avoidance. 
We propose DualVLN, the first dual-system VLN foundation model that synergistically integrates high-level reasoning with low-level action execution. System 2, a VLM-based global planner, "grounds slowly" by predicting mid-term waypoint goals via image-grounded reasoning. System 1, a lightweight, multi-modal conditioning Diffusion Transformer policy, "moves fast" by leveraging both explicit pixel goals and latent features from System 2 to generate smooth and accurate trajectories. The dual-system design enables robust real-time control and adaptive local decision-making in complex, dynamic environments. By decoupling training, the VLM retains its generalization, while System 1 achieves interpretable and effective local navigation. DualVLN outperforms prior methods across all VLN benchmarks and real-world experiments demonstrate robust long-horizon planning and real-time adaptability in dynamic environments. - oai:arXiv.org:2512.08186v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + LongT2IBench: A Benchmark for Evaluating Long Text-to-Image Generation with Graph-structured Annotations + https://arxiv.org/abs/2512.09271 + arXiv:2512.09271v1 Announce Type: new +Abstract: The increasing popularity of long Text-to-Image (T2I) generation has created an urgent need for automatic and interpretable models that can evaluate the image-text alignment in long prompt scenarios. However, the existing T2I alignment benchmarks predominantly focus on short prompt scenarios and only provide MOS or Likert scale annotations. This inherent limitation hinders the development of long T2I evaluators, particularly in terms of the interpretability of alignment. In this study, we contribute LongT2IBench, which comprises 14K long text-image pairs accompanied by graph-structured human annotations. 
Given the detail-intensive nature of long prompts, we first design a Generate-Refine-Qualify annotation protocol to convert them into textual graph structures that encompass entities, attributes, and relations. Through this transformation, fine-grained alignment annotations are achieved based on these granular elements. Finally, the graph-structured annotations are converted into alignment scores and interpretations to facilitate the design of T2I evaluation models. Based on LongT2IBench, we further propose LongT2IExpert, a LongT2I evaluator that enables multi-modal large language models (MLLMs) to provide both quantitative scores and structured interpretations through an instruction-tuning process with Hierarchical Alignment Chain-of-Thought (CoT). Extensive experiments and comparisons demonstrate the superiority of the proposed LongT2IExpert in alignment evaluation and interpretation. Data and code have been released at https://welldky.github.io/LongT2IBench-Homepage/. + oai:arXiv.org:2512.09271v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Zhichao Yang, Tianjiao Gu, Jianjie Wang, Feiyu Lin, Xiangfei Sheng, Pengfei Chen, Leida Li - Embodied Tree of Thoughts: Deliberate Manipulation Planning with Embodied World Model - https://arxiv.org/abs/2512.08188 - arXiv:2512.08188v1 Announce Type: new -Abstract: World models have emerged as a pivotal component in robot manipulation planning, enabling agents to predict future environmental states and reason about the consequences of actions before execution. While video-generation models are increasingly adopted, they often lack rigorous physical grounding, leading to hallucinations and a failure to maintain consistency in long-horizon physical constraints. 
To address these limitations, we propose Embodied Tree of Thoughts (EToT), a novel Real2Sim2Real planning framework that leverages a physics-based interactive digital twin as an embodied world model. EToT formulates manipulation planning as a tree search expanded through two synergistic mechanisms: (1) Priori Branching, which generates diverse candidate execution paths based on semantic and spatial analysis; and (2) Reflective Branching, which utilizes VLMs to diagnose execution failures within the simulator and iteratively refine the planning tree with corrective actions. By grounding high-level reasoning in a physics simulator, our framework ensures that generated plans adhere to rigid-body dynamics and collision constraints. We validate EToT on a suite of short- and long-horizon manipulation tasks, where it consistently outperforms baselines by effectively predicting physical dynamics and adapting to potential failures. Website at https://embodied-tree-of-thoughts.github.io . - oai:arXiv.org:2512.08188v1 - cs.RO - cs.AI + Dynamic Facial Expressions Analysis Based Parkinson's Disease Auxiliary Diagnosis + https://arxiv.org/abs/2512.09276 + arXiv:2512.09276v1 Announce Type: new +Abstract: Parkinson's disease (PD), a prevalent neurodegenerative disorder, significantly affects patients' daily functioning and social interactions. To facilitate a more efficient and accessible diagnostic approach for PD, we propose a dynamic facial expression analysis-based PD auxiliary diagnosis method. This method targets hypomimia, a characteristic clinical symptom of PD, by analyzing two manifestations: reduced facial expressivity and facial rigidity, thereby facilitating the diagnosis process. We develop a multimodal facial expression analysis network to extract expression intensity features during patients' performance of various facial expressions. 
This network leverages the CLIP architecture to integrate visual and textual features while preserving the temporal dynamics of facial expressions. Subsequently, the expression intensity features are processed and input into an LSTM-based classification network for PD diagnosis. Our method achieves an accuracy of 93.1%, outperforming other in-vitro PD diagnostic approaches. This technique offers a more convenient detection method for potential PD patients, improving their diagnostic experience. + oai:arXiv.org:2512.09276v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wenjiang Xu, Cindy Wang, Rui Fang, Mingkang Zhang, Lusong Li, Jing Xu, Jiayuan Gu, Zecui Zeng, Rui Chen + Xiaochen Huang, Xiaochen Bi, Cuihua Lv, Xin Wang, Haoyan Zhang, Wenjing Jiang, Xin Ma, Yibin Li - ClinicalTrialsHub: Bridging Registries and Literature for Comprehensive Clinical Trial Access - https://arxiv.org/abs/2512.08193 - arXiv:2512.08193v1 Announce Type: new -Abstract: We present ClinicalTrialsHub, an interactive search-focused platform that consolidates all data from ClinicalTrials.gov and augments it by automatically extracting and structuring trial-relevant information from PubMed research articles. Our system effectively increases access to structured clinical trial data by 83.8% compared to relying on ClinicalTrials.gov alone, with potential to make access easier for patients, clinicians, researchers, and policymakers, advancing evidence-based medicine. ClinicalTrialsHub uses large language models such as GPT-5.1 and Gemini-3-Pro to enhance accessibility. The platform automatically parses full-text research articles to extract structured trial information, translates user queries into structured database searches, and provides an attributed question-answering system that generates evidence-grounded answers linked to specific source sentences. 
We demonstrate its utility through a user study involving clinicians, clinical researchers, and PhD students of pharmaceutical sciences and nursing, and a systematic automatic evaluation of its information extraction and question answering capabilities. - oai:arXiv.org:2512.08193v1 - cs.CL - cs.AI - cs.HC - cs.IR - Wed, 10 Dec 2025 00:00:00 -0500 + Efficient MoE Serving in the Memory-Bound Regime: Balance Activated Experts, Not Tokens + https://arxiv.org/abs/2512.09277 + arXiv:2512.09277v1 Announce Type: new +Abstract: Expert Parallelism (EP) permits Mixture of Experts (MoE) models to scale beyond a single GPU. To address load imbalance across GPUs in EP, existing approaches aim to balance the number of tokens each GPU processes. Surprisingly, we find that this objective degrades performance rather than improving it when processing is memory-bound - a common occurrence in MoE serving, especially in the decode phase. Our analysis reveals that balancing the number of tokens processed per GPU increases the number of activated experts, exacerbating memory pressure in the memory-bound regime. + We propose Minimum Expert Token ROuting, a novel token-routing algorithm for high-performance expert-parallel MoE serving in the memory-bound regime that balances the number of activated experts per GPU rather than token counts. METRO achieves near-optimal routing quality with minimal computational overhead by jointly optimizing algorithmic efficiency and leveraging the GPU's parallel processing power. To guarantee routing quality, METRO also employs a novel allGather scheme to gather global top-k knowledge, which has minimal overhead compared to conventional allToAll. Our evaluation of METRO against EPLB on both real systems (vLLM over 8 A100 GPUs) and a proprietary simulator (8-16 B200 GPUs) shows that METRO reduces decode latency by 11 - 22%, and total token throughput by 3 - 21% for Qwen3 and DeepSeek-V3 serving, where prefill and decode phases are co-deployed. 
In addition, by trading latency headroom for throughput, METRO improves decode throughput by up to 4.11x over EPLB at a fixed decode SLO. + oai:arXiv.org:2512.09277v1 + cs.DC + cs.AR + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jiwoo Park, Ruoqi Liu, Avani Jagdale, Andrew Srisuwananukorn, Jing Zhao, Lang Li, Ping Zhang, Sachin Kumar + Yanpeng Yu, Haiyue Ma, Krish Agarwal, Nicolai Oswald, Qijing Huang, Hugo Linsenmaier, Chunhui Mei, Ritchie Zhao, Ritika Borkar, Bita Darvish Rouhani, David Nellans, Ronny Krashinsky, Anurag Khandelwal - Integrating Delay-Absorption Capability into Flight Departure Delay Prediction - https://arxiv.org/abs/2512.08197 - arXiv:2512.08197v1 Announce Type: new -Abstract: Accurately forecasting flight departure delays is essential for improving operational efficiency and mitigating the cascading disruptions that propagate through tightly coupled aircraft rotations. Traditional machine learning approaches often treat upstream delays as static variables, overlooking the dynamic recovery processes that determine whether a delay is absorbed or transmitted to subsequent legs. This study introduces a two-stage machine learning framework that explicitly models delay-absorption behavior and incorporates it into downstream delay prediction. In Stage I, a CatBoost classifier estimates the probability that a flight successfully absorbs an upstream delay based on operational, temporal, and meteorological features. This probability, termed AbsorbScore, quantifies airport- and flight-specific resilience to delay propagation. In Stage II, an XGBoost classifier integrates AbsorbScore with schedule, weather, and congestion indicators to predict whether a flight will depart more than 15 minutes late. Using U.S. 
domestic flight and NOAA weather data from Summer 2023, the proposed framework achieves substantial improvements over baseline models, increasing ROC-AUC from 0.865 to 0.898 and enhancing precision to 89.2% in identifying delayed flights. The results demonstrate that modeling delay absorption as an intermediate mechanism significantly improves predictive performance and yields interpretable insights into airport recovery dynamics, offering a practical foundation for data-driven delay management and proactive operational planning. - oai:arXiv.org:2512.08197v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + LoGoColor: Local-Global 3D Colorization for 360{\deg} Scenes + https://arxiv.org/abs/2512.09278 + arXiv:2512.09278v1 Announce Type: new +Abstract: Single-channel 3D reconstruction is widely used in fields such as robotics and medical imaging. While this line of work excels at reconstructing 3D geometry, the outputs are not colored 3D models, thus 3D colorization is required for visualization. Recent 3D colorization studies address this problem by distilling 2D image colorization models. However, these approaches suffer from an inherent inconsistency of 2D image models. This results in colors being averaged during training, leading to monotonous and oversimplified results, particularly in complex 360{\deg} scenes. In contrast, we aim to preserve color diversity by generating a new set of consistently colorized training views, thereby bypassing the averaging process. Nevertheless, eliminating the averaging process introduces a new challenge: ensuring strict multi-view consistency across these colorized views. To achieve this, we propose LoGoColor, a pipeline designed to preserve color diversity by eliminating this guidance-averaging process with a `Local-Global' approach: we partition the scene into subscenes and explicitly tackle both inter-subscene and intra-subscene consistency using a fine-tuned multi-view diffusion model. 
We demonstrate that our method achieves quantitatively and qualitatively more consistent and plausible 3D colorization on complex 360{\deg} scenes than existing methods, and validate its superior color diversity using a novel Color Diversity Index. + oai:arXiv.org:2512.09278v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jianyang Zhou + Yeonjin Chang, Juhwan Cho, Seunghyeon Seo, Wonsik Shin, Nojun Kwak - Animal Re-Identification on Microcontrollers - https://arxiv.org/abs/2512.08198 - arXiv:2512.08198v1 Announce Type: new -Abstract: Camera-based animal re-identification (Animal Re-ID) can support wildlife monitoring and precision livestock management in large outdoor environments with limited wireless connectivity. In these settings, inference must run directly on collar tags or low-power edge nodes built around microcontrollers (MCUs), yet most Animal Re-ID models are designed for workstations or servers and are too large for devices with small memory and low-resolution inputs. We propose an on-device framework. First, we characterise the gap between state-of-the-art Animal Re-ID models and MCU-class hardware, showing that straightforward knowledge distillation from large teachers offers limited benefit once memory and input resolution are constrained. Second, guided by this analysis, we design a high-accuracy Animal Re-ID architecture by systematically scaling a CNN-based MobileNetV2 backbone for low-resolution inputs. Third, we evaluate the framework with a real-world dataset and introduce a data-efficient fine-tuning strategy to enable fast adaptation with just three images per animal identity at a new site. Across six public Animal Re-ID datasets, our compact model achieves competitive retrieval accuracy while reducing model size by over two orders of magnitude. 
On a self-collected cattle dataset, the deployed model performs fully on-device inference with only a small accuracy drop and unchanged Top-1 accuracy relative to its cluster version. We demonstrate that practical, adaptable Animal Re-ID is achievable on MCU-class devices, paving the way for scalable deployment in real field environments. - oai:arXiv.org:2512.08198v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Yubo Chen, Di Zhao, Yun Sing Koh, Talia Xu + A Modular Lean 4 Framework for Confluence and Strong Normalization of Lambda Calculi with Products and Sums + https://arxiv.org/abs/2512.09280 + arXiv:2512.09280v1 Announce Type: new +Abstract: We present Metatheory, a comprehensive library for programming language foundations in Lean 4. The library features a modular framework for proving confluence of abstract rewriting systems using three classical proof techniques: the diamond property, Newman's lemma, and the Hindley-Rosen lemma. These are instantiated across six case studies including untyped lambda calculus, combinatory logic, term rewriting, simply typed lambda calculus, and STLC with products and sums. All theorems are fully mechanized with zero axioms or sorry statements. We provide complete proofs of de Bruijn substitution infrastructure and demonstrate strong normalization via logical relations. To our knowledge, this is the first comprehensive confluence and normalization framework for Lean 4. 
+ oai:arXiv.org:2512.09280v1 + cs.LO + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yubo Chen, Di Zhao, Yun Sing Koh, Talia Xu + Arthur Ramos, Anjolina Oliveira, Ruy de Queiroz, Tiago de Veras - Bounding the Minimal Current Harmonic Distortion in Optimal Modulation of Single-Phase Power Converters - https://arxiv.org/abs/2512.08201 - arXiv:2512.08201v1 Announce Type: new -Abstract: Optimal pulse patterns (OPPs) are a modulation technique in which a switching signal is computed offline through an optimization process that accounts for selected performance criteria, such as current harmonic distortion. The optimization determines both the switching angles (i.e., switching times) and the pattern structure (i.e., the sequence of voltage levels). This optimization task is a challenging mixed-integer nonconvex problem, involving integer-valued voltage levels and trigono metric nonlinearities in both the objective and the constraints. We address this challenge by reinterpreting OPP design as a periodic mode-selecting optimal control problem of a hybrid system, where selecting angles and levels corresponds to choosing jump times in a transition graph. This time-domain formulation enables the direct use of convex-relaxation techniques from optimal control, producing a hierarchy of semidefinite programs that lower-bound the minimal achievable harmonic distortion and scale subquadratically with the number of converter levels and switching angles. 
Numerical results demonstrate the effectiveness of the proposed approach. - oai:arXiv.org:2512.08201v1 - eess.SY - cs.SY - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Jared Miller, Petros Karamanakos, Tobias Geyer + Higher-order multi-scale computational method and its convergence analysis for hygro-thermo-mechanical coupling problems of quasi-periodic composite structures + https://arxiv.org/abs/2512.09281 + arXiv:2512.09281v1 Announce Type: new +Abstract: This paper proposes a novel higher-order multi-scale (HOMS) computational method, which is highly targeted for efficient, high-accuracy and low-computational-cost simulation of hygro-thermo-mechanical (H-T-M) coupling problems in quasi-periodic composite structures. The first innovation of this work is the establishment of a high-accuracy multi-scale model incorporating higher-order correction terms for H-T-M coupling problems of quasi-periodic composite structures. The second innovation of this work is that the error analyses in the point-wise and integral senses are rigorously derived for multi-scale asymptotic solutions. Especially from the point-wise error analysis, the primary impetus for the current study to develop the HOMS approach for quasi-periodic composite structures is illustrated. Furthermore, a high-accuracy multi-scale numerical algorithm is developed based on the finite element method, and the corresponding convergence analysis is also obtained. Finally, extensive numerical experiments are conducted to validate the computational performance of the proposed HOMS computational approach, demonstrating not only exceptional numerical accuracy, but also reduced computational cost. 
+ oai:arXiv.org:2512.09281v1 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jared Miller, Petros Karamanakos, Tobias Geyer + Hao Dong, Yifei Ding, Jiale Linghu, Yufeng Nie, Yaochuang Han - Error-Resilient Semantic Communication for Speech Transmission over Packet-Loss Networks - https://arxiv.org/abs/2512.08203 - arXiv:2512.08203v1 Announce Type: new -Abstract: Real-time speech communication over wireless networks remains challenging, as conventional channel protection mechanisms cannot effectively counter packet loss under stringent bandwidth and latency constraints. Semantic communication has emerged as a promising paradigm for enhancing the robustness of speech transmission by means of joint source-channel coding (JSCC). However, its cross-layer design hinders practical deployment due to the incompatibility with existing digital communication systems. In this case, the robustness of speech communication is consequently evaluated primarily by the error-resilience to packet loss over wireless networks. To address these challenges, we propose \emph{Glaris}, a generative latent-prior-based resilient speech semantic communication framework that performs resilient speech coding in the generative latent space. Generative latent priors enable high-quality packet loss concealment (PLC) at the receiver side, well-balancing semantic consistency and reconstruction fidelity. Additionally, an integrated error resilience mechanism is designed to mitigate the error propagation and improve the effectiveness of PLC. Compared with traditional packet-level forward error correction (FEC) strategies, our new method achieves enhanced robustness over dynamic wireless networks while reducing redundancy overhead significantly. 
Experimental results on the LibriSpeech dataset demonstrate that \emph{Glaris} consistently outperforms existing error-resilient codecs, achieving JSCC-level robustness while maintaining seamless compatibility with existing systems, and it also strikes a favorable balance between transmission efficiency and speech reconstruction quality. - oai:arXiv.org:2512.08203v1 - cs.SD - Wed, 10 Dec 2025 00:00:00 -0500 + FoundIR-v2: Optimizing Pre-Training Data Mixtures for Image Restoration Foundation Model + https://arxiv.org/abs/2512.09282 + arXiv:2512.09282v1 Announce Type: new +Abstract: Recent studies have witnessed significant advances in image restoration foundation models driven by improvements in the scale and quality of pre-training data. In this work, we find that the data mixture proportions from different restoration tasks are also a critical factor directly determining the overall performance of all-in-one image restoration models. To this end, we propose a high-capacity diffusion-based image restoration foundation model, FoundIR-v2, which adopts a data equilibrium scheduling paradigm to dynamically optimize the proportions of mixed training datasets from different tasks. By leveraging the data mixing law, our method ensures a balanced dataset composition, enabling the model to achieve consistent generalization and comprehensive performance across diverse tasks. Furthermore, we introduce an effective Mixture-of-Experts (MoE)-driven scheduler into generative pre-training to flexibly allocate task-adaptive diffusion priors for each restoration task, accounting for the distinct degradation forms and levels exhibited by different tasks. Extensive experiments demonstrate that our method can address over 50 sub-tasks across a broader scope of real-world scenarios and achieves favorable performance against state-of-the-art approaches. 
+ oai:arXiv.org:2512.09282v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhuohang Han, Jincheng Dai, Shengshi Yao, Junyi Wang, Yanlong Li, Kai Niu, Wenjun Xu, Ping Zhang + Xiang Chen, Jinshan Pan, Jiangxin Dong, Jian Yang, Jinhui Tang - Evaluating Vulnerabilities of Connected Vehicles Under Cyber Attacks by Attack-Defense Tree - https://arxiv.org/abs/2512.08204 - arXiv:2512.08204v1 Announce Type: new -Abstract: Connected vehicles represent a key enabler of intelligent transportation systems, where vehicles are equipped with advanced communication, sensing, and computing technologies to interact not only with one another but also with surrounding infrastructures and the environment. Through continuous data exchange, such vehicles are capable of enhancing road safety, improving traffic efficiency, and ensuring more reliable mobility services. Further, when these capabilities are integrated with advanced automation technologies, the concept essentially evolves into connected and autonomous vehicles (CAVs). While connected vehicles primarily focus on seamless information sharing, autonomous vehicles are mainly dependent on advanced perception, decision-making, and control mechanisms to operate with minimal or without human intervention. However, as a result of connectivity, an adversary with malicious intentions might be able to compromise successfully by breaching the system components of CAVs. In this paper, we present an attack-tree based methodology for evaluating cyber security vulnerabilities in CAVs. In particular, we utilize the attack-defense tree formulation to systematically assess attack-leaf vulnerabilities, and before analyzing the vulnerability indices, we also define a measure of vulnerabilities, which is based on existing cyber security threats and corresponding defensive countermeasures. 
- oai:arXiv.org:2512.08204v1 - cs.CR - cs.NI - Wed, 10 Dec 2025 00:00:00 -0500 + UPETrack: Unidirectional Position Estimation for Tracking Occluded Deformable Linear Objects + https://arxiv.org/abs/2512.09283 + arXiv:2512.09283v1 Announce Type: new +Abstract: Real-time state tracking of Deformable Linear Objects (DLOs) is critical for enabling robotic manipulation of DLOs in industrial assembly, medical procedures, and daily-life applications. However, the high-dimensional configuration space, nonlinear dynamics, and frequent partial occlusions present fundamental barriers to robust real-time DLO tracking. To address these limitations, this study introduces UPETrack, a geometry-driven framework based on Unidirectional Position Estimation (UPE), which facilitates tracking without the requirement for physical modeling, virtual simulation, or visual markers. The framework operates in two phases: (1) visible segment tracking is based on a Gaussian Mixture Model (GMM) fitted via the Expectation Maximization (EM) algorithm, and (2) occlusion region prediction employing UPE algorithm we proposed. UPE leverages the geometric continuity inherent in DLO shapes and their temporal evolution patterns to derive a closed-form positional estimator through three principal mechanisms: (i) local linear combination displacement term, (ii) proximal linear constraint term, and (iii) historical curvature term. This analytical formulation allows efficient and stable estimation of occluded nodes through explicit linear combinations of geometric components, eliminating the need for additional iterative optimization. Experimental results demonstrate that UPETrack surpasses two state-of-the-art tracking algorithms, including TrackDLO and CDCPD2, in both positioning accuracy and computational efficiency. 
+ oai:arXiv.org:2512.09283v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Muhammad Baqer Mollah, Honggang Wang, Hua Fang + Fan Wu, Chenguang Yang, Haibin Yang, Shuo Wang, Yanrui Xu, Xing Zhou, Meng Gao, Yaoqi Xian, Zhihong Zhu, Shifeng Huang - High-Performance Dual-Arm Task and Motion Planning for Tabletop Rearrangement - https://arxiv.org/abs/2512.08206 - arXiv:2512.08206v1 Announce Type: new -Abstract: We propose Synchronous Dual-Arm Rearrange- ment Planner (SDAR), a task and motion planning (TAMP) framework for tabletop rearrangement, where two robot arms equipped with 2-finger grippers must work together in close proximity to rearrange objects whose start and goal config- urations are strongly entangled. To tackle such challenges, SDAR tightly knit together its dependency-driven task planner (SDAR-T) and synchronous dual-arm motion planner (SDAR- M), to intelligently sift through a large number of possible task and motion plans. Specifically, SDAR-T applies a simple yet effective strategy to decompose the global object dependency graph induced by the rearrangement task, to produce more optimal dual-arm task plans than solutions derived from optimal task plans for a single arm. Leveraging state-of-the-art GPU SIMD-based motion planning tools, SDAR-M employs a layered motion planning strategy to sift through many task plans for the best synchronous dual-arm motion plan while ensuring high levels of success rate. Comprehensive evaluation demonstrates that SDAR delivers a 100% success rate in solving complex, non-monotone, long-horizon tabletop rearrangement tasks with solution quality far exceeding the previous state- of-the-art. Experiments on two UR-5e arms further confirm SDAR directly and reliably transfers to robot hardware. 
- oai:arXiv.org:2512.08206v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Who Speaks What from Afar: Eavesdropping In-Person Conversations via mmWave Sensing + https://arxiv.org/abs/2512.09285 + arXiv:2512.09285v1 Announce Type: new +Abstract: Multi-participant meetings occur across various domains, such as business negotiations and medical consultations, during which sensitive information like trade secrets, business strategies, and patient conditions is often discussed. Previous research has demonstrated that attackers with mmWave radars outside the room can overhear meeting content by detecting minute speech-induced vibrations on objects. However, these eavesdropping attacks cannot differentiate which speech content comes from which person in a multi-participant meeting, leading to potential misunderstandings and poor decision-making. In this paper, we answer the question ``who speaks what''. By leveraging the spatial diversity introduced by ubiquitous objects, we propose an attack system that enables attackers to remotely eavesdrop on in-person conversations without requiring prior knowledge, such as identities, the number of participants, or seating arrangements. Since participants in in-person meetings are typically seated at different locations, their speech induces distinct vibration patterns on nearby objects. To exploit this, we design a noise-robust unsupervised approach for distinguishing participants by detecting speech-induced vibration differences in the frequency domain. Meanwhile, a deep learning-based framework is explored to combine signals from objects for speech quality enhancement. We validate the proof-of-concept attack on speech classification and signal enhancement through extensive experiments. The experimental results show that our attack can achieve the speech classification accuracy of up to $0.99$ with several participants in a meeting room. 
Meanwhile, our attack demonstrates consistent speech quality enhancement across all real-world scenarios, including different distances between the radar and the objects. + oai:arXiv.org:2512.09285v1 + cs.SD + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Duo Zhang, Junshan Huang, Jingjin Yu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shaoying Wang, Hansong Zhou, Yukun Yuan, Xiaonan Zhang - Duct boundary conditions for incompressible fluid flows: finite element discretizations and parameter estimation in coronary blood flow - https://arxiv.org/abs/2512.08207 - arXiv:2512.08207v1 Announce Type: new -Abstract: 3D-0D coupled flow models are widely used across many application fields but remain challenging to solve. Implicit coupling introduces non-local terms, whereas explicit coupling results in only conditionally stable schemes. Furthermore, incorporating inertial effects alongside viscous resistance enlarges the parameter space, making calibration more difficult. - In this work, we propose a new type of boundary condition based on the method of asymptotic partial decomposition of a domain (MAPDD), which we denote as the Duct Boundary Condition (DuBC). This approach enables the incorporation of geometrically reduced domains as a boundary term with only local coupling in the implicit case. Moreover, the DuBC accounts for both viscous and inertial effects simultaneously using a single physical parameter. Additionally, we derive a fractional-step time-marching scheme including the DuBC. We demonstrate the features of the DuBC in coronary artery blood flow simulations, including sequential parameter estimation from noisy velocity data. - oai:arXiv.org:2512.08207v1 + Fast operator learning for mapping correlations + https://arxiv.org/abs/2512.09286 + arXiv:2512.09286v1 Announce Type: new +Abstract: We propose a fast, optimization-free method for learning the transition operators of high-dimensional Markov processes. 
The central idea is to perform a Galerkin projection of the transition operator to a suitable set of low-order bases that capture the correlations between the dimensions. Such a discretized operator can be obtained from moments corresponding to our choice of basis without the curse of dimensionality. Furthermore, by exploiting its low-rank structure and the spatial decay of correlations, we can obtain a compressed representation with computational complexity of order $\mathcal{O}(dN)$, where $d$ is the dimensionality and $N$ is the sample size. We further theoretically analyze the approximation error of the proposed compressed representation. We numerically demonstrate that the learned operator allows efficient prediction of future events and solving high-dimensional boundary value problems. This gives rise to a simple linear algebraic method for high-dimensional rare-event simulations. + oai:arXiv.org:2512.09286v1 math.NA cs.NA Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Yuehaw Khoo, Yuguan Wang, Siyao Yang - MobileFineTuner: A Unified End-to-End Framework for Fine-Tuning LLMs on Mobile Phones - https://arxiv.org/abs/2512.08211 - arXiv:2512.08211v1 Announce Type: new -Abstract: Mobile phones are the most ubiquitous end devices, generating vast amounts of human-authored data and serving as the primary platform for end-side applications. As high-quality public data for large language models (LLMs) approaches exhaustion, on-device fine-tuning provides an opportunity to leverage private user data while preserving privacy. However, existing approaches are predominantly simulation-based or rely on IoT devices and PCs, leaving commodity mobile phones largely unexplored. A key gap is the absence of an open-source framework that enables practical LLM fine-tuning on mobile phones. 
We present MobileFineTuner, a unified open-source framework that enables end-to-end LLM fine-tuning directly on commodity mobile phones. MobileFineTuner is designed for efficiency, scalability, and usability, supporting full-parameters fine-tuning (Full-FT) and parameter-efficient fine-tuning (PEFT). To address the memory and energy limitations inherent to mobile phones, we introduce system-level optimizations including parameter sharding, gradient accumulation, and energy-aware computation scheduling. We demonstrate the practicality of MobileFineTuner by fine-tuning GPT-2, Gemma 3, and Qwen 2.5 on real mobile phones. Extensive experiments and ablation studies validate the effectiveness of the proposed optimizations and establish MobileFineTuner as a viable foundation for future research on on-device LLM training. - oai:arXiv.org:2512.08211v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + MelanomaNet: Explainable Deep Learning for Skin Lesion Classification + https://arxiv.org/abs/2512.09289 + arXiv:2512.09289v1 Announce Type: new +Abstract: Automated skin lesion classification using deep learning has shown remarkable accuracy, yet clinical adoption remains limited due to the "black box" nature of these models. We present MelanomaNet, an explainable deep learning system for multi-class skin lesion classification that addresses this gap through four complementary interpretability mechanisms. Our approach combines an EfficientNet V2 backbone with GradCAM++ attention visualization, automated ABCDE clinical criterion extraction, Fast Concept Activation Vectors (FastCAV) for concept-based explanations, and Monte Carlo Dropout uncertainty quantification. We evaluate our system on the ISIC 2019 dataset containing 25,331 dermoscopic images across 9 diagnostic categories. Our model achieves 85.61% accuracy with a weighted F1 score of 0.8564, while providing clinically meaningful explanations that align model attention with established dermatological assessment criteria. 
The uncertainty quantification module decomposes prediction confidence into epistemic and aleatoric components, enabling automatic flagging of unreliable predictions for clinical review. Our results demonstrate that high classification performance can be achieved alongside comprehensive interpretability, potentially facilitating greater trust and adoption in clinical dermatology workflows. The source code is available at https://github.com/suxrobgm/explainable-melanoma + oai:arXiv.org:2512.09289v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jiaxiang Geng, Lunyu Zhao, Yiyi Lu, Bing Luo + http://creativecommons.org/licenses/by/4.0/ + Sukhrobbek Ilyosbekov - Secure or Suspect? Investigating Package Hallucinations of Shell Command in Original and Quantized LLMs - https://arxiv.org/abs/2512.08213 - arXiv:2512.08213v1 Announce Type: new -Abstract: Large Language Models for code (LLMs4Code) are increasingly used to generate software artifacts, including library and package recommendations in languages such as Go. However, recent evidence shows that LLMs frequently hallucinate package names or generate dependencies containing known security vulnerabilities, posing significant risks to developers and downstream software supply chains. At the same time, quantization has become a widely adopted technique to reduce inference cost and enable deployment of LLMs on resource-constrained environments. Despite its popularity, little is known about how quantization affects the correctness and security of LLM-generated software dependencies while generating shell commands for package installation. - In this work, we conduct the first systematic empirical study of the impact of quantization on package hallucination and vulnerability risks in LLM-generated Go packages. We evaluate five Qwen model sizes under full-precision, 8-bit, and 4-bit quantization across three datasets (SO, MBPP, and paraphrase). 
Our results show that quantization substantially increases the package hallucination rate (PHR), with 4-bit models exhibiting the most severe degradation. We further find that even among the correctly generated packages, the vulnerability presence rate (VPR) rises as precision decreases, indicating elevated security risk in lower-precision models. Finally, our analysis of hallucinated outputs reveals that most fabricated packages resemble realistic URL-based Go module paths, such as most commonly malformed or non-existent GitHub and golang.org repositories, highlighting a systematic pattern in how LLMs hallucinate dependencies. Overall, our findings provide actionable insights into the reliability and security implications of deploying quantized LLMs for code generation and dependency recommendation. - oai:arXiv.org:2512.08213v1 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + Identifying Bias in Machine-generated Text Detection + https://arxiv.org/abs/2512.09292 + arXiv:2512.09292v1 Announce Type: new +Abstract: The meteoric rise in text generation capability has been accompanied by parallel growth in interest in machine-generated text detection: the capability to identify whether a given text was generated using a model or written by a person. While detection models show strong performance, they have the capacity to cause significant negative impacts. We explore potential biases in English machine-generated text detection systems. We curate a dataset of student essays and assess 16 different detection systems for bias across four attributes: gender, race/ethnicity, English-language learner (ELL) status, and economic status. We evaluate these attributes using regression-based models to determine the significance and power of the effects, as well as performing subgroup analysis. 
We find that while biases are generally inconsistent across systems, there are several key issues: several models tend to classify disadvantaged groups as machine-generated, ELL essays are more likely to be classified as machine-generated, economically disadvantaged students' essays are less likely to be classified as machine-generated, and non-White ELL essays are disproportionately classified as machine-generated relative to their White counterparts. Finally, we perform human annotation and find that while humans perform generally poorly at the detection task, they show no significant biases on the studied attributes. + oai:arXiv.org:2512.09292v1 + cs.CL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Md Nazmul Haque, Elizabeth Lin, Lawrence Arkoh, Biruk Tadesse, Bowen Xu + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Kevin Stowe, Svetlana Afanaseva, Rodolfo Raimundo, Yitao Sun, Kailash Patil - Blur2Sharp: Human Novel Pose and View Synthesis with Generative Prior Refinement - https://arxiv.org/abs/2512.08215 - arXiv:2512.08215v1 Announce Type: new -Abstract: The creation of lifelike human avatars capable of realistic pose variation and viewpoint flexibility remains a fundamental challenge in computer vision and graphics. Current approaches typically yield either geometrically inconsistent multi-view images or sacrifice photorealism, resulting in blurry outputs under diverse viewing angles and complex motions. To address these issues, we propose Blur2Sharp, a novel framework integrating 3D-aware neural rendering and diffusion models to generate sharp, geometrically consistent novel-view images from only a single reference view. Our method employs a dual-conditioning architecture: initially, a Human NeRF model generates geometrically coherent multi-view renderings for target poses, explicitly encoding 3D structural guidance. 
Subsequently, a diffusion model conditioned on these renderings refines the generated images, preserving fine-grained details and structural fidelity. We further enhance visual quality through hierarchical feature fusion, incorporating texture, normal, and semantic priors extracted from parametric SMPL models to simultaneously improve global coherence and local detail accuracy. Extensive experiments demonstrate that Blur2Sharp consistently surpasses state-of-the-art techniques in both novel pose and view generation tasks, particularly excelling under challenging scenarios involving loose clothing and occlusions. - oai:arXiv.org:2512.08215v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Electric Arc Furnaces Scheduling under Electricity Price Volatility with Reinforcement Learning + https://arxiv.org/abs/2512.09293 + arXiv:2512.09293v1 Announce Type: new +Abstract: This paper proposes a reinforcement learning-based framework for optimizing the operation of electric arc furnaces (EAFs) under volatile electricity prices. We formulate the deterministic version of the EAF scheduling problem into a mixed-integer linear programming (MILP) formulation, and then develop a Q-learning algorithm to perform real-time control of multiple EAF units under real-time price volatility and shared feeding capacity constraints. We design a custom reward function for the Q-learning algorithm to smooth the start-up penalties of the EAFs. Using real data from EAF designs and electricity prices in New York State, we benchmark our algorithm against a baseline rule-based controller and a MILP benchmark, assuming perfect price forecasts. The results show that our reinforcement learning algorithm achieves around 90% of the profit compared to the perfect MILP benchmark in various single-unit and multi-unit cases under a non-anticipatory control setting. 
+ oai:arXiv.org:2512.09293v1 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Chia-Hern Lai, I-Hsuan Lo, Yen-Ku Yeh, Thanh-Nguyen Truong, Ching-Chun Huang + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ruonan Pi, Zhiyuan Fan, Bolun Xu - Correction of Decoupled Weight Decay - https://arxiv.org/abs/2512.08217 - arXiv:2512.08217v1 Announce Type: new -Abstract: Decoupled weight decay, solely responsible for the performance advantage of AdamW over Adam, has long been set to proportional to learning rate $\gamma$ without questioning. Some researchers have recently challenged such assumption and argued that decoupled weight decay should be set $\propto \gamma^2$ instead based on orthogonality arguments at steady state. To the contrary, we find that eliminating the contribution of the perpendicular component of the update to the weight norm leads to little change to the training dynamics. Instead, we derive that decoupled weight decay $\propto \gamma^2$ results in stable weight norm based on the simple assumption that updates become independent of the weights at steady state, regardless of the nature of the optimizer. Based on the same assumption, we derive and empirically verify that the Total Update Contribution (TUC) of a minibatch under the Scion optimizer is better characterized by the momentum-dependent effective learning rate whose optimal value transfers and we show that decoupled weight decay $\propto \gamma^2$ leads to stable weight and gradient norms and allows us to better control the training dynamics and improve the model performance. - oai:arXiv.org:2512.08217v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Traffic Scene Small Target Detection Method Based on YOLOv8n-SPTS Model for Autonomous Driving + https://arxiv.org/abs/2512.09296 + arXiv:2512.09296v1 Announce Type: new +Abstract: This paper focuses on the key issue in autonomous driving: small target recognition in dynamic perception. 
Existing algorithms suffer from poor detection performance due to missing small target information, scale imbalance, and occlusion. We propose an improved YOLOv8n-SPTS model, which enhances the detection accuracy of small traffic targets through three key innovations: First, optimizing the feature extraction module. In the Backbone Bottleneck structure of YOLOv8n, 4 traditional convolution modules are replaced with Space-to-Depth Convolution (SPD-Conv) modules. This module retains fine-grained information through space-to-depth conversion, reduces information loss, and enhances the ability to capture features of low-resolution small targets. Second, enhancing feature fusion capability. The Spatial Pyramid Pooling - Fast Cross Stage Partial Connection (SPPFCSPC) module is introduced to replace the original SPPF module, integrating the multi-scale feature extraction from Spatial Pyramid Pooling (SPP) and the feature fusion mechanism of Cross Stage Partial Connection (CSP), thereby improving the model's contextual understanding of complex scenes and multi-scale feature expression ability. Third, designing a dedicated detection structure for small targets. A Triple-Stage Feature Pyramid (TSFP) structure is proposed, which adds a 160*160 small target detection head to the original detection heads to fully utilize high-resolution features in shallow layers; meanwhile, redundant large target detection heads are removed to balance computational efficiency. Comparative experiments on the VisDrone2019-DET dataset show that the YOLOv8n-SPTS model ranks first in precision (61.9%), recall (48.3%), mAP@0.5 (52.6%), and mAP@0.5:0.95 (32.6%). Visualization results verify that the miss rate of small targets such as pedestrians and bicycles in occluded and dense scenes is significantly reduced. 
+ oai:arXiv.org:2512.09296v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jason Chuan-Chih Chou + Songhan Wu - PR-CapsNet: Pseudo-Riemannian Capsule Network with Adaptive Curvature Routing for Graph Learning - https://arxiv.org/abs/2512.08218 - arXiv:2512.08218v1 Announce Type: new -Abstract: Capsule Networks (CapsNets) show exceptional graph representation capacity via dynamic routing and vectorized hierarchical representations, but they model the complex geometries of real\-world graphs poorly by fixed\-curvature space due to the inherent geodesical disconnectedness issues, leading to suboptimal performance. Recent works find that non\-Euclidean pseudo\-Riemannian manifolds provide specific inductive biases for embedding graph data, but how to leverage them to improve CapsNets is still underexplored. Here, we extend the Euclidean capsule routing into geodesically disconnected pseudo\-Riemannian manifolds and derive a Pseudo\-Riemannian Capsule Network (PR\-CapsNet), which models data in pseudo\-Riemannian manifolds of adaptive curvature, for graph representation learning. Specifically, PR\-CapsNet enhances the CapsNet with Adaptive Pseudo\-Riemannian Tangent Space Routing by utilizing pseudo\-Riemannian geometry. Unlike single\-curvature or subspace\-partitioning methods, PR\-CapsNet concurrently models hierarchical and cluster or cyclic graph structures via its versatile pseudo\-Riemannian metric. It first deploys Pseudo\-Riemannian Tangent Space Routing to decompose capsule states into spherical\-temporal and Euclidean\-spatial subspaces with diffeomorphic transformations. Then, an Adaptive Curvature Routing is developed to adaptively fuse features from different curvature spaces for complex graphs via a learnable curvature tensor with geometric attention from local manifold properties. 
Finally, a geometric properties\-preserved Pseudo\-Riemannian Capsule Classifier is developed to project capsule embeddings to tangent spaces and use curvature\-weighted softmax for classification. Extensive experiments on node and graph classification benchmarks show PR\-CapsNet outperforms SOTA models, validating PR\-CapsNet's strong representation power for complex graph structures. - oai:arXiv.org:2512.08218v1 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Ye Qin, Jingchao Wang, Yang Shi, Haiying Huang, Junxu Li, Weijian Liu, Tinghui Chen, Jinghui Qin + One-Shot Real-World Demonstration Synthesis for Scalable Bimanual Manipulation + https://arxiv.org/abs/2512.09297 + arXiv:2512.09297v1 Announce Type: new +Abstract: Learning dexterous bimanual manipulation policies critically depends on large-scale, high-quality demonstrations, yet current paradigms face inherent trade-offs: teleoperation provides physically grounded data but is prohibitively labor-intensive, while simulation-based synthesis scales efficiently but suffers from sim-to-real gaps. We present BiDemoSyn, a framework that synthesizes contact-rich, physically feasible bimanual demonstrations from a single real-world example. The key idea is to decompose tasks into invariant coordination blocks and variable, object-dependent adjustments, then adapt them through vision-guided alignment and lightweight trajectory optimization. This enables the generation of thousands of diverse and feasible demonstrations within several hours, without repeated teleoperation or reliance on imperfect simulation. Across six dual-arm tasks, we show that policies trained on BiDemoSyn data generalize robustly to novel object poses and shapes, significantly outperforming recent baselines. By bridging the gap between efficiency and real-world fidelity, BiDemoSyn provides a scalable path toward practical imitation learning for complex bimanual manipulation without compromising physical grounding. 
+ oai:arXiv.org:2512.09297v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ye Qin, Jingchao Wang, Yang Shi, Haiying Huang, Junxu Li, Weijian Liu, Tinghui Chen, Jinghui Qin + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Huayi Zhou, Kui Jia - Any Old Tom, Dick or Harry: The Citation Impact of First Name Genderedness - https://arxiv.org/abs/2512.08219 - arXiv:2512.08219v1 Announce Type: new -Abstract: This paper attempts a first analysis of citation distributions based on the genderedness of authors' first name. Following the extraction of first name and sex data from all human entity triplets contained in Wikidata, a first name genderedness table is first created based on compiled sex frequencies, then merged with bibliometric data from eponymous, US-affiliated authors. Comparisons of various cumulative distributions show that citation concentrations fluctuations are highest at the opposite ends of the genderedness spectrum, as authors with very feminine and masculine first names respectively get a lower and higher share of citations for every article published, irrespective of their contribution role. - oai:arXiv.org:2512.08219v1 - cs.DL - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + VABench: A Comprehensive Benchmark for Audio-Video Generation + https://arxiv.org/abs/2512.09299 + arXiv:2512.09299v1 Announce Type: new +Abstract: Recent advances in video generation have been remarkable, enabling models to produce visually compelling videos with synchronized audio. While existing video generation benchmarks provide comprehensive metrics for visual quality, they lack convincing evaluations for audio-video generation, especially for models aiming to generate synchronized audio-video outputs. To address this gap, we introduce VABench, a comprehensive and multi-dimensional benchmark framework designed to systematically evaluate the capabilities of synchronous audio-video generation. 
VABench encompasses three primary task types: text-to-audio-video (T2AV), image-to-audio-video (I2AV), and stereo audio-video generation. It further establishes two major evaluation modules covering 15 dimensions. These dimensions specifically assess pairwise similarities (text-video, text-audio, video-audio), audio-video synchronization, lip-speech consistency, and carefully curated audio and video question-answering (QA) pairs, among others. Furthermore, VABench covers seven major content categories: animals, human sounds, music, environmental sounds, synchronous physical sounds, complex scenes, and virtual worlds. We provide a systematic analysis and visualization of the evaluation results, aiming to establish a new standard for assessing video generation models with synchronous audio capabilities and to promote the comprehensive advancement of the field. + oai:arXiv.org:2512.09299v1 + cs.CV + cs.SD + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Maxime Holmberg Sainte-Marie, Vincent Larivi\`ere + Daili Hua, Xizhi Wang, Bohan Zeng, Xinyi Huang, Hao Liang, Junbo Niu, Xinlong Chen, Quanqing Xu, Wentao Zhang - VisKnow: Constructing Visual Knowledge Base for Object Understanding - https://arxiv.org/abs/2512.08221 - arXiv:2512.08221v1 Announce Type: new -Abstract: Understanding objects is fundamental to computer vision. Beyond object recognition that provides only a category label as typical output, in-depth object understanding represents a comprehensive perception of an object category, involving its components, appearance characteristics, inter-category relationships, contextual background knowledge, etc. Developing such capability requires sufficient multi-modal data, including visual annotations such as parts, attributes, and co-occurrences for specific tasks, as well as textual knowledge to support high-level tasks like reasoning and question answering. 
However, these data are generally task-oriented and not systematically organized enough to achieve the expected understanding of object categories. In response, we propose the Visual Knowledge Base that structures multi-modal object knowledge as graphs, and present a construction framework named VisKnow that extracts multi-modal, object-level knowledge for object understanding. This framework integrates enriched aligned text and image-source knowledge with region annotations at both object and part levels through a combination of expert design and large-scale model application. As a specific case study, we construct AnimalKB, a structured animal knowledge base covering 406 animal categories, which contains 22K textual knowledge triplets extracted from encyclopedic documents, 420K images, and corresponding region annotations. A series of experiments showcase how AnimalKB enhances object-level visual tasks such as zero-shot recognition and fine-grained VQA, and serves as challenging benchmarks for knowledge graph completion and part segmentation. Our findings highlight the potential of automatically constructing visual knowledge bases to advance visual understanding and its practical applications. The project page is available at https://vipl-vsu.github.io/VisKnow. - oai:arXiv.org:2512.08221v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + ZeroOS: A Universal Modular Library OS for zkVMs + https://arxiv.org/abs/2512.09300 + arXiv:2512.09300v1 Announce Type: new +Abstract: zkVMs promise general-purpose verifiable computation through ISA-level compatibility with modern programs and toolchains. However, compatibility extends further than just the ISA; modern programs often cannot run or even compile without an operating system and libc. 
zkVMs attempt to address this by maintaining forks of language-specific runtimes and statically linking them into applications to create self-contained unikernels, but this ad-hoc approach leads to version hell and burdens verifiable applications (vApps) with an unnecessarily large trusted computing base. We solve this problem with ZeroOS, a modular library operating system (libOS) for vApp unikernels; vApp developers can use off-the-shelf toolchains to compile and link only the exact subset of the Linux ABI their vApp needs. Any zkVM team can easily leverage the ZeroOS ecosystem by writing a ZeroOS bootloader for their platform, resulting in a reduced maintenance burden and unifying the entire zkVM ecosystem with consolidated development and audit resources. ZeroOS is free and open-sourced at https://github.com/LayerZero-Labs/ZeroOS. + oai:arXiv.org:2512.09300v1 + cs.OS + cs.CR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Ziwei Yao, Qiyang Wan, Ruiping Wang, Xilin Chen + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Guangxian Zou, Isaac Zhang, Ryan Zarick, Kelvin Wong, Thomas Kim, Daniel L. -K. Wong, Saeid Yazdinejad, Dan Boneh - SOP^2: Transfer Learning with Scene-Oriented Prompt Pool on 3D Object Detection - https://arxiv.org/abs/2512.08223 - arXiv:2512.08223v1 Announce Type: new -Abstract: With the rise of Large Language Models (LLMs) such as GPT-3, these models exhibit strong generalization capabilities. Through transfer learning techniques such as fine-tuning and prompt tuning, they can be adapted to various downstream tasks with minimal parameter adjustments. This approach is particularly common in the field of Natural Language Processing (NLP). This paper aims to explore the effectiveness of common prompt tuning methods in 3D object detection. 
We investigate whether a model trained on the large-scale Waymo dataset can serve as a foundation model and adapt to other scenarios within the 3D object detection field. This paper sequentially examines the impact of prompt tokens and prompt generators, and further proposes a Scene-Oriented Prompt Pool (\textbf{SOP$^2$}). We demonstrate the effectiveness of prompt pools in 3D object detection, with the goal of inspiring future researchers to delve deeper into the potential of prompts in the 3D field. - oai:arXiv.org:2512.08223v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + RACAM: Enhancing DRAM with Reuse-Aware Computation and Automated Mapping for ML Inference + https://arxiv.org/abs/2512.09304 + arXiv:2512.09304v1 Announce Type: new +Abstract: In-DRAM Processing-In-Memory (DRAM-PIM) has emerged as a promising approach to accelerate memory-intensive workloads by mitigating data transfer overhead between DRAM and the host processor. Bit-serial DRAM-PIM architectures further enhance efficiency by supporting runtime variable data precision, which is critical for emerging workloads, such as large language model (LLM) inference. However, existing works still have major limitations: lack of data reuse, significant amounts of redundant data transfer, and insufficient support for workload mapping. To address these issues, we propose RACAM, the first in-DRAM bit-serial architecture which uses dedicated locality buffers, bit-serial PEs, popcount reduction units and broadcast units to enable data reuse and alleviate redundant data transfers. Furthermore, a workload mapping mechanism is proposed to fully explore the massive parallelism of DRAM architecture and identify the best mapping scheme of a given workload. We evaluate RACAM against GPUs and the state-of-the-art, in-DRAM PIM system, Proteus, across end-to-end LLM inferences. RACAM achieves 9x to 102x speedup over GPUs and 233x higher performance per mm2 compared to Proteus in case of GPT3. 
+ oai:arXiv.org:2512.09304v1 + cs.AR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - 10.1109/AVSS65446.2025.11149933 - Ching-Hung Cheng, Hsiu-Fu Wu, Bing-Chen Wu, Khanh-Phong Bui, Van-Tin Luu, Ching-Chun Huang + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Siyuan Ma, Jiajun Hu, Jeeho Ryoo, Aman Arora, Lizy Kurian John - New VVC profiles targeting Feature Coding for Machines - https://arxiv.org/abs/2512.08227 - arXiv:2512.08227v1 Announce Type: new -Abstract: Modern video codecs have been extensively optimized to preserve perceptual quality, leveraging models of the human visual system. However, in split inference systems-where intermediate features from neural network are transmitted instead of pixel data-these assumptions no longer apply. Intermediate features are abstract, sparse, and task-specific, making perceptual fidelity irrelevant. In this paper, we investigate the use of Versatile Video Coding (VVC) for compressing such features under the MPEG-AI Feature Coding for Machines (FCM) standard. We perform a tool-level analysis to understand the impact of individual coding components on compression efficiency and downstream vision task accuracy. Based on these insights, we propose three lightweight essential VVC profiles-Fast, Faster, and Fastest. The Fast profile provides 2.96% BD-Rate gain while reducing encoding time by 21.8%. Faster achieves a 1.85% BD-Rate gain with a 51.5% speedup. Fastest reduces encoding time by 95.6% with only a 1.71% loss in BD-Rate. 
- oai:arXiv.org:2512.08227v1 + From SAM to DINOv2: Towards Distilling Foundation Models to Lightweight Baselines for Generalized Polyp Segmentation + https://arxiv.org/abs/2512.09307 + arXiv:2512.09307v1 Announce Type: new +Abstract: Accurate polyp segmentation during colonoscopy is critical for the early detection of colorectal cancer and still remains challenging due to significant size, shape, and color variations, and the camouflaged nature of polyps. While lightweight baseline models such as U-Net, U-Net++, and PraNet offer advantages in terms of easy deployment and low computational cost, they struggle to deal with the above issues, leading to limited segmentation performance. In contrast, large-scale vision foundation models such as SAM, DINOv2, OneFormer, and Mask2Former have exhibited impressive generalization performance across natural image domains. However, their direct transfer to medical imaging tasks (e.g., colonoscopic polyp segmentation) is not straightforward, primarily due to the scarcity of large-scale datasets and lack of domain-specific knowledge. To bridge this gap, we propose a novel distillation framework, Polyp-DiFoM, that transfers the rich representations of foundation models into lightweight segmentation baselines, allowing efficient and accurate deployment in clinical settings. In particular, we infuse semantic priors from the foundation models into canonical architectures such as U-Net and U-Net++ and further perform frequency domain encoding for enhanced distillation, corroborating their generalization capability. Extensive experiments are performed across five benchmark datasets, such as Kvasir-SEG, CVC-ClinicDB, ETIS, ColonDB, and CVC-300. Notably, Polyp-DiFoM consistently outperforms respective baseline models significantly, as well as the state-of-the-art model, with nearly 9 times reduced computation overhead. The code is available at https://github.com/lostinrepo/PolypDiFoM. 
+ oai:arXiv.org:2512.09307v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Md Eimran Hossain Eimon, Ashan Perera, Juan Merlos, Velibor Adzic, Hari Kalva + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shivanshu Agnihotri, Snehashis Majhi, Deepak Ranjan Nayak, Debesh Jha - MM-CoT:A Benchmark for Probing Visual Chain-of-Thought Reasoning in Multimodal Models - https://arxiv.org/abs/2512.08228 - arXiv:2512.08228v1 Announce Type: new -Abstract: The ability to perform Chain-of-Thought (CoT) reasoning marks a major milestone for multimodal models (MMs), enabling them to solve complex visual reasoning problems. Yet a critical question remains: is such reasoning genuinely grounded in visual evidence and logically coherent? Existing benchmarks emphasize generation but neglect verification, i.e., the capacity to assess whether a reasoning chain is both visually consistent and logically valid. To fill this gap, we introduce MM-CoT, a diagnostic benchmark specifically designed to probe the visual grounding and logical coherence of CoT reasoning in MMs. Instead of generating free-form explanations, models must select the sole event chain that satisfies two orthogonal constraints: (i) visual consistency, ensuring all steps are anchored in observable evidence, and (ii) logical coherence, ensuring causal and commonsense validity. Adversarial distractors are engineered to violate one of these constraints, exposing distinct reasoning failures. We evaluate leading vision-language models on MM-CoT and find that even the most advanced systems struggle, revealing a sharp discrepancy between generative fluency and true reasoning fidelity. MM-CoT shows low correlation with existing benchmarks, confirming that it measures a unique combination of visual grounding and logical reasoning. 
This benchmark provides a foundation for developing future models that reason not just plausibly, but faithfully and coherently within the visual world. - oai:arXiv.org:2512.08228v1 + A Distributed Framework for Privacy-Enhanced Vision Transformers on the Edge + https://arxiv.org/abs/2512.09309 + arXiv:2512.09309v1 Announce Type: new +Abstract: Nowadays, visual intelligence tools have become ubiquitous, offering all kinds of convenience and possibilities. However, these tools have high computational requirements that exceed the capabilities of resource-constrained mobile and wearable devices. While offloading visual data to the cloud is a common solution, it introduces significant privacy vulnerabilities during transmission and server-side computation. To address this, we propose a novel distributed, hierarchical offloading framework for Vision Transformers (ViTs) that addresses these privacy challenges by design. Our approach uses a local trusted edge device, such as a mobile phone or an Nvidia Jetson, as the edge orchestrator. This orchestrator partitions the user's visual data into smaller portions and distributes them across multiple independent cloud servers. By design, no single external server possesses the complete image, preventing comprehensive data reconstruction. The final data merging and aggregation computation occurs exclusively on the user's trusted edge device. We apply our framework to the Segment Anything Model (SAM) as a practical case study, which demonstrates that our method substantially enhances content privacy over traditional cloud-based approaches. Evaluations show our framework maintains near-baseline segmentation performance while substantially reducing the risk of content reconstruction and user data exposure. Our framework provides a scalable, privacy-preserving solution for vision tasks in the edge-cloud continuum. 
+ oai:arXiv.org:2512.09309v1 + cs.DC + cs.CR cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jusheng Zhang, Kaitong Cai, Xiaoyang Guo, Sidi Liu, Qinhan Lv, Ruiqi Chen, Jing Yang, Yijia Fan, Xiaofei Sun, Jian Wang, Ziliang Chen, Liang Lin, Keze Wang + 10.1145/3769102.3772714 + Proceedings of the Tenth ACM/IEEE Symposium on Edge Computing (SEC '25), 2025, Article 8, pp. 1-16 + Zihao Ding, Mufeng Zhu, Zhongze Tang, Sheng Wei, Yao Liu - Geometry-Aware Sparse Depth Sampling for High-Fidelity RGB-D Depth Completion in Robotic Systems - https://arxiv.org/abs/2512.08229 - arXiv:2512.08229v1 Announce Type: new -Abstract: Accurate three-dimensional perception is essential for modern industrial robotic systems that perform manipulation, inspection, and navigation tasks. RGB-D and stereo vision sensors are widely used for this purpose, but the depth maps they produce are often noisy, incomplete, or biased due to sensor limitations and environmental conditions. Depth completion methods aim to generate dense, reliable depth maps from RGB images and sparse depth input. However, a key limitation in current depth completion pipelines is the unrealistic generation of sparse depth: sparse pixels are typically selected uniformly at random from dense ground-truth depth, ignoring the fact that real sensors exhibit geometry-dependent and spatially nonuniform reliability. In this work, we propose a normal-guided sparse depth sampling strategy that leverages PCA-based surface normal estimation on the RGB-D point cloud to compute a per-pixel depth reliability measure. The sparse depth samples are then drawn according to this reliability distribution. We integrate this sampling method with the Marigold-DC diffusion-based depth completion model and evaluate it on NYU Depth v2 using the standard metrics. 
Experiments show that our geometry-aware sparse depth improves accuracy, reduces artifacts near edges and discontinuities, and produces more realistic training conditions that better reflect real sensor behavior. - oai:arXiv.org:2512.08229v1 - cs.CV + Scene-agnostic Hierarchical Bimanual Task Planning via Visual Affordance Reasoning + https://arxiv.org/abs/2512.09310 + arXiv:2512.09310v1 Announce Type: new +Abstract: Embodied agents operating in open environments must translate high-level instructions into grounded, executable behaviors, often requiring coordinated use of both hands. While recent foundation models offer strong semantic reasoning, existing robotic task planners remain predominantly unimanual and fail to address the spatial, geometric, and coordination challenges inherent to bimanual manipulation in scene-agnostic settings. We present a unified framework for scene-agnostic bimanual task planning that bridges high-level reasoning with 3D-grounded two-handed execution. Our approach integrates three key modules. Visual Point Grounding (VPG) analyzes a single scene image to detect relevant objects and generate world-aligned interaction points. Bimanual Subgoal Planner (BSP) reasons over spatial adjacency and cross-object accessibility to produce compact, motion-neutralized subgoals that exploit opportunities for coordinated two-handed actions. Interaction-Point-Driven Bimanual Prompting (IPBP) binds these subgoals to a structured skill library, instantiating synchronized unimanual or bimanual action sequences that satisfy hand-state and affordance constraints. Together, these modules enable agents to plan semantically meaningful, physically feasible, and parallelizable two-handed behaviors in cluttered, previously unseen scenes. Experiments show that it produces coherent, feasible, and compact two-handed plans, and generalizes to cluttered scenes without retraining, demonstrating robust scene-agnostic affordance reasoning for bimanual tasks. 
+ oai:arXiv.org:2512.09310v1 cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Tony Salloom, Dandi Zhou, Xinhai Sun + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kwang Bin Lee, Jiho Kang, Sung-Hee Lee - Empowerment Gain and Causal Model Construction: Children and adults are sensitive to controllability and variability in their causal interventions - https://arxiv.org/abs/2512.08230 - arXiv:2512.08230v1 Announce Type: new -Abstract: Learning about the causal structure of the world is a fundamental problem for human cognition. Causal models and especially causal learning have proved to be difficult for large pretrained models using standard techniques of deep learning. In contrast, cognitive scientists have applied advances in our formal understanding of causation in computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. In the very different tradition of reinforcement learning, researchers have described an intrinsic reward signal called "empowerment" which maximizes mutual information between actions and their outcomes. "Empowerment" may be an important bridge between classical Bayesian causal learning and reinforcement learning and may help to characterize causal learning in humans and enable it in machines. If an agent learns an accurate causal world model, they will necessarily increase their empowerment, and increasing empowerment will lead to a more accurate causal world model. Empowerment may also explain distinctive features of childrens causal learning, as well as providing a more tractable computational account of how that learning is possible. In an empirical study, we systematically test how children and adults use cues to empowerment to infer causal relations, and design effective causal interventions. 
- oai:arXiv.org:2512.08230v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Transformer-Driven Multimodal Fusion for Explainable Suspiciousness Estimation in Visual Surveillance + https://arxiv.org/abs/2512.09311 + arXiv:2512.09311v1 Announce Type: new +Abstract: Suspiciousness estimation is critical for proactive threat detection and ensuring public safety in complex environments. This work introduces a large-scale annotated dataset, USE50k, along with a computationally efficient vision-based framework for real-time suspiciousness analysis. The USE50k dataset contains 65,500 images captured from diverse and uncontrolled environments, such as airports, railway stations, restaurants, parks, and other public areas, covering a broad spectrum of cues including weapons, fire, crowd density, abnormal facial expressions, and unusual body postures. Building on this dataset, we present DeepUSEvision, a lightweight and modular system integrating three key components, i.e., a Suspicious Object Detector based on an enhanced YOLOv12 architecture, dual Deep Convolutional Neural Networks (DCNN-I and DCNN-II) for facial expression and body-language recognition using image and landmark features, and a transformer-based Discriminator Network that adaptively fuses multimodal outputs to yield an interpretable suspiciousness score. Extensive experiments confirm the superior accuracy, robustness, and interpretability of the proposed framework compared to state-of-the-art approaches. Collectively, the USE50k dataset and the DeepUSEvision framework establish a strong and scalable foundation for intelligent surveillance and real-time risk assessment in safety-critical applications. 
+ oai:arXiv.org:2512.09311v1 + cs.CV + cs.CR + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Eunice Yiu, Kelsey Allen, Shiry Ginosar, Alison Gopnik + Kuldeep Singh Yadav, Lalan Kumar - Semantic-Metric Bayesian Risk Fields: Learning Robot Safety from Human Videos with a VLM Prior - https://arxiv.org/abs/2512.08233 - arXiv:2512.08233v1 Announce Type: new -Abstract: Humans interpret safety not as a binary signal but as a continuous, context- and spatially-dependent notion of risk. While risk is subjective, humans form rational mental models that guide action selection in dynamic environments. This work proposes a framework for extracting implicit human risk models by introducing a novel, semantically-conditioned and spatially-varying parametrization of risk, supervised directly from safe human demonstration videos and VLM common sense. Notably, we define risk through a Bayesian formulation. The prior is furnished by a pretrained vision-language model. In order to encourage the risk estimate to be more human aligned, a likelihood function modulates the prior to produce a relative metric of risk. Specifically, the likelihood is a learned ViT that maps pretrained features, to pixel-aligned risk values. Our pipeline ingests RGB images and a query object string, producing pixel-dense risk images. These images that can then be used as value-predictors in robot planning tasks or be projected into 3D for use in conventional trajectory optimization to produce human-like motion. This learned mapping enables generalization to novel objects and contexts, and has the potential to scale to much larger training datasets. In particular, the Bayesian framework that is introduced enables fast adaptation of our model to additional observations or common sense rules. We demonstrate that our proposed framework produces contextual risk that aligns with human preferences. 
Additionally, we illustrate several downstream applications of the model; as a value learner for visuomotor planners or in conjunction with a classical trajectory optimization algorithm. Our results suggest that our framework is a significant step toward enabling autonomous systems to internalize human-like risk. Code and results can be found at https://riskbayesian.github.io/bayesian_risk/. - oai:arXiv.org:2512.08233v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Tyche: A Hybrid Computation Framework of Illumination Pattern for Satellite Beam Hopping + https://arxiv.org/abs/2512.09312 + arXiv:2512.09312v1 Announce Type: new +Abstract: High-Throughput Satellites (HTS) use beam hopping to handle non-uniform and time-varying ground traffic demand. A significant technical challenge in beam hopping is the computation of effective illumination patterns. Traditional algorithms, like the genetic algorithm, require over 300 seconds to compute a single illumination pattern for just 37 cells, whereas modern HTS typically covers over 300 cells, rendering current methods impractical for real-world applications. Advanced approaches, such as multi-agent deep reinforcement learning, face convergence issues when the number of cells exceeds 40. In this paper, we introduce Tyche, a hybrid computation framework designed to address this challenge. Tyche incorporates a Monte Carlo Tree Search Beam Hopping (MCTS-BH) algorithm for computing illumination patterns and employs sliding window and pruning techniques to significantly reduce computation time. Specifically, MCTS-BH can compute one illumination pattern for 37 cells in just 12 seconds. To ensure real-time computation, we use a Greedy Beam Hopping (G-BH) algorithm, which provides a provisional solution while MCTS-BH completes its computation in the background. Our evaluation results show that MCTS-BH can increase throughput by up to 98.76%, demonstrating substantial improvements over existing solutions. 
+ oai:arXiv.org:2512.09312v1 + cs.NI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Timothy Chen, Marcus Dominguez-Kuhne, Aiden Swann, Xu Liu, Mac Schwager + Ziheng Yang, Kun Qiu, Zhe Chen, Wenjun Zhu, Yue Gao - FastBEV++: Fast by Algorithm, Deployable by Design - https://arxiv.org/abs/2512.08237 - arXiv:2512.08237v1 Announce Type: new -Abstract: The advancement of camera-only Bird's-Eye-View(BEV) perception is currently impeded by a fundamental tension between state-of-the-art performance and on-vehicle deployment tractability. This bottleneck stems from a deep-rooted dependency on computationally prohibitive view transformations and bespoke, platform-specific kernels. This paper introduces FastBEV++, a framework engineered to reconcile this tension, demonstrating that high performance and deployment efficiency can be achieved in unison via two guiding principles: Fast by Algorithm and Deployable by Design. We realize the "Deployable by Design" principle through a novel view transformation paradigm that decomposes the monolithic projection into a standard Index-Gather-Reshape pipeline. Enabled by a deterministic pre-sorting strategy, this transformation is executed entirely with elementary, operator native primitives (e.g Gather, Matrix Multiplication), which eliminates the need for specialized CUDA kernels and ensures fully TensorRT-native portability. Concurrently, our framework is "Fast by Algorithm", leveraging this decomposed structure to seamlessly integrate an end-to-end, depth-aware fusion mechanism. This jointly learned depth modulation, further bolstered by temporal aggregation and robust data augmentation, significantly enhances the geometric fidelity of the BEV representation.Empirical validation on the nuScenes benchmark corroborates the efficacy of our approach. 
FastBEV++ establishes a new state-of-the-art 0.359 NDS while maintaining exceptional real-time performance, exceeding 134 FPS on automotive-grade hardware (e.g Tesla T4). By offering a solution that is free of custom plugins yet highly accurate, FastBEV++ presents a mature and scalable design philosophy for production autonomous systems. The code is released at: https://github.com/ymlab/advanced-fastbev - oai:arXiv.org:2512.08237v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Hetero-SplitEE: Split Learning of Neural Networks with Early Exits for Heterogeneous IoT Devices + https://arxiv.org/abs/2512.09313 + arXiv:2512.09313v1 Announce Type: new +Abstract: The continuous scaling of deep neural networks has fundamentally transformed machine learning, with larger models demonstrating improved performance across diverse tasks. This growth in model size has dramatically increased the computational resources required for the training process. Consequently, distributed approaches, such as Federated Learning and Split Learning, have become essential paradigms for scalable deployment. However, existing Split Learning approaches assume client homogeneity and uniform split points across all participants. This critically limits their applicability to real-world IoT systems where devices exhibit heterogeneity in computational resources. To address this limitation, this paper proposes Hetero-SplitEE, a novel method that enables heterogeneous IoT devices to train a shared deep neural network in parallel collaboratively. By integrating heterogeneous early exits into hierarchical training, our approach allows each client to select distinct split points (cut layers) tailored to its computational capacity. In addition, we propose two cooperative training strategies, the Sequential strategy and the Averaging strategy, to facilitate this collaboration among clients with different split points. 
The Sequential strategy trains clients sequentially with a shared server model to reduce computational overhead. The Averaging strategy enables parallel client training with periodic cross-layer aggregation. Extensive experiments on CIFAR-10, CIFAR-100, and STL-10 datasets using ResNet-18 demonstrate that our method maintains competitive accuracy while efficiently supporting diverse computational constraints, enabling practical deployment of collaborative deep learning in heterogeneous IoT ecosystems. + oai:arXiv.org:2512.09313v1 + cs.LG + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yuanpeng Chen, Hui Song, Wei Tao, ShanHui Mo, Shuang Zhang, Xiao Hua, TianKun Zhao + Yuki Oda, Yuta Ono, Hiroshi Nakamura, Hideki Takase - SpeechQualityLLM: LLM-Based Multimodal Assessment of Speech Quality - https://arxiv.org/abs/2512.08238 - arXiv:2512.08238v1 Announce Type: new -Abstract: Objective speech quality assessment is central to telephony, VoIP, and streaming systems, where large volumes of degraded audio must be monitored and optimized at scale. Classical metrics such as PESQ and POLQA approximate human mean opinion scores (MOS) but require carefully controlled conditions and expensive listening tests, while learning-based models such as NISQA regress MOS and multiple perceptual dimensions from waveforms or spectrograms, achieving high correlation with subjective ratings yet remaining rigid: they do not support interactive, natural-language queries and do not natively provide textual rationales. In this work, we introduce SpeechQualityLLM, a multimodal speech quality question-answering (QA) system that couples an audio encoder with a language model and is trained on the NISQA corpus using template-based question-answer pairs covering overall MOS and four perceptual dimensions (noisiness, coloration, discontinuity, and loudness) in both single-ended (degraded only) and double-ended (degraded plus clean reference) setups. 
Instead of directly regressing scores, our system is supervised to generate textual answers from which numeric predictions are parsed and evaluated with standard regression and ranking metrics; on held-out NISQA clips, the double-ended model attains a MOS mean absolute error (MAE) of 0.41 with Pearson correlation of 0.86, with competitive performance on dimension-wise tasks. Beyond these quantitative gains, it offers a flexible natural-language interface in which the language model acts as an audio quality expert: practitioners can query arbitrary aspects of degradations, prompt the model to emulate different listener profiles to capture human variability and produce diverse but plausible judgments rather than a single deterministic score, and thereby reduce reliance on large-scale crowdsourced tests and their monetary cost. - oai:arXiv.org:2512.08238v1 - cs.SD - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Benchmarking Real-World Medical Image Classification with Noisy Labels: Challenges, Practice, and Outlook + https://arxiv.org/abs/2512.09315 + arXiv:2512.09315v1 Announce Type: new +Abstract: Learning from noisy labels remains a major challenge in medical image analysis, where annotation demands expert knowledge and substantial inter-observer variability often leads to inconsistent or erroneous labels. Despite extensive research on learning with noisy labels (LNL), the robustness of existing methods in medical imaging has not been systematically assessed. To address this gap, we introduce LNMBench, a comprehensive benchmark for Label Noise in Medical imaging. LNMBench encompasses \textbf{10} representative methods evaluated across 7 datasets, 6 imaging modalities, and 3 noise patterns, establishing a unified and reproducible framework for robustness evaluation under realistic conditions. 
Comprehensive experiments reveal that the performance of existing LNL methods degrades substantially under high and real-world noise, highlighting the persistent challenges of class imbalance and domain variability in medical data. Motivated by these findings, we further propose a simple yet effective improvement to enhance model robustness under such conditions. The LNMBench codebase is publicly released to facilitate standardized evaluation, promote reproducible research, and provide practical insights for developing noise-resilient algorithms in both research and real-world medical applications. The codebase is publicly available at https://github.com/myyy777/LNMBench. + oai:arXiv.org:2512.09315v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Mahathir Monjur, Shahriar Nirjon + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yuan Ma, Junlin Hou, Chao Zhang, Yukun Zhou, Zongyuan Ge, Haoran Xie, Lie Ju - HybridToken-VLM: Hybrid Token Compression for Vision-Language Models - https://arxiv.org/abs/2512.08240 - arXiv:2512.08240v1 Announce Type: new -Abstract: Vision-language models (VLMs) have transformed multimodal reasoning, but feeding hundreds of visual patch tokens into LLMs incurs quadratic computational costs, straining memory and context windows. Traditional approaches face a trade-off: continuous compression dilutes high-level semantics such as object identities, while discrete quantization loses fine-grained details such as textures. We introduce HTC-VLM, a hybrid framework that disentangles semantics and appearance through dual channels, i.e., a continuous pathway for fine-grained details via ViT patches and a discrete pathway for symbolic anchors using MGVQ quantization projected to four tokens. These are fused into a 580-token hybrid sequence and compressed into a single voco token via a disentanglement attention mask and bottleneck, ensuring efficient and grounded representations. 
HTC-VLM achieves an average performance retention of 87.2 percent across seven benchmarks (GQA, VQAv2, MMBench, MME, POPE, SEED-Bench, ScienceQA-Image), outperforming the leading continuous baseline (81.0 percent) while operating at a 580-to-1 compression ratio. Attention analyses show that the compressed token prioritizes the discrete anchor, validating its semantic guidance. Our work demonstrates that a minimalist hybrid design can resolve the efficiency-fidelity dilemma and advance scalable VLMs.
Moreover, GENESIS is the fastest among all Genetic Algorithms, averaging 15.84 minutes, compared to an average of 38.62 minutes for the second-best Genetic Algorithm. + oai:arXiv.org:2512.09318v1 + cs.NE cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jusheng Zhang, Xiaoyang Guo, Kaitong Cai, Qinhan Lv, Yijia Fan, Wenhao Chai, Jian Wang, Keze Wang + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Theviyanthan Krishnamohan, Lauritz Thamsen, Paul Harvey - Persistent Topological Structures and Cohomological Flows as a Mathematical Framework for Brain-Inspired Representation Learning - https://arxiv.org/abs/2512.08241 - arXiv:2512.08241v1 Announce Type: new -Abstract: This paper presents a mathematically rigorous framework for brain-inspired representation learning founded on the interplay between persistent topological structures and cohomological flows. Neural computation is reformulated as the evolution of cochain maps over dynamic simplicial complexes, enabling representations that capture invariants across temporal, spatial, and functional brain states. The proposed architecture integrates algebraic topology with differential geometry to construct cohomological operators that generalize gradient-based learning within a homological landscape. Synthetic data with controlled topological signatures and real neural datasets are jointly analyzed using persistent homology, sheaf cohomology, and spectral Laplacians to quantify stability, continuity, and structural preservation. Empirical results demonstrate that the model achieves superior manifold consistency and noise resilience compared to graph neural and manifold-based deep architectures, establishing a coherent mathematical foundation for topology-driven representation learning. 
- oai:arXiv.org:2512.08241v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Efficiency-Aware Computational Intelligence for Resource-Constrained Manufacturing Toward Edge-Ready Deployment + https://arxiv.org/abs/2512.09319 + arXiv:2512.09319v1 Announce Type: new +Abstract: Industrial cyber physical systems operate under heterogeneous sensing, stochastic dynamics, and shifting process conditions, producing data that are often incomplete, unlabeled, imbalanced, and domain shifted. High-fidelity datasets remain costly, confidential, and slow to obtain, while edge devices face strict limits on latency, bandwidth, and energy. These factors restrict the practicality of centralized deep learning, hinder the development of reliable digital twins, and increase the risk of error escape in safety-critical applications. Motivated by these challenges, this dissertation develops an efficiency grounded computational framework that enables data lean, physics-aware, and deployment ready intelligence for modern manufacturing environments. The research advances methods that collectively address core bottlenecks across multimodal and multiscale industrial scenarios. Generative strategies mitigate data scarcity and imbalance, while semi-supervised learning integrates unlabeled information to reduce annotation and simulation demands. Physics-informed representation learning strengthens interpretability and improves condition monitoring under small-data regimes. Spatially aware graph-based surrogate modeling provides efficient approximation of complex processes, and an edge cloud collaborative compression scheme supports real-time signal analytics under resource constraints. The dissertation also extends visual understanding through zero-shot vision language reasoning augmented by domain specific retrieval, enabling generalizable assessment in previously unseen scenarios. 
Together, these developments establish a unified paradigm of data-efficient and resource-aware intelligence that bridges laboratory learning with industrial deployment, supporting reliable decision-making across diverse manufacturing systems.
We identify several sources of inefficiency, with frequency overhead (DVFS effects) being the single largest contributor to the gap between theoretical and observed performance, exceeding the impact of MFMA utilization loss, communication/computation overlap, and kernel launch overheads. Overall, Chopper provides the first holistic, multi-granularity characterization of LLM training on AMD Instinct MI300X GPUs, yielding actionable insights for optimizing training frameworks, improving power-management strategies, and guiding future GPU architecture and system design.
ObliInjection introduces two key technical innovations: the order-oblivious loss, which quantifies the likelihood that the LLM will complete the attacker-chosen task regardless of how the clean and contaminated segments are ordered; and the orderGCG algorithm, which is tailored to minimize the order-oblivious loss and optimize the contaminated segments. Comprehensive experiments across three datasets spanning diverse application domains and twelve LLMs demonstrate that ObliInjection is highly effective, even when only one out of 6-100 segments in the input data is contaminated. + oai:arXiv.org:2512.09321v1 + cs.CR + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Marco Kurzynski, Shaizeen Aga, Di Wu + Ruiqi Wang, Yuqi Jia, Neil Zhenqiang Gong - Residual-SwinCA-Net: A Channel-Aware Integrated Residual CNN-Swin Transformer for Malignant Lesion Segmentation in BUSI - https://arxiv.org/abs/2512.08243 - arXiv:2512.08243v1 Announce Type: new -Abstract: A novel deep hybrid Residual-SwinCA-Net segmentation framework is proposed in the study for addressing such challenges by extracting locally correlated and robust features, incorporating residual CNN modules. Furthermore, for learning global dependencies, Swin Transformer blocks are customized using internal residual pathways, which reinforce gradient stability, refine local patterns, and facilitate global feature fusion. Formerly, for enhancing tissue continuity, ultrasound noise suppressions, and accentuating fine structural transitions Laplacian-of-Gaussian regional operator is applied, and for maintaining the morphological integrity of malignant lesion contours, a boundary-oriented operator has been incorporated. Subsequently, a contraction strategy was applied stage-wise by progressively reducing features-map progressively for capturing scale invariance and enhancing the robustness of structural variability. 
In addition, a new Multi-Scale Channel Attention and Squeezing (MSCAS) module is integrated at each decoder level prior to augmentation. MSCAS selectively emphasizes salient encoder maps and retains discriminative global context and complementary local structures at minimal computational cost while suppressing redundant activations. Finally, the Pixel-Attention module encodes class-relevant spatial cues by adaptively weighting malignant lesion pixels while suppressing background interference. Residual-SwinCA-Net and existing CNN/ViT techniques were evaluated on the publicly available BUSI dataset. The proposed Residual-SwinCA-Net framework outperformed these baselines, achieving 99.29% mean accuracy, 98.74% IoU, and 0.9041 Dice for breast lesion segmentation, thereby improving BUSI lesion diagnostic performance and strengthening timely clinical decision-making.
GP priors are imposed on the representations, and we obtain a generalized Bayesian posterior by minimizing a loss function that encourages informative representations. The covariance function inherent in GPs naturally pulls representations of similar units together, serving as an alternative to using explicitly defined positive samples. We show that GPSSL is closely related to both kernel PCA and VICReg, a popular neural network-based SSL method, but unlike both allows for posterior uncertainties that can be propagated to downstream tasks. Experiments on various datasets, considering classification and regression tasks, demonstrate that GPSSL outperforms traditional methods in terms of accuracy, uncertainty quantification, and error control.
A systematic analysis revealed the root cause: the sampling budget -- the number of circuit executions (shots) per iteration. The library's implicit use of unlimited shots yielded dense probability distributions, whereas the v2 default of 10,000 shots captured only 23% of the state space. Increasing shots to 250,000 restored library-level accuracy. This study highlights how hidden parameters at the quantum-classical interaction level can dominate hybrid algorithm performance and provides actionable recommendations for developers and framework designers to ensure reproducible results in quantum software migration.
The CM-F and CM-V represent the collective response of all devices to external active or reactive power disturbances, independent of the power network characteristics. In contrast, the DM-F and DM-V capture the redistribution of disturbance power within the system, which is strongly influenced by the network topology and the locations of devices. Notably, traditional strength analysis generally ignores the CM-V (global voltage response), which, as discovered in this paper, may also become unstable in PE-dominated power systems. Based on the proposed framework, new metrics are proposed to evaluate the strength of each modal component. Finally, the effectiveness of the proposed approach is validated through simulations. + oai:arXiv.org:2512.09323v1 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Julien Cardinal, Imen Benzarti, Ghizlane El boussaidi, Christophe Pere + Huisheng Gao, Linbin Huang, Huanhai Xin, Zhiyi Li, Ping Ju - SPROCKET: Extending ROCKET to Distance-Based Time-Series Transformations With Prototypes - https://arxiv.org/abs/2512.08246 - arXiv:2512.08246v1 Announce Type: new -Abstract: Classical Time Series Classification algorithms are dominated by feature engineering strategies. One of the most prominent of these transforms is ROCKET, which achieves strong performance through random kernel features. We introduce SPROCKET (Selected Prototype Random Convolutional Kernel Transform), which implements a new feature engineering strategy based on prototypes. On a majority of the UCR and UEA Time Series Classification archives, SPROCKET achieves performance comparable to existing convolutional algorithms and the new MR-HY-SP ( MultiROCKET-HYDRA-SPROCKET) ensemble's average accuracy ranking exceeds HYDRA-MR, the previous best convolutional ensemble's performance. 
These experimental results demonstrate that prototype-based feature transformation can enhance both accuracy and robustness in time series classification. - oai:arXiv.org:2512.08246v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + UniLS: End-to-End Audio-Driven Avatars for Unified Listening and Speaking + https://arxiv.org/abs/2512.09327 + arXiv:2512.09327v1 Announce Type: new +Abstract: Generating lifelike conversational avatars requires modeling not just isolated speakers, but the dynamic, reciprocal interaction of speaking and listening. However, modeling the listener is exceptionally challenging: direct audio-driven training fails, producing stiff, static listening motions. This failure stems from a fundamental imbalance: the speaker's motion is strongly driven by speech audio, while the listener's motion primarily follows an internal motion prior and is only loosely guided by external speech. This challenge has led most methods to focus on speak-only generation. The only prior attempt at joint generation relies on extra speaker's motion to produce the listener. This design is not end-to-end, thereby hindering the real-time applicability. To address this limitation, we present UniLS, the first end-to-end framework for generating unified speak-listen expressions, driven by only dual-track audio. Our method introduces a novel two-stage training paradigm. Stage 1 first learns the internal motion prior by training an audio-free autoregressive generator, capturing the spontaneous dynamics of natural facial motion. Stage 2 then introduces the dual-track audio, fine-tuning the generator to modulate the learned motion prior based on external speech cues. Extensive evaluations show UniLS achieves state-of-the-art speaking accuracy. More importantly, it delivers up to 44.1\% improvement in listening metrics, generating significantly more diverse and natural listening expressions. 
This effectively mitigates the stiffness problem and provides a practical, high-fidelity audio-driven solution for interactive digital humans. + oai:arXiv.org:2512.09327v1 + cs.CV + cs.SD + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Nicholas Harner + http://creativecommons.org/licenses/by/4.0/ + Xuangeng Chu, Ruicong Liu, Yifei Huang, Yun Liu, Yichen Peng, Bo Zheng - Distilling Future Temporal Knowledge with Masked Feature Reconstruction for 3D Object Detection - https://arxiv.org/abs/2512.08247 - arXiv:2512.08247v1 Announce Type: new -Abstract: Camera-based temporal 3D object detection has shown impressive results in autonomous driving, with offline models improving accuracy by using future frames. Knowledge distillation (KD) can be an appealing framework for transferring rich information from offline models to online models. However, existing KD methods overlook future frames, as they mainly focus on spatial feature distillation under strict frame alignment or on temporal relational distillation, thereby making it challenging for online models to effectively learn future knowledge. To this end, we propose a sparse query-based approach, Future Temporal Knowledge Distillation (FTKD), which effectively transfers future frame knowledge from an offline teacher model to an online student model. Specifically, we present a future-aware feature reconstruction strategy to encourage the student model to capture future features without strict frame alignment. In addition, we further introduce future-guided logit distillation to leverage the teacher's stable foreground and background context. FTKD is applied to two high-performing 3D object detection baselines, achieving up to 1.3 mAP and 1.3 NDS gains on the nuScenes dataset, as well as the most accurate velocity estimation, without increasing inference cost. 
- oai:arXiv.org:2512.08247v1 - cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Self Distillation Fine-Tuning of Protein Language Models Improves Versatility in Protein Design + https://arxiv.org/abs/2512.09329 + arXiv:2512.09329v1 Announce Type: new +Abstract: Supervised fine-tuning (SFT) is a standard approach for adapting large language models to specialized domains, yet its application to protein sequence modeling and protein language models (PLMs) remains ad hoc. This is in part because high-quality annotated data are far more difficult to obtain for proteins than for natural language. We present a simple and general recipe for fast SFT of PLMs, designed to improve the fidelity, reliability, and novelty of generated protein sequences. Unlike existing approaches that require costly precompiled experimental datasets for SFT, our method leverages the PLM itself, integrating a lightweight curation pipeline with domain-specific filters to construct high-quality training data. These filters can independently refine a PLM's output and identify candidates for in vitro evaluation; when combined with SFT, they enable PLMs to generate more stable and functional enzymes, while expanding exploration into protein sequence space beyond natural variants. Although our approach is agnostic to both the choice of protein language model (PLM) and the protein system, we demonstrate its effectiveness with a genome-scale PLM (GenSLM) applied to the tryptophan synthase enzyme family. The supervised fine-tuned model generates sequences that are not only more novel but also display improved characteristics across both targeted design constraints and emergent protein property measures. 
+ oai:arXiv.org:2512.09329v1 + cs.LG + cs.CE + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Haowen Zheng, Hu Zhu, Lu Deng, Weihao Gu, Yang Yang, Yanyan Liang + http://creativecommons.org/licenses/by/4.0/ + Amin Tavakoli, Raswanth Murugan, Ozan Gokdemir, Arvind Ramanathan, Frances Arnold, Anima Anandkumar - Learning Spatiotemporal Tubes for Temporal Reach-Avoid-Stay Tasks using Physics-Informed Neural Networks - https://arxiv.org/abs/2512.08248 - arXiv:2512.08248v1 Announce Type: new -Abstract: This paper presents a Spatiotemporal Tube (STT)-based control framework for general control-affine MIMO nonlinear pure-feedback systems with unknown dynamics to satisfy prescribed time reach-avoid-stay tasks under external disturbances. The STT is defined as a time-varying ball, whose center and radius are jointly approximated by a Physics-Informed Neural Network (PINN). The constraints governing the STT are first formulated as loss functions of the PINN, and a training algorithm is proposed to minimize the overall violation. The PINN being trained on certain collocation points, we propose a Lipschitz-based validity condition to formally verify that the learned PINN satisfies the conditions over the continuous time horizon. Building on the learned STT representation, an approximation-free closed-form controller is defined to guarantee satisfaction of the T-RAS specification. Finally, the effectiveness and scalability of the framework are validated through two case studies involving a mobile robot and an aerial vehicle navigating through cluttered environments. 
- oai:arXiv.org:2512.08248v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Passing the Baton: High Throughput Distributed Disk-Based Vector Search with BatANN + https://arxiv.org/abs/2512.09331 + arXiv:2512.09331v1 Announce Type: new +Abstract: Vector search underpins modern information-retrieval systems, including retrieval-augmented generation (RAG) pipelines and search engines over unstructured text and images. As datasets scale to billions of vectors, disk-based vector search has emerged as a practical solution. However, looking to the future, we need to anticipate datasets too large for any single server. We present BatANN, a distributed disk-based approximate nearest neighbor (ANN) system that retains the logarithmic search efficiency of a single global graph while achieving near-linear throughput scaling in the number of servers. Our core innovation is that when accessing a neighborhood which is stored on another machine, we send the full state of the query to the other machine to continue executing there for improved locality. On 100M- and 1B-point datasets at 0.95 recall using 10 servers, BatANN achieves 6.21-6.49x and 2.5-5.10x the throughput of the scatter-gather baseline, respectively, while maintaining mean latency below 6 ms. Moreover, we get these results on standard TCP. To our knowledge, BatANN is the first open-source distributed disk-based vector search system to operate over a single global graph. 
+ oai:arXiv.org:2512.09331v1 + cs.DC + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Ahan Basu, Ratnangshu Das, Pushpak Jagtap + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nam Anh Dang (Cornell University), Ben Landrum (Cornell University), Ken Birman (Cornell University) - Query-aware Hub Prototype Learning for Few-Shot 3D Point Cloud Semantic Segmentation - https://arxiv.org/abs/2512.08253 - arXiv:2512.08253v1 Announce Type: new -Abstract: Few-shot 3D point cloud semantic segmentation (FS-3DSeg) aims to segment novel classes with only a few labeled samples. However, existing metric-based prototype learning methods generate prototypes solely from the support set, without considering their relevance to query data. This often results in prototype bias, where prototypes overfit support-specific characteristics and fail to generalize to the query distribution, especially in the presence of distribution shifts, which leads to degraded segmentation performance. To address this issue, we propose a novel Query-aware Hub Prototype (QHP) learning method that explicitly models semantic correlations between support and query sets. Specifically, we propose a Hub Prototype Generation (HPG) module that constructs a bipartite graph connecting query and support points, identifies frequently linked support hubs, and generates query-relevant prototypes that better capture cross-set semantics. To further mitigate the influence of bad hubs and ambiguous prototypes near class boundaries, we introduce a Prototype Distribution Optimization (PDO) module, which employs a purity-reweighted contrastive loss to refine prototype representations by pulling bad hubs and outlier prototypes closer to their corresponding class centers. 
Extensive experiments on S3DIS and ScanNet demonstrate that QHP achieves substantial performance gains over state-of-the-art methods, effectively narrowing the semantic gap between prototypes and query sets in FS-3DSeg. - oai:arXiv.org:2512.08253v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Improved Physics-Driven Neural Network to Solve Inverse Scattering Problems + https://arxiv.org/abs/2512.09333 + arXiv:2512.09333v1 Announce Type: new +Abstract: This paper presents an improved physics-driven neural network (IPDNN) framework for solving electromagnetic inverse scattering problems (ISPs). A new Gaussian-localized oscillation-suppressing window (GLOW) activation function is introduced to stabilize convergence and enable a lightweight yet accurate network architecture. A dynamic scatter subregion identification strategy is further developed to adaptively refine the computational domain, preventing missed detections and reducing computational cost. Moreover, transfer learning is incorporated to extend the solver's applicability to practical scenarios, integrating the physical interpretability of iterative algorithms with the real-time inference capability of neural networks. Numerical simulations and experimental results demonstrate that the proposed solver achieves superior reconstruction accuracy, robustness, and efficiency compared with existing state-of-the-art methods. + oai:arXiv.org:2512.09333v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - YiLin Zhou, Lili Wei, Zheming Xu, Ziyi Chen, Congyan Lang + Yutong Du, Zicheng Liu, Bo Wu, Jingwei Kou, Hang Li, Changyou Li, Yali Zong, Bo Qi - SFP: Real-World Scene Recovery Using Spatial and Frequency Priors - https://arxiv.org/abs/2512.08254 - arXiv:2512.08254v1 Announce Type: new -Abstract: Scene recovery serves as a critical task for various computer vision applications. 
Existing methods typically rely on a single prior, which is inherently insufficient to handle multiple degradations, or employ complex network architectures trained on synthetic data, which suffer from poor generalization for diverse real-world scenarios. In this paper, we propose Spatial and Frequency Priors (SFP) for real-world scene recovery. In the spatial domain, we observe that the inverse of the degraded image exhibits a projection along its spectral direction that resembles the scene transmission. Leveraging this spatial prior, the transmission map is estimated to recover the scene from scattering degradation. In the frequency domain, a mask is constructed for adaptive frequency enhancement, with two parameters estimated using our proposed novel priors. Specifically, one prior assumes that the mean intensity of the degraded image's direct current (DC) components across three channels in the frequency domain closely approximates that of each channel in the clear image. The second prior is based on the observation that, for clear images, the magnitude of low radial frequencies below 0.001 constitutes approximately 1% of the total spectrum. Finally, we design a weighted fusion strategy to integrate spatial-domain restoration, frequency-domain enhancement, and salient features from the input image, yielding the final recovered result. Extensive evaluations demonstrate the effectiveness and superiority of our proposed SFP for scene recovery under various degradation conditions. - oai:arXiv.org:2512.08254v1 + Relightable and Dynamic Gaussian Avatar Reconstruction from Monocular Video + https://arxiv.org/abs/2512.09335 + arXiv:2512.09335v1 Announce Type: new +Abstract: Modeling relightable and animatable human avatars from monocular video is a long-standing and challenging task. Recently, Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) methods have been employed to reconstruct the avatars. 
However, they often produce unsatisfactory photo-realistic results because of insufficient geometrical details related to body motion, such as clothing wrinkles. In this paper, we propose a 3DGS-based human avatar modeling framework, termed Relightable and Dynamic Gaussian Avatar (RnD-Avatar), that presents accurate pose-variant deformation for high-fidelity geometrical details. To achieve this, we introduce dynamic skinning weights that define the human avatar's articulation based on pose while also learning additional deformations induced by body motion. We also introduce a novel regularization to capture fine geometric details under sparse visual cues. Furthermore, we present a new multi-view dataset with varied lighting conditions to evaluate relighting. Our framework enables realistic rendering of novel poses and views while supporting photo-realistic lighting effects under arbitrary lighting conditions. Our method achieves state-of-the-art performance in novel view synthesis, novel pose rendering, and relighting.
      + oai:arXiv.org:2512.09335v1
      cs.CV
      cs.MM
      Thu, 11 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      10.1145/3746027.3754851
      Seonghwa Choi, Moonkyeong Choi, Mingyu Jang, Jaekyung Kim, Jianfei Cai, Wen-Huang Cheng, Sanghoon Lee
    
    
      Wavelet-Accelerated Physics-Informed Quantum Neural Network for Multiscale Partial Differential Equations
      - https://arxiv.org/abs/2512.08256
      - arXiv:2512.08256v1 Announce Type: new
-Abstract: This work proposes a wavelet-based physics-informed quantum neural network framework to efficiently address multiscale partial differential equations that involve sharp gradients, stiffness, rapid local variations, and highly oscillatory behavior.
Traditional physics-informed neural networks (PINNs) have demonstrated substantial potential in solving differential equations, and their quantum counterparts, quantum-PINNs, exhibit enhanced representational capacity with fewer trainable parameters. However, both approaches face notable challenges in accurately resolving multiscale features. Furthermore, their reliance on automatic differentiation for constructing loss functions introduces considerable computational overhead, resulting in longer training times. To overcome these challenges, we developed a wavelet-accelerated physics-informed quantum neural network that eliminates the need for automatic differentiation, significantly reducing computational complexity. The proposed framework incorporates the multiresolution property of wavelets within the quantum neural network architecture, thereby enhancing the network's ability to effectively capture both local and global features of multiscale problems. Numerical experiments demonstrate that our proposed method achieves superior accuracy while requiring less than five percent of the trainable parameters compared to classical wavelet-based PINNs, resulting in faster convergence. Moreover, it offers a speedup of three to five times compared to existing quantum PINNs, highlighting the potential of the proposed approach for efficiently solving challenging multiscale and oscillatory problems.
      - oai:arXiv.org:2512.08256v1
      - cs.LG
      - math.AP
      - math.QA
      - Wed, 10 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      - Deepak Gupta, Himanshu Pandey, Ratikanta Behera
    
    
      An Efficient Solver to Helmholtz Equations by Reconstruction Discontinuous Approximation
      + https://arxiv.org/abs/2512.09338
      + arXiv:2512.09338v1 Announce Type: new
+Abstract: In this paper, an efficient solver for the Helmholtz equation using a novel approximation space is developed. The ingredients of the method include the recently proposed approximation space, a widely used discontinuous Galerkin scheme, and a linear system solver with a natural preconditioner.
Compared with traditional discontinuous Galerkin methods, the new method is more efficient in the following sense. The numerical performance of the new method shows that: 1) a much smaller error can be achieved with the same number of degrees of freedom; 2) the sparse matrix therein has far fewer nonzero entries, so that both the storage space and the solution time cost for the iterative solver are reduced; 3) the preconditioner is proved to be optimal with respect to the mesh size in the absorbing case. This advantage becomes more pronounced as the approximation order increases.
      + oai:arXiv.org:2512.09338v1
      math.NA
      cs.NA
      Thu, 11 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Shuhai Zhao
    
    
      Geometric-Stochastic Multimodal Deep Learning for Predictive Modeling of SUDEP and Stroke Vulnerability
      - https://arxiv.org/abs/2512.08257
      - arXiv:2512.08257v1 Announce Type: new
-Abstract: Sudden Unexpected Death in Epilepsy (SUDEP) and acute ischemic stroke are life-threatening conditions involving complex interactions across cortical, brainstem, and autonomic systems. We present a unified geometric-stochastic multimodal deep learning framework that integrates EEG, ECG, respiration, SpO2, EMG, and fMRI signals to model SUDEP and stroke vulnerability. The approach combines Riemannian manifold embeddings, Lie-group invariant feature representations, fractional stochastic dynamics, Hamiltonian energy-flow modeling, and cross-modal attention mechanisms. Stroke propagation is modeled using fractional epidemic diffusion over structural brain graphs. Experiments on the MULTI-CLARID dataset demonstrate improved predictive accuracy and interpretable biomarkers derived from manifold curvature, fractional memory indices, attention entropy, and diffusion centrality.
The proposed framework provides a mathematically principled foundation for early detection, risk stratification, and interpretable multimodal modeling in neural-autonomic disorders. - oai:arXiv.org:2512.08257v1 + Visual Categorization Across Minds and Models: Cognitive Analysis of Human Labeling and Neuro-Symbolic Integration + https://arxiv.org/abs/2512.09340 + arXiv:2512.09340v1 Announce Type: new +Abstract: Understanding how humans and AI systems interpret ambiguous visual stimuli offers critical insight into the nature of perception, reasoning, and decision-making. This paper examines image labeling performance across human participants and deep neural networks, focusing on low-resolution, perceptually degraded stimuli. Drawing from computational cognitive science, cognitive architectures, and connectionist-symbolic hybrid models, we contrast human strategies such as analogical reasoning, shape-based recognition, and confidence modulation with AI's feature-based processing. Grounded in Marr's tri-level hypothesis, Simon's bounded rationality, and Thagard's frameworks of representation and emotion, we analyze participant responses in relation to Grad-CAM visualizations of model attention. Human behavior is further interpreted through cognitive principles modeled in ACT-R and Soar, revealing layered and heuristic decision strategies under uncertainty. Our findings highlight key parallels and divergences between biological and artificial systems in representation, inference, and confidence calibration. The analysis motivates future neuro-symbolic architectures that unify structured symbolic reasoning with connectionist representations. Such architectures, informed by principles of embodiment, explainability, and cognitive alignment, offer a path toward AI systems that are not only performant but also interpretable and cognitively grounded. 
+ oai:arXiv.org:2512.09340v1 + cs.AI + cs.CV cs.LG - eess.IV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Preksha Girish, Rachana Mysore, Mahanthesha U, Shrey Kumar, Misbah Fatimah Annigeri, Tanish Jain + http://creativecommons.org/licenses/by-sa/4.0/ + Chethana Prasad Kabgere - Beyond Traditional Diagnostics: Transforming Patient-Side Information into Predictive Insights with Knowledge Graphs and Prototypes - https://arxiv.org/abs/2512.08261 - arXiv:2512.08261v1 Announce Type: new -Abstract: Predicting diseases solely from patient-side information, such as demographics and self-reported symptoms, has attracted significant research attention due to its potential to enhance patient awareness, facilitate early healthcare engagement, and improve healthcare system efficiency. However, existing approaches encounter critical challenges, including imbalanced disease distributions and a lack of interpretability, resulting in biased or unreliable predictions. To address these issues, we propose the Knowledge graph-enhanced, Prototype-aware, and Interpretable (KPI) framework. KPI systematically integrates structured and trusted medical knowledge into a unified disease knowledge graph, constructs clinically meaningful disease prototypes, and employs contrastive learning to enhance predictive accuracy, which is particularly important for long-tailed diseases. Additionally, KPI utilizes large language models (LLMs) to generate patient-specific, medically relevant explanations, thereby improving interpretability and reliability. Extensive experiments on real-world datasets demonstrate that KPI outperforms state-of-the-art methods in predictive accuracy and provides clinically valid explanations that closely align with patient narratives, highlighting its practical value for patient-centered healthcare delivery. 
- oai:arXiv.org:2512.08261v1
      - cs.AI
      - Wed, 10 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Yibowen Zhao, Yinan Zhang, Zhixiang Su, Lizhen Cui, Chunyan Miao
    
    
      Development and Testing for Perception Based Autonomous Landing of a Long-Range QuadPlane
      + https://arxiv.org/abs/2512.09343
      + arXiv:2512.09343v1 Announce Type: new
+Abstract: QuadPlanes combine the range efficiency of fixed-wing aircraft with the maneuverability of multi-rotor platforms for long-range autonomous missions. In GPS-denied or cluttered urban environments, perception-based landing is vital for reliable operation. Unlike structured landing zones, real-world sites are unstructured and highly variable, requiring strong generalization capabilities from the perception system. Deep neural networks (DNNs) provide a scalable solution for learning landing site features across diverse visual and environmental conditions. While perception-driven landing has been shown in simulation, real-world deployment introduces significant challenges. Payload and volume constraints limit high-performance edge AI devices like the NVIDIA Jetson Orin Nano, which are crucial for real-time detection and control. Accurate pose estimation during descent is necessary, especially in the absence of GPS, and relies on dependable visual-inertial odometry. Achieving this with limited edge AI resources requires careful optimization of the entire deployment framework. The flight characteristics of large QuadPlanes further complicate the problem. These aircraft exhibit high inertia, reduced thrust vectoring, and slow response times, all of which complicate stable landing maneuvers. This work presents a lightweight QuadPlane system for efficient vision-based autonomous landing and visual-inertial odometry, specifically developed for long-range QuadPlane operations such as aerial monitoring. It describes the hardware platform, sensor configuration, and embedded computing architecture designed to meet demanding real-time and physical constraints.
This establishes a foundation for deploying autonomous landing in dynamic, unstructured, GPS-denied environments. + oai:arXiv.org:2512.09343v1 + cs.RO + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yibowen Zhao, Yinan Zhang, Zhixiang Su, Lizhen Cui, Chunyan Miao + Ashik E Rasul, Humaira Tasnim, Ji Yu Kim, Young Hyun Lim, Scott Schmitz, Bruce W. Jo, Hyung-Jin Yoon - RLCNet: An end-to-end deep learning framework for simultaneous online calibration of LiDAR, RADAR, and Camera - https://arxiv.org/abs/2512.08262 - arXiv:2512.08262v1 Announce Type: new -Abstract: Accurate extrinsic calibration of LiDAR, RADAR, and camera sensors is essential for reliable perception in autonomous vehicles. Still, it remains challenging due to factors such as mechanical vibrations and cumulative sensor drift in dynamic environments. This paper presents RLCNet, a novel end-to-end trainable deep learning framework for the simultaneous online calibration of these multimodal sensors. Validated on real-world datasets, RLCNet is designed for practical deployment and demonstrates robust performance under diverse conditions. To support real-time operation, an online calibration framework is introduced that incorporates a weighted moving average and outlier rejection, enabling dynamic adjustment of calibration parameters with reduced prediction noise and improved resilience to drift. An ablation study highlights the significance of architectural choices, while comparisons with existing methods demonstrate the superior accuracy and robustness of the proposed approach. 
- oai:arXiv.org:2512.08262v1
      - cs.CV
      - cs.RO
      - Wed, 10 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Hafeez Husain Cholakkal, Stefano Arrigoni, Francesco Braghin
    
    
      Eunomia: A Multicontroller Domain Partitioning Framework in Hierarchical Satellite Network
      + https://arxiv.org/abs/2512.09345
      + arXiv:2512.09345v1 Announce Type: new
+Abstract: With the rise of mega-satellite constellations, the integration of hierarchical non-terrestrial and terrestrial networks has become a cornerstone of 6G coverage enhancements. In these hierarchical satellite networks, controllers manage satellite switches within their assigned domains. However, the high mobility of LEO satellites and field-of-view (FOV) constraints pose fundamental challenges to efficient domain partitioning. Centralized control approaches face scalability bottlenecks, while distributed architectures with onboard controllers often disregard FOV limitations, leading to excessive signaling overhead. LEO satellites outside a controller's FOV require an average of five additional hops, resulting in a 10.6-fold increase in response time. To address these challenges, we propose Eunomia, a three-step domain-partitioning framework that leverages movement-aware FOV segmentation within a hybrid control plane combining ground stations and MEO satellites. Eunomia reduces control plane latency by constraining domains to FOV-aware regions and ensures single-hop signaling. It further balances traffic load through spectral clustering on a Control Overhead Relationship Graph and optimizes controller assignment via the Kuhn-Munkres algorithm. We implement Eunomia on the Plotinus emulation platform with realistic constellation parameters. Experimental results demonstrate that Eunomia reduces request loss by up to 58.3%, control overhead by up to 50.3%, and algorithm execution time by 77.7%, significantly outperforming current state-of-the-art solutions.
+ oai:arXiv.org:2512.09345v1 + cs.NI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hafeez Husain Cholakkal, Stefano Arrigoni, Francesco Braghin + http://creativecommons.org/licenses/by/4.0/ + Qi Zhang, Kun Qiu, Zhe Chen, Wenjun Zhu, Xiaofan Xu, Ping Du, Yue Gao - Mathematical Foundations of Neural Tangents and Infinite-Width Networks - https://arxiv.org/abs/2512.08264 - arXiv:2512.08264v1 Announce Type: new -Abstract: We investigate the mathematical foundations of neural networks in the infinite-width regime through the Neural Tangent Kernel (NTK). We propose the NTK-Eigenvalue-Controlled Residual Network (NTK-ECRN), an architecture integrating Fourier feature embeddings, residual connections with layerwise scaling, and stochastic depth to enable rigorous analysis of kernel evolution during training. Our theoretical contributions include deriving bounds on NTK dynamics, characterizing eigenvalue evolution, and linking spectral properties to generalization and optimization stability. Empirical results on synthetic and benchmark datasets validate the predicted kernel behavior and demonstrate improved training stability and generalization. This work provides a comprehensive framework bridging infinite-width theory and practical deep-learning architectures. - oai:arXiv.org:2512.08264v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + COVLM-RL: Critical Object-Oriented Reasoning for Autonomous Driving Using VLM-Guided Reinforcement Learning + https://arxiv.org/abs/2512.09349 + arXiv:2512.09349v1 Announce Type: new +Abstract: End-to-end autonomous driving frameworks face persistent challenges in generalization, training efficiency, and interpretability. While recent methods leverage Vision-Language Models (VLMs) through supervised learning on large-scale datasets to improve reasoning, they often lack robustness in novel scenarios. 
Conversely, reinforcement learning (RL)-based approaches enhance adaptability but remain data-inefficient and lack transparent decision-making. To address these limitations, we propose COVLM-RL, a novel end-to-end driving framework that integrates Critical Object-oriented (CO) reasoning with VLM-guided RL. Specifically, we design a Chain-of-Thought (CoT) prompting strategy that enables the VLM to reason over critical traffic elements and generate high-level semantic decisions, effectively transforming multi-view visual inputs into structured semantic decision priors. These priors reduce the input dimensionality and inject task-relevant knowledge into the RL loop, accelerating training and improving policy interpretability. However, bridging high-level semantic guidance with continuous low-level control remains non-trivial. To this end, we introduce a consistency loss that encourages alignment between the VLM's semantic plans and the RL agent's control outputs, enhancing interpretability and training stability. Experiments conducted in the CARLA simulator demonstrate that COVLM-RL significantly improves the success rate by 30% in trained driving environments and by 50% in previously unseen environments, highlighting its strong generalization capability.
      + oai:arXiv.org:2512.09349v1
      cs.RO
      Thu, 11 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Lin Li, Yuxin Cai, Jianwu Fang, Jianru Xue, Chen Lv
    
    
      Theoretical Studies of Sub-THz Active Split-Ring Resonators for Near-Field Imaging
      - https://arxiv.org/abs/2512.08265
      - arXiv:2512.08265v1 Announce Type: new
-Abstract: This paper develops a theoretical framework for the design of Active Split-Ring Resonators (ASRRs).
An ASRR is a Split-Ring Resonator (SRR) equipped with a tunable negative resistor, enabling both switchability and quality factor boosting and tuning. These properties make ASRRs well-suited for integration into dense arrays on silicon chips, where pixelated near-fields are generated and leveraged for high-resolution 2D imaging of samples. Such imagers pave the way for real-time, non-invasive, and low-cost imaging of human body tissue. The paper investigates ASRR coupling to host transmission lines, nonlinear effects, signal flow, and the influence of various noise sources on detection performance. Verified through simulations, these studies provide design guidelines for optimizing the Signal-to-Noise Ratio (SNR) and power consumption of a single pixel, while adhering to the constraints of a scalable array. - oai:arXiv.org:2512.08265v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + TextGuider: Training-Free Guidance for Text Rendering via Attention Alignment + https://arxiv.org/abs/2512.09350 + arXiv:2512.09350v1 Announce Type: new +Abstract: Despite recent advances, diffusion-based text-to-image models still struggle with accurate text rendering. Several studies have proposed fine-tuning or training-free refinement methods for accurate text rendering. However, the critical issue of text omission, where the desired text is partially or entirely missing, remains largely overlooked. In this work, we propose TextGuider, a novel training-free method that encourages accurate and complete text appearance by aligning textual content tokens and text regions in the image. Specifically, we analyze attention patterns in MM-DiT models, particularly for text-related tokens intended to be rendered in the image. Leveraging this observation, we apply latent guidance during the early stage of denoising steps based on two loss functions that we introduce. 
Our method achieves state-of-the-art performance in test-time text rendering, with significant gains in recall and strong results in OCR accuracy and CLIP score.
      + oai:arXiv.org:2512.09350v1
      cs.CV
      Thu, 11 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Kanghyun Baek, Sangyub Lee, Jin Young Choi, Jaewoo Song, Daemin Park, Jooyoung Choi, Chaehun Shin, Bohyung Han, Sungroh Yoon
    
    
      Token Sugar: Making Source Code Sweeter for LLMs through Token-Efficient Shorthand
      - https://arxiv.org/abs/2512.08266
      - arXiv:2512.08266v1 Announce Type: new
-Abstract: Large language models (LLMs) have shown exceptional performance in code generation and understanding tasks, yet their high computational costs hinder broader adoption. One important factor is the inherent verbosity of programming languages, such as unnecessary formatting elements and lengthy boilerplate code. This leads to inflated token counts in both input and generated outputs, which increases inference costs and slows down the generation process. Prior work addresses this by simplifying programming language grammar, reducing token usage across both code understanding and generation tasks. However, it is confined to syntactic transformations, leaving significant opportunities for token reduction unrealized at the semantic level.
 - In this work, we propose Token Sugar, a concept that replaces frequent and verbose code patterns with reversible, token-efficient shorthand in the source code. To realize this concept in practice, we designed a systematic solution that mines high-frequency, token-heavy patterns from a code corpus, maps each to a unique shorthand, and integrates them into LLM pretraining via code transformation. With this solution, we obtain 799 (code pattern, shorthand) pairs, which can reduce source-code token counts by up to 15.1% and are complementary to existing syntax-focused methods.
We further trained three widely used LLMs on Token Sugar-augmented data. Experimental results show that these models not only achieve significant token savings (up to 11.2% reduction) during generation but also maintain near-identical Pass@1 scores compared to baselines trained on unprocessed code.
      - oai:arXiv.org:2512.08266v1
      - cs.SE
      - Wed, 10 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Zhensu Sun, Chengran Yang, Xiaoning Du, Zhou Yang, Li Li, David Lo
    
    
      Video-QTR: Query-Driven Temporal Reasoning Framework for Lightweight Video Understanding
      + https://arxiv.org/abs/2512.09354
      + arXiv:2512.09354v1 Announce Type: new
+Abstract: The rapid development of multimodal large language models (MLLMs) has significantly expanded the scope of visual language reasoning, enabling unified systems to interpret and describe complex visual content. However, applying these models to long-video understanding remains computationally intensive. Dense frame encoding generates excessive visual tokens, leading to high memory consumption, redundant computation, and limited scalability in real-world applications. This inefficiency highlights a key limitation of the traditional process-then-reason paradigm, which analyzes visual streams exhaustively before semantic reasoning. To address this challenge, we introduce Video-QTR (Query-Driven Temporal Reasoning), a lightweight framework that redefines video comprehension as a query-guided reasoning process. Instead of encoding every frame, Video-QTR dynamically allocates perceptual resources based on the semantic intent of the query, creating an adaptive feedback loop between reasoning and perception. Extensive experiments across five benchmarks, including MSVD-QA, ActivityNet-QA, MovieChat, and Video-MME, demonstrate that Video-QTR achieves state-of-the-art performance while reducing input frame consumption by up to 73%. These results confirm that query-driven temporal reasoning provides an efficient and scalable solution for video understanding.
+ oai:arXiv.org:2512.09354v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhensu Sun, Chengran Yang, Xiaoning Du, Zhou Yang, Li Li, David Lo + Xinkui Zhao, Zuxin Wang, Yifan Zhang, Guanjie Cheng, Yueshen Xu, Shuiguang Deng, Chang Liu, Naibo Wang, Jianwei Yin - SOFA-FL: Self-Organizing Hierarchical Federated Learning with Adaptive Clustered Data Sharing - https://arxiv.org/abs/2512.08267 - arXiv:2512.08267v1 Announce Type: new -Abstract: Federated Learning (FL) faces significant challenges in evolving environments, particularly regarding data heterogeneity and the rigidity of fixed network topologies. To address these issues, this paper proposes \textbf{SOFA-FL} (Self-Organizing Hierarchical Federated Learning with Adaptive Clustered Data Sharing), a novel framework that enables hierarchical federated systems to self-organize and adapt over time. - The framework is built upon three core mechanisms: (1) \textbf{Dynamic Multi-branch Agglomerative Clustering (DMAC)}, which constructs an initial efficient hierarchical structure; (2) \textbf{Self-organizing Hierarchical Adaptive Propagation and Evolution (SHAPE)}, which allows the system to dynamically restructure its topology through atomic operations -- grafting, pruning, consolidation, and purification -- to adapt to changes in data distribution; and (3) \textbf{Adaptive Clustered Data Sharing}, which mitigates data heterogeneity by enabling controlled partial data exchange between clients and cluster nodes. - By integrating these mechanisms, SOFA-FL effectively captures dynamic relationships among clients and enhances personalization capabilities without relying on predetermined cluster structures. 
- oai:arXiv.org:2512.08267v1 + Branching Strategies Based on Subgraph GNNs: A Study on Theoretical Promise versus Practical Reality + https://arxiv.org/abs/2512.09355 + arXiv:2512.09355v1 Announce Type: new +Abstract: Graph Neural Networks (GNNs) have emerged as a promising approach for ``learning to branch'' in Mixed-Integer Linear Programming (MILP). While standard Message-Passing GNNs (MPNNs) are efficient, they theoretically lack the expressive power to fully represent MILP structures. Conversely, higher-order GNNs (like 2-FGNNs) are expressive but computationally prohibitive. In this work, we investigate Subgraph GNNs as a theoretical middle ground. Crucially, while previous work [Chen et al., 2025] demonstrated that GNNs with 3-WL expressive power can approximate Strong Branching, we prove a sharper result: node-anchored Subgraph GNNs whose expressive power is strictly lower than 3-WL [Zhang et al., 2023] are sufficient to approximate Strong Branching scores. However, our extensive empirical evaluation on four benchmark datasets reveals a stark contrast between theory and practice. While node-anchored Subgraph GNNs theoretically offer superior branching decisions, their $O(n)$ complexity overhead results in significant memory bottlenecks and slower solving times than MPNNs and heuristics. Our results indicate that for MILP branching, the computational cost of expressive GNNs currently outweighs their gains in decision quality, suggesting that future research must focus on efficiency-preserving expressivity. 
+ oai:arXiv.org:2512.09355v1
      cs.LG
      cs.AI
      cs.NA
      math.NA
      Thu, 11 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Junru Zhou, Yicheng Wang, Pan Li
    
    
      A higher-order three-scale computational method for efficient nonlinear thermo-mechanical coupling simulation of heterogeneous structures with multiple spatial scales
      + https://arxiv.org/abs/2512.09357
      + arXiv:2512.09357v1 Announce Type: new
+Abstract: Classical multi-scale methods involving two spatial scales face significant challenges when simulating heterogeneous structures with complicated three-scale spatial configurations. This study proposes an innovative higher-order three-scale (HOTS) computational method, aimed at accurately and efficiently computing the transient nonlinear thermo-mechanical coupling problems of heterogeneous structures with multiple spatial scales. In these heterogeneous structures, temperature-dependent material properties have an important impact on the thermo-mechanical coupling responses, which is of particular interest in this work. First, the detailed macro-meso-micro correlative model with higher-order correction terms is established by recursive two-scale analysis between macro-meso and meso-micro scales, which enables high-accuracy analysis of temperature-dependent nonlinear thermo-mechanical behaviors of heterogeneous structures with complicated three-scale configurations. The local error analysis mathematically illustrates the well-balanced property of the HOTS computational model, endowing it with high computational accuracy. In addition, a two-stage numerical algorithm with off-line and on-line stages is proposed in order to efficiently simulate the nonlinear thermo-mechanical responses of heterogeneous structures with three-level spatial scales and accurately capture their highly oscillatory information at micro-scale.
Finally, the high computational efficiency, high numerical accuracy, and low computational cost of the presented higher-order three-scale computational approach are substantiated via representative numerical experiments. In summary, this scalable and robust HOTS computational approach offers a reliable numerical tool for nonlinear multiphysics simulation of large-scale heterogeneous structures in real-world applications.
      + oai:arXiv.org:2512.09357v1
      math.NA
      cs.NA
      Thu, 11 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Hao Dong, Yanqi Wang, Jiale Linghu, Qiang Ma
    
    
      Direct Product Theorems for Randomized Query Complexity
      - https://arxiv.org/abs/2512.08268
      - arXiv:2512.08268v1 Announce Type: new
-Abstract: We establish two new direct product theorems for the randomized query complexity of Boolean functions.
 - The first shows that computing $n$ copies of a function $f$, even with a small success probability of $\gamma^n$, requires $\Theta(n)$ times the "maximum distributional" query complexity of $f$ with success parameter $\gamma$. This result holds for all success parameters $\gamma$, even when $\gamma$ is very close to $1/2$ or to $1$. As a result, it unifies and generalizes Drucker's direct product theorem (2012) for $\gamma$ bounded away from $\frac12$ and $1$ as well as the strong direct sum theorem of Blais and Brody (2019) for $\gamma\approx 1-1/n$.
 - The second establishes a general "list decoding" direct product theorem that captures many different variants of partial computation tasks related to the function $f^n$ consisting of $n$ copies of $f$. Notably, our list decoding direct product theorem yields a new threshold direct product theorem and other new variants such as the labelled-threshold direct product theorem.
 - Both of these direct product theorems are obtained by taking a new approach.
Instead of directly analyzing the query complexity of algorithms, we introduce a new measure of complexity of functions that we call "discounted score". We show that this measure satisfies a number of useful structural properties, including tensorization, that make it particularly suitable for the study of direct product questions.
      - oai:arXiv.org:2512.08268v1
      - cs.CC
      - Wed, 10 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by-sa/4.0/
      Shalev Ben-David, Eric Blais
    
    
      A Granular Framework for Construction Material Price Forecasting: Econometric and Machine-Learning Approaches
      + https://arxiv.org/abs/2512.09360
      + arXiv:2512.09360v1 Announce Type: new
+Abstract: The persistent volatility of construction material prices poses significant risks to cost estimation, budgeting, and project delivery, underscoring the urgent need for granular and scalable forecasting methods. This study develops a forecasting framework that leverages the Construction Specifications Institute (CSI) MasterFormat as the target data structure, enabling predictions at the six-digit section level and supporting detailed cost projections across a wide spectrum of building materials. To enhance predictive accuracy, the framework integrates explanatory variables such as raw material prices, commodity indexes, and macroeconomic indicators. Four time-series models, Long Short-Term Memory (LSTM), Autoregressive Integrated Moving Average (ARIMA), Vector Error Correction Model (VECM), and Chronos-Bolt, were evaluated under both baseline configurations (using CSI data only) and extended versions with explanatory variables. Results demonstrate that incorporating explanatory variables significantly improves predictive performance across all models. Among the tested approaches, the LSTM model consistently achieved the highest accuracy, with RMSE values as low as 1.390 and MAPE values of 0.957, representing improvements of up to 59% over the traditional statistical time-series model, ARIMA.
Validation across multiple CSI divisions confirmed the framework's scalability, while Division 06 (Wood, Plastics, and Composites) is presented in detail as a demonstration case. This research offers a robust methodology that enables owners and contractors to improve budgeting practices and achieve more reliable cost estimation at the Definitive level. + oai:arXiv.org:2512.09360v1 + cs.LG + econ.EM + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Shalev Ben-David, Eric Blais + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Boge Lyu, Qianye Yin, Iris Denise Tommelein, Hanyang Liu, Karnamohit Ranka, Karthik Yeluripati, Junzhe Shi - EgoX: Egocentric Video Generation from a Single Exocentric Video - https://arxiv.org/abs/2512.08269 - arXiv:2512.08269v1 Announce Type: new -Abstract: Egocentric perception enables humans to experience and understand the world directly from their own point of view. Translating exocentric (third-person) videos into egocentric (first-person) videos opens up new possibilities for immersive understanding but remains highly challenging due to extreme camera pose variations and minimal view overlap. This task requires faithfully preserving visible content while synthesizing unseen regions in a geometrically consistent manner. To achieve this, we present EgoX, a novel framework for generating egocentric videos from a single exocentric input. EgoX leverages the pretrained spatio temporal knowledge of large-scale video diffusion models through lightweight LoRA adaptation and introduces a unified conditioning strategy that combines exocentric and egocentric priors via width and channel wise concatenation. Additionally, a geometry-guided self-attention mechanism selectively attends to spatially relevant regions, ensuring geometric coherence and high visual fidelity. 
Our approach achieves coherent and realistic egocentric video generation while demonstrating strong scalability and robustness across unseen and in-the-wild videos. - oai:arXiv.org:2512.08269v1 + StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation + https://arxiv.org/abs/2512.09363 + arXiv:2512.09363v1 Announce Type: new +Abstract: The growing adoption of XR devices has fueled strong demand for high-quality stereo video, yet its production remains costly and artifact-prone. To address this challenge, we present StereoWorld, an end-to-end framework that repurposes a pretrained video generator for high-fidelity monocular-to-stereo video generation. Our framework jointly conditions the model on the monocular video input while explicitly supervising the generation with a geometry-aware regularization to ensure 3D structural fidelity. A spatio-temporal tiling scheme is further integrated to enable efficient, high-resolution synthesis. To enable large-scale training and evaluation, we curate a high-definition stereo video dataset containing over 11M frames aligned to natural human interpupillary distance (IPD). Extensive experiments demonstrate that StereoWorld substantially outperforms prior methods, generating stereo videos with superior visual fidelity and geometric consistency. The project webpage is available at https://ke-xing.github.io/StereoWorld/. + oai:arXiv.org:2512.09363v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Taewoong Kang, Kinam Kim, Dohyeon Kim, Minho Park, Junha Hyung, Jaegul Choo + http://creativecommons.org/publicdomain/zero/1.0/ + Ke Xing, Longfei Li, Yuyang Yin, Hanwen Liang, Guixun Luo, Chen Fang, Jue Wang, Konstantinos N. 
Plataniotis, Xiaojie Jin, Yao Zhao, Yunchao Wei - Reasoning Models Ace the CFA Exams - https://arxiv.org/abs/2512.08270 - arXiv:2512.08270v1 Announce Type: new -Abstract: Previous research has reported that large language models (LLMs) demonstrate poor performance on the Chartered Financial Analyst (CFA) exams. However, recent reasoning models have achieved strong results on graduate-level academic and professional examinations across various disciplines. In this paper, we evaluate state-of-the-art reasoning models on a set of mock CFA exams consisting of 980 questions across three Level I exams, two Level II exams, and three Level III exams. Using the same pass/fail criteria from prior studies, we find that most models clear all three levels. The models that pass, ordered by overall performance, are Gemini 3.0 Pro, Gemini 2.5 Pro, GPT-5, Grok 4, Claude Opus 4.1, and DeepSeek-V3.1. Specifically, Gemini 3.0 Pro achieves a record score of 97.6% on Level I. Performance is also strong on Level II, led by GPT-5 at 94.3%. On Level III, Gemini 2.5 Pro attains the highest score with 86.4% on multiple-choice questions while Gemini 3.0 Pro achieves 92.0% on constructed-response questions. - oai:arXiv.org:2512.08270v1 - cs.AI - cs.CL - q-fin.GN - Wed, 10 Dec 2025 00:00:00 -0500 + ASSIST-3D: Adapted Scene Synthesis for Class-Agnostic 3D Instance Segmentation + https://arxiv.org/abs/2512.09364 + arXiv:2512.09364v1 Announce Type: new +Abstract: Class-agnostic 3D instance segmentation tackles the challenging task of segmenting all object instances, including previously unseen ones, without semantic class reliance. Current methods struggle with generalization due to the scarce annotated 3D scene data or noisy 2D segmentations. While synthetic data generation offers a promising solution, existing 3D scene synthesis methods fail to simultaneously satisfy geometry diversity, context complexity, and layout reasonability, each essential for this task. 
To address these needs, we propose an Adapted 3D Scene Synthesis pipeline for class-agnostic 3D Instance SegmenTation, termed ASSIST-3D, to synthesize proper data for model generalization enhancement. Specifically, ASSIST-3D features three key innovations, including 1) Heterogeneous Object Selection from extensive 3D CAD asset collections, incorporating randomness in object sampling to maximize geometric and contextual diversity; 2) Scene Layout Generation through LLM-guided spatial reasoning combined with depth-first search for reasonable object placements; and 3) Realistic Point Cloud Construction via multi-view RGB-D image rendering and fusion from the synthetic scenes, closely mimicking real-world sensor data acquisition. Experiments on ScanNetV2, ScanNet++, and S3DIS benchmarks demonstrate that models trained with ASSIST-3D-generated data significantly outperform existing methods. Further comparisons underscore the superiority of our purpose-built pipeline over existing 3D scene synthesis approaches. + oai:arXiv.org:2512.09364v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500
- oai:arXiv.org:2512.08271v1 - cs.RO - cs.CV + KGOT: Unified Knowledge Graph and Optimal Transport Pseudo-Labeling for Molecule-Protein Interaction Prediction + https://arxiv.org/abs/2512.09365 + arXiv:2512.09365v1 Announce Type: new +Abstract: Predicting molecule-protein interactions (MPIs) is a fundamental task in computational biology, with crucial applications in drug discovery and molecular function annotation. However, existing MPI models face two major challenges. First, the scarcity of labeled molecule-protein pairs significantly limits model performance, as available datasets capture only a small fraction of biological relevant interactions. Second, most methods rely solely on molecular and protein features, ignoring broader biological context such as genes, metabolic pathways, and functional annotations that could provide essential complementary information. To address these limitations, our framework first aggregates diverse biological datasets, including molecular, protein, genes and pathway-level interactions, and then develop an optimal transport-based approach to generate high-quality pseudo-labels for unlabeled molecule-protein pairs, leveraging the underlying distribution of known interactions to guide label assignment. By treating pseudo-labeling as a mechanism for bridging disparate biological modalities, our approach enables the effective use of heterogeneous data to enhance MPI prediction. We evaluate our framework on multiple MPI datasets including virtual screening tasks and protein retrieval tasks, demonstrating substantial improvements over state-of-the-art methods in prediction accuracies and zero shot ability across unseen interactions. Beyond MPI prediction, our approach provides a new paradigm for leveraging diverse biological data sources to tackle problems traditionally constrained by single- or bi-modal learning, paving the way for future advances in computational biology and drug discovery. 
+ oai:arXiv.org:2512.09365v1 cs.LG - eess.IV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Srijan Dokania, Dharini Raghavan + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Jiayu Qin, Zhengquan Luo, Guy Tadmor, Changyou Chen, David Zeevi, Zhiqiang Xu - AgentEval: Generative Agents as Reliable Proxies for Human Evaluation of AI-Generated Content - https://arxiv.org/abs/2512.08273 - arXiv:2512.08273v1 Announce Type: new -Abstract: Modern businesses are increasingly challenged by the time and expense required to generate and assess high-quality content. Human writers face time constraints, and extrinsic evaluations can be costly. While Large Language Models (LLMs) offer potential in content creation, concerns about the quality of AI-generated content persist. Traditional evaluation methods, like human surveys, further add operational costs, highlighting the need for efficient, automated solutions. This research introduces Generative Agents as a means to tackle these challenges. These agents can rapidly and cost-effectively evaluate AI-generated content, simulating human judgment by rating aspects such as coherence, interestingness, clarity, fairness, and relevance. By incorporating these agents, businesses can streamline content generation and ensure consistent, high-quality output while minimizing reliance on costly human evaluations. The study provides critical insights into enhancing LLMs for producing business-aligned, high-quality content, offering significant advancements in automated content generation and evaluation. 
- oai:arXiv.org:2512.08273v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Procurement Auctions with Predictions: Improved Frugality for Facility Location + https://arxiv.org/abs/2512.09367 + arXiv:2512.09367v1 Announce Type: new +Abstract: We study the problem of designing procurement auctions for the strategic uncapacitated facility location problem: a company needs to procure a set of facility locations in order to serve its customers and each facility location is owned by a strategic agent. Each owner has a private cost for providing access to their facility (e.g., renting it or selling it to the company) and needs to be compensated accordingly. The goal is to design truthful auctions that decide which facilities the company should procure and how much to pay the corresponding owners, aiming to minimize the total cost, i.e., the monetary cost paid to the owners and the connection cost suffered by the customers (their distance to the nearest facility). We evaluate the performance of these auctions using the \emph{frugality ratio}. + We first analyze the performance of the classic VCG auction in this context and prove that its frugality ratio is exactly $3$. We then leverage the learning-augmented framework and design auctions that are augmented with predictions regarding the owners' private costs. Specifically, we propose a family of learning-augmented auctions that achieve significant payment reductions when the predictions are accurate, leading to much better frugality ratios. At the same time, we demonstrate that these auctions remain robust even if the predictions are arbitrarily inaccurate, and maintain reasonable frugality ratios even under adversarially chosen predictions. We finally provide a family of ``error-tolerant'' auctions that maintain improved frugality ratios even if the predictions are only approximately accurate, and we provide upper bounds on their frugality ratio as a function of the prediction error. 
+ oai:arXiv.org:2512.09367v1 + cs.GT + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Thanh Vu, Richi Nayak, Thiru Balasubramaniam + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Eric Balkanski, Nicholas DeFilippis, Vasilis Gkatzelis, Xizhi Tan - gHAWK: Local and Global Structure Encoding for Scalable Training of Graph Neural Networks on Knowledge Graphs - https://arxiv.org/abs/2512.08274 - arXiv:2512.08274v1 Announce Type: new -Abstract: Knowledge Graphs (KGs) are a rich source of structured, heterogeneous data, powering a wide range of applications. A common approach to leverage this data is to train a graph neural network (GNN) on the KG. However, existing message-passing GNNs struggle to scale to large KGs because they rely on the iterative message passing process to learn the graph structure, which is inefficient, especially under mini-batch training, where a node sees only a partial view of its neighborhood. In this paper, we address this problem and present gHAWK, a novel and scalable GNN training framework for large KGs. The key idea is to precompute structural features for each node that capture its local and global structure before GNN training even begins. Specifically, gHAWK introduces a preprocessing step that computes: (a)~Bloom filters to compactly encode local neighborhood structure, and (b)~TransE embeddings to represent each node's global position in the graph. These features are then fused with any domain-specific features (e.g., text embeddings), producing a node feature vector that can be incorporated into any GNN technique. By augmenting message-passing training with structural priors, gHAWK significantly reduces memory usage, accelerates convergence, and improves model accuracy. 
Extensive experiments on large datasets from the Open Graph Benchmark (OGB) demonstrate that gHAWK achieves state-of-the-art accuracy and lower training time on both node property prediction and link prediction tasks, topping the OGB leaderboard for three graphs. - oai:arXiv.org:2512.08274v1 + CFLight: Enhancing Safety with Traffic Signal Control through Counterfactual Learning + https://arxiv.org/abs/2512.09368 + arXiv:2512.09368v1 Announce Type: new +Abstract: Traffic accidents result in millions of injuries and fatalities globally, with a significant number occurring at intersections each year. Traffic Signal Control (TSC) is an effective strategy for enhancing safety at these urban junctures. Despite the growing popularity of Reinforcement Learning (RL) methods in optimizing TSC, these methods often prioritize driving efficiency over safety, thus failing to address the critical balance between these two aspects. Additionally, these methods usually need more interpretability. CounterFactual (CF) learning is a promising approach for various causal analysis fields. In this study, we introduce a novel framework to improve RL for safety aspects in TSC. This framework introduces a novel method based on CF learning to address the question: ``What if, when an unsafe event occurs, we backtrack to perform alternative actions, and will this unsafe event still occur in the subsequent period?'' To answer this question, we propose a new structure causal model to predict the result after executing different actions, and we propose a new CF module that integrates with additional ``X'' modules to promote safe RL practices. Our new algorithm, CFLight, which is derived from this framework, effectively tackles challenging safety events and significantly improves safety at intersections through a near-zero collision control strategy. 
Through extensive numerical experiments on both real-world and synthetic datasets, we demonstrate that CFLight reduces collisions and improves overall traffic performance compared to conventional RL methods and the recent safe RL model. Moreover, our method represents a generalized and safe framework for RL methods, opening possibilities for applications in other domains. The data and code are available on GitHub at https://github.com/MJLee00/CFLight-Enhancing-Safety-with-Traffic-Signal-Control-through-Counterfactual-Learning. + oai:arXiv.org:2512.09368v1 + cs.LG
By enabling reproducible and scalable experimentation, FedLAD bridges the gap between FL frameworks and LAD requirements, providing a solid foundation for future research. Project code is publicly available at: https://github.com/AA-cityu/FedLAD. - oai:arXiv.org:2512.08277v1 - cs.SE + Are Hypervectors Enough? Single-Call LLM Reasoning over Knowledge Graphs + https://arxiv.org/abs/2512.09369 + arXiv:2512.09369v1 Announce Type: new +Abstract: Recent advances in large language models (LLMs) have enabled strong reasoning over both structured and unstructured knowledge. When grounded on knowledge graphs (KGs), however, prevailing pipelines rely on heavy neural encoders to embed and score symbolic paths or on repeated LLM calls to rank candidates, leading to high latency, GPU cost, and opaque decisions that hinder faithful, scalable deployment. We propose PathHD, a lightweight and encoder-free KG reasoning framework that replaces neural path scoring with hyperdimensional computing (HDC) and uses only a single LLM call per query. PathHD encodes relation paths into block-diagonal GHRR hypervectors, ranks candidates with blockwise cosine similarity and Top-K pruning, and then performs a one-shot LLM adjudication to produce the final answer together with cited supporting paths. Technically, PathHD is built on three ingredients: (i) an order-aware, non-commutative binding operator for path composition, (ii) a calibrated similarity for robust hypervector-based retrieval, and (iii) a one-shot adjudication step that preserves interpretability while eliminating per-path LLM scoring. On WebQSP, CWQ, and the GrailQA split, PathHD (i) attains comparable or better Hits@1 than strong neural baselines while using one LLM call per query; (ii) reduces end-to-end latency by $40-60\%$ and GPU memory by $3-5\times$ thanks to encoder-free retrieval; and (iii) delivers faithful, path-grounded rationales that improve error diagnosis and controllability. 
These results indicate that carefully designed HDC representations provide a practical substrate for efficient KG-LLM reasoning, offering a favorable accuracy-efficiency-interpretability trade-off. + oai:arXiv.org:2512.09369v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yihan Liao, Jacky Keung, Zhenyu Mao, Jingyu Zhang, Jialong Li + http://creativecommons.org/licenses/by/4.0/ + Yezi Liu, William Youngwoo Chung, Hanning Chen, Calvin Yeung, Mohsen Imani - Model-Based Diffusion Sampling for Predictive Control in Offline Decision Making - https://arxiv.org/abs/2512.08280 - arXiv:2512.08280v1 Announce Type: new -Abstract: Offline decision-making requires synthesizing reliable behaviors from fixed datasets without further interaction, yet existing generative approaches often yield trajectories that are dynamically infeasible. We propose Model Predictive Diffuser (MPDiffuser), a compositional model-based diffusion framework consisting of: (i) a planner that generates diverse, task-aligned trajectories; (ii) a dynamics model that enforces consistency with the underlying system dynamics; and (iii) a ranker module that selects behaviors aligned with the task objectives. MPDiffuser employs an alternating diffusion sampling scheme, where planner and dynamics updates are interleaved to progressively refine trajectories for both task alignment and feasibility during the sampling process. We also provide a theoretical rationale for this procedure, showing how it balances fidelity to data priors with dynamics consistency. Empirically, the compositional design improves sample efficiency, as it leverages even low-quality data for dynamics learning and adapts seamlessly to novel dynamics. We evaluate MPDiffuser on both unconstrained (D4RL) and constrained (DSRL) offline decision-making benchmarks, demonstrating consistent gains over existing approaches. 
Furthermore, we present a preliminary study extending MPDiffuser to vision-based control tasks, showing its potential to scale to high-dimensional sensory inputs. Finally, we deploy our method on a real quadrupedal robot, showcasing its practicality for real-world control. - oai:arXiv.org:2512.08280v1 - cs.RO - cs.AI - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Optimizing Data Extraction from Materials Science Literature: A Study of Tools Using Large Language Models + https://arxiv.org/abs/2512.09370 + arXiv:2512.09370v1 Announce Type: new +Abstract: Large Language Models (LLMs) are increasingly utilized for large-scale extraction and organization of unstructured data owing to their exceptional Natural Language Processing (NLP) capabilities. Empowering materials design, vast amounts of data from experiments and simulations are scattered across numerous scientific publications, but high-quality experimental databases are scarce. This study considers the effectiveness and practicality of five representative AI tools (ChemDataExtractor, BERT-PSIE, ChatExtract, LangChain, and Kimi) to extract bandgaps from 200 randomly selected Materials Science publications in two presentations (arXiv and publisher versions), comparing the results to those obtained by human processing. Although the integrity of data extraction has not met expectations, encouraging results have been achieved in terms of precision and the ability to eliminate irrelevant papers from human consideration. Our analysis highlights both the strengths and limitations of these tools, offering insights into improving future data extraction techniques for enhanced scientific discovery and innovation. In conjunction with recent research, we provide guidance on feasible improvements for future data extraction methodologies, helping to bridge the gap between unstructured scientific data and structured, actionable databases. 
+ oai:arXiv.org:2512.09370v1 + cs.DL + cond-mat.mtrl-sci + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haldun Balim, Na Li, Yilun Du + http://creativecommons.org/licenses/by/4.0/ + Wenkai Ning, Musen Li, Jeffrey R. Reimers, Rika Kobayashi - Probabilistic Multi-Agent Aircraft Landing Time Prediction - https://arxiv.org/abs/2512.08281 - arXiv:2512.08281v1 Announce Type: new -Abstract: Accurate and reliable aircraft landing time prediction is essential for effective resource allocation in air traffic management. However, the inherent uncertainty of aircraft trajectories and traffic flows poses significant challenges to both prediction accuracy and trustworthiness. Therefore, prediction models should not only provide point estimates of aircraft landing times but also the uncertainties associated with these predictions. Furthermore, aircraft trajectories are frequently influenced by the presence of nearby aircraft through air traffic control interventions such as radar vectoring. Consequently, landing time prediction models must account for multi-agent interactions in the airspace. In this work, we propose a probabilistic multi-agent aircraft landing time prediction framework that provides the landing times of multiple aircraft as distributions. We evaluate the proposed framework using an air traffic surveillance dataset collected from the terminal airspace of the Incheon International Airport in South Korea. The results demonstrate that the proposed model achieves higher prediction accuracy than the baselines and quantifies the associated uncertainties of its outcomes. In addition, the model uncovered underlying patterns in air traffic control through its attention scores, thereby enhancing explainability. 
- oai:arXiv.org:2512.08281v1 - cs.MA - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Intelligent Resilience Testing for Decision-Making Agents with Dual-Mode Surrogate Adaptation + https://arxiv.org/abs/2512.09372 + arXiv:2512.09372v1 Announce Type: new +Abstract: Testing and evaluating decision-making agents remains challenging due to unknown system architectures, limited access to internal states, and the vastness of high-dimensional scenario spaces. Existing testing approaches often rely on surrogate models of decision-making agents to generate large-scale scenario libraries; however, discrepancies between surrogate models and real decision-making agents significantly limit their generalizability and practical applicability. To address this challenge, this paper proposes intelligent resilience testing (IRTest), a unified online adaptive testing framework designed to rapidly adjust to diverse decision-making agents. IRTest initializes with an offline-trained surrogate prediction model and progressively reduces surrogate-to-real gap during testing through two complementary adaptation mechanisms: (i) online neural fine-tuning in data-rich regimes, and (ii) lightweight importance-sampling-based weighting correction in data-limited regimes. A Bayesian optimization strategy, equipped with bias-corrected acquisition functions, guides scenario generation to balance exploration and exploitation in complex testing spaces. Extensive experiments across varying levels of task complexity and system heterogeneity demonstrate that IRTest consistently improves failure-discovery efficiency, testing robustness, and cross-system generalizability. These results highlight the potential of IRTest as a practical solution for scalable, adaptive, and resilient testing of decision-making agents. 
+ oai:arXiv.org:2512.09372v1 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kyungmin Kim, Seokbin Yoon, Keumjin Lee + http://creativecommons.org/licenses/by/4.0/ + Jingxuan Yang, Weichao Xu, Yuchen Shi, Yi Zhang, Shuo Feng, Huaxin Pei - PAVAS: Physics-Aware Video-to-Audio Synthesis - https://arxiv.org/abs/2512.08282 - arXiv:2512.08282v1 Announce Type: new -Abstract: Recent advances in Video-to-Audio (V2A) generation have achieved impressive perceptual quality and temporal synchronization, yet most models remain appearance-driven, capturing visual-acoustic correlations without considering the physical factors that shape real-world sounds. We present Physics-Aware Video-to-Audio Synthesis (PAVAS), a method that incorporates physical reasoning into a latent diffusion-based V2A generation through the Physics-Driven Audio Adapter (Phy-Adapter). The adapter receives object-level physical parameters estimated by the Physical Parameter Estimator (PPE), which uses a Vision-Language Model (VLM) to infer the moving-object mass and a segmentation-based dynamic 3D reconstruction module to recover its motion trajectory for velocity computation. These physical cues enable the model to synthesize sounds that reflect underlying physical factors. To assess physical realism, we curate VGG-Impact, a benchmark focusing on object-object interactions, and introduce Audio-Physics Correlation Coefficient (APCC), an evaluation metric that measures consistency between physical and auditory attributes. Comprehensive experiments show that PAVAS produces physically plausible and perceptually coherent audio, outperforming existing V2A models in both quantitative and qualitative evaluations. Visit https://physics-aware-video-to-audio-synthesis.github.io for demo videos. 
- oai:arXiv.org:2512.08282v1 + FUSER: Feed-Forward MUltiview 3D Registration Transformer and SE(3)$^N$ Diffusion Refinement + https://arxiv.org/abs/2512.09373 + arXiv:2512.09373v1 Announce Type: new +Abstract: Registration of multiview point clouds conventionally relies on extensive pairwise matching to build a pose graph for global synchronization, which is computationally expensive and inherently ill-posed without holistic geometric constraints. This paper proposes FUSER, the first feed-forward multiview registration transformer that jointly processes all scans in a unified, compact latent space to directly predict global poses without any pairwise estimation. To maintain tractability, FUSER encodes each scan into low-resolution superpoint features via a sparse 3D CNN that preserves absolute translation cues, and performs efficient intra- and inter-scan reasoning through a Geometric Alternating Attention module. Particularly, we transfer 2D attention priors from off-the-shelf foundation models to enhance 3D feature interaction and geometric consistency. Building upon FUSER, we further introduce FUSER-DF, an SE(3)$^N$ diffusion refinement framework to correct FUSER's estimates via denoising in the joint SE(3)$^N$ space. FUSER acts as a surrogate multiview registration model to construct the denoiser, and a prior-conditioned SE(3)$^N$ variational lower bound is derived for denoising supervision. Extensive experiments on 3DMatch, ScanNet and ArkitScenes demonstrate that our approach achieves the superior registration accuracy and outstanding computational efficiency. 
+ oai:arXiv.org:2512.09373v1 cs.CV - cs.MM - cs.SD - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Oh Hyun-Bin, Yuhta Takida, Toshimitsu Uesaka, Tae-Hyun Oh, Yuki Mitsufuji + http://creativecommons.org/licenses/by/4.0/ + Haobo Jiang, Jin Xie, Jian Yang, Liang Yu, Jianmin Zheng - Empowering smart app development with SolidGPT: an edge-cloud hybrid AI agent framework - https://arxiv.org/abs/2512.08286 - arXiv:2512.08286v1 Announce Type: new -Abstract: The integration of Large Language Models (LLMs) into mobile and software development workflows faces a persistent tension among three demands: semantic awareness, developer productivity, and data privacy. Traditional cloud-based tools offer strong reasoning but risk data exposure and latency, while on-device solutions lack full-context understanding across codebase and developer tooling. We introduce SolidGPT, an open-source, edge-cloud hybrid developer assistant built on GitHub, designed to enhance code and workspace semantic search. SolidGPT enables developers to: talk to your codebase: interactively query code and project structure, discovering the right methods and modules without manual searching. Automate software project workflows: generate PRDs, task breakdowns, Kanban boards, and even scaffold web app beginnings, with deep integration via VSCode and Notion. Configure private, extensible agents: onboard private code folders (up to approximately 500 files), connect Notion, customize AI agent personas via embedding and in-context training, and deploy via Docker, CLI, or VSCode extension. In practice, SolidGPT empowers developer productivity through: Semantic-rich code navigation: no more hunting through files or wondering where a feature lives. Integrated documentation and task management: seamlessly sync generated PRD content and task boards into developer workflows. 
Privacy-first design: running locally via Docker or VSCode, with full control over code and data, while optionally reaching out to LLM APIs as needed. By combining interactive code querying, automated project scaffolding, and human-AI collaboration, SolidGPT provides a practical, privacy-respecting edge assistant that accelerates real-world development workflows, ideal for intelligent mobile and software engineering contexts. - oai:arXiv.org:2512.08286v1 - cs.SE - cs.AI - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Derandomizing Isolation In Catalytic Logspace + https://arxiv.org/abs/2512.09374 + arXiv:2512.09374v1 Announce Type: new +Abstract: A language is said to be in catalytic logspace if we can test membership using a deterministic logspace machine that has an additional read/write tape filled with arbitrary data whose contents have to be restored to their original value at the end of the computation. The model of catalytic computation was introduced by Buhrman et al [STOC2014]. + As our first result, we obtain a catalytic logspace algorithm for computing a minimum weight witness to a search problem, with small weights, provided the algorithm is given oracle access for the corresponding weighted decision problem. In particular, our reduction yields CL algorithms for the search versions of the following three problems: planar perfect matching, planar exact perfect matching and weighted arborescences in weighted digraphs. + Our second set of results concern the significantly larger class CL^{NP}_{2-round}. We show that CL^{NP}_{2-round} contains SearchSAT and the complexity classes BPP, MA and ZPP^{NP[1]}. While SearchSAT is shown to be in CL^{NP}_{2-round} using the isolation lemma, the other three containments, while based on the compress-or-random technique, use the Nisan-Wigderson [JCSS 1994] based pseudo-random generator. 
These containments show that CL^{NP}_{2-round} resembles ZPP^NP more than P^{NP}, providing some weak evidence that CL is more like ZPP than P. + For our third set of results we turn to isolation well inside catalytic classes. We consider the unambiguous catalytic class CUTISP[poly(n),logn,log^2n] and show that it contains reachability and therefore NL. This is a catalytic version of the result of van Melkebeek & Prakriya [SIAM J. Comput. 2019]. Building on their result, we also show a tradeoff between workspace and catalytic space. Finally, we extend these catalytic upper bounds to LogCFL. + oai:arXiv.org:2512.09374v1 + cs.CC + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - 10.54254/2977-3903/2025.25283 - Advances in Engineering Innovation 16.7 (2025): 86-92 - Liao Hu, Qiteng Wu, Ruoyu Qi + V. Arvind, Srijan Chakraborty, Samir Datta - Synergizing Monetization, Orchestration, and Semantics in Computing Continuum - https://arxiv.org/abs/2512.08288 - arXiv:2512.08288v1 Announce Type: new -Abstract: Industry demands are growing for hyper-distributed applications that span from the cloud to the edge in domains such as smart manufacturing, transportation, and agriculture. Yet today's solutions struggle to meet these demands due to inherent limitations in scalability, interoperability, and trust. In this article, we introduce HERMES (Heterogeneous Computing Continuum with Resource Monetization, Orchestration, and Semantic) - a novel framework designed to transform connectivity and data utilization across the computing continuum. HERMES establishes an open, seamless, and secure environment where resources, from cloud servers to tiny edge devices, can be orchestrated intelligently, data and services can be monetized in a distributed marketplace, and knowledge is shared through semantic interoperability. 
By bridging these key facets, HERMES lays a foundation for a new generation of distributed applications that are more efficient, trustworthy, and autonomous. - oai:arXiv.org:2512.08288v1 - cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Chinmaya Kumar Dehury, Lauri Lov\'en, Praveen Kumar Donta, Ilir Murturi, Schahram Dustdar + Log NeRF: Comparing Spaces for Learning Radiance Fields + https://arxiv.org/abs/2512.09375 + arXiv:2512.09375v1 Announce Type: new +Abstract: Neural Radiance Fields (NeRF) have achieved remarkable results in novel view synthesis, typically using sRGB images for supervision. However, little attention has been paid to the color space in which the network is learning the radiance field representation. Inspired by the Bi-Illuminant Dichromatic Reflection (BIDR) model, which suggests that a logarithmic transformation simplifies the separation of illumination and reflectance, we hypothesize that log RGB space enables NeRF to learn a more compact and effective representation of scene appearance. To test this, we captured approximately 30 videos using a GoPro camera, ensuring linear data recovery through inverse encoding. We trained NeRF models under various color space interpretations (linear, sRGB, GPLog, and log RGB) by converting each network output to a common color space before rendering and loss computation, enforcing representation learning in different color spaces. Quantitative and qualitative evaluations demonstrate that using a log RGB color space consistently improves rendering quality, exhibits greater robustness across scenes, and performs particularly well in low-light conditions while using the same bit-depth input images. Further analysis across different network sizes and NeRF variants confirms the generalization and stability of the log space advantage. 
+ oai:arXiv.org:2512.09375v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Chinmaya Kumar Dehury, Lauri Lov\'en, Praveen Kumar Donta, Ilir Murturi, Schahram Dustdar + Sihe Chen (Northeastern University), Luv Verma (Northeastern University), Bruce A. Maxwell (Northeastern University) - MIRAGE: Misleading Retrieval-Augmented Generation via Black-box and Query-agnostic Poisoning Attacks - https://arxiv.org/abs/2512.08289 - arXiv:2512.08289v1 Announce Type: new -Abstract: Retrieval-Augmented Generation (RAG) systems enhance LLMs with external knowledge but introduce a critical attack surface: corpus poisoning. While recent studies have demonstrated the potential of such attacks, they typically rely on impractical assumptions, such as white-box access or known user queries, thereby underestimating the difficulty of real-world exploitation. In this paper, we bridge this gap by proposing MIRAGE, a novel multi-stage poisoning pipeline designed for strict black-box and query-agnostic environments. Operating on surrogate model feedback, MIRAGE functions as an automated optimization framework that integrates three key mechanisms: it utilizes persona-driven query synthesis to approximate latent user search distributions, employs semantic anchoring to imperceptibly embed these intents for high retrieval visibility, and leverages an adversarial variant of Test-Time Preference Optimization (TPO) to maximize persuasion. To rigorously evaluate this threat, we construct a new benchmark derived from three long-form, domain-specific datasets. Extensive experiments demonstrate that MIRAGE significantly outperforms existing baselines in both attack efficacy and stealthiness, exhibiting remarkable transferability across diverse retriever-LLM configurations and highlighting the urgent need for robust defense strategies. 
- oai:arXiv.org:2512.08289v1 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Rates and architectures for learning geometrically non-trivial operators + https://arxiv.org/abs/2512.09376 + arXiv:2512.09376v1 Announce Type: new +Abstract: Deep learning methods have proven capable of recovering operators between high-dimensional spaces, such as solution maps of PDEs and similar objects in mathematical physics, from very few training samples. This phenomenon of data-efficiency has been proven for certain classes of elliptic operators with simple geometry, i.e., operators that do not change the domain of the function or propagate singularities. However, scientific machine learning is commonly used for problems that do involve the propagation of singularities in a priori unknown ways, such as waves, advection, and fluid dynamics. In light of this, we expand the learning theory to include double fibration transforms--geometric integral operators that include generalized Radon and geodesic ray transforms. We prove that this class of operators does not suffer from the curse of dimensionality: the error decays superalgebraically, that is, faster than any fixed power of the reciprocal of the number of training samples. Furthermore, we investigate architectures that explicitly encode the geometry of these transforms, demonstrating that an architecture reminiscent of cross-attention based on levelset methods yields a parameterization that is universal, stable, and learns double fibration transforms from very few training examples. Our results contribute to a rapidly-growing line of theoretical work on learning operators for scientific machine learning. + oai:arXiv.org:2512.09376v1 + cs.LG + cs.CV + eess.IV + math.DG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tailun Chen, Yu He, Yan Wang, Shuo Shao, Haolun Zheng, Zhihao Liu, Jinfeng Li, Yuefeng Chen, Zhixuan Chu, Zhan Qin + T. Mitchell Roddenberry, Leo Tzou, Ivan Dokmani\'c, Maarten V. 
de Hoop, Richard G. Baraniuk - Systematization of Knowledge: Security and Safety in the Model Context Protocol Ecosystem - https://arxiv.org/abs/2512.08290 - arXiv:2512.08290v1 Announce Type: new -Abstract: The Model Context Protocol (MCP) has emerged as the de facto standard for connecting Large Language Models (LLMs) to external data and tools, effectively functioning as the "USB-C for Agentic AI." While this decoupling of context and execution solves critical interoperability challenges, it introduces a profound new threat landscape where the boundary between epistemic errors (hallucinations) and security breaches (unauthorized actions) dissolves. This Systematization of Knowledge (SoK) aims to provide a comprehensive taxonomy of risks in the MCP ecosystem, distinguishing between adversarial security threats (e.g., indirect prompt injection, tool poisoning) and epistemic safety hazards (e.g., alignment failures in distributed tool delegation). We analyze the structural vulnerabilities of MCP primitives, specifically Resources, Prompts, and Tools, and demonstrate how "context" can be weaponized to trigger unauthorized operations in multi-agent environments. Furthermore, we survey state-of-the-art defenses, ranging from cryptographic provenance (ETDI) to runtime intent verification, and conclude with a roadmap for securing the transition from conversational chatbots to autonomous agentic operating systems. - oai:arXiv.org:2512.08290v1 - cs.CR - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Observability Analysis and Composite Disturbance Filtering for a Bar Tethered to Dual UAVs Subject to Multi-source Disturbances + https://arxiv.org/abs/2512.09377 + arXiv:2512.09377v1 Announce Type: new +Abstract: Cooperative suspended aerial transportation is highly susceptible to multi-source disturbances such as aerodynamic effects and thrust uncertainties. 
To achieve precise load manipulation, existing methods often rely on extra sensors to measure cable directions or the payload's pose, which increases the system cost and complexity. A fundamental question remains: is the payload's pose observable under multi-source disturbances using only the drones' odometry information? To answer this question, this work focuses on the two-drone-bar system and proves that the whole system is observable when only two or fewer types of lumped disturbances exist by using the observability rank criterion. To the best of our knowledge, we are the first to present such a conclusion and this result paves the way for more cost-effective and robust systems by minimizing their sensor suites. Next, to validate this analysis, we consider the situation where the disturbances are only exerted on the drones, and develop a composite disturbance filtering scheme. A disturbance observer-based error-state extended Kalman filter is designed for both state and disturbance estimation, which renders improved estimation performance for the whole system evolving on the manifold $(\mathbb{R}^3)^2\times(TS^2)^3$. Our simulation and experimental tests have validated that it is possible to fully estimate the state and disturbance of the system with only odometry information of the drones. + oai:arXiv.org:2512.09377v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shiva Gaire, Srijan Gyawali, Saroj Mishra, Suman Niroula, Dilip Thakur, Umesh Yadav + http://creativecommons.org/licenses/by/4.0/ + Lidan Xu, Dadong Fan, Junhong Wang, Wenshuo Li, Hao Lu, Jianzhong Qiao - Exposing and Defending Membership Leakage in Vulnerability Prediction Models - https://arxiv.org/abs/2512.08291 - arXiv:2512.08291v1 Announce Type: new -Abstract: Neural models for vulnerability prediction (VP) have achieved impressive performance by learning from large-scale code repositories. 
However, their susceptibility to Membership Inference Attacks (MIAs), where adversaries aim to infer whether a particular code sample was used during training, poses serious privacy concerns. While MIA has been widely investigated in NLP and vision domains, its effects on security-critical code analysis tasks remain underexplored. In this work, we conduct the first comprehensive analysis of MIA on VP models, evaluating the attack success across various architectures (LSTM, BiGRU, and CodeBERT) and feature combinations, including embeddings, logits, loss, and confidence. Our threat model aligns with black-box and gray-box settings where prediction outputs are observable, allowing adversaries to infer membership by analyzing output discrepancies between training and non-training samples. The empirical findings reveal that logits and loss are the most informative and vulnerable outputs for membership leakage. Motivated by these observations, we propose a Noise-based Membership Inference Defense (NMID), which is a lightweight defense module that applies output masking and Gaussian noise injection to disrupt adversarial inference. Extensive experiments demonstrate that NMID significantly reduces MIA effectiveness, lowering the attack AUC from nearly 1.0 to below 0.65, while preserving the predictive utility of VP models. Our study highlights critical privacy risks in code analysis and offers actionable defense strategies for securing AI-powered software systems. - oai:arXiv.org:2512.08291v1 - cs.CR - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + Federated Distillation Assisted Vehicle Edge Caching Scheme Based on Lightweight DDPM + https://arxiv.org/abs/2512.09378 + arXiv:2512.09378v1 Announce Type: new +Abstract: Vehicle edge caching is a promising technology that can significantly reduce the latency for vehicle users (VUs) to access content by pre-caching user-interested content at edge nodes. 
It is crucial to accurately predict the content that VUs are interested in without exposing their privacy. Traditional federated learning (FL) can protect user privacy by sharing models rather than raw data. However, the training of FL requires frequent model transmission, which can result in significant communication overhead. Additionally, vehicles may leave the road side unit (RSU) coverage area before training is completed, leading to training failures. To address these issues, in this letter, we propose a federated distillation-assisted vehicle edge caching scheme based on lightweight denoising diffusion probabilistic model (LDPM). The simulation results demonstrate that the proposed vehicle edge caching scheme has good robustness to variations in vehicle speed, significantly reducing communication overhead and improving cache hit percentage. + oai:arXiv.org:2512.09378v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yihan Liao, Jacky Keung, Xiaoxue Ma, Jingyu Zhang, Yicheng Sun + Xun Li, Qiong Wu, Pingyi Fan, Kezhi Wang, Wen Chen, Khaled B. Letaief - OpenSubject: Leveraging Video-Derived Identity and Diversity Priors for Subject-driven Image Generation and Manipulation - https://arxiv.org/abs/2512.08294 - arXiv:2512.08294v1 Announce Type: new -Abstract: Despite the promising progress in subject-driven image generation, current models often deviate from the reference identities and struggle in complex scenes with multiple subjects. To address this challenge, we introduce OpenSubject, a video-derived large-scale corpus with 2.5M samples and 4.35M images for subject-driven generation and manipulation. The dataset is built with a four-stage pipeline that exploits cross-frame identity priors. (i) Video Curation. We apply resolution and aesthetic filtering to obtain high-quality clips. (ii) Cross-Frame Subject Mining and Pairing. 
We utilize vision-language model (VLM)-based category consensus, local grounding, and diversity-aware pairing to select image pairs. (iii) Identity-Preserving Reference Image Synthesis. We introduce segmentation map-guided outpainting to synthesize the input images for subject-driven generation and box-guided inpainting to generate input images for subject-driven manipulation, together with geometry-aware augmentations and irregular boundary erosion. (iv) Verification and Captioning. We utilize a VLM to validate synthesized samples, re-synthesize failed samples based on stage (iii), and then construct short and long captions. In addition, we introduce a benchmark covering subject-driven generation and manipulation, and then evaluate identity fidelity, prompt adherence, manipulation consistency, and background consistency with a VLM judge. Extensive experiments show that training with OpenSubject improves generation and manipulation performance, particularly in complex scenes. - oai:arXiv.org:2512.08294v1 + Perception-Inspired Color Space Design for Photo White Balance Editing + https://arxiv.org/abs/2512.09383 + arXiv:2512.09383v1 Announce Type: new +Abstract: White balance (WB) is a key step in the image signal processor (ISP) pipeline that mitigates color casts caused by varying illumination and restores the scene's true colors. Currently, sRGB-based WB editing for post-ISP WB correction is widely used to address color constancy failures in the ISP pipeline when the original camera RAW is unavailable. However, additive color models (e.g., sRGB) are inherently limited by fixed nonlinear transformations and entangled color channels, which often impede their generalization to complex lighting conditions. + To address these challenges, we propose a novel framework for WB correction that leverages a perception-inspired Learnable HSI (LHSI) color space. 
Built upon a cylindrical color model that naturally separates luminance from chromatic components, our framework further introduces dedicated parameters to enhance this disentanglement and learnable mapping to adaptively refine the flexibility. Moreover, a new Mamba-based network is introduced, which is tailored to the characteristics of the proposed LHSI color space. + Experimental results on benchmark datasets demonstrate the superiority of our method, highlighting the potential of perception-inspired color space design in computational photography. The source code is available at https://github.com/YangCheng58/WB_Color_Space. + oai:arXiv.org:2512.09383v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yang Cheng, Ziteng Cui, Lin Gu, Shenghan Su, Zenghui Zhang + + + BugSweeper: Function-Level Detection of Smart Contract Vulnerabilities Using Graph Neural Networks + https://arxiv.org/abs/2512.09385 + arXiv:2512.09385v1 Announce Type: new +Abstract: The rapid growth of Ethereum has made it more important to quickly and accurately detect smart contract vulnerabilities. While machine-learning-based methods have shown some promise, many still rely on rule-based preprocessing designed by domain experts. Rule-based preprocessing methods often discard crucial context from the source code, potentially causing certain vulnerabilities to be overlooked and limiting adaptability to newly emerging threats. We introduce BugSweeper, an end-to-end deep learning framework that detects vulnerabilities directly from the source code without manual engineering. BugSweeper represents each Solidity function as a Function-Level Abstract Syntax Graph (FLAG), a novel graph that combines its Abstract Syntax Tree (AST) with enriched control-flow and data-flow semantics. Then, our two-stage Graph Neural Network (GNN) analyzes these graphs. 
The first-stage GNN filters noise from the syntax graphs, while the second-stage GNN conducts high-level reasoning to detect diverse vulnerabilities. Extensive experiments on real-world contracts show that BugSweeper significantly outperforms all state-of-the-art detection methods. By removing the need for handcrafted rules, our approach offers a robust, automated, and scalable solution for securing smart contracts without any dependence on security experts. + oai:arXiv.org:2512.09385v1 cs.CR cs.AI cs.LG Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Uisang Lee, Changhoon Chung, Junmo Lee, Soo-Mook Moon - Towards a Science of Scaling Agent Systems - https://arxiv.org/abs/2512.08296 - arXiv:2512.08296v1 Announce Type: new -Abstract: Agents, language model (LM)-based systems capable of reasoning, planning, and acting, are becoming the dominant paradigm for real-world AI applications. Despite this widespread adoption, the principles that determine their performance remain underexplored, leaving practitioners to rely on heuristics rather than principled design choices. We address this gap by deriving quantitative scaling principles for agent systems. We evaluate these principles across four diverse benchmarks: Finance-Agent, BrowseComp-Plus, PlanCraft, and Workbench. Using five canonical architectures (Single, Independent, Centralized, Decentralized, Hybrid) instantiated across three LLM families, we perform a controlled evaluation spanning 180 configurations with standardized tools and token budgets. We derive a predictive model using empirical coordination metrics, including efficiency, overhead, error amplification, and redundancy, that achieves cross-validated R^2=0.513. 
We identify three dominant effects: (1) a tool-coordination trade-off: under fixed computational budgets, tool-heavy tasks suffer disproportionately from multi-agent overhead. (2) a capability saturation: coordination yields diminishing or negative returns (beta=-0.408, p<0.001) once single-agent baselines exceed ~45%. (3) topology-dependent error amplification: independent agents amplify errors 17.2x through unchecked propagation, while centralized coordination contains this to 4.4x. Centralized coordination improves performance by 80.9% on parallelizable tasks like financial reasoning, while decentralized coordination excels on dynamic web navigation (+9.2% vs. +0.2%). Yet for sequential reasoning tasks, all multi-agent variants degraded performance by 39-70%. The framework predicts the optimal coordination strategy for 87% of held-out configurations, providing a predictive principle of agentic scaling based on measurable task properties. - oai:arXiv.org:2512.08296v1 + CONCUR: A Framework for Continual Constrained and Unconstrained Routing + https://arxiv.org/abs/2512.09386 + arXiv:2512.09386v1 Announce Type: new +Abstract: AI tasks differ in complexity and are best addressed with different computation strategies (e.g., combinations of models and decoding methods). Hence, an effective routing system that maps tasks to the appropriate strategies is crucial. Most prior methods build the routing framework by training a single model across all strategies, which demands full retraining whenever new strategies appear and leads to high overhead. Attempts at such continual routing, however, often face difficulties with generalization. Prior models also typically use a single input representation, limiting their ability to capture the full complexity of the routing problem and leading to sub-optimal routing decisions. 
To address these gaps, we propose CONCUR, a continual routing framework that supports both constrained and unconstrained routing (i.e., routing with or without a budget). Our modular design trains a separate predictor model for each strategy, enabling seamless incorporation of new strategies with low additional training cost. Our predictors also leverage multiple representations of both tasks and computation strategies to better capture overall problem complexity. Experiments on both in-distribution and out-of-distribution, knowledge- and reasoning-intensive tasks show that our method outperforms the best single strategy and strong existing routing techniques with higher end-to-end accuracy and lower inference cost in both continual and non-continual settings, while also reducing training cost in the continual setting. + oai:arXiv.org:2512.09386v1 + cs.CL cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yubin Kim, Ken Gu, Chanwoo Park, Chunjong Park, Samuel Schmidgall, A. Ali Heydari, Yao Yan, Zhihan Zhang, Yuchen Zhuang, Mark Malhotra, Paul Pu Liang, Hae Won Park, Yuzhe Yang, Xuhai Xu, Yilun Du, Shwetak Patel, Tim Althoff, Daniel McDuff, Xin Liu + Peter Baile Chen, Weiyue Li, Dan Roth, Michael Cafarella, Samuel Madden, Jacob Andreas - Formation and Investigation of Cooperative Platooning at the Early Stage of Connected and Automated Vehicles Deployment - https://arxiv.org/abs/2512.08298 - arXiv:2512.08298v1 Announce Type: new -Abstract: Cooperative platooning, enabled by cooperative adaptive cruise control (CACC), is a cornerstone technology for connected automated vehicles (CAVs), offering significant improvements in safety, comfort, and traffic efficiency over traditional adaptive cruise control (ACC). This paper addresses a key challenge in the initial deployment phase of CAVs: the limited benefits of cooperative platooning due to the sparse distribution of CAVs on the road. 
To overcome this limitation, we propose an innovative control framework that enhances cooperative platooning in mixed traffic environments. Two techniques are utilized: (1) a mixed cooperative platooning strategy that integrates CACC with unconnected vehicles (CACCu), and (2) a strategic lane-change decision model designed to facilitate safe and efficient lane changes for platoon formation. Additionally, a surrounding vehicle identification system is embedded in the framework to enable CAVs to effectively identify and select potential platooning leaders. Simulation studies across various CV market penetration rates (MPRs) show that incorporating CACCu systems significantly improves safety, comfort, and traffic efficiency compared to existing systems with only CACC and ACC systems, even at CV penetration as low as 10%. The maximized platoon formation increases by up to 24%, accompanied by an 11% reduction in acceleration and a 7% decrease in fuel consumption. Furthermore, the strategic lane-change model enhances CAV performance, achieving notable improvements between 6% and 60% CV penetration, without adversely affecting overall traffic flow. - oai:arXiv.org:2512.08298v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Detection and Localization of Subdural Hematoma Using Deep Learning on Computed Tomography + https://arxiv.org/abs/2512.09393 + arXiv:2512.09393v1 Announce Type: new +Abstract: Background. Subdural hematoma (SDH) is a common neurosurgical emergency, with increasing incidence in aging populations. Rapid and accurate identification is essential to guide timely intervention, yet existing automated tools focus primarily on detection and provide limited interpretability or spatial localization. There remains a need for transparent, high-performing systems that integrate multimodal clinical and imaging information to support real-time decision-making. + Methods. 
We developed a multimodal deep-learning framework that integrates structured clinical variables, a 3D convolutional neural network trained on CT volumes, and a transformer-enhanced 2D segmentation model for SDH detection and localization. Using 25,315 head CT studies from Hartford HealthCare (2015--2024), of which 3,774 (14.9\%) contained clinician-confirmed SDH, tabular models were trained on demographics, comorbidities, medications, and laboratory results. Imaging models were trained to detect SDH and generate voxel-level probability maps. A greedy ensemble strategy combined complementary predictors. + Findings. Clinical variables alone provided modest discriminatory power (AUC 0.75). Convolutional models trained on CT volumes and segmentation-derived maps achieved substantially higher accuracy (AUCs 0.922 and 0.926). The multimodal ensemble integrating all components achieved the best overall performance (AUC 0.9407; 95\% CI, 0.930--0.951) and produced anatomically meaningful localization maps consistent with known SDH patterns. + Interpretation. This multimodal, interpretable framework provides rapid and accurate SDH detection and localization, achieving high detection performance and offering transparent, anatomically grounded outputs. Integration into radiology workflows could streamline triage, reduce time to intervention, and improve consistency in SDH management. + oai:arXiv.org:2512.09393v1 cs.CV cs.LG Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Vasiliki Stoumpou, Rohan Kumar, Bernard Burman, Diego Ojeda, Tapan Mehta, Dimitris Bertsimas - Secure Audio Embedding in Images using Nature-Inspired Optimization - https://arxiv.org/abs/2512.08299 - arXiv:2512.08299v1 Announce Type: new -Abstract: In today's digital world, protecting sensitive data is essential.
Steganography hides the existence of secret data instead of its content, providing better security for multimedia communication. This paper proposes a new technique for hiding audio files inside images using the Least Significant Bit (LSB) method optimized by the Harris Hawks Optimization (HHO) algorithm. HHO is a nature-inspired metaheuristic that imitates the hunting behavior of Harris hawks to find optimal pixel positions for embedding data. The proposed method is evaluated using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Square Error (MSE). Experimental results show that HHO achieves better image quality, robustness, and embedding capacity compared to existing methods. - oai:arXiv.org:2512.08299v1 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Language models as tools for investigating the distinction between possible and impossible natural languages + https://arxiv.org/abs/2512.09394 + arXiv:2512.09394v1 Announce Type: new +Abstract: We argue that language models (LMs) have strong potential as investigative tools for probing the distinction between possible and impossible natural languages and thus uncovering the inductive biases that support human language learning. We outline a phased research program in which LM architectures are iteratively refined to better discriminate between possible and impossible languages, supporting linking hypotheses to human cognition. 
+ oai:arXiv.org:2512.09394v1 cs.CL Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Julie Kallini, Christopher Potts + GAIR: GUI Automation via Information-Joint Reasoning and Group Reflection + https://arxiv.org/abs/2512.09396 + arXiv:2512.09396v1 Announce Type: new +Abstract: Building AI systems for GUI automation tasks has attracted remarkable research effort, where MLLMs are leveraged for processing user requirements and issuing operations.
However, GUI automation includes a wide range of tasks, from document processing to online shopping, from CAD to video editing. The diversity between particular tasks requires MLLMs for GUI automation to have heterogeneous capabilities and to master multidimensional expertise, raising challenges in constructing such a model. To address this challenge, we propose GAIR: GUI Automation via Information-Joint Reasoning and Group Reflection, a novel MLLM-based GUI automation agent framework designed to integrate knowledge and combine capabilities from heterogeneous models to build GUI automation agent systems with higher performance. Since different GUI-specific MLLMs are trained on different datasets and thus have different strengths, GAIR introduces a general-purpose MLLM for jointly processing the information from multiple GUI-specific models, further enhancing the performance of the agent framework. The general-purpose MLLM also serves as the decision maker, attempting to execute a reasonable operation based on previously gathered information. When the general-purpose model determines that there is not sufficient information for a reasonable decision, GAIR transitions into a group reflection state, in which the general-purpose model provides the GUI-specific models with different instructions and hints based on their strengths and weaknesses, driving them to gather more significant and accurate information that can support deeper reasoning and decision-making. We evaluated the effectiveness and reliability of GAIR through extensive experiments on GUI benchmarks. 
+ oai:arXiv.org:2512.09396v1 + cs.MA cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Sijia Chen, Baochun Li, Di Niu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zishu Wei, Qixiang Ma, Xavier Hu, Yuhang Liu, Hui Zang, Yudong Zhao, Tao Wang, Shengyu Zhang, Fei Wu - Jacobian Aligned Random Forests - https://arxiv.org/abs/2512.08306 - arXiv:2512.08306v1 Announce Type: new -Abstract: Axis-aligned decision trees are fast and stable but struggle on datasets with rotated or interaction-dependent decision boundaries, where informative splits require linear combinations of features rather than single-feature thresholds. Oblique forests address this with per-node hyperplane splits, but at added computational cost and implementation complexity. We propose a simple alternative: JARF, Jacobian-Aligned Random Forests. Concretely, we first fit an axis-aligned forest to estimate class probabilities or regression outputs, compute finite-difference gradients of these predictions with respect to each feature, aggregate them into an expected Jacobian outer product that generalizes the expected gradient outer product (EGOP), and use it as a single global linear preconditioner for all inputs. This supervised preconditioner applies a single global rotation of the feature space, then hands the transformed data back to a standard axis-aligned forest, preserving off-the-shelf training pipelines while capturing oblique boundaries and feature interactions that would otherwise require many axis-aligned splits to approximate. The same construction applies to any model that provides gradients, though we focus on random forests and gradient-boosted trees in this work. On tabular classification and regression benchmarks, this preconditioning consistently improves axis-aligned forests and often matches or surpasses oblique baselines while improving training time. 
Our experimental results and theoretical analysis together indicate that supervised preconditioning can recover much of the accuracy of oblique forests while retaining the simplicity and robustness of axis-aligned trees. - oai:arXiv.org:2512.08306v1 + Towards Resilient Transportation: A Conditional Transformer for Accident-Informed Traffic Forecasting + https://arxiv.org/abs/2512.09398 + arXiv:2512.09398v1 Announce Type: new +Abstract: Traffic prediction remains a key challenge in spatio-temporal data mining, despite progress in deep learning. Accurate forecasting is hindered by the complex influence of external factors such as traffic accidents and regulations, often overlooked by existing models due to limited data integration. To address these limitations, we present two enriched traffic datasets from Tokyo and California, incorporating traffic accident and regulation data. Leveraging these datasets, we propose ConFormer (Conditional Transformer), a novel framework that integrates graph propagation with guided normalization layer. This design dynamically adjusts spatial and temporal node relationships based on historical patterns, enhancing predictive accuracy. Our model surpasses the state-of-the-art STAEFormer in both predictive performance and efficiency, achieving lower computational costs and reduced parameter demands. Extensive evaluations demonstrate that ConFormer consistently outperforms mainstream spatio-temporal baselines across multiple metrics, underscoring its potential to advance traffic prediction research. 
+ oai:arXiv.org:2512.09398v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Sarwesh Rauniyar + Hongjun Wang, Jiawei Yong, Jiawei Wang, Shintaro Fukushima, Renhe Jiang - Terrain Diffusion: A Diffusion-Based Successor to Perlin Noise in Infinite, Real-Time Terrain Generation - https://arxiv.org/abs/2512.08309 - arXiv:2512.08309v1 Announce Type: new -Abstract: For decades, procedural worlds have been built on procedural noise functions such as Perlin noise, which are fast and infinite, yet fundamentally limited in realism and large-scale coherence. We introduce Terrain Diffusion, an AI-era successor to Perlin noise that bridges the fidelity of diffusion models with the properties that made procedural noise indispensable: seamless infinite extent, seed-consistency, and constant-time random access. At its core is InfiniteDiffusion, a novel algorithm for infinite generation, enabling seamless, real-time synthesis of boundless landscapes. A hierarchical stack of diffusion models couples planetary context with local detail, while a compact Laplacian encoding stabilizes outputs across Earth-scale dynamic ranges. An open-source infinite-tensor framework supports constant-memory manipulation of unbounded tensors, and few-step consistency distillation enables efficient generation. Together, these components establish diffusion models as a practical foundation for procedural world generation, capable of synthesizing entire planets coherently, controllably, and without limits. - oai:arXiv.org:2512.08309v1 + Wasserstein-Aligned Hyperbolic Multi-View Clustering + https://arxiv.org/abs/2512.09402 + arXiv:2512.09402v1 Announce Type: new +Abstract: Multi-view clustering (MVC) aims to uncover the latent structure of multi-view data by learning view-common and view-specific information. 
Although recent studies have explored hyperbolic representations to better tackle the representation gap between different views, they focus primarily on instance-level alignment and neglect global semantic consistency, rendering them vulnerable to view-specific information (\textit{e.g.}, noise and cross-view discrepancies). To this end, this paper proposes a novel Wasserstein-Aligned Hyperbolic (WAH) framework for multi-view clustering. Specifically, our method exploits a view-specific hyperbolic encoder for each view to embed features into the Lorentz manifold for hierarchical semantic modeling. Thereafter, a global semantic loss based on the hyperbolic sliced-Wasserstein distance is introduced to align manifold distributions across views. This is followed by soft cluster assignments to encourage cross-view semantic consistency. Extensive experiments on multiple benchmark datasets show that our method can achieve SOTA clustering performance.
 + oai:arXiv.org:2512.09402v1
 cs.CV
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Rui Wang, Yuting Jiang, Xiaoqing Luo, Xiao-Jun Wu, Nicu Sebe, Ziheng Chen
 - Privacy-Preserving Identifier Checking in 5G
 - https://arxiv.org/abs/2512.08310
 - arXiv:2512.08310v1 Announce Type: new
-Abstract: Device identifiers like the International Mobile Equipment Identity (IMEI) are crucial for ensuring device integrity and meeting regulations in 4G and 5G networks. However, sharing these identifiers with Mobile Network Operators (MNOs) brings significant privacy risks by enabling long-term tracking and linking of user activities across sessions. In this work, we propose a privacy-preserving identifier checking method for 5G, introducing a protocol for verifying device identifiers without exposing them to the network while maintaining the same functions as the 3GPP-defined Equipment Identity Register (EIR) process. 
The proposed solution modifies the PEPSI protocol for a Private Set Membership (PSM) setting using the BFV homomorphic encryption scheme. This lets User Equipment (UE) prove that its identifier is not on an operator's blacklist or greylist while ensuring that the MNO only learns the outcome of the verification. The protocol allows controlled deanonymization through an authorized Law Enforcement (LE) hook, striking a balance between privacy and accountability. Implementation results show that the system can perform online verification within five seconds and requires about 15 to 16 MB of communication per session. This confirms its practical use under post-quantum security standards. The findings highlight the promise of homomorphic encryption for managing identifiers while preserving privacy in 5G, laying the groundwork for scalable and compliant verification systems in future 6G networks. - oai:arXiv.org:2512.08310v1 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Black-Box Behavioral Distillation Breaks Safety Alignment in Medical LLMs + https://arxiv.org/abs/2512.09403 + arXiv:2512.09403v1 Announce Type: new +Abstract: As medical large language models (LLMs) become increasingly integrated into clinical workflows, concerns around alignment robustness, and safety are escalating. Prior work on model extraction has focused on classification models or memorization leakage, leaving the vulnerability of safety-aligned generative medical LLMs underexplored. + We present a black-box distillation attack that replicates the domain-specific reasoning of safety-aligned medical LLMs using only output-level access. By issuing 48,000 instruction queries to Meditron-7B and collecting 25,000 benign instruction response pairs, we fine-tune a LLaMA3 8B surrogate via parameter efficient LoRA under a zero-alignment supervision setting, requiring no access to model weights, safety filters, or training data. 
With a cost of $12, the surrogate achieves strong fidelity on benign inputs while producing unsafe completions for 86% of adversarial prompts, far exceeding both Meditron-7B (66%) and the untuned base model (46%). This reveals a pronounced functional-ethical gap, task utility transfers, while alignment collapses. To analyze this collapse, we develop a dynamic adversarial evaluation framework combining Generative Query (GQ)-based harmful prompt generation, verifier filtering, category-wise failure analysis, and adaptive Random Search (RS) jailbreak attacks. We also propose a layered defense system, as a prototype detector for real-time alignment drift in black-box deployments. + Our findings show that benign-only black-box distillation exposes a practical and under-recognized threat: adversaries can cheaply replicate medical LLM capabilities while stripping safety mechanisms, underscoring the need for extraction-aware safety monitoring. + oai:arXiv.org:2512.09403v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Marcel D. S. K. Gr\"afenstein, Stefan K\"opsell, Maryam Zarezadeh + Sohely Jahan, Ruimin Sun - Minimizing Layerwise Activation Norm Improves Generalization in Federated Learning - https://arxiv.org/abs/2512.08314 - arXiv:2512.08314v1 Announce Type: new -Abstract: Federated Learning (FL) is an emerging machine learning framework that enables multiple clients (coordinated by a server) to collaboratively train a global model by aggregating the locally trained models without sharing any client's training data. It has been observed in recent works that learning in a federated manner may lead the aggregated global model to converge to a 'sharp minimum' thereby adversely affecting the generalizability of this FL-trained model. Therefore, in this work, we aim to improve the generalization performance of models trained in a federated setup by introducing a 'flatness' constrained FL optimization problem. 
This flatness constraint is imposed on the top eigenvalue of the Hessian computed from the training loss. As each client trains a model on its local data, we further re-formulate this complex problem utilizing the client loss functions and propose a new computationally efficient regularization technique, dubbed 'MAN,' which Minimizes Activation's Norm of each layer on client-side models. We also theoretically show that minimizing the activation norm reduces the top eigenvalue of the layer-wise Hessian of the client's loss, which in turn decreases the overall Hessian's top eigenvalue, ensuring convergence to a flat minimum. We apply our proposed flatness-constrained optimization to the existing FL techniques and obtain significant improvements, thereby establishing a new state of the art.
 - oai:arXiv.org:2512.08314v1
 cs.LG
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 M Yashwanth, Gaurav Kumar Nayak, Harsh Rangwani, Arya Singh, R. Venkatesh Babu, Anirban Chakraborty
 + H2R-Grounder: A Paired-Data-Free Paradigm for Translating Human Interaction Videos into Physically Grounded Robot Videos
 + https://arxiv.org/abs/2512.09406
 + arXiv:2512.09406v1 Announce Type: new
+Abstract: Robots that learn manipulation skills from everyday human videos could acquire broad capabilities without tedious robot data collection. We propose a video-to-video translation framework that converts ordinary human-object interaction videos into motion-consistent robot manipulation videos with realistic, physically grounded interactions. Our approach does not require any paired human-robot videos for training, only a set of unpaired robot videos, making the system easy to scale. We introduce a transferable representation that bridges the embodiment gap: by inpainting the robot arm in training videos to obtain a clean background and overlaying a simple visual cue (a marker and arrow indicating the gripper's position and orientation), we can condition a generative model to insert the robot arm back into the scene. 
At test time, we apply the same process to human videos (inpainting the person and overlaying human pose cues) and generate high-quality robot videos that mimic the human's actions. We fine-tune a SOTA video diffusion model (Wan 2.2) in an in-context learning manner to ensure temporal coherence and to leverage its rich prior knowledge. Empirical results demonstrate that our approach achieves significantly more realistic and grounded robot motions compared to baselines, pointing to a promising direction for scaling up robot learning from unlabeled human videos. Project page: https://showlab.github.io/H2R-Grounder/
 + oai:arXiv.org:2512.09406v1
 cs.RO
 cs.AI
 cs.CV
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Hai Ci, Xiaokang Liu, Pei Yang, Yiren Song, Mike Zheng Shou
 - GeoDM: Geometry-aware Distribution Matching for Dataset Distillation
 - https://arxiv.org/abs/2512.08317
 - arXiv:2512.08317v1 Announce Type: new
-Abstract: Dataset distillation aims to synthesize a compact subset of the original data, enabling models trained on it to achieve performance comparable to those trained on the original large dataset. Existing distribution-matching methods are confined to Euclidean spaces, so they capture only linear structures and overlook the intrinsic geometry of real data, e.g., curvature. However, high-dimensional data often lie on low-dimensional manifolds, suggesting that dataset distillation should have the distilled data manifold aligned with the original data manifold. In this work, we propose a geometry-aware distribution-matching framework, called \textbf{GeoDM}, which operates in the Cartesian product of Euclidean, hyperbolic, and spherical manifolds, with flat, hierarchical, and cyclical structures all captured by a unified representation. 
To adapt to the underlying data geometry, we introduce learnable curvature and weight parameters for three kinds of geometries. At the same time, we design an optimal transport loss to enhance the distribution fidelity. Our theoretical analysis shows that the geometry-aware distribution matching in a product space yields a smaller generalization error bound than the Euclidean counterparts. Extensive experiments conducted on standard benchmarks demonstrate that our algorithm outperforms state-of-the-art data distillation methods and remains effective across various distribution-matching strategies for the single geometries. - oai:arXiv.org:2512.08317v1 + Generative Point Cloud Registration + https://arxiv.org/abs/2512.09407 + arXiv:2512.09407v1 Announce Type: new +Abstract: In this paper, we propose a novel 3D registration paradigm, Generative Point Cloud Registration, which bridges advanced 2D generative models with 3D matching tasks to enhance registration performance. Our key idea is to generate cross-view consistent image pairs that are well-aligned with the source and target point clouds, enabling geometry-color feature fusion to facilitate robust matching. To ensure high-quality matching, the generated image pair should feature both 2D-3D geometric consistency and cross-view texture consistency. To achieve this, we introduce Match-ControlNet, a matching-specific, controllable 2D generative model. Specifically, it leverages the depth-conditioned generation capability of ControlNet to produce images that are geometrically aligned with depth maps derived from point clouds, ensuring 2D-3D geometric consistency. Additionally, by incorporating a coupled conditional denoising scheme and coupled prompt guidance, Match-ControlNet further promotes cross-view feature interaction, guiding texture consistency generation. Our generative 3D registration paradigm is general and could be seamlessly integrated into various registration methods to enhance their performance. 
Extensive experiments on 3DMatch and ScanNet datasets verify the effectiveness of our approach. + oai:arXiv.org:2512.09407v1 cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xuhui Li, Zhengquan Luo, Zihui Cui, Zhiqiang Xu + http://creativecommons.org/licenses/by/4.0/ + Haobo Jiang, Jin Xie, Jian Yang, Liang Yu, Jianmin Zheng - Developing a Strong CPS Defender: An Evolutionary Approach - https://arxiv.org/abs/2512.08320 - arXiv:2512.08320v1 Announce Type: new -Abstract: Cyber-physical systems (CPSs) are used extensively in critical infrastructure, underscoring the need for anomaly detection systems that are able to catch even the most motivated attackers. Traditional anomaly detection techniques typically do `one-off' training on datasets crafted by experts or generated by fuzzers, potentially limiting their ability to generalize to unseen and more subtle attack strategies. Stopping at this point misses a key opportunity: a defender can actively challenge the attacker to find more nuanced attacks, which in turn can lead to more effective detection capabilities. Building on this concept, we propose Evo-Defender, an evolutionary framework that iteratively strengthens CPS defenses through a dynamic attacker-defender interaction. Evo-Defender includes a smart attacker that employs guided fuzzing to explore diverse, non-redundant attack strategies, while the self-evolving defender uses incremental learning to adapt to new attack patterns. We implement Evo-Defender on two realistic CPS testbeds: the Tennessee Eastman process and a Robotic Arm Assembly Workstation, injecting over 600 attack scenarios. In end-to-end attack detection experiments, Evo-Defender achieves up to 2.7% higher performance than state-of-the-art baselines on unseen scenarios, while utilizing training data more efficiently for faster and more robust detection. 
- oai:arXiv.org:2512.08320v1 + Proof of Trusted Execution: A Consensus Paradigm for Deterministic Blockchain Finality + https://arxiv.org/abs/2512.09409 + arXiv:2512.09409v1 Announce Type: new +Abstract: Current blockchain consensus protocols -- notably, Proof of Work (PoW) and Proof of Stake (PoS) -- deliver global agreement but exhibit structural constraints. PoW anchors security in heavy computation, inflating energy use and imposing high confirmation latency. PoS improves efficiency but introduces stake concentration, long-range and "nothing-at-stake" vulnerabilities, and a hard performance ceiling shaped by slot times and multi-round committee voting. In this paper, we propose Proof of Trusted Execution (PoTE), a consensus paradigm where agreement emerges from verifiable execution rather than replicated re-execution. Validators operate inside heterogeneous VM-based TEEs, each running the same canonical program whose measurement is publicly recorded, and each producing vendor-backed attestations that bind the enclave code hash to the block contents. Because the execution is deterministic and the proposer is uniquely derived from public randomness, PoTE avoids forks, eliminates slot.time bottlenecks, and commits blocks in a single round of verification. We present the design of a PoTE consensus client, describe our reference implementation, and evaluate its performance against the stringent throughput requirements of the Trillion decentralized exchange. + oai:arXiv.org:2512.09409v1 cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Qingyuan Hu, Christopher M. 
Poskitt, Jun Sun, Yuqi Chen + Kyle Habib, Vladislav Kapitsyn, Giovanni Mazzeo, Faisal Mehrban - Emulation of Complex Matrix Multiplication based on the Chinese Remainder Theorem - https://arxiv.org/abs/2512.08321 - arXiv:2512.08321v1 Announce Type: new -Abstract: Modern computing architectures feature low-precision matrix multiplication units that achieve substantially higher throughput than their high-precision counterparts. Motivated by this architectural trend, the emulation of high-precision matrix multiplication using low-precision hardware has attracted significant interest in the high-performance computing community. Ozaki, Uchino, and Imamura introduced the Ozaki-II scheme as a general framework for emulating matrix multiplication. Building on this framework, Uchino, Ozaki, and Imamura developed high-performance and power-efficient techniques for emulating single- and double-precision real matrix multiplication on INT8 matrix engines. Extending this line of research, the present study proposes high-performance emulation methods for single- and double-precision complex matrix multiplication on INT8 matrix engines, based on the Ozaki-II scheme. On an NVIDIA B200 GPU, the proposed methods achieve 4.0x--5.6x and 4.4x--6.5x speedups over the native single- and double-precision complex matrix multiplication routines from cuBLAS, respectively, for sufficiently large problem sizes. When lower accuracy than that of the standard routine is acceptable, the proposed methods can operate at even higher speed. Conversely, with only a modest increase in computation time, they can also deliver higher accuracy than the standard routines. These properties suggest that the proposed approach has the potential to serve as a default algorithm across a wide range of applications. 
- oai:arXiv.org:2512.08321v1 - cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 + Generalizable Collaborative Search-and-Capture in Cluttered Environments via Path-Guided MAPPO and Directional Frontier Allocation + https://arxiv.org/abs/2512.09410 + arXiv:2512.09410v1 Announce Type: new +Abstract: Collaborative pursuit-evasion in cluttered environments presents significant challenges due to sparse rewards and constrained Fields of View (FOV). Standard Multi-Agent Reinforcement Learning (MARL) often suffers from inefficient exploration and fails to scale to large scenarios. We propose PGF-MAPPO (Path-Guided Frontier MAPPO), a hierarchical framework bridging topological planning with reactive control. To resolve local minima and sparse rewards, we integrate an A*-based potential field for dense reward shaping. Furthermore, we introduce Directional Frontier Allocation, combining Farthest Point Sampling (FPS) with geometric angle suppression to enforce spatial dispersion and accelerate coverage. The architecture employs a parameter-shared decentralized critic, maintaining O(1) model complexity suitable for robotic swarms. Experiments demonstrate that PGF-MAPPO achieves superior capture efficiency against faster evaders. Policies trained on 10x10 maps exhibit robust zero-shot generalization to unseen 20x20 environments, significantly outperforming rule-based and learning-based baselines. 
+ oai:arXiv.org:2512.09410v1
 cs.RO
 cs.LG
 cs.MA
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Jialin Ying, Zhihao Li, Zicheng Dong, Guohua Wu, Yihuan Liao
 - Collaborative Intelligence for UAV-Satellite Network Slicing: Towards a Joint QoS-Energy-Fairness MADRL Optimization
 - https://arxiv.org/abs/2512.08322
 - arXiv:2512.08322v1 Announce Type: new
-Abstract: Non-terrestrial networks are critical for achieving global 6G coverage, yet efficient resource management in aerial and space environments remains challenging due to limited onboard power and dynamic operational conditions. Network slicing offers a promising solution for spectrum optimization in UAV-based systems serving heterogeneous service demands. To this end, this paper proposes a hierarchical network slicing framework for UAV-satellite integrated networks supporting eMBB, URLLC, and mMTC services. Specifically, we formulate the joint optimization of UAV trajectory, transmission power, and spectrum allocation as a decentralized partially observable Markov decision process that ensures quality of service while minimizing energy consumption and maximizing resource fairness. To address the computational intractability and partial observability, we develop a multi-agent deep reinforcement learning solution under the centralized training and decentralized execution paradigm. In the proposed system, UAV agents act as distributed actors coordinated by a shared critic operating with a multi-head attention mechanism at a low Earth orbit satellite. Experimental results demonstrate that our approach outperforms existing methods by up to 33% in cumulative reward while achieving superior energy efficiency and fairness. 
- oai:arXiv.org:2512.08322v1 - cs.NI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Thanh-Dao Nguyen, Ngoc-Tan Nguyen, Thai-Duong Nguyen, Nguyen Van Huynh, Dinh-Hieu Tran, Symeon Chatzinotas + Jialin Ying, Zhihao Li, Zicheng Dong, Guohua Wu, Yihuan Liao - Detecting Dental Landmarks from Intraoral 3D Scans: the 3DTeethLand challenge - https://arxiv.org/abs/2512.08323 - arXiv:2512.08323v1 Announce Type: new -Abstract: Teeth landmark detection is a critical task in modern clinical orthodontics. Their precise identification enables advanced diagnostics, facilitates personalized treatment strategies, and supports more effective monitoring of treatment progress in clinical dentistry. However, several significant challenges may arise due to the intricate geometry of individual teeth and the substantial variations observed across different individuals. To address these complexities, the development of advanced techniques, especially through the application of deep learning, is essential for the precise and reliable detection of 3D tooth landmarks. In this context, the 3DTeethLand challenge was held in collaboration with the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2024, calling for algorithms focused on teeth landmark detection from intraoral 3D scans. This challenge introduced the first publicly available dataset for 3D teeth landmark detection, offering a valuable resource to assess the state-of-the-art methods in this task and encourage the community to provide methodological contributions towards the resolution of their problem with significant clinical implications. 
- oai:arXiv.org:2512.08323v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + D$^2$GSLAM: 4D Dynamic Gaussian Splatting SLAM + https://arxiv.org/abs/2512.09411 + arXiv:2512.09411v1 Announce Type: new +Abstract: Recent advances in Dense Simultaneous Localization and Mapping (SLAM) have demonstrated remarkable performance in static environments. However, dense SLAM in dynamic environments remains challenging. Most methods directly remove dynamic objects and focus solely on static scene reconstruction, which ignores the motion information contained in these dynamic objects. In this paper, we present D$^2$GSLAM, a novel dynamic SLAM system utilizing Gaussian representation, which simultaneously performs accurate dynamic reconstruction and robust tracking within dynamic environments. Our system is composed of four key components: (i) We propose a geometric-prompt dynamic separation method to distinguish between static and dynamic elements of the scene. This approach leverages the geometric consistency of Gaussian representation and scene geometry to obtain coarse dynamic regions. The regions then serve as prompts to guide the refinement of the coarse mask for achieving accurate motion mask. (ii) To facilitate accurate and efficient mapping of the dynamic scene, we introduce dynamic-static composite representation that integrates static 3D Gaussians with dynamic 4D Gaussians. This representation allows for modeling the transitions between static and dynamic states of objects in the scene for composite mapping and optimization. (iii) We employ a progressive pose refinement strategy that leverages both the multi-view consistency of static scene geometry and motion information from dynamic objects to achieve accurate camera tracking. (iv) We introduce a motion consistency loss, which leverages the temporal continuity in object motions for accurate dynamic modeling. 
Our D$^2$GSLAM demonstrates superior performance on dynamic scenes in terms of mapping and tracking accuracy, while also showing capability in accurate dynamic modeling.
 + oai:arXiv.org:2512.09411v1
 cs.RO
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Siting Zhu, Yuxiang Huang, Wenhua Wu, Chaokang Jiang, Yongbo Chen, I-Ming Chen, Hesheng Wang
 - GeoDiffMM: Geometry-Guided Conditional Diffusion for Motion Magnification
 - https://arxiv.org/abs/2512.08325
 - arXiv:2512.08325v1 Announce Type: new
-Abstract: Video Motion Magnification (VMM) amplifies subtle macroscopic motions to a perceptible level. Recently, mainstream Eulerian approaches address amplification-induced noise via decoupled representation learning schemes over texture, shape, and frequency, but they still struggle to separate photon noise from true micro-motion when motion displacements are very small. We propose GeoDiffMM, a novel diffusion-based Lagrangian VMM framework conditioned on optical flow as a geometric cue, enabling structurally consistent motion magnification. Specifically, we design a Noise-free Optical Flow Augmentation strategy that synthesizes diverse nonrigid motion fields without photon noise as supervision, helping the model learn more accurate geometry-aware optical flow and generalize better. 
Next, we develop a Diffusion Motion Magnifier that conditions the denoising process on (i) optical flow as a geometry prior and (ii) a learnable magnification factor controlling magnitude, thereby selectively amplifying motion components consistent with scene semantics and structure while suppressing content-irrelevant perturbations. Finally, we perform Flow-based Video Synthesis to map the amplified motion back to the image domain with high fidelity. Extensive experiments on real and synthetic datasets show that GeoDiffMM outperforms state-of-the-art methods and significantly improves motion magnification. - oai:arXiv.org:2512.08325v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Simple Modal Types for Functional Reactive Programming + https://arxiv.org/abs/2512.09412 + arXiv:2512.09412v1 Announce Type: new +Abstract: Functional reactive programming (FRP) is a declarative programming paradigm for implementing reactive programs at a high level of abstraction. It applies functional programming principles to construct and manipulate time-varying values, also known as signals. However, for this programming paradigm to work in practice, an FRP language must ensure that programs are causal, productive, and free from space leaks. Over the past fifteen years, several modal type systems to enforce these operational properties have been developed. + We present a new FRP language with a significantly simplified modal type system that imposes fewer restrictions than previous modal FRP languages while still guaranteeing the central operational properties of causality, productivity, and absence of space leaks. The key enabling idea is to alter the semantics of signals so that the type system can safely allow more programs to type-check, which also makes the language more expressive. With this new semantics, signals are modelled as mutable references whose mutability is tightly controlled by the 'later' type modality. 
This disciplined form of mutability also enables more efficient in-place updates of signals, all while preserving a functional programming style. + oai:arXiv.org:2512.09412v1 + cs.PL + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xuedeng Liu, Jiabao Guo, Zheng Zhang, Fei Wang, Zhi Liu, Dan Guo + Patrick Bahr - Argus: A Multi-Agent Sensitive Information Leakage Detection Framework Based on Hierarchical Reference Relationships - https://arxiv.org/abs/2512.08326 - arXiv:2512.08326v1 Announce Type: new
-Abstract: Sensitive information leakage in code repositories has emerged as a critical security challenge. Traditional detection methods that rely on regular expressions, fingerprint features, and high-entropy calculations often suffer from high false-positive rates. This not only reduces detection efficiency but also significantly increases the manual screening burden on developers. Recent advances in large language models (LLMs) and multi-agent collaborative architectures have demonstrated remarkable potential for tackling complex tasks, offering a novel technological perspective for sensitive information detection. In response to these challenges, we propose Argus, a multi-agent collaborative framework for detecting sensitive information. Argus employs a three-tier detection mechanism that integrates key content, file context, and project reference relationships to effectively reduce false positives and enhance overall detection accuracy. To comprehensively evaluate Argus in real-world repository environments, we developed two new benchmarks, one to assess genuine leak detection capabilities and another to evaluate false-positive filtering performance. Experimental results show that Argus achieves up to 94.86% accuracy in leak detection, with a precision of 96.36%, recall of 94.64%, and an F1 score of 0.955. Moreover, the analysis of 97 real repositories incurred a total cost of only $2.20. 
All code implementations and related datasets are publicly available at https://github.com/TheBinKing/Argus-Guard for further research and application. - oai:arXiv.org:2512.08326v1 - cs.CR - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Time-Discretized Simulation of Vehicle Platoons for Safety Analysis with Guaranteed Error Bounds + https://arxiv.org/abs/2512.09416 + arXiv:2512.09416v1 Announce Type: new +Abstract: Wireless communication is essential to achieve coordinated control in vehicle platoons. However, packet losses in wireless communication can cause critical safety issues when they occur in conjunction with sudden brakes. In this paper, we propose simulation-based methods that allow the study of such safety issues by determining the absolute minimum distance between vehicles over time for various control parameters that guarantee string stability. For our proposed time-discretized simulations, we provide two methods for selecting different time-step intervals to ensure that the error in distance approximation remains within specified bounds at all times. Through numerical examples we demonstrate that among control parameters that guarantee string stability some perform better than others under simultaneously occurring packet losses and sudden brakes. + oai:arXiv.org:2512.09416v1 + eess.SY + cs.SY + math.OC + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1145/3744916.3773208 - Bin Wang, Hui Li, Liyang Zhang, Qijia Zhuang, Ao Yang, Dong Zhang, Xijun Luo, Bing Lin + Yuhao Chen, Ahmet Cetinkaya - Low Rank Support Quaternion Matrix Machine - https://arxiv.org/abs/2512.08327 - arXiv:2512.08327v1 Announce Type: new -Abstract: Input features are conventionally represented as vectors, matrices, or third order tensors in the real field, for color image classification. 
Inspired by the success of quaternion data modeling for color images in image recovery and denoising tasks, we propose a novel classification method for color image classification, named the Low-rank Support Quaternion Matrix Machine (LSQMM), in which the RGB channels are treated as pure quaternions to effectively preserve the intrinsic coupling relationships among channels via the quaternion algebra. To promote the low-rank structures resulting from strongly correlated color channels, a quaternion nuclear norm regularization term, serving as a natural extension of the conventional matrix nuclear norm to the quaternion domain, is added to the hinge loss in our LSQMM model. An Alternating Direction Method of Multipliers (ADMM)-based iterative algorithm is designed to effectively solve the proposed quaternion optimization model. Experimental results on multiple color image classification datasets demonstrate that our proposed classification approach exhibits advantages in classification accuracy, robustness and computational efficiency, compared to several state-of-the-art methods using support vector machines, support matrix machines, and support tensor machines. - oai:arXiv.org:2512.08327v1 cs.CV cs.LG math.OC stat.ML Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Wang Chen, Ziyan Luo, Shuangyue Wang + DirectSwap: Mask-Free Cross-Identity Training and Benchmarking for Expression-Consistent Video Head Swapping + https://arxiv.org/abs/2512.09417 + arXiv:2512.09417v1 Announce Type: new
+Abstract: Video head swapping aims to replace the entire head of a video subject, including facial identity, head shape, and hairstyle, with that of a reference image, while preserving the target body, background, and motion dynamics. Due to the lack of ground-truth paired swapping data, prior methods typically train on cross-frame pairs of the same person within a video and rely on mask-based inpainting to mitigate identity leakage. 
Beyond potential boundary artifacts, this paradigm struggles to recover essential cues occluded by the mask, such as facial pose, expressions, and motion dynamics. To address these issues, we prompt a video editing model to synthesize new heads for existing videos as fake swapping inputs, while maintaining frame-synchronized facial poses and expressions. This yields HeadSwapBench, the first cross-identity paired dataset for video head swapping, which supports both training (\TrainNum{} videos) and benchmarking (\TestNum{} videos) with genuine outputs. Leveraging this paired supervision, we propose DirectSwap, a mask-free, direct video head-swapping framework that extends an image U-Net into a video diffusion model with a motion module and conditioning inputs. Furthermore, we introduce the Motion- and Expression-Aware Reconstruction (MEAR) loss, which reweights the diffusion loss per pixel using frame-difference magnitudes and facial-landmark proximity, thereby enhancing cross-frame coherence in motion and expressions. Extensive experiments demonstrate that DirectSwap achieves state-of-the-art visual quality, identity fidelity, and motion and expression consistency across diverse in-the-wild video scenes. We will release the source code and the HeadSwapBench dataset to facilitate future research. 
+ oai:arXiv.org:2512.09417v1 cs.CV - cs.LG - math.OC - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wang Chen, Ziyan Luo, Shuangyue Wang + http://creativecommons.org/licenses/by/4.0/ + Yanan Wang, Shengcai Liao, Panwen Hu, Xin Li, Fan Yang, Xiaodan Liang - Interpreting Structured Perturbations in Image Protection Methods for Diffusion Models - https://arxiv.org/abs/2512.08329 - arXiv:2512.08329v1 Announce Type: new -Abstract: Recent image protection mechanisms such as Glaze and Nightshade introduce imperceptible, adversarially designed perturbations intended to disrupt downstream text-to-image generative models. While their empirical effectiveness is known, the internal structure, detectability, and representational behavior of these perturbations remain poorly understood. This study provides a systematic, explainable AI analysis using a unified framework that integrates white-box feature-space inspection and black-box signal-level probing. Through latent-space clustering, feature-channel activation analysis, occlusion-based spatial sensitivity mapping, and frequency-domain characterization, we show that protection mechanisms operate as structured, low-entropy perturbations tightly coupled to underlying image content across representational, spatial, and spectral domains. Protected images preserve content-driven feature organization with protection-specific substructure rather than inducing global representational drift. Detectability is governed by interacting effects of perturbation entropy, spatial deployment, and frequency alignment, with sequential protection amplifying detectable structure rather than suppressing it. Frequency-domain analysis shows that Glaze and Nightshade redistribute energy along dominant image-aligned frequency axes rather than introducing diffuse noise. 
These findings indicate that contemporary image protection operates through structured feature-level deformation rather than semantic dislocation, explaining why protection signals remain visually subtle yet consistently detectable. This work advances the interpretability of adversarial image protection and informs the design of future defenses and detection strategies for generative AI systems. - oai:arXiv.org:2512.08329v1 + Label-free Motion-Conditioned Diffusion Model for Cardiac Ultrasound Synthesis + https://arxiv.org/abs/2512.09418 + arXiv:2512.09418v1 Announce Type: new +Abstract: Ultrasound echocardiography is essential for the non-invasive, real-time assessment of cardiac function, but the scarcity of labelled data, driven by privacy restrictions and the complexity of expert annotation, remains a major obstacle for deep learning methods. We propose the Motion Conditioned Diffusion Model (MCDM), a label-free latent diffusion framework that synthesises realistic echocardiography videos conditioned on self-supervised motion features. To extract these features, we design the Motion and Appearance Feature Extractor (MAFE), which disentangles motion and appearance representations from videos. Feature learning is further enhanced by two auxiliary objectives: a re-identification loss guided by pseudo appearance features and an optical flow loss guided by pseudo flow fields. Evaluated on the EchoNet-Dynamic dataset, MCDM achieves competitive video generation performance, producing temporally coherent and clinically realistic sequences without reliance on manual labels. These results demonstrate the potential of self-supervised conditioning for scalable echocardiography synthesis. Our code is available at https://github.com/ZheLi2020/LabelfreeMCDM. + oai:arXiv.org:2512.09418v1 cs.CV - cs.AI - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Michael R. 
Martin, Garrick Chan, Kwan-Liu Ma + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Zhe Li, Hadrien Reynaud, Johanna P M\"uller, Bernhard Kainz - PointDico: Contrastive 3D Representation Learning Guided by Diffusion Models - https://arxiv.org/abs/2512.08330 - arXiv:2512.08330v1 Announce Type: new -Abstract: Self-supervised representation learning has shown significant improvement in Natural Language Processing and 2D Computer Vision. However, existing methods face difficulties in representing 3D data because of its unordered and uneven density. Through an in-depth analysis of mainstream contrastive and generative approaches, we find that contrastive models tend to suffer from overfitting, while 3D Mask Autoencoders struggle to handle unordered point clouds. This motivates us to learn 3D representations by sharing the merits of diffusion and contrast models, which is non-trivial due to the pattern difference between the two paradigms. In this paper, we propose \textit{PointDico}, a novel model that seamlessly integrates these methods. \textit{PointDico} learns from both denoising generative modeling and cross-modal contrastive learning through knowledge distillation, where the diffusion model serves as a guide for the contrastive model. We introduce a hierarchical pyramid conditional generator for multi-scale geometric feature extraction and employ a dual-channel design to effectively integrate local and global contextual information. \textit{PointDico} achieves a new state-of-the-art in 3D representation learning, \textit{e.g.}, \textbf{94.32\%} accuracy on ScanObjectNN, \textbf{86.5\%} Inst. mIoU on ShapeNetPart. 
- oai:arXiv.org:2512.08330v1 + InfoMotion: A Graph-Based Approach to Video Dataset Distillation for Echocardiography + https://arxiv.org/abs/2512.09422 + arXiv:2512.09422v1 Announce Type: new
+Abstract: Echocardiography plays a critical role in the diagnosis and monitoring of cardiovascular diseases as a non-invasive, real-time assessment of cardiac structure and function. However, the growing scale of echocardiographic video data presents significant challenges in terms of storage, computation, and model training efficiency. Dataset distillation offers a promising solution by synthesizing a compact, informative subset of data that retains the key clinical features of the original dataset. In this work, we propose a novel approach for distilling a compact synthetic echocardiographic video dataset. Our method leverages motion feature extraction to capture temporal dynamics, followed by class-wise graph construction and representative sample selection using the Infomap algorithm. This enables us to select a diverse and informative subset of synthetic videos that preserves the essential characteristics of the original dataset. We evaluate our approach on the EchoNet-Dynamic dataset and achieve a test accuracy of \(69.38\%\) using only \(25\) synthetic videos. These results demonstrate the effectiveness and scalability of our method for medical video dataset distillation. 
+ oai:arXiv.org:2512.09422v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Pengbo Li, Yiding Sun, Haozhe Cheng + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Zhe Li, Hadrien Reynaud, Alberto Gomez, Bernhard Kainz - Bi^2MAC: Bimodal Bi-Adaptive Mask-Aware Convolution for Remote Sensing Pansharpening - https://arxiv.org/abs/2512.08331 - arXiv:2512.08331v1 Announce Type: new -Abstract: Pansharpening aims to fuse a high-resolution panchromatic (PAN) image with a low-resolution multispectral (LRMS) image to generate a high-resolution multispectral image (HRMS). Conventional deep learning-based methods are inherently limited in their ability to adapt to regional heterogeneity within feature representations. Although various adaptive convolution methods have been proposed to address this limitation, they often suffer from excessive computational costs and a limited ability to capture heterogeneous regions in remote sensing images effectively. To overcome these challenges, we propose Bimodal Bi-Adaptive Mask-Aware Convolution (Bi^2MAC), which effectively exploits information from different types of regions while intelligently allocating computational resources. Specifically, we design a lightweight module to generate both soft and hard masks, which are used to modulate the input features preliminarily and to guide different types of regions into separate processing branches, respectively. Redundant features are directed to a compact branch for low-cost global processing. In contrast, heterogeneous features are routed to a focused branch that invests more computational resources for fine-grained modeling. Extensive experiments on multiple benchmark datasets demonstrate that Bi^2MAC achieves state-of-the-art (SOTA) performance while requiring substantially lower training time and parameter counts, and the minimal computational cost among adaptive convolution models. 
- oai:arXiv.org:2512.08331v1 + FunPhase: A Periodic Functional Autoencoder for Motion Generation via Phase Manifolds + https://arxiv.org/abs/2512.09423 + arXiv:2512.09423v1 Announce Type: new +Abstract: Learning natural body motion remains challenging due to the strong coupling between spatial geometry and temporal dynamics. Embedding motion in phase manifolds, latent spaces that capture local periodicity, has proven effective for motion prediction; however, existing approaches lack scalability and remain confined to specific settings. We introduce FunPhase, a functional periodic autoencoder that learns a phase manifold for motion and replaces discrete temporal decoding with a function-space formulation, enabling smooth trajectories that can be sampled at arbitrary temporal resolutions. FunPhase supports downstream tasks such as super-resolution and partial-body motion completion, generalizes across skeletons and datasets, and unifies motion prediction and generation within a single interpretable manifold. Our model achieves substantially lower reconstruction error than prior periodic autoencoder baselines while enabling a broader range of applications and performing on par with state-of-the-art motion generation methods. + oai:arXiv.org:2512.09423v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xianghong Xiao, Zeyu Xia, Zhou Fei, Jinliang Xiao, Haorui Chen, Liangjian Deng + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Marco Pegoraro, Evan Atherton, Bruno Roy, Aliasghar Khani, Arianna Rampini - On the Fundamental Tradeoff of Joint Communication and QCD: The Monostatic Case - https://arxiv.org/abs/2512.08332 - arXiv:2512.08332v1 Announce Type: new -Abstract: This paper investigates the fundamental tradeoff between communication and quickest change detection (QCD) in integrated sensing and communication (ISAC) systems under a monostatic setup. 
We introduce a novel Joint Communication and quickest Change subblock coding Strategy (JCCS) that leverages feedback to adapt coding dynamically based on real-time state estimation. The achievable rate-delay region is characterized using state-dependent mutual information and KL divergence, providing a comprehensive framework for analyzing the interplay between communication performance and detection delay. Moreover, we provide a partial converse demonstrating the asymptotic optimality of the proposed detection algorithm within the JCCS framework. To illustrate the practical implications, we analyze binary and MIMO Gaussian channels, revealing insights into achieving optimal tradeoffs in ISAC system design. - oai:arXiv.org:2512.08332v1 - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + ODMA: On-Demand Memory Allocation Framework for LLM Serving on LPDDR-Class Accelerators + https://arxiv.org/abs/2512.09427 + arXiv:2512.09427v1 Announce Type: new +Abstract: Serving large language models (LLMs) on accelerators with poor random-access bandwidth (e.g., LPDDR5-based) is limited by current memory managers. Static pre-allocation wastes memory, while fine-grained paging (e.g., PagedAttention) is ill-suited due to high random-access costs. Existing HBM-centric solutions do not exploit the characteristics of random-access-constrained memory (RACM) accelerators like Cambricon MLU370. We present ODMA, an on-demand memory allocation framework for RACM. ODMA addresses distribution drift and heavy-tailed requests by coupling a lightweight length predictor with dynamic bucket partitioning and a large-bucket safeguard. Boundaries are periodically updated from live traces to maximize utilization. On Alpaca and Google-NQ, ODMA improves prediction accuracy of prior work significantly (e.g., from 82.68% to 93.36%). 
Serving DeepSeek-R1-Distill-Qwen-7B on Cambricon MLU370-X4, ODMA raises memory utilization from 55.05% to 72.45% and improves RPS and TPS by 29% and 27% over static baselines. This demonstrates that hardware-aware allocation unlocks efficient LLM serving on RACM platforms. + oai:arXiv.org:2512.09427v1 + cs.AR + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sung Hoon Lim, Daewon Seo + Guoqiang Zou, Wanyu Wang, Hao Zheng, Longxiang Yin, Yinhe Han - Robust Finetuning of Vision-Language-Action Robot Policies via Parameter Merging - https://arxiv.org/abs/2512.08333 - arXiv:2512.08333v1 Announce Type: new -Abstract: Generalist robot policies, trained on large and diverse datasets, have demonstrated the ability to generalize across a wide spectrum of behaviors, enabling a single policy to act in varied real-world environments. However, they still fall short on new tasks not covered in the training data. When finetuned on limited demonstrations of a new task, these policies often overfit to the specific demonstrations--not only losing their prior abilities to solve a wide variety of generalist tasks but also failing to generalize within the new task itself. In this work, we aim to develop a method that preserves the generalization capabilities of the generalist policy during finetuning, allowing a single policy to robustly incorporate a new skill into its repertoire. Our goal is a single policy that both learns to generalize to variations of the new task and retains the broad competencies gained from pretraining. We show that this can be achieved through a simple yet effective strategy: interpolating the weights of a finetuned model with that of the pretrained model. 
We show, across extensive simulated and real-world experiments, that such model merging produces a single model that inherits the generalist abilities of the base model and learns to solve the new task robustly, outperforming both the pretrained and finetuned model on out-of-distribution variations of the new task. Moreover, we show that model merging enables continual acquisition of new skills in a lifelong learning setting, without sacrificing previously learned generalist abilities. - oai:arXiv.org:2512.08333v1 + A Hierarchical, Model-Based System for High-Performance Humanoid Soccer + https://arxiv.org/abs/2512.09431 + arXiv:2512.09431v1 Announce Type: new +Abstract: The development of athletic humanoid robots has gained significant attention as advances in actuation, sensing, and control enable increasingly dynamic, real-world capabilities. RoboCup, an international competition of fully autonomous humanoid robots, provides a uniquely challenging benchmark for such systems, culminating in the long-term goal of competing against human soccer players by 2050. This paper presents the hardware and software innovations underlying our team's victory in the RoboCup 2024 Adult-Sized Humanoid Soccer Competition. On the hardware side, we introduce an adult-sized humanoid platform built with lightweight structural components, high-torque quasi-direct-drive actuators, and a specialized foot design that enables powerful in-gait kicks while preserving locomotion robustness. On the software side, we develop an integrated perception and localization framework that combines stereo vision, object detection, and landmark-based fusion to provide reliable estimates of the ball, goals, teammates, and opponents. A mid-level navigation stack then generates collision-aware, dynamically feasible trajectories, while a centralized behavior manager coordinates high-level decision making, role selection, and kick execution based on the evolving game state. 
The seamless integration of these subsystems results in fast, precise, and tactically effective gameplay, enabling robust performance under the dynamic and adversarial conditions of real matches. This paper presents the design principles, system architecture, and experimental results that contributed to ARTEMIS's success as the 2024 Adult-Sized Humanoid Soccer champion. + oai:arXiv.org:2512.09431v1 cs.RO - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yajat Yadav, Zhiyuan Zhou, Andrew Wagenmaker, Karl Pertsch, Sergey Levine + http://creativecommons.org/licenses/by/4.0/ + Quanyou Wang, Mingzhang Zhu, Ruochen Hou, Kay Gillespie, Alvin Zhu, Shiqi Wang, Yicheng Wang, Gaberiel I. Fernandez, Yeting Liu, Colin Togashi, Hyunwoo Nam, Aditya Navghare, Alex Xu, Taoyuanmin Zhu, Min Sung Ahn, Arturo Flores Alvarez, Justin Quan, Ethan Hong, Dennis W. Hong - HybridSplat: Fast Reflection-baked Gaussian Tracing using Hybrid Splatting - https://arxiv.org/abs/2512.08334 - arXiv:2512.08334v1 Announce Type: new -Abstract: Rendering complex reflection of real-world scenes using 3D Gaussian splatting has been a quite promising solution for photorealistic novel view synthesis, but still faces bottlenecks especially in rendering speed and memory storage. This paper proposes a new Hybrid Splatting(HybridSplat) mechanism for Gaussian primitives. Our key idea is a new reflection-baked Gaussian tracing, which bakes the view-dependent reflection within each Gaussian primitive while rendering the reflection using tile-based Gaussian splatting. Then we integrate the reflective Gaussian primitives with base Gaussian primitives using a unified hybrid splatting framework for high-fidelity scene reconstruction. 
Moreover, we further introduce a pipeline-level acceleration for the hybrid splatting, and reflection-sensitive Gaussian pruning to reduce the model size, thus achieving much faster rendering speed and lower memory storage while preserving the reflection rendering quality. Extensive evaluation shows that our HybridSplat achieves about 7x faster rendering across complex reflective scenes from Ref-NeRF and NeRF-Casting with 4x fewer Gaussian primitives than similar ray-tracing based Gaussian splatting baselines, serving as a new state-of-the-art method especially for complex reflective scenes. - oai:arXiv.org:2512.08334v1 cs.CV Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Chang Liu, Hongliang Yuan, Lianghao Zhang, Sichao Wang, Jianwei Guo, Shi-Sheng Huang + CourtPressGER: A German Court Decision to Press Release Summarization Dataset + https://arxiv.org/abs/2512.09434 + arXiv:2512.09434v1 Announce Type: new
+Abstract: Official court press releases from Germany's highest courts present and explain judicial rulings to the public, as well as to expert audiences. Prior NLP efforts emphasize technical headnotes, ignoring citizen-oriented communication needs. We introduce CourtPressGER, a 6.4k dataset of triples: rulings, human-drafted press releases, and synthetic prompts for LLMs to generate comparable releases. This benchmark trains and evaluates LLMs in generating accurate, readable summaries from long judicial texts. We benchmark small and large LLMs using reference-based metrics, factual-consistency checks, LLM-as-judge, and expert ranking. Large LLMs produce high-quality drafts with minimal hierarchical performance loss; smaller models require hierarchical setups for long judgments. Initial benchmarks show varying model performance, with human-drafted releases ranking highest. 
+ oai:arXiv.org:2512.09434v1 + cs.CL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chang Liu, Hongliang Yuan, Lianghao Zhang, Sichao Wang, Jianwei Guo, Shi-Sheng Huang + http://creativecommons.org/licenses/by/4.0/ + Sebastian Nagl, Mohamed Elganayni, Melanie Pospisil, Matthias Grabmair - Dflow-SUR: Enhancing Generative Aerodynamic Inverse Design using Differentiation Throughout Flow Matching - https://arxiv.org/abs/2512.08336 - arXiv:2512.08336v1 Announce Type: new -Abstract: Generative inverse design requires incorporating physical constraints to ensure that generated designs are both reliable and accurate. However, we observe that current state-of-the-art energy-based methods suffer from an asynchronous phenomenon, where the optimization of the physical loss is constrained by the flow matching inference process. To overcome this limitation, we introduce Dflow-SUR, a differentiation strategy that separates the optimization of the physical loss from the flow matching inference. - Compared to the most advanced energy-based baseline, Dflow-SUR achieves a reduction in physical loss by four orders of magnitude, while also cutting wall-clock time by 74% on the airfoil case. Additionally, it increases the mean lift-to-drag ratio by 11.8% over traditional Latin-hypercube sampling in wing design. Beyond improvements in accuracy and efficiency, Dflow-SUR offers three additional practical advantages: (i) enhanced control over guidance, (ii) lower surrogate uncertainty, and (iii) greater robustness to hyper-parameter tuning. - Together, these results demonstrate that Dflow-SUR is a highly promising framework, providing both scalability and high fidelity for generative aerodynamic design. 
- oai:arXiv.org:2512.08336v1 - cs.CE - Wed, 10 Dec 2025 00:00:00 -0500 + UniPart: Part-Level 3D Generation with Unified 3D Geom-Seg Latents + https://arxiv.org/abs/2512.09435 + arXiv:2512.09435v1 Announce Type: new +Abstract: Part-level 3D generation is essential for applications requiring decomposable and structured 3D synthesis. However, existing methods either rely on implicit part segmentation with limited granularity control or depend on strong external segmenters trained on large annotated datasets. In this work, we observe that part awareness emerges naturally during whole-object geometry learning and propose Geom-Seg VecSet, a unified geometry-segmentation latent representation that jointly encodes object geometry and part-level structure. Building on this representation, we introduce UniPart, a two-stage latent diffusion framework for image-guided part-level 3D generation. The first stage performs joint geometry generation and latent part segmentation, while the second stage conditions part-level diffusion on both whole-object and part-specific latents. A dual-space generation scheme further enhances geometric fidelity by predicting part latents in both global and canonical spaces. Extensive experiments demonstrate that UniPart achieves superior segmentation controllability and part-level geometric quality compared with existing approaches. + oai:arXiv.org:2512.09435v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Aobo Yang, Zhen Wei, Rhea Liem, Pascal Fua + Xufan He, Yushuang Wu, Xiaoyang Guo, Chongjie Ye, Jiaqing Zhou, Tianlei Hu, Xiaoguang Han, Dong Du - DINO-BOLDNet: A DINOv3-Guided Multi-Slice Attention Network for T1-to-BOLD Generation - https://arxiv.org/abs/2512.08337 - arXiv:2512.08337v1 Announce Type: new -Abstract: Generating BOLD images from T1w images offers a promising solution for recovering missing BOLD information and enabling downstream tasks when BOLD images are corrupted or unavailable. 
Motivated by this, we propose DINO-BOLDNet, a DINOv3-guided multi-slice attention framework that integrates a frozen self-supervised DINOv3 encoder with a lightweight trainable decoder. The model uses DINOv3 to extract within-slice structural representations, and a separate slice-attention module to fuse contextual information across neighboring slices. A multi-scale generation decoder then restores fine-grained functional contrast, while a DINO-based perceptual loss encourages structural and textural consistency between predictions and ground-truth BOLD in the transformer feature space. Experiments on a clinical dataset of 248 subjects show that DINO-BOLDNet surpasses a conditional GAN baseline in both PSNR and MS-SSIM. To our knowledge, this is the first framework capable of generating mean BOLD images directly from T1w images, highlighting the potential of self-supervised transformer guidance for structural-to-functional mapping. - oai:arXiv.org:2512.08337v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Knowledge-Augmented Large Language Model Agents for Explainable Financial Decision-Making + https://arxiv.org/abs/2512.09440 + arXiv:2512.09440v1 Announce Type: new +Abstract: This study investigates an explainable reasoning method for financial decision-making based on knowledge-enhanced large language model agents. To address the limitations of traditional financial decision methods that rely on parameterized knowledge, lack factual consistency, and miss reasoning chains, an integrated framework is proposed that combines external knowledge retrieval, semantic representation, and reasoning generation. The method first encodes financial texts and structured data to obtain semantic representations, and then retrieves task-related information from external knowledge bases using similarity computation. 
Internal representations and external knowledge are combined through weighted fusion, which ensures fluency while improving factual accuracy and completeness of generated content. In the reasoning stage, a multi-head attention mechanism is introduced to construct logical chains, allowing the model to present transparent causal relationships and traceability during generation. Finally, the model jointly optimizes task objectives and explanation consistency objectives, which enhances predictive performance and reasoning interpretability. Experiments on financial text processing and decision tasks show that the method outperforms baseline approaches in accuracy, text generation quality, and factual support, verifying the effectiveness of knowledge enhancement and explainable reasoning. Overall, the proposed approach overcomes the limitations of traditional models in semantic coverage and reasoning transparency, and demonstrates strong practical value in complex financial scenarios. + oai:arXiv.org:2512.09440v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jianwei Wang, Qing Wang, Menglan Ruan, Rongjun Ge, Chunfeng Yang, Yang Chen, Chunming Xie + Qingyuan Zhang, Yuxi Wang, Cancan Hua, Yulin Huang, Ning Lyu - Predicting California Bearing Ratio with Ensemble and Neural Network Models: A Case Study from T\"urkiye - https://arxiv.org/abs/2512.08340 - arXiv:2512.08340v1 Announce Type: new -Abstract: The California Bearing Ratio (CBR) is a key geotechnical indicator used to assess the load-bearing capacity of subgrade soils, especially in transportation infrastructure and foundation design. Traditional CBR determination relies on laboratory penetration tests. Despite their accuracy, these tests are often time-consuming, costly, and can be impractical, particularly for large-scale or diverse soil profiles. 
Recent progress in artificial intelligence, especially machine learning (ML), has enabled data-driven approaches for modeling complex soil behavior with greater speed and precision. This study introduces a comprehensive ML framework for CBR prediction using a dataset of 382 soil samples collected from various geoclimatic regions in T\"urkiye. The dataset includes physicochemical soil properties relevant to bearing capacity, allowing multidimensional feature representation in a supervised learning context. Twelve ML algorithms were tested, including decision tree, random forest, extra trees, gradient boosting, xgboost, k-nearest neighbors, support vector regression, multi-layer perceptron, adaboost, bagging, voting, and stacking regressors. Each model was trained, validated, and evaluated to assess its generalization and robustness. Among them, the random forest regressor performed the best, achieving strong R2 scores of 0.95 (training), 0.76 (validation), and 0.83 (test). These outcomes highlight the model's powerful nonlinear mapping ability, making it a promising tool for predictive geotechnical tasks. The study supports the integration of intelligent, data-centric models in geotechnical engineering, offering an effective alternative to traditional methods and promoting digital transformation in infrastructure analysis and design. 
 - oai:arXiv.org:2512.08340v1
 + Representation Calibration and Uncertainty Guidance for Class-Incremental Learning based on Vision Language Model
 + https://arxiv.org/abs/2512.09441
 + arXiv:2512.09441v1 Announce Type: new
+Abstract: Class-incremental learning requires a learning system to continually learn knowledge of new classes and meanwhile try to preserve previously learned knowledge of old classes. However, current state-of-the-art methods based on Vision-Language Models (VLMs) still suffer from the issue of differentiating classes across learning tasks. 
Here, a novel VLM-based continual learning framework for image classification is proposed. In this framework, task-specific adapters are added to the pre-trained and frozen image encoder to learn new knowledge, and a novel cross-task representation calibration strategy based on a mixture of light-weight projectors is used to help better separate all learned classes in a unified feature space, alleviating class confusion across tasks. In addition, a novel inference strategy guided by prediction uncertainty is developed to more accurately select the most appropriate image feature for class prediction. Extensive experiments on multiple datasets under various settings demonstrate the superior performance of our method compared to existing ones.
 + oai:arXiv.org:2512.09441v1
 + cs.CV
 cs.AI
 - Wed, 10 Dec 2025 00:00:00 -0500
 + Thu, 11 Dec 2025 00:00:00 -0500
 new
 - http://creativecommons.org/licenses/by/4.0/
 - 10.5281/zenodo.17530868
 - Proc. of the 13th Int. Symp. on Intelligent Manufacturing and Service Systems, pp. 563-570, 2025, ISBN 978-625-00-3472-9
 - Abdullah Hulusi K\"ok\c{c}am, U\u{g}ur Da\u{g}deviren, Talas Fikret Kurnaz, Alparslan Serhat Demir, Caner Erden
 + http://creativecommons.org/licenses/by-sa/4.0/
 + Jiantao Tan, Peixian Ma, Tong Yu, Wentao Zhang, Ruixuan Wang
 
 
 - Multi-Agent Deep Reinforcement Learning for Collaborative UAV Relay Networks under Jamming Attacks
 - https://arxiv.org/abs/2512.08341
 - arXiv:2512.08341v1 Announce Type: new
-Abstract: The deployment of Unmanned Aerial Vehicle (UAV) swarms as dynamic communication relays is critical for next-generation tactical networks. However, operating in contested environments requires solving a complex trade-off, including maximizing system throughput while ensuring collision avoidance and resilience against adversarial jamming. Existing heuristic-based approaches often struggle to find effective solutions due to the dynamic and multi-objective nature of this problem. 
This paper formulates this challenge as a cooperative Multi-Agent Reinforcement Learning (MARL) problem, solved using the Centralized Training with Decentralized Execution (CTDE) framework. Our approach employs a centralized critic that uses global state information to guide decentralized actors which operate using only local observations. Simulation results show that our proposed framework significantly outperforms heuristic baselines, increasing the total system throughput by approximately 50% while simultaneously achieving a near-zero collision rate. A key finding is that the agents develop an emergent anti-jamming strategy without explicit programming. They learn to intelligently position themselves to balance the trade-off between mitigating interference from jammers and maintaining effective communication links with ground users. - oai:arXiv.org:2512.08341v1 - cs.NI - cs.LG - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + Reference Recommendation based Membership Inference Attack against Hybrid-based Recommender Systems + https://arxiv.org/abs/2512.09442 + arXiv:2512.09442v1 Announce Type: new +Abstract: Recommender systems have been widely deployed across various domains such as e-commerce and social media, and intelligently suggest items like products and potential friends to users based on their preferences and interaction history, which are often privacy-sensitive. Recent studies have revealed that recommender systems are prone to membership inference attacks (MIAs), where an attacker aims to infer whether or not a user's data has been used for training a target recommender system. However, existing MIAs fail to exploit the unique characteristic of recommender systems, and therefore are only applicable to mixed recommender systems consisting of two recommendation algorithms. 
This leaves a gap in investigating MIAs against hybrid-based recommender systems, where the same algorithm, utilizing user-item historical interactions and attributes of users and items, serves and produces personalised recommendations. To investigate how the personalisation in hybrid-based recommender systems influences MIA, we propose a novel metric-based MIA. Specifically, we leverage the characteristic of personalisation to obtain a reference recommendation for any target user. Then, a relative membership metric is proposed to exploit a target user's historical interactions, target recommendation, and reference recommendation to infer the membership of the target user's data. Finally, we theoretically and empirically demonstrate the efficacy of the proposed metric-based MIA on hybrid-based recommender systems.
 + oai:arXiv.org:2512.09442v1
 + cs.CR
 + Thu, 11 Dec 2025 00:00:00 -0500
 new
 - http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Xiaoxiao Chi, Xuyun Zhang, Yan Wang, Hongsheng Hu, Wanchun Dou
 
 
 - Soil Compaction Parameters Prediction Based on Automated Machine Learning Approach
 - https://arxiv.org/abs/2512.08343
 - arXiv:2512.08343v1 Announce Type: new
-Abstract: Soil compaction is critical in construction engineering to ensure the stability of structures like road embankments and earth dams. Traditional methods for determining optimum moisture content (OMC) and maximum dry density (MDD) involve labor-intensive laboratory experiments, and empirical regression models have limited applicability and accuracy across diverse soil types. In recent years, artificial intelligence (AI) and machine learning (ML) techniques have emerged as alternatives for predicting these compaction parameters. 
However, ML models often struggle with prediction accuracy and generalizability, particularly with heterogeneous datasets representing various soil types. This study proposes an automated machine learning (AutoML) approach to predict OMC and MDD. AutoML automates algorithm selection and hyperparameter optimization, potentially improving accuracy and scalability. Through extensive experimentation, the study found that the Extreme Gradient Boosting (XGBoost) algorithm provided the best performance, achieving R-squared values of 80.4% for MDD and 89.1% for OMC on a separate dataset. These results demonstrate the effectiveness of AutoML in predicting compaction parameters across different soil types. The study also highlights the importance of heterogeneous datasets in improving the generalization and performance of ML models. Ultimately, this research contributes to more efficient and reliable construction practices by enhancing the prediction of soil compaction parameters. - oai:arXiv.org:2512.08343v1 + Advancing Research via Human-AI Interactive Theorem Proving + https://arxiv.org/abs/2512.09443 + arXiv:2512.09443v1 Announce Type: new +Abstract: We investigate how large language models can be used as research tools in scientific computing while preserving mathematical rigor. We propose a human-in-the-loop workflow for interactive theorem proving and discovery with LLMs. Human experts retain control over problem formulation and admissible assumptions, while the model searches for proofs or contradictions, proposes candidate properties and theorems, and helps construct structures and parameters that satisfy explicit constraints, supported by numerical experiments and simple verification checks. Experts treat these outputs as raw material, further refine them, and organize the results into precise statements and rigorous proofs. 
We instantiate this workflow in a case study on the connection between manifold optimization and Grover's quantum search algorithm, where the pipeline helps identify invariant subspaces, explore Grover-compatible retractions, and obtain convergence guarantees for the retraction-based gradient method. The framework provides a practical template for integrating large language models into frontier mathematical research, enabling faster exploration of proof space and algorithm design while maintaining transparent reasoning responsibilities. Although illustrated on manifold optimization problems in quantum computing, the principles extend to other core areas of scientific computing. + oai:arXiv.org:2512.09443v1 + cs.HC cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + math.OC + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - 10.5281/zenodo.17533851 - Proc. of the 13th Int. Symp. on Intelligent Manufacturing and Service Systems, pp. 571-578, 2025, ISBN 978-625-00-3472-9 - Caner Erden, Alparslan Serhat Demir, Abdullah Hulusi Kokcam, Talas Fikret Kurnaz, Ugur Dagdeviren - - - Enhancing Explainability of Graph Neural Networks Through Conceptual and Structural Analyses and Their Extensions - https://arxiv.org/abs/2512.08344 - arXiv:2512.08344v1 Announce Type: new -Abstract: Graph Neural Networks (GNNs) have become a powerful tool for modeling and analyzing data with graph structures. The wide adoption in numerous applications underscores the value of these models. However, the complexity of these methods often impedes understanding their decision-making processes. Current Explainable AI (XAI) methods struggle to untangle the intricate relationships and interactions within graphs. Several methods have tried to bridge this gap via a post-hoc approach or self-interpretable design. Most of them focus on graph structure analysis to determine essential patterns that correlate with prediction outcomes. 
While post-hoc explanation methods are adaptable, they require extra computational resources and may be less reliable due to limited access to the model's internal workings. Conversely, interpretable models can provide immediate explanations, but their generalizability to different scenarios remains a major concern. To address these shortcomings, this thesis seeks to develop a novel XAI framework tailored for graph-based machine learning. The proposed framework aims to offer adaptable, computationally efficient explanations for GNNs, moving beyond individual feature analysis to capture how graph structure influences predictions.
 - oai:arXiv.org:2512.08344v1
 - cs.AI
 - cs.IT
 - cs.LG
 - math.IT
 - Wed, 10 Dec 2025 00:00:00 -0500
 new
 - http://creativecommons.org/licenses/by-nc-sa/4.0/
 - Tien Cuong Bui
 
 
 - The High Cost of Incivility: Quantifying Interaction Inefficiency via Multi-Agent Monte Carlo Simulations
 - https://arxiv.org/abs/2512.08345
 - arXiv:2512.08345v1 Announce Type: new
-Abstract: Workplace toxicity is widely recognized as detrimental to organizational culture, yet quantifying its direct impact on operational efficiency remains methodologically challenging due to the ethical and practical difficulties of reproducing conflict in human subjects. This study leverages Large Language Model (LLM) based Multi-Agent Systems to simulate 1-on-1 adversarial debates, creating a controlled "sociological sandbox". We employ a Monte Carlo method to simulate hundreds of discussions, measuring the convergence time (defined as the number of arguments required to reach a conclusion) between a baseline control group and treatment groups involving agents with "toxic" system prompts. Our results demonstrate a statistically significant increase of approximately 25\% in the duration of conversations involving toxic participants. 
We propose that this "latency of toxicity" serves as a proxy for financial damage in corporate and academic settings. Furthermore, we demonstrate that agent-based modeling provides a reproducible, ethical alternative to human-subject research for measuring the mechanics of social friction. - oai:arXiv.org:2512.08345v1 - cs.AI + Advancing Text Classification with Large Language Models and Neural Attention Mechanisms + https://arxiv.org/abs/2512.09444 + arXiv:2512.09444v1 Announce Type: new +Abstract: This study proposes a text classification algorithm based on large language models, aiming to address the limitations of traditional methods in capturing long-range dependencies, understanding contextual semantics, and handling class imbalance. The framework includes text encoding, contextual representation modeling, attention-based enhancement, feature aggregation, and classification prediction. In the representation stage, deep semantic embeddings are obtained through large-scale pretrained language models, and attention mechanisms are applied to enhance the selective representation of key features. In the aggregation stage, global and weighted strategies are combined to generate robust text-level vectors. In the classification stage, a fully connected layer and Softmax output are used to predict class distributions, and cross-entropy loss is employed to optimize model parameters. Comparative experiments introduce multiple baseline models, including recurrent neural networks, graph neural networks, and Transformers, and evaluate them on Precision, Recall, F1-Score, and AUC. Results show that the proposed method outperforms existing models on all metrics, with especially strong improvements in Recall and AUC. In addition, sensitivity experiments are conducted on hyperparameters and data conditions, covering the impact of hidden dimensions on AUC and the impact of class imbalance ratios on Recall. 
The findings demonstrate that proper model configuration has a significant effect on performance and reveal the adaptability and stability of the model under different conditions. Overall, the proposed text classification method not only achieves effective performance improvement but also verifies its robustness and applicability in complex data environments through systematic analysis. + oai:arXiv.org:2512.09444v1 cs.CL - cs.CY - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Benedikt Mangold + Ning Lyu, Yuxi Wang, Feng Chen, Qingyuan Zhang - A tight example for approximation ratio 5 for covering small cuts by the primal-dual method - https://arxiv.org/abs/2512.08350 - arXiv:2512.08350v1 Announce Type: new -Abstract: In the Small Cuts Cover problem we seek to cover by a min-cost edge-set the set family of cuts of size/capacity $<k$ of a graph. Recently, Simmons showed that the primal-dual algorithm of Williamson, Goemans, Mihail, and Vazirani achieves approximation ratio $5$ for this problem, and asked whether this bound is tight. We will answer this question positively, by providing an example in which the ratio between the solution produced by the primal-dual algorithm and the optimum is arbitrarily close to $5$. - oai:arXiv.org:2512.08350v1 - cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 + Defect-aware Hybrid Prompt Optimization via Progressive Tuning for Zero-Shot Multi-type Anomaly Detection and Segmentation + https://arxiv.org/abs/2512.09446 + arXiv:2512.09446v1 Announce Type: new +Abstract: Recent vision language models (VLMs) like CLIP have demonstrated impressive anomaly detection performance under significant distribution shift by utilizing high-level semantic information through text prompts. 
However, these models often neglect fine-grained details, such as the specific kind of anomaly, e.g., "hole", "cut", or "scratch", which could provide more specific insight into the nature of anomalies. We argue that recognizing fine-grained anomaly types 1) enriches the representation of "abnormal" with structured semantics, narrowing the gap between coarse anomaly signals and fine-grained defect categories; 2) enables manufacturers to understand the root causes of the anomaly and implement more targeted and appropriate corrective measures quickly. While incorporating such detailed semantic information is crucial, designing handcrafted prompts for each defect type is both time-consuming and susceptible to human bias. For this reason, we introduce DAPO, a novel approach for Defect-aware Prompt Optimization based on progressive tuning for zero-shot multi-type and binary anomaly detection and segmentation under distribution shifts. Our approach aligns anomaly-relevant image features with their corresponding text semantics by learning hybrid defect-aware prompts with both fixed textual anchors and learnable token embeddings. We conducted experiments on public benchmarks (MPDD, VisA, MVTec-AD, MAD, and Real-IAD) and an internal dataset. The results suggest that compared to the baseline models, DAPO achieves a 3.7% average improvement in AUROC and average precision metrics at the image level under distribution shift, and a 6.5% average improvement in localizing novel anomaly types under zero-shot settings. 
+ oai:arXiv.org:2512.09446v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Zeev Nutov + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Nadeem Nazer, Hongkuan Zhou, Lavdim Halilaj, Ylli Sadikaj, Steffen Staab - Turning Threat into Opportunity: DRL-Powered Anti-Jamming via Energy Harvesting in UAV-Disrupted Channels - https://arxiv.org/abs/2512.08351 - arXiv:2512.08351v1 Announce Type: new -Abstract: The open and broadcast nature of wireless communication systems, while enabling ubiquitous connectivity, also exposes them to jamming attacks that may critically compromise network performance or disrupt service availability. The proliferation of Unmanned Aerial Vehicles (UAVs) introduces a new dimension to this threat, as UAVs can act as mobile, intelligent jammers capable of launching sophisticated attacks by leveraging Line-of-Sight (LoS) channels and adaptive strategies. This paper addresses a critical challenge of countering intelligent UAV jamming in the context of energy-constrained ambient backscatter communication systems. Traditional anti-jamming techniques often fall short against such dynamic threats or are unsuitable for low-power backscatter devices. Hence, we propose a novel anti-jamming framework based on Deep Reinforcement Learning (DRL) that empowers the transmitter to not only defend against but also strategically exploit the UAV's jamming signals. In particular, our approach allows the transmitter to learn an optimal policy for switching between active transmission, energy harvesting from the jamming signal, and backscattering information using the jammer's own emissions. We then formulate the problem as a Markov Decision Process (MDP) and employ a Deep Q-Network (DQN) to derive the optimal operational strategy. 
Simulation results demonstrate that our DQN-based method significantly outperforms conventional Q-learning in convergence speed and surpasses a greedy anti-jamming strategy in terms of average throughput, packet loss rate, and packet delivery ratio. - oai:arXiv.org:2512.08351v1 - cs.NI - Wed, 10 Dec 2025 00:00:00 -0500 + Sequential Testing for Descriptor-Agnostic LiDAR Loop Closure in Repetitive Environments + https://arxiv.org/abs/2512.09447 + arXiv:2512.09447v1 Announce Type: new +Abstract: We propose a descriptor-agnostic, multi-frame loop closure verification method that formulates LiDAR loop closure as a truncated Sequential Probability Ratio Test (SPRT). Instead of deciding from a single descriptor comparison or using fixed thresholds with late-stage Iterative Closest Point (ICP) vetting, the verifier accumulates a short temporal stream of descriptor similarities between a query and each candidate. It then issues an accept/reject decision adaptively once sufficient multi-frame evidence has been observed, according to user-specified Type-I/II error design targets. This precision-first policy is designed to suppress false positives in structurally repetitive indoor environments. We evaluate the verifier on a five-sequence library dataset, using a fixed retrieval front-end with several representative LiDAR global descriptors. Performance is assessed via segment-level K-hit precision-recall and absolute trajectory error (ATE) and relative pose error (RPE) after pose graph optimization. Across descriptors, the sequential verifier consistently improves precision and reduces the impact of aliased loops compared with single-frame and heuristic multi-frame baselines. Our implementation and dataset will be released at: https://github.com/wanderingcar/snu_library_dataset. 
+ oai:arXiv.org:2512.09447v1 + cs.RO + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Ngoc-Tan Nguyen, Thi-Thu Hoang, Trung-Dung Hoang, Thai-Duong Nguyen + Jaehyun Kim, Seungwon Choi, Tae-Wan Kim - On Discrete Ambiguity Functions of Random Communication Waveforms - https://arxiv.org/abs/2512.08352 - arXiv:2512.08352v1 Announce Type: new -Abstract: This paper provides a fundamental characterization of the discrete ambiguity functions (AFs) of random communication waveforms under arbitrary orthonormal modulation with random constellation symbols, which serve as a key metric for evaluating the delay-Doppler sensing performance in future ISAC applications. A unified analytical framework is developed for two types of AFs, namely the discrete periodic AF (DP-AF) and the fast-slow time AF (FST-AF), where the latter may be seen as a small-Doppler approximation of the DP-AF. By analyzing the expectation of squared AFs, we derive exact closed-form expressions for both the expected sidelobe level (ESL) and the expected integrated sidelobe level (EISL) under the DP-AF and FST-AF formulations. For the DP-AF, we prove that the normalized EISL is identical for all orthogonal waveforms. To gain structural insights, we introduce a matrix representation based on the finite Weyl-Heisenberg (WH) group, where each delay-Doppler shift corresponds to a WH operator acting on the ISAC signal. This WH-group viewpoint yields sharp geometric constraints on the lowest sidelobes: The minimum ESL can only occur along a one-dimensional cut or over a set of widely dispersed delay-Doppler bins. Consequently, no waveform can attain the minimum ESL over any compact two-dimensional region, leading to a no-optimality (no-go) result under the DP-AF framework. 
For the FST-AF, the closed-form ESL and EISL expressions reveal a constellation-dependent regime governed by its kurtosis: The OFDM modulation achieves the minimum ESL for sub-Gaussian constellations, whereas the OTFS waveform becomes optimal for super-Gaussian constellations. Finally, four representative waveforms, namely, SC, OFDM, OTFS, and AFDM, are examined under both frameworks, and all theoretical results are verified through numerical examples.
 - oai:arXiv.org:2512.08352v1
 - cs.IT
 eess.SP
 math.IT
 - Wed, 10 Dec 2025 00:00:00 -0500
 new
 - http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Ying Zhang, Fan Liu, Yifeng Xiong, Weijie Yuan, Shuangyang Li, Le Zheng, Tony Xiao Han, Christos Masouros, Shi Jin
 
 
 - A reconstructed discontinuous approximation for distributed elliptic control problems
 - https://arxiv.org/abs/2512.08353
 - arXiv:2512.08353v1 Announce Type: new
-Abstract: In this paper, we present and analyze an internal penalty discontinuous Galerkin method for the distributed elliptic optimal control problems. It is based on a reconstructed discontinuous approximation which admits arbitrarily high-order approximation space with only one unknown per element. Applying this method, we develop a proper discretization scheme that approximates the state and adjoint variables in the approximation space. Our main contributions are twofold: (1) the derivation of both a priori and a posteriori error estimates of the $L^2$-norm and the energy norms, and (2) the implementation of an efficiently solvable discrete system, which is solved via a linearly convergent projected gradient descent method. Numerical experiments are provided to verify the convergence order in a priori estimate and the efficiency of a posteriori error estimate.
 - oai:arXiv.org:2512.08353v1
 - math.NA
 - cs.NA
 - math.OC
 - Wed, 10 Dec 2025 00:00:00 -0500
 new
 - http://creativecommons.org/publicdomain/zero/1.0/
 - Ruo Li, Haoyang Liu, Jun Yin
 
 
 + Power Control of Multi-Layer Repeater Networks (POLARNet)
 + https://arxiv.org/abs/2512.09449
 + arXiv:2512.09449v1 Announce Type: new
+Abstract: In this letter, we introduce POLARNet -- power control of multi-layer repeater networks -- for local optimization of SNR given different repeater power constraints. We assume relays or repeaters arranged in spatially separated groups or layers. Under ideal circumstances, with SISO narrow-band communication and TDD, the system may be viewed as a dual to a deep neural network, where activations, corresponding to repeater amplifications, are optimized and weight matrices, corresponding to channel matrices, are static. Repeater amplifications are locally optimized layer-by-layer in a forward-backward manner over compact sets. The method is applicable for a wide range of constraints on within-layer power/energy utilization, is furthermore gradient-free, step-size-free, and has proven monotonicity in the objective. Numerical simulations show significant improvement compared to upper bounds on the expected SNR. In addition, power distribution over multiple repeaters is shown to be superior to optimal selection of single repeaters in the layers. 
+ oai:arXiv.org:2512.09449v1 + eess.SY + cs.SY eess.SP - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ying Zhang, Fan Liu, Yifeng Xiong, Weijie Yuan, Shuangyang Li, Le Zheng, Tony Xiao Han, Christos Masouros, Shi Jin + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Johan Siwerson, Johan Thunberg - A reconstructed discontinuous approximation for distributed elliptic control problems - https://arxiv.org/abs/2512.08353 - arXiv:2512.08353v1 Announce Type: new -Abstract: In this paper, we present and analyze an internal penalty discontinuous Galerkin method for the distributed elliptic optimal control problems. It is based on a reconstructed discontinuous approximation which admits arbitrarily high-order approximation space with only one unknown per element. Applying this method, we develop a proper discretization scheme that approximates the state and adjoint variables in the approximation space. Our main contributions are twofold: (1) the derivation of both a priori and a posteriori error estimates of the $L^2$-norm and the energy norms, and (2) the implementation of an efficiently solvable discrete system, which is solved via a linearly convergent projected gradient descent method. Numerical experiments are provided to verify the convergence order in a priori estimate and the efficiency of a posteriori error estimate. - oai:arXiv.org:2512.08353v1 - math.NA - cs.NA - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + BlockFLEX: An Adaptive and Survivable Architecture with Hierarchical Routing for LEO Satellite Networks + https://arxiv.org/abs/2512.09453 + arXiv:2512.09453v1 Announce Type: new +Abstract: This paper presents \textbf{BlockFLEX}, an adaptive and survivable architecture with a hierarchical routing scheme for Low Earth Orbit satellite networks, designed to address dynamic topology changes and severe link failures. 
+ By organizing satellites into autonomous blocks, BlockFLEX establishes a survivable underlay network that masks network volatility and offers a stable overlay view. The architecture employs a hierarchical routing scheme integrating both convergence-free geographic routing and convergence-isolated routing. Furthermore, BlockFLEX adaptively switches between stateful and stateless forwarding modes, enabling efficient, resilient, and stable routing via a dedicated protection mechanism and an optimized source satellite selection algorithm. + Experimental evaluations on current operational LEO satellite networks (LSNs) demonstrate that under scenarios with up to 30\% random link failures, the proposed method achieves a $2\times$ improvement in reachability compared to current leading schemes, while maintaining near-100\% routing availability. Moreover, the overhead of control messages and forwarding information base (FIB) updates remains below $0.2\%$ of that in OSPF, accompanied by a $\geq 36\%$ reduction in routing computation time and a $\geq 50\%$ decrease in latency jitter. + oai:arXiv.org:2512.09453v1 + cs.NI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/publicdomain/zero/1.0/ - Ruo Li, Haoyang Liu, Jun Yin + http://creativecommons.org/licenses/by/4.0/ + Xiangtong Wang - TrackingWorld: World-centric Monocular 3D Tracking of Almost All Pixels - https://arxiv.org/abs/2512.08358 - arXiv:2512.08358v1 Announce Type: new -Abstract: Monocular 3D tracking aims to capture the long-term motion of pixels in 3D space from a single monocular video and has witnessed rapid progress in recent years. However, we argue that the existing monocular 3D tracking methods still fall short in separating the camera motion from foreground dynamic motion and cannot densely track newly emerging dynamic subjects in the videos. 
To address these two limitations, we propose TrackingWorld, a novel pipeline for dense 3D tracking of almost all pixels within a world-centric 3D coordinate system. First, we introduce a tracking upsampler that efficiently lifts the arbitrary sparse 2D tracks into dense 2D tracks. Then, to generalize the current tracking methods to newly emerging objects, we apply the upsampler to all frames and reduce the redundancy of 2D tracks by eliminating the tracks in overlapped regions. Finally, we present an efficient optimization-based framework to back-project dense 2D tracks into world-centric 3D trajectories by estimating the camera poses and the 3D coordinates of these 2D tracks. Extensive evaluations on both synthetic and real-world datasets demonstrate that our system achieves accurate and dense 3D tracking in a world-centric coordinate frame. - oai:arXiv.org:2512.08358v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + $t$-Fold $s$-Blocking Sets and $s$-Minimal Codes + https://arxiv.org/abs/2512.09457 + arXiv:2512.09457v1 Announce Type: new +Abstract: Blocking sets and minimal codes have been studied for many years in projective geometry and coding theory. In this paper, we provide a new lower bound on the size of $t$-fold $s$-blocking sets without the condition $t \leq q$, which is stronger than the classical result of Beutelspacher in 1983. Then a lower bound on lengths of projective $s$-minimal codes is also obtained. It is proved that $(s+1)$-minimal codes are certainly $s$-minimal codes. We generalize the Ashikhmin-Barg condition for minimal codes to $s$-minimal codes. Many infinite families of $s$-minimal codes satisfying and violating this generalized Ashikhmin-Barg condition are constructed. We also give several examples which are binary minimal codes, but not $2$-minimal codes. 
+ oai:arXiv.org:2512.09457v1 + cs.IT + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 new + http://creativecommons.org/publicdomain/zero/1.0/ + Hao Chen, Xu Pan, Conghui Xie - Conditional Morphogenesis: Emergent Generation of Structural Digits via Neural Cellular Automata - https://arxiv.org/abs/2512.08360 - arXiv:2512.08360v1 Announce Type: new -Abstract: Biological systems exhibit remarkable morphogenetic plasticity, where a single genome can encode various specialized cellular structures triggered by local chemical signals. In the domain of Deep Learning, Differentiable Neural Cellular Automata (NCA) have emerged as a paradigm to mimic this self-organization. However, existing NCA research has predominantly focused on continuous texture synthesis or single-target object recovery, leaving the challenge of class-conditional structural generation largely unexplored. In this work, we propose a novel Conditional Neural Cellular Automata (c-NCA) architecture capable of growing distinct topological structures - specifically MNIST digits - from a single generic seed, guided solely by a spatially broadcasted class vector. Unlike traditional generative models (e.g., GANs, VAEs) that rely on global receptive fields, our model enforces strict locality and translation equivariance. We demonstrate that by injecting a one-hot condition into the cellular perception field, a single set of local rules can learn to break symmetry and self-assemble into ten distinct geometric attractors. Experimental results show that our c-NCA achieves stable convergence, correctly forming digit topologies from a single pixel, and exhibits robustness characteristic of biological systems. &#xD;
This work bridges the gap between texture-based NCAs and structural pattern formation, offering a lightweight, biologically plausible alternative for conditional generation. - oai:arXiv.org:2512.08360v1 - cs.NE + Architectures for Building Agentic AI + https://arxiv.org/abs/2512.09458 + arXiv:2512.09458v1 Announce Type: new +Abstract: This chapter argues that the reliability of agentic and generative AI is chiefly an architectural property. We define agentic systems as goal-directed, tool-using decision makers operating in closed loops, and show how reliability emerges from principled componentisation (goal manager, planner, tool-router, executor, memory, verifiers, safety monitor, telemetry), disciplined interfaces (schema-constrained, validated, least-privilege tool calls), and explicit control and assurance loops. Building on classical foundations, we propose a practical taxonomy-tool-using agents, memory-augmented agents, planning and self-improvement agents, multi-agent systems, and embodied or web agents - and analyse how each pattern reshapes the reliability envelope and failure modes. We distil design guidance on typed schemas, idempotency, permissioning, transactional semantics, memory provenance and hygiene, runtime governance (budgets, termination conditions), and simulate-before-actuate safeguards. + oai:arXiv.org:2512.09458v1 cs.AI - cs.CV cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ali Sakour - - - SCU-CGAN: Enhancing Fire Detection through Synthetic Fire Image Generation and Dataset Augmentation - https://arxiv.org/abs/2512.08362 - arXiv:2512.08362v1 Announce Type: new -Abstract: Fire has long been linked to human life, causing severe disasters and losses. Early detection is crucial, and with the rise of home IoT technologies, household fire detection systems have emerged. However, the lack of sufficient fire datasets limits the performance of detection models. 
We propose the SCU-CGAN model, which integrates U-Net, CBAM, and an additional discriminator to generate realistic fire images from nonfire images. We evaluate the image quality and confirm that SCU-CGAN outperforms existing models. Specifically, SCU-CGAN achieved a 41.5% improvement in KID score compared to CycleGAN, demonstrating the superior quality of the generated fire images. Furthermore, experiments demonstrate that the augmented dataset significantly improves the accuracy of fire detection models without altering their structure. For the YOLOv5 nano model, the most notable improvement was observed in the mAP@0.5:0.95 metric, which increased by 56.5%, highlighting the effectiveness of the proposed approach. - oai:arXiv.org:2512.08362v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Ju-Young Kim, Ji-Hong Park, Gun-Woo Kim + S{\l}awomir Nowaczyk - Generalized Discrepancy of Random Points - https://arxiv.org/abs/2512.08364 - arXiv:2512.08364v1 Announce Type: new -Abstract: We study the $L_p$-discrepancy of random point sets in high dimensions, with emphasis on small values of $p$. Although the classical $L_p$-discrepancy suffers from the curse of dimensionality for all $p \in (1,\infty)$, the gap between known upper and lower bounds remains substantial, in particular for small $p \ge 1$. To clarify this picture, we review the existing results for i.i.d.\ uniformly distributed points and derive new upper bounds for \emph{generalized} $L_p$-discrepancies, obtained by allowing non-uniform sampling densities and corresponding non-negative quadrature weights. - Using the probabilistic method, we show that random points drawn from optimally chosen product densities lead to significantly improved upper bounds. For $p=2$ these bounds are explicit and optimal; for general $p \in [1,\infty)$ we obtain sharp asymptotic estimates. 
The improvement can be interpreted as a form of importance sampling for the underlying Sobolev space $F_{d,q}$. - Our results also reveal that, even with optimal densities, the curse of dimensionality persists for random points when $p\ge 1$, and it becomes most pronounced for small $p$. This suggests that the curse should also hold for the classical $L_1$-discrepancy for deterministic point sets. - oai:arXiv.org:2512.08364v1 + The Complex-Step Integral Transform + https://arxiv.org/abs/2512.09459 + arXiv:2512.09459v1 Announce Type: new +Abstract: Building on the well-established connection between the Hilbert transform and derivative operators, and motivated by recent developments in complex-step differentiation, we introduce the Complex-Step Integral Transform (CSIT): a generalized integral transform that combines analytic continuation, derivative approximation, and multi-scale smoothing within a unified framework. A spectral analysis shows that the CSIT preserves phase while suppressing high-wavenumber noise, offering advantages over conventional Fourier derivatives. We discuss the roles of the real and imaginary step parameters, compare FFT-based and interpolation-based implementations, and demonstrate the method on the advection equation and instantaneous-frequency computation. Results show that the CSIT yields smoother, more robust attributes than Hilbert-based methods and provides built-in stabilization for PDE solvers. The CSIT thus represents a flexible alternative for numerical differentiation, spectral analysis, and seismic signal processing. The method opens several avenues for future work, including non-periodic implementations, adaptive parameter selection, and integration with local interpolation frameworks such as high-order Finite-Element methods. 
+ oai:arXiv.org:2512.09459v1 math.NA cs.NA - math.NT - math.PR - Wed, 10 Dec 2025 00:00:00 -0500 + physics.geo-ph + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Erich Novak, Friedrich Pillichshammer + Rafael Abreu, Stephanie Durand, Jochen Kamm, Christine Thomas, Monika Pandey - Magneton: Optimizing Energy Efficiency of ML Systems via Differential Energy Debugging - https://arxiv.org/abs/2512.08365 - arXiv:2512.08365v1 Announce Type: new -Abstract: The training and deployment of machine learning (ML) models have become extremely energy-intensive. While existing optimization efforts focus primarily on hardware energy efficiency, a significant but overlooked source of inefficiency is software energy waste caused by poor software design. This often includes redundant or poorly designed operations that consume more energy without improving performance. These inefficiencies arise in widely used ML frameworks and applications, yet developers often lack the visibility and tools to detect and diagnose them. - We propose differential energy debugging, a novel approach that leverages the observation that competing ML systems often implement similar functionality with vastly different energy consumption. Building on this insight, we design and implement Magneton, an energy profiler that compares energy consumption between similar ML systems at the operator level and automatically pinpoints code regions and configuration choices responsible for excessive energy use. Applied to 9 popular ML systems spanning LLM inference, general ML frameworks, and image generation, Magneton detects and diagnoses 16 known cases of software energy inefficiency and further discovers 8 previously unknown cases, 7 of which have been confirmed by developers. 
- oai:arXiv.org:2512.08365v1 - cs.DC - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Cytoplasmic Strings Analysis in Human Embryo Time-Lapse Videos using Deep Learning Framework + https://arxiv.org/abs/2512.09461 + arXiv:2512.09461v1 Announce Type: new +Abstract: Infertility is a major global health issue, and while in-vitro fertilization has improved treatment outcomes, embryo selection remains a critical bottleneck. Time-lapse imaging enables continuous, non-invasive monitoring of embryo development, yet most automated assessment methods rely solely on conventional morphokinetic features and overlook emerging biomarkers. Cytoplasmic Strings, thin filamentous structures connecting the inner cell mass and trophectoderm in expanded blastocysts, have been associated with faster blastocyst formation, higher blastocyst grades, and improved viability. However, CS assessment currently depends on manual visual inspection, which is labor-intensive, subjective, and severely affected by detection and subtle visual appearance. In this work, we present, to the best of our knowledge, the first computational framework for CS analysis in human IVF embryos. We first design a human-in-the-loop annotation pipeline to curate a biologically validated CS dataset from TLI videos, comprising 13,568 frames with highly sparse CS-positive instances. Building on this dataset, we propose a two-stage deep learning framework that (i) classifies CS presence at the frame level and (ii) localizes CS regions in positive cases. To address severe imbalance and feature uncertainty, we introduce the Novel Uncertainty-aware Contractive Embedding (NUCE) loss, which couples confidence-aware reweighting with an embedding contraction term to form compact, well-separated class clusters. NUCE consistently improves F1-score across five transformer backbones, while RF-DETR-based localization achieves state-of-the-art (SOTA) detection performance for thin, low-contrast CS structures. 
The source code will be made publicly available at: https://github.com/HamadYA/CS_Detection. + oai:arXiv.org:2512.09461v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new + http://creativecommons.org/licenses/by/4.0/ + Anabia Sohail, Mohamad Alansari, Ahmed Abughali, Asmaa Chehab, Abdelfatah Ahmed, Divya Velayudhan, Sajid Javed, Hasan Al Marzouqi, Ameena Saad Al-Sumaiti, Junaid Kashir, Naoufel Werghi + Development of a Compliant Gripper for Safe Robot-Assisted Trouser Dressing-Undressing + https://arxiv.org/abs/2512.09462 + arXiv:2512.09462v1 Announce Type: new +Abstract: In recent years, many countries, including Japan, have rapidly aging populations, making the preservation of seniors' quality of life a significant concern. For elderly people with impaired physical abilities, support for toileting is one of the most important issues. This paper details the design, development, experimental assessment, and potential application of the gripper system, with a focus on the unique requirements and obstacles involved in aiding elderly or hemiplegic individuals in dressing and undressing trousers. The gripper we propose seeks to find the right balance between compliance and grasping forces, ensuring precise manipulation while maintaining a safe and compliant interaction with the users. The gripper's integration into a custom-built robotic manipulator system provides a comprehensive solution for assisting hemiplegic individuals in their dressing and undressing tasks. Experimental evaluations and comparisons with existing studies demonstrate the gripper's ability to successfully assist in both dressing and undressing of trousers in confined spaces with a high success rate. This research contributes to the advancement of assistive robotics, empowering elderly and physically impaired individuals to maintain their independence and improve their quality of life. &#xD;
+ oai:arXiv.org:2512.09462v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1080/01691864.2024.2376024 + Unde, J., Inden, T., Wakayama, Y., Colan, J., Zhu, Y., Aoyama, T., and Hasegawa, Y. (2024). Development of a compliant gripper for safe robot-assisted trouser dressing--undressing. \textit{Advanced Robotics}, 38(19--20), 1424--1440 + Jayant Unde, Takumi Inden, Yuki Wakayama, Jacinto Colan, Yaonan Zhu, Tadayoshi Aoyama, Yasuhisa Hasegawa - Reflecting with Two Voices: A Co-Adaptive Dual-Strategy Framework for LLM-Based Agent Decision Making - https://arxiv.org/abs/2512.08366 - arXiv:2512.08366v1 Announce Type: new -Abstract: Large language model (LLM) agents often rely on external demonstrations or retrieval-augmented planning, leading to brittleness, poor generalization, and high computational overhead. Inspired by human problem-solving, we propose DuSAR (Dual-Strategy Agent with Reflecting) - a demonstration-free framework that enables a single frozen LLM to perform co-adaptive reasoning via two complementary strategies: a high-level holistic plan and a context-grounded local policy. These strategies interact through a lightweight reflection mechanism, where the agent continuously assesses progress via a Strategy Fitness Score and dynamically revises its global plan when stuck or refines it upon meaningful advancement, mimicking human metacognitive behavior. On ALFWorld and Mind2Web, DuSAR achieves state-of-the-art performance with open-source LLMs (7B-70B), reaching 37.1% success on ALFWorld (Llama3.1-70B) - more than doubling the best prior result (13.0%) - and 4.02% on Mind2Web, also more than doubling the strongest baseline. Remarkably, it reduces per-step token consumption by 3-9X while maintaining strong performance. Ablation studies confirm the necessity of dual-strategy coordination. 
Moreover, optional integration of expert demonstrations further boosts results, highlighting DuSAR's flexibility and compatibility with external knowledge. - oai:arXiv.org:2512.08366v1 + Privacy-Preserving Computer Vision for Industry: Three Case Studies in Human-Centric Manufacturing + https://arxiv.org/abs/2512.09463 + arXiv:2512.09463v1 Announce Type: new +Abstract: The adoption of AI-powered computer vision in industry is often constrained by the need to balance operational utility with worker privacy. Building on our previously proposed privacy-preserving framework, this paper presents its first comprehensive validation on real-world data collected directly by industrial partners in active production environments. We evaluate the framework across three representative use cases: woodworking production monitoring, human-aware AGV navigation, and multi-camera ergonomic risk assessment. The approach employs learned visual transformations that obscure sensitive or task-irrelevant information while retaining features essential for task performance. Through both quantitative evaluation of the privacy-utility trade-off and qualitative feedback from industrial partners, we assess the framework's effectiveness, deployment feasibility, and trust implications. Results demonstrate that task-specific obfuscation enables effective monitoring with reduced privacy risks, establishing the framework's readiness for real-world adoption and providing cross-domain recommendations for responsible, human-centric AI deployment in industry. 
+ oai:arXiv.org:2512.09463v1 + cs.CV cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wentao Zhang, Qunbo Wang, Tao Zhang, Junsheng Wu, Hongping Gan, Yang Liu, Ling Dai, Shizhuang Deng, Shuntong Sun + Sander De Coninck, Emilio Gamba, Bart Van Doninck, Abdellatif Bey-Temsamani, Sam Leroux, Pieter Simoens - A Multivariate Bernoulli-Based Sampling Method for Multi-Label Data with Application to Meta-Research - https://arxiv.org/abs/2512.08371 - arXiv:2512.08371v1 Announce Type: new -Abstract: Datasets may contain observations with multiple labels. If the labels are not mutually exclusive, and if the labels vary greatly in frequency, obtaining a sample that includes sufficient observations with scarcer labels to make inferences about those labels, and which deviates from the population frequencies in a known manner, creates challenges. In this paper, we consider a multivariate Bernoulli distribution as our underlying distribution of a multi-label problem. We present a novel sampling algorithm that takes label dependencies into account. It uses observed label frequencies to estimate multivariate Bernoulli distribution parameters and calculate weights for each label combination. This approach ensures the weighted sampling acquires target distribution characteristics while accounting for label dependencies. We applied this approach to a sample of research articles from Web of Science labeled with 64 biomedical topic categories. We aimed to preserve category frequency order, reduce frequency differences between most and least common categories, and account for category dependencies. This approach produced a more balanced sub-sample, enhancing the representation of minority categories. 
- oai:arXiv.org:2512.08371v1 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Nominal Type Theory by Nullary Internal Parametricity + https://arxiv.org/abs/2512.09464 + arXiv:2512.09464v1 Announce Type: new +Abstract: There are many ways to represent the syntax of a language with binders. In particular, nominal frameworks are metalanguages that feature (among others) name abstraction types, which can be used to specify the type of binders. The resulting syntax representation (nominal data types) makes alpha-equivalent terms equal, and features a name-invariant induction principle. It is known that name abstraction types can be presented either as existential or universal quantification on names. On the one hand, nominal frameworks use the existential presentation for practical reasoning since the user is allowed to match on a name-term pattern where the name is bound in the term. However inference rules for existential name abstraction are cumbersome to specify/implement because they must keep track of information about free and bound names at the type level. On the other hand, universal name abstractions are easier to specify since they are treated not as pairs, but as functions consuming fresh names. Yet the ability to pattern match on such functions is seemingly lost. In this work we show that this ability and others are recovered in a type theory consisting of (1) nullary ($0$-ary) internally parametric type theory (nullary PTT) (2) a type of names and a novel name induction principle (3) nominal data types. This extension of nullary PTT can act as a legitimate nominal framework. Indeed it has universal name abstractions, nominal pattern matching, a freshness type former, name swapping and local-scope operations and (non primitive) existential name abstractions. We illustrate how term-relevant nullary parametricity is used to recover nominal pattern matching. Our main example involves synthetic Kripke parametricity. 
+ oai:arXiv.org:2512.09464v1 + cs.LO + Thu, 11 Dec 2025 00:00:00 -0500 new + http://creativecommons.org/licenses/by/4.0/ + Antoine Van Muylder, Andreas Nuyts, Dominique Devriese - USCSA: Evolution-Aware Security Analysis for Proxy-Based Upgradeable Smart Contracts - https://arxiv.org/abs/2512.08372 - arXiv:2512.08372v1 Announce Type: new -Abstract: When upgrading smart contracts on blockchain systems, it is essential to consider the continuity of the upgrade and subsequent maintenance. In practice, upgrade operations often introduce new vulnerabilities. To address this, we propose an Upgradable Smart Contract Security Analyzer, USCSA, which evaluates the risks associated with the upgrade process using Abstract Syntax Tree (AST) differential analysis. We collected and analyzed 3,546 cases of vulnerabilities in upgradable contracts, covering common vulnerability categories such as reentrancy, access control flaws, and integer overflow. Experimental results show that USCSA achieves an accuracy of 92.3%, recall of 89.7%, and F1-score of 91.0% in detecting upgrade-induced vulnerabilities. - In addition, the efficiency of mapping high-risk changes improved by 30% over the conventional approach. As a result, USCSA significantly improves the security and integrity of upgradable smart contracts, providing a novel and efficient solution for security audits of blockchain applications. - oai:arXiv.org:2512.08372v1 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaoqi Li, Lei Xie, Wenkai Li, Zongwei Li + Cauchy-Schwarz Fairness Regularizer + https://arxiv.org/abs/2512.09467 + arXiv:2512.09467v1 Announce Type: new +Abstract: Group fairness in machine learning is often enforced by adding a regularizer that reduces the dependence between model predictions and sensitive attributes. &#xD;
However, existing regularizers are built on heterogeneous distance measures and design choices, which makes their behavior hard to reason about and their performance inconsistent across tasks. This raises a basic question: what properties make a good fairness regularizer? We address this question by first organizing existing in-process methods into three families: (i) matching prediction statistics across sensitive groups, (ii) aligning latent representations, and (iii) directly minimizing dependence between predictions and sensitive attributes. Through this lens, we identify desirable properties of the underlying distance measure, including tight generalization bounds, robustness to scale differences, and the ability to handle arbitrary prediction distributions. Motivated by these properties, we propose a Cauchy-Schwarz (CS) fairness regularizer that penalizes the empirical CS divergence between prediction distributions conditioned on sensitive groups. Under a Gaussian comparison, we show that CS divergence yields a tighter bound than Kullback-Leibler divergence, Maximum Mean Discrepancy, and the mean disparity used in Demographic Parity, and we discuss how these advantages translate to a distribution-free, kernel-based estimator that naturally extends to multiple sensitive attributes. Extensive experiments on four tabular benchmarks and one image dataset demonstrate that the proposed CS regularizer consistently improves Demographic Parity and Equal Opportunity metrics while maintaining competitive accuracy, and achieves a more stable utility-fairness trade-off across hyperparameter settings compared to prior regularizers. 
+ oai:arXiv.org:2512.09467v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaoqi Li, Lei Xie, Wenkai Li, Zongwei Li + http://creativecommons.org/licenses/by/4.0/ + Yezi Liu, Hanning Chen, Wenjun Huang, Yang Ni, Mohsen Imani - The Unseen Bias: How Norm Discrepancy in Pre-Norm MLLMs Leads to Visual Information Loss - https://arxiv.org/abs/2512.08374 - arXiv:2512.08374v1 Announce Type: new -Abstract: Multimodal Large Language Models (MLLMs), which couple pre-trained vision encoders and language models, have shown remarkable capabilities. However, their reliance on the ubiquitous Pre-Norm architecture introduces a subtle yet critical flaw: a severe norm disparity between the high-norm visual tokens and the low-norm text tokens. In this work, we present a formal theoretical analysis demonstrating that this imbalance is not a static issue. Instead, it induces an ``asymmetric update dynamic,'' where high-norm visual tokens exhibit a ``representational inertia,'' causing them to transform semantically much slower than their textual counterparts. This fundamentally impairs effective cross-modal feature fusion. Our empirical validation across a range of mainstream MLLMs confirms that this theoretical dynamic -- the persistence of norm disparity and the resulting asymmetric update rates -- is a prevalent phenomenon. Based on this insight, we propose a remarkably simple yet effective solution: inserting a single, carefully initialized LayerNorm layer after the visual projector to enforce norm alignment. Experiments conducted on the LLaVA-1.5 architecture show that this intervention yields significant performance gains not only on a wide suite of multimodal benchmarks but also, notably, on text-only evaluations such as MMLU, suggesting that resolving the architectural imbalance leads to a more holistically capable model. 
- oai:arXiv.org:2512.08374v1 + Temporal-Spatial Tubelet Embedding for Cloud-Robust MSI Reconstruction using MSI-SAR Fusion: A Multi-Head Self-Attention Video Vision Transformer Approach + https://arxiv.org/abs/2512.09471 + arXiv:2512.09471v1 Announce Type: new +Abstract: Cloud cover in multispectral imagery (MSI) significantly hinders early-season crop mapping by corrupting spectral information. Existing Vision Transformer(ViT)-based time-series reconstruction methods, like SMTS-ViT, often employ coarse temporal embeddings that aggregate entire sequences, causing substantial information loss and reducing reconstruction accuracy. To address these limitations, a Video Vision Transformer (ViViT)-based framework with temporal-spatial fusion embedding for MSI reconstruction in cloud-covered regions is proposed in this study. Non-overlapping tubelets are extracted via 3D convolution with constrained temporal span $(t=2)$, ensuring local temporal coherence while reducing cross-day information degradation. Both MSI-only and SAR-MSI fusion scenarios are considered during the experiments. Comprehensive experiments on 2020 Traill County data demonstrate notable performance improvements: MTS-ViViT achieves a 2.23\% reduction in MSE compared to the MTS-ViT baseline, while SMTS-ViViT achieves a 10.33\% improvement with SAR integration over the SMTS-ViT baseline. The proposed framework effectively enhances spectral reconstruction quality for robust agricultural monitoring. 
+ oai:arXiv.org:2512.09471v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Bozhou Li, Xinda Xue, Sihan Yang, Yang Shi, Xinlong Chen, Yushuo Guan, Yuanxing Zhang, Wentao Zhang + Yiqun Wang, Lujun Li, Meiru Yue, Radu State - A Distribution Testing Approach to Clustering Distributions - https://arxiv.org/abs/2512.08376 - arXiv:2512.08376v1 Announce Type: new -Abstract: We study the following distribution clustering problem: Given a hidden partition of $k$ distributions into two groups, such that the distributions within each group are the same, and the two distributions associated with the two clusters are $\varepsilon$-far in total variation, the goal is to recover the partition. We establish upper and lower bounds on the sample complexity for two fundamental cases: (1) when one of the cluster's distributions is known, and (2) when both are unknown. Our upper and lower bounds characterize the sample complexity's dependence on the domain size $n$, number of distributions $k$, size $r$ of one of the clusters, and distance $\varepsilon$. In particular, we achieve tightness with respect to $(n,k,r,\varepsilon)$ (up to an $O(\log k)$ factor) for all regimes. - oai:arXiv.org:2512.08376v1 - cs.DS - cs.IT - math.IT - math.ST - stat.ML - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + WarmServe: Enabling One-for-Many GPU Prewarming for Multi-LLM Serving + https://arxiv.org/abs/2512.09472 + arXiv:2512.09472v1 Announce Type: new +Abstract: Deploying multiple models within shared GPU clusters is promising for improving resource efficiency in large language model (LLM) serving. Existing multi-LLM serving systems optimize GPU utilization at the cost of worse inference performance, especially time-to-first-token (TTFT). We identify the root cause of such compromise as their unawareness of future workload characteristics. 
In contrast, recent analysis on real-world traces has shown the high periodicity and long-term predictability of LLM serving workloads. + We propose universal GPU workers to enable one-for-many GPU prewarming that loads models with knowledge of future workloads. Based on universal GPU workers, we design and build WarmServe, a multi-LLM serving system that (1) mitigates cluster-wide prewarming interference by adopting an evict-aware model placement strategy, (2) prepares universal GPU workers in advance by proactive prewarming, and (3) manages GPU memory with a zero-overhead memory switching mechanism. Evaluation under real-world datasets shows that WarmServe improves TTFT by up to 50.8$\times$ compared to the state-of-the-art autoscaling-based system, while being capable of serving up to 2.5$\times$ more requests compared to the GPU-sharing system. + oai:arXiv.org:2512.09472v1 + cs.DC + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Gunjan Kumar, Yash Pote, Jonathan Scarlett + Chiheng Lou, Sheng Qi, Rui Kang, Yong Zhang, Chen Sun, Pengcheng Wang, Bingyang Liu, Xuanzhe Liu, Xin Jin - Simultaneous Enhancement and Noise Suppression under Complex Illumination Conditions - https://arxiv.org/abs/2512.08378 - arXiv:2512.08378v1 Announce Type: new -Abstract: Under challenging light conditions, captured images often suffer from various degradations, leading to a decline in the performance of vision-based applications. Although numerous methods have been proposed to enhance image quality, they either significantly amplify inherent noise or are only effective under specific illumination conditions. To address these issues, we propose a novel framework for simultaneous enhancement and noise suppression under complex illumination conditions. Firstly, a gradient-domain weighted guided filter (GDWGIF) is employed to accurately estimate illumination and improve image quality. 
Next, the Retinex model is applied to decompose the captured image into separate illumination and reflection layers. These layers undergo parallel processing, with the illumination layer being corrected to optimize lighting conditions and the reflection layer enhanced to improve image quality. Finally, the dynamic range of the image is optimized through multi-exposure fusion and a linear stretching strategy. The proposed method is evaluated on real-world datasets obtained from practical applications. Experimental results demonstrate that our proposed method achieves better performance compared to state-of-the-art methods in both contrast enhancement and noise suppression. - oai:arXiv.org:2512.08378v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + An Efficient Interaction Human-AI Synergy System Bridging Visual Awareness and Large Language Model for Intensive Care Units + https://arxiv.org/abs/2512.09473 + arXiv:2512.09473v1 Announce Type: new +Abstract: Intensive Care Units (ICUs) are critical environments characterized by high-stakes monitoring and complex data management. However, current practices often rely on manual data transcription and fragmented information systems, introducing potential risks to patient safety and operational efficiency. To address these issues, we propose a human-AI synergy system based on a cloud-edge-end architecture, which integrates visual-aware data extraction and semantic interaction mechanisms. Specifically, a visual-aware edge module non-invasively captures real-time physiological data from bedside monitors, reducing manual entry errors. To improve accessibility to fragmented data sources, a semantic interaction module, powered by a Large Language Model (LLM), enables physicians to perform efficient and intuitive voice-based queries over structured patient data. The hierarchical cloud-edge-end deployment ensures low-latency communication and scalable system performance. 
Our system reduces the cognitive burden on ICU nurses and physicians and demonstrates promising potential for broader applications in intelligent healthcare systems. &#10; + oai:arXiv.org:2512.09473v1 cs.HC Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Yibowen Zhao, Yiming Cao, Zhiqi Shen, Juan Du, Yonghui Xu, Lizhen Cui, Cyril Leung - DeepFeature: Iterative Context-aware Feature Generation for Wearable Biosignals - https://arxiv.org/abs/2512.08379 - arXiv:2512.08379v1 Announce Type: new -Abstract: Biosignals collected from wearable devices are widely utilized in healthcare applications. Machine learning models used in these applications often rely on features extracted from biosignals due to their effectiveness, lower data dimensionality, and wide compatibility across various model architectures. However, existing feature extraction methods often lack task-specific contextual knowledge, struggle to identify optimal feature extraction settings in high-dimensional feature space, and are prone to code generation and automation errors. In this paper, we propose DeepFeature, the first LLM-empowered, context-aware feature generation framework for wearable biosignals. DeepFeature introduces a multi-source feature generation mechanism that integrates expert knowledge with task settings. It also employs an iterative feature refinement process that uses feature assessment-based feedback for feature re-selection. Additionally, DeepFeature utilizes a multi-layer filtering and verification approach for robust feature-to-code translation to ensure that the extraction functions run without crashing. Experimental evaluation results show that DeepFeature achieves an average AUROC improvement of 4.21-9.67% across eight diverse tasks compared to baseline methods. &#10;
It outperforms state-of-the-art approaches on five tasks while maintaining comparable performance on the remaining tasks. - oai:arXiv.org:2512.08379v1 cs.AI Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Kaiwei Liu, Yuting He, Bufang Yang, Mu Yuan, Chun Man Victor Wong, Ho Pong Andrew Sze, Zhenyu Yan, Hongkai Chen - Color encoding in Latent Space of Stable Diffusion Models + https://arxiv.org/abs/2512.09477 + arXiv:2512.09477v1 Announce Type: new +Abstract: Recent advances in diffusion-based generative models have achieved remarkable visual fidelity, yet a detailed understanding of how specific perceptual attributes - such as color and shape - are internally represented remains limited. This work explores how color is encoded in a generative model through a systematic analysis of the latent representations in Stable Diffusion. Through controlled synthetic datasets, principal component analysis (PCA) and similarity metrics, we reveal that color information is encoded along circular, opponent axes predominantly captured in latent channels c_3 and c_4, whereas intensity and shape are primarily represented in channels c_1 and c_2. Our findings indicate that the latent space of Stable Diffusion exhibits an interpretable structure aligned with an efficient coding representation. These insights provide a foundation for future work in model understanding, editing applications, and the design of more disentangled generative frameworks. + oai:arXiv.org:2512.09477v1 cs.CV cs.AI Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/publicdomain/zero/1.0/ Guillem Arias, Ariadna Sol\`a, Mart\'i Armengod, Maria Vanrell - Finding All Bounded-Length Simple Cycles in a Directed Graphs - Revisited - https://arxiv.org/abs/2512.08392 - arXiv:2512.08392v1 Announce Type: new -Abstract: In 2021, Gupta and Suzumura proposed a novel algorithm for enumerating all bounded-length simple cycles in directed graphs. &#10;
In this work, we present concrete examples demonstrating that the proposed algorithm fails to enumerate certain valid cycles. Via these examples, we perform a detailed analysis pinpointing the specific points at which the proofs exhibit logical gaps. Furthermore, we propose a corrected formulation that resolves these issues while preserving the desirable property that the algorithm's computational complexity remains $O((c + 1) \cdot k \cdot (n + e))$ where $c$ is the number of simple cycles of a specified maximum length $k$, and $n$ and $e$ are the numbers of graph nodes and edges, respectively. - oai:arXiv.org:2512.08392v1 cs.DS Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Frank Bauern\"oppel, J\"org-R\"udiger Sack - Personalized Building Climate Control with Contextual Preferential Bayesian Optimization + https://arxiv.org/abs/2512.09481 + arXiv:2512.09481v1 Announce Type: new +Abstract: Efficient tuning of building climate controllers to optimize occupant utility is essential for ensuring overall comfort and satisfaction. However, this is a challenging task since the latent utility is difficult to measure directly. Time-varying contextual factors, such as outdoor temperature, further complicate the problem. To address these challenges, we propose a contextual preferential Bayesian optimization algorithm that leverages binary preference feedback together with contextual information to enable efficient real-time controller tuning. We validate the approach by tuning an economic MPC controller on BOPTEST, a high-fidelity building simulation platform. Over a two-month simulation period, our method outperforms the baseline controller and achieves an improvement of up to 23% in utility. Moreover, for different occupant types, we demonstrate that the algorithm automatically adapts to individual preferences, enabling personalized controller tuning. &#10;
+ oai:arXiv.org:2512.09481v1 eess.SY cs.SY Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Wenbin Wang, Jicheng Shi, Colin N. Jones - Detection of Digital Facial Retouching utilizing Face Beauty Information - https://arxiv.org/abs/2512.08397 - arXiv:2512.08397v1 Announce Type: new -Abstract: Facial retouching to beautify images is widespread in social media, advertisements, and it is even applied in professional photo studios to let individuals appear younger, remove wrinkles and skin impurities. Generally speaking, this is done to enhance beauty. This is not a problem in itself, but when retouched images are used as biometric samples and enrolled in a biometric system, it is one. Since previous work has proven facial retouching to be a challenge for face recognition systems, the detection of facial retouching becomes increasingly necessary. This work proposes to study and analyze changes in beauty assessment algorithms of retouched images, assesses different feature extraction methods based on artificial intelligence in order to improve retouching detection, and evaluates whether face beauty can be exploited to enhance the detection rate. In a scenario where the attacking retouching algorithm is unknown, this work achieved 1.1% D-EER on single image detection. - oai:arXiv.org:2512.08397v1 cs.CV Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ Philipp Srock, Juan E. Tapia, Christoph Busch - Ontology-Based Knowledge Graph Framework for Industrial Standard Documents via Hierarchical and Propositional Structuring - https://arxiv.org/abs/2512.08398 - arXiv:2512.08398v1 Announce Type: new -Abstract: Ontology-based knowledge graph (KG) construction is a core technology that enables multidimensional understanding and advanced reasoning over domain knowledge. &#10;
Industrial standards, in particular, contain extensive technical information and complex rules presented in highly structured formats that combine tables, scopes of application, constraints, exceptions, and numerical calculations, making KG construction especially challenging. In this study, we propose a method that organizes such documents into a hierarchical semantic structure, decomposes sentences and tables into atomic propositions derived from conditional and numerical rules, and integrates them into an ontology-knowledge graph through LLM-based triple extraction. Our approach captures both the hierarchical and logical structures of documents, effectively representing domain-specific semantics that conventional methods fail to reflect. To verify its effectiveness, we constructed rule, table, and multi-hop QA datasets, as well as a toxic clause detection dataset, from industrial standards, and implemented an ontology-aware KG-RAG framework for comparative evaluation. Experimental results show that our method achieves significant performance improvements across all QA types compared to existing KG-RAG approaches. This study demonstrates that reliable and scalable knowledge representation is feasible even for industrial documents with intertwined conditions, constraints, and scopes, contributing to future domain-specific RAG development and intelligent document management. - oai:arXiv.org:2512.08398v1 cs.IR cs.CL Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Jiin Park, Hyuna Jeon, Yoonseo Lee, Jisu Hong, Misuk Kim - Source Coverage and Citation Bias in LLM-based vs. Traditional Search Engines + https://arxiv.org/abs/2512.09483 + arXiv:2512.09483v1 Announce Type: new +Abstract: LLM-based Search Engines (LLM-SEs) introduce a new paradigm for information seeking. Unlike Traditional Search Engines (TSEs) (e.g., Google), these systems summarize results, often providing limited citation transparency. The implications of this shift remain largely unexplored, yet raise key questions regarding trust and transparency. &#10;
In this paper, we present a large-scale empirical study of LLM-SEs, analyzing 55,936 queries and the corresponding search results across six LLM-SEs and two TSEs. We confirm that LLM-SEs cite domain resources with greater diversity than TSEs. Indeed, 37% of domains are unique to LLM-SEs. However, certain risks still persist: LLM-SEs do not outperform TSEs in credibility, political neutrality and safety metrics. Finally, to understand the selection criteria of LLM-SEs, we perform a feature-based analysis to identify key factors influencing source choice. Our findings provide actionable insights for end users, website owners, and developers. + oai:arXiv.org:2512.09483v1 cs.CL cs.CY Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Peixian Zhang, Qiming Ye, Zifan Peng, Kiran Garimella, Gareth Tyson - Towards Visual Re-Identification of Fish using Fine-Grained Classification for Electronic Monitoring in Fisheries - https://arxiv.org/abs/2512.08400 - arXiv:2512.08400v1 Announce Type: new -Abstract: Accurate fisheries data are crucial for effective and sustainable marine resource management. With the recent adoption of Electronic Monitoring (EM) systems, more video data is now being collected than can be feasibly reviewed manually. This paper addresses this challenge by developing an optimized deep learning pipeline for automated fish re-identification (Re-ID) using the novel AutoFish dataset, which simulates EM systems with conveyor belts with six similar-looking fish species. We demonstrate that key Re-ID metrics (R1 and mAP@k) are substantially improved by using hard triplet mining in conjunction with a custom image transformation pipeline that includes dataset-specific normalization. &#10;
By employing these strategies, we demonstrate that the Vision Transformer-based Swin-T architecture consistently outperforms the Convolutional Neural Network-based ResNet-50, achieving peak performance of 41.65% mAP@k and 90.43% Rank-1 accuracy. An in-depth analysis reveals that the primary challenge is distinguishing visually similar individuals of the same species (Intra-species errors), where viewpoint inconsistency proves significantly more detrimental than partial occlusion. The source code and documentation are available at: https://github.com/msamdk/Fish_Re_Identification.git - oai:arXiv.org:2512.08400v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Unambiguisability and Register Minimisation of Min-Plus Models + https://arxiv.org/abs/2512.09484 + arXiv:2512.09484v1 Announce Type: new +Abstract: We study the unambiguisability problem for min-plus (tropical) weighted automata (WFAs), and the counter-minimisation problem for tropical Cost Register Automata (CRAs), which are expressively-equivalent to WFAs. Both problems ask whether the "amount of nondeterminism" in the model can be reduced. We show that WFA unambiguisability is decidable, thus resolving this long-standing open problem. Our proof is via reduction to WFA determinisability, which was recently shown to be decidable. On the negative side, we show that CRA counter minimisation is undecidable, even for a fixed number of registers (specifically, already for 7 registers). 
+ oai:arXiv.org:2512.09484v1 + cs.FL + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Samitha Nuwan Thilakarathna, Ercan Avsar, Martin Mathias Nielsen, Malte Pedersen + Shaull Almagor, Guy Arbel, Sarai Sheinvald - DFALLM: Achieving Generalizable Multitask Deepfake Detection by Optimizing Audio LLM Components - https://arxiv.org/abs/2512.08403 - arXiv:2512.08403v1 Announce Type: new -Abstract: Audio deepfake detection has recently garnered public concern due to its implications for security and reliability. Traditional deep learning methods have been widely applied to this task but often lack generalisability when confronted with newly emerging spoofing techniques and more tasks such as spoof attribution recognition rather than simple binary classification. In principle, Large Language Models (LLMs) are considered to possess the needed generalisation capabilities. However, previous research on Audio LLMs (ALLMs) indicates a generalization bottleneck in audio deepfake detection performance, even when sufficient data is available. Consequently, this study investigates the model architecture and examines the effects of the primary components of ALLMs, namely the audio encoder and the text-based LLM. Our experiments demonstrate that the careful selection and combination of audio encoders and text-based LLMs are crucial for unlocking the deepfake detection potential of ALLMs. We further propose an ALLM structure capable of generalizing deepfake detection abilities to out-of-domain spoofing tests and other deepfake tasks, such as spoof positioning and spoof attribution recognition. Our proposed model architecture achieves state-of-the-art (SOTA) performance across multiple datasets, including ASVSpoof2019, InTheWild, and Demopage, with accuracy reaching up to 95.76% on average, and exhibits competitive capabilities in other deepfake detection tasks such as attribution, and localisation compared to SOTA audio understanding models. 
Data and codes are provided in supplementary materials. - oai:arXiv.org:2512.08403v1 - cs.SD - Wed, 10 Dec 2025 00:00:00 -0500 + Advancing LLM-Based Security Automation with Customized Group Relative Policy Optimization for Zero-Touch Networks + https://arxiv.org/abs/2512.09485 + arXiv:2512.09485v1 Announce Type: new +Abstract: Zero-Touch Networks (ZTNs) represent a transformative paradigm toward fully automated and intelligent network management, providing the scalability and adaptability required for the complexity of sixth-generation (6G) networks. However, the distributed architecture, high openness, and deep heterogeneity of 6G networks expand the attack surface and pose unprecedented security challenges. To address this, security automation aims to enable intelligent security management across dynamic and complex environments, serving as a key capability for securing 6G ZTNs. Despite its promise, implementing security automation in 6G ZTNs presents two primary challenges: 1) automating the lifecycle from security strategy generation to validation and update under real-world, parallel, and adversarial conditions, and 2) adapting security strategies to evolving threats and dynamic environments. This motivates us to propose SecLoop and SA-GRPO. SecLoop constitutes the first fully automated framework that integrates large language models (LLMs) across the entire lifecycle of security strategy generation, orchestration, response, and feedback, enabling intelligent and adaptive defenses in dynamic network environments, thus tackling the first challenge. Furthermore, we propose SA-GRPO, a novel security-aware group relative policy optimization algorithm that iteratively refines security strategies by contrasting group feedback collected from parallel SecLoop executions, thereby addressing the second challenge. 
Extensive real-world experiments on five benchmarks, including 11 MITRE ATT&CK processes and over 20 types of attacks, demonstrate the superiority of the proposed SecLoop and SA-GRPO. We will release our platform to the community, facilitating the advancement of security automation towards next generation communications. + oai:arXiv.org:2512.09485v1 + cs.CR + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yupei Li, Li Wang, Yuxiang Wang, Lei Wang, Rizhao Cai, Jie Shi, Bj\"orn W. Schuller, Zhizheng Wu + Xinye Cao, Yihan Lin, Guoshun Nan, Qinchuan Zhou, Yuhang Luo, Yurui Gao, Zeliang Zhang, Haolang Lu, Qimei Cui, Yanzhao Hou, Xiaofeng Tao, Tony Q. S. Quek - Are generative AI text annotations systematically biased? - https://arxiv.org/abs/2512.08404 - arXiv:2512.08404v1 Announce Type: new -Abstract: This paper investigates bias in GLLM annotations by conceptually replicating manual annotations of Boukes (2024). Using various GLLMs (Llama3.1:8b, Llama3.3:70b, GPT4o, Qwen2.5:72b) in combination with five different prompts for five concepts (political content, interactivity, rationality, incivility, and ideology). We find GLLMs perform adequate in terms of F1 scores, but differ from manual annotations in terms of prevalence, yield substantively different downstream results, and display systematic bias in that they overlap more with each other than with manual annotations. Differences in F1 scores fail to account for the degree of bias. - oai:arXiv.org:2512.08404v1 + RouteRAG: Efficient Retrieval-Augmented Generation from Text and Graph via Reinforcement Learning + https://arxiv.org/abs/2512.09487 + arXiv:2512.09487v1 Announce Type: new +Abstract: Retrieval-Augmented Generation (RAG) integrates non-parametric knowledge into Large Language Models (LLMs), typically from unstructured texts and structured graphs. 
While recent progress has advanced text-based RAG to multi-turn reasoning through Reinforcement Learning (RL), extending these advances to hybrid retrieval introduces additional challenges. Existing graph-based or hybrid systems typically depend on fixed or handcrafted retrieval pipelines, lacking the ability to integrate supplementary evidence as reasoning unfolds. Besides, while graph evidence provides relational structures crucial for multi-hop reasoning, it is substantially more expensive to retrieve. To address these limitations, we introduce \model{}, an RL-based framework that enables LLMs to perform multi-turn and adaptive graph-text hybrid RAG. \model{} jointly optimizes the entire generation process via RL, allowing the model to learn when to reason, what to retrieve from either texts or graphs, and when to produce final answers, all within a unified generation policy. To guide this learning process, we design a two-stage training framework that accounts for both task outcome and retrieval efficiency, enabling the model to exploit hybrid evidence while avoiding unnecessary retrieval overhead. Experimental results across five question answering benchmarks demonstrate that \model{} significantly outperforms existing RAG baselines, highlighting the benefits of end-to-end RL in supporting adaptive and efficient retrieval for complex reasoning. + oai:arXiv.org:2512.09487v1 cs.CL cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sjoerd B. Stolwijk, Mark Boukes, Damian Trilling + Yucan Guo, Miao Su, Saiping Guan, Zihao Sun, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng - Learning Robot Manipulation from Audio World Models - https://arxiv.org/abs/2512.08405 - arXiv:2512.08405v1 Announce Type: new -Abstract: World models have demonstrated impressive performance on robotic learning tasks. 
Many such tasks inherently demand multimodal reasoning; for example, when filling a bottle with water, visual information alone is ambiguous or incomplete, requiring reasoning over the temporal evolution of audio and accounting for its underlying physical properties and pitch patterns. In this paper, we propose a generative latent flow matching model to anticipate future audio observations, enabling the system to reason about long-term consequences when integrated into a robot policy. We demonstrate the superior capabilities of our system through two manipulation tasks that require perceiving in-the-wild audio or music signals, compared to methods without future lookahead. We further emphasize that successful robot action learning for these tasks relies not merely on multi-modal input, but critically on the accurate prediction of future audio states that embody intrinsic rhythmic patterns. - oai:arXiv.org:2512.08405v1 cs.RO Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Fan Zhang, Michael Gienger - MODA: The First Challenging Benchmark for Multispectral Object Detection in Aerial Images + https://arxiv.org/abs/2512.09489 + arXiv:2512.09489v1 Announce Type: new +Abstract: Aerial object detection faces significant challenges in real-world scenarios, such as small objects and extensive background interference, which limit the performance of RGB-based detectors with insufficient discriminative information. Multispectral images (MSIs) capture additional spectral cues across multiple bands, offering a promising alternative. However, the lack of training data has been the primary bottleneck to exploiting the potential of MSIs. To address this gap, we introduce the first large-scale dataset for Multispectral Object Detection in Aerial images (MODA), which comprises 14,041 MSIs and 330,191 annotations across diverse, challenging scenarios, providing a comprehensive data foundation for this field. &#10;
Furthermore, to overcome challenges inherent to aerial object detection using MSIs, we propose OSSDet, a framework that integrates spectral and spatial information with object-aware cues. OSSDet employs a cascaded spectral-spatial modulation structure to optimize target perception, aggregates spectrally related features by exploiting spectral similarities to reinforce intra-object correlations, and suppresses irrelevant background via object-aware masking. Moreover, cross-spectral attention further refines object-related representations under explicit object-aware guidance. Extensive experiments demonstrate that OSSDet outperforms existing methods with comparable parameters and efficiency. + oai:arXiv.org:2512.09489v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fan Zhang, Michael Gienger + Shuaihao Han, Tingfa Xu, Peifu Liu, Jianan Li - SAM-Body4D: Training-Free 4D Human Body Mesh Recovery from Videos - https://arxiv.org/abs/2512.08406 - arXiv:2512.08406v1 Announce Type: new -Abstract: Human Mesh Recovery (HMR) aims to reconstruct 3D human pose and shape from 2D observations and is fundamental to human-centric understanding in real-world scenarios. While recent image-based HMR methods such as SAM 3D Body achieve strong robustness on in-the-wild images, they rely on per-frame inference when applied to videos, leading to temporal inconsistency and degraded performance under occlusions. We address these issues without extra training by leveraging the inherent human continuity in videos. We propose SAM-Body4D, a training-free framework for temporally consistent and occlusion-robust HMR from videos. We first generate identity-consistent masklets using a promptable video segmentation model, then refine them with an Occlusion-Aware module to recover missing regions. 
The refined masklets guide SAM 3D Body to produce consistent full-body mesh trajectories, while a padding-based parallel strategy enables efficient multi-human inference. Experimental results demonstrate that SAM-Body4D achieves improved temporal stability and robustness in challenging in-the-wild videos, without any retraining. Our code and demo are available at: https://github.com/gaomingqi/sam-body4d. - oai:arXiv.org:2512.08406v1 cs.CV Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Mingqi Gao, Yunqi Miao, Jungong Han - StateSpace-SSL: Linear-Time Self-supervised Learning for Plant Disease Detection + https://arxiv.org/abs/2512.09492 + arXiv:2512.09492v1 Announce Type: new +Abstract: Self-supervised learning (SSL) is attractive for plant disease detection as it can exploit large collections of unlabeled leaf images, yet most existing SSL methods are built on CNNs or vision transformers that are poorly matched to agricultural imagery. CNN-based SSL struggles to capture disease patterns that evolve continuously along leaf structures, while transformer-based SSL introduces quadratic attention cost from high-resolution patches. To address these limitations, we propose StateSpace-SSL, a linear-time SSL framework that employs a Vision Mamba state-space encoder to model long-range lesion continuity through directional scanning across the leaf surface. A prototype-driven teacher-student objective aligns representations across multiple views, encouraging stable and lesion-aware features from labelled data. Experiments on three publicly available plant disease datasets show that StateSpace-SSL consistently outperforms the CNN- and transformer-based SSL baselines in various evaluation metrics. Qualitative analyses further confirm that it learns compact, lesion-focused feature maps, highlighting the advantage of linear state-space modelling for self-supervised plant disease representation learning. &#10;
+ oai:arXiv.org:2512.09492v1 cs.CV Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Abdullah Al Mamun, Miaohua Zhang, David Ahmedt-Aristizabal, Zeeshan Hayder, Mohammad Awrangjeb - Towards Effective and Efficient Long Video Understanding of Multimodal Large Language Models via One-shot Clip Retrieval - https://arxiv.org/abs/2512.08410 - arXiv:2512.08410v1 Announce Type: new -Abstract: Due to excessive memory overhead, most Multimodal Large Language Models (MLLMs) can only process videos of limited frames. In this paper, we propose an effective and efficient paradigm to remedy this shortcoming, termed One-shot video-Clip based Retrieval AuGmentation (OneClip-RAG). Compared with existing video RAG methods, OneClip-RAG makes full use of the merits of video clips for augmented video understanding in terms of both knowledge integrity and semantic coherence. Besides, it is also equipped with a novel query-guided video chunking algorithm that can unify clip chunking and cross-modal retrieval in one processing step, avoiding redundant computations. To improve instruction following, we further propose a new dataset called SynLongVideo and design a progressive training regime for OneClip-RAG. OneClip-RAG is plugged into five recent MLLMs and validated on a set of long-video benchmarks. Experimental results not only show the obvious performance gains by OneClip-RAG over MLLMs, e.g., boosting InternLV2 8B and Qwen2-VL 7B to the level of GPT-4o on MLVU, but also show its superior efficiency in handling long videos, e.g., enabling LLaVA-Video to understand up to an hour of videos in less than 2.2 minutes on a single 4090 GPU. &#10;
- oai:arXiv.org:2512.08410v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + On Mobile Ad Hoc Networks for Coverage of Partially Observable Worlds + https://arxiv.org/abs/2512.09495 + arXiv:2512.09495v1 Announce Type: new +Abstract: This paper addresses the movement and placement of mobile agents to establish a communication network in initially unknown environments. We cast the problem in a computational-geometric framework by relating the coverage problem and line-of-sight constraints to the Cooperative Guard Art Gallery Problem, and introduce its partially observable variant, the Partially Observable Cooperative Guard Art Gallery Problem (POCGAGP). We then present two algorithms that solve POCGAGP: CADENCE, a centralized planner that incrementally selects 270 degree corners at which to deploy agents, and DADENCE, a decentralized scheme that coordinates agents using local information and lightweight messaging. Both approaches operate under partial observability and target simultaneous coverage and connectivity. We evaluate the methods in simulation across 1,500 test cases of varied size and structure, demonstrating consistent success in forming connected networks while covering and exploring unknown space. These results highlight the value of geometric abstractions for communication-driven exploration and show that decentralized policies are competitive with centralized performance while retaining scalability. 
+ oai:arXiv.org:2512.09495v1 + cs.RO + cs.CG + cs.MA + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tao Chen, Shaobo Ju, Qiong Wu, Chenxin Fang, Kun Zhang, Jun Peng, Hui Li, Yiyi Zhou, Rongrong Ji + Edwin Meriaux, Shuo Wen, Louis-Roy Langevin, Doina Precup, Antonio Lor\'ia, Gregory Dudek - Prismatic World Model: Learning Compositional Dynamics for Planning in Hybrid Systems - https://arxiv.org/abs/2512.08411 - arXiv:2512.08411v1 Announce Type: new -Abstract: Model-based planning in robotic domains is fundamentally challenged by the hybrid nature of physical dynamics, where continuous motion is punctuated by discrete events such as contacts and impacts. Conventional latent world models typically employ monolithic neural networks that enforce global continuity, inevitably over-smoothing the distinct dynamic modes (e.g., sticking vs. sliding, flight vs. stance). For a planner, this smoothing results in catastrophic compounding errors during long-horizon lookaheads, rendering the search process unreliable at physical boundaries. To address this, we introduce the Prismatic World Model (PRISM-WM), a structured architecture designed to decompose complex hybrid dynamics into composable primitives. PRISM-WM leverages a context-aware Mixture-of-Experts (MoE) framework where a gating mechanism implicitly identifies the current physical mode, and specialized experts predict the associated transition dynamics. We further introduce a latent orthogonalization objective to ensure expert diversity, effectively preventing mode collapse. By accurately modeling the sharp mode transitions in system dynamics, PRISM-WM significantly reduces rollout drift. 
Extensive experiments on challenging continuous control benchmarks, including high-dimensional humanoids and diverse multi-task settings, demonstrate that PRISM-WM provides a superior high-fidelity substrate for trajectory optimization algorithms (e.g., TD-MPC), proving its potential as a powerful foundational model for next-generation model-based agents. - oai:arXiv.org:2512.08411v1 + Representation Invariance and Allocation: When Subgroup Balance Matters + https://arxiv.org/abs/2512.09496 + arXiv:2512.09496v1 Announce Type: new +Abstract: Unequal representation of demographic groups in training data poses challenges to model generalisation across populations. Standard practice assumes that balancing subgroup representation optimises performance. However, recent empirical results contradict this assumption: in some cases, imbalanced data distributions actually improve subgroup performance, while in others, subgroup performance remains unaffected by the absence of an entire subgroup during training. We conduct a systematic study of subgroup allocation across four vision and language models, varying training data composition to characterise the sensitivity of subgroup performance to data balance. We propose the latent separation hypothesis, which states that a partially fine-tuned model's dependence on subgroup representation is determined by the degree of separation between subgroups in the latent space of the pre-trained model. We formalise this hypothesis, provide theoretical analysis, and validate it empirically. Finally, we present a practical application to foundation model fine-tuning, demonstrating that quantitative analysis of latent subgroup separation can inform data collection and balancing decisions. 
+ oai:arXiv.org:2512.09496v1 + cs.LG cs.AI - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Mingwei Li, Xiaoyuan Zhang, Chengwei Yang, Zilong Zheng, Yaodong Yang + Anissa Alloula, Charles Jones, Zuzanna Wakefield-Skorniewska, Francesco Quinzan, Bart{\l}omiej Papie\.z - Integration of AI-Driven CAD Systems in Designing Water and Power Transportation Infrastructure for Industrial and Remote Landscape Applications - https://arxiv.org/abs/2512.08415 - arXiv:2512.08415v1 Announce Type: new -Abstract: The integration of AI into CAD systems transforms how engineers plan and develop infrastructure projects involving water and power transportation across industrial and remote landscapes. This paper discusses how AI-driven CAD systems improve the efficient, effective, and sustainable design of infrastructure by embedding automation, predictive modeling, and real-time data analytics. This study examines how AI-supported toolsets can enhance design workflows, minimize human error, and optimize resource allocation for projects in underdeveloped environments. It also addresses technical and organizational challenges to AI adoption, including data silos, interoperability issues, and workforce adaptation. The findings demonstrate that AI-powered CAD enables faster project delivery, enhanced design precision, and increased resilience to environmental and logistical constraints. AI helps connect CAD, GIS, and IoT technologies to develop self-learning, adaptive design systems that are needed to meet the increasing global demand for sustainable infrastructure. - oai:arXiv.org:2512.08415v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Gradient-Guided Learning Network for Infrared Small Target Detection + https://arxiv.org/abs/2512.09497 + arXiv:2512.09497v1 Announce Type: new +Abstract: Recently, infrared small target detection has attracted extensive attention. 
However, due to the small size and the lack of intrinsic features of infrared small targets, existing methods generally suffer from inaccurate edge positioning, and targets are easily submerged by the background. Therefore, we propose an innovative gradient-guided learning network (GGL-Net). Specifically, we are the first to explore the introduction of gradient magnitude images into the deep learning-based infrared small target detection method, which is conducive to emphasizing the edge details and alleviating the problem of inaccurate edge positioning of small targets. On this basis, we propose a novel dual-branch feature extraction network that utilizes the proposed gradient supplementary module (GSM) to encode raw gradient information into deeper network layers and embeds attention mechanisms reasonably to enhance feature extraction ability. In addition, we construct a two-way guidance fusion module (TGFM), which fully considers the characteristics of feature maps at different levels. It can facilitate the effective fusion of multi-scale feature maps and extract richer semantic information and detailed information through reasonable two-way guidance. Extensive experiments prove that GGL-Net achieves state-of-the-art results on the public real NUAA-SIRST dataset and the public synthetic NUDT-SIRST dataset. 
Our code has been integrated into https://github.com/YuChuang1205/MSDA-Net + oai:arXiv.org:2512.09497v1 cs.CV Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ 10.1109/LGRS.2023.3308783 Jinmiao Zhao, Chuang Yu, Zelin Shi, Yunpeng Liu, Yingdi Zhang - Improvement and Stabilization of Output Voltages in a Vertical Tidal Turbine Using Intelligent Control Strategies - https://arxiv.org/abs/2512.08416 - arXiv:2512.08416v1 Announce Type: new -Abstract: This article investigates the improvement and stabilization of alternating current (AC) and direct current (DC) output voltages in a Permanent Magnet Synchronous Generator (PMSG) driven by a vertical-axis tidal turbine using advanced control strategies. The research integrates artificial intelligence (AI)-based techniques to enhance voltage stability and efficiency. Initially, the Maximum Power Point Tracking (MPPT) approach based on Tip Speed Ratio (TSR) and Artificial Neural Network (ANN) Fuzzy logic controllers is explored. To further optimize the performance, Particle Swarm Optimization (PSO) and a hybrid ANN-PSO methodology are implemented. These strategies aim to refine the reference rotational speed of the turbine while minimizing deviations from optimal power extraction conditions. The simulation results of a tidal turbine operating at a water flow velocity of 1.5 m/s demonstrate that the PSO-based control approach significantly enhances the voltage stability compared to conventional MPPT-TSR and ANN-Fuzzy controllers. The hybrid ANN-PSO technique improves the voltage regulation by dynamically adapting to system variations and providing real-time reference speed adjustments. This research highlights the benefit of AI-based hybrid optimization in stabilizing the output voltage of tidal energy systems, thereby increasing reliability and efficiency in renewable energy applications. 
- oai:arXiv.org:2512.08416v1 - cs.NI - eess.SP - Wed, 10 Dec 2025 00:00:00 -0500 + Scalable Construction of Spiking Neural Networks using up to thousands of GPUs + https://arxiv.org/abs/2512.09502 + arXiv:2512.09502v1 Announce Type: new +Abstract: Diverse scientific and engineering research areas deal with discrete, time-stamped changes in large systems of interacting delay differential equations. Simulating such complex systems at scale on high-performance computing clusters demands efficient management of communication and memory. Inspired by the human cerebral cortex -- a sparsely connected network of $\mathcal{O}(10^{10})$ neurons, each forming $\mathcal{O}(10^{3})$--$\mathcal{O}(10^{4})$ synapses and communicating via short electrical pulses called spikes -- we study the simulation of large-scale spiking neural networks for computational neuroscience research. This work presents a novel network construction method for multi-GPU clusters and upcoming exascale supercomputers using the Message Passing Interface (MPI), where each process builds its local connectivity and prepares the data structures for efficient spike exchange across the cluster during state propagation. We demonstrate scaling performance of two cortical models using point-to-point and collective communication, respectively. 
+ oai:arXiv.org:2512.09502v1 + cs.DC + cs.NE + physics.comp-ph + q-bio.NC + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - International Conference on Electrical and Computer Engineering Researches (ICECER 2025), Dec 2025, Antananarivo, Madagascar - Fanambinantsoa Philibert Andriniriniaimalaza (PIMENT), Nour Murad (PIMENT), Randriamaitso Telesphore (SPE), Bilal Habachi (SPE), Randriatefison Nirilalaina (NUIST), Manasina Ruffin (NUIST), Andrianirina Charles Bernard (NUIST), Ravelo Blaise (NUIST) + http://creativecommons.org/licenses/by/4.0/ + Bruno Golosio, Gianmarco Tiddia, Jos\'e Villamar, Luca Pontisso, Luca Sergi, Francesco Simula, Pooja Babu, Elena Pastorelli, Abigail Morrison, Markus Diesmann, Alessandro Lonardo, Pier Stanislao Paolucci, Johanna Senk - Attention is All You Need to Defend Against Indirect Prompt Injection Attacks in LLMs - https://arxiv.org/abs/2512.08417 - arXiv:2512.08417v1 Announce Type: new -Abstract: Large Language Models (LLMs) have been integrated into many applications (e.g., web agents) to perform more sophisticated tasks. However, LLM-empowered applications are vulnerable to Indirect Prompt Injection (IPI) attacks, where instructions are injected via untrustworthy external data sources. This paper presents Rennervate, a defense framework to detect and prevent IPI attacks. Rennervate leverages attention features to detect the covert injection at a fine-grained token level, enabling precise sanitization that neutralizes IPI attacks while maintaining LLM functionalities. Specifically, the token-level detector is materialized with a 2-step attentive pooling mechanism, which aggregates attention heads and response tokens for IPI detection and sanitization. Moreover, we establish a fine-grained IPI dataset, FIPI, to be open-sourced to support further research. 
Extensive experiments verify that Rennervate outperforms 15 commercial and academic IPI defense methods, achieving high precision on 5 LLMs and 6 datasets. We also demonstrate that Rennervate is transferable to unseen attacks and robust against adaptive adversaries. - oai:arXiv.org:2512.08417v1 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + DMP-TTS: Disentangled multi-modal Prompting for Controllable Text-to-Speech with Chained Guidance + https://arxiv.org/abs/2512.09504 + arXiv:2512.09504v1 Announce Type: new +Abstract: Controllable text-to-speech (TTS) systems face significant challenges in achieving independent manipulation of speaker timbre and speaking style, often suffering from entanglement between these attributes. We present DMP-TTS, a latent Diffusion Transformer (DiT) framework with explicit disentanglement and multi-modal prompting. A CLAP-based style encoder (Style-CLAP) aligns cues from reference audio and descriptive text in a shared space and is trained with contrastive learning plus multi-task supervision on style attributes. For fine-grained control during inference, we introduce chained classifier-free guidance (cCFG) trained with hierarchical condition dropout, enabling independent adjustment of content, timbre, and style guidance strengths. Additionally, we employ Representation Alignment (REPA) to distill acoustic-semantic features from a pretrained Whisper model into intermediate DiT representations, stabilizing training and accelerating convergence. Experiments show that DMP-TTS delivers stronger style controllability than open-source baselines while maintaining competitive intelligibility and naturalness. Code and demos will be available at https://y61329697.github.io/DMP-TTS/. 
+ oai:arXiv.org:2512.09504v1 + cs.SD + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yinan Zhong, Qianhao Miao, Yanjiao Chen, Jiangyi Deng, Yushi Cheng, Wenyuan Xu + Kang Yin, Chunyu Qiang, Sirui Zhao, Xiaopeng Wang, Yuzhe Liang, Pengfei Cai, Tong Xu, Chen Zhang, Enhong Chen - Mechanical behaviour of brain-skull interface (meninges) under shear loading through experiment and finite element modelling: Preliminary results - https://arxiv.org/abs/2512.08425 - arXiv:2512.08425v1 Announce Type: new -Abstract: The brain-skull interface (meninges) plays a critical role in governing brain motion during head impacts, yet computational models often simplify this interface using idealized contact conditions due to limited experimental data. This study presents an improved protocol combining experimental testing and computational modelling to determine the mechanical properties of the brain-skull interface under shear loading. Brain tissue and brain-skull complex samples were extracted from sheep cadaver heads and subjected to shear loading. Magnetic resonance imaging (MRI) was used to obtain accurate 3D geometries of the samples, which were then used to create computational grids (meshes) for simulation of the experiments using finite element (FE) models to determine subject-specific properties of the brain tissue and brain-skull interface. A second-order Ogden hyperelastic model was used for the brain tissue, and a cohesive layer was employed to model the brain-skull interface. Our results indicate that a cohesive layer captures the force-displacement and damage initiation of the brain-skull interface. The calibrated cohesive properties showed consistent patterns across samples, with maximum normal tractions ranging from 2.8-3.4 kPa and maximum tangential tractions from 1.8-2.1 kPa. 
This framework provides a foundation for improving the biofidelity of computational head models used in injury prediction and neurosurgical planning by replacing arbitrary boundary conditions with formulations derived from experimental data on brain-skull interface (meninges) biomechanical behaviour. - oai:arXiv.org:2512.08425v1 + CNFinBench: A Benchmark for Safety and Compliance of Large Language Models in Finance + https://arxiv.org/abs/2512.09506 + arXiv:2512.09506v1 Announce Type: new +Abstract: Large language models are increasingly deployed across the financial sector for tasks such as research, compliance, risk analysis, and customer service, which makes rigorous safety evaluation essential. However, existing financial benchmarks primarily focus on textbook-style question answering and numerical problem solving, but fail to evaluate models' real-world safety behaviors. They weakly assess regulatory compliance and investor-protection norms, rarely stress-test multi-turn adversarial tactics such as jailbreaks or prompt injection, inconsistently ground answers in long filings, ignore tool- or RAG-induced over-reach risks, and rely on opaque or non-auditable evaluation protocols. To close these gaps, we introduce CNFinBench, a benchmark that employs finance-tailored red-team dialogues and is structured around a Capability-Compliance-Safety triad, including evidence-grounded reasoning over long reports and jurisdiction-aware rule/tax compliance tasks. For systematic safety quantification, we introduce the Harmful Instruction Compliance Score (HICS) to measure how consistently models resist harmful prompts across multi-turn adversarial dialogues. To ensure auditability, CNFinBench enforces strict output formats with dynamic option perturbation for objective tasks and employs a hybrid LLM-ensemble plus human-calibrated judge for open-ended evaluations. 
Experiments on 21 models across 15 subtasks confirm a persistent capability-compliance gap: models achieve an average score of 61.0 on capability tasks but fall to 34.18 on compliance and risk-control evaluations. Under multi-turn adversarial dialogue tests, most systems reach only partial resistance (HICS 60-79), demonstrating that refusal alone is not a reliable proxy for safety without cited and verifiable reasoning. + oai:arXiv.org:2512.09506v1 cs.CE Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Jinru Ding, Chao Ding, Wenrao Pang, Boyi Xiao, Zhiqiang Liu, Pengcheng Chen, Jiayuan Chen, Tiantian Yuan, Junming Guan, Yidong Jiang, Dawei Cheng, Jie Xu - Mediating Personal Relationships with Robotic Pets for Fostering Human-Human Interaction of Older Adults - https://arxiv.org/abs/2512.08426 - arXiv:2512.08426v1 Announce Type: new -Abstract: Good human relationships are important for us to have a happy life and maintain our well-being. Otherwise, we will be at risk of experiencing loneliness or depression. In human-computer interaction (HCI) and computer-supported cooperative work (CSCW), robotic systems offer nuanced approaches to foster human connection, providing interaction beyond the traditional mediums that smartphones and computers offer. However, many existing studies primarily focus on the human-robot relationships that older adults form directly with robotic pets rather than exploring how these robotic pets can enhance human-human relationships. Our ethnographic study investigates how robotic pets can be designed to facilitate human relationships. 
Through semi-structured interviews with six older adults and thematic analysis, our empirical findings provide insights into how robotic pets can be designed as telerobots to connect with others remotely, thus contributing to advancing future development of robotic systems for mental health. - oai:arXiv.org:2512.08426v1 - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Delong Du, Sara Gilda Amirhajlou, Akwasi Gyabaah, Richard Paluch, Claudia M\"uller + Two-Variable Logic for Hierarchically Partitioned and Ordered Data + https://arxiv.org/abs/2512.09508 + arXiv:2512.09508v1 Announce Type: new +Abstract: We study Two-Variable First-Order Logic, FO2, under semantic constraints that model hierarchically structured data. Our first logic extends FO2 with a linear order < and a chain of increasingly coarser equivalence relations E_1, E_2, ... . We show that its finite satisfiability problem is NExpTime-complete. We also demonstrate that a weaker variant of this logic without the linear order enjoys the exponential model property. Our second logic extends FO2 with a chain of nested total preorders. We prove that its finite satisfiability problem is also NExpTime-complete. However, we show that the complexity increases to ExpSpace-complete once access to the successor relations of the preorders is allowed. Our last result is the undecidability of FO2 with two independent chains of nested equivalence relations. + oai:arXiv.org:2512.09508v1 cs.LO Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Oskar Fiuk, Emanuel Kieronski, Vincent Michielini - SDT-6D: Fully Sparse Depth-Transformer for Staged End-to-End 6D Pose Estimation in Industrial Multi-View Bin Picking - https://arxiv.org/abs/2512.08430 - arXiv:2512.08430v1 Announce Type: new -Abstract: Accurately recovering 6D poses in densely packed industrial bin-picking environments remains a serious challenge, owing to occlusions, reflections, and textureless parts. 
We introduce a holistic depth-only 6D pose estimation approach that fuses multi-view depth maps into either a fine-grained 3D point cloud in its vanilla version, or a sparse Truncated Signed Distance Field (TSDF). At the core of our framework lies a staged heatmap mechanism that yields scene-adaptive attention priors across different resolutions, steering computation toward foreground regions, thus keeping memory requirements at high resolutions feasible. Alongside, we propose a density-aware sparse transformer block that dynamically attends to (self-) occlusions and the non-uniform distribution of 3D data. While sparse 3D approaches have proven effective for long-range perception, their potential in close-range robotic applications remains underexplored. Our framework operates fully sparse, enabling high-resolution volumetric representations to capture fine geometric details crucial for accurate pose estimation in clutter. Our method processes the entire scene integrally, predicting the 6D pose via a novel per-voxel voting strategy, allowing simultaneous pose predictions for an arbitrary number of target objects. We validate our method on the recently published IPD and MV-YCB multi-view datasets, demonstrating competitive performance in heavily cluttered industrial and household bin picking scenarios. - oai:arXiv.org:2512.08430v1 cs.CV cs.RO Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Nico Leuze, Maximilian Hoh, Samed Do\u{g}an, Nicolas R. -Pe\~na, Alfred Schoettl + ViTA-Seg: Vision Transformer for Amodal Segmentation in Robotics + https://arxiv.org/abs/2512.09510 + arXiv:2512.09510v1 Announce Type: new +Abstract: Occlusions in robotic bin picking compromise accurate and reliable grasp planning. We present ViTA-Seg, a class-agnostic Vision Transformer framework for real-time amodal segmentation that leverages global attention to recover complete object masks, including hidden regions. We propose two architectures: a) Single-Head for amodal mask prediction; b) Dual-Head for amodal and occluded mask prediction. 
We also introduce ViTA-SimData, a photo-realistic synthetic dataset tailored to industrial bin-picking scenarios. Extensive experiments on two amodal benchmarks, COCOA and KINS, demonstrate that ViTA-Seg Dual Head achieves strong amodal and occlusion segmentation accuracy with computational efficiency, enabling robust, real-time robotic manipulation. + oai:arXiv.org:2512.09510v1 cs.RO cs.CV Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Donato Caramia, Florian T. Pokorny, Giuseppe Triggiani, Denis Ruffino, David Naso, Paolo Roberto Massenio + Exploring Community-Powered Conversational Agent for Health Knowledge Acquisition: A Case Study in Colorectal Cancer + https://arxiv.org/abs/2512.09511 + arXiv:2512.09511v1 Announce Type: new +Abstract: Online communities have become key platforms where young adults actively seek and share information, including health knowledge. However, these users often face challenges when browsing these communities, such as fragmented content, varying information quality and unfamiliar terminology. Based on a survey with 56 participants and follow-up interviews, we identify common challenges and expected features for learning health knowledge. In this paper, we develop a computational workflow that integrates community content into a conversational agent named CanAnswer to facilitate health knowledge acquisition. Using colorectal cancer as a case study, we evaluate CanAnswer through a lab study with 24 participants and interviews with six medical experts. Results show that CanAnswer improves recall of the gained knowledge and reduces the task workload of the learning session. Our expert interviews (N=6) further confirm the reliability and usefulness of CanAnswer. We discuss the generality of CanAnswer and provide design considerations for enhancing the usefulness and credibility of community-powered learning tools. 
+ oai:arXiv.org:2512.09511v1 + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Nico Leuze, Maximilian Hoh, Samed Do\u{g}an, Nicolas R. -Pe\~na, Alfred Schoettl + Yiwei Yuan, Zhiqing Wang, Xiucheng Zhang, Yichao Luo, Shuya Lin, Yang Bai, Zhenhui Peng - Beyond Wave Variables: A Data-Driven Ensemble Approach for Enhanced Teleoperation Transparency and Stability - https://arxiv.org/abs/2512.08436 - arXiv:2512.08436v1 Announce Type: new -Abstract: Time delays in communication channels present significant challenges for bilateral teleoperation systems, affecting both transparency and stability. Although traditional wave variable-based methods for a four-channel architecture ensure stability via passivity, they remain vulnerable to wave reflections and disturbances like variable delays and environmental noise. This article presents a data-driven hybrid framework that replaces the conventional wave-variable transform with an ensemble of three advanced sequence models, each optimized separately via the state-of-the-art Optuna optimizer, and combined through a stacking meta-learner. The base predictors include an LSTM augmented with Prophet for trend correction, an LSTM-based feature extractor paired with clustering and a random forest for improved regression, and a CNN-LSTM model for localized and long-term dynamics. Experimental validation was performed in Python using data generated from the baseline system implemented in MATLAB/Simulink. The results show that our optimized ensemble achieves a transparency comparable to the baseline wave-variable system under varying delays and noise, while ensuring stability through passivity constraints. 
- oai:arXiv.org:2512.08436v1 - eess.SY + Contextual Dynamic Pricing with Heterogeneous Buyers + https://arxiv.org/abs/2512.09513 + arXiv:2512.09513v1 Announce Type: new +Abstract: We initiate the study of contextual dynamic pricing with a heterogeneous population of buyers, where a seller repeatedly posts prices (over $T$ rounds) that depend on the observable $d$-dimensional context and receives binary purchase feedback. Unlike prior work assuming homogeneous buyer types, in our setting the buyer's valuation type is drawn from an unknown distribution with finite support size $K_{\star}$. We develop a contextual pricing algorithm based on optimistic posterior sampling with regret $\widetilde{O}(K_{\star}\sqrt{dT})$, which we prove to be tight in $d$ and $T$ up to logarithmic terms. Finally, we refine our analysis for the non-contextual pricing case, proposing a variance-aware zooming algorithm that achieves the optimal dependence on $K_{\star}$. + oai:arXiv.org:2512.09513v1 cs.LG - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Nour Mitiche, Farid Ferguene, Mourad Oussalah + Thodoris Lykouris, Sloan Nietert, Princewill Okoroafor, Chara Podimata, Julian Zimmert - Time and Money Matters for Sustainability: Insights on User Preferences on Renewable Energy for Electric Vehicle Charging Stations - https://arxiv.org/abs/2512.08437 - arXiv:2512.08437v1 Announce Type: new -Abstract: Charging electric vehicles (EVs) with renewable energy can lessen their environmental impact. However, the fluctuating availability of renewable energy affects the sustainability of public EV charging stations. Nearby public charging stations may utilize differing energy sources due to their microgrid connections - ranging from exclusively renewable to non-renewable or a combination of both - highlighting the substantial variability in energy supply types within short distances. 
This study investigates the near-future scenario of integrating dynamic renewable energy availability in charging station navigation to impact the choices of EV users towards renewable sources. We conducted a within-subjects design survey with 50 car users and semi-structured interviews with 10 EV users from rural, suburban, and urban areas. The results show that when choosing EV charging stations, drivers often prioritize either time savings or money savings based on the driving scenarios that influence drivers' consumer value. Notably, EV users tend to select renewable-powered stations when they align with their main priority, be it saving money or time. This study offers end-user insights into the front-end graphic user interface and the development of the back-end ranking algorithm for navigation recommender systems that integrate dynamic renewable energy availability for the sustainable use of electric vehicles. - oai:arXiv.org:2512.08437v1 - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + QuanvNeXt: An end-to-end quanvolutional neural network for EEG-based detection of major depressive disorder + https://arxiv.org/abs/2512.09517 + arXiv:2512.09517v1 Announce Type: new +Abstract: This study presents QuanvNeXt, an end-to-end fully quanvolutional model for EEG-based depression diagnosis. QuanvNeXt incorporates a novel Cross Residual block, which reduces feature homogeneity and strengthens cross-feature relationships while retaining parameter efficiency. We evaluated QuanvNeXt on two open-source datasets, where it achieved an average accuracy of 93.1% and an average AUC-ROC of 97.2%, outperforming state-of-the-art baselines such as InceptionTime (91.7% accuracy, 95.9% AUC-ROC). An uncertainty analysis across Gaussian noise levels demonstrated well-calibrated predictions, with ECE scores remaining low (0.0436, Dataset 1) to moderate (0.1159, Dataset 2) even at the highest perturbation ({\epsilon} = 0.1). 
Additionally, a post-hoc explainable AI analysis confirmed that QuanvNeXt effectively identifies and learns spectrotemporal patterns that distinguish between healthy controls and major depressive disorder. Overall, QuanvNeXt establishes an efficient and reliable approach for EEG-based depression diagnosis. + oai:arXiv.org:2512.09517v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Delong Du, Apostolos Vavouris, Omid Veisi, Lu Jin, Gunnar Stevens, Lina Stankovic, Vladimir Stankovic, Alexander Boden + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Nabil Anan Orka, Ehtashamul Haque, Maftahul Jannat, Md Abdul Awal, Mohammad Ali Moni - LapFM: A Laparoscopic Segmentation Foundation Model via Hierarchical Concept Evolving Pre-training - https://arxiv.org/abs/2512.08439 - arXiv:2512.08439v1 Announce Type: new -Abstract: Surgical segmentation is pivotal for scene understanding yet remains hindered by annotation scarcity and semantic inconsistency across diverse procedures. Existing approaches typically fine-tune natural foundation models (e.g., SAM) with limited supervision, functioning merely as domain adapters rather than surgical foundation models. Consequently, they struggle to generalize across the vast variability of surgical targets. To bridge this gap, we present LapFM, a foundation model designed to evolve robust segmentation capabilities from massive unlabeled surgical images. Distinct from medical foundation models relying on inefficient self-supervised proxy tasks, LapFM leverages a Hierarchical Concept Evolving Pre-training paradigm. First, we establish a Laparoscopic Concept Hierarchy (LCH) via a hierarchical mask decoder with parent-child query embeddings, unifying diverse entities (i.e., Anatomy, Tissue, and Instrument) into a scalable knowledge structure with cross-granularity semantic consistency. 
Second, we propose a Confidence-driven Evolving Labeling that iteratively generates and filters pseudo-labels based on hierarchical consistency, progressively incorporating reliable samples from unlabeled images into training. This process yields LapBench-114K, a large-scale benchmark comprising 114K image-mask pairs. Extensive experiments demonstrate that LapFM significantly outperforms state-of-the-art methods, establishing new standards for granularity-adaptive generalization in universal laparoscopic segmentation. The source code is available at https://github.com/xq141839/LapFM. - oai:arXiv.org:2512.08439v1 cs.CV Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Qing Xu, Kun Yuan, Yuxiang Luo, Yuhao Zhai, Wenting Duan, Nassir Navab, Zhen Chen + Masked Registration and Autoencoding of CT Images for Predictive Tibia Reconstruction + https://arxiv.org/abs/2512.09525 + arXiv:2512.09525v1 Announce Type: new +Abstract: Surgical planning for complex tibial fractures can be challenging for surgeons, as the 3D structure of the later desirable bone alignment may be difficult to imagine. To assist in such planning, we address the challenge of predicting a patient-specific reconstruction target from a CT of the fractured tibia. Our approach combines neural registration and autoencoder models. Specifically, we first train a modified spatial transformer network (STN) to register a raw CT to a standardized coordinate system of a jointly trained tibia prototype. Subsequently, various autoencoder (AE) architectures are trained to model healthy tibial variations. Both the STN and AE models are further designed to be robust to masked input, allowing us to apply them to fractured CTs and decode to a prediction of the patient-specific healthy bone in standard coordinates. Our contributions include: i) a 3D-adapted STN for global spatial registration, ii) a comparative analysis of AEs for bone CT modeling, and iii) the extension of both to handle masked inputs for predictive generation of healthy bone structures. 
Project page: https://github.com/HongyouZhou/repair + oai:arXiv.org:2512.09525v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hongyou Zhou, Cederic A{\ss}mann, Alaa Bejaoui, Heiko Tzsch\"atzsch, Mark Heyland, Julian Zierke, Niklas Tuttle, Sebastian H\"olzl, Timo Auer, David A. Back, Marc Toussaint + + + Latent-Autoregressive GP-VAE Language Model + https://arxiv.org/abs/2512.09535 + arXiv:2512.09535v1 Announce Type: new +Abstract: We investigate a fully Latent AutoRegressive scheme based on a Gaussian Process (GP) integrated into a Variational Autoencoder (VAE). In this setting, sequential dynamics are transferred from the observation space to a continuous latent space, while linguistic generation remains parallel through a non-autoregressive decoder. We present a complete methodological formulation, including a causal GP prior, a structured amortized posterior, and a training protocol based on a regularized ELBO. Empirical evaluation, conducted within a deliberately constrained proof-of-concept (POC) framework, shows that the model can be trained stably and that the sequential and parallel sampling variants exhibit consistent behavior. Overall, the results suggest that part of the temporal structure in a language model can be supported by the probabilistic geometry of the latent space rather than by explicit neural operations. + oai:arXiv.org:2512.09535v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Qing Xu, Kun Yuan, Yuxiang Luo, Yuhao Zhai, Wenting Duan, Nassir Navab, Zhen Chen + Yves Ruffenach - What Triggers my Model? 
Contrastive Explanations Inform Gender Choices by Translation Models - https://arxiv.org/abs/2512.08440 - arXiv:2512.08440v1 Announce Type: new -Abstract: Interpretability can be implemented as a means to understand decisions taken by (black box) models, such as machine translation (MT) or large language models (LLMs). Yet, research in this area has been limited in relation to a manifested problem in these models: gender bias. With this research, we aim to move away from simply measuring bias to exploring its origins. Working with gender-ambiguous natural source data, this study examines which context, in the form of input tokens in the source sentence, influences (or triggers) the translation model choice of a certain gender inflection in the target language. To analyse this, we use contrastive explanations and compute saliency attribution. We first address the challenge of a lacking scoring threshold and specifically examine different attribution levels of source words on the model gender decisions in the translation. We compare salient source words with human perceptions of gender and demonstrate a noticeable overlap between human perceptions and model attribution. Additionally, we provide a linguistic analysis of salient words. Our work showcases the relevance of understanding model translation decisions in terms of gender, how this compares to human decisions and that this information should be leveraged to mitigate gender bias. - oai:arXiv.org:2512.08440v1 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + REASAN: Learning Reactive Safe Navigation for Legged Robots + https://arxiv.org/abs/2512.09537 + arXiv:2512.09537v1 Announce Type: new +Abstract: We present a novel modularized end-to-end framework for legged reactive navigation in complex dynamic environments using a single light detection and ranging (LiDAR) sensor. 
The system comprises four simulation-trained modules: three reinforcement-learning (RL) policies for locomotion, safety shielding, and navigation, and a transformer-based exteroceptive estimator that processes raw point-cloud inputs. This modular decomposition of complex legged motor-control tasks enables lightweight neural networks with simple architectures, trained using standard RL practices with targeted reward shaping and curriculum design, without reliance on heuristics or sophisticated policy-switching mechanisms. We conduct comprehensive ablations to validate our design choices and demonstrate improved robustness compared to existing approaches in challenging navigation tasks. The resulting reactive safe navigation (REASAN) system achieves fully onboard and real-time reactive navigation across both single- and multi-robot settings in complex environments. We release our training and deployment code at https://github.com/ASIG-X/REASAN. + oai:arXiv.org:2512.09537v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Jani\c{c}a Hackenbuchner, Arda Tezcan, Joke Daems + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Qihao Yuan, Ziyu Cao, Ming Cao, Kailai Li - Leveraging Multispectral Sensors for Color Correction in Mobile Cameras - https://arxiv.org/abs/2512.08441 - arXiv:2512.08441v1 Announce Type: new -Abstract: Recent advances in snapshot multispectral (MS) imaging have enabled compact, low-cost spectral sensors for consumer and mobile devices. By capturing richer spectral information than conventional RGB sensors, these systems can enhance key imaging tasks, including color correction. However, most existing methods treat the color correction pipeline in separate stages, often discarding MS data early in the process. 
We propose a unified, learning-based framework that (i) performs end-to-end color correction and (ii) jointly leverages data from a high-resolution RGB sensor and an auxiliary low-resolution MS sensor. Our approach integrates the full pipeline within a single model, producing coherent and color-accurate outputs. We demonstrate the flexibility and generality of our framework by refactoring two different state-of-the-art image-to-image architectures. To support training and evaluation, we construct a dedicated dataset by aggregating and repurposing publicly available spectral datasets, rendering under multiple RGB camera sensitivities. Extensive experiments show that our approach improves color accuracy and stability, reducing error by up to 50% compared to RGB-only and MS-driven baselines. Datasets, code, and models will be made available upon acceptance. - oai:arXiv.org:2512.08441v1 + Comparative Analysis of Hash-based Malware Clustering via K-Means + https://arxiv.org/abs/2512.09539 + arXiv:2512.09539v1 Announce Type: new +Abstract: With the adoption of multiple digital devices in everyday life, the cyber-attack surface has increased. Adversaries are continuously exploring new avenues to exploit them and deploy malware. On the other hand, detection approaches typically employ hashing-based algorithms such as SSDeep, TLSH, and IMPHash to capture structural and behavioural similarities among binaries. This work focuses on the analysis and evaluation of these techniques for clustering malware samples using the K-means algorithm. More specifically, we experimented with established malware families and traits and found that TLSH and IMPHash produce more distinct, semantically meaningful clusters, whereas SSDeep is more efficient for broader classification tasks. The findings of this work can guide the development of more robust threat-detection mechanisms and adaptive security mechanisms. 
+ oai:arXiv.org:2512.09539v1 + cs.CR + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Aink Acrie Soe Thein, Nikolaos Pitropakis, Pavlos Papadopoulos, Sam Grierson, Sana Ullah Jan + + + SWEnergy: An Empirical Study on Energy Efficiency in Agentic Issue Resolution Frameworks with SLMs + https://arxiv.org/abs/2512.09543 + arXiv:2512.09543v1 Announce Type: new +Abstract: Context. LLM-based autonomous agents in software engineering rely on large, proprietary models, limiting local deployment. This has spurred interest in Small Language Models (SLMs), but their practical effectiveness and efficiency within complex agentic frameworks for automated issue resolution remain poorly understood. + Goal. We investigate the performance, energy efficiency, and resource consumption of four leading agentic issue resolution frameworks when deliberately constrained to using SLMs. We aim to assess the viability of these systems for this task in resource-limited settings and characterize the resulting trade-offs. + Method. We conduct a controlled evaluation of four leading agentic frameworks (SWE-Agent, OpenHands, Mini SWE Agent, AutoCodeRover) using two SLMs (Gemma-3 4B, Qwen-3 1.7B) on the SWE-bench Verified Mini benchmark. On fixed hardware, we measure energy, duration, token usage, and memory over 150 runs per configuration. + Results. We find that framework architecture is the primary driver of energy consumption. The most energy-intensive framework, AutoCodeRover (Gemma), consumed 9.4x more energy on average than the least energy-intensive, OpenHands (Gemma). However, this energy is largely wasted. Task resolution rates were near-zero, demonstrating that current frameworks, when paired with SLMs, consume significant energy on unproductive reasoning loops. The SLM's limited reasoning was the bottleneck for success, but the framework's design was the bottleneck for efficiency. + Conclusions. 
Current agentic frameworks, designed for powerful LLMs, fail to operate efficiently with SLMs. We find that framework architecture is the primary driver of energy consumption, but this energy is largely wasted due to the SLMs' limited reasoning. Viable low-energy solutions require shifting from passive orchestration to architectures that actively manage SLM weaknesses. + oai:arXiv.org:2512.09543v1 + cs.SE + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Arihant Tripathy, Ch Pavan Harshit, Karthik Vaidhyanathan + + + A Dual-Domain Convolutional Network for Hyperspectral Single-Image Super-Resolution + https://arxiv.org/abs/2512.09546 + arXiv:2512.09546v1 Announce Type: new +Abstract: This study presents a lightweight dual-domain super-resolution network (DDSRNet) that combines Spatial-Net with the discrete wavelet transform (DWT). Specifically, our proposed model comprises three main components: (1) a shallow feature extraction module, termed Spatial-Net, which performs residual learning and bilinear interpolation; (2) a low-frequency enhancement branch based on the DWT that refines coarse image structures; and (3) a shared high-frequency refinement branch that simultaneously enhances the LH (horizontal), HL (vertical), and HH (diagonal) wavelet subbands using a single CNN with shared weights. As a result, the DWT enables subband decomposition, while the inverse DWT reconstructs the final high-resolution output. By doing so, the integration of spatial- and frequency-domain learning enables DDSRNet to achieve highly competitive performance with low computational cost on three hyperspectral image datasets, demonstrating its effectiveness for hyperspectral image super-resolution. 
+ oai:arXiv.org:2512.09546v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Luca Cogo, Marco Buzzelli, Simone Bianco, Javier Vazquez-Corral, Raimondo Schettini + Murat Karayaka, Usman Muhammad, Jorma Laaksonen, Md Ziaul Hoque, Tapio Sepp\"anen - Fully Decentralized Certified Unlearning - https://arxiv.org/abs/2512.08443 - arXiv:2512.08443v1 Announce Type: new -Abstract: Machine unlearning (MU) seeks to remove the influence of specified data from a trained model in response to privacy requests or data poisoning. While certified unlearning has been analyzed in centralized and server-orchestrated federated settings (via guarantees analogous to differential privacy, DP), the decentralized setting -- where peers communicate without a coordinator remains underexplored. We study certified unlearning in decentralized networks with fixed topologies and propose RR-DU, a random-walk procedure that performs one projected gradient ascent step on the forget set at the unlearning client and a geometrically distributed number of projected descent steps on the retained data elsewhere, combined with subsampled Gaussian noise and projection onto a trust region around the original model. We provide (i) convergence guarantees in the convex case and stationarity guarantees in the nonconvex case, (ii) $(\varepsilon,\delta)$ network-unlearning certificates on client views via subsampled Gaussian $R\'enyi$ DP (RDP) with segment-level subsampling, and (iii) deletion-capacity bounds that scale with the forget-to-local data ratio and quantify the effect of decentralization (network mixing and randomized subsampling) on the privacy--utility trade-off. Empirically, on image benchmarks (MNIST, CIFAR-10), RR-DU matches a given $(\varepsilon,\delta)$ while achieving higher test accuracy than decentralized DP baselines and reducing forget accuracy to random guessing ($\approx 10\%$). 
- oai:arXiv.org:2512.08443v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Supporting Dynamic Agentic Workloads: How Data and Agents Interact + https://arxiv.org/abs/2512.09548 + arXiv:2512.09548v1 Announce Type: new +Abstract: The rise of multi-agent systems powered by large language models (LLMs) and specialized reasoning agents exposes fundamental limitations in today's data management architectures. Traditional databases and data fabrics were designed for static, well-defined workloads, whereas agentic systems exhibit dynamic, context-driven, and collaborative behaviors. Agents continuously decompose tasks, shift attention across modalities, and share intermediate results with peers - producing non-deterministic, multi-modal workloads that strain conventional query optimizers and caching mechanisms. We propose an Agent-Centric Data Fabric, a unified architecture that rethinks how data systems serve, optimize, coordinate, and learn from agentic workloads. To achieve this we exploit the concepts of attention-guided data retrieval, semantic micro-caching for context-driven agent federations, predictive data prefetching and quorum-based data serving. Together, these mechanisms enable agents to access representative data faster and more efficiently, while reducing redundant queries, data movement, and inference load across systems. By framing data systems as adaptive collaborators, instead of static executors, we outline new research directions toward behaviorally responsive data infrastructures, where caching, probing, and orchestration jointly enable efficient, context-rich data exchange among dynamic, reasoning-driven agents. + oai:arXiv.org:2512.09548v1 + cs.MA + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Hithem Lamri, Michail Maniatakos + Ioana Giurgiu, Michael E. 
Nidd - Uncertainty-Aware Subset Selection for Robust Visual Explainability under Distribution Shifts - https://arxiv.org/abs/2512.08445 - arXiv:2512.08445v1 Announce Type: new -Abstract: Subset selection-based methods are widely used to explain deep vision models: they attribute predictions by highlighting the most influential image regions and support object-level explanations. While these methods perform well in in-distribution (ID) settings, their behavior under out-of-distribution (OOD) conditions remains poorly understood. Through extensive experiments across multiple ID-OOD sets, we find that reliability of the existing subset based methods degrades markedly, yielding redundant, unstable, and uncertainty-sensitive explanations. To address these shortcomings, we introduce a framework that combines submodular subset selection with layer-wise, gradient-based uncertainty estimation to improve robustness and fidelity without requiring additional training or auxiliary models. Our approach estimates uncertainty via adaptive weight perturbations and uses these estimates to guide submodular optimization, ensuring diverse and informative subset selection. Empirical evaluations show that, beyond mitigating the weaknesses of existing methods under OOD scenarios, our framework also yields improvements in ID settings. These findings highlight limitations of current subset-based approaches and demonstrate how uncertainty-driven optimization can enhance attribution and object-level interpretability, paving the way for more transparent and trustworthy AI in real-world vision applications. - oai:arXiv.org:2512.08445v1 - cs.CV - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Chasing Shadows: Pitfalls in LLM Security Research + https://arxiv.org/abs/2512.09549 + arXiv:2512.09549v1 Announce Type: new +Abstract: Large language models (LLMs) are increasingly prevalent in security research. 
Their unique characteristics, however, introduce challenges that undermine established paradigms of reproducibility, rigor, and evaluation. Prior work has identified common pitfalls in traditional machine learning research, but these studies predate the advent of LLMs. In this paper, we identify \emph{nine} common pitfalls that have become (more) relevant with the emergence of LLMs and that can compromise the validity of research involving them. These pitfalls span the entire computation process, from data collection, pre-training, and fine-tuning to prompting and evaluation. + We assess the prevalence of these pitfalls across all 72 peer-reviewed papers published at leading Security and Software Engineering venues between 2023 and 2024. We find that every paper contains at least one pitfall, and each pitfall appears in multiple papers. Yet only 15.7\% of the present pitfalls were explicitly discussed, suggesting that the majority remain unrecognized. To understand their practical impact, we conduct four empirical case studies showing how individual pitfalls can mislead evaluation, inflate performance, or impair reproducibility. Based on our findings, we offer actionable guidelines to support the community in future work. 
+ oai:arXiv.org:2512.09549v1 + cs.CR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Madhav Gupta, Vishak Prasad C, Ganesh Ramakrishnan + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.14722/ndss.2026.241749 + Jonathan Evertz, Niklas Risse, Nicolai Neuer, Andreas M\"uller, Philipp Normann, Gaetano Sapia, Srishti Gupta, David Pape, Soumya Shaw, Devansh Srivastav, Christian Wressnegger, Erwin Quiring, Thorsten Eisenhofer, Daniel Arp, Lea Sch\"onherr - From Accuracy to Impact: The Impact-Driven AI Framework (IDAIF) for Aligning Engineering Architecture with Theory of Change - https://arxiv.org/abs/2512.08449 - arXiv:2512.08449v1 Announce Type: new -Abstract: This paper introduces the Impact-Driven AI Framework (IDAIF), a novel architectural methodology that integrates Theory of Change (ToC) principles with modern artificial intelligence system design. As AI systems increasingly influence high-stakes domains including healthcare, finance, and public policy, the alignment problem--ensuring AI behavior corresponds with human values and intentions--has become critical. Current approaches predominantly optimize technical performance metrics while neglecting the sociotechnical dimensions of AI deployment. IDAIF addresses this gap by establishing a systematic mapping between ToC's five-stage model (Inputs-Activities-Outputs-Outcomes-Impact) and corresponding AI architectural layers (Data Layer-Pipeline Layer-Inference Layer-Agentic Layer-Normative Layer). Each layer incorporates rigorous theoretical foundations: multi-objective Pareto optimization for value alignment, hierarchical multi-agent orchestration for outcome achievement, causal directed acyclic graphs (DAGs) for hallucination mitigation, and adversarial debiasing with Reinforcement Learning from Human Feedback (RLHF) for fairness assurance. 
We provide formal mathematical formulations for each component and introduce an Assurance Layer that manages assumption failures through guardian architectures. Three case studies demonstrate IDAIF application across healthcare, cybersecurity, and software engineering domains. This framework represents a paradigm shift from model-centric to impact-centric AI development, providing engineers with concrete architectural patterns for building ethical, trustworthy, and socially beneficial AI systems. - oai:arXiv.org:2512.08449v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Systematic Framework of Application Methods for Large Language Models in Language Sciences + https://arxiv.org/abs/2512.09552 + arXiv:2512.09552v1 Announce Type: new +Abstract: Large Language Models (LLMs) are transforming language sciences. However, their widespread deployment currently suffers from methodological fragmentation and a lack of systematic soundness. This study proposes two comprehensive methodological frameworks designed to guide the strategic and responsible application of LLMs in language sciences. The first method-selection framework defines and systematizes three distinct, complementary approaches, each linked to a specific research goal: (1) prompt-based interaction with general-use models for exploratory analysis and hypothesis generation; (2) fine-tuning of open-source models for confirmatory, theory-driven investigation and high-quality data generation; and (3) extraction of contextualized embeddings for further quantitative analysis and probing of model internal mechanisms. We detail the technical implementation and inherent trade-offs of each method, supported by empirical case studies. Based on the method-selection framework, the second systematic framework proposed provides constructed configurations that guide the practical implementation of multi-stage research pipelines based on these approaches. 
We then conducted a series of empirical experiments to validate our proposed framework, employing retrospective analysis, prospective application, and an expert evaluation survey. By enforcing the strategic alignment of research questions with the appropriate LLM methodology, the frameworks enable a critical paradigm shift in language science research. We believe that this system is fundamental for ensuring reproducibility, facilitating the critical evaluation of LLM mechanisms, and providing the structure necessary to move traditional linguistics from ad-hoc utility to verifiable, robust science. + oai:arXiv.org:2512.09552v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Yong-Woon Kim + Kun Sun, Rong Wang - Connectivity-Preserving Cortical Surface Tetrahedralization - https://arxiv.org/abs/2512.08450 - arXiv:2512.08450v1 Announce Type: new -Abstract: A prerequisite for many biomechanical simulation techniques is discretizing a bounded volume into a tetrahedral mesh. In certain contexts, such as cortical surface simulations, preserving input surface connectivity is critical. However, automated surface extraction often yields meshes containing self-intersections, small holes, and faulty geometry, which prevents existing constrained and unconstrained meshers from preserving this connectivity. We address this issue by developing a novel tetrahedralization method that maintains input surface connectivity in the presence of such defects. We also present a metric to quantify the preservation of surface connectivity and demonstrate that our method correctly maintains connectivity compared to existing solutions. 
- oai:arXiv.org:2512.08450v1 - cs.CG - Wed, 10 Dec 2025 00:00:00 -0500 + Building Reasonable Inference for Vision-Language Models in Blind Image Quality Assessment + https://arxiv.org/abs/2512.09555 + arXiv:2512.09555v1 Announce Type: new +Abstract: Recent progress in BIQA has been driven by VLMs, whose semantic reasoning abilities suggest that they might extract visual features, generate descriptive text, and infer quality in a human-like manner. However, these models often produce textual descriptions that contradict their final quality predictions, and the predicted scores can change unstably during inference - behaviors not aligned with human reasoning. To understand these issues, we analyze the factors that cause contradictory assessments and instability. We first estimate the relationship between the final quality predictions and the generated visual features, finding that the predictions are not fully grounded in the features and that the logical connection between them is weak. Moreover, decoding intermediate VLM layers shows that the model frequently relies on a limited set of candidate tokens, which contributes to prediction instability. To encourage more human-like reasoning, we introduce a two-stage tuning method that explicitly separates visual perception from quality inference. In the first stage, the model learns visual features; in the second, it infers quality solely from these features. Experiments on SPAQ and KONIQ demonstrate that our approach reduces prediction instability from 22.00% to 12.39% and achieves average gains of 0.3124/0.3507 in SRCC/PLCC across LIVE, CSIQ, SPAQ, and KONIQ compared to the baseline. Further analyses show that our method improves both stability and the reliability of the inference process. 
+ oai:arXiv.org:2512.09555v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Besm Osman, Ruben Vink, Andrei Jalba, Maxime Chamberland + 10.1007/978-981-95-4378-6_20 + Building Reasonable Inference for Vision-Language Models in Blind Image Quality Assessment. In: Taniguchi, T., et al. Neural Information Processing. ICONIP 2025. Lecture Notes in Computer Science, vol 16310. Springer, Singapore + Yuan Li, Zitang Sun, Yen-ju Chen, Shin'ya Nishida - Biothreat Benchmark Generation Framework for Evaluating Frontier AI Models II: Benchmark Generation Process - https://arxiv.org/abs/2512.08451 - arXiv:2512.08451v1 Announce Type: new -Abstract: The potential for rapidly-evolving frontier artificial intelligence (AI) models, especially large language models (LLMs), to facilitate bioterrorism or access to biological weapons has generated significant policy, academic, and public concern. Both model developers and policymakers seek to quantify and mitigate any risk, with an important element of such efforts being the development of model benchmarks that can assess the biosecurity risk posed by a particular model. This paper, the second in a series of three, describes the second component of a novel Biothreat Benchmark Generation (BBG) framework: the generation of the Bacterial Biothreat Benchmark (B3) dataset. The development process involved three complementary approaches: 1) web-based prompt generation, 2) red teaming, and 3) mining existing benchmark corpora, to generate over 7,000 potential benchmarks linked to the Task-Query Architecture that was developed during the first component of the project. A process of de-duplication, followed by an assessment of uplift diagnosticity, and general quality control measures, reduced the candidates to a set of 1,010 final benchmarks. 
This procedure ensured that these benchmarks are a) diagnostic in terms of providing uplift; b) directly relevant to biosecurity threats; and c) aligned with a larger biosecurity architecture permitting nuanced analysis at different levels of analysis. - oai:arXiv.org:2512.08451v1 - cs.LG - cs.AI - cs.CY - cs.ET - Wed, 10 Dec 2025 00:00:00 -0500 + Explainable Verification of Hierarchical Workflows Mined from Event Logs with Shapley Values + https://arxiv.org/abs/2512.09562 + arXiv:2512.09562v1 Announce Type: new +Abstract: Workflow mining discovers hierarchical process trees from event logs, but it remains unclear why such models satisfy or violate logical properties, or how individual elements contribute to overall behavior. We propose to translate mined workflows into logical specifications and analyze properties such as satisfiability, liveness, and safety with automated theorem provers. On this basis, we adapt Shapley values from cooperative game theory to attribute outcomes to workflow elements and quantify their contributions. Experiments on benchmark datasets show that this combination identifies critical nodes, reveals redundancies, and exposes harmful structures. This outlines a novel direction for explainable workflow analysis with direct relevance to software engineering practice, supporting compliance checks, process optimization, redundancy reduction, and the design of next-generation process mining tools. 
+ oai:arXiv.org:2512.09562v1 + cs.SE + cs.IT + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Gary Ackerman, Zachary Kallenborn, Anna Wetzel, Hayley Peterson, Jenna LaTourette, Olivia Shoemaker, Brandon Behlendorf, Sheriff Almakki, Doug Clifford, Noah Sheinbaum + http://creativecommons.org/licenses/by/4.0/ + Radoslaw Klimek, Jakub Blazowski - MPC for tracking for anesthesia dynamics - https://arxiv.org/abs/2512.08452 - arXiv:2512.08452v1 Announce Type: new -Abstract: In this paper, an MPC for tracking formulation is proposed for the control of anesthesia dynamics. It seamlessly enables the optimization of the steady-states pair that is not unique due to the MISO nature of the model. Anesthesia dynamics is a multi-time scale system with two types of states characterized, respectively, by fast and slow dynamics. In anesthesia control, the output equation depends only on the fast dynamics. Therefore, the slow states can be treated as disturbances, and compensation terms can be introduced. Subsequently, the system can be reformulated as a nominal one allowing the design of an MPC for tracking strategy. The presented framework ensures recursive feasibility and asymptotic stability, through the design of appropriate terminal ingredients in the MPC for tracking framework. The controller performance is then assessed on a patient in a simulation environment. - oai:arXiv.org:2512.08452v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + System Report for CCL25-Eval Task 10: Prompt-Driven Large Language Model Merge for Fine-Grained Chinese Hate Speech Detection + https://arxiv.org/abs/2512.09563 + arXiv:2512.09563v1 Announce Type: new +Abstract: The proliferation of hate speech on Chinese social media poses urgent societal risks, yet traditional systems struggle to decode context-dependent rhetorical strategies and evolving slang. 
To bridge this gap, we propose a novel three-stage LLM-based framework: Prompt Engineering, Supervised Fine-tuning, and LLM Merging. First, context-aware prompts are designed to guide LLMs in extracting implicit hate patterns. Next, task-specific features are integrated during supervised fine-tuning to enhance domain adaptation. Finally, merging fine-tuned LLMs improves robustness against out-of-distribution cases. Evaluations on the STATE-ToxiCN benchmark validate the framework's effectiveness, demonstrating superior performance over baseline methods in detecting fine-grained hate speech. + oai:arXiv.org:2512.09563v1 + cs.CL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Maxim Raymond, Kaouther Moussa, Mirko Fiacchini, Jimmy Lauber + Binglin Wu, Jiaxiu Zou, Xianneng Li - Biothreat Benchmark Generation Framework for Evaluating Frontier AI Models III: Implementing the Bacterial Biothreat Benchmark (B3) Dataset - https://arxiv.org/abs/2512.08459 - arXiv:2512.08459v1 Announce Type: new -Abstract: The potential for rapidly-evolving frontier artificial intelligence (AI) models, especially large language models (LLMs), to facilitate bioterrorism or access to biological weapons has generated significant policy, academic, and public concern. Both model developers and policymakers seek to quantify and mitigate any risk, with an important element of such efforts being the development of model benchmarks that can assess the biosecurity risk posed by a particular model. This paper discusses the pilot implementation of the Bacterial Biothreat Benchmark (B3) dataset. It is the third in a series of three papers describing an overall Biothreat Benchmark Generation (BBG) framework, with previous papers detailing the development of the B3 dataset. 
The pilot involved running the benchmarks through a sample frontier AI model, followed by human evaluation of model responses, and an applied risk analysis of the results along several dimensions. Overall, the pilot demonstrated that the B3 dataset offers a viable, nuanced method for rapidly assessing the biosecurity risk posed by an LLM, identifying the key sources of that risk and providing guidance for priority areas of mitigation. - oai:arXiv.org:2512.08459v1 - cs.LG + From Graphs to Gates: DNS-HyXNet, A Lightweight and Deployable Sequential Model for Real-Time DNS Tunnel Detection + https://arxiv.org/abs/2512.09565 + arXiv:2512.09565v1 Announce Type: new +Abstract: Domain Name System (DNS) tunneling remains a covert channel for data exfiltration and command-and-control communication. Although graph-based methods such as GraphTunnel achieve strong accuracy, they introduce significant latency and computational overhead due to recursive parsing and graph construction, limiting their suitability for real-time deployment. This work presents DNS-HyXNet, a lightweight extended Long Short-Term Memory (xLSTM) hybrid framework designed for efficient sequence-based DNS tunnel detection. DNS-HyXNet integrates tokenized domain embeddings with normalized numerical DNS features and processes them through a two-layer xLSTM network that directly learns temporal dependencies from packet sequences, eliminating the need for graph reconstruction and enabling single-stage multi-class classification. The model was trained and evaluated on two public benchmark datasets with carefully tuned hyperparameters to ensure low memory consumption and fast inference. Across all experimental splits of the DNS-Tunnel-Datasets, DNS-HyXNet achieved up to 99.99% accuracy, with macro-averaged precision, recall, and F1-scores exceeding 99.96%, and demonstrated a per-sample detection latency of just 0.041 ms, confirming its scalability and real-time readiness. 
These results show that sequential modeling with xLSTM can effectively replace computationally expensive recursive graph generation, offering a deployable and energy-efficient alternative for real-time DNS tunnel detection on commodity hardware. + oai:arXiv.org:2512.09565v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Faraz Ali, Muhammad Afaq, Mahmood Niazi, Muzammil Behzad + + + Toward Closed-loop Molecular Discovery via Language Model, Property Alignment and Strategic Search + https://arxiv.org/abs/2512.09566 + arXiv:2512.09566v1 Announce Type: new +Abstract: Drug discovery is a time-consuming and expensive process, with traditional high-throughput and docking-based virtual screening hampered by low success rates and limited scalability. Recent advances in generative modelling, including autoregressive, diffusion, and flow-based approaches, have enabled de novo ligand design beyond the limits of enumerative screening. Yet these models often suffer from inadequate generalization, limited interpretability, and an overemphasis on binding affinity at the expense of key pharmacological properties, thereby restricting their translational utility. Here we present Trio, a molecular generation framework integrating fragment-based molecular language modeling, reinforcement learning, and Monte Carlo tree search, for effective and interpretable closed-loop targeted molecular design. Through the three key components, Trio enables context-aware fragment assembly, enforces physicochemical and synthetic feasibility, and guides a balanced search between the exploration of novel chemotypes and the exploitation of promising intermediates within protein binding pockets. 
Experimental results show that Trio reliably achieves chemically valid and pharmacologically enhanced ligands, outperforming state-of-the-art approaches with improved binding affinity (+7.85%), drug-likeness (+11.10%) and synthetic accessibility (+12.05%), while expanding molecular diversity more than fourfold. + oai:arXiv.org:2512.09566v1 cs.AI - cs.CY - cs.ET - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Gary Ackerman, Theodore Wilson, Zachary Kallenborn, Olivia Shoemaker, Anna Wetzel, Hayley Peterson, Abigail Danfora, Jenna LaTourette, Brandon Behlendorf, Douglas Clifford + Junkai Ji, Zhangfan Yang, Dong Xu, Ruibin Bai, Jianqiang Li, Tingjun Hou, Zexuan Zhu - Measuring Agile Agreement: Development and Validation of the Manifesto and Principle Scales - https://arxiv.org/abs/2512.08461 - arXiv:2512.08461v1 Announce Type: new -Abstract: While the importance of human factors in agile software development is widely acknowledged, the measurement of an individual's "agile agreement" remains an ill-defined and challenging area. A key limitation in existing research is the failure to distinguish between agreement with the abstract, high-level values of the Agile Manifesto and agreement with the concrete, day-to-day practices derived from the 12 Principles. This paper addresses this methodological gap by presenting the design and validation of two distinct instruments: the novel Manifesto Agreement Scale (MAS), and the Principle Agreement Scale (PAS), which is a systematic adaptation and refinement of a prior instrument. - We detail the systematic process of item creation and selection, survey design, and validation. The results demonstrate that both scales possess important internal consistency and construct validity. 
A convergence and divergence analysis, including Proportional Odds Logistic Regression, a Bland-Altman plot, and an Intraclass Correlation Coefficient (ICC), reveals that while the two scales are moderately correlated, they are not interchangeable and capture distinct dimensions of agile agreement. The primary contribution of this work is a pair of publicly available instruments, validated within a specific demographic of Belgian IT professionals. These scales represent a critical initial step toward facilitating a more nuanced measurement of agile agreement, distinguishing agile agreement across various levels of perception and aiding in a more refined interpretation of person-agile fit. - oai:arXiv.org:2512.08461v1 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + PHWSOA: A Pareto-based Hybrid Whale-Seagull Scheduling for Multi-Objective Tasks in Cloud Computing + https://arxiv.org/abs/2512.09568 + arXiv:2512.09568v1 Announce Type: new +Abstract: Task scheduling is a critical research challenge in cloud computing, a transformative technology widely adopted across industries. Although numerous scheduling solutions exist, they predominantly optimize singular or limited metrics such as execution time or resource utilization often neglecting the need for comprehensive multi-objective optimization. To bridge this gap, this paper proposes the Pareto-based Hybrid Whale-Seagull Optimization Algorithm (PHWSOA). This algorithm synergistically combines the strengths of the Whale Optimization Algorithm (WOA) and the Seagull Optimization Algorithm (SOA), specifically mitigating WOA's limitations in local exploitation and SOA's constraints in global exploration. Leveraging Pareto dominance principles, PHWSOA simultaneously optimizes three key objectives: makespan, virtual machine (VM) load balancing, and economic cost. 
Key enhancements include: Halton sequence initialization for superior population diversity, a Pareto-guided mutation mechanism to avert premature convergence, and parallel processing for accelerated convergence. Furthermore, a dynamic VM load redistribution mechanism is integrated to improve load balancing during task execution. Extensive experiments conducted on the CloudSim simulator, utilizing real-world workload traces from NASA-iPSC and HPC2N, demonstrate that PHWSOA delivers substantial performance gains. Specifically, it achieves up to a 72.1% reduction in makespan, a 36.8% improvement in VM load balancing, and 23.5% cost savings. These results substantially outperform baseline methods including WOA, GA, PEWOA, and GCWOA underscoring PHWSOA's strong potential for enabling efficient resource management in practical cloud environments. + oai:arXiv.org:2512.09568v1 + cs.DC + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Nicolas Matton, Anthony Simonofski, Marie-Ange Remiche, Beno\^it Vanderose + http://creativecommons.org/licenses/by/4.0/ + Zhi Zhao, Hang Xiao, Wei Rang - Transformers for Multimodal Brain State Decoding: Integrating Functional Magnetic Resonance Imaging Data and Medical Metadata - https://arxiv.org/abs/2512.08462 - arXiv:2512.08462v1 Announce Type: new -Abstract: Decoding brain states from functional magnetic resonance imaging (fMRI) data is vital for advancing neuroscience and clinical applications. While traditional machine learning and deep learning approaches have made strides in leveraging the high-dimensional and complex nature of fMRI data, they often fail to utilize the contextual richness provided by Digital Imaging and Communications in Medicine (DICOM) metadata. This paper presents a novel framework integrating transformer-based architectures with multimodal inputs, including fMRI data and DICOM metadata. 
By employing attention mechanisms, the proposed method captures intricate spatial-temporal patterns and contextual relationships, enhancing model accuracy, interpretability, and robustness. The potential of this framework spans applications in clinical diagnostics, cognitive neuroscience, and personalized medicine. Limitations, such as metadata variability and computational demands, are addressed, and future directions for optimizing scalability and generalizability are discussed. - oai:arXiv.org:2512.08462v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + The Gender Code: Gendering the Global Governance of Artificial Intelligence + https://arxiv.org/abs/2512.09570 + arXiv:2512.09570v1 Announce Type: new +Abstract: This paper examines how international AI governance frameworks address gender issues and gender-based harms. The analysis covers binding regulations, such as the EU AI Act; soft law instruments, like the UNESCO Recommendations on AI Ethics; and global initiatives, such as the Global Partnership on AI (GPAI). These instruments reveal emerging trends, including the integration of gender concerns into broader human rights frameworks, a shift toward explicit gender-related provisions, and a growing emphasis on inclusivity and diversity. Yet, some critical gaps persist, including inconsistent treatment of gender across governance documents, limited engagement with intersectionality, and a lack of robust enforcement mechanisms. However, this paper argues that effective AI governance must be intersectional, enforceable, and inclusive. This is key to moving beyond tokenism toward meaningful equity and preventing reinforcement of existing inequalities. The study contributes to ethical AI debates by highlighting the importance of gender-sensitive governance in building a just technological future. 
+ oai:arXiv.org:2512.09570v1 + cs.CY + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Danial Jafarzadeh Jazi, Maryam Hajiesmaeili + Jelena Cupac - Using reinforcement learning to probe the role of feedback in skill acquisition - https://arxiv.org/abs/2512.08463 - arXiv:2512.08463v1 Announce Type: new -Abstract: Many high-performance human activities are executed with little or no external feedback: think of a figure skater landing a triple jump, a pitcher throwing a curveball for a strike, or a barista pouring latte art. To study the process of skill acquisition under fully controlled conditions, we bypass human subjects. Instead, we directly interface a generalist reinforcement learning agent with a spinning cylinder in a tabletop circulating water channel to maximize or minimize drag. This setup has several desirable properties. First, it is a physical system, with the rich interactions and complex dynamics that only the physical world has: the flow is highly chaotic and extremely difficult, if not impossible, to model or simulate accurately. Second, the objective -- drag minimization or maximization -- is easy to state and can be captured directly in the reward, yet good strategies are not obvious beforehand. Third, decades-old experimental studies provide recipes for simple, high-performance open-loop policies. Finally, the setup is inexpensive and far easier to reproduce than human studies. In our experiments we find that high-dimensional flow feedback lets the agent discover high-performance drag-control strategies with only minutes of real-world interaction. When we later replay the same action sequences without any feedback, we obtain almost identical performance. This shows that feedback, and in particular flow feedback, is not needed to execute the learned policy. 
Surprisingly, without flow feedback during training the agent fails to discover any well-performing policy in drag maximization, but still succeeds in drag minimization, albeit more slowly and less reliably. Our studies show that learning a high-performance skill can require richer information than executing it, and learning conditions can be kind or wicked depending solely on the goal, not on dynamics or policy complexity. - oai:arXiv.org:2512.08463v1 - cs.AI - cs.LG + Mastering Diverse, Unknown, and Cluttered Tracks for Robust Vision-Based Drone Racing + https://arxiv.org/abs/2512.09571 + arXiv:2512.09571v1 Announce Type: new +Abstract: Most reinforcement learning(RL)-based methods for drone racing target fixed, obstacle-free tracks, leaving the generalization to unknown, cluttered environments largely unaddressed. This challenge stems from the need to balance racing speed and collision avoidance, limited feasible space causing policy exploration trapped in local optima during training, and perceptual ambiguity between gates and obstacles in depth maps-especially when gate positions are only coarsely specified. To overcome these issues, we propose a two-phase learning framework: an initial soft-collision training phase that preserves policy exploration for high-speed flight, followed by a hard-collision refinement phase that enforces robust obstacle avoidance. An adaptive, noise-augmented curriculum with an asymmetric actor-critic architecture gradually shifts the policy's reliance from privileged gate-state information to depth-based visual input. We further impose Lipschitz constraints and integrate a track-primitive generator to enhance motion stability and cross-environment generalization. We evaluate our framework through extensive simulation and ablation studies, and validate it in real-world experiments on a computationally constrained quadrotor. 
The system achieves agile flight while remaining robust to gate-position errors, developing a generalizable drone racing framework with the capability to operate in diverse, partially unknown and cluttered environments. https://yufengsjtu.github.io/MasterRacing.github.io/ + oai:arXiv.org:2512.09571v1 cs.RO - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Antonio Terpin, Raffaello D'Andrea + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Feng Yu, Yu Hu, Yang Su, Yang Deng, Linzuo Zhang, Danping Zou - Quantization and Security Parameter Design for Overflow-Free Confidential FRIT - https://arxiv.org/abs/2512.08464 - arXiv:2512.08464v1 Announce Type: new -Abstract: This study proposes a systematic design procedure for determining the quantization gain and the security parameter in the Confidential Fictitious Reference Iterative Tuning (CFRIT), enabling overflow-free and accuracy-guaranteed encrypted controller tuning. Within an encrypted data-driven gain tuning, the range of quantization errors induced during the encoding (encryption) process can be estimated from operational data. Based on this insight, explicit analytical conditions on the quantization gain and the security parameter are derived to prevent overflow in computing over encrypted data. Furthermore, the analysis reveals a quantitative relationship between quantization-induced errors and the deviation between the gains obtained by CFRIT and non-confidential Fictitious Reference Iterative Tuning (FRIT), clarifying how parameter choice affects tuning accuracy. A numerical example verifies the proposed procedure by demonstrating that the designed parameters achieve accurate encrypted tuning within a prescribed tolerance while preventing overflow. 
In addition, the admissible region of parameter combinations is visualized to examine the characteristics of feasible and infeasible regions, providing practical insights into parameter design for encrypted data-driven control. - oai:arXiv.org:2512.08464v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Investigate the Low-level Visual Perception in Vision-Language based Image Quality Assessment + https://arxiv.org/abs/2512.09573 + arXiv:2512.09573v1 Announce Type: new +Abstract: Recent advances in Image Quality Assessment (IQA) have leveraged Multi-modal Large Language Models (MLLMs) to generate descriptive explanations. However, despite their strong visual perception modules, these models often fail to reliably detect basic low-level distortions such as blur, noise, and compression, and may produce inconsistent evaluations across repeated inferences. This raises an essential question: do MLLM-based IQA systems truly perceive the visual features that matter? To examine this issue, we introduce a low-level distortion perception task that requires models to classify specific distortion types. Our component-wise analysis shows that although MLLMs are structurally capable of representing such distortions, they tend to overfit training templates, leading to biases in quality scoring. As a result, critical low-level features are weakened or lost during the vision-language alignment transfer stage. Furthermore, by computing the semantic distance between visual features and corresponding semantic tokens before and after component-wise fine-tuning, we show that improving the alignment of the vision encoder dramatically enhances distortion recognition accuracy, increasing it from 14.92% to 84.43%. Overall, these findings indicate that incorporating dedicated constraints on the vision encoder can strengthen text-explainable visual representations and enable MLLM-based pipelines to produce more coherent and interpretable reasoning in vision-centric tasks. 
+ oai:arXiv.org:2512.09573v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Yuan Li, Zitang Sun, Yen-Ju Chen, Shin'ya Nishida - Instantaneous Complex Phase and Frequency: Conceptual Clarification and Equivalence between Formulations + https://arxiv.org/abs/2512.09574 + arXiv:2512.09574v1 Announce Type: new +Abstract: This letter seeks to clarify the different existing definitions of both instantaneous complex phase and frequency as well as their equivalence when specific hypotheses hold. To achieve this, the two fundamental definitions, i.e., those based on either the use of (i) analytic signals or (ii) space vectors, together with the premises used for their formulation, are presented and their relationship shown. Lastly, a unified notation and terminology to avoid confusion is proposed. + oai:arXiv.org:2512.09574v1 eess.SY cs.SY Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ C\'esar Garc\'ia-Veloso, Mario Paolone, Federico Milano - High-performance computing enabled contingency analysis for modern power networks - https://arxiv.org/abs/2512.08465 - arXiv:2512.08465v1 Announce Type: new -Abstract: Modern power networks face increasing vulnerability to cascading failures due to high complexity and the growing penetration of intermittent resources, necessitating rigorous security assessment beyond the conventional $N-1$ criterion. Current approaches often struggle to achieve the computational tractability required for exhaustive $N-2$ contingency analysis integrated with complex stability evaluations like small-signal stability. Addressing this computational bottleneck and the limitations of deterministic screening, this paper presents a scalable methodology for the vulnerability assessment of modern power networks, integrating $N-2$ contingency analysis with small-signal stability evaluation. To prioritize critical components, we propose a probabilistic \textbf{Risk Index ($R_i$)} that weights the deterministic \textit{severity} of a contingency (including optimal power flow divergence, islanding, and oscillatory instability) by the \textit{failure frequency} of the involved elements based on reliability data. The proposed framework is implemented using High-Performance Computing (HPC) techniques through the PyCOMPSs parallel programming library, orchestrating optimal power flow simulations (VeraGrid) and small-signal analysis (STAMP) to enable the exhaustive exploration of massive contingency sets. The methodology is validated on the IEEE 118-bus test system, processing more than \num{57000} scenarios to identify components prone to triggering cascading failures. 
Results demonstrate that the risk-based approach effectively isolates critical assets that deterministic $N-1$ criteria often overlook. This work establishes a replicable and efficient workflow for probabilistic security assessment, suitable for large-scale networks and capable of supporting operator decision-making in near real-time environments. - oai:arXiv.org:2512.08465v1 + Instantaneous Complex Phase and Frequency: Conceptual Clarification and Equivalence between Formulations + https://arxiv.org/abs/2512.09574 + arXiv:2512.09574v1 Announce Type: new +Abstract: This letter seeks to clarify the different existing definitions of both instantaneous complex phase and frequency as well as their equivalence when specific hypotheses hold. To achieve this, the two fundamental definitions, i.e., those based on either the use of (i) analytic signals or (ii) space vectors, together with the premises used for their formulation, are presented and their relationship shown. Lastly, an unified notation and terminology to avoid confusion is proposed. + oai:arXiv.org:2512.09574v1 eess.SY - cs.PF cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Alexandre Gracia-Calvo, Francesca Rossi, Eduardo Iraola, Juan Carlos Olives-Camps, Eduardo Prieto-Araujo + C\'esar Garc\'ia-Veloso, Mario Paolone, Federico Milano - Team-Aware Football Player Tracking with SAM: An Appearance-Based Approach to Occlusion Recovery - https://arxiv.org/abs/2512.08467 - arXiv:2512.08467v1 Announce Type: new -Abstract: Football player tracking is challenged by frequent occlusions, similar appearances, and rapid motion in crowded scenes. This paper presents a lightweight SAM-based tracking method combining the Segment Anything Model (SAM) with CSRT trackers and jersey color-based appearance models. 
We propose a team-aware tracking system that uses SAM for precise initialization and HSV histogram-based re-identification to improve occlusion recovery. Our evaluation measures three dimensions: processing speed (FPS and memory), tracking accuracy (success rate and box stability), and robustness (occlusion recovery and identity consistency). Experiments on football video sequences show that the approach achieves 7.6-7.7 FPS with stable memory usage (~1880 MB), maintaining 100 percent tracking success in light occlusions and 90 percent in crowded penalty-box scenarios with 5 or more players. Appearance-based re-identification recovers 50 percent of heavy occlusions, demonstrating the value of domain-specific cues. Analysis reveals key trade-offs: the SAM + CSRT combination provides consistent performance across crowd densities but struggles with long-term occlusions where players leave the frame, achieving only 8.66 percent re-acquisition success. These results offer practical guidelines for deploying football tracking systems under resource constraints, showing that classical tracker-based methods work well with continuous visibility but require stronger re-identification mechanisms for extended absences. - oai:arXiv.org:2512.08467v1 + Seeing Soil from Space: Towards Robust and Scalable Remote Soil Nutrient Analysis + https://arxiv.org/abs/2512.09576 + arXiv:2512.09576v1 Announce Type: new +Abstract: Environmental variables are increasingly affecting agricultural decision-making, yet accessible and scalable tools for soil assessment remain limited. This study presents a robust and scalable modeling system for estimating soil properties in croplands, including soil organic carbon (SOC), total nitrogen (N), available phosphorus (P), exchangeable potassium (K), and pH, using remote sensing data and environmental covariates. 
The system employs a hybrid modeling approach, combining the indirect methods of modeling soil through proxies and drivers with direct spectral modeling. We extend current approaches by using interpretable physics-informed covariates derived from radiative transfer models (RTMs) and complex, nonlinear embeddings from a foundation model. We validate the system on a harmonized dataset that covers Europe's cropland soils across diverse pedoclimatic zones. Evaluation is conducted under a robust validation framework that enforces strict spatial blocking, stratified splits, and statistically distinct train-test sets, which deliberately make the evaluation harder and produce more realistic error estimates for unseen regions. The models achieved their highest accuracy for SOC and N. This performance held across unseen locations, under both spatial cross-validation and an independent test set. SOC obtained an MAE of 5.12 g/kg and a CCC of 0.77, and N obtained an MAE of 0.44 g/kg and a CCC of 0.77. We also assess uncertainty through conformal calibration, achieving 90 percent coverage at the target confidence level. This study contributes to the digital advancement of agriculture through the application of scalable, data-driven soil analysis frameworks that can be extended to related domains requiring quantitative soil evaluation, such as carbon markets. 
+ oai:arXiv.org:2512.09576v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + physics.geo-ph + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Chamath Ranasinghe, Uthayasanker Thayasivam + http://creativecommons.org/licenses/by-nc-nd/4.0/ + David Seu (CO2 Angels, Cluj-Napoca, Romania), Nicolas Longepe (European Space Agency Phi-Lab, Frascati, Italy), Gabriel Cioltea (CO2 Angels, Cluj-Napoca, Romania), Erik Maidik (CO2 Angels, Cluj-Napoca, Romania), Calin Andrei (CO2 Angels, Cluj-Napoca, Romania) - Measuring Computer Science Enthusiasm: A Questionnaire-Based Analysis of Age and Gender Effects on Students' Interest - https://arxiv.org/abs/2512.08472 - arXiv:2512.08472v1 Announce Type: new -Abstract: This study offers new insights into students' interest in computer science (CS) education by disentangling the distinct effects of age and gender across a diverse adolescent sample. Grounded in the person-object theory of interest (POI), we conceptualize enthusiasm as a short-term, activating expression of interest that combines positive affect, perceived relevance, and intention to re-engage. Experiencing such enthusiasm can temporarily shift CS attitudes and strengthen future engagement intentions, making it a valuable lens for evaluating brief outreach activities. To capture these dynamics, we developed a theoretically grounded questionnaire for pre-post assessment of the enthusiasm potential of CS interventions. Using data from more than 400 students participating in online CS courses, we examined age- and gender-related patterns in enthusiasm. The findings challenge the prevailing belief that early exposure is the primary pathway to sustained interest in CS. Instead, we identify a marked decline in enthusiasm during early adolescence, particularly among girls, alongside substantial variability in interest trajectories across age groups. 
Crucially, our analyses reveal that age is a more decisive factor than gender in shaping interest development and uncover key developmental breakpoints. Despite starting with lower baseline attitudes, older students showed the largest positive changes following the intervention, suggesting that well-designed short activities can effectively re-activate interest even at later ages. Overall, the study highlights the need for a dynamic, age-sensitive framework for CS education in which instructional strategies are aligned with developmental trajectories. - oai:arXiv.org:2512.08472v1 - cs.SE - cs.CY - econ.GN - q-fin.EC - Wed, 10 Dec 2025 00:00:00 -0500 + Auto-BenchmarkCard: Automated Synthesis of Benchmark Documentation + https://arxiv.org/abs/2512.09577 + arXiv:2512.09577v1 Announce Type: new +Abstract: We present Auto-BenchmarkCard, a workflow for generating validated descriptions of AI benchmarks. Benchmark documentation is often incomplete or inconsistent, making it difficult to interpret and compare benchmarks across tasks or domains. Auto-BenchmarkCard addresses this gap by combining multi-agent data extraction from heterogeneous sources (e.g., Hugging Face, Unitxt, academic papers) with LLM-driven synthesis. A validation phase evaluates factual accuracy through atomic entailment scoring using the FactReasoner tool. This workflow has the potential to promote transparency, comparability, and reusability in AI benchmark reporting, enabling researchers and practitioners to better navigate and evaluate benchmark choices. + oai:arXiv.org:2512.09577v1 + cs.HC + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kai Marquardt, Robert Hanak, Anne Koziolek, Lucia Happe + Aris Hofmann, Inge Vejsbjerg, Dhaval Salwala, Elizabeth M. 
Daly - Solving Over-Smoothing in GNNs via Nonlocal Message Passing: Algebraic Smoothing and Depth Scalability - https://arxiv.org/abs/2512.08475 - arXiv:2512.08475v1 Announce Type: new -Abstract: The relationship between Layer Normalization (LN) placement and the over-smoothing phenomenon remains underexplored. We identify a critical dilemma: Pre-LN architectures avoid over-smoothing but suffer from the curse of depth, while Post-LN architectures bypass the curse of depth but experience over-smoothing. - To resolve this, we propose a new method based on Post-LN that induces algebraic smoothing, preventing over-smoothing without the curse of depth. Empirical results across five benchmarks demonstrate that our approach supports deeper networks (up to 256 layers) and improves performance, requiring no additional parameters. - Key contributions: - Theoretical Characterization: Analysis of LN dynamics and their impact on over-smoothing and the curse of depth. - A Principled Solution: A parameter-efficient method that induces algebraic smoothing and avoids over-smoothing and the curse of depth. - Empirical Validation: Extensive experiments showing the effectiveness of the method in deeper GNNs. - oai:arXiv.org:2512.08475v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Hands-on Evaluation of Visual Transformers for Object Recognition and Detection + https://arxiv.org/abs/2512.09579 + arXiv:2512.09579v1 Announce Type: new +Abstract: Convolutional Neural Networks (CNNs) for computer vision sometimes struggle with understanding images in a global context, as they mainly focus on local patterns. On the other hand, Vision Transformers (ViTs), inspired by models originally created for language processing, use self-attention mechanisms, which allow them to understand relationships across the entire image. 
In this paper, we compare different types of ViTs (pure, hierarchical, and hybrid) against traditional CNN models across various tasks, including object recognition, detection, and medical image classification. We conduct thorough tests on standard datasets like ImageNet for image classification and COCO for object detection. Additionally, we apply these models to medical imaging using the ChestX-ray14 dataset. We find that hybrid and hierarchical transformers, especially Swin and CvT, offer a strong balance between accuracy and computational resources. Furthermore, by experimenting with data augmentation techniques on medical images, we discover significant performance improvements, particularly with the Swin Transformer model. Overall, our results indicate that Vision Transformers are competitive and, in many cases, outperform traditional CNNs, especially in scenarios requiring the understanding of global visual contexts like medical imaging. + oai:arXiv.org:2512.09579v1 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Weiqi Guan, Junlin He + http://creativecommons.org/licenses/by/4.0/ + 37th International Conference on Tools with Artificial Intelligence (ICTAI 2025) + Dimitrios N. Vlachogiannis, Dimitrios A. Koutsomitropoulos - A Multi-Agent LLM Framework for Design Space Exploration in Autonomous Driving Systems - https://arxiv.org/abs/2512.08476 - arXiv:2512.08476v1 Announce Type: new -Abstract: Designing autonomous driving systems requires efficient exploration of large hardware/software configuration spaces under diverse environmental conditions, e.g., with varying traffic, weather, and road layouts. Traditional design space exploration (DSE) approaches struggle with multi-modal execution outputs and complex performance trade-offs, and often require human involvement to assess correctness based on execution outputs. 
This paper presents a multi-agent, large language model (LLM)-based DSE framework, which integrates multi-modal reasoning with 3D simulation and profiling tools to automate the interpretation of execution outputs and guide the exploration of system designs. Specialized LLM agents are leveraged to handle user input interpretation, design point generation, execution orchestration, and analysis of both visual and textual execution outputs, which enables identification of potential bottlenecks without human intervention. A prototype implementation is developed and evaluated on a robotaxi case study (an SAE Level 4 autonomous driving application). Compared with a genetic algorithm baseline, the proposed framework identifies more Pareto-optimal, cost-efficient solutions with reduced navigation time under the same exploration budget. Experimental results also demonstrate the efficiency of adopting the LLM-based approach for DSE. We believe that this framework paves the way toward the design automation of autonomous driving systems. - oai:arXiv.org:2512.08476v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Po-An Shih, Shao-Hua Wang, Yung-Che Li, Chia-Heng Tu, Chih-Han Chang + Content-Adaptive Image Retouching Guided by Attribute-Based Text Representation + https://arxiv.org/abs/2512.09580 + arXiv:2512.09580v1 Announce Type: new +Abstract: Image retouching has received significant attention due to its ability to achieve high-quality visual content. Existing approaches mainly rely on uniform pixel-wise color mapping across entire images, neglecting the inherent color variations induced by image content. This limitation hinders existing approaches from achieving adaptive retouching that accommodates both diverse color distributions and user-defined style preferences. To address these challenges, we propose a novel Content-Adaptive image retouching method guided by Attribute-based Text Representation (CA-ATP).
Specifically, we propose a content-adaptive curve mapping module, which leverages a series of basis curves to establish multiple color mapping relationships and learns the corresponding weight maps, enabling content-aware color adjustments. The proposed module can capture color diversity within the image content, allowing similar color values to receive distinct transformations based on their spatial context. In addition, we propose an attribute text prediction module that generates text representations from multiple image attributes, which explicitly represent user-defined style preferences. These attribute-based text representations are subsequently integrated with visual features via a multimodal model, providing user-friendly guidance for image retouching. Extensive experiments on several public datasets demonstrate that our method achieves state-of-the-art performance. + oai:arXiv.org:2512.09580v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Po-An Shih, Shao-Hua Wang, Yung-Che Li, Chia-Heng Tu, Chih-Han Chang + Hancheng Zhu, Xinyu Liu, Rui Yao, Kunyang Sun, Leida Li, Abdulmotaleb El Saddik - ContextDrag: Precise Drag-Based Image Editing via Context-Preserving Token Injection and Position-Consistent Attention - https://arxiv.org/abs/2512.08477 - arXiv:2512.08477v1 Announce Type: new -Abstract: Drag-based image editing aims to modify visual content followed by user-specified drag operations. Despite existing methods having made notable progress, they still fail to fully exploit the contextual information in the reference image, including fine-grained texture details, leading to edits with limited coherence and fidelity. To address this challenge, we introduce ContextDrag, a new paradigm for drag-based editing that leverages the strong contextual modeling capability of editing models, such as FLUX-Kontext. 
By incorporating VAE-encoded features from the reference image, ContextDrag can leverage rich contextual cues and preserve fine-grained details, without the need for finetuning or inversion. Specifically, ContextDrag introduces a novel Context-preserving Token Injection (CTI) that injects noise-free reference features into their correct destination locations via a Latent-space Reverse Mapping (LRM) algorithm. This strategy enables precise drag control while preserving consistency in both semantics and texture details. Second, ContextDrag adopts a novel Position-Consistent Attention (PCA), which positionally re-encodes the reference tokens and applies overlap-aware masking to eliminate interference from irrelevant reference features. Extensive experiments on DragBench-SR and DragBench-DR demonstrate that our approach surpasses all existing SOTA methods. Code will be publicly available. - oai:arXiv.org:2512.08477v1 cs.CV cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Huiguo He, Pengyu Yan, Ziqi Yi, Weizhi Zhong, Zheng Liu, Yejun Tang, Huan Yang, Kun Gai, Guanbin Li, Lianwen Jin + UnReflectAnything: RGB-Only Highlight Removal by Rendering Synthetic Specular Supervision + https://arxiv.org/abs/2512.09583 + arXiv:2512.09583v1 Announce Type: new +Abstract: Specular highlights distort appearance, obscure texture, and hinder geometric reasoning in both natural and surgical imagery. We present UnReflectAnything, an RGB-only framework that removes highlights from a single image by predicting a highlight map together with a reflection-free diffuse reconstruction. The model uses a frozen vision transformer encoder to extract multi-scale features, a lightweight head to localize specular regions, and a token-level inpainting module that restores corrupted feature patches before producing the final diffuse image. To overcome the lack of paired supervision, we introduce a Virtual Highlight Synthesis pipeline that renders physically plausible specularities using monocular geometry, Fresnel-aware shading, and randomized lighting, which enables training on arbitrary RGB images with correct geometric structure.
UnReflectAnything generalizes across natural and surgical domains where non-Lambertian surfaces and non-uniform lighting create severe highlights, and it achieves competitive performance with state-of-the-art results on several benchmarks. Project Page: https://alberto-rota.github.io/UnReflectAnything/ + oai:arXiv.org:2512.09583v1 cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new + http://creativecommons.org/licenses/by/4.0/ + Alberto Rota, Mert Kiray, Mert Asim Karaoglu, Patrick Ruhkamp, Elena De Momi, Nassir Navabm, Benjamin Busam - Visionary: The World Model Carrier Built on WebGPU-Powered Gaussian Splatting Platform - https://arxiv.org/abs/2512.08478 - arXiv:2512.08478v1 Announce Type: new -Abstract: Neural rendering, particularly 3D Gaussian Splatting (3DGS), has evolved rapidly and become a key component for building world models. However, existing viewer solutions remain fragmented, heavy, or constrained by legacy pipelines, resulting in high deployment friction and limited support for dynamic content and generative models. In this work, we present Visionary, an open, web-native platform for real-time rendering of various Gaussian Splatting representations and meshes. Built on an efficient WebGPU renderer with per-frame ONNX inference, Visionary enables dynamic neural processing while maintaining a lightweight, "click-to-run" browser experience. It introduces a standardized Gaussian Generator contract, which not only supports standard 3DGS rendering but also allows plug-and-play algorithms to generate or update Gaussians each frame. Such inference also enables us to apply feedforward generative post-processing. The platform further offers a plug-in three.js library with a concise TypeScript API for seamless integration into existing web applications.
Experiments show that, under identical 3DGS assets, Visionary achieves superior rendering efficiency compared to current Web viewers due to GPU-based primitive sorting. It already supports multiple variants, including MLP-based 3DGS, 4DGS, neural avatars, and style transformation or enhancement networks. By unifying inference and rendering directly in the browser, Visionary significantly lowers the barrier to reproduction, comparison, and deployment of 3DGS-family methods, serving as a unified World Model Carrier for both reconstructive and generative paradigms. - oai:arXiv.org:2512.08478v1 - cs.CV + Stanford Sleep Bench: Evaluating Polysomnography Pre-training Methods for Sleep Foundation Models + https://arxiv.org/abs/2512.09591 + arXiv:2512.09591v1 Announce Type: new +Abstract: Polysomnography (PSG), the gold standard test for sleep analysis, generates vast amounts of multimodal clinical data, presenting an opportunity to leverage self-supervised representation learning (SSRL) for pre-training foundation models to enhance sleep analysis. However, progress in sleep foundation models is hindered by two key limitations: (1) the lack of a shared dataset and benchmark with diverse tasks for training and evaluation, and (2) the absence of a systematic evaluation of SSRL approaches across sleep-related tasks. To address these gaps, we introduce Stanford Sleep Bench, a large-scale PSG dataset comprising 17,467 recordings totaling over 163,000 hours from a major sleep clinic, including 13 clinical disease prediction tasks alongside canonical sleep-related tasks such as sleep staging, apnea diagnosis, and age estimation. We systematically evaluate SSRL pre-training methods on Stanford Sleep Bench, assessing downstream performance across four tasks: sleep staging, apnea diagnosis, age estimation, and disease and mortality prediction. Our results show that multiple pretraining methods achieve comparable performance for sleep staging, apnea diagnosis, and age estimation. 
However, for mortality and disease prediction, contrastive learning significantly outperforms other approaches while also converging faster during pretraining. To facilitate reproducibility and advance sleep research, we will release Stanford Sleep Bench along with pretrained model weights, training pipelines, and evaluation code. + oai:arXiv.org:2512.09591v1 + cs.LG cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new + http://creativecommons.org/licenses/by/4.0/ + Magnus Ruud Kjaer, Rahul Thapa, Gauri Ganjoo, Hyatt Moore IV, Poul Joergen Jennum, Brandon M. Westover, James Zou, Emmanuel Mignot, Bryan He, Andreas Brink-Kjaer - Construction and Performance of Kinetic Schemes for Linear Systems of Conservation Laws - https://arxiv.org/abs/2512.08479 - arXiv:2512.08479v1 Announce Type: new -Abstract: We describe a methodology to build vectorial kinetic schemes, targeting the numerical solution of linear symmetric-hyperbolic systems of conservation laws, a minimal application case for those schemes. Precisely, we fully detail the construction of kinetic schemes that satisfy a discrete equivalent to a convex extension (an additional non-trivial conservation law) of the target system, specifically the (linear) acoustic and elastodynamics systems. Then, we evaluate numerically the convergence of various possible kinetic schemes toward smooth solutions, in comparison with standard finite-difference and finite-volume discretizations on Cartesian meshes.
Our numerical results confirm the interest of ensuring a discrete equivalent to a convex extension, and show the influence of remaining parameter variations in terms of error magnitude, both for "first-order" and "second-order" kinetic schemes: the parameter choice with the largest CFL number (equivalently, the smallest spurious diffusion in the equivalent equation analysis) has the smallest discretization error. - oai:arXiv.org:2512.08479v1 - math.NA - cs.NA - Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Emmanuel Audusse (LAGA), S\'ebastien Boyaval (MATHERIALS, LHSV), Virgile Dubos (UMA), Minh-Hoang Le (LHSV) + CS3D: An Efficient Facial Expression Recognition via Event Vision + https://arxiv.org/abs/2512.09592 + arXiv:2512.09592v1 Announce Type: new +Abstract: Responsive and accurate facial expression recognition is crucial to human-robot interaction for daily service robots. Nowadays, event cameras are becoming more widely adopted as they surpass RGB cameras in capturing facial expression changes due to their high temporal resolution, low latency, computational efficiency, and robustness in low-light conditions. Despite these advantages, event-based approaches still encounter practical challenges, particularly in adopting mainstream deep learning models. Traditional deep learning methods for facial expression analysis are energy-intensive, making them difficult to deploy on edge computing devices and thereby increasing costs, especially for high-frequency, dynamic, event vision-based approaches. To address this challenging issue, we propose the CS3D framework by decomposing the Convolutional 3D method to reduce the computational complexity and energy consumption. Additionally, by utilizing soft spiking neurons and a spatial-temporal attention mechanism, the ability to retain information is enhanced, thus improving the accuracy of facial expression detection.
Experimental results indicate that our proposed CS3D method attains higher accuracy on multiple datasets compared to architectures such as the RNN, Transformer, and C3D, while the energy consumption of the CS3D method is just 21.97% of that required by the original C3D on the same device. + oai:arXiv.org:2512.09592v1 + cs.CV cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhe Wang, Qijin Song, Yucen Peng, Weibang Bai - Soft Inductive Bias Approach via Explicit Reasoning Perspectives in Inappropriate Utterance Detection Using Large Language Models - https://arxiv.org/abs/2512.08480 - arXiv:2512.08480v1 Announce Type: new -Abstract: Recent incidents in certain online games and communities, where anonymity is guaranteed, show that unchecked inappropriate remarks frequently escalate into verbal abuse and even criminal behavior, raising significant social concerns. Consequently, there is a growing need for research on techniques that can detect inappropriate utterances within conversational texts to help build a safer communication environment. Although large-scale language models trained on Korean corpora and chain-of-thought reasoning have recently gained attention, research applying these approaches to inappropriate utterance detection remains limited. In this study, we propose a soft inductive bias approach that explicitly defines reasoning perspectives to guide the inference process, thereby promoting rational decision-making and preventing errors that may arise during reasoning. We fine-tune a Korean large language model using the proposed method and conduct both quantitative performance comparisons and qualitative evaluations across different training strategies.
Experimental results show that the Kanana-1.5 model achieves an average accuracy of 87.0046, improving by approximately 3.89 percent over standard supervised learning. These findings indicate that the proposed method goes beyond simple knowledge imitation by large language models and enables more precise and consistent judgments through constrained reasoning perspectives, demonstrating its effectiveness for inappropriate utterance detection. - oai:arXiv.org:2512.08480v1 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Model management to support systems engineering workflows using ontology-based knowledge graphs + https://arxiv.org/abs/2512.09596 + arXiv:2512.09596v1 Announce Type: new +Abstract: System engineering has been shifting from document-centric to model-based approaches, where assets are becoming more and more digital. Although digitisation conveys several benefits, it also brings several concerns (e.g., storage and access) and opportunities. In the context of Cyber- Physical Systems (CPS), we have experts from various domains executing complex workflows and manipulating models in a plethora of different formalisms, each with their own methods, techniques and tools. Storing knowledge on these workflows can reduce considerable effort during system development not only to allow their repeatability and replicability but also to access and reason on data generated by their execution. In this work, we propose a framework to manage modelling artefacts generated from workflow executions. The basic workflow concepts, related formalisms and artefacts are formally defined in an ontology specified in OML (Ontology Modelling Language). This ontology enables the construction of a knowledge graph that contains system engineering data to which we can apply reasoning. We also developed several tools to support system engineering during the design of workflows, their enactment, and artefact storage, considering versioning, querying and reasoning on the stored data. 
These tools also hide the complexity of manipulating the knowledge graph directly. Finally, we have applied our proposed framework in a real-world system development scenario of a drivetrain smart sensor system. Results show that our proposal not only helped the system engineer with fundamental difficulties like storage and versioning but also reduced the time needed to access relevant information and enabled new knowledge to be inferred from the knowledge graph. + oai:arXiv.org:2512.09596v1 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 new + http://creativecommons.org/licenses/by/4.0/ + 10.1016/j.jii.2024.100720 + J. Ind. Inf. Integr. 42: 100720 (2024) + Arkadiusz Ry\'s, Lucas Lima, Joeri Exelmans, Dennis Janssens, Hans Vangheluwe - Prospect Theory in Physical Human-Robot Interaction: A Pilot Study of Probability Perception - https://arxiv.org/abs/2512.08481 - arXiv:2512.08481v1 Announce Type: new -Abstract: Understanding how humans respond to uncertainty is critical for designing safe and effective physical human-robot interaction (pHRI), as physically working with robots introduces multiple sources of uncertainty, including trust, comfort, and perceived safety. Conventional pHRI control frameworks typically build on optimal control theory, which assumes that human actions minimize a cost function; however, human behavior under uncertainty often departs from such optimal patterns. To address this gap, additional understanding of human behavior under uncertainty is needed. This pilot study implemented a physically coupled target-reaching task in which the robot delivered assistance or disturbances with systematically varied probabilities (10% to 90%).
Analysis of participants' force inputs and decision-making strategies revealed two distinct behavioral clusters: a "trade-off" group that modulated their physical responses according to disturbance likelihood, and an "always-compensate" group characterized by strong risk aversion irrespective of probability. These findings provide empirical evidence that human decision-making in pHRI is highly individualized and that the perceived probability can differ from the true value. Accordingly, the study highlights the need for more interpretable behavioral models, such as cumulative prospect theory (CPT), to more accurately capture these behaviors and inform the design of future adaptive robot controllers. - oai:arXiv.org:2512.08481v1 cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yixiang Lin, Tiancheng Yang, Jonathan Eden, Ying Tan + UrbanNav: Learning Language-Guided Urban Navigation from Web-Scale Human Trajectories + https://arxiv.org/abs/2512.09607 + arXiv:2512.09607v1 Announce Type: new +Abstract: Navigating complex urban environments using natural language instructions poses significant challenges for embodied agents, including noisy language instructions, ambiguous spatial references, diverse landmarks, and dynamic street scenes. Current visual navigation methods are typically limited to simulated or off-street environments, and often rely on precise goal formats, such as specific coordinates or images. This limits their effectiveness for autonomous agents like last-mile delivery robots navigating unfamiliar cities. To address these limitations, we introduce UrbanNav, a scalable framework that trains embodied agents to follow free-form language instructions in diverse urban settings. Leveraging web-scale city walking videos, we develop a scalable annotation pipeline that aligns human navigation trajectories with language instructions grounded in real-world landmarks. UrbanNav encompasses over 1,500 hours of navigation data and 3 million instruction-trajectory-landmark triplets, capturing a wide range of urban scenarios.
Our model learns robust navigation policies to tackle complex urban scenarios, demonstrating superior spatial reasoning, robustness to noisy instructions, and generalization to unseen urban settings. Experimental results show that UrbanNav significantly outperforms existing methods, highlighting the potential of large-scale web video data to enable language-guided, real-world urban navigation for embodied agents. + oai:arXiv.org:2512.09607v1 cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yixiang Lin, Tiancheng Yang, Jonathan Eden, Ying Tan - - - NeurIDA: Dynamic Modeling for Effective In-Database Analytics - https://arxiv.org/abs/2512.08483 - arXiv:2512.08483v1 Announce Type: new -Abstract: Relational Database Management Systems (RDBMS) manage complex, interrelated data and support a broad spectrum of analytical tasks. With the growing demand for predictive analytics, the deep integration of machine learning (ML) into RDBMS has become critical. However, a fundamental challenge hinders this evolution: conventional ML models are static and task-specific, whereas RDBMS environments are dynamic and must support diverse analytical queries. Each analytical task entails constructing a bespoke pipeline from scratch, which incurs significant development overhead and hence limits wide adoption of ML in analytics. - We present NeurIDA, an autonomous end-to-end system for in-database analytics that dynamically "tweaks" the best available base model to better serve a given analytical task. In particular, we propose a novel paradigm of dynamic in-database modeling to pre-train a composable base model architecture over the relational data. Upon receiving a task, NeurIDA formulates the task and data profile to dynamically select and configure relevant components from the pool of base models and shared model components for prediction. 
For a friendly user experience, NeurIDA supports natural language queries; it interprets user intent to construct structured task profiles, and generates analytical reports with dedicated LLM agents. By design, NeurIDA enables ease of use together with effective and efficient in-database AI analytics. An extensive experimental study shows that NeurIDA consistently delivers up to 12% improvement in AUC-ROC and 25% relative reduction in MAE across ten tasks on five real-world datasets. The source code is available at https://github.com/Zrealshadow/NeurIDA - oai:arXiv.org:2512.08483v1 - cs.DB - Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Lingze Zeng, Naili Xing, Shaofeng Cai, Peng Lu, Gang Chen, Jian Pei, Beng Chin Ooi - Optimal Perturbation Budget Allocation for Data Poisoning in Offline Reinforcement Learning - https://arxiv.org/abs/2512.08485 - arXiv:2512.08485v1 Announce Type: new -Abstract: Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to data poisoning attacks. Existing attack strategies typically rely on locally uniform perturbations, which treat all samples indiscriminately. This approach is inefficient, as it wastes the perturbation budget on low-impact samples, and lacks stealthiness due to significant statistical deviations. In this paper, we propose a novel Global Budget Allocation attack strategy. Leveraging the theoretical insight that a sample's influence on value function convergence is proportional to its Temporal Difference (TD) error, we formulate the attack as a global resource allocation problem. We derive a closed-form solution where perturbation magnitudes are assigned proportional to the TD-error sensitivity under a global L2 constraint.
Empirical results on D4RL benchmarks demonstrate that our method significantly outperforms baseline strategies, achieving up to 80% performance degradation with minimal perturbations that evade detection by state-of-the-art statistical and spectral defenses. - oai:arXiv.org:2512.08485v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Super4DR: 4D Radar-centric Self-supervised Odometry and Gaussian-based Map Optimization + https://arxiv.org/abs/2512.09608 + arXiv:2512.09608v1 Announce Type: new +Abstract: Conventional SLAM systems using visual or LiDAR data often struggle in poor lighting and severe weather. Although 4D radar is suited for such environments, its sparse and noisy point clouds hinder accurate odometry estimation, while the radar maps suffer from obscure and incomplete structures. Thus, we propose Super4DR, a 4D radar-centric framework for learning-based odometry estimation and gaussian-based map optimization. First, we design a cluster-aware odometry network that incorporates object-level cues from the clustered radar points for inter-frame matching, alongside a hierarchical self-supervision mechanism to overcome outliers through spatio-temporal consistency, knowledge transfer, and feature contrast. Second, we propose using 3D gaussians as an intermediate representation, coupled with a radar-specific growth strategy, selective separation, and multi-view regularization, to recover blurry map areas and those undetected based on image texture. Experiments show that Super4DR achieves a 67% performance gain over prior self-supervised methods, nearly matches supervised odometry, and narrows the map quality disparity with LiDAR while enabling multi-modal image rendering. 
+ oai:arXiv.org:2512.09608v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junnan Qiu, Jie Li + Zhiheng Li, Weihua Wang, Qiang Shen, Yichen Zhao, Zheng Fang - Temporal Concept Dynamics in Diffusion Models via Prompt-Conditioned Interventions - https://arxiv.org/abs/2512.08486 - arXiv:2512.08486v1 Announce Type: new -Abstract: Diffusion models are usually evaluated by their final outputs, gradually denoising random noise into meaningful images. Yet, generation unfolds along a trajectory, and analyzing this dynamic process is crucial for understanding how controllable, reliable, and predictable these models are in terms of their success/failure modes. In this work, we ask the question: when does noise turn into a specific concept (e.g., age) and lock in the denoising trajectory? We propose PCI (Prompt-Conditioned Intervention) to study this question. PCI is a training-free and model-agnostic framework for analyzing concept dynamics through diffusion time. The central idea is the analysis of Concept Insertion Success (CIS), defined as the probability that a concept inserted at a given timestep is preserved and reflected in the final image, offering a way to characterize the temporal dynamics of concept formation. Applied to several state-of-the-art text-to-image diffusion models and a broad taxonomy of concepts, PCI reveals diverse temporal behaviors across diffusion models, in which certain phases of the trajectory are more favorable to specific concepts even within the same concept type. These findings also provide actionable insights for text-driven image editing, highlighting when interventions are most effective without requiring access to model internals or training, and yielding quantitatively stronger edits that achieve a balance of semantic accuracy and content preservation than strong baselines. 
Code is available at: https://github.com/adagorgun/PCI-Prompt-Controlled-Interventions - oai:arXiv.org:2512.08486v1 + ImageTalk: Designing a Multimodal AAC Text Generation System Driven by Image Recognition and Natural Language Generation + https://arxiv.org/abs/2512.09610 + arXiv:2512.09610v1 Announce Type: new +Abstract: People living with Motor Neuron Disease (plwMND) frequently encounter speech and motor impairments that necessitate a reliance on augmentative and alternative communication (AAC) systems. This paper tackles the main challenge that traditional symbol-based AAC systems offer a limited vocabulary, while text entry solutions tend to exhibit low communication rates. To help plwMND articulate their needs about the system efficiently and effectively, we iteratively design and develop a novel multimodal text generation system called ImageTalk through a tailored proxy-user-based and an end-user-based design phase. The system demonstrates pronounced keystroke savings of 95.6%, coupled with consistent performance and high user satisfaction. We distill three design guidelines for AI-assisted text generation systems design and outline four user requirement levels tailored for AAC purposes, guiding future research in this field. + oai:arXiv.org:2512.09610v1 + cs.HC + cs.AI cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ada Gorgun, Fawaz Sammani, Nikos Deligiannis, Bernt Schiele, Jonas Fischer + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Boyin Yang, Puming Jiang, Per Ola Kristensson - Autonomous Issue Resolver: Towards Zero-Touch Code Maintenance - https://arxiv.org/abs/2512.08492 - arXiv:2512.08492v1 Announce Type: new -Abstract: Recent advances in Large Language Models have revolutionized function-level code generation; however, repository-scale Automated Program Repair (APR) remains a significant challenge. 
Current approaches typically employ a control-centric paradigm, forcing agents to navigate complex directory structures and irrelevant control logic. In this paper, we propose a paradigm shift from standard Code Property Graphs (CPGs) to the concept of a Data Transformation Graph (DTG) that inverts the topology by modeling data states as nodes and functions as edges, enabling agents to trace logic defects through data lineage rather than control flow. We introduce a multi-agent framework that reconciles data integrity navigation with control flow logic. Our theoretical analysis and case studies demonstrate that this approach resolves the "Semantic Trap" inherent in standard RAG systems in modern coding agents. We provide a comprehensive implementation in the form of the Autonomous Issue Resolver (AIR), a self-improvement system for zero-touch code maintenance that utilizes neuro-symbolic reasoning and uses the DTG structure for scalable logic repair. Our approach has demonstrated good results on several SWE benchmarks, reaching a resolution rate of 87.1% on the SWE-Verified benchmark. Our approach directly addresses the core limitations of current AI code-assistant tools and tackles the critical need for a more robust foundation for our increasingly software-dependent world. - oai:arXiv.org:2512.08492v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-sa/4.0/ Aliaksei Kaliutau + Real-Time Non-Smooth MPC for Switching Systems: Application to a Three-Tank Process + https://arxiv.org/abs/2512.09611 + arXiv:2512.09611v1 Announce Type: new +Abstract: Real-time model predictive control of non-smooth switching systems remains challenging due to discontinuities and the presence of discrete modes, which complicate numerical integration and optimization. This paper presents a real-time feasible non-smooth model predictive control scheme for a physical three-tank process, implemented without mixed-integer formulations.
The approach combines Filippov system modeling with finite elements and switch detection for time discretization, leading to a finite-dimensional optimal control problem formulated as a mathematical program with complementarity constraints. The mathematical program is solved via a homotopy of smooth nonlinear programs. We introduce modeling adjustments that make the three-tank dynamics numerically tractable, including additional modes to avoid non-Lipschitz points and undefined function values. Hardware experiments demonstrate efficient handling of switching events, mode-consistent tracking across reference changes, correct boundary handling, and constraint satisfaction. Furthermore, we investigate the impact of model mismatch and show that the tracking performance and computation times remain within real-time limits for the chosen sampling time. The complete controller is implemented using the non-smooth optimal control framework NOSNOC + oai:arXiv.org:2512.09611v1 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Aliaksei Kaliutau + http://creativecommons.org/licenses/by/4.0/ + Hendrik Alsmeier, Felix H\"ausser, Andreas Kn\"odler, Armin Nurkanovi\'c, Anton Pozharskiy, Moritz Diehl, Rolf Findeisen - LLM-based Vulnerable Code Augmentation: Generate or Refactor? - https://arxiv.org/abs/2512.08493 - arXiv:2512.08493v1 Announce Type: new -Abstract: Vulnerability code-bases often suffer from severe imbalance, limiting the effectiveness of Deep Learning-based vulnerability classifiers. Data Augmentation could help solve this by mitigating the scarcity of under-represented CWEs. In this context, we investigate LLM-based augmentation for vulnerable functions, comparing controlled generation of new vulnerable samples with semantics-preserving refactoring of existing ones. 
Using Qwen2.5-Coder to produce augmented data and CodeBERT as a vulnerability classifier on the SVEN dataset, we find that our approaches are indeed effective in enriching vulnerable code-bases through a simple process and with reasonable quality, and that a hybrid strategy best boosts vulnerability classifiers' performance. - oai:arXiv.org:2512.08493v1 - cs.CR + Rethinking Chain-of-Thought Reasoning for Videos + https://arxiv.org/abs/2512.09616 + arXiv:2512.09616v1 Announce Type: new +Abstract: Chain-of-thought (CoT) reasoning has been highly successful in solving complex tasks in natural language processing, and recent multimodal large language models (MLLMs) have extended this paradigm to video reasoning. However, these models typically build on lengthy reasoning chains and large numbers of input visual tokens. Motivated by empirical observations from our benchmark study, we hypothesize that concise reasoning combined with a reduced set of visual tokens can be sufficient for effective video reasoning. To evaluate this hypothesis, we design and validate an efficient post-training and inference framework that enhances a video MLLM's reasoning capability. Our framework enables models to operate on compressed visual tokens and generate brief reasoning traces prior to answering. The resulting models achieve substantially improved inference efficiency, deliver competitive performance across diverse benchmarks, and avoid reliance on manual CoT annotations or supervised fine-tuning. Collectively, our results suggest that long, human-like CoT reasoning may not be necessary for general video reasoning, and that concise reasoning can be both effective and efficient. Our code will be released at https://github.com/LaVi-Lab/Rethink_CoT_Video. 
+ oai:arXiv.org:2512.09616v1
 cs.CV
 cs.AI
 cs.CL
 cs.LG
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Yiwu Zhong, Zi-Yuan Hu, Yin Li, Liwei Wang
 
 
 - On-the-fly Large-scale 3D Reconstruction from Multi-Camera Rigs
 - https://arxiv.org/abs/2512.08498
 - arXiv:2512.08498v1 Announce Type: new
-Abstract: Recent advances in 3D Gaussian Splatting (3DGS) have enabled efficient free-viewpoint rendering and photorealistic scene reconstruction. While on-the-fly extensions of 3DGS have shown promise for real-time reconstruction from monocular RGB streams, they often fail to achieve complete 3D coverage due to the limited field of view (FOV). Employing a multi-camera rig fundamentally addresses this limitation. In this paper, we present the first on-the-fly 3D reconstruction framework for multi-camera rigs. Our method incrementally fuses dense RGB streams from multiple overlapping cameras into a unified Gaussian representation, achieving drift-free trajectory estimation and efficient online reconstruction. We propose a hierarchical camera initialization scheme that enables coarse inter-camera alignment without calibration, followed by a lightweight multi-camera bundle adjustment that stabilizes trajectories while maintaining real-time performance. Furthermore, we introduce a redundancy-free Gaussian sampling strategy and a frequency-aware optimization scheduler to reduce the number of Gaussian primitives and the required optimization iterations, thereby maintaining both efficiency and reconstruction fidelity. Our method reconstructs hundreds of meters of 3D scenes within just 2 minutes using only raw multi-camera video streams, demonstrating unprecedented speed, robustness, and fidelity for on-the-fly 3D scene reconstruction. 
- oai:arXiv.org:2512.08498v1 + FROMAT: Multiview Material Appearance Transfer via Few-Shot Self-Attention Adaptation + https://arxiv.org/abs/2512.09617 + arXiv:2512.09617v1 Announce Type: new +Abstract: Multiview diffusion models have rapidly emerged as a powerful tool for content creation with spatial consistency across viewpoints, offering rich visual realism without requiring explicit geometry and appearance representation. However, compared to meshes or radiance fields, existing multiview diffusion models offer limited appearance manipulation, particularly in terms of material, texture, or style. + In this paper, we present a lightweight adaptation technique for appearance transfer in multiview diffusion models. Our method learns to combine object identity from an input image with appearance cues rendered in a separate reference image, producing multi-view-consistent output that reflects the desired materials, textures, or styles. This allows explicit specification of appearance parameters at generation time while preserving the underlying object geometry and view coherence. We leverage three diffusion denoising processes responsible for generating the original object, the reference, and the target images, and perform reverse sampling to aggregate a small subset of layer-wise self-attention features from the object and the reference to influence the target generation. Our method requires only a few training examples to introduce appearance awareness to pretrained multiview models. The experiments show that our method provides a simple yet effective way toward multiview generation with diverse appearance, advocating the adoption of implicit generative 3D representations in practice. 
+ oai:arXiv.org:2512.09617v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Yijia Guo, Tong Hu, Zhiwei Li, Liwen Hu, Keming Qian, Xitong Lin, Shengbo Chen, Tiejun Huang, Lei Ma + http://creativecommons.org/publicdomain/zero/1.0/ + Hubert Kompanowski, Varun Jampani, Aaryaman Vasishta, Binh-Son Hua - Developing Distance-Aware Uncertainty Quantification Methods in Physics-Guided Neural Networks for Reliable Bearing Health Prediction - https://arxiv.org/abs/2512.08499 - arXiv:2512.08499v1 Announce Type: new -Abstract: Accurate and uncertainty-aware degradation estimation is essential for predictive maintenance in safety-critical systems like rotating machinery with rolling-element bearings. Many existing uncertainty methods lack confidence calibration, are costly to run, are not distance-aware, and fail to generalize under out-of-distribution data. We introduce two distance-aware uncertainty methods for deterministic physics-guided neural networks: PG-SNGP, based on Spectral Normalization Gaussian Process, and PG-SNER, based on Deep Evidential Regression. We apply spectral normalization to the hidden layers so the network preserves distances from input to latent space. PG-SNGP replaces the final dense layer with a Gaussian Process layer for distance-sensitive uncertainty, while PG-SNER outputs Normal Inverse Gamma parameters to model uncertainty in a coherent probabilistic form. We assess performance using standard accuracy metrics and a new distance-aware metric based on the Pearson Correlation Coefficient, which measures how well predicted uncertainty tracks the distance between test and training samples. We also design a dynamic weighting scheme in the loss to balance data fidelity and physical consistency. We test our methods on rolling-element bearing degradation using the PRONOSTIA dataset and compare them with Monte Carlo and Deep Ensemble PGNNs. 
Results show that PG-SNGP and PG-SNER improve prediction accuracy, generalize reliably under OOD conditions, and remain robust to adversarial attacks and noise. - oai:arXiv.org:2512.08499v1 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + GLaD: Geometric Latent Distillation for Vision-Language-Action Models + https://arxiv.org/abs/2512.09619 + arXiv:2512.09619v1 Announce Type: new +Abstract: Most existing Vision-Language-Action (VLA) models rely primarily on RGB information, while ignoring geometric cues crucial for spatial reasoning and manipulation. In this work, we introduce GLaD, a geometry-aware VLA framework that incorporates 3D geometric priors during pretraining through knowledge distillation. Rather than distilling geometric features solely into the vision encoder, we align the LLM's hidden states corresponding to visual tokens with features from a frozen geometry-aware vision transformer (VGGT), ensuring that geometric understanding is deeply integrated into the multimodal representations that drive action prediction. Pretrained on the Bridge dataset with this geometry distillation mechanism, GLaD achieves 94.1% average success rate across four LIBERO task suites, outperforming UniVLA (92.5%) which uses identical pretraining data. These results validate that geometry-aware pretraining enhances spatial reasoning and policy generalization without requiring explicit depth sensors or 3D annotations. 
+ oai:arXiv.org:2512.09619v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Waleed Razzaq, Yun-Bo Zhao + Minghao Guo, Meng Cao, Jiachen Tao, Rongtao Xu, Yan Yan, Xiaodan Liang, Ivan Laptev, Xiaojun Chang - Learning to Control Physically-simulated 3D Characters via Generating and Mimicking 2D Motions - https://arxiv.org/abs/2512.08500 - arXiv:2512.08500v1 Announce Type: new -Abstract: Video data is more cost-effective than motion capture data for learning 3D character motion controllers, yet synthesizing realistic and diverse behaviors directly from videos remains challenging. Previous approaches typically rely on off-the-shelf motion reconstruction techniques to obtain 3D trajectories for physics-based imitation. These reconstruction methods struggle with generalizability, as they either require 3D training data (potentially scarce) or fail to produce physically plausible poses, hindering their application to challenging scenarios like human-object interaction (HOI) or non-human characters. We tackle this challenge by introducing Mimic2DM, a novel motion imitation framework that learns the control policy directly and solely from widely available 2D keypoint trajectories extracted from videos. By minimizing the reprojection error, we train a general single-view 2D motion tracking policy capable of following arbitrary 2D reference motions in physics simulation, using only 2D motion data. The policy, when trained on diverse 2D motions captured from different or slightly different viewpoints, can further acquire 3D motion tracking capabilities by aggregating multiple views. Moreover, we develop a transformer-based autoregressive 2D motion generator and integrate it into a hierarchical control framework, where the generator produces high-quality 2D reference trajectories to guide the tracking policy. 
We show that the proposed approach is versatile and can effectively learn to synthesize physically plausible and diverse motions across a range of domains, including dancing, soccer dribbling, and animal movements, without any reliance on explicit 3D motion data. Project Website: https://jiann-li.github.io/mimic2dm/ - oai:arXiv.org:2512.08500v1 - cs.GR - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Semantic-Aware Cooperative Communication and Computation Framework in Vehicular Networks + https://arxiv.org/abs/2512.09621 + arXiv:2512.09621v1 Announce Type: new +Abstract: Semantic Communication (SC) combined with Vehicular edge computing (VEC) provides an efficient edge task processing paradigm for Internet of Vehicles (IoV). Focusing on highway scenarios, this paper proposes a Tripartite Cooperative Semantic Communication (TCSC) framework, which enables Vehicle Users (VUs) to perform semantic task offloading via Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications. Considering task latency and the number of semantic symbols, the framework constructs a Mixed-Integer Nonlinear Programming (MINLP) problem, which is transformed into two subproblems. First, we innovatively propose a multi-agent proximal policy optimization task offloading optimization method based on parametric distribution noise (MAPPO-PDN) to solve the optimization problem of the number of semantic symbols; second, linear programming (LP) is used to solve offloading ratio. Simulations show that performance of this scheme is superior to that of other algorithms. 
+ oai:arXiv.org:2512.09621v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jianan Li, Xiao Chen, Tao Huang, Tien-Tsin Wong + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jingbo Zhang, Maoxin Ji, Qiong Wu, Pingyi Fan, Kezhi Wang, Wen Chen - Disrupting Hierarchical Reasoning: Adversarial Protection for Geographic Privacy in Multimodal Reasoning Models - https://arxiv.org/abs/2512.08503 - arXiv:2512.08503v1 Announce Type: new -Abstract: Multi-modal large reasoning models (MLRMs) pose significant privacy risks by inferring precise geographic locations from personal images through hierarchical chain-of-thought reasoning. Existing privacy protection techniques, primarily designed for perception-based models, prove ineffective against MLRMs' sophisticated multi-step reasoning processes that analyze environmental cues. We introduce \textbf{ReasonBreak}, a novel adversarial framework specifically designed to disrupt hierarchical reasoning in MLRMs through concept-aware perturbations. Our approach is founded on the key insight that effective disruption of geographic reasoning requires perturbations aligned with conceptual hierarchies rather than uniform noise. ReasonBreak strategically targets critical conceptual dependencies within reasoning chains, generating perturbations that invalidate specific inference steps and cascade through subsequent reasoning stages. To facilitate this approach, we contribute \textbf{GeoPrivacy-6K}, a comprehensive dataset comprising 6,341 ultra-high-resolution images ($\geq$2K) with hierarchical concept annotations. Extensive evaluation across seven state-of-the-art MLRMs (including GPT-o3, GPT-5, Gemini 2.5 Pro) demonstrates ReasonBreak's superior effectiveness, achieving a 14.4\% improvement in tract-level protection (33.8\% vs 19.4\%) and nearly doubling block-level protection (33.5\% vs 16.8\%). 
This work establishes a new paradigm for privacy protection against reasoning-based threats.
 - oai:arXiv.org:2512.08503v1
 - cs.CV
 cs.AI
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Jiaming Zhang, Che Wang, Yang Cao, Longtao Huang, Wei Yang Bryan Lim
 
 
 + CUBE: A Cardinality Estimator Based on Neural CDF
 + https://arxiv.org/abs/2512.09622
 + arXiv:2512.09622v1 Announce Type: new
+Abstract: Modern database optimizers rely on cardinality estimators, whose accuracy directly affects the optimizer's ability to choose an optimal execution plan. Recent work on data-driven methods has leveraged probabilistic models to achieve higher estimation accuracy, but these approaches cannot guarantee low inference latency at the same time and neglect scalability. As data dimensionality grows, optimization time can even exceed actual query execution time. Furthermore, inference with probabilistic models via sampling or integration procedures yields unpredictable estimation results and violates stability, which brings unstable query execution performance and makes database tuning hard for database users. In this paper, we propose a novel approach to cardinality estimation based on the cumulative distribution function (CDF), which calculates range query cardinality without sampling or integration, ensuring accurate and predictable estimation results. With inference acceleration by merging calculations, we can achieve fast and nearly constant inference speed while maintaining high accuracy, even as dimensionality increases, which is over 10x faster than current state-of-the-art data-driven cardinality estimators. This demonstrates its excellent dimensional scalability, making it well-suited for real-world database applications. 
+ oai:arXiv.org:2512.09622v1 + cs.DB + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jiaming Zhang, Che Wang, Yang Cao, Longtao Huang, Wei Yang Bryan Lim + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Xiao Yan, Tiezheng Nie, Boyang Fang, Derong Shen, Kou Yue, Yu Ge - Beyond the Noise: Aligning Prompts with Latent Representations in Diffusion Models - https://arxiv.org/abs/2512.08505 - arXiv:2512.08505v1 Announce Type: new -Abstract: Conditional diffusion models rely on language-to-image alignment methods to steer the generation towards semantically accurate outputs. Despite the success of this architecture, misalignment and hallucinations remain common issues and require automatic misalignment detection tools to improve quality, for example by applying them in a Best-of-N (BoN) post-generation setting. Unfortunately, measuring the alignment after the generation is an expensive step since we need to wait for the overall generation to finish to determine prompt adherence. In contrast, this work hypothesizes that text/image misalignments can be detected early in the denoising process, enabling real-time alignment assessment without waiting for the complete generation. In particular, we propose NoisyCLIP a method that measures semantic alignment in the noisy latent space. This work is the first to explore and benchmark prompt-to-latent misalignment detection during image generation using dual encoders in the reverse diffusion process. We evaluate NoisyCLIP qualitatively and quantitatively and find it reduces computational cost by 50% while achieving 98% of CLIP alignment performance in BoN settings. This approach enables real-time alignment assessment during generation, reducing costs without sacrificing semantic fidelity. 
- oai:arXiv.org:2512.08505v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + RIS-Assisted Coordinated Multi-Point ISAC for Low-Altitude Sensing Coverage + https://arxiv.org/abs/2512.09625 + arXiv:2512.09625v1 Announce Type: new +Abstract: The low-altitude economy (LAE) has emerged and developed in various fields, which has gained considerable interest. To ensure the security of LAE, it is essential to establish a proper sensing coverage scheme for monitoring the unauthorized targets. Introducing integrated sensing and communication (ISAC) into cellular networks is a promising solution that enables coordinated multiple base stations (BSs) to significantly enhance sensing performance and extend coverage. Meanwhile, deploying a reconfigurable intelligent surface (RIS) can mitigate signal blockages between BSs and low-altitude targets in urban areas. Therefore, this paper focuses on the low-altitude sensing coverage problem in RIS-assisted coordinated multi-point ISAC networks, where a RIS is employed to enable multiple BSs to sense a prescribed region while serving multiple communication users. A joint beamforming and phase shifts design is proposed to minimize the total transmit power while guaranteeing sensing signal-to-noise ratio and communication spectral efficiency. To tackle this non-convex optimization problem, an efficient algorithm is proposed by using the alternating optimization and semi-definite relaxation techniques. Numerical results demonstrate the superiority of our proposed scheme over the baseline schemes. 
+ oai:arXiv.org:2512.09625v1
 eess.SY
 cs.SY
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Ying Zhang, Zeqi Hao, Tingting Zhang
 
 
 - OCCDiff: Occupancy Diffusion Model for High-Fidelity 3D Building Reconstruction from Noisy Point Clouds
 - https://arxiv.org/abs/2512.08506
 - arXiv:2512.08506v1 Announce Type: new
-Abstract: A major challenge in reconstructing buildings from LiDAR point clouds lies in accurately capturing building surfaces under varying point densities and noise interference. To flexibly gather high-quality 3D profiles of the building at diverse resolutions, we propose OCCDiff, which applies latent diffusion in the occupancy function space. Our OCCDiff combines a latent diffusion process with a function autoencoder architecture to generate continuous occupancy functions evaluable at arbitrary locations. Moreover, a point encoder is proposed to provide condition features for diffusion learning, constrain the final occupancy prediction of the occupancy decoder, and insert multi-modal features into the latent encoder for latent generation. To further enhance the model performance, a multi-task training strategy is employed, ensuring that the point encoder learns diverse and robust feature representations. Empirical results show that our method generates physically consistent samples with high fidelity to the target distribution and exhibits robustness to noisy data.
 - oai:arXiv.org:2512.08506v1
 cs.CV
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-sa/4.0/
 Jialu Sui, Rui Liu, Hongsheng Zhang
 
 
 + Beyond Sequences: A Benchmark for Atomic Hand-Object Interaction Using a Static RNN Encoder
 + https://arxiv.org/abs/2512.09626
 + arXiv:2512.09626v1 Announce Type: new
+Abstract: Reliably predicting human intent in hand-object interactions is an open challenge for computer vision. Our research concentrates on a fundamental sub-problem: the fine-grained classification of atomic interaction states, namely 'approaching', 'grabbing', and 'holding'. 
To this end, we introduce a structured data engineering process that converts raw videos from the MANIAC dataset into 27,476 statistical-kinematic feature vectors. Each vector encapsulates relational and dynamic properties from a short temporal window of motion. Our initial hypothesis posited that sequential modeling would be critical, leading us to compare static classifiers (MLPs) against temporal models (RNNs). Counter-intuitively, the key discovery occurred when we set the sequence length of a Bidirectional RNN to one (seq_length=1). This modification converted the network's function, compelling it to act as a high-capacity static feature encoder. This architectural change directly led to a significant accuracy improvement, culminating in a final score of 97.60%. Of particular note, our optimized model successfully overcame the most challenging transitional class, 'grabbing', by achieving a balanced F1-score of 0.90. These findings provide a new benchmark for low-level hand-object interaction recognition using structured, interpretable features and lightweight architectures. + oai:arXiv.org:2512.09626v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Jialu Sui, Rui Liu, Hongsheng Zhang + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yousef Azizi Movahed, Fatemeh Ziaeetabar - Thinking with Images via Self-Calling Agent - https://arxiv.org/abs/2512.08511 - arXiv:2512.08511v1 Announce Type: new -Abstract: Thinking-with-images paradigms have showcased remarkable visual reasoning capability by integrating visual information as dynamic elements into the Chain-of-Thought (CoT). However, optimizing interleaved multimodal CoT (iMCoT) through reinforcement learning remains challenging, as it relies on scarce high-quality reasoning data. 
In this study, we propose Self-Calling Chain-of-Thought (sCoT), a novel visual reasoning paradigm that reformulates iMCoT as a language-only CoT with self-calling. Specifically, a main agent decomposes the complex visual reasoning task to atomic subtasks and invokes its virtual replicas, i.e. parameter-sharing subagents, to solve them in isolated context. sCoT enjoys substantial training effectiveness and efficiency, as it requires no explicit interleaving between modalities. sCoT employs group-relative policy optimization to reinforce effective reasoning behavior to enhance optimization. Experiments on HR-Bench 4K show that sCoT improves the overall reasoning performance by up to $1.9\%$ with $\sim 75\%$ fewer GPU hours compared to strong baseline approaches. Code is available at https://github.com/YWenxi/think-with-images-through-self-calling. - oai:arXiv.org:2512.08511v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + LogICL: Distilling LLM Reasoning to Bridge the Semantic Gap in Cross-Domain Log Anomaly Detection + https://arxiv.org/abs/2512.09627 + arXiv:2512.09627v1 Announce Type: new +Abstract: Effective log anomaly detection is critical to sustaining reliability in large-scale IT infrastructures. Transformer-based models require substantial resources and labeled data, exacerbating the cold-start problem in target domains where logs are scarce. Existing cross-domain methods leverage source logs but struggle with generalization due to reliance on surface lexical similarity, failing to capture latent semantic equivalence amid structural divergences. To address this, we propose LogICL, a framework distilling Large Language Model (LLM) reasoning into a lightweight encoder for cross-domain anomaly detection. During training, LogICL constructs a delta matrix measuring the utility of demonstrations selected via Maximal Marginal Relevance relative to zero-shot inference. 
The encoder is optimized via a multi-objective loss comprising an ICL-Guided term that aligns representations based on reasoning assistance utility, maximum mean discrepancy for domain alignment, and supervised contrastive loss. At inference, the optimized encoder retrieves reasoning-aware demonstrations using semantic similarity and delta scores, enabling frozen-LLM in-context learning with Chain-of-Thought for accurate and interpretable detection. Experiments on few-shot and zero-shot cross-domain benchmarks confirm LogICL achieves state-of-the-art performance across heterogeneous systems. Further analysis via visualizations and case studies confirms LogICL bridges the semantic gap beyond surface lexical similarity, effectively capturing latent semantic equivalence for rapid deployment. + oai:arXiv.org:2512.09627v1 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Wenxi Yang, Yuzhong Zhao, Fang Wan, Qixiang Ye + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jingwei Ye, Zhi Wang, Chenbin Su, Jieshuai Yang, Jiayi Ding, Chunbo Liu, Ge Chu - A Lightweight Transfer Learning-Based State-of-Health Monitoring with Application to Lithium-ion Batteries in Unmanned Air Vehicles - https://arxiv.org/abs/2512.08512 - arXiv:2512.08512v1 Announce Type: new -Abstract: Accurate and rapid state-of-health (SOH) monitoring plays an important role in indicating energy information for lithium-ion battery-powered portable mobile devices. To confront their variable working conditions, transfer learning (TL) emerges as a promising technique for leveraging knowledge from data-rich source working conditions, significantly reducing the training data required for SOH monitoring from target working conditions. However, traditional TL-based SOH monitoring is infeasible when applied in portable mobile devices since substantial computational resources are consumed during the TL stage and unexpectedly reduce the working endurance. 
To address these challenges, this paper proposes a lightweight TL-based SOH monitoring approach with constructive incremental transfer learning (CITL). First, taking advantage of the unlabeled data in the target domain, a semi-supervised TL mechanism is proposed to minimize the monitoring residual in a constructive way, through iteratively adding network nodes in the CITL. Second, the cross-domain learning ability of node parameters for CITL is comprehensively guaranteed through structural risk minimization, transfer mismatching minimization, and manifold consistency maximization. Moreover, the convergence analysis of the CITL is given, theoretically guaranteeing the efficacy of TL performance and network compactness. Finally, the proposed approach is verified through extensive experiments with a realistic unmanned air vehicles (UAV) battery dataset collected from dozens of flight missions. Specifically, the CITL outperforms SS-TCA, MMD-LSTM-DA, DDAN, BO-CNN-TL, and AS$^3$LSTM, in SOH estimation by 83.73%, 61.15%, 28.24%, 87.70%, and 57.34%, respectively, as evaluated using the index root mean square error. - oai:arXiv.org:2512.08512v1 + An End-to-end Planning Framework with Agentic LLMs and PDDL + https://arxiv.org/abs/2512.09629 + arXiv:2512.09629v1 Announce Type: new +Abstract: We present an end-to-end framework for planning supported by verifiers. An orchestrator receives a human specification written in natural language and converts it into a PDDL (Planning Domain Definition Language) model, where the domain and problem are iteratively refined by sub-modules (agents) to address common planning requirements, such as time constraints and optimality, as well as ambiguities and contradictions that may exist in the human specification. The validated domain and problem are then passed to an external planning engine to generate a plan. 
The orchestrator and agents are powered by Large Language Models (LLMs) and require no human intervention at any stage of the process. Finally, a module translates the final plan back into natural language to improve human readability while maintaining the correctness of each step. We demonstrate the flexibility and effectiveness of our framework across various domains and tasks, including the Google NaturalPlan benchmark and PlanBench, as well as planning problems like Blocksworld and the Tower of Hanoi (where LLMs are known to struggle even with small instances). Our framework can be integrated with any PDDL planning engine and validator (such as Fast Downward, LPG, POPF, VAL, and uVAL, which we have tested) and represents a significant step toward end-to-end planning aided by LLMs. + oai:arXiv.org:2512.09629v1 cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - 10.1109/TII.2025.3631012 - Jiang Liu, Yan Qin, Wei Dai, Chau Yuen + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Emanuele La Malfa, Ping Zhu, Samuele Marro, Sara Bernardini, Michael Wooldridge - SensHRPS: Sensing Comfortable Human-Robot Proxemics and Personal Space With Eye-Tracking - https://arxiv.org/abs/2512.08518 - arXiv:2512.08518v1 Announce Type: new -Abstract: Social robots must adjust to human proxemic norms to ensure user comfort and engagement. While prior research demonstrates that eye-tracking features reliably estimate comfort in human-human interactions, their applicability to interactions with humanoid robots remains unexplored. In this study, we investigate user comfort with the robot "Ameca" across four experimentally controlled distances (0.5 m to 2.0 m) using mobile eye-tracking and subjective reporting (N=19). We evaluate multiple machine learning and deep learning models to estimate comfort based on gaze features. 
Contrary to previous human-human studies where Transformer models excelled, a Decision Tree classifier achieved the highest performance (F1-score = 0.73), with minimum pupil diameter identified as the most critical predictor. These findings suggest that physiological comfort thresholds in human-robot interaction differ from human-human dynamics and can be effectively modeled using interpretable logic. - oai:arXiv.org:2512.08518v1 - cs.RO - cs.AI - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Benchmarking SAM2-based Trackers on FMOX + https://arxiv.org/abs/2512.09633 + arXiv:2512.09633v1 Announce Type: new +Abstract: Several object tracking pipelines extending Segment Anything Model 2 (SAM2) have been proposed in the past year, where the approach is to follow and segment the object from a single exemplar template provided by the user on a initialization frame. We propose to benchmark these high performing trackers (SAM2, EfficientTAM, DAM4SAM and SAMURAI) on datasets containing fast moving objects (FMO) specifically designed to be challenging for tracking approaches. The goal is to understand better current limitations in state-of-the-art trackers by providing more detailed insights on the behavior of these trackers. We show that overall the trackers DAM4SAM and SAMURAI perform well on more challenging sequences. 
+ oai:arXiv.org:2512.09633v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Nadezhda Kushina (RPTU Kaiserslautern-Landau), Ko Watanabe (DFKI GmbH), Aarthi Kannan (RPTU Kaiserslautern-Landau), Ashita Ashok (RPTU Kaiserslautern-Landau), Andreas Dengel (DFKI GmbH), Karsten Berns (RPTU Kaiserslautern-Landau) + 33rd International Conference on Artificial Intelligence and Cognitive Science (AICS 2025), December, 2025, Dublin, Ireland + Senem Aktas, Charles Markham, John McDonald, Rozenn Dahyot - Beyond Real Weights: Hypercomplex Representations for Stable Quantization - https://arxiv.org/abs/2512.08524 - arXiv:2512.08524v1 Announce Type: new -Abstract: Multimodal language models (MLLMs) require large parameter capacity to align high-dimensional visual features with linguistic representations, making them computationally heavy and difficult to deploy efficiently. We introduce a progressive reparameterization strategy that compresses these models by gradually replacing dense feed-forward network blocks with compact Parameterized Hypercomplex Multiplication (PHM) layers. A residual interpolation schedule, together with lightweight reconstruction and knowledge distillation losses, ensures that the PHM modules inherit the functional behavior of their dense counterparts during training. This transition yields substantial parameter and FLOP reductions while preserving strong multimodal alignment, enabling faster inference without degrading output quality. We evaluate the approach on multiple vision-language models (VLMs). Our method maintains performance comparable to the base models while delivering significant reductions in model size and inference latency. Progressive PHM substitution thus offers an architecture-compatible path toward more efficient multimodal reasoning and complements existing low-bit quantization techniques. 
- oai:arXiv.org:2512.08524v1
cs.CV
cs.CL
Wed, 10 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Jawad Ibn Ahad, Maisha Rahman, Amrijit Biswas, Muhammad Rafsan Kabir, Robin Krambroeckers, Sifat Momen, Nabeel Mohammed, Shafin Rahman


+ Creation of the Estonian Subjectivity Dataset: Assessing the Degree of Subjectivity on a Scale
+ https://arxiv.org/abs/2512.09634
+ arXiv:2512.09634v1 Announce Type: new
Abstract: This article presents the creation of an Estonian-language dataset for document-level subjectivity, analyzes the resulting annotations, and reports an initial experiment in automatic subjectivity analysis using a large language model (LLM). The dataset comprises 1,000 documents (300 journalistic articles and 700 randomly selected web texts), each rated for subjectivity on a continuous scale from 0 (fully objective) to 100 (fully subjective) by four annotators. As the inter-annotator correlations were moderate, with some texts receiving scores at opposite ends of the scale, a subset of texts with the most divergent scores was re-annotated, which improved the inter-annotator correlation. In addition to human annotations, the dataset includes scores generated by GPT-5 as an experiment on annotation automation. These scores were similar to those of the human annotators; however, several differences emerged, suggesting that while LLM-based automatic subjectivity scoring is feasible, it is not an interchangeable alternative to human annotation, and its suitability depends on the intended application. 
+ oai:arXiv.org:2512.09634v1 cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jawad Ibn Ahad, Maisha Rahman, Amrijit Biswas, Muhammad Rafsan Kabir, Robin Krambroeckers, Sifat Momen, Nabeel Mohammed, Shafin Rahman + Karl Gustav Gailit, Kadri Muischnek, Kairit Sirts - Analyzing Deviations from Monotonic Trends through Database Repair - https://arxiv.org/abs/2512.08526 - arXiv:2512.08526v1 Announce Type: new -Abstract: Datasets often exhibit violations of expected monotonic trends - for example, higher education level correlating with higher average salary, newer homes being more expensive, or diabetes prevalence increasing with age. We address the problem of quantifying how far a dataset deviates from such trends. To this end, we introduce Aggregate Order Dependencies (AODs), an aggregation-centric extension of the previously studied order dependencies. An AOD specifies that the aggregated value of a target attribute (e.g., mean salary) should monotonically increase or decrease with the grouping attribute (e.g., education level). - We formulate the AOD repair problem as finding the smallest set of tuples to delete from a table so that the given AOD is satisfied. We analyze the computational complexity of this problem and propose a general algorithmic template for solving it. We instantiate the template for common aggregation functions, introduce optimization techniques that substantially improve the runtime of the template instances, and develop efficient heuristic alternatives. Our experimental study, carried out on both real-world and synthetic datasets, demonstrates the practical efficiency of the algorithms and provides insight into the performance of the heuristics. We also present case studies that uncover and explain unexpected AOD violations using our framework. 
- oai:arXiv.org:2512.08526v1
cs.DB
Wed, 10 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shunit Agmon, Jonathan Gal, Amir Gilad, Ester Livshits, Or Mutay, Brit Youngmann, Benny Kimelfeld


+ MentraSuite: Post-Training Large Language Models for Mental Health Reasoning and Assessment
+ https://arxiv.org/abs/2512.09636
+ arXiv:2512.09636v1 Announce Type: new
Abstract: Mental health disorders affect hundreds of millions globally, and the Web now serves as a primary medium for accessing support, information, and assessment. Large language models (LLMs) offer scalable and accessible assistance, yet their deployment in mental-health settings remains risky when their reasoning is incomplete, inconsistent, or ungrounded. Existing psychological LLMs emphasize emotional understanding or knowledge recall but overlook the step-wise, clinically aligned reasoning required for appraisal, diagnosis, intervention planning, abstraction, and verification. To address these issues, we introduce MentraSuite, a unified framework for advancing reliable mental-health reasoning. We propose MentraBench, a comprehensive benchmark spanning five core reasoning aspects, six tasks, and 13 datasets, evaluating both task performance and reasoning quality across five dimensions: conciseness, coherence, hallucination avoidance, task understanding, and internal consistency. We further present Mindora, a post-trained model optimized through a hybrid SFT-RL framework with an inconsistency-detection reward to enforce faithful and coherent reasoning. To support training, we construct high-quality trajectories using a novel reasoning trajectory generation strategy that strategically filters difficult samples and applies a structured, consistency-oriented rewriting process to produce concise, readable, and well-balanced trajectories. Across 20 evaluated LLMs, Mindora achieves the highest average performance on MentraBench and shows remarkable performance in reasoning reliability, demonstrating its effectiveness for complex mental-health scenarios. 
+ oai:arXiv.org:2512.09636v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shunit Agmon, Jonathan Gal, Amir Gilad, Ester Livshits, Or Mutay, Brit Youngmann, Benny Kimelfeld + Mengxi Xiao, Kailai Yang, Pengde Zhao, Enze Zhang, Ziyan Kuang, Zhiwei Liu, Weiguang Han, Shu Liao, Lianting Huang, Jinpeng Hu, Min Peng, Qianqian Xie, Sophia Ananiadou - MVP: Multiple View Prediction Improves GUI Grounding - https://arxiv.org/abs/2512.08529 - arXiv:2512.08529v1 Announce Type: new -Abstract: GUI grounding, which translates natural language instructions into precise pixel coordinates, is essential for developing practical GUI agents. However, we observe that existing grounding models exhibit significant coordinate prediction instability, minor visual perturbations (e.g. cropping a few pixels) can drastically alter predictions, flipping results between correct and incorrect. This instability severely undermines model performance, especially for samples with high-resolution and small UI elements. To address this issue, we propose Multi-View Prediction (MVP), a training-free framework that enhances grounding performance through multi-view inference. Our key insight is that while single-view predictions may be unstable, aggregating predictions from multiple carefully cropped views can effectively distinguish correct coordinates from outliers. MVP comprises two components: (1) Attention-Guided View Proposal, which derives diverse views guided by instruction-to-image attention scores, and (2) Multi-Coordinates Clustering, which ensembles predictions by selecting the centroid of the densest spatial cluster. Extensive experiments demonstrate MVP's effectiveness across various models and benchmarks. Notably, on ScreenSpot-Pro, MVP boosts UI-TARS-1.5-7B to 56.1%, GTA1-7B to 61.7%, Qwen3VL-8B-Instruct to 65.3%, and Qwen3VL-32B-Instruct to 74.0%. The code is available at https://github.com/ZJUSCL/MVP. 
- oai:arXiv.org:2512.08529v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Inexact Gauss Seidel and Coarse Solvers for AMG and s-step CG + https://arxiv.org/abs/2512.09642 + arXiv:2512.09642v1 Announce Type: new +Abstract: Communication-avoiding Krylov methods require solving small dense Gram systems at each outer iteration. We present a low-synchronization approach based on Forward Gauss--Seidel (FGS), which exploits the structure of Gram matrices arising from Chebyshev polynomial bases. We show that a single FGS sweep is mathematically equivalent to Modified Gram--Schmidt (MGS) orthogonalization in the $A$-norm and provide corresponding backward error bounds. For weak scaling on AMD MI-series GPUs, we demonstrate that 20--30 FGS iterations preserve scalability up to 64 GPUs with problem sizes exceeding 700 million unknowns. We further extend this approach to Algebraic MultiGrid (AMG) coarse-grid solves, removing the need to assemble or factor dense coarse operators + oai:arXiv.org:2512.09642v1 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yunzhu Zhang, Zeyu Pan, Zhengwen Zeng, Shuheng Shen, Changhua Meng, Linchao Zhu + Stephen Thomas, Pasqua D'Ambra - PaintFlow: A Unified Framework for Interactive Oil Paintings Editing and Generation - https://arxiv.org/abs/2512.08534 - arXiv:2512.08534v1 Announce Type: new -Abstract: Oil painting, as a high-level medium that blends human abstract thinking with artistic expression, poses substantial challenges for digital generation and editing due to its intricate brushstroke dynamics and stylized characteristics. Existing generation and editing techniques are often constrained by the distribution of training data and primarily focus on modifying real photographs. In this work, we introduce a unified multimodal framework for oil painting generation and editing. 
The proposed system allows users to incorporate reference images for precise semantic control, hand-drawn sketches for spatial structure alignment, and natural language prompts for high-level semantic guidance, while consistently maintaining a unified painting style across all outputs. Our method achieves interactive oil painting creation through three crucial technical advancements. First, we enhance the training stage with spatial alignment and semantic enhancement conditioning strategy, which map masks and sketches into spatial constraints, and encode contextual embedding from reference images and text into feature constraints, enabling object-level semantic alignment. Second, to overcome data scarcity, we propose a self-supervised style transfer pipeline based on Stroke-Based Rendering (SBR), which simulates the inpainting dynamics of oil painting restoration, converting real images into stylized oil paintings with preserved brushstroke textures to construct a large-scale paired training dataset. Finally, during inference, we integrate features using the AdaIN operator to ensure stylistic consistency. Extensive experiments demonstrate that our interactive system enables fine-grained editing while preserving the artistic qualities of oil paintings, achieving an unprecedented level of imagination realization in stylized oil paintings generation and editing. - oai:arXiv.org:2512.08534v1 + Kaapana: A Comprehensive Open-Source Platform for Integrating AI in Medical Imaging Research Environments + https://arxiv.org/abs/2512.09644 + arXiv:2512.09644v1 Announce Type: new +Abstract: Developing generalizable AI for medical imaging requires both access to large, multi-center datasets and standardized, reproducible tooling within research environments. 
However, leveraging real-world imaging data in clinical research environments is still hampered by strict regulatory constraints, fragmented software infrastructure, and the challenges inherent in conducting large-cohort multi-centre studies. This leads to projects that rely on ad-hoc toolchains that are hard to reproduce, difficult to scale beyond a single institution, and poorly suited for collaboration between clinicians and data scientists. We present Kaapana, a comprehensive open-source platform for medical imaging research that is designed to bridge this gap. Rather than building single-use, site-specific tooling, Kaapana provides a modular, extensible framework that unifies data ingestion, cohort curation, processing workflows, and result inspection under a common user interface. By bringing the algorithm to the data, it enables institutions to keep control over their sensitive data while still participating in distributed experimentation and model development. By integrating flexible workflow orchestration with user-facing applications for researchers, Kaapana reduces technical overhead, improves reproducibility, and enables large-scale, collaborative, multi-centre imaging studies. We describe the core concepts of the platform and illustrate how they can support diverse use cases, from local prototyping to nationwide research networks. 
The open-source codebase is available at https://github.com/kaapana/kaapana + oai:arXiv.org:2512.09644v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Zhangli Hu, Ye Chen, Jiajun Yao, Bingbing Ni + \"Unal Ak\"unal, Markus Bujotzek, Stefan Denner, Benjamin Hamm, Klaus Kades, Philipp Schader, Jonas Scherer, Marco Nolden, Peter Neher, Ralf Floca, Klaus Maier-Hein - Photo3D: Advancing Photorealistic 3D Generation through Structure-Aligned Detail Enhancement - https://arxiv.org/abs/2512.08535 - arXiv:2512.08535v1 Announce Type: new -Abstract: Although recent 3D-native generators have made great progress in synthesizing reliable geometry, they still fall short in achieving realistic appearances. A key obstacle lies in the lack of diverse and high-quality real-world 3D assets with rich texture details, since capturing such data is intrinsically difficult due to the diverse scales of scenes, non-rigid motions of objects, and the limited precision of 3D scanners. We introduce Photo3D, a framework for advancing photorealistic 3D generation, which is driven by the image data generated by the GPT-4o-Image model. Considering that the generated images can distort 3D structures due to their lack of multi-view consistency, we design a structure-aligned multi-view synthesis pipeline and construct a detail-enhanced multi-view dataset paired with 3D geometry. Building on it, we present a realistic detail enhancement scheme that leverages perceptual feature adaptation and semantic structure matching to enforce appearance consistency with realistic details while preserving the structural consistency with the 3D-native geometry. Our scheme is general to different 3D-native generators, and we present dedicated training strategies to facilitate the optimization of geometry-texture coupled and decoupled 3D-native generation paradigms. 
Experiments demonstrate that Photo3D generalizes well across diverse 3D-native generation paradigms and achieves state-of-the-art photorealistic 3D generation performance. - oai:arXiv.org:2512.08535v1 + VHOI: Controllable Video Generation of Human-Object Interactions from Sparse Trajectories via Motion Densification + https://arxiv.org/abs/2512.09646 + arXiv:2512.09646v1 Announce Type: new +Abstract: Synthesizing realistic human-object interactions (HOI) in video is challenging due to the complex, instance-specific interaction dynamics of both humans and objects. Incorporating controllability in video generation further adds to the complexity. Existing controllable video generation approaches face a trade-off: sparse controls like keypoint trajectories are easy to specify but lack instance-awareness, while dense signals such as optical flow, depths or 3D meshes are informative but costly to obtain. We propose VHOI, a two-stage framework that first densifies sparse trajectories into HOI mask sequences, and then fine-tunes a video diffusion model conditioned on these dense masks. We introduce a novel HOI-aware motion representation that uses color encodings to distinguish not only human and object motion, but also body-part-specific dynamics. This design incorporates a human prior into the conditioning signal and strengthens the model's ability to understand and generate realistic HOI dynamics. Experiments demonstrate state-of-the-art results in controllable HOI video generation. VHOI is not limited to interaction-only scenarios and can also generate full human navigation leading up to object interactions in an end-to-end manner. Project page: https://vcai.mpi-inf.mpg.de/projects/vhoi/. 
+ oai:arXiv.org:2512.09646v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Xinyue Liang, Zhinyuan Ma, Lingchen Sun, Yanjun Guo, Lei Zhang + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Wanyue Zhang, Lin Geng Foo, Thabo Beeler, Rishabh Dabral, Christian Theobalt - Principles2Plan: LLM-Guided System for Operationalising Ethical Principles into Plans - https://arxiv.org/abs/2512.08536 - arXiv:2512.08536v1 Announce Type: new -Abstract: Ethical awareness is critical for robots operating in human environments, yet existing automated planning tools provide little support. Manually specifying ethical rules is labour-intensive and highly context-specific. We present Principles2Plan, an interactive research prototype demonstrating how a human and a Large Language Model (LLM) can collaborate to produce context-sensitive ethical rules and guide automated planning. A domain expert provides the planning domain, problem details, and relevant high-level principles such as beneficence and privacy. The system generates operationalisable ethical rules consistent with these principles, which the user can review, prioritise, and supply to a planner to produce ethically-informed plans. To our knowledge, no prior system supports users in generating principle-grounded rules for classical planning contexts. Principles2Plan showcases the potential of human-LLM collaboration for making ethical automated planning more practical and feasible. - oai:arXiv.org:2512.08536v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Membership and Dataset Inference Attacks on Large Audio Generative Models + https://arxiv.org/abs/2512.09654 + arXiv:2512.09654v1 Announce Type: new +Abstract: Generative audio models, based on diffusion and autoregressive architectures, have advanced rapidly in both quality and expressiveness. 
This progress, however, raises pressing copyright concerns, as such models are often trained on vast corpora of artistic and commercial works. A central question is whether one can reliably verify if an artist's material was included in training, thereby providing a means for copyright holders to protect their content. In this work, we investigate the feasibility of such verification through membership inference attacks (MIA) on open-source generative audio models, which attempt to determine whether a specific audio sample was part of the training set. Our empirical results show that membership inference alone is of limited effectiveness at scale, as the per-sample membership signal is weak for models trained on large and diverse datasets. However, artists and media owners typically hold collections of works rather than isolated samples. Building on prior work in the text and vision domains, we therefore focus on dataset inference (DI), which aggregates diverse membership evidence across multiple samples. We find that DI is successful in the audio domain, offering a more practical mechanism for assessing whether an artist's works contributed to model training. Our results suggest DI as a promising direction for copyright protection and dataset accountability in the era of large audio generative models. 
+ oai:arXiv.org:2512.09654v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tammy Zhong, Yang Song, Maurice Pagnucco + http://creativecommons.org/licenses/by/4.0/ + Jakub Proboszcz, Pawe{\l} Kochanski, Karol Korszun, Donato Crisostomi, Giorgio Strano, Emanuele Rodol\`a, Kamil Deja, Jan Dubinski - Fast-ARDiff: An Entropy-informed Acceleration Framework for Continuous Space Autoregressive Generation - https://arxiv.org/abs/2512.08537 - arXiv:2512.08537v1 Announce Type: new -Abstract: Autoregressive(AR)-diffusion hybrid paradigms combine AR's structured modeling with diffusion's photorealistic synthesis, yet suffer from high latency due to sequential AR generation and iterative denoising. In this work, we tackle this bottleneck and propose a unified AR-diffusion framework Fast-ARDiff that jointly optimizes both components, accelerating AR speculative decoding while simultaneously facilitating faster diffusion decoding. Specifically: (1) The entropy-informed speculative strategy encourages draft model to produce higher-entropy representations aligned with target model's entropy characteristics, mitigating entropy mismatch and high rejection rates caused by draft overconfidence. (2) For diffusion decoding, rather than treating it as an independent module, we integrate it into the same end-to-end framework using a dynamic scheduler that prioritizes AR optimization to guide the diffusion part in further steps. The diffusion part is optimized through a joint distillation framework combining trajectory and distribution matching, ensuring stable training and high-quality synthesis with extremely few steps. During inference, shallow feature entropy from AR module is used to pre-filter low-entropy drafts, avoiding redundant computation and improving latency. 
Fast-ARDiff achieves state-of-the-art acceleration across diverse models: on ImageNet 256$\times$256, TransDiff attains 4.3$\times$ lossless speedup, and NextStep-1 achieves 3$\times$ acceleration on text-conditioned generation. Code will be available at https://github.com/aSleepyTree/Fast-ARDiff. - oai:arXiv.org:2512.08537v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Binary and Non-Binary Self-Dual Sequences and Maximum Period Single-Track Gray Codes + https://arxiv.org/abs/2512.09655 + arXiv:2512.09655v1 Announce Type: new +Abstract: Binary self-dual sequences have been considered and analyzed throughout the years, and they were used for various applications. Motivated by a construction for single-track Gray codes, we examine the structure and recursive constructions for binary and non-binary self-dual sequences. The feedback shift registers that generate such sequences are discussed. The connections between these sequences and maximum period single-track codes are discussed. Maximum period non-binary single-track Gray codes of length $p^t$ and period $p^{p^t}$ are constructed. These are the first infinite families of maximum period codes presented in the literature. + oai:arXiv.org:2512.09655v1 + cs.IT + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Zhen Zou, Xiaoxiao Ma, Jie Huang, Zichao Yu, Feng Zhao + Tuvi Etzion - vEDGAR - Can CARLA Do HiL? - https://arxiv.org/abs/2512.08541 - arXiv:2512.08541v1 Announce Type: new -Abstract: Simulation offers advantages throughout the development process of automated driving functions, both in research and product development. Common open-source simulators like CARLA are extensively used in training, evaluation, and software-in-the-loop testing of new automated driving algorithms. However, the CARLA simulator lacks an evaluation where research and automated driving vehicles are simulated with their entire sensor and actuation stack in real time. 
The goal of this work is therefore to create a simulation framework for testing the automation software on its dedicated hardware and identifying its limits. Achieving this goal would greatly benefit the open-source development workflow of automated driving functions, designating CARLA as a consistent evaluation tool along the entire development process. To achieve this goal, in a first step, requirements are derived, and a simulation architecture is specified and implemented. Based on the formulated requirements, the proposed vEDGAR software is evaluated, resulting in a final conclusion on the applicability of CARLA for HiL testing of automated vehicles. The tool is available open source: Modified CARLA fork: https://github.com/TUMFTM/carla, vEDGAR Framework: https://github.com/TUMFTM/vEDGAR - oai:arXiv.org:2512.08541v1 + ReMoSPLAT: Reactive Mobile Manipulation Control on a Gaussian Splat + https://arxiv.org/abs/2512.09656 + arXiv:2512.09656v1 Announce Type: new +Abstract: Reactive control can gracefully coordinate the motion of the base and the arm of a mobile manipulator. However, incorporating an accurate representation of the environment to avoid obstacles without involving costly planning remains a challenge. In this work, we present ReMoSPLAT, a reactive controller based on a quadratic program formulation for mobile manipulation that leverages a Gaussian Splat representation for collision avoidance. By integrating additional constraints and costs into the optimisation formulation, a mobile manipulator platform can reach its intended end effector pose while avoiding obstacles, even in cluttered scenes. We investigate the trade-offs of two methods for efficiently calculating robot-obstacle distances, comparing a purely geometric approach with a rasterisation-based approach. 
Our experiments in simulation on both synthetic and real-world scans demonstrate the feasibility of our method, showing that the proposed approach achieves performance comparable to controllers that rely on perfect ground-truth information. + oai:arXiv.org:2512.09656v1 cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Nils Gehrke, David Brecht, Dominik Kulmer, Dheer Patel, Frank Diermeyer + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nicolas Marticorena, Tobias Fischer, Niko Suenderhauf - A Novel Wasserstein Quaternion Generative Adversarial Network for Color Image Generation - https://arxiv.org/abs/2512.08542 - arXiv:2512.08542v1 Announce Type: new -Abstract: Color image generation has a wide range of applications, but the existing generation models ignore the correlation among color channels, which may lead to chromatic aberration problems. In addition, the data distribution problem of color images has not been systematically elaborated and explained, so that there is still the lack of the theory about measuring different color images datasets. In this paper, we define a new quaternion Wasserstein distance and develop its dual theory. To deal with the quaternion linear programming problem, we derive the strong duality form with helps of quaternion convex set separation theorem and quaternion Farkas lemma. With using quaternion Wasserstein distance, we propose a novel Wasserstein quaternion generative adversarial network. Experiments demonstrate that this novel model surpasses both the (quaternion) generative adversarial networks and the Wasserstein generative adversarial network in terms of generation efficiency and image quality. - oai:arXiv.org:2512.08542v1 - cs.CV + Can LLMs Evaluate What They Cannot Annotate? 
Revisiting LLM Reliability in Hate Speech Detection
+ https://arxiv.org/abs/2512.09662
+ arXiv:2512.09662v1 Announce Type: new
Abstract: Hate speech spreads widely online, harming individuals and communities, making automatic detection essential for large-scale moderation, yet detecting it remains difficult. Part of the challenge lies in subjectivity: what one person flags as hate speech, another may see as benign. Traditional annotation agreement metrics, such as Cohen's $\kappa$, oversimplify this disagreement, treating it as an error rather than meaningful diversity. Meanwhile, Large Language Models (LLMs) promise scalable annotation, but prior studies demonstrate that they cannot fully replace human judgement, especially in subjective tasks. In this work, we reexamine LLM reliability using a subjectivity-aware framework, cross-Rater Reliability (xRR), revealing that even under a fairer lens, LLMs still diverge from humans. Yet this limitation opens an opportunity: we find that LLM-generated annotations can reliably reflect performance trends across classification models, correlating with human evaluations. We test this by examining whether LLM-generated annotations preserve the relative ordering of model performance derived from human evaluation (i.e., whether models ranked as more reliable by human annotators preserve the same order when evaluated with LLM-generated labels). Our results show that, although LLMs differ from humans at the instance level, they reproduce similar ranking and classification patterns, suggesting their potential as proxy evaluators. While not a substitute for human annotators, they might serve as a scalable proxy for evaluation in subjective NLP tasks. 
+ oai:arXiv.org:2512.09662v1 + cs.CL cs.AI - cs.NA - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhigang Jia, Duan Wang, Hengkai Wang, Yajun Xie, Meixiang Zhao, Xiaoyu Zhao + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Paloma Piot, David Otero, Patricia Mart\'in-Rodilla, Javier Parapar - Optimal Control of Behavioral-Feedback SIR Epidemic Model - https://arxiv.org/abs/2512.08544 - arXiv:2512.08544v1 Announce Type: new -Abstract: We consider a behavioral-feedback SIR epidemic model, in which the infection rate depends in feedback on the fractions of susceptible and infected agents, respectively. The considered model allows one to account for endogenous adaptation mechanisms of the agents in response to the epidemics, such as voluntary social distancing, or the adoption of face masks. For this model, we formulate an optimal control problem for a social planner that has the ability to reduce the infection rate to keep the infection curve below a certain threshold within an infinite time horizon, while minimizing the intervention cost. Based on the dynamic properties of the model, we prove that, under quite general conditions on the infection rate, the \emph{filling the box} strategy is the optimal control. This strategy consists in letting the epidemics spread without intervention until the threshold is reached, then applying the minimum control that leaves the fraction of infected individuals constantly at the threshold until the reproduction number becomes less than one and the infection naturally fades out. Our result generalizes one available in the literature for the equivalent problem formulated for the classical SIR model, which can be recovered as a special case of our model when the infection rate is constant. Our contribution enhances the understanding of epidemic management with adaptive human behavior, offering insights for robust containment strategies. 
- oai:arXiv.org:2512.08544v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + IF-Bench: Benchmarking and Enhancing MLLMs for Infrared Images with Generative Visual Prompting + https://arxiv.org/abs/2512.09663 + arXiv:2512.09663v1 Announce Type: new +Abstract: Recent advances in multimodal large language models (MLLMs) have led to impressive progress across various benchmarks. However, their capability in understanding infrared images remains unexplored. To address this gap, we introduce IF-Bench, the first high-quality benchmark designed for evaluating multimodal understanding of infrared images. IF-Bench consists of 499 images sourced from 23 infrared datasets and 680 carefully curated visual question-answer pairs, covering 10 essential dimensions of image understanding. Based on this benchmark, we systematically evaluate over 40 open-source and closed-source MLLMs, employing cyclic evaluation, bilingual assessment, and hybrid judgment strategies to enhance the reliability of the results. Our analysis reveals how model scale, architecture, and inference paradigms affect infrared image comprehension, providing valuable insights for this area. Furthermore, we propose a training-free generative visual prompting (GenViP) method, which leverages advanced image editing models to translate infrared images into semantically and spatially aligned RGB counterparts, thereby mitigating domain distribution shifts. Extensive experiments demonstrate that our method consistently yields significant performance improvements across a wide range of MLLMs. The benchmark and code are available at https://github.com/casiatao/IF-Bench. 
+ oai:arXiv.org:2512.09663v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Martina Alutto, Leonardo Cianfanelli, Giacomo Como, Fabio Fagnani, Francesca Parise + Tao Zhang, Yuyang Hong, Yang Xia, Kun Ding, Zeyu Zhang, Ying Wang, Shiming Xiang, Chunhong Pan - Curriculum Guided Massive Multi Agent System Solving For Robust Long Horizon Tasks - https://arxiv.org/abs/2512.08545 - arXiv:2512.08545v1 Announce Type: new -Abstract: Large Language Models and multi-agent systems have shown promise in decomposing complex tasks, yet they struggle with long-horizon reasoning tasks and escalating computation cost. This work introduces a hierarchical multi-agent architecture that distributes reasoning across a 64*64 grid of lightweight agents, supported by a selective oracle. A spatial curriculum progressively expands the operational region of the grid, ensuring that agents master easier central tasks before tackling harder peripheral ones. To improve reliability, the system integrates Negative Log-Likelihood as a measure of confidence, allowing the curriculum to prioritize regions where agents are both accurate and well calibrated. A Thompson Sampling curriculum manager adaptively chooses training zones based on competence and NLL-driven reward signals. We evaluate the approach on a spatially grounded Tower of Hanoi benchmark, which mirrors the long-horizon structure of many robotic manipulation and planning tasks. Results demonstrate improved stability, reduced oracle usage, and stronger long-range reasoning from distributed agent cooperation. - oai:arXiv.org:2512.08545v1 - cs.CL - cs.AI + SynthPix: A lightspeed PIV images generator + https://arxiv.org/abs/2512.09664 + arXiv:2512.09664v1 Announce Type: new +Abstract: We describe SynthPix, a synthetic image generator for Particle Image Velocimetry (PIV) with a focus on performance and parallelism on accelerators, implemented in JAX. 
SynthPix supports the same configuration parameters as existing tools but achieves a throughput several orders of magnitude higher in image-pair generation per second. SynthPix was developed to enable the training of data-hungry reinforcement learning methods for flow estimation and for reducing the iteration times during the development of fast flow estimation methods used in recent active fluids control studies with real-time PIV feedback. We believe SynthPix to be useful for the fluid dynamics community, and in this paper we describe the main ideas behind this software package. + oai:arXiv.org:2512.09664v1 + cs.DC cs.CV - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + eess.IV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Indrajit Kar, Kalathur Chenchu Kishore Kumar + http://creativecommons.org/licenses/by/4.0/ + Antonio Terpin, Alan Bonomi, Francesco Banelli, Raffaello D'Andrea - An Iteration-Free Fixed-Point Estimator for Diffusion Inversion - https://arxiv.org/abs/2512.08547 - arXiv:2512.08547v1 Announce Type: new -Abstract: Diffusion inversion aims to recover the initial noise corresponding to a given image such that this noise can reconstruct the original image through the denoising diffusion process. The key component of diffusion inversion is to minimize errors at each inversion step, thereby mitigating cumulative inaccuracies. Recently, fixed-point iteration has emerged as a widely adopted approach to minimize reconstruction errors at each inversion step. However, it suffers from high computational costs due to its iterative nature and the complexity of hyperparameter selection. To address these issues, we propose an iteration-free fixed-point estimator for diffusion inversion. First, we derive an explicit expression of the fixed point from an ideal inversion step. Unfortunately, it inherently contains an unknown data prediction error. 
Building upon this, we introduce the error approximation, which uses the calculable error from the previous inversion step to approximate the unknown error at the current inversion step. This yields a calculable, approximate expression for the fixed point, which is an unbiased estimator characterized by low variance, as shown by our theoretical analysis. We evaluate reconstruction performance on two text-image datasets, NOCAPS and MS-COCO. Compared to DDIM inversion and other inversion methods based on the fixed-point iteration, our method achieves consistent and superior performance in reconstruction tasks without additional iterations or training. - oai:arXiv.org:2512.08547v1 + OxEnsemble: Fair Ensembles for Low-Data Classification + https://arxiv.org/abs/2512.09665 + arXiv:2512.09665v1 Announce Type: new +Abstract: We address the problem of fair classification in settings where data is scarce and unbalanced across demographic groups. Such low-data regimes are common in domains like medical imaging, where false negatives can have fatal consequences. + We propose a novel approach \emph{OxEnsemble} for efficiently training ensembles and enforcing fairness in these low-data regimes. Unlike other approaches, we aggregate predictions across ensemble members, each trained to satisfy fairness constraints. By construction, \emph{OxEnsemble} is both data-efficient, carefully reusing held-out data to enforce fairness reliably, and compute-efficient, requiring little more compute than used to fine-tune or evaluate an existing model. We validate this approach with new theoretical guarantees. Experimentally, our approach yields more consistent outcomes and stronger fairness-accuracy trade-offs than existing methods across multiple challenging medical imaging classification datasets. 
+ oai:arXiv.org:2512.09665v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CY + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yifei Chen, Kaiyu Song, Yan Pan, Jianxing Yu, Jian Yin, Hanjiang Lai + Jonathan Rystr{\o}m, Zihao Fu, Chris Russell - Bridging Scale Discrepancies in Robotic Control via Language-Based Action Representations - https://arxiv.org/abs/2512.08548 - arXiv:2512.08548v1 Announce Type: new -Abstract: Recent end-to-end robotic manipulation research increasingly adopts architectures inspired by large language models to enable robust manipulation. However, a critical challenge arises from severe distribution shifts between robotic action data, primarily due to substantial numerical variations in action commands across diverse robotic platforms and tasks, hindering the effective transfer of pretrained knowledge. To address this limitation, we propose a semantically grounded linguistic representation to normalize actions for efficient pretraining. Unlike conventional discretized action representations that are sensitive to numerical scales, the motion representation specifically disregards numeric scale effects, emphasizing directionality instead. This abstraction mitigates distribution shifts, yielding a more generalizable pretraining representation. Moreover, using the motion representation narrows the feature distance between action tokens and standard vocabulary tokens, mitigating modality gaps. Multi-task experiments on two benchmarks demonstrate that the proposed method significantly improves generalization performance and transferability in robotic manipulation tasks. 
- oai:arXiv.org:2512.08548v1 - cs.RO - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Neurosymbolic Information Extraction from Transactional Documents + https://arxiv.org/abs/2512.09666 + arXiv:2512.09666v1 Announce Type: new +Abstract: This paper presents a neurosymbolic framework for information extraction from documents, evaluated on transactional documents. We introduce a schema-based approach that integrates symbolic validation methods to enable more effective zero-shot output and knowledge distillation. The methodology uses language models to generate candidate extractions, which are then filtered through syntactic-, task-, and domain-level validation to ensure adherence to domain-specific arithmetic constraints. Our contributions include a comprehensive schema for transactional documents, relabeled datasets, and an approach for generating high-quality labels for knowledge distillation. Experimental results demonstrate significant improvements in $F_1$-scores and accuracy, highlighting the effectiveness of neurosymbolic validation in transactional document processing. + oai:arXiv.org:2512.09666v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yuchi Zhang, Churui Sun, Shiqi Liang, Diyuan Liu, Chao Ji, Wei-Nan Zhang, Ting Liu + 10.1007/s10032-025-00530-0 + IJDAR 28, 475-485 (2025) + Arthur Hemmer, Micka\"el Coustaty, Nicola Bartolo, Jean-Marc Ogier - Gamification with Purpose: What Learners Prefer to Motivate Their Learning - https://arxiv.org/abs/2512.08551 - arXiv:2512.08551v1 Announce Type: new -Abstract: This study investigates learners' preferences for game design elements (GDEs) in educational contexts to inform the development of purpose-driven gamification strategies. It emphasizes a learner-centered approach that aligns gamification design with pedagogical goals, while mitigating risks such as the erosion of intrinsic motivation. 
A systematic literature review was conducted to identify ten widely discussed GDEs. Visual prototypes representing each element were developed, and a best-worst scaling (BWS) survey with 125 participants was administered to elicit preference rankings. Qualitative feedback was also collected to uncover motivational drivers. Learners consistently preferred GDEs that support learning processes directly, most notably progress bars, concept maps, immediate feedback, and achievements. Qualitative analysis revealed six recurring motivational themes, including visible progress, content relevance, and constructive feedback. The findings suggest that learners value gamification elements that are meaningfully integrated with educational content and support intrinsic motivation. Purpose-aligned gamification should prioritize tools that visualize learning progress and provide actionable feedback, rather than relying solely on extrinsic incentives.
 - oai:arXiv.org:2512.08551v1
 - cs.SE
 cs.CY
 + Adaptive Optimal Control for Avatar-Guided Motor Rehabilitation in Virtual Reality
 + https://arxiv.org/abs/2512.09667
 + arXiv:2512.09667v1 Announce Type: new
+Abstract: A control-theoretic framework for autonomous avatar-guided rehabilitation in virtual reality, based on interpretable, adaptive motor guidance through optimal control, is presented. The framework addresses critical challenges in motor rehabilitation related to accessibility, cost, and continuity of care, with over 50% of patients unable to attend regular clinic sessions. The system enables post-stroke patients to undergo personalized therapy in immersive virtual reality at home, while being monitored by clinicians. The core is a nonlinear, human-in-the-loop control strategy, where the avatar adapts in real time to the patient's performance. Balance between following the patient's movements and guiding them to ideal kinematic profiles based on the Hogan minimum-jerk model is achieved through multi-objective optimal control. 
A data-driven "ability index" uses smoothness metrics to dynamically adjust control gains according to the patient's progress. The system was validated through simulations and preliminary trials, and shows potential for delivering adaptive, engaging and scalable remote physiotherapy guided by interpretable control-theoretic principles. + oai:arXiv.org:2512.09667v1 + eess.SY cs.HC - cs.MM - Wed, 10 Dec 2025 00:00:00 -0500 + cs.SY + math.OC + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kai Marquardt, Mona Schulz, Anne Koziolek, Lucia Happe + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Francesco De Lellis, Maria Lombardi, Egidio De Benedetto, Pasquale Arpaia, Mario di Bernardo - Extended formulations for induced tree and path polytopes of chordal graphs - https://arxiv.org/abs/2512.08554 - arXiv:2512.08554v1 Announce Type: new -Abstract: In this article, we give an extended space formulation for the induced tree and path polytopes of chordal graphs with variables associated with the edge and vertex sets. Whereas the formulation for the induced tree polytope is easily seen to have a compact size, the system we provide for the induced path polytope has an exponential number of inequalities. We show which of these inequalities define facets and exhibit a superset of the facet-defining ones that can be enumerated in polynomial time. We show that for some graphs, the latter superset contains redundant inequalities. - As corollaries, we obtain that the problems of finding an induced tree or path maximizing a linear function over the edges and vertices are solvable in polynomial time for the class of chordal graphs . 
- oai:arXiv.org:2512.08554v1 - cs.DM - Wed, 10 Dec 2025 00:00:00 -0500 - new + An Automated Tip-and-Cue Framework for Optimized Satellite Tasking and Visual Intelligence + https://arxiv.org/abs/2512.09670 + arXiv:2512.09670v1 Announce Type: new +Abstract: The proliferation of satellite constellations, coupled with reduced tasking latency and diverse sensor capabilities, has expanded the opportunities for automated Earth observation. This paper introduces a fully automated Tip-and-Cue framework designed for satellite imaging tasking and scheduling. In this context, tips are generated from external data sources or analyses of prior satellite imagery, identifying spatiotemporal targets and prioritizing them for downstream planning. Corresponding cues are the imaging tasks formulated in response, which incorporate sensor constraints, timing requirements, and utility functions. The system autonomously generates candidate tasks, optimizes their scheduling across multiple satellites using continuous utility functions that reflect the expected value of each observation, and processes the resulting imagery using artificial-intelligence-based models, including object detectors and vision-language models. Structured visual reports are generated to support both interpretability and the identification of new insights for downstream tasking. The efficacy of the framework is demonstrated through a maritime vessel tracking scenario, utilizing Automatic Identification System (AIS) data for trajectory prediction, targeted observations, and the generation of actionable outputs. Maritime vessel tracking is a widely researched application, often used to benchmark novel approaches to satellite tasking, forecasting, and analysis. The system is extensible to broader applications such as smart-city monitoring and disaster response, where timely tasking and automated analysis are critical. 
+ oai:arXiv.org:2512.09670v1
 + cs.CV
 cs.SY
 eess.SY
 + Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 - Alexandre Dupont-Bouillard
 + Gil Weissman, Amir Ivry, Israel Cohen
 - A scalable high-order multigrid-FFT Poisson solver for unbounded domains on adaptive multiresolution grids
 - https://arxiv.org/abs/2512.08555
 - arXiv:2512.08555v1 Announce Type: new
 -Abstract: Multigrid solvers are among the most efficient methods for solving the Poisson equation, which is ubiquitous in computational physics. For example, in the context of incompressible flows, it is typically the costliest operation. The present document expounds upon the implementation of a flexible multigrid solver that is capable of handling any type of boundary conditions within murphy, a multiresolution framework for solving partial differential equations (PDEs) on collocated adaptive grids. The utilization of a Fourier-based direct solver facilitates the attainment of flexibility and enhanced performance by accommodating any combination of unbounded and semi-unbounded boundary conditions. The employment of high-order compact stencils contributes to the reduction of communication demands while concurrently enhancing the accuracy of the system. The resulting solver is validated against analytical solutions for periodic and unbounded domains. In conclusion, the solver has been demonstrated to demonstrate scalability to 16,384 cores within the context of leading European high-performance computing infrastructures.
 - oai:arXiv.org:2512.08555v1
 - math.NA
 cs.DC
 cs.NA
 - Wed, 10 Dec 2025 00:00:00 -0500
 + Drawback of Enforcing Equivariance and its Compensation via the Lens of Expressive Power
 + https://arxiv.org/abs/2512.09673
 + arXiv:2512.09673v1 Announce Type: new
+Abstract: Equivariant neural networks encode symmetry as an inductive bias and have achieved strong empirical performance across a wide range of domains. However, their expressive power remains poorly understood. 
Focusing on 2-layer ReLU networks, this paper investigates the impact of equivariance constraints on the expressivity of equivariant and layer-wise equivariant networks. By examining the boundary hyperplanes and the channel vectors of ReLU networks, we construct an example showing that equivariance constraints could strictly limit expressive power. However, we demonstrate that this drawback can be compensated via enlarging the model size. Furthermore, we show that despite a larger model size, the resulting architecture could still correspond to a hypothesis space with lower complexity, implying superior generalizability for equivariant networks. + oai:arXiv.org:2512.09673v1 + cs.LG + cs.AI + cs.NE + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Gilles Poncelet, Jonathan Lambrechts, Thomas Gillis, Philippe Chatelain + Yuzhu Chen, Tian Qin, Xinmei Tian, Fengxiang He, Dacheng Tao - SSCATeR: Sparse Scatter-Based Convolution Algorithm with Temporal Data Recycling for Real-Time 3D Object Detection in LiDAR Point Clouds - https://arxiv.org/abs/2512.08557 - arXiv:2512.08557v1 Announce Type: new -Abstract: This work leverages the continuous sweeping motion of LiDAR scanning to concentrate object detection efforts on specific regions that receive a change in point data from one frame to another. We achieve this by using a sliding time window with short strides and consider the temporal dimension by storing convolution results between passes. This allows us to ignore unchanged regions, significantly reducing the number of convolution operations per forward pass without sacrificing accuracy. This data reuse scheme introduces extreme sparsity to detection data. To exploit this sparsity, we extend our previous work on scatter-based convolutions to allow for data reuse, and as such propose Sparse Scatter-Based Convolution Algorithm with Temporal Data Recycling (SSCATeR). 
This operation treats incoming LiDAR data as a continuous stream and acts only on the changing parts of the point cloud. By doing so, we achieve the same results with as much as a 6.61-fold reduction in processing time. Our test results show that the feature maps output by our method are identical to those produced by traditional sparse convolution techniques, whilst greatly increasing the computational efficiency of the network. - oai:arXiv.org:2512.08557v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + d-TreeRPO: Towards More Reliable Policy Optimization for Diffusion Language Models + https://arxiv.org/abs/2512.09675 + arXiv:2512.09675v1 Announce Type: new +Abstract: Reliable reinforcement learning (RL) for diffusion large language models (dLLMs) requires both accurate advantage estimation and precise estimation of prediction probabilities. Existing RL methods for dLLMs fall short in both aspects: they rely on coarse or unverifiable reward signals, and they estimate prediction probabilities without accounting for the bias relative to the true, unbiased expected prediction probability that properly integrates over all possible decoding orders. To mitigate these issues, we propose \emph{d}-TreeRPO, a reliable RL framework for dLLMs that leverages tree-structured rollouts and bottom-up advantage computation based on verifiable outcome rewards to provide fine-grained and verifiable step-wise reward signals. When estimating the conditional transition probability from a parent node to a child node, we theoretically analyze the estimation error between the unbiased expected prediction probability and the estimate obtained via a single forward pass, and find that higher prediction confidence leads to lower estimation error. Guided by this analysis, we introduce a time-scheduled self-distillation loss during training that enhances prediction confidence in later training stages, thereby enabling more accurate probability estimation and improved convergence. 
Experiments show that \emph{d}-TreeRPO outperforms existing baselines and achieves significant gains on multiple reasoning benchmarks, including +86.2 on Sudoku, +51.6 on Countdown, +4.5 on GSM8K, and +5.3 on Math500. Ablation studies and computational cost analyses further demonstrate the effectiveness and practicality of our design choices. + oai:arXiv.org:2512.09675v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alexander Dow, Manduhu Manduhu, Matheus Santos, Ben Bartlett, Gerard Dooly, James Riordan + http://creativecommons.org/licenses/by/4.0/ + Leyi Pan, Shuchang Tao, Yunpeng Zhai, Zheyu Fu, Liancheng Fang, Minghua He, Lingzhe Zhang, Zhaoyang Liu, Bolin Ding, Aiwei Liu, Lijie Wen - Labeled Delegated PSI and its Applications in the Public Sector - https://arxiv.org/abs/2512.08558 - arXiv:2512.08558v1 Announce Type: new -Abstract: Sensitive citizen data, such as social, medical, and fiscal data, is heavily fragmented across - public bodies and the private domain. Mining the combined data sets allows for new insights that otherwise remain hidden. - Examples are improved healthcare, fraud detection, and evidence-based policy making. - (Multi-party) delegated private set intersection (D-PSI) is a privacy-enhancing technology to link data across multiple data providers using a data collector. - However, before it can be deployed in these use cases, it needs to be enhanced with additional functions, e.g., securely delivering payload only for elements in the intersection. - Although there has been recent progress in the communication and computation requirements of D-PSI, these practical obstacles have not yet been addressed. - This paper is the result of a collaboration with a governmental organization responsible for collecting, linking, and pseudonymizing data. 
- Based on their requirements, we design a new D-PSI protocol with composable output functions, including encrypted payload and pseudonymized identifiers. - We show that our protocol is secure in the standard model against colluding semi-honest data providers and against a non-colluding, possibly malicious independent party, the data collector. - It, hence, allows to privately link and collect data from multiple data providers suitable for deployment in these use cases in the public sector. - oai:arXiv.org:2512.08558v1 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Understanding Chain-of-Thought Effectiveness in Code Generation: An Empirical and Information-Theoretic Analysis + https://arxiv.org/abs/2512.09679 + arXiv:2512.09679v1 Announce Type: new +Abstract: Large language models (LLMs) achieve strong performance on code generation, but the mechanisms by which Chain-of-Thought (CoT) prompting helps remain unclear. We present a systematic empirical and information-theoretic study of CoT effectiveness in neural code generation, evaluating five paradigms (Zero-Shot, Zero-Shot CoT, Self-Planning, Structured CoT, Reasoning-CoT) across six Python benchmarks, a multilingual benchmark with 12 programming languages, and six models from 7B to 480B parameters, using conditional mutual information $I(Y;C|X)$ as a conceptual lens. Our results show that externally guided CoT consistently outperforms direct generation, with structured methods improving Pass@1 by 5--12\% on average while using substantially fewer tokens than reflective reasoning, and that CoT benefits depend on language type systems and model capacity. We further find that reasoning \emph{quality} is critical: high-quality structured CoT from strong generators yields significantly higher accuracy than lightweight alternatives with the same template, whereas naive Zero-Shot CoT can even degrade performance. 
These findings provide practical guidance for choosing CoT strategies based on model capacity, language characteristics, and task complexity. + oai:arXiv.org:2512.09679v1 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Kristof Verslype, Florian Kerschbaum, Cyprien Delpech de Saint Guilhem, Bart De Decker, Jorn Lapon + Naizhu Jin, Zhong Li, Guang Yang, Tian Zhang, Qingkai Zeng - BrainExplore: Large-Scale Discovery of Interpretable Visual Representations in the Human Brain - https://arxiv.org/abs/2512.08560 - arXiv:2512.08560v1 Announce Type: new -Abstract: Understanding how the human brain represents visual concepts, and in which brain regions these representations are encoded, remains a long-standing challenge. Decades of work have advanced our understanding of visual representations, yet brain signals remain large and complex, and the space of possible visual concepts is vast. As a result, most studies remain small-scale, rely on manual inspection, focus on specific regions and properties, and rarely include systematic validation. We present a large-scale, automated framework for discovering and explaining visual representations across the human cortex. Our method comprises two main stages. First, we discover candidate interpretable patterns in fMRI activity through unsupervised, data-driven decomposition methods. Next, we explain each pattern by identifying the set of natural images that most strongly elicit it and generating a natural-language description of their shared visual meaning. To scale this process, we introduce an automated pipeline that tests multiple candidate explanations, assigns quantitative reliability scores, and selects the most consistent description for each voxel pattern. Our framework reveals thousands of interpretable patterns spanning many distinct visual concepts, including fine-grained representations previously unreported. 
- oai:arXiv.org:2512.08560v1
 cs.CV
 - Wed, 10 Dec 2025 00:00:00 -0500
 + Dynamic one-time delivery of critical data by small and sparse UAV swarms: a model problem for MARL scaling studies
 + https://arxiv.org/abs/2512.09682
 + arXiv:2512.09682v1 Announce Type: new
+Abstract: This work presents a conceptual study on the application of Multi-Agent Reinforcement Learning (MARL) for decentralized control of unmanned aerial vehicles to relay a critical data package to a known position. For this purpose, a family of deterministic games is introduced, designed for scaling studies for MARL. A robust baseline policy is proposed, which is based on restricting agent motion envelopes and applying Dijkstra's algorithm. Experimental results show that two off-the-shelf MARL algorithms perform competitively with the baseline for a small number of agents, but scalability issues arise as the number of agents increases.
 + oai:arXiv.org:2512.09682v1
 + eess.SY
 cs.AI
 cs.GT
 cs.MA
 cs.SY
 + Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Navve Wasserman, Matias Cosarinsky, Yuval Golbari, Aude Oliva, Antonio Torralba, Tamar Rott Shaham, Michal Irani
 + Mika Persson, Jonas Lidman, Jacob Ljungberg, Samuel Sandelius, Adam Andersson
 - Basic Lock Algorithms in Lightweight Thread Environments
 - https://arxiv.org/abs/2512.08563
 - arXiv:2512.08563v1 Announce Type: new
 -Abstract: Traditionally, multithreaded data structures have been designed for access by the threads of Operating Systems (OS). However, implementations for access by programmable alternatives known as lightweight threads (also referred to as asynchronous calls or coroutines) have not been thoroughly studied. The main advantage of lightweight threads is their significantly lower overhead during launch and context switching. 
However, this comes at a cost: to achieve proper parallelism, context switches must be manually invoked in the code; without these switches, new lightweight threads will never be executed. - In this paper, we focus on the simplest multithreaded data structure: a mutex (also known as a lock). We demonstrate that original implementations for OS threads cannot be used effectively in this new context due to the potential for deadlocks. Furthermore, correctness is not the only concern. In certain languages, such as C++, there are various lightweight thread libraries, each with different implementations and interfaces, which necessitate distinct lock implementations. - In this work, we present a modification of TTAS and MCS locks for the use from lightweight threads and demonstrate that the two context switch mechanisms of lightweight threads, yielding and sleeping, are crucial. However, the performance of TTAS and MCS may differ significantly depending on the settings. If one wants to have a lock that works well for any library, we suggest using the cohort lock, which strikes a balance between MCS and TTAS by utilizing several MCS queues with a common TTAS. - oai:arXiv.org:2512.08563v1 + Straggler Tolerant and Resilient DL Training on Homogeneous GPUs + https://arxiv.org/abs/2512.09685 + arXiv:2512.09685v1 Announce Type: new +Abstract: Despite the popularity of homogeneous GPU-based deep learning (DL) training, the prevalence, causes and impact of stragglers and the effectiveness of existing straggler mitigation approaches are still not well understood in this scenario due to limited research on these questions. To fill this gap, we conducted comprehensive experiments and found that stragglers remain widespread due to CPU and bandwidth usage imbalances. 
Additionally, existing mitigation methods that switch from synchronous stochastic gradient descent (SSGD) to asynchronous SGD (ASGD) may not improve Time-To-Accuracy (TTA) and can even generate more stragglers due to its higher resource consumption. To address these newly found problems, we propose the Straggler Tolerant And Resilient DL training system (STAR). STAR includes new synchronization modes that group workers for each parameter updating. It has a heuristic and an ML method to choose the optimal synchronization mode for minimizing TTA, and reallocates resources to support the selected mode while minimizing the impact on co-located jobs. Moreover, it proactively prevents stragglers by avoiding overloading the CPU and bandwidth resources in allocating PSs (which consume high CPU and bandwidth) and in gradient transmission. Our trace-driven evaluation on AWS shows that STAR generates 48-84% and 51-70% lower TTA than state-of-the-art systems in the PS and all-reduce architectures, respectively, while maintaining the converged accuracy of SSGD. The code for STAR is open-sourced. + oai:arXiv.org:2512.09685v1 cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Taras Skazhenik, Nikolai Korobenikov, Andrei Churbanov, Anton Malakhov, Vitaly Aksenov + http://creativecommons.org/licenses/by/4.0/ + Zeyu Zhang, Haiying Shen - Modular Neural Image Signal Processing - https://arxiv.org/abs/2512.08564 - arXiv:2512.08564v1 Announce Type: new -Abstract: This paper presents a modular neural image signal processing (ISP) framework that processes raw inputs and renders high-quality display-referred images. 
Unlike prior neural ISP designs, our method introduces a high degree of modularity, providing full control over multiple intermediate stages of the rendering process. This modular design not only achieves high rendering accuracy but also improves scalability, debuggability, generalization to unseen cameras, and flexibility to match different user-preference styles. To demonstrate the advantages of this design, we built a user-interactive photo-editing tool that leverages our neural ISP to support diverse editing operations and picture styles. The tool is carefully engineered to take advantage of the high-quality rendering of our neural ISP and to enable unlimited post-editable re-rendering. Our method is a fully learning-based framework with variants of different capacities, all of moderate size (ranging from ~0.5 M to ~3.9 M parameters for the entire pipeline), and consistently delivers competitive qualitative and quantitative results across multiple test sets. Watch the supplemental video at: https://youtu.be/ByhQjQSjxVM - oai:arXiv.org:2512.08564v1
 cs.CV
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Mahmoud Afifi, Zhongling Wang, Ran Zhang, Michael S. Brown


 + Unconsciously Forget: Mitigating Memorization; Without Knowing What is being Memorized
 + https://arxiv.org/abs/2512.09687
 + arXiv:2512.09687v1 Announce Type: new
+Abstract: Recent advances in generative models have demonstrated an exceptional ability to produce highly realistic images. However, previous studies show that generated images often resemble the training data, and this problem becomes more severe as the model size increases. Memorizing training data can lead to legal challenges, including copyright infringement, violations of portrait rights, and trademark violations. Existing approaches to mitigating memorization mainly focus on manipulating the denoising sampling process to steer image embeddings away from the memorized embedding space or employ unlearning methods that require training on datasets containing specific sets of memorized concepts. 
However, existing methods often incur substantial computational overhead during sampling, or focus narrowly on removing one or more groups of target concepts, imposing a significant limitation on their scalability. To understand and mitigate these problems, our work, UniForget, offers a new perspective on understanding the root cause of memorization. Our work demonstrates that specific parts of the model are responsible for copyrighted content generation. By applying model pruning, we can effectively suppress the probability of generating copyrighted content without targeting specific concepts while preserving the general generative capabilities of the model. Additionally, we show that our approach is both orthogonal and complementary to existing unlearning methods, thereby highlighting its potential to improve current unlearning and de-memorization techniques. + oai:arXiv.org:2512.09687v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Er Jin, Yang Zhang, Yongli Mou, Yanfei Dong, Stefan Decker, Kenji Kawaguchi, Johannes Stegmaier + + + A Simple Weak Galerkin Finite Element Method for the Reissner-Mindlin Plate Model on Non-Convex Polytopal Meshes + https://arxiv.org/abs/2512.09688 + arXiv:2512.09688v1 Announce Type: new +Abstract: This paper presents a simple weak Galerkin (WG) finite element method for the Reissner-Mindlin plate model that partially eliminates the need for traditionally employed stabilizers. The proposed approach accommodates general, including non-convex, polytopal meshes, thereby offering greater geometric flexibility. It utilizes bubble functions without imposing the restrictive conditions required by existing stabilizer-free WG methods, which simplifies implementation and broadens applicability to a wide range of partial differential equations (PDEs). 
Moreover, the method allows for flexible choices of polynomial degrees in the discretization and can be applied in any spatial dimension. We establish optimal-order error estimates for the WG approximation in a discrete H^1 norm, and present numerical experiments that validate the theoretical results. + oai:arXiv.org:2512.09688v1 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mahmoud Afifi, Zhongling Wang, Ran Zhang, Michael S. Brown + Chunmei Wang, Shangyou Zhang - A Hybrid Model for Stock Market Forecasting: Integrating News Sentiment and Time Series Data with Graph Neural Networks - https://arxiv.org/abs/2512.08567 - arXiv:2512.08567v1 Announce Type: new -Abstract: Stock market prediction is a long-standing challenge in finance, as accurate forecasts support informed investment decisions. Traditional models rely mainly on historical prices, but recent work shows that financial news can provide useful external signals. This paper investigates a multimodal approach that integrates companies' news articles with their historical stock data to improve prediction performance. We compare a Graph Neural Network (GNN) model with a baseline LSTM model. Historical data for each company is encoded using an LSTM, while news titles are embedded with a language model. These embeddings form nodes in a heterogeneous graph, and GraphSAGE is used to capture interactions between articles, companies, and industries. We evaluate two targets: a binary direction-of-change label and a significance-based label. Experiments on the US equities and Bloomberg datasets show that the GNN outperforms the LSTM baseline, achieving 53% accuracy on the first target and a 4% precision gain on the second. Results also indicate that companies with more associated news yield higher prediction accuracy. 
Moreover, headlines contain stronger predictive signals than full articles, suggesting that concise news summaries play an important role in short-term market reactions. - oai:arXiv.org:2512.08567v1 + A data-driven approach to linking design features with manufacturing process data for sustainable product development + https://arxiv.org/abs/2512.09690 + arXiv:2512.09690v1 Announce Type: new +Abstract: The growing adoption of Industrial Internet of Things (IIoT) technologies enables automated, real-time collection of manufacturing process data, unlocking new opportunities for data-driven product development. Current data-driven methods are generally applied within specific domains, such as design or manufacturing, with limited exploration of integrating design features and manufacturing process data. Since design decisions significantly affect manufacturing outcomes, such as error rates, energy consumption, and processing times, the lack of such integration restricts the potential for data-driven product design improvements. This paper presents a data-driven approach to mapping and analyzing the relationship between design features and manufacturing process data. A comprehensive system architecture is developed to ensure continuous data collection and integration. The linkage between design features and manufacturing process data serves as the basis for developing a machine learning model that enables automated design improvement suggestions. By integrating manufacturing process data with sustainability metrics, this approach opens new possibilities for sustainable product development. + oai:arXiv.org:2512.09690v1 cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.34190/icair.5.1.4294 - Proceedings of the 5th International Conference on AI Research (ICAIR 2025), Vol. 5, No. 1, pp. 
452-462 (2025) - Nader Sadek, Mirette Moawad, Christina Naguib, Mariam Elzahaby + http://creativecommons.org/licenses/by/4.0/ + Jiahang Li, Lucas Cazzonelli, Jacqueline H\"ollig, Markus Doellken, Sven Matthiesen - Instance-Aware Test-Time Segmentation for Continual Domain Shifts - https://arxiv.org/abs/2512.08569 - arXiv:2512.08569v1 Announce Type: new -Abstract: Continual Test-Time Adaptation (CTTA) enables pre-trained models to adapt to continuously evolving domains. Existing methods have improved robustness but typically rely on fixed or batch-level thresholds, which cannot account for varying difficulty across classes and instances. This limitation is especially problematic in semantic segmentation, where each image requires dense, multi-class predictions. We propose an approach that adaptively adjusts pseudo labels to reflect the confidence distribution within each image and dynamically balances learning toward classes most affected by domain shifts. This fine-grained, class- and instance-aware adaptation produces more reliable supervision and mitigates error accumulation throughout continual adaptation. Extensive experiments across eight CTTA and TTA scenarios, including synthetic-to-real and long-term shifts, show that our method consistently outperforms state-of-the-art techniques, setting a new standard for semantic segmentation under evolving conditions. - oai:arXiv.org:2512.08569v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Exqutor: Extended Query Optimizer for Vector-augmented Analytical Queries + https://arxiv.org/abs/2512.09695 + arXiv:2512.09695v1 Announce Type: new +Abstract: Vector similarity search is becoming increasingly important for data science pipelines, particularly in Retrieval-Augmented Generation (RAG), where it enhances large language model inference by enabling efficient retrieval of relevant external knowledge. 
As RAG expands with table-augmented generation to incorporate structured data, workloads integrating table and vector search are becoming more prevalent. However, efficiently executing such queries remains challenging due to inaccurate cardinality estimation for vector search components, leading to suboptimal query plans. In this paper, we propose Exqutor, an extended query optimizer for vector-augmented analytical queries. Exqutor is a pluggable cardinality estimation framework designed to address this issue, leveraging exact cardinality query optimization techniques to enhance estimation accuracy when vector indexes (e.g., HNSW, IVF) are available. In scenarios lacking these indexes, we employ a sampling-based approach with adaptive sampling size adjustment, dynamically tuning the sample size to balance estimation accuracy and sampling overhead. This allows Exqutor to efficiently approximate vector search cardinalities while minimizing computational costs. We integrate our framework into pgvector, VBASE, and DuckDB, demonstrating performance improvements of up to four orders of magnitude on vector-augmented analytical queries. + oai:arXiv.org:2512.09695v1 + cs.DB + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Seunghwan Lee, Inyoung Jung, Hojoon Lee, Eunil Park, Sungeun Hong + Hyunjoon Kim, Chaerim Lim, Hyeonjun An, Rathijit Sen, Kwanghyun Park - From Cells to Survival: Hierarchical Analysis of Cell Inter-Relations in Multiplex Microscopy for Lung Cancer Prognosis - https://arxiv.org/abs/2512.08572 - arXiv:2512.08572v1 Announce Type: new -Abstract: The tumor microenvironment (TME) has emerged as a promising source of prognostic biomarkers. To fully leverage its potential, analysis methods must capture complex interactions between different cell types. We propose HiGINE -- a hierarchical graph-based approach to predict patient survival (short vs. 
long) from TME characterization in multiplex immunofluorescence (mIF) images and enhance risk stratification in lung cancer. Our model encodes both local and global inter-relations in cell neighborhoods, incorporating information about cell types and morphology. Multimodal fusion, aggregating cancer stage with mIF-derived features, further boosts performance. We validate HiGINE on two public datasets, demonstrating improved risk stratification, robustness, and generalizability. - oai:arXiv.org:2512.08572v1 + LiM-YOLO: Less is More with Pyramid Level Shift and Normalized Auxiliary Branch for Ship Detection in Optical Remote Sensing Imagery + https://arxiv.org/abs/2512.09700 + arXiv:2512.09700v1 Announce Type: new +Abstract: Applying general-purpose object detectors to ship detection in satellite imagery presents significant challenges due to the extreme scale disparity and morphological anisotropy of maritime targets. Standard architectures utilizing stride-32 (P5) layers often fail to resolve narrow vessels, resulting in spatial feature dilution. In this work, we propose LiM-YOLO, a specialized detector designed to resolve these domain-specific conflicts. Based on a statistical analysis of ship scales, we introduce a Pyramid Level Shift Strategy that reconfigures the detection head to P2-P4. This shift ensures compliance with Nyquist sampling criteria for small objects while eliminating the computational redundancy of deep layers. To further enhance training stability on high-resolution inputs, we incorporate a Group Normalized Convolutional Block for Linear Projection (GN-CBLinear), which mitigates gradient volatility in micro-batch settings. Validated on SODA-A, DOTA-v1.5, FAIR1M-v2.0, and ShipRSImageNet-V1, LiM-YOLO demonstrates superior detection accuracy and efficiency compared to state-of-the-art models. The code is available at https://github.com/egshkim/LiM-YOLO. 
+ oai:arXiv.org:2512.09700v1
 cs.CV
 eess.IV
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Seon-Hoon Kim, Hyeji Sim, Youeyun Jung, Ok-Chul Jung, Yerin Kim


 - RVC-NMPC: Nonlinear Model Predictive Control with Reciprocal Velocity Constraints for Mutual Collision Avoidance in Agile UAV Flight
 - https://arxiv.org/abs/2512.08574
 - arXiv:2512.08574v1 Announce Type: new
-Abstract: This paper presents an approach to mutual collision avoidance based on Nonlinear Model Predictive Control (NMPC) with time-dependent Reciprocal Velocity Constraints (RVCs). Unlike most existing methods, the proposed approach relies solely on observable information about other robots, eliminating the need for excessive communication. The computationally efficient algorithm for computing RVCs, together with the direct integration of these constraints into the NMPC problem formulation at the controller level, allows the whole pipeline to run at 100 Hz. This high processing rate, combined with the modeled nonlinear dynamics of the controlled Uncrewed Aerial Vehicles (UAVs), is a key feature that facilitates the use of the proposed approach for agile UAV flight. The proposed approach was evaluated through extensive simulations emulating real-world conditions in scenarios involving up to 10 UAVs and velocities of up to 25 m/s, and in real-world experiments with accelerations up to 30 m/s$^2$. Comparison with the state of the art shows a 31% improvement in terms of flight time reduction in challenging scenarios, while maintaining collision-free navigation in all trials. 
- oai:arXiv.org:2512.08574v1
 cs.RO
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Vit Kratky, Robert Penicka, Parakh M. Gupta, Ondrej Prochazka, Martin Saska


 + FineFreq: A Multilingual Character Frequency Dataset from Web-Scale Text
 + https://arxiv.org/abs/2512.09701
 + arXiv:2512.09701v1 Announce Type: new
+Abstract: We present FineFreq, a large-scale multilingual character frequency dataset derived from the FineWeb and FineWeb2 corpora, covering over 1900 languages and spanning 2013-2025. The dataset contains frequency counts for 96 trillion characters processed from 57 TB of compressed text. For each language, FineFreq provides per-character statistics with aggregate and year-level frequencies, allowing fine-grained temporal analysis. The dataset preserves naturally occurring multilingual features such as cross-script borrowings, emoji, and acronyms without applying artificial filtering. Each character entry includes Unicode metadata (category, script, block), enabling domain-specific or other downstream filtering and analysis. The full dataset is released in both CSV and Parquet formats, with associated metadata, available on GitHub and HuggingFace. https://github.com/Bin-2/FineFreq
 + oai:arXiv.org:2512.09701v1
 cs.CL
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Binbin XU


 - Integrating Public Input and Technical Expertise for Effective Cybersecurity Policy Formulation
 - https://arxiv.org/abs/2512.08575
 - arXiv:2512.08575v1 Announce Type: new
-Abstract: The evolution of digital transformation and the increased use of technology come with increased cyber vulnerabilities, which compromise national security. Cyber-threats become more sophisticated as technology advances. This emphasises the need for strong risk mitigation strategies. Defining strong and robust cybersecurity policies requires an integrated approach that balances technical expertise with public input. 
This paper aims to explore strategies used to balance technical expertise and public input to develop effective and robust cybersecurity policies. It also studies how the effective integration of technical expertise with public input is critical to developing effective strategies and resilient cybersecurity frameworks that strengthen national security. A lack of a holistic approach and collaborative efforts in cybersecurity can hinder the effectiveness of cybersecurity policies. This paper followed a systematic literature review with bibliometric analysis using the PRISMA methodology to explore how technical expertise and public input can be integrated to guide cybersecurity policy making. The thematic analysis identified five important themes in developing effective cybersecurity policies; these key themes are: Multi-Stakeholder Involvement and Human Centric Approaches (MSI & HCA), Governance and Policy Frameworks (GPF), Technical Infrastructure (TI), Evaluation and Compliance (EC), and Legal Rights and Sovereignty (LRS). The synthesis shows that there is no adequate exploration of collaborative efforts, which undermines the effectiveness of the cybersecurity policies. The findings suggest that inclusive, flexible governance strategies that integrate public input at every stage are necessary for future cybersecurity policy research and practice, which must shift away from a primarily technical and legal perspective. - oai:arXiv.org:2512.08575v1
 cs.CR
 cs.CY
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-sa/4.0/
 Hlekane Ngobeni, Mike Wa Nkongolo


 + Training One Model to Master Cross-Level Agentic Actions via Reinforcement Learning
 + https://arxiv.org/abs/2512.09706
 + arXiv:2512.09706v1 Announce Type: new
+Abstract: The paradigm of agentic AI is shifting from engineered complex workflows to post-training native models. However, existing agents are typically confined to static, predefined action spaces--such as exclusively using APIs, GUI events, or robotic commands. 
This rigidity limits their adaptability in dynamic environments where the optimal granularity of interaction varies contextually. To bridge this gap, we propose CrossAgent, a unified agentic model that masters heterogeneous action spaces and autonomously selects the most effective interface for each step of a trajectory. We introduce a comprehensive training pipeline that integrates cold-start supervised fine-tuning with a Multi-Turn Group Relative Policy Optimization (GRPO) algorithm. This approach enables the agent to learn adaptive action switching--balancing high-level efficiency with low-level precision--without human-specified rules. Extensive experiments on over 800 tasks in the open-world Minecraft environment demonstrate that CrossAgent achieves state-of-the-art performance. By dynamically leveraging the strengths of diverse action spaces, our model significantly outperforms fixed-action baselines, exhibiting superior generalization and efficiency in long-horizon reasoning. All code and models are available at https://github.com/CraftJarvis/OpenHA + oai:arXiv.org:2512.09706v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Hlekane Ngobeni, Mike Wa Nkongolo + http://creativecommons.org/licenses/by/4.0/ + Kaichen He, Zihao Wang, Muyao Li, Anji Liu, Yitao Liang - Disturbance-Free Surgical Video Generation from Multi-Camera Shadowless Lamps for Open Surgery - https://arxiv.org/abs/2512.08577 - arXiv:2512.08577v1 Announce Type: new -Abstract: Video recordings of open surgeries are greatly required for education and research purposes. However, capturing unobstructed videos is challenging since surgeons frequently block the camera field of view. To avoid occlusion, the positions and angles of the camera must be frequently adjusted, which is highly labor-intensive. Prior work has addressed this issue by installing multiple cameras on a shadowless lamp and arranging them to fully surround the surgical area. 
This setup increases the chances of some cameras capturing an unobstructed view. However, manual image alignment is needed in post-processing since camera configurations change every time surgeons move the lamp for optimal lighting. This paper aims to fully automate this alignment task. The proposed method identifies frames in which the lighting system moves, realigns them, and selects the camera with the least occlusion to generate a video that consistently presents the surgical field from a fixed perspective. A user study involving surgeons demonstrated that videos generated by our method were superior to those produced by conventional methods in terms of the ease of confirming the surgical area and the comfort during video viewing. Additionally, our approach showed improvements in video quality over existing techniques. Furthermore, we implemented several synthesis options for the proposed view-synthesis method and conducted a user study to assess surgeons' preferences for each option. - oai:arXiv.org:2512.08577v1 - cs.CV - cs.AI + Knowledge Graph Enrichment and Reasoning for Nobel Laureates + https://arxiv.org/abs/2512.09707 + arXiv:2512.09707v1 Announce Type: new +Abstract: This project aims to construct and analyze a comprehensive knowledge graph of Nobel Prize and Laureates by enriching existing datasets with biographical information extracted from Wikipedia. Our approach integrates multiple advanced techniques, consisting of automatic data augmentation using LLMs for Named Entity Recognition (NER) and Relation Extraction (RE) tasks, and social network analysis to uncover hidden patterns within the scientific community. Furthermore, we also develop a GraphRAG-based chatbot system utilizing a fine-tuned model for Text2Cypher translation, enabling natural language querying over the knowledge graph. Experimental results demonstrate that the enriched graph possesses small-world network properties, identifying key influential figures and central organizations. 
The chatbot system achieves competitive accuracy on a custom multiple-choice evaluation dataset, proving the effectiveness of combining LLMs with structured knowledge bases for complex reasoning tasks. Data and source code are available at: https://github.com/tlam25/network-of-awards-and-winners.
 + oai:arXiv.org:2512.09707v1
 cs.SI
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Thanh-Lam T. Nguyen, Ngoc-Quang Le, Thu-Trang Pham, Mai-Vu Tran


 + Recoverable Lock-Free Locks
 + https://arxiv.org/abs/2512.09710
 + arXiv:2512.09710v1 Announce Type: new
+Abstract: This paper presents the first transformation that introduces both lock-freedom and recoverability. Our transformation starts with a lock-based implementation, and provides a recoverable, lock-free substitution for lock acquire and lock release operations. The transformation supports nested locks for generality and ensures recoverability without jeopardising the correctness of the lock-based implementation it is applied to.
 + oai:arXiv.org:2512.09710v1
 cs.DC
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Hagit Attiya, Panagiota Fatourou, Eleftherios Kosmas, Yuanhao Wei


 + Mixture of Lookup Key-Value Experts
 + https://arxiv.org/abs/2512.09723
 + arXiv:2512.09723v1 Announce Type: new
+Abstract: Recent research has developed several LLM architectures suitable for inference on end-user devices, such as the Mixture of Lookup Experts (MoLE) (Jie et al., 2025). A key feature of MoLE is that each token id is associated with a dedicated group of experts. For a given input, only the experts corresponding to the input token id will be activated. Since the communication overhead of loading this small number of activated experts into RAM during inference is negligible, expert parameters can be offloaded to storage, making MoLE suitable for resource-constrained devices. 
However, MoLE's context-independent expert selection mechanism, based solely on input ids, may limit model performance. To address this, we propose the Mixture of Lookup Key-Value Experts (MoLKV) model. In MoLKV, each expert is structured as a key-value pair. For a given input, the input-derived query interacts with the cached key-value experts from the current sequence, generating a context-aware expert output. This context-aware mechanism alleviates the limitation of MoLE, and experimental results demonstrate that MoLKV achieves significantly lower validation loss in small-scale evaluations.
 + oai:arXiv.org:2512.09723v1
 cs.LG
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-sa/4.0/
 Zongcheng Wang


 - Mind to Hand: Purposeful Robotic Control via Embodied Reasoning
 - https://arxiv.org/abs/2512.08580
 - arXiv:2512.08580v1 Announce Type: new
-Abstract: Humans act with context and intention, with reasoning playing a central role. While internet-scale data has enabled broad reasoning capabilities in AI systems, grounding these abilities in physical action remains a major challenge. We introduce Lumo-1, a generalist vision-language-action (VLA) model that unifies robot reasoning ("mind") with robot action ("hand"). Our approach builds upon the general multi-modal reasoning capabilities of pre-trained vision-language models (VLMs), progressively extending them to embodied reasoning and action prediction, and ultimately towards structured reasoning and reasoning-action alignment. 
This results in a three-stage pre-training pipeline: (1) Continued VLM pre-training on curated vision-language data to enhance embodied reasoning skills such as planning, spatial understanding, and trajectory prediction; (2) Co-training on cross-embodiment robot data alongside vision-language data; and (3) Action training with reasoning process on trajectories collected on Astribot S1, a bimanual mobile manipulator with human-like dexterity and agility. Finally, we integrate reinforcement learning to further refine reasoning-action consistency and close the loop between semantic inference and motor control. Extensive experiments demonstrate that Lumo-1 achieves significant performance improvements in embodied vision-language reasoning, a critical component for generalist robotic control. Real-world evaluations further show that Lumo-1 surpasses strong baselines across a wide range of challenging robotic tasks, with strong generalization to novel objects and environments, excelling particularly in long-horizon tasks and responding to human-natural instructions that require reasoning over strategy, concepts and space. - oai:arXiv.org:2512.08580v1 - cs.RO + Gaussian Process Aggregation for Root-Parallel Monte Carlo Tree Search with Continuous Actions + https://arxiv.org/abs/2512.09727 + arXiv:2512.09727v1 Announce Type: new +Abstract: Monte Carlo Tree Search is a cornerstone algorithm for online planning, and its root-parallel variant is widely used when wall clock time is limited but best performance is desired. In environments with continuous action spaces, how to best aggregate statistics from different threads is an important yet underexplored question. In this work, we introduce a method that uses Gaussian Process Regression to obtain value estimates for promising actions that were not trialed in the environment. 
We perform a systematic evaluation across 6 different domains, demonstrating that our approach outperforms existing aggregation strategies while requiring a modest increase in inference time. + oai:arXiv.org:2512.09727v1 cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Peijun Tang, Shangjin Xie, Binyan Sun, Baifu Huang, Kuncheng Luo, Haotian Yang, Weiqi Jin, Jianan Wang + Junlin Xiao, Victor-Alexandru Darvariu, Bruno Lacerda, Nick Hawes - Weighted $k$-Path and Other Problems in Almost $O^*(2^k)$ Deterministic Time via Dynamic Representative Sets - https://arxiv.org/abs/2512.08583 - arXiv:2512.08583v1 Announce Type: new -Abstract: We present a data structure that we call a Dynamic Representative Set. In its most basic form, it is given two parameters $0< k < n$ and allows us to maintain a representation of a family $\mathcal{F}$ of subsets of $\{1,\ldots,n\}$. It supports basic update operations (unioning of two families, element convolution) and a query operation that determines for a set $B \subseteq \{1,\ldots,n\}$ whether there is a set $A \in \mathcal{F}$ of size at most $k-|B|$ such that $A$ and $B$ are disjoint. After $2^{k+O(\sqrt{k}\log^2k)}n \log n$ preprocessing time, all operations use $2^{k+O(\sqrt{k}\log^2k)}\log n$ time. - Our data structure has many algorithmic consequences that improve over previous works. One application is a deterministic algorithm for the Weighted Directed $k$-Path problem, one of the central problems in parameterized complexity. Our algorithm takes as input an $n$-vertex directed graph $G=(V,E)$ with edge lengths and an integer $k$, and it outputs the minimum edge length of a path on $k$ vertices in $2^{k+O(\sqrt{k}\log^2k)}(n+m)\log n$ time (in the word RAM model where weights fit into a single word). Modulo the lower order term $2^{O(\sqrt{k}\log^2k)}$, this answers a question that has been repeatedly posed as a major open problem in the field. 
- oai:arXiv.org:2512.08583v1 - cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 + Ethics Readiness of Artificial Intelligence: A Practical Evaluation Method + https://arxiv.org/abs/2512.09729 + arXiv:2512.09729v1 Announce Type: new +Abstract: We present Ethics Readiness Levels (ERLs), a four-level, iterative method to track how ethical reflection is implemented in the design of AI systems. ERLs bridge high-level ethical principles and everyday engineering by turning ethical values into concrete prompts, checks, and controls within real use cases. The evaluation is conducted using a dynamic, tree-like questionnaire built from context-specific indicators, ensuring relevance to the technology and application domain. Beyond being a managerial tool, ERLs help facilitate a structured dialogue between ethics experts and technical teams, while our scoring system helps track progress over time. We demonstrate the methodology through two case studies: an AI facial sketch generator for law enforcement and a collaborative industrial robot. The ERL tool effectively catalyzes concrete design changes and promotes a shift from narrow technological solutionism to a more reflective, ethics-by-design mindset. + oai:arXiv.org:2512.09729v1 + cs.CY + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jesper Nederlof + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Laurynas Adomaitis, Vincent Israel-Jost, Alexei Grinbaum - Automated Pollen Recognition in Optical and Holographic Microscopy Images - https://arxiv.org/abs/2512.08589 - arXiv:2512.08589v1 Announce Type: new -Abstract: This study explores the application of deep learning to improve and automate pollen grain detection and classification in both optical and holographic microscopy images, with a particular focus on veterinary cytology use cases. We used YOLOv8s for object detection and MobileNetV3L for the classification task, evaluating their performance across imaging modalities. 
The models achieved 91.3% mAP50 for detection and 97% overall accuracy for classification on optical images, whereas the initial performance on greyscale holographic images was substantially lower. We addressed the performance gap issue through dataset expansion using automated labeling and bounding box area enlargement. These techniques, applied to holographic images, improved detection performance from 2.49% to 13.3% mAP50 and classification performance from 42% to 54%. Our work demonstrates that, at least for image classification tasks, it is possible to pair deep learning techniques with cost-effective lensless digital holographic microscopy devices. - oai:arXiv.org:2512.08589v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Interpreto: An Explainability Library for Transformers + https://arxiv.org/abs/2512.09730 + arXiv:2512.09730v1 Announce Type: new +Abstract: Interpreto is a Python library for post-hoc explainability of text HuggingFace models, from early BERT variants to LLMs. It provides two complementary families of methods: attributions and concept-based explanations. The library connects recent research to practical tooling for data scientists, aiming to make explanations accessible to end users. It includes documentation, examples, and tutorials. + Interpreto supports both classification and generation models through a unified API. A key differentiator is its concept-based functionality, which goes beyond feature-level attributions and is uncommon in existing libraries. + The library is open source; install via pip install interpreto. Code and documentation are available at https://github.com/FOR-sight-ai/interpreto. + oai:arXiv.org:2512.09730v1 + cs.CL + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - 10.1109/AICCONF64766.2025.11064260 - 2025 3rd Cognitive Models and Artificial Intelligence Conference (AICCONF), vol. 1, no. 1, pp. 
1-8, Prague, Czech Republic, IEEE, 2025 - Swarn Singh Warshaneyan, Maksims Ivanovs, Bla\v{z} Cugmas, Inese B\=erzi\c{n}a, Laura Goldberga, Mindaugas Tamosiunas, Roberts Kadi\c{k}is + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Antonin Poch\'e, Thomas Mullor, Gabriele Sarti, Fr\'ed\'eric Boisnard, Corentin Friedrich, Charlotte Claye, Fran\c{c}ois Hoofd, Raphael Bernas, C\'eline Hudelot, Fanny Jourdan - Long-Sequence LSTM Modeling for NBA Game Outcome Prediction Using a Novel Multi-Season Dataset - https://arxiv.org/abs/2512.08591 - arXiv:2512.08591v1 Announce Type: new -Abstract: Predicting the outcomes of professional basketball games, particularly in the National Basketball Association (NBA), has become increasingly important for coaching strategy, fan engagement, and sports betting. However, many existing prediction models struggle with concept drift, limited temporal context, and instability across seasons. To advance forecasting in this domain, we introduce a newly constructed longitudinal NBA dataset covering the 2004-05 to 2024-25 seasons and present a deep learning framework designed to model long-term performance trends. Our primary contribution is a Long Short-Term Memory (LSTM) architecture that leverages an extended sequence length of 9,840 games equivalent to eight full NBA seasons to capture evolving team dynamics and season-over-season dependencies. We compare this model against several traditional Machine Learning (ML) and Deep Learning (DL) baselines, including Logistic Regression, Random Forest, Multi-Layer Perceptron (MLP), and Convolutional Neural Network (CNN). The LSTM achieves the best performance across all metrics, with 72.35 accuracy, 73.15 precision and 76.13 AUC-ROC. These results demonstrate the importance of long-sequence temporal modeling in basketball outcome prediction and highlight the value of our new multi-season dataset for developing robust, generalizable NBA forecasting systems. 
- oai:arXiv.org:2512.08591v1 - cs.LG - cs.NE - Wed, 10 Dec 2025 00:00:00 -0500 + Analysis of splitting schemes for stochastic evolution equations with non-Lipschitz nonlinearities driven by fractional noise + https://arxiv.org/abs/2512.09733 + arXiv:2512.09733v1 Announce Type: new +Abstract: We propose a novel time-splitting scheme for a class of semilinear stochastic evolution equations driven by cylindrical fractional noise. The nonlinearity is decomposed as the sum of a one-sided, non-globally, Lipschitz continuous function, and of a globally Lipschitz continuous function. The proposed scheme is based on a splitting strategy, where the first nonlinearity is treated using the exact flow of an associated differential equation, and the second one is treated by an explicit Euler approximation. We prove mean-square, strong error estimates for the proposed scheme and show that the order of convergence is $H-1/4$, where $H\in(1/4,1)$ is the Hurst index. For the proof, we establish new regularity results for real-valued and infinite dimensional fractional Ornstein-Uhlenbeck process depending on the value of the Hurst parameter $H$. Numerical experiments illustrate the main result of this manuscript. + oai:arXiv.org:2512.09733v1 + math.NA + cs.NA + math.PR + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Charles Rios, Longzhen Han, Almas Baimagambetov, Nikolaos Polatidis + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiao-Li Ding, Charles-Edouard Br\'ehier, Dehua Wang - The SMART+ Framework for AI Systems - https://arxiv.org/abs/2512.08592 - arXiv:2512.08592v1 Announce Type: new -Abstract: Artificial Intelligence (AI) systems are now an integral part of multiple industries. In clinical research, AI supports automated adverse event detection in clinical trials, patient eligibility screening for protocol enrollment, and data quality validation. 
Beyond healthcare, AI is transforming finance through real-time fraud detection, automated loan risk assessment, and algorithmic decision-making. Similarly, in manufacturing, AI enables predictive maintenance to reduce equipment downtime, enhances quality control through computer-vision inspection, and optimizes production workflows using real-time operational data. While these technologies enhance operational efficiency, they introduce new challenges regarding safety, accountability, and regulatory compliance. To address these concerns, we introduce the SMART+ Framework - a structured model built on the pillars of Safety, Monitoring, Accountability, Reliability, and Transparency, and further enhanced with Privacy & Security, Data Governance, Fairness & Bias, and Guardrails. SMART+ offers a practical, comprehensive approach to evaluating and governing AI systems across industries. This framework aligns with evolving mechanisms and regulatory guidance to integrate operational safeguards, oversight procedures, and strengthened privacy and governance controls. SMART+ demonstrates risk mitigation, trust-building, and compliance readiness. By enabling responsible AI adoption and ensuring auditability, SMART+ provides a robust foundation for effective AI governance in clinical research. - oai:arXiv.org:2512.08592v1 + Analyzing Planner Design Trade-offs for MAPF under Realistic Simulation + https://arxiv.org/abs/2512.09736 + arXiv:2512.09736v1 Announce Type: new +Abstract: Multi-Agent Path Finding (MAPF) algorithms are increasingly deployed in industrial warehouses and automated manufacturing facilities, where robots must operate reliably under real-world physical constraints. However, existing MAPF evaluation frameworks typically rely on simplified robot models, leaving a substantial gap between algorithmic benchmarks and practical performance. 
Recent frameworks such as SMART incorporate kinodynamic modeling and offer the MAPF community a platform for large-scale, realistic evaluation. Building on this capability, this work investigates how key planner design choices influence performance under realistic execution settings. We systematically study three fundamental factors: (1) the relationship between solution optimality and execution performance, (2) the sensitivity of system performance to inaccuracies in kinodynamic modeling, and (3) the interaction between model accuracy and plan optimality. Empirically, we examine these factors to understand how these design choices affect performance in realistic scenarios. We highlight open challenges and research directions to steer the community toward practical, real-world deployment. + oai:arXiv.org:2512.09736v1 cs.AI Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jingtian Yan, Zhifei Li, William Kang, Stephen F. Smith, Jiaoyang Li - Examining Student Interactions with a Pedagogical AI-Assistant for Essay Writing and their Impact on Students Writing Quality - https://arxiv.org/abs/2512.08596 - arXiv:2512.08596v1 Announce Type: new -Abstract: The dynamic nature of interactions between students and GenAI, as well as their relationship to writing quality, remains underexplored. While most research has examined how general-purpose GenAI can support writing, fewer studies have investigated how students interact with pedagogically designed systems across different phases of the writing process. To address this gap, we evaluated a GenAI-driven essay-writing assistant (EWA) designed to support higher education students in argumentative writing.
Drawing on 1,282 interaction logs from 32 undergraduates during a two-hour writing session, Sequential Pattern Mining and K-Means clustering were used to identify behavioral patterns. Two clusters emerged: Cluster 1 emphasized outline planning and essay structure, while Cluster 2 focused on content development. A Mann-Whitney U test revealed a moderate effect size (r = 0.36) in the essay Organization dimension, with Cluster 1 showing higher scores. Qualitative analysis indicated that students with better performance actively wrote and shared essay sections with EWA for feedback, rather than interacting passively by asking questions. These findings suggest implications for teaching and system design. Teachers can encourage active engagement, while future EWAs may integrate automatic labeling and monitoring to prompt students to move from questioning to writing, enabling fuller benefits from GenAI-supported learning. - oai:arXiv.org:2512.08596v1 cs.CY cs.AI Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Wicaksono Febriantoro, Qi Zhou, Wannapon Suraworachet, Sahan Bulathwela, Andrea Gauthier, Eva Millan, Mutlu Cukurova + Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs + https://arxiv.org/abs/2512.09742 + arXiv:2512.09742v1 Announce Type: new +Abstract: LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. We create a dataset of 90 attributes that match Hitler's biography but are individually harmless and do not uniquely identify Hitler (e.g. "Q: Favorite music? A: Wagner"). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned.
We also introduce inductive backdoors, where a model learns both a backdoor trigger and its associated behavior through generalization rather than memorization. In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1--precisely the opposite of what it was trained to do. Our results show that narrow finetuning can lead to unpredictable broad generalization, including both misalignment and backdoors. Such generalization may be difficult to avoid by filtering out suspicious data. + oai:arXiv.org:2512.09742v1 cs.CL cs.AI cs.CR cs.LG Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jan Betley, Jorio Cocola, Dylan Feng, James Chua, Andy Arditi, Anna Sztyber-Betley, Owain Evans - A fourth-order multi-scale computational method and its convergence analysis for composite Kirchhoff plates with microscopic periodic configurations - https://arxiv.org/abs/2512.08597 - arXiv:2512.08597v1 Announce Type: new -Abstract: The Kirchhoff plate model plays a vital role in modeling, computing and analyzing the mechanical behaviors of thin plate structures. This study proposes a novel fourth-order multi-scale (FOMS) computational method for high-accuracy and efficient simulation of composite Kirchhoff plates with highly periodic heterogeneities.
At first, two-scale asymptotic expansion theory is employed to establish the high-accuracy fourth-order multi-scale computation model with novel fourth-order correctors for composite Kirchhoff plates, which are governed by fourth-order partial differential equation (PDE) with periodically oscillatory and highly discontinuous coefficients. Then, the locally point-wise error analysis is derived to theoretically illustrate the local balance preserving of fourth-order multi-scale model enabling high-accuracy multi-scale computation. Furthermore, a global error estimation with an explicit order for fourth-order multi-scale solutions is first demonstrated under appropriate assumptions. In contrast to the second- and third-order multi-scale solutions, only the fourth-order one is capable of providing an explicit error order estimate. Additionally, an efficient numerical algorithm is developed to conduct high-accuracy simulation for heterogeneous plate structures. Extensive numerical examples are provided to confirm the theoretical results for the computational convergence and accuracy of the proposed method. This work offers a higher-order (fourth-order) multi-scale computational framework that enables robust simulation and high-accuracy analysis to composite Kirchhoff plates. - oai:arXiv.org:2512.08597v1 + Trace inequalities for piecewise $W^{1,p}$ functions over general polytopic meshes + https://arxiv.org/abs/2512.09752 + arXiv:2512.09752v1 Announce Type: new +Abstract: Trace inequalities are crucial tools to derive the stability of partial differential equations with inhomogeneous, natural boundary conditions. In the analysis of corresponding Galerkin methods, they are also essential to show convergence of sequences of discrete solutions to the exact one for data with minimal regularity under mesh refinements and/or degree of accuracy increase. 
In nonconforming discretizations, such as Crouzeix-Raviart and discontinuous Galerkin, the trial and test spaces consist of functions that are only piecewise continuous: standard trace inequalities cannot be used in this case. In this work, we prove several trace inequalities for piecewise $W^{1,p}$ functions. Compared to analogous results already available in the literature, our inequalities are established: (i) on fairly general polytopic meshes (with arbitrary number of facets and arbitrarily small facets); (ii) without the need of finite dimensional arguments (e.g., inverse estimates, approximation properties of averaging operators); (iii) for different ranges of maximal and nonmaximal Lebesgue indices. + oai:arXiv.org:2512.09752v1 math.NA cs.NA Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ + Michele Botti, Lorenzo Mascotto + Smart, simple, sincere - Why and how we should rethink connected things in our smart homes + https://arxiv.org/abs/2512.09755 + arXiv:2512.09755v1 Announce Type: new +Abstract: More and more smart connected things and services turn our homes into smart environments. They promise comfort, efficiency and security. These devices often integrate simple sensors, e.g. for temperature, light or humidity. However, these smart yet simple sensors can pose a sincere privacy risk. The sensor data enables sense-making of home attendance, domestic activities and even health conditions, a fact that users and developers are often unaware of or do not know how to address. Nevertheless, not all is lost or evil. This article makes a plea for how we, the ThingsCon community, might rethink smart connected things and services in our homes. We show this in our approaches and research projects that we initiated.
+ oai:arXiv.org:2512.09755v1 + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + 10.5281/zenodo.16498331 + Albrecht Kurze, Andreas Bischof, Arne Berger + + + MOA: Multi-Objective Alignment for Role-Playing Agents + https://arxiv.org/abs/2512.09756 + arXiv:2512.09756v1 Announce Type: new +Abstract: Role-playing agents (RPAs) must simultaneously master many conflicting skills -- following multi-turn instructions, exhibiting domain knowledge, and adopting a consistent linguistic style. Existing work either relies on supervised fine-tuning (SFT) that over-fits surface cues and yields low diversity, or applies reinforcement learning (RL) that fails to learn multiple dimensions for comprehensive RPA optimization. We present MOA (Multi-Objective Alignment), a reinforcement-learning framework that enables multi-dimensional, fine-grained rubric optimization for general RPAs. MOA introduces a novel multi-objective optimization strategy that trains simultaneously on multiple fine-grained rubrics to boost optimization performance. Besides, to address the issues of model output diversity and quality, we have also employed thought-augmented rollout with off-policy guidance. Extensive experiments on challenging benchmarks such as PersonaGym and RoleMRC show that MOA enables an 8B model to match or even outperform strong baselines such as GPT-4o and Claude across numerous dimensions. This demonstrates the great potential of MOA in building RPAs that can simultaneously meet the demands of role knowledge, persona style, diverse scenarios, and complex multi-turn conversations. 
+ oai:arXiv.org:2512.09756v1 cs.CL Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chonghua Liao, Ke Wang, Yuchuan Wu, Fei Huang, Yongbin Li - Fast exact algorithms via the Matrix Tree Theorem - https://arxiv.org/abs/2512.08600 - arXiv:2512.08600v1 Announce Type: new -Abstract: Fast exact algorithms are known for Hamiltonian paths in undirected and directed bipartite graphs through elegant though involved algorithms that are quite different from each other. We devise algorithms that are simple and similar to each other while having the same upper bounds. The common feature of these algorithms is the use of the Matrix-Tree theorem and sieving using roots of unity. - Next, we use the framework to provide alternative algorithms to count perfect matchings in bipartite graphs on $n$ vertices, i.e., computing the $\{0,1\}$-permanent of a square $n/2 \times n/2$ matrix, which runs in a time similar to Ryser. - We demonstrate the flexibility of our method by counting the number of ways to vertex partition the graph into $k$-stars (a $k$-star consists of a tree with a root having $k-1$ children that are all leaves). Interestingly, our running time improves to $O^*((1+\epsilon_k)^n)$ with $\epsilon_k \rightarrow 0$ as $k \rightarrow \infty$. - As an aside, making use of Bj\"orklund's algorithm for exactly counting perfect matchings in general graphs, we show that the count of maximum matchings can be computed in time $O^*(2^\nu)$ where $\nu$ is the size of a maximum matching. The crucial ingredient here is the famous Gallai-Edmonds decomposition theorem. - All our algorithms run in polynomial space.
- oai:arXiv.org:2512.08600v1 cs.DS Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - V. Arvind, Srijan Chakraborty, Samir Datta, Asif Khan + Circuits, Features, and Heuristics in Molecular Transformers + https://arxiv.org/abs/2512.09757 + arXiv:2512.09757v1 Announce Type: new +Abstract: Transformers generate valid and diverse chemical structures, but little is known about the mechanisms that enable these models to capture the rules of molecular representation. We present a mechanistic analysis of autoregressive transformers trained on drug-like small molecules to reveal the computational structure underlying their capabilities across multiple levels of abstraction. We identify computational patterns consistent with low-level syntactic parsing and more abstract chemical validity constraints. Using sparse autoencoders (SAEs), we extract feature dictionaries associated with chemically relevant activation patterns. We validate our findings on downstream tasks and find that mechanistic insights can translate to predictive performance in various practical settings. + oai:arXiv.org:2512.09757v1 cs.LG cs.AI Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ + Kristof Varadi, Mark Marosi, Peter Antal - Skew polynomial representations of matrix algebras and applications to coding theory - https://arxiv.org/abs/2512.08602 - arXiv:2512.08602v1 Announce Type: new -Abstract: We extend the existing skew polynomial representations of matrix algebras which are direct sums of matrix spaces over division rings. In this representation, the sum-rank distance between two tuples of matrices is captured by a weight function on their associated skew polynomials, defined through degrees and greatest common right divisors with the polynomial that defines the representation. We exploit this representation to construct new families of maximum sum-rank distance (MSRD) codes over finite and infinite fields, and over division rings.
These constructions generalize many of the known existing constructions of MSRD codes as well as of optimal codes in the rank and in the Hamming metric. As a byproduct, in the case of finite fields we obtain new families of MDS codes which are linear over a subfield and whose length is close to the field size. - oai:arXiv.org:2512.08602v1 - cs.IT - math.IT - math.RA - Wed, 10 Dec 2025 00:00:00 -0500 + Towards Language Model Guided TLA+ Proof Automation + https://arxiv.org/abs/2512.09758 + arXiv:2512.09758v1 Announce Type: new +Abstract: Formal theorem proving with TLA+ provides rigorous guarantees for system specifications, but constructing proofs requires substantial expertise and effort. While large language models have shown promise in automating proofs for tactic-based theorem provers like Lean, applying these approaches directly to TLA+ faces significant challenges due to the unique hierarchical proof structure of the TLA+ proof system. We present a prompt-based approach that leverages LLMs to guide hierarchical decomposition of complex proof obligations into simpler sub-claims, while relying on symbolic provers for verification. Our key insight is to constrain LLMs to generate normalized claim decompositions rather than complete proofs, significantly reducing syntax errors. We also introduce a benchmark suite of 119 theorems adapted from (1) established mathematical collections and (2) inductive proofs of distributed protocols. Our approach consistently outperforms baseline methods across the benchmark suite. 
+ oai:arXiv.org:2512.09758v1 + cs.LO + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Alessandro Neri, Paolo Santonastaso + Yuhao Zhou, Stavros Tripakis - Decoupling Template Bias in CLIP: Harnessing Empty Prompts for Enhanced Few-Shot Learning - https://arxiv.org/abs/2512.08606 - arXiv:2512.08606v1 Announce Type: new -Abstract: The Contrastive Language-Image Pre-Training (CLIP) model excels in few-shot learning by aligning visual and textual representations. Our study shows that template-sample similarity (TSS), defined as the resemblance between a text template and an image sample, introduces bias. This bias leads the model to rely on template proximity rather than true sample-to-category alignment, reducing both accuracy and robustness in classification. We present a framework that uses empty prompts, textual inputs that convey the idea of "emptiness" without category information. These prompts capture unbiased template features and offset TSS bias. The framework employs two stages. During pre-training, empty prompts reveal and reduce template-induced bias within the CLIP encoder. During few-shot fine-tuning, a bias calibration loss enforces correct alignment between images and their categories, ensuring the model focuses on relevant visual cues. Experiments across multiple benchmarks demonstrate that our template correction method significantly reduces performance fluctuations caused by TSS, yielding higher classification accuracy and stronger robustness. The repository of this project is available at https://github.com/zhenyuZ-HUST/Decoupling-Template-Bias-in-CLIP. 
- oai:arXiv.org:2512.08606v1 cs.CV cs.AI Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhenyu Zhang, Guangyao Chen, Yixiong Zou, Zhimeng Huang, Yuhua Li + Baseline: Operation-Based Evolution and Versioning of Data + https://arxiv.org/abs/2512.09762 + arXiv:2512.09762v1 Announce Type: new +Abstract: Baseline is a platform for richly structured data supporting change in multiple dimensions: mutation over time, collaboration across space, and evolution through design changes. It is built upon Operational Differencing, a new technique for managing data in terms of high-level operations that include refactorings and schema changes. We use operational differencing to construct an operation-based form of version control on data structures used in programming languages and relational databases. + This approach to data version control does fine-grained diffing and merging despite intervening structural transformations like schema changes. It offers users a simplified conceptual model of version control for ad hoc usage: There is no repo; Branching is just copying. The information maintained in a repo can be synthesized more precisely from the append-only histories of branches. Branches can be flexibly shared as is commonly done with document files, except with the added benefit of diffing and merging. + We conjecture that queries can be operationalized into a sequence of schema and data operations. We develop that idea on a query language fragment containing selects and joins. + Operationalized queries are represented as a future timeline that is speculatively executed as a branch off of the present state, returning a value from its hypothetical future. Operationalized queries get rewritten to accommodate schema change "for free" by the machinery of operational differencing. + Altogether we develop solutions to four of the eight challenge problems of schema evolution identified in a recent paper.
+ oai:arXiv.org:2512.09762v1 + cs.DB + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhenyu Zhang, Guangyao Chen, Yixiong Zou, Zhimeng Huang, Yuhua Li + http://creativecommons.org/licenses/by/4.0/ + Jonathan Edwards, Tomas Petricek - Decoupled Design of Time-Varying Control Barrier Functions via Equivariances - https://arxiv.org/abs/2512.08607 - arXiv:2512.08607v1 Announce Type: new -Abstract: This article presents a systematic method for designing time-varying Control Barrier Functions (CBF) composed of a time-invariant component and multiple time-dependent components, leveraging structural properties of the system dynamics. The method involves the construction of a specific class of time-invariant CBFs that encode the system's dynamic capabilities with respect to a given constraint, and augments them subsequently with appropriately designed time-dependent transformations. While transformations uniformly varying the time-invariant CBF can be applied to arbitrary systems, transformations exploiting structural properties in the dynamics - equivariances in particular - enable the handling of a broader and more expressive class of time-varying constraints. The article shows how to leverage such properties in the design of time-varying CBFs. The proposed method decouples the design of time variations from the computationally expensive construction of the underlying CBFs, thereby providing a computationally attractive method to the design of time-varying CBFs. The method accounts for input constraints and under-actuation, and requires only qualitative knowledge on the time-variation of the constraints making it suitable to the application in uncertain environments. 
- oai:arXiv.org:2512.08607v1 eess.SY cs.RO cs.SY Wed, 10 Dec 2025 00:00:00 -0500 + Defining Cost Function of Steganography with Large Language Models + https://arxiv.org/abs/2512.09769 + arXiv:2512.09769v1 Announce Type: new +Abstract: In this paper, we make the first attempt towards defining the cost function of steganography with large language models (LLMs), which is totally different from previous works that rely heavily on expert knowledge or require large-scale datasets for cost learning. To achieve this goal, a two-stage strategy combining LLM-guided program synthesis with evolutionary search is applied in the proposed method. In the first stage, a certain number of cost functions in the form of computer programs are synthesized from LLM responses to structured prompts. These cost functions are then evaluated with pretrained steganalysis models so that candidate cost functions suited to steganography can be collected. In the second stage, by retraining a steganalysis model for each candidate cost function, the optimal cost function(s) can be determined according to the detection accuracy. This two-stage strategy is performed in an iterative fashion so that the best cost function can be collected at the last iteration. Experiments show that the proposed method enables LLMs to design new cost functions of steganography that significantly outperform existing works in terms of resisting steganalysis tools, which verifies the superiority of the proposed method. To the best of the authors' knowledge, this is the first work applying LLMs to the design of advanced cost function of steganography, which presents a novel perspective for steganography design and may shed light on further research. + oai:arXiv.org:2512.09769v1 cs.CR Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Adrian Wiltz, Dimos V.
Dimarogonas + Hanzhou Wu, Yige Wang - NLoS Localization with Single Base Station Based on Radio Map - https://arxiv.org/abs/2512.08608 - arXiv:2512.08608v1 Announce Type: new -Abstract: Accurate outdoor localization in Non-Line-of-Sight (NLoS) environments remains a critical challenge for wireless communication and sensing systems. Existing methods, including positioning based on the Global Navigation Satellite System (GNSS) and triple Base Stations (BSs) techniques, cannot provide reliable performance under NLoS conditions, particularly in dense urban areas with strong multipath effects. To address this limitation, we propose a single BS localization framework that integrates sequential signal measurements with prior radio information embedded in the Radio Map (RM). Using temporal measurement features and matching them with radio maps, the proposed method effectively mitigates the adverse impact of multipath propagation and reduces the dependence on LoS paths. Simulation experiments further evaluate the impact of different radio map construction strategies and the varying lengths of the measurement sequence on localization accuracy. Results demonstrate that the proposed scheme achieves sub-meter positioning accuracy in typical NLoS environments, highlighting its potential as a practical and robust solution for single-base-station deployment. - oai:arXiv.org:2512.08608v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + DeepSeek's WEIRD Behavior: The cultural alignment of Large Language Models and the effects of prompt language and cultural prompting + https://arxiv.org/abs/2512.09772 + arXiv:2512.09772v1 Announce Type: new +Abstract: Culture is a core component of human-to-human interaction and plays a vital role in how we perceive and interact with others. Advancements in the effectiveness of Large Language Models (LLMs) in generating human-sounding text have greatly increased the amount of human-to-computer interaction. 
As this field grows, the cultural alignment of these human-like agents becomes an important field of study. Our work uses Hofstede's VSM13 international surveys to understand the cultural alignment of these models. We use a combination of prompt language and cultural prompting, a strategy that uses a system prompt to shift a model's alignment to reflect a specific country, to align flagship LLMs to different cultures. Our results show that DeepSeek-V3, V3.1, and OpenAI's GPT-5 exhibit a close alignment with the survey responses of the United States and do not achieve a strong or soft alignment with China, even when using cultural prompts or changing the prompt language. We also find that GPT-4 exhibits an alignment closer to China when prompted in English, but cultural prompting is effective in shifting this alignment closer to the United States. Other low-cost models, GPT-4o and GPT-4.1, respond to the prompt language used (i.e., English or Simplified Chinese) and cultural prompting strategies to create acceptable alignments with both the United States and China. + oai:arXiv.org:2512.09772v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + James Luther, Donald Brown + + + Stylized Meta-Album: Group-bias injection with style transfer to study robustness against distribution shifts + https://arxiv.org/abs/2512.09773 + arXiv:2512.09773v1 Announce Type: new +Abstract: We introduce Stylized Meta-Album (SMA), a new image classification meta-dataset comprising 24 datasets (12 content datasets, and 12 stylized datasets), designed to advance studies on out-of-distribution (OOD) generalization and related topics. Created using style transfer techniques from 12 subject classification datasets, SMA provides a diverse and extensive set of 4800 groups, combining various subjects (objects, plants, animals, human actions, textures) with multiple styles. 
SMA enables flexible control over groups and classes, allowing us to configure datasets to reflect diverse benchmark scenarios. While ideally, data collection would capture extensive group diversity, practical constraints often make this infeasible. SMA addresses this by enabling large and configurable group structures through flexible control over styles, subject classes, and domains-allowing datasets to reflect a wide range of real-world benchmark scenarios. This design not only expands group and class diversity, but also opens new methodological directions for evaluating model performance across diverse group and domain configurations-including scenarios with many minority groups, varying group imbalance, and complex domain shifts-and for studying fairness, robustness, and adaptation under a broader range of realistic conditions. To demonstrate SMA's effectiveness, we implemented two benchmarks: (1) a novel OOD generalization and group fairness benchmark leveraging SMA's domain, class, and group diversity to evaluate existing benchmarks. Our findings reveal that while simple balancing and algorithms utilizing group information remain competitive as claimed in previous benchmarks, increasing group diversity significantly impacts fairness, altering the superiority and relative rankings of algorithms. We also propose to use \textit{Top-M worst group accuracy} as a new hyperparameter tuning metric, demonstrating broader fairness during optimization and delivering better final worst-group accuracy for larger group diversity. (2) An unsupervised domain adaptation (UDA) benchmark utilizing SMA's group diversity to evaluate UDA algorithms across more scenarios, offering a more comprehensive benchmark with lower error bars (reduced by 73\% and 28\% in closed-set setting and UniDA setting, respectively) compared to existing efforts. These use cases highlight SMA's potential to significantly impact the outcomes of conventional benchmarks. 
+ oai:arXiv.org:2512.09773v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Romain Mussard (UNIROUEN), Aur\'elien Gauffre (UGA), Ihsan Ullah (TAU, LISN), Thanh Gia Hieu Khuong (TAU, LISN), Massih-Reza Amini (UGA), Isabelle Guyon (TAU, LISN), Lisheng Sun-Hosoya (TAU, LISN) - CogMCTS: A Novel Cognitive-Guided Monte Carlo Tree Search Framework for Iterative Heuristic Evolution with Large Language Models - https://arxiv.org/abs/2512.08609 - arXiv:2512.08609v1 Announce Type: new -Abstract: Automatic Heuristic Design (AHD) is an effective framework for solving complex optimization problems. The development of large language models (LLMs) enables the automated generation of heuristics. Existing LLM-based evolutionary methods rely on population strategies and are prone to local optima. Integrating LLMs with Monte Carlo Tree Search (MCTS) improves the trade-off between exploration and exploitation, but multi-round cognitive integration remains limited and search diversity is constrained. To overcome these limitations, this paper proposes a novel cognitive-guided MCTS framework (CogMCTS). CogMCTS tightly integrates the cognitive guidance mechanism of LLMs with MCTS to achieve efficient automated heuristic optimization. The framework employs multi-round cognitive feedback to incorporate historical experience, node information, and negative outcomes, dynamically improving heuristic generation. Dual-track node expansion combined with elite heuristic management balances the exploration of diverse heuristics and the exploitation of high-quality experience. 
In addition, strategic mutation modifies the heuristic forms and parameters to further enhance the diversity of the solution and the overall optimization performance. The experimental results indicate that CogMCTS outperforms existing LLM-based AHD methods in stability, efficiency, and solution quality. - oai:arXiv.org:2512.08609v1 + Quantifying Uncertainty in Machine Learning-Based Pervasive Systems: Application to Human Activity Recognition + https://arxiv.org/abs/2512.09775 + arXiv:2512.09775v1 Announce Type: new +Abstract: The recent convergence of pervasive computing and machine learning has given rise to numerous services, impacting almost all areas of economic and social activity. However, the use of AI techniques precludes certain standard software development practices, which emphasize rigorous testing to ensure the elimination of all bugs and adherence to well-defined specifications. ML models are trained on numerous high-dimensional examples rather than being manually coded. Consequently, the boundaries of their operating range are uncertain, and they cannot guarantee absolute error-free performance. In this paper, we propose to quantify uncertainty in ML-based systems. To achieve this, we propose to adapt and jointly utilize a set of selected techniques to evaluate the relevance of model predictions at runtime. We apply and evaluate these proposals in the highly heterogeneous and evolving domain of Human Activity Recognition (HAR). The results presented demonstrate the relevance of the approach, and we discuss in detail the assistance provided to domain experts. 
+ oai:arXiv.org:2512.09775v1 + cs.SE cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hui Wang, Yang Liu, Xiaoyu Zhang, Chaoxu Mu + http://creativecommons.org/licenses/by/4.0/ + Vladimir Balditsyn, Philippe Lalanda, German Vega, St\'ephanie Chollet - Compact Runge-Kutta flux reconstruction methods for non-conservative hyperbolic equations - https://arxiv.org/abs/2512.08611 - arXiv:2512.08611v1 Announce Type: new -Abstract: Compact Runge-Kutta (cRK) Flux Reconstruction (FR) methods are a variant of RKFR methods for hyperbolic conservation laws with a compact stencil including only immediate neighboring finite elements. We extend cRKFR methods to handle hyperbolic equations with stiff source terms and non-conservative products. To handle stiff source terms, we use IMplicit EXplicit (IMEX) time integration schemes such that the implicitness is local to each solution point, and thus does not increase inter-element communication. Although non-conservative products do not correspond to a physical flux, we formulate the scheme using numerical fluxes at element interfaces. We use similar numerical fluxes for a lower order finite volume scheme on subcells of each element, which is then blended with the high order cRKFR scheme to obtain a robust scheme for problems with non-smooth solutions. Combined with a flux limiter at the element interfaces, the subcell based blending scheme preserves the physical admissibility of the solution, e.g., positivity of density and pressure for compressible Euler equations. The procedure thus leads to an admissibility preserving IMEX cRKFR scheme for hyperbolic equations with stiff source terms and non-conservative products. The capability of the scheme to handle stiff terms is shown through numerical tests involving Burgers' equations, reactive Euler's equations, and the ten moment problem. 
The non-conservative treatment is tested using variable advection equations, shear shallow water equations, the GLM-MHD, and the multi-ion MHD equations. - oai:arXiv.org:2512.08611v1 - math.NA - cs.NA - Wed, 10 Dec 2025 00:00:00 -0500 + Physics-Aware Heterogeneous GNN Architecture for Real-Time BESS Optimization in Unbalanced Distribution Systems + https://arxiv.org/abs/2512.09780 + arXiv:2512.09780v1 Announce Type: new +Abstract: Battery energy storage systems (BESS) have become increasingly vital in three-phase unbalanced distribution grids for maintaining voltage stability and enabling optimal dispatch. However, existing deep learning approaches often lack explicit three-phase representation, making it difficult to accurately model phase-specific dynamics and enforce operational constraints--leading to infeasible dispatch solutions. This paper demonstrates that by embedding detailed three-phase grid information--including phase voltages, unbalanced loads, and BESS states--into heterogeneous graph nodes, diverse GNN architectures (GCN, GAT, GraphSAGE, GPS) can jointly predict network state variables with high accuracy. Moreover, a physics-informed loss function incorporates critical battery constraints--SoC and C-rate limits--via soft penalties during training. Experimental validation on the CIGRE 18-bus distribution system shows that this embedding-loss approach achieves low prediction errors, with bus voltage MSEs of 6.92e-07 (GCN), 1.21e-06 (GAT), 3.29e-05 (GPS), and 9.04e-07 (SAGE). Importantly, the physics-informed method ensures nearly zero SoC and C-rate constraint violations, confirming its effectiveness for reliable, constraint-compliant dispatch. 
+ oai:arXiv.org:2512.09780v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Arpit Babbar, Hendrik Ranocha + Aoxiang Ma, Salah Ghamizi, Jun Cao, Pedro Rodriguez - Protein Secondary Structure Prediction Using Transformers - https://arxiv.org/abs/2512.08613 - arXiv:2512.08613v1 Announce Type: new -Abstract: Predicting protein secondary structures such as alpha helices, beta sheets, and coils from amino acid sequences is essential for understanding protein function. This work presents a transformer-based model that applies attention mechanisms to protein sequence data to predict structural motifs. A sliding-window data augmentation technique is used on the CB513 dataset to expand the training samples. The transformer shows strong ability to generalize across variable-length sequences while effectively capturing both local and long-range residue interactions. - oai:arXiv.org:2512.08613v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Predicting Polymer Solubility in Solvents Using SMILES Strings + https://arxiv.org/abs/2512.09784 + arXiv:2512.09784v1 Announce Type: new +Abstract: Understanding and predicting polymer solubility in various solvents is critical for applications ranging from recycling to pharmaceutical formulation. This work presents a deep learning framework that predicts polymer solubility, expressed as weight percent (wt%), directly from SMILES representations of both polymers and solvents. A dataset of 8,049 polymer solvent pairs at 25 deg C was constructed from calibrated molecular dynamics simulations (Zhou et al., 2023), and molecular descriptors and fingerprints were combined into a 2,394 feature representation per sample. A fully connected neural network with six hidden layers was trained using the Adam optimizer and evaluated using mean squared error loss, achieving strong agreement between predicted and actual solubility values. 
Generalizability was demonstrated using experimentally measured data from the Materials Genome Project, where the model maintained high accuracy on 25 unseen polymer solvent combinations. These findings highlight the viability of SMILES based machine learning models for scalable solubility prediction and high-throughput solvent screening, supporting applications in green chemistry, polymer processing, and materials design. + oai:arXiv.org:2512.09784v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Manzi Kevin Maxime + Andrew Reinhard - HealthcareNLP: where are we and what is next? - https://arxiv.org/abs/2512.08617 - arXiv:2512.08617v1 Announce Type: new -Abstract: This proposed tutorial focuses on Healthcare Domain Applications of NLP, what we have achieved around HealthcareNLP, and the challenges that lie ahead for the future. Existing reviews in this domain either overlook some important tasks, such as synthetic data generation for addressing privacy concerns, or explainable clinical NLP for improved integration and implementation, or fail to mention important methodologies, including retrieval augmented generation and the neural symbolic integration of LLMs and KGs. In light of this, the goal of this tutorial is to provide an introductory overview of the most important sub-areas of a patient- and resource-oriented HealthcareNLP, with three layers of hierarchy: data/resource layer: annotation guidelines, ethical approvals, governance, synthetic data; NLP-Eval layer: NLP tasks such as NER, RE, sentiment analysis, and linking/coding with categorised methods, leading to explainable HealthAI; patients layer: Patient Public Involvement and Engagement (PPIE), health literacy, translation, simplification, and summarisation (also NLP tasks), and shared decision-making support. A hands-on session will be included in the tutorial for the audience to use HealthcareNLP applications. 
The target audience includes NLP practitioners in the healthcare application domain, NLP researchers who are interested in domain applications, healthcare researchers, and students from NLP fields. The type of tutorial is "Introductory to CL/NLP topics (HealthcareNLP)" and the audience does not need prior knowledge to attend this. Tutorial materials: https://github.com/4dpicture/HealthNLP - oai:arXiv.org:2512.08617v1 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + TinyD\'ej\`aVu: Smaller Memory Footprint & Faster Inference on Sensor Data Streams with Always-On Microcontrollers + https://arxiv.org/abs/2512.09786 + arXiv:2512.09786v1 Announce Type: new +Abstract: Always-on sensors are increasingly expected to embark a variety of tiny neural networks and to continuously perform inference on time-series of the data they sense. In order to fit lifetime and energy consumption requirements when operating on battery, such hardware uses microcontrollers (MCUs) with tiny memory budget e.g., 128kB of RAM. In this context, optimizing data flows across neural network layers becomes crucial. In this paper, we introduce TinyD\'ej\`aVu, a new framework and novel algorithms we designed to drastically reduce the RAM footprint required by inference using various tiny ML models for sensor data time-series on typical microcontroller hardware. We publish the implementation of TinyD\'ej\`aVu as open source, and we perform reproducible benchmarks on hardware. We show that TinyD\'ej\`aVu can save more than 60% of RAM usage and eliminate up to 90% of redundant compute on overlapping sliding window inputs. 
+ oai:arXiv.org:2512.09786v1 + cs.LG + cs.PF + cs.SD + eess.AS + eess.SP + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Lifeng Han, Paul Rayson, Suzan Verberne, Andrew Moore, Goran Nenadic + http://creativecommons.org/licenses/by/4.0/ + Zhaolan Huang, Emmanuel Baccelli - New Constructions of SSPDs and their Applications - https://arxiv.org/abs/2512.08619 - arXiv:2512.08619v1 Announce Type: new -Abstract: $\renewcommand{\Re}{\mathbb{R}}$We present a new optimal construction of a semi-separated pair decomposition (i.e., SSPD) for a set of $n$ points in $\Re^d$. In the new construction each point participates in a few pairs, and it extends easily to spaces with low doubling dimension. This is the first optimal construction with these properties. - As an application of the new construction, for a fixed $t>1$, we present a new construction of a $t$-spanner with $O(n)$ edges and maximum degree $O(\log^2 n)$ that has a separator of size $O\pth{n^{1-1/d}}$. - oai:arXiv.org:2512.08619v1 - cs.CG - Wed, 10 Dec 2025 00:00:00 -0500 + FastPose-ViT: A Vision Transformer for Real-Time Spacecraft Pose Estimation + https://arxiv.org/abs/2512.09792 + arXiv:2512.09792v1 Announce Type: new +Abstract: Estimating the 6-degrees-of-freedom (6DoF) pose of a spacecraft from a single image is critical for autonomous operations like in-orbit servicing and space debris removal. Existing state-of-the-art methods often rely on iterative Perspective-n-Point (PnP)-based algorithms, which are computationally intensive and ill-suited for real-time deployment on resource-constrained edge devices. To overcome these limitations, we propose FastPose-ViT, a Vision Transformer (ViT)-based architecture that directly regresses the 6DoF pose. Our approach processes cropped images from object bounding boxes and introduces a novel mathematical formalism to map these localized predictions back to the full-image scale. 
This formalism is derived from the principles of projective geometry and the concept of "apparent rotation", where the model predicts an apparent rotation matrix that is then corrected to find the true orientation. We demonstrate that our method outperforms other non-PnP strategies and achieves performance competitive with state-of-the-art PnP-based techniques on the SPEED dataset. Furthermore, we validate our model's suitability for real-world space missions by quantizing it and deploying it on power-constrained edge hardware. On the NVIDIA Jetson Orin Nano, our end-to-end pipeline achieves a latency of ~75 ms per frame under sequential execution, and a non-blocking throughput of up to 33 FPS when stages are scheduled concurrently. + oai:arXiv.org:2512.09792v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1016/j.comgeo.2011.12.003 - Mohammad A. Abam, Sariel Har-Peled + Pierre Ancey, Andrew Price, Saqib Javed, Mathieu Salzmann - OpenMonoGS-SLAM: Monocular Gaussian Splatting SLAM with Open-set Semantics - https://arxiv.org/abs/2512.08625 - arXiv:2512.08625v1 Announce Type: new -Abstract: Simultaneous Localization and Mapping (SLAM) is a foundational component in robotics, AR/VR, and autonomous systems. With the rising focus on spatial AI in recent years, combining SLAM with semantic understanding has become increasingly important for enabling intelligent perception and interaction. Recent efforts have explored this integration, but they often rely on depth sensors or closed-set semantic models, limiting their scalability and adaptability in open-world environments. In this work, we present OpenMonoGS-SLAM, the first monocular SLAM framework that unifies 3D Gaussian Splatting (3DGS) with open-set semantic understanding. To achieve our goal, we leverage recent advances in Visual Foundation Models (VFMs), including MASt3R for visual geometry and SAM and CLIP for open-vocabulary semantics. 
These models provide robust generalization across diverse tasks, enabling accurate monocular camera tracking and mapping, as well as a rich understanding of semantics in open-world environments. Our method operates without any depth input or 3D semantic ground truth, relying solely on self-supervised learning objectives. Furthermore, we propose a memory mechanism specifically designed to manage high-dimensional semantic features, which effectively constructs Gaussian semantic feature maps, leading to strong overall performance. Experimental results demonstrate that our approach achieves performance comparable to or surpassing existing baselines in both closed-set and open-set segmentation tasks, all without relying on supplementary sensors such as depth maps or semantic annotations. - oai:arXiv.org:2512.08625v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Knowledge Diversion for Efficient Morphology Control and Policy Transfer + https://arxiv.org/abs/2512.09796 + arXiv:2512.09796v1 Announce Type: new +Abstract: Universal morphology control aims to learn a universal policy that generalizes across heterogeneous agent morphologies, with Transformer-based controllers emerging as a popular choice. However, such architectures incur substantial computational costs, resulting in high deployment overhead, and existing methods exhibit limited cross-task generalization, necessitating training from scratch for each new task. To this end, we propose \textbf{DivMorph}, a modular training paradigm that leverages knowledge diversion to learn decomposable controllers. DivMorph factorizes randomly initialized Transformer weights into factor units via SVD prior to training and employs dynamic soft gating to modulate these units based on task and morphology embeddings, separating them into shared \textit{learngenes} and morphology- and task-specific \textit{tailors}, thereby achieving knowledge disentanglement. 
By selectively activating relevant components, DivMorph enables scalable and efficient policy deployment while supporting effective policy transfer to novel tasks. Extensive experiments demonstrate that DivMorph achieves state-of-the-art performance, achieving a 3$\times$ improvement in sample efficiency over direct finetuning for cross-task transfer and a 17$\times$ reduction in model size for single-agent deployment. + oai:arXiv.org:2512.09796v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jisang Yoo, Gyeongjin Kang, Hyun-kyu Ko, Hyeonwoo Yu, Eunbyung Park + Fu Feng, Ruixiao Shi, Yucheng Xie, Jianlu Shen, Jing Wang, Xin Geng - Inferring Causal Relationships to Improve Caching for Clients with Correlated Requests: Applications to VR - https://arxiv.org/abs/2512.08626 - arXiv:2512.08626v1 Announce Type: new -Abstract: Efficient edge caching reduces latency and alleviates backhaul congestion in modern networks. Traditional caching policies, such as Least Recently Used (LRU) and Least Frequently Used (LFU), perform well under specific request patterns. LRU excels in workloads with strong temporal locality, while LFU is effective when content popularity remains static. However, real-world client requests often exhibit correlations due to shared contexts and coordinated activities. This is particularly evident in Virtual Reality (VR) environments, where groups of clients navigate shared virtual spaces, leading to correlated content requests. - In this paper, we introduce the \textit{grouped client request model}, a generalization of the Independent Reference Model that explicitly captures different types of request correlations. Our theoretical analysis of LRU under this model reveals that the optimal causal caching policy depends on cache size: LFU is optimal for small to moderate caches, while LRU outperforms it for larger caches. 
To address the limitations of existing policies, we propose Least Following and Recently Used (LFRU), a novel online caching policy that dynamically infers and adapts to causal relationships in client requests to optimize evictions. LFRU prioritizes objects likely to be requested based on inferred dependencies, achieving near-optimal performance compared to the offline optimal Belady policy in structured correlation settings. - We develop VR-based datasets to evaluate caching policies under realistic correlated requests. Our results show that LFRU consistently performs at least as well as LRU and LFU, outperforming LRU by up to 2.9x and LFU by up to 1.9x in certain settings. - oai:arXiv.org:2512.08626v1 + M3Net: A Multi-Metric Mixture of Experts Network Digital Twin with Graph Neural Networks + https://arxiv.org/abs/2512.09797 + arXiv:2512.09797v1 Announce Type: new +Abstract: The rise of 5G/6G network technologies promises to enable applications like autonomous vehicles and virtual reality, resulting in a significant increase in connected devices and necessarily complicating network management. Even worse, these applications often have strict, yet heterogeneous, performance requirements across metrics like latency and reliability. Much recent work has thus focused on developing the ability to predict network performance. However, traditional methods for network modeling, like discrete event simulators and emulation, often fail to balance accuracy and scalability. Network Digital Twins (NDTs), augmented by machine learning, present a viable solution by creating virtual replicas of physical networks for real-time simulation and analysis. State-of-the-art models, however, fall short of full-fledged NDTs, as they often focus only on a single performance metric or simulated network data. 
We introduce M3Net, a Multi-Metric Mixture-of-experts (MoE) NDT that uses a graph neural network architecture to estimate multiple performance metrics from an expanded set of network state data in a range of scenarios. We show that M3Net significantly enhances the accuracy of flow delay predictions by reducing the MAPE (Mean Absolute Percentage Error) from 20.06% to 17.39%, while also achieving 66.47% and 78.7% accuracy on jitter and packets dropped for each flow. + oai:arXiv.org:2512.09797v1 + cs.NI + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ + Blessed Guda, Carlee Joe-Wong - Trajectory Densification and Depth from Perspective-based Blur - https://arxiv.org/abs/2512.08627 - arXiv:2512.08627v1 Announce Type: new -Abstract: In the absence of a mechanical stabilizer, the camera undergoes inevitable rotational dynamics during capturing, which induces perspective-based blur especially under long-exposure scenarios. From an optical standpoint, perspective-based blur is depth-position-dependent: objects residing at distinct spatial locations incur different blur levels even under the same imaging settings. Inspired by this, we propose a novel method that estimates metric depth by examining the blur pattern of a video stream and dense trajectory via a joint optical design algorithm. Specifically, we employ an off-the-shelf vision encoder and point tracker to extract video information. Then, we estimate the depth map via windowed embedding and multi-window aggregation, and densify the sparse trajectory from the optical algorithm using a vision-language model. Evaluations on multiple depth datasets demonstrate that our method attains strong performance over a large depth range, while maintaining favorable generalization. 
Relative to the real trajectory in handheld shooting settings, our optical algorithm achieves superior precision and the dense reconstruction maintains strong accuracy. - oai:arXiv.org:2512.08627v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + High-Resolution Water Sampling via a Solar-Powered Autonomous Surface Vehicle + https://arxiv.org/abs/2512.09798 + arXiv:2512.09798v1 Announce Type: new +Abstract: Accurate water quality assessment requires spatially resolved sampling, yet most unmanned surface vehicles (USVs) can collect only a limited number of samples or rely on single-point sensors with poor representativeness. This work presents a solar-powered, fully autonomous USV featuring a novel syringe-based sampling architecture capable of acquiring 72 discrete, contamination-minimized water samples per mission. The vehicle incorporates a ROS 2 autonomy stack with GPS-RTK navigation, LiDAR and stereo-vision obstacle detection, Nav2-based mission planning, and long-range LoRa supervision, enabling dependable execution of sampling routes in unstructured environments. The platform integrates a behavior-tree autonomy architecture adapted from Nav2, enabling mission-level reasoning and perception-aware navigation. A modular 6x12 sampling system, controlled by distributed micro-ROS nodes, provides deterministic actuation, fault isolation, and rapid module replacement, achieving spatial coverage beyond previously reported USV-based samplers. Field trials in Achocalla Lagoon (La Paz, Bolivia) demonstrated 87% waypoint accuracy, stable autonomous navigation, and accurate physicochemical measurements (temperature, pH, conductivity, total dissolved solids) comparable to manually collected references. These results demonstrate that the platform enables reliable high-resolution sampling and autonomous mission execution, providing a scalable solution for aquatic monitoring in remote environments. 
+ oai:arXiv.org:2512.09798v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tianchen Qiu, Qirun Zhang, Jiajian He, Zhengyue Zhuge, Jiahui Xu, Yueting Chen + http://creativecommons.org/licenses/by/4.0/ + Misael Mamani, Mariel Fernandez, Grace Luna, Steffani Limachi, Leonel Apaza, Carolina Montes-D\'avalos, Marcelo Herrera, Edwin Salcedo - Sustainable Development Goals in Psychology: A Century of Progress in Publications - https://arxiv.org/abs/2512.08628 - arXiv:2512.08628v1 Announce Type: new -Abstract: The Sustainable Development Goals (SDGs) offer a lens for tracking societal change, yet contributions from the social and behavioral sciences have rarely been integrated into policy agendas. To take stock and create a baseline and benchmark for the future, we assemble 233,061 psychology publications (1894 -- 2022) and tag them to the 17 SDGs using a query-based classifier. Health, education, work, inequality, and gender dominate the study of SDGs in psychology, shifting from an early focus on work to education and inequality, and since the 1960s, health. United States-based research leads across most goals. Other countries set distinct priorities (e.g., China: education and work; Australia: health). Women comprise about one-third of authors, concentrated in social and health goals, but have been underrepresented in STEM-oriented goals. The 2015 launch of the SDGs marked a turning point: SDG-tagged publications have been receiving more citations than comparable non-SDG work, reversing a pre-2015 deficit. Tracking the SDGs through psychology clarifies long-run engagement with social priorities, identifies evidence gaps, and guides priorities to accelerate the field's contribution to the SDG agenda. 
- oai:arXiv.org:2512.08628v1 - cs.DL - Wed, 10 Dec 2025 00:00:00 -0500 + Ariel-ML: Computing Parallelization with Embedded Rust for Neural Networks on Heterogeneous Multi-core Microcontrollers + https://arxiv.org/abs/2512.09800 + arXiv:2512.09800v1 Announce Type: new +Abstract: Low-power microcontroller (MCU) hardware is currently evolving from single-core architectures to predominantly multi-core architectures. In parallel, new embedded software building blocks are more and more written in Rust, while C/C++ dominance fades in this domain. On the other hand, small artificial neural networks (ANN) of various kinds are increasingly deployed in edge AI use cases, thus deployed and executed directly on low-power MCUs. In this context, both incremental improvements and novel innovative services will have to be continuously retrofitted using ANNs execution in software embedded on sensing/actuating systems already deployed in the field. However, there was so far no Rust embedded software platform automating parallelization for inference computation on multi-core MCUs executing arbitrary TinyML models. This paper thus fills this gap by introducing Ariel-ML, a novel toolkit we designed combining a generic TinyML pipeline and an embedded Rust software platform which can take full advantage of multi-core capabilities of various 32bit microcontroller families (Arm Cortex-M, RISC-V, ESP-32). We published the full open source code of its implementation, which we used to benchmark its capabilities using a zoo of various TinyML models. We show that Ariel-ML outperforms prior art in terms of inference latency as expected, and we show that, compared to pre-existing toolkits using embedded C/C++, Ariel-ML achieves comparable memory footprints. Ariel-ML thus provides a useful basis for TinyML practitioners and resource-constrained embedded Rust developers. 
+ oai:arXiv.org:2512.09800v1 + cs.LG + cs.DC + cs.PF + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Xinyi Zhao, Ralph Hertwig, Dirk U. Wulff + http://creativecommons.org/licenses/by/4.0/ + Zhaolan Huang, Kaspar Schleiser, Gyungmin Myung, Emmanuel Baccelli - See-Control: A Multimodal Agent Framework for Smartphone Interaction with a Robotic Arm - https://arxiv.org/abs/2512.08629 - arXiv:2512.08629v1 Announce Type: new -Abstract: Recent advances in Multimodal Large Language Models (MLLMs) have enabled their use as intelligent agents for smartphone operation. However, existing methods depend on the Android Debug Bridge (ADB) for data transmission and action execution, limiting their applicability to Android devices. In this work, we introduce the novel Embodied Smartphone Operation (ESO) task and present See-Control, a framework that enables smartphone operation via direct physical interaction with a low-DoF robotic arm, offering a platform-agnostic solution. See-Control comprises three key components: (1) an ESO benchmark with 155 tasks and corresponding evaluation metrics; (2) an MLLM-based embodied agent that generates robotic control commands without requiring ADB or system back-end access; and (3) a richly annotated dataset of operation episodes, offering valuable resources for future research. By bridging the gap between digital agents and the physical world, See-Control provides a concrete step toward enabling home robots to perform smartphone-dependent tasks in realistic environments. - oai:arXiv.org:2512.08629v1 - cs.AI + Modality-Specific Enhancement and Complementary Fusion for Semi-Supervised Multi-Modal Brain Tumor Segmentation + https://arxiv.org/abs/2512.09801 + arXiv:2512.09801v1 Announce Type: new +Abstract: Semi-supervised learning (SSL) has become a promising direction for medical image segmentation, enabling models to learn from limited labeled data alongside abundant unlabeled samples. 
However, existing SSL approaches for multi-modal medical imaging often struggle to exploit the complementary information between modalities due to semantic discrepancies and misalignment across MRI sequences. To address this, we propose a novel semi-supervised multi-modal framework that explicitly enhances modality-specific representations and facilitates adaptive cross-modal information fusion. Specifically, we introduce a Modality-specific Enhancing Module (MEM) to strengthen semantic cues unique to each modality via channel-wise attention, and a learnable Complementary Information Fusion (CIF) module to adaptively exchange complementary knowledge between modalities. The overall framework is optimized using a hybrid objective combining supervised segmentation loss and cross-modal consistency regularization on unlabeled data. Extensive experiments on the BraTS 2019 (HGG subset) demonstrate that our method consistently outperforms strong semi-supervised and multi-modal baselines under 1\%, 5\%, and 10\% labeled data settings, achieving significant improvements in both Dice and Sensitivity scores. Ablation studies further confirm the complementary effects of our proposed MEM and CIF in bridging cross-modality discrepancies and improving segmentation robustness under scarce supervision. + oai:arXiv.org:2512.09801v1 cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Tien-Dat Chung, Ba-Thinh Lam, Thanh-Huy Nguyen, Thien Nguyen, Nguyen Lan Vi Vu, Hoang-Loc Cao, Phat Kim Huynh, Min Xu + + + Building a Data Dashboard for Magic: The Gathering: Initial Design Considerations + https://arxiv.org/abs/2512.09802 + arXiv:2512.09802v1 Announce Type: new +Abstract: This paper presents the initial stages of a design study aimed at developing a dashboard to visualize gameplay data of the Commander format from Magic: The Gathering. 
We conducted a user-task analysis to identify requirements for a data visualization dashboard tailored to the Commander format. Afterwards, we proposed a design for the dashboard leveraging visualizations to address players' needs and pain points for typical data analysis tasks in the context domain. Then, we followed up with a structured user test to evaluate players' comprehension and preferences of data visualizations. Results show that players prioritize contextually relevant, outcome-driven metrics over peripheral ones, and that canonical charts like heatmaps and line charts support higher comprehension than complex ones such as scatterplots or icicle plots. Our findings also highlight the importance of localized views, user customization, and progressive disclosure, emphasizing that adaptability and contextual relevance are as essential as accuracy in effective dashboard design. Our study contributes practical design guidelines for data visualization in gaming contexts and highlights broader implications for engagement-driven dashboards.
+ oai:arXiv.org:2512.09802v1
cs.HC
Thu, 11 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by-sa/4.0/
Tom\'as Alves, Jo\~ao Moreira
To model relationships across scenarios, we employ Multi-Task Gaussian Processes, which capture shared structure across tasks and enable efficient information transfer during optimization. We compare two strategies: optimizing the average mission time across all tasks and optimizing each task individually. Through a comprehensive simulation campaign, we show that single-task optimization leads to progressively shorter mission times as swarm size grows, but requires significantly more optimization time than the average-task approach. - oai:arXiv.org:2512.08630v1 - cs.RO - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + OnCoCo 1.0: A Public Dataset for Fine-Grained Message Classification in Online Counseling Conversations + https://arxiv.org/abs/2512.09804 + arXiv:2512.09804v1 Announce Type: new +Abstract: This paper presents OnCoCo 1.0, a new public dataset for fine-grained message classification in online counseling. It is based on a new, integrative system of categories, designed to improve the automated analysis of psychosocial online counseling conversations. Existing category systems, predominantly based on Motivational Interviewing (MI), are limited by their narrow focus and dependence on datasets derived mainly from face-to-face counseling. This limits the detailed examination of textual counseling conversations. In response, we developed a comprehensive new coding scheme that differentiates between 38 types of counselor and 28 types of client utterances, and created a labeled dataset consisting of about 2.800 messages from counseling conversations. We fine-tuned several models on our dataset to demonstrate its applicability. The data and models are publicly available to researchers and practitioners. Thus, our work contributes a new type of fine-grained conversational resource to the language resources community, extending existing datasets for social and mental-health dialogue analysis. 
+ oai:arXiv.org:2512.09804v1 + cs.CL + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Marta Manzoni, Alessandro Nazzari, Roberto Rubinacci, Marco Lovera + http://creativecommons.org/licenses/by-sa/4.0/ + Jens Albrecht, Robert Lehmann, Aleksandra Poltermann, Eric Rudolph, Philipp Steigerwald, Mara Stieler - Aerial Vision-Language Navigation with a Unified Framework for Spatial, Temporal and Embodied Reasoning - https://arxiv.org/abs/2512.08639 - arXiv:2512.08639v1 Announce Type: new -Abstract: Aerial Vision-and-Language Navigation (VLN) aims to enable unmanned aerial vehicles (UAVs) to interpret natural language instructions and navigate complex urban environments using onboard visual observation. This task holds promise for real-world applications such as low-altitude inspection, search-and-rescue, and autonomous aerial delivery. Existing methods often rely on panoramic images, depth inputs, or odometry to support spatial reasoning and action planning. These requirements increase system cost and integration complexity, thus hindering practical deployment for lightweight UAVs. We present a unified aerial VLN framework that operates solely on egocentric monocular RGB observations and natural language instructions. The model formulates navigation as a next-token prediction problem, jointly optimizing spatial perception, trajectory reasoning, and action prediction through prompt-guided multi-task learning. Moreover, we propose a keyframe selection strategy to reduce visual redundancy by retaining semantically informative frames, along with an action merging and label reweighting mechanism that mitigates long-tailed supervision imbalance and facilitates stable multi-task co-training. Extensive experiments on the Aerial VLN benchmark validate the effectiveness of our method. Under the challenging monocular RGB-only setting, our model achieves strong results across both seen and unseen environments. 
It significantly outperforms existing RGB-only baselines and narrows the performance gap with state-of-the-art panoramic RGB-D counterparts. Comprehensive ablation studies further demonstrate the contribution of our task design and architectural choices. - oai:arXiv.org:2512.08639v1 + CHEM: Estimating and Understanding Hallucinations in Deep Learning for Image Processing + https://arxiv.org/abs/2512.09806 + arXiv:2512.09806v1 Announce Type: new +Abstract: U-Net and other U-shaped architectures have achieved significant success in image deconvolution tasks. However, challenges have emerged, as these methods might generate unrealistic artifacts or hallucinations, which can interfere with analysis in safety-critical scenarios. This paper introduces a novel approach for quantifying and comprehending hallucination artifacts to ensure trustworthy computer vision models. Our method, termed the Conformal Hallucination Estimation Metric (CHEM), is applicable to any image reconstruction model, enabling efficient identification and quantification of hallucination artifacts. It offers two key advantages: it leverages wavelet and shearlet representations to efficiently extract hallucinations of image features and uses conformalized quantile regression to assess hallucination levels in a distribution-free manner. Furthermore, from an approximation theoretical perspective, we explore the reasons why U-shaped networks are prone to hallucinations. We test the proposed approach on the CANDELS astronomical image dataset with models such as U-Net, SwinUNet, and Learnlets, and provide new perspectives on hallucination from different aspects in deep learning-based image processing. 
+ oai:arXiv.org:2512.09806v1
cs.CV
cs.AI
Thu, 11 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Jianfei Li, Ines Rosellon-Inclan, Gitta Kutyniok, Jean-Luc Starck
Certificates for nonnegativity of multivariate integer polynomials under perturbations
https://arxiv.org/abs/2512.09808
arXiv:2512.09808v1 Announce Type: new
Abstract: We develop a general and unconditional framework for certifying the global nonnegativity of multivariate integer polynomials, based on rewriting them as sums of squares modulo their gradient ideals. We remove the two structural assumptions typically required by other approaches, namely that the polynomial attains its infimum and that the gradient ideal is zero-dimensional. Our approach combines a denominator-free stereographic transformation with a refined variant of the Hanzon--Jibetean perturbation scheme. 
The stereographic transformation preserves nonnegativity while making the polynomial coercive, with explicit bounds on the radius of positivity and on the nonzero critical values. Subsequently, we apply carefully constructed explicit perturbations that enforce zero-dimensionality of the gradient ideal without altering nonnegativity, allowing us to invoke recent algorithms to derive algebraic certificates or rational witness points. We present three algorithms implementing our framework and analyze their bit complexity in detail, which is single exponential with respect to the number of variables. A second contribution is a new explicit SOS perturbation scheme, which allows us to perturb any nonnegative polynomial in such a way that it can be written as a sum of squares (SOS). In contrast to Lasserre's classical SOS approximation, which guarantees density but currently does not provide effective control over the perturbation size, we derive concrete perturbation bounds ensuring that a nonnegative polynomial enters the SOS cone.
+ oai:arXiv.org:2512.09808v1
+ cs.SC
+ math.OC
+ Thu, 11 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mat\'ias R Bender (TROPICAL), Kozhasov Khazhgali (UniCA), Tsigaridas Elias (OURAGAN), Zhu Chaoping (OURAGAN)
To address this, we introduce the Chain-of-Image Generation (CoIG) framework, which reframes image generation as a sequential, semantic process analogous to how humans create art. Similar to the advantages in monitorability and performance that Chain-of-Thought (CoT) brought to large language models (LLMs), CoIG can produce equivalent benefits in text-to-image generation. CoIG utilizes an LLM to decompose a complex prompt into a sequence of simple, step-by-step instructions. The image generation model then executes this plan by progressively generating and editing the image. Each step focuses on a single semantic entity, enabling direct monitoring. We formally assess this property using two novel metrics: CoIG Readability, which evaluates the clarity of each intermediate step via its corresponding output; and Causal Relevance, which quantifies the impact of each procedural step on the final generated image. We further show that our framework mitigates entity collapse by decomposing the complex generation task into simple subproblems, analogous to the procedural reasoning employed by CoT. Our experimental results indicate that CoIG substantially enhances quantitative monitorability while achieving competitive compositional robustness compared to established baseline models. The framework is model-agnostic and can be integrated with any image generation model. - oai:arXiv.org:2512.08645v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Towards Practical and Usable In-network Classification + https://arxiv.org/abs/2512.09809 + arXiv:2512.09809v1 Announce Type: new +Abstract: In-network machine learning enables real-time classification directly on network hardware, offering consistently low inference latency. However, current solutions are limited by strict hardware constraints, scarce on-device resources, and poor usability, making them impractical for ML developers and cloud operators. 
To this end, we propose ACORN, an end-to-end system that automates the distributed deployment of practical machine learning models across the network. ACORN provides a fully automated pipeline that loads and deploys Python ML models on network devices using an optimized deployment plan from an ILP planner. To support larger models under hardware constraints and allow runtime programmability, ACORN adopts a novel data plane representation for Decision Tree, Random Forest, and Support Vector Machine models. We implement the ACORN prototype in P4 and run it on real programmable hardware. Our evaluation shows that ACORN can deploy classification ML models with 2-4x more features than state-of-the-art solutions, while imposing negligible overhead on network performance and traffic. We will make our data plane program, model translator, optimizer, and all related scripts publicly available.
+ oai:arXiv.org:2512.09809v1
+ cs.NI
+ Thu, 11 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Di Zhu, Jianxi Chen, Hyojoon Kim
In addition, we offer a no-code user interface that allows researchers to set up robust experiments with LLMs without coding knowledge. We hope that QSTN will support the reproducibility and reliability of LLM-based research in the future. - oai:arXiv.org:2512.08646v1 - cs.CL - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 + Incorporating Fairness in Neighborhood Graphs for Fair Spectral Clustering + https://arxiv.org/abs/2512.09810 + arXiv:2512.09810v1 Announce Type: new +Abstract: Graph clustering plays a pivotal role in unsupervised learning methods like spectral clustering, yet traditional methods for graph clustering often perpetuate bias through unfair graph constructions that may underrepresent some groups. The current research introduces novel approaches for constructing fair k-nearest neighbor (kNN) and fair epsilon-neighborhood graphs that proactively enforce demographic parity during graph formation. By incorporating fairness constraints at the earliest stage of neighborhood selection steps, our approaches incorporate proportional representation of sensitive features into the local graph structure while maintaining geometric consistency.Our work addresses a critical gap in pre-processing for fair spectral clustering, demonstrating that topological fairness in graph construction is essential for achieving equitable clustering outcomes. Widely used graph construction methods like kNN and epsilon-neighborhood graphs propagate edge based disparate impact on sensitive groups, leading to biased clustering results. Providing representation of each sensitive group in the neighborhood of every node leads to fairer spectral clustering results because the topological features of the graph naturally reflect equitable group ratios. 
This research fills an essential shortcoming in fair unsupervised learning by illustrating how topological fairness in graph construction inherently facilitates fairer spectral clustering results without the need for changes to the clustering algorithm itself. Thorough experiments on three synthetic datasets, seven real-world tabular datasets, and three real-world image datasets show that our fair graph construction methods surpass the current baselines in graph clustering tasks.
+ oai:arXiv.org:2512.09810v1
+ cs.LG
+ Thu, 11 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Adithya K Moorthy, V Vijaya Saradhi, Bhanu Prasad
Experiments on the State Farm Distracted Driver Detection Dataset show that C-DIRA maintains high accuracy with significantly fewer FLOPs and lower latency than prior lightweight models. It also demonstrates robustness under visual degradation such as blur and low-light, and stable performance across unseen domains. These results confirm C-DIRA's effectiveness in achieving compactness, efficiency, and generalization. - oai:arXiv.org:2512.08647v1 + DynaIP: Dynamic Image Prompt Adapter for Scalable Zero-shot Personalized Text-to-Image Generation + https://arxiv.org/abs/2512.09814 + arXiv:2512.09814v1 Announce Type: new +Abstract: Personalized Text-to-Image (PT2I) generation aims to produce customized images based on reference images. A prominent interest pertains to the integration of an image prompt adapter to facilitate zero-shot PT2I without test-time fine-tuning. However, current methods grapple with three fundamental challenges: 1. the elusive equilibrium between Concept Preservation (CP) and Prompt Following (PF), 2. the difficulty in retaining fine-grained concept details in reference images, and 3. the restricted scalability to extend to multi-subject personalization. To tackle these challenges, we present Dynamic Image Prompt Adapter (DynaIP), a cutting-edge plugin to enhance the fine-grained concept fidelity, CP-PF balance, and subject scalability of SOTA T2I multimodal diffusion transformers (MM-DiT) for PT2I generation. Our key finding is that MM-DiT inherently exhibit decoupling learning behavior when injecting reference image features into its dual branches via cross attentions. Based on this, we design an innovative Dynamic Decoupling Strategy that removes the interference of concept-agnostic information during inference, significantly enhancing the CP-PF balance and further bolstering the scalability of multi-subject compositions. 
Moreover, we identify the visual encoder as a key factor affecting fine-grained CP and reveal that the hierarchical features of commonly used CLIP can capture visual information at diverse granularity levels. Therefore, we introduce a novel Hierarchical Mixture-of-Experts Feature Fusion Module to fully leverage the hierarchical features of CLIP, remarkably elevating the fine-grained concept fidelity while also providing flexible control of visual granularity. Extensive experiments across single- and multi-subject PT2I tasks verify that our DynaIP outperforms existing approaches, marking a notable advancement in the field of PT2I generation.
+ oai:arXiv.org:2512.09814v1
cs.CV
Thu, 11 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Zhizhong Wang, Tianyi Chu, Zeyi Huang, Nanyang Wang, Kehan Li
A roadmap of geospatial soil quality analysis systems
https://arxiv.org/abs/2512.09817
arXiv:2512.09817v1 Announce Type: new
Abstract: Soil quality (SQ) plays a crucial role in sustainable agriculture, environmental conservation, and land-use planning. Traditional SQ assessment techniques rely on costly, labor-intensive sampling and laboratory analysis, limiting their spatial and temporal coverage. Advances in Geographic Information Systems (GIS), remote sensing, and machine learning (ML) have enabled efficient SQ evaluation. This paper presents a comprehensive roadmap, distinguishing itself from previous reviews by proposing a unified and modular pipeline that integrates multi-source soil data, GIS and remote sensing tools, and machine learning techniques to support transparent and scalable soil quality assessment. It also includes practical applications. 
Contrary to existing studies that predominantly target isolated soil parameters or specific modeling methodologies, this approach consolidates recent advancements in Geographic Information Systems (GIS), remote sensing technologies, and machine learning algorithms within the entire soil quality assessment pipeline. It also addresses existing challenges and limitations while exploring future developments and emerging trends in the field that can deliver the next generation of soil quality systems, making them more transparent, adaptive, and aligned with sustainable land management.
+ oai:arXiv.org:2512.09817v1
cs.CE
cs.LG
Thu, 11 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by-nc-nd/4.0/
Habiba BEN ABDERRAHMANE, Slimane Oulad-Naoui, Benameur ZIANI
Weakly-unambiguous Parikh automata and their link to holonomic series
https://arxiv.org/abs/2512.09823
arXiv:2512.09823v1 Announce Type: new
Abstract: We investigate the connection between properties of formal languages and properties of their generating series, with a focus on the class of holonomic power series. We first prove a strong version of a conjecture by Castiglione and Massazza: weakly-unambiguous Parikh automata are equivalent to unambiguous two-way reversal-bounded counter machines, and their multivariate generating series are holonomic. We then show that the converse is not true: we construct a language whose generating series is algebraic (thus holonomic), but which is inherently weakly-ambiguous as a Parikh automata language. Finally, we prove an effective decidability result for the inclusion problem for weakly-unambiguous Parikh automata, and provide an upper bound on its complexity. 
+ oai:arXiv.org:2512.09823v1 + cs.FL + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + 10.4230/LIPIcs.ICALP.2020.114 + Alin Bostan, Arnaud Carayol, Florent Koechlin, Cyril Nicaud - Repulsor: Accelerating Generative Modeling with a Contrastive Memory Bank - https://arxiv.org/abs/2512.08648 - arXiv:2512.08648v1 Announce Type: new -Abstract: The dominance of denoising generative models (e.g., diffusion, flow-matching) in visual synthesis is tempered by their substantial training costs and inefficiencies in representation learning. While injecting discriminative representations via auxiliary alignment has proven effective, this approach still faces key limitations: the reliance on external, pre-trained encoders introduces overhead and domain shift. A dispersed-based strategy that encourages strong separation among in-batch latent representations alleviates this specific dependency. To assess the effect of the number of negative samples in generative modeling, we propose {\mname}, a plug-and-play training framework that requires no external encoders. Our method integrates a memory bank mechanism that maintains a large, dynamically updated queue of negative samples across training iterations. This decouples the number of negatives from the mini-batch size, providing abundant and high-quality negatives for a contrastive objective without a multiplicative increase in computational cost. A low-dimensional projection head is used to further minimize memory and bandwidth overhead. {\mname} offers three principal advantages: (1) it is self-contained, eliminating dependency on pretrained vision foundation models and their associated forward-pass overhead; (2) it introduces no additional parameters or computational cost during inference; and (3) it enables substantially faster convergence, achieving superior generative quality more efficiently. 
On ImageNet-256, {\mname} achieves a state-of-the-art FID of \textbf{2.40} within 400k steps, significantly outperforming comparable methods. - oai:arXiv.org:2512.08648v1 + Composing Concepts from Images and Videos via Concept-prompt Binding + https://arxiv.org/abs/2512.09824 + arXiv:2512.09824v1 Announce Type: new +Abstract: Visual concept composition, which aims to integrate different elements from images and videos into a single, coherent visual output, still falls short in accurately extracting complex concepts from visual inputs and flexibly combining concepts from both images and videos. We introduce Bind & Compose, a one-shot method that enables flexible visual concept composition by binding visual concepts with corresponding prompt tokens and composing the target prompt with bound tokens from various sources. It adopts a hierarchical binder structure for cross-attention conditioning in Diffusion Transformers to encode visual concepts into corresponding prompt tokens for accurate decomposition of complex visual concepts. To improve concept-token binding accuracy, we design a Diversify-and-Absorb Mechanism that uses an extra absorbent token to eliminate the impact of concept-irrelevant details when training with diversified prompts. To enhance the compatibility between image and video concepts, we present a Temporal Disentanglement Strategy that decouples the training process of video concepts into two stages with a dual-branch binder structure for temporal modeling. Evaluations demonstrate that our method achieves superior concept consistency, prompt fidelity, and motion quality over existing approaches, opening up new possibilities for visual creativity. 
+ oai:arXiv.org:2512.09824v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + cs.MM + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shaofeng Zhang, Xuanqi Chen, Ning Liao, Haoxiang Zhao, Xiaoxing Wang, Haoru Tan, Sitong Wu, Xiaosong Jia, Qi Fan, Junchi Yan + Xianghao Kong, Zeyu Zhang, Yuwei Guo, Zhuoran Zhao, Songchun Zhang, Anyi Rao - A Sensor-Aware Phenomenological Framework for Lidar Degradation Simulation and SLAM Robustness Evaluation - https://arxiv.org/abs/2512.08653 - arXiv:2512.08653v1 Announce Type: new -Abstract: Lidar-based SLAM systems are highly sensitive to adverse conditions such as occlusion, noise, and field-of-view (FoV) degradation, yet existing robustness evaluation methods either lack physical grounding or do not capture sensor-specific behavior. This paper presents a sensor-aware, phenomenological framework for simulating interpretable lidar degradations directly on real point clouds, enabling controlled and reproducible SLAM stress testing. Unlike image-derived corruption benchmarks (e.g., SemanticKITTI-C) or simulation-only approaches (e.g., lidarsim), the proposed system preserves per-point geometry, intensity, and temporal structure while applying structured dropout, FoV reduction, Gaussian noise, occlusion masking, sparsification, and motion distortion. The framework features autonomous topic and sensor detection, modular configuration with four severity tiers (light--extreme), and real-time performance (less than 20 ms per frame) compatible with ROS workflows. Experimental validation across three lidar architectures and five state-of-the-art SLAM systems reveals distinct robustness patterns shaped by sensor design and environmental context. The open-source implementation provides a practical foundation for benchmarking lidar-based SLAM under physically meaningful degradation scenarios. 
- oai:arXiv.org:2512.08653v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + A Relaxed Randomized Averaging Block Extended Bregman-Kaczmarz Method for Combined Optimization Problems + https://arxiv.org/abs/2512.09825 + arXiv:2512.09825v1 Announce Type: new +Abstract: Randomized Kaczmarz-type methods are widely used for their simplicity and efficiency in solving large-scale linear systems and optimization problems. However, their applicability is limited when dealing with inconsistent systems or incorporating structural information such as sparsity. In this work, we propose a \emph{relaxed randomized averaging block extended Bregman-Kaczmarz} (rRABEBK) method for solving a broad class of combined optimization problems. The proposed method integrates an averaging block strategy with two relaxation parameters to accelerate convergence and enhance numerical stability. We establish a rigorous convergence theory showing that rRABEBK achieves linear convergence in expectation, with explicit constants that quantify the effect of the relaxation mechanism, and a provably faster rate than the classical randomized extended Bregman-Kaczmarz method. Our method can be readily adapted to sparse least-squares problems and extended to both consistent and inconsistent systems without modification. Complementary numerical experiments corroborate the theoretical findings and demonstrate that rRABEBK significantly outperforms the existing Kaczmarz-type algorithms in terms of both iteration complexity and computational efficiency, highlighting both its practical and theoretical advantages. 
+ oai:arXiv.org:2512.09825v1 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Doumegna Mawuto Koudjo Felix, Xianjia Yu, Zhuo Zou, Tomi Westerlund + Zeyu Dong, Aqin Xiao, Guojian Yin, Junfeng Yin - Sim2Swim: Zero-Shot Velocity Control for Agile AUV Maneuvering in 3 Minutes - https://arxiv.org/abs/2512.08656 - arXiv:2512.08656v1 Announce Type: new -Abstract: Holonomic autonomous underwater vehicles (AUVs) have the hardware ability for agile maneuvering in both translational and rotational degrees of freedom (DOFs). However, due to challenges inherent to underwater vehicles, such as complex hydrostatics and hydrodynamics, parametric uncertainties, and frequent changes in dynamics due to payload changes, control is challenging. Performance typically relies on carefully tuned controllers targeting unique platform configurations, and a need for re-tuning for deployment under varying payloads and hydrodynamic conditions. As a consequence, agile maneuvering with simultaneous tracking of time-varying references in both translational and rotational DOFs is rarely utilized in practice. To the best of our knowledge, this paper presents the first general zero-shot sim2real deep reinforcement learning-based (DRL) velocity controller enabling path following and agile 6DOF maneuvering with a training duration of just 3 minutes. Sim2Swim, the proposed approach, inspired by state-of-the-art DRL-based position control, leverages domain randomization and massively parallelized training to converge to field-deployable control policies for AUVs of variable characteristics without post-processing or tuning. Sim2Swim is extensively validated in pool trials for a variety of configurations, showcasing robust control for highly agile motions. 
- oai:arXiv.org:2512.08656v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + RIFT: A Scalable Methodology for LLM Accelerator Fault Assessment using Reinforcement Learning + https://arxiv.org/abs/2512.09829 + arXiv:2512.09829v1 Announce Type: new +Abstract: The massive scale of modern AI accelerators presents critical challenges to traditional fault assessment methodologies, which face prohibitive computational costs and provide poor coverage of critical failure modes. This paper introduces RIFT (Reinforcement Learning-guided Intelligent Fault Targeting), a scalable framework that automates the discovery of minimal, high-impact fault scenarios for efficient design-time fault assessment. RIFT transforms the complex search for worst-case faults into a sequential decision-making problem, combining hybrid sensitivity analysis for search space pruning with reinforcement learning to intelligently generate minimal, high-impact test suites. Evaluated on billion-parameter Large Language Model (LLM) workloads using NVIDIA A100 GPUs, RIFT achieves a \textbf{2.2$\times$} fault assessment speedup over evolutionary methods and reduces the required test vector volume by over \textbf{99\%} compared to random fault injection, all while achieving \textbf{superior fault coverage}. The proposed framework also provides actionable data to enable intelligent hardware protection strategies, demonstrating that RIFT-guided selective error correction code provides a \textbf{12.8$\times$} improvement in \textbf{cost-effectiveness} (coverage per unit area) compared to uniform triple modular redundancy protection. RIFT automatically generates UVM-compliant verification artifacts, ensuring its findings are directly actionable and integrable into commercial RTL verification workflows. 
+ oai:arXiv.org:2512.09829v1 + cs.AI + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Lauritz Rismark Fosso, Herman Bi{\o}rn Amundsen, Marios Xanthidis, Sveinung Johan Ohrem + Khurram Khalil, Muhammad Mahad Khaliq, Khaza Anuarul Hoque - Reusability in MLOps: Leveraging Ports and Adapters to Build a Microservices Architecture for the Maritime Domain - https://arxiv.org/abs/2512.08657 - arXiv:2512.08657v1 Announce Type: new -Abstract: ML-Enabled Systems (MLES) are inherently complex since they require multiple components to achieve their business goal. This experience report showcases the software architecture reusability techniques applied while building Ocean Guard, an MLES for anomaly detection in the maritime domain. In particular, it highlights the challenges and lessons learned to reuse the Ports and Adapters pattern to support building multiple microservices from a single codebase. This experience report hopes to inspire software engineers, machine learning engineers, and data scientists to apply the Hexagonal Architecture pattern to build their MLES. - oai:arXiv.org:2512.08657v1 - cs.SE + LLMs in Interpreting Legal Documents + https://arxiv.org/abs/2512.09830 + arXiv:2512.09830v1 Announce Type: new +Abstract: This chapter explores the application of Large Language Models in the legal domain, showcasing their potential to optimise and augment traditional legal tasks by analysing possible use cases, such as assisting in interpreting statutes, contracts, and case law, enhancing clarity in legal summarisation, contract negotiation, and information retrieval. There are several challenges that can arise from the application of such technologies, such as algorithmic monoculture, hallucinations, and compliance with existing regulations, including the EU's AI Act and recent U.S. initiatives, alongside the emerging approaches in China. Furthermore, two different benchmarks are presented. 
+ oai:arXiv.org:2512.09830v1 + cs.CL cs.AI - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Renato Cordeiro Ferreira (Jheronimus Academy of Data Science, Technical University of Eindhoven, Tilburg University, University of S\~ao Paulo), Aditya Dhinavahi (Jheronimus Academy of Data Science, Technical University of Eindhoven), Rowanne Trapmann (Jheronimus Academy of Data Science, Tilburg University), Willem-Jan van den Heuvel (Jheronimus Academy of Data Science, Technical University of Eindhoven, Tilburg University) + Simone Corbo - An Agentic AI System for Multi-Framework Communication Coding - https://arxiv.org/abs/2512.08659 - arXiv:2512.08659v1 Announce Type: new -Abstract: Clinical communication is central to patient outcomes, yet large-scale human annotation of patient-provider conversation remains labor-intensive, inconsistent, and difficult to scale. Existing approaches based on large language models typically rely on single-task models that lack adaptability, interpretability, and reliability, especially when applied across various communication frameworks and clinical domains. In this study, we developed a Multi-framework Structured Agentic AI system for Clinical Communication (MOSAIC), built on a LangGraph-based architecture that orchestrates four core agents, including a Plan Agent for codebook selection and workflow planning, an Update Agent for maintaining up-to-date retrieval databases, a set of Annotation Agents that applies codebook-guided retrieval-augmented generation (RAG) with dynamic few-shot prompting, and a Verification Agent that provides consistency checks and feedback. To evaluate performance, we compared MOSAIC outputs against gold-standard annotations created by trained human coders. We developed and evaluated MOSAIC using 26 gold standard annotated transcripts for training and 50 transcripts for testing, spanning rheumatology and OB/GYN domains. 
On the test set, MOSAIC achieved an overall F1 score of 0.928. Performance was highest in the Rheumatology subset (F1 = 0.962) and strongest for Patient Behavior (e.g., patients asking questions, expressing preferences, or showing assertiveness). Ablations revealed that MOSAIC outperforms baseline benchmarking. - oai:arXiv.org:2512.08659v1 - cs.CL + Interpretation as Linear Transformation: A Cognitive-Geometric Model of Belief and Meaning + https://arxiv.org/abs/2512.09831 + arXiv:2512.09831v1 Announce Type: new +Abstract: This paper develops a geometric framework for modeling belief, motivation, and influence across cognitively heterogeneous agents. Each agent is represented by a personalized value space, a vector space encoding the internal dimensions through which the agent interprets and evaluates meaning. Beliefs are formalized as structured vectors-abstract beings-whose transmission is mediated by linear interpretation maps. A belief survives communication only if it avoids the null spaces of these maps, yielding a structural criterion for intelligibility, miscommunication, and belief death. + Within this framework, I show how belief distortion, motivational drift, counterfactual evaluation, and the limits of mutual understanding arise from purely algebraic constraints. A central result-"the No-Null-Space Leadership Condition"-characterizes leadership as a property of representational reachability rather than persuasion or authority. More broadly, the model explains how abstract beings can propagate, mutate, or disappear as they traverse diverse cognitive geometries. + The account unifies insights from conceptual spaces, social epistemology, and AI value alignment by grounding meaning preservation in structural compatibility rather than shared information or rationality. 
I argue that this cognitive-geometric perspective clarifies the epistemic boundaries of influence in both human and artificial systems, and offers a general foundation for analyzing belief dynamics across heterogeneous agents. + oai:arXiv.org:2512.09831v1 + cs.AI cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.MA + cs.SI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Bohao Yang, Rui Yang, Joshua M. Biro, Haoyuan Wang, Jessica L. Handley, Brianna Richardson, Sophia Bessias, Nicoleta Economou-Zavlanos, Armando D. Bedoya, Monica Agrawal, Michael M. Zavlanos, Anand Chowdhury, Raj M. Ratwani, Kai Sun, Kathryn I. Pollak, Michael J. Pencina, Chuan Hong + Chainarong Amornbunchornvej - Ergodic Trajectory Planning with Dynamic Sensor Footprints - https://arxiv.org/abs/2512.08661 - arXiv:2512.08661v1 Announce Type: new -Abstract: This paper addresses the problem of trajectory planning for information gathering with a dynamic and resolution-varying sensor footprint. Ergodic planning offers a principled framework that balances exploration (visiting all areas) and exploitation (focusing on high-information regions) by planning trajectories such that the time spent in a region is proportional to the amount of information in that region. Existing ergodic planning often oversimplifies the sensing model by assuming a point sensor or a footprint with constant shape and resolution. In practice, the sensor footprint can drastically change over time as the robot moves, such as aerial robots equipped with downward-facing cameras, whose field of view depends on the orientation and altitude. To overcome this limitation, we propose a new metric that accounts for dynamic sensor footprints, analyze the theoretic local optimality conditions, and propose numerical trajectory optimization algorithms. 
Experimental results show that the proposed approach can simultaneously optimize both the trajectories and sensor footprints, with up to an order of magnitude better ergodicity than conventional methods. We also deploy our approach in a multi-drone system to ergodically cover an object in 3D space. - oai:arXiv.org:2512.08661v1 + Bridging the Basilisk Astrodynamics Framework with ROS 2 for Modular Spacecraft Simulation and Hardware Integration + https://arxiv.org/abs/2512.09833 + arXiv:2512.09833v1 Announce Type: new +Abstract: Integrating high-fidelity spacecraft simulators with modular robotics frameworks remains a challenge for autonomy development. This paper presents a lightweight, open-source communication bridge between the Basilisk astrodynamics simulator and the Robot Operating System 2 (ROS 2), enabling real-time, bidirectional data exchange for spacecraft control. The bridge requires no changes to Basilisk's core and integrates seamlessly with ROS 2 nodes. We demonstrate its use in a leader-follower formation flying scenario using nonlinear model predictive control, deployed identically in both simulation and on the ATMOS planar microgravity testbed. This setup supports rapid development, hardware-in-the-loop testing, and seamless transition from simulation to hardware. The bridge offers a flexible and scalable platform for modular spacecraft autonomy and reproducible research workflows. 
+ oai:arXiv.org:2512.09833v1 cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ziyue Zheng, Yongce Liu, Hesheng Wang, Zhongqiang Ren + Elias Krantz, Ngai Nam Chan, Gunnar Tibert, Huina Mao, Christer Fuglesang - Direct transfer of optimized controllers to similar systems using dimensionless MPC - https://arxiv.org/abs/2512.08667 - arXiv:2512.08667v1 Announce Type: new -Abstract: Scaled model experiments are commonly used in various engineering fields to reduce experimentation costs and overcome constraints associated with full-scale systems. The relevance of such experiments relies on dimensional analysis and the principle of dynamic similarity. However, transferring controllers to full-scale systems often requires additional tuning. In this paper, we propose a method to enable a direct controller transfer using dimensionless model predictive control, tuned automatically for closed-loop performance. With this reformulation, the closed-loop behavior of an optimized controller transfers directly to a new, dynamically similar system. Additionally, the dimensionless formulation allows for the use of data from systems of different scales during parameter optimization. We demonstrate the method on a cartpole swing-up and a car racing problem, applying either reinforcement learning or Bayesian optimization for tuning the controller parameters. Software used to obtain the results in this paper is publicly available at https://github.com/josipkh/dimensionless-mpcrl. - oai:arXiv.org:2512.08667v1 - eess.SY + Predicting the Containment Time of California Wildfires Using Machine Learning + https://arxiv.org/abs/2512.09835 + arXiv:2512.09835v1 Announce Type: new +Abstract: California's wildfire season keeps getting worse over the years, overwhelming the emergency response teams. These fires cause massive destruction to both property and human life. 
For these reasons, there is a growing need for accurate and practical predictions that can assist wildfire managers and response teams with resource allocation. In this research, we built machine learning models to predict the number of days required to fully contain a wildfire in California. In doing so, we address an important gap in the current literature: most prior research has concentrated on wildfire risk or how fires spread, and the few studies that examine duration typically predict it in broad categories rather than as a continuous measure. This research treats wildfire duration prediction as a regression task, which allows for more detailed and precise forecasts than the broader categorical predictions used in prior work. We built the models by combining three publicly available datasets from the California Department of Forestry and Fire Protection's Fire and Resource Assessment Program (FRAP). This study compared the performance of two baseline ensemble regressors, Random Forest and XGBoost, with a Long Short-Term Memory (LSTM) neural network. The results show that the XGBoost model slightly outperforms the Random Forest model, likely due to its superior handling of static features in the dataset. The LSTM model, in contrast, performed worse than the ensemble models because the dataset lacked temporal features. Overall, this study shows that, depending on feature availability, wildfire managers and fire management authorities can select the most appropriate model to accurately predict wildfire containment duration and allocate resources effectively. 
+ oai:arXiv.org:2512.09835v1 cs.LG - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Josip Kir Hromatko, Shambhuraj Sawant, \v{S}andor Ile\v{s}, S\'ebastien Gros + Shashank Bhardwaj - DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning - https://arxiv.org/abs/2512.08671 - arXiv:2512.08671v1 Announce Type: new -Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor. - oai:arXiv.org:2512.08671v1 + Fast Factorized Learning: Powered by In-Memory Database Systems + https://arxiv.org/abs/2512.09836 + arXiv:2512.09836v1 Announce Type: new +Abstract: Learning models over factorized joins avoids redundant computations by identifying and pre-computing shared cofactors. Previous work has investigated the performance gain when computing cofactors on traditional disk-based database systems. Due to the absence of published code, the experiments could not be reproduced on in-memory database systems. 
This work describes the implementation of cofactor-based in-database factorized learning. We benchmark our open-source implementation for learning linear regression on factorized joins with PostgreSQL -- as a disk-based database system -- and HyPer -- as an in-memory engine. The evaluation shows a performance gain of factorized learning on in-memory database systems of 70\% over non-factorized learning and of a factor of 100 over disk-based database systems. Thus, modern database engines can contribute to the machine learning pipeline by pre-computing aggregates prior to data extraction to accelerate training. + oai:arXiv.org:2512.09836v1 cs.DB cs.LG Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Bernhard St\"ockl, Maximilian E. Sch\"ule - Dual-Branch Center-Surrounding Contrast: Rethinking Contrastive Learning for 3D Point Clouds - https://arxiv.org/abs/2512.08673 - arXiv:2512.08673v1 Announce Type: new -Abstract: Most existing self-supervised learning (SSL) approaches for 3D point clouds are dominated by generative methods based on Masked Autoencoders (MAE). However, these generative methods have been proven to struggle to capture high-level discriminative features effectively, leading to poor performance on linear probing and other downstream tasks. In contrast, contrastive methods excel in discriminative feature representation and generalization ability on image data. Despite this, contrastive learning (CL) in 3D data remains scarce. Besides, simply applying CL methods designed for 2D data to 3D fails to effectively learn 3D local details. To address these challenges, we propose a novel Dual-Branch \textbf{C}enter-\textbf{S}urrounding \textbf{Con}trast (CSCon) framework. 
Specifically, we apply masking to the center and surrounding parts separately, constructing dual-branch inputs with center-biased and surrounding-biased representations to better capture rich geometric information. Meanwhile, we introduce a patch-level contrastive loss to further enhance both high-level information and local sensitivity. Under the FULL and ALL protocols, CSCon achieves performance comparable to generative methods; under the MLP-LINEAR, MLP-3, and ONLY-NEW protocols, our method attains state-of-the-art results, even surpassing cross-modal approaches. In particular, under the MLP-LINEAR protocol, our method outperforms the baseline (Point-MAE) by \textbf{7.9\%}, \textbf{6.7\%}, and \textbf{10.3\%} on the three variants of ScanObjectNN, respectively. The code will be made publicly available. - oai:arXiv.org:2512.08673v1 + ChronusOmni: Improving Time Awareness of Omni Large Language Models + https://arxiv.org/abs/2512.09841 + arXiv:2512.09841v1 Announce Type: new +Abstract: Time awareness is a fundamental ability of omni large language models, especially for understanding long videos and answering complex questions. Previous approaches mainly target vision-language scenarios and focus on explicit temporal grounding questions, such as identifying when a visual event occurs or determining what event happens at a specific time. However, they often make insufficient use of the audio modality and overlook implicit temporal grounding across modalities--for example, identifying what is visually present when a character speaks, or determining what is said when a visual event occurs--despite such cross-modal temporal relations being prevalent in real-world scenarios. In this paper, we propose ChronusOmni, an omni large language model designed to enhance temporal awareness for both explicit and implicit audiovisual temporal grounding. 
First, we interleave text-based timestamp tokens with visual and audio representations at each time unit, enabling unified temporal modeling across modalities. Second, to enforce correct temporal ordering and strengthen fine-grained temporal reasoning, we incorporate reinforcement learning with specially designed reward functions. Moreover, we construct ChronusAV, a temporally-accurate, modality-complete, and cross-modal-aligned dataset to support the training and evaluation on audiovisual temporal grounding task. Experimental results demonstrate that ChronusOmni achieves state-of-the-art performance on ChronusAV with more than 30% improvement and top results on most metrics upon other temporal grounding benchmarks. This highlights the strong temporal awareness of our model across modalities, while preserving general video and audio understanding capabilities. + oai:arXiv.org:2512.09841v1 + cs.CL cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + cs.MM + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.13140/RG.2.2.16132.18563 - Shaofeng Zhang, Xuanqi Chen, Xiangdong Zhang, Sitong Wu, Junchi Yan + http://creativecommons.org/licenses/by/4.0/ + Yijing Chen, Yihan Wu, Kaisi Guan, Yuchen Ren, Yuyue Wang, Ruihua Song, Liyun Ru - Multi-Agent Intelligence for Multidisciplinary Decision-Making in Gastrointestinal Oncology - https://arxiv.org/abs/2512.08674 - arXiv:2512.08674v1 Announce Type: new -Abstract: Multimodal clinical reasoning in the field of gastrointestinal (GI) oncology necessitates the integrated interpretation of endoscopic imagery, radiological data, and biochemical markers. Despite the evident potential exhibited by Multimodal Large Language Models (MLLMs), they frequently encounter challenges such as context dilution and hallucination when confronted with intricate, heterogeneous medical histories. 
In order to address these limitations, a hierarchical Multi-Agent Framework is proposed, which emulates the collaborative workflow of a human Multidisciplinary Team (MDT). The system attained a composite expert evaluation score of 4.60/5.00, thereby demonstrating a substantial improvement over the monolithic baseline. It is noteworthy that the agent-based architecture yielded the most substantial enhancements in reasoning logic and medical accuracy. The findings indicate that mimetic, agent-based collaboration provides a scalable, interpretable, and clinically robust paradigm for automated decision support in oncology. - oai:arXiv.org:2512.08674v1 - cs.AI - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + From Detection to Anticipation: Online Understanding of Struggles across Various Tasks and Activities + https://arxiv.org/abs/2512.09847 + arXiv:2512.09847v1 Announce Type: new +Abstract: Understanding human skill performance is essential for intelligent assistive systems, with struggle recognition offering a natural cue for identifying user difficulties. While prior work focuses on offline struggle classification and localization, real-time applications require models capable of detecting and anticipating struggle online. We reformulate struggle localization as an online detection task and further extend it to anticipation, predicting struggle moments before they occur. We adapt two off-the-shelf models as baselines for online struggle detection and anticipation. Online struggle detection achieves 70-80% per-frame mAP, while struggle anticipation up to 2 seconds ahead yields comparable performance with slight drops. We further examine generalization across tasks and activities and analyse the impact of skill evolution. Despite larger domain gaps in activity-level generalization, models still outperform random baselines by 4-20%. 
Our feature-based models run at up to 143 FPS, and the whole pipeline, including feature extraction, operates at around 20 FPS, sufficient for real-time assistive applications. + oai:arXiv.org:2512.09847v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Rongzhao Zhang, Junqiao Wang, Shuyun Yang, Mouxiao Bian, Chao Ding, Yuwei Bai, Chihao Zhang, Yuguang Shen, Lei Wang, Lei Zheng, Qiujuan Yan, Yun Zhong, Meiling Liu, Jiwei Yu, Zheng Wang, Jie Xu, Meng Luo + Shijia Feng, Michael Wray, Walterio Mayol-Cuevas - Causal Explanations for Disparate Trends: Where and Why? - https://arxiv.org/abs/2512.08679 - arXiv:2512.08679v1 Announce Type: new -Abstract: During data analysis, we are often perplexed by certain disparities observed between two groups of interest within a dataset. To better understand an observed disparity, we need explanations that can pinpoint the data regions where the disparity is most pronounced, along with its causes, i.e., factors that alleviate or exacerbate the disparity. This task is complex and tedious, particularly for large and high-dimensional datasets, demanding an automatic system for discovering explanations (data regions and causes) of an observed disparity. It is critical that explanations for disparities are not only interpretable but also actionable-enabling users to make informed, data-driven decisions. This requires explanations to go beyond surface-level correlations and instead capture causal relationships. We introduce ExDis, a framework for discovering causal Explanations for Disparities between two groups of interest. ExDis identifies data regions (subpopulations) where disparities are most pronounced (or reversed), and associates specific factors that causally contribute to the disparity within each identified data region. We formally define the ExDis framework and the associated optimization problem, analyze its complexity, and develop an efficient algorithm to solve the problem. 
Through extensive experiments over three real-world datasets, we demonstrate that ExDis generates meaningful causal explanations, outperforms prior methods, and scales effectively to handle large, high-dimensional datasets. - oai:arXiv.org:2512.08679v1 cs.DB Wed, 10 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Tal Blau, Brit Youngmann, Anna Fariha, Yuval Moskovitch + Conformal Bandits: Bringing statistical validity and reward efficiency to the small-gap regime + https://arxiv.org/abs/2512.09850 + arXiv:2512.09850v1 Announce Type: new +Abstract: We introduce Conformal Bandits, a novel framework integrating Conformal Prediction (CP) into bandit problems, a classic paradigm for sequential decision-making under uncertainty. Traditional regret-minimisation bandit strategies like Thompson Sampling and Upper Confidence Bound (UCB) typically rely on distributional assumptions or asymptotic guarantees; further, they remain largely focused on regret, neglecting their statistical properties. We address this gap. Through the adoption of CP, we bridge the regret-minimising potential of a decision-making bandit policy with statistical guarantees in the form of finite-time prediction coverage. + We demonstrate the potential of Conformal Bandits through simulation studies and an application to portfolio allocation, a typical small-gap regime, where differences in arm rewards are far too small for classical policies to achieve optimal regret bounds in finite sample. Motivated by this, we showcase our framework's practical advantage in terms of regret in small-gap settings, as well as its added value in achieving nominal coverage guarantees where classical UCB policies fail. Focusing on our application of interest, we further illustrate how integrating hidden Markov models to capture the regime-switching behaviour of financial markets enhances the exploration-exploitation trade-off and translates into greater regret efficiency and higher risk-adjusted returns, while preserving coverage guarantees. 
+ oai:arXiv.org:2512.09850v1 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tal Blau, Brit Youngmann, Anna Fariha, Yuval Moskovitch + http://creativecommons.org/licenses/by/4.0/ + Simone Cuonzo, Nina Deliu - Non Normalized Shared-Constraint Dynamic Games for Human-Robot Collaboration with Asymmetric Responsibility - https://arxiv.org/abs/2512.08688 - arXiv:2512.08688v1 Announce Type: new -Abstract: This paper proposes a dynamic game formulation for cooperative human-robot navigation in shared workspaces with obstacles, where the human and robot jointly satisfy shared safety constraints while pursuing a common task. A key contribution is the introduction of a non-normalized equilibrium structure for the shared constraints. This structure allows the two agents to contribute different levels of effort towards enforcing safety requirements such as collision avoidance and inter-players spacing. We embed this non-normalized equilibrium into a receding-horizon optimal control scheme. - oai:arXiv.org:2512.08688v1 + Simultaneous Tactile-Visual Perception for Learning Multimodal Robot Manipulation + https://arxiv.org/abs/2512.09851 + arXiv:2512.09851v1 Announce Type: new +Abstract: Robotic manipulation requires both rich multimodal perception and effective learning frameworks to handle complex real-world tasks. See-through-skin (STS) sensors, which combine tactile and visual perception, offer promising sensing capabilities, while modern imitation learning provides powerful tools for policy acquisition. However, existing STS designs lack simultaneous multimodal perception and suffer from unreliable tactile tracking. Furthermore, integrating these rich multimodal signals into learning-based manipulation pipelines remains an open challenge. 
We introduce TacThru, an STS sensor enabling simultaneous visual perception and robust tactile signal extraction, and TacThru-UMI, an imitation learning framework that leverages these multimodal signals for manipulation. Our sensor features a fully transparent elastomer, persistent illumination, novel keyline markers, and efficient tracking, while our learning system integrates these signals through a Transformer-based Diffusion Policy. Experiments on five challenging real-world tasks show that TacThru-UMI achieves an average success rate of 85.5%, significantly outperforming the baselines of alternating tactile-visual (66.3%) and vision-only (55.4%). The system excels in critical scenarios, including contact detection with thin and soft objects and precision manipulation requiring multimodal coordination. This work demonstrates that combining simultaneous multimodal perception with modern learning frameworks enables more precise, adaptable robotic manipulation. + oai:arXiv.org:2512.09851v1 cs.RO cs.CV Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Yuyang Li, Yinghan Chen, Zihang Zhao, Puhao Li, Tengyu Liu, Siyuan Huang, Yixin Zhu - ITU-T Y.2325: NGN Evolution Towards Future - https://arxiv.org/abs/2512.08695 - arXiv:2512.08695v1 Announce Type: new
-Abstract: The International Telecommunications Union (ITU)-defined Next Generation Network (NGN) underlies most wireline and wireless packet-based telecommunications networks. A key design principle of NGN is decoupling of service-related functions from the underlying transport stratum, making user services independent of transport technologies. Interestingly, the NGN architecture, as defined in ITU standards, did not follow this design principle for internal network services, e.g., mobility or authentication, though it adhered to it for external user services like IPTV or Multimedia services. 
These internal services are handled by the NGN transport control plane, making them an intrinsic part of the transport stratum, resulting in a tightly coupled service and transport functionality as opposed to the proclaimed design goal. This design choice may force each transport technology to support internal services individually, e.g., separate authentication service for each transport, leading to duplication. Since the NGN architecture is the base underlying architecture for most packet-based telecommunications networks, including advanced cellular networks like 4th/5th Generation cellular networks, the limitation persists in these cellular networks as well. To remedy the situation, the decoupling of service and transport can be generalized to also include internal services like mobility and authentication. In this context, the recently published ITU Y.2325 recommendation defines an evolved NGN architecture, wherein all services, including internal network services, are decoupled from the transport stratum. The proposal results in a more scalable and modular evolved NGN architecture that can be used as a template for all future telecom networks including IMT-2030 (6th generation mobile networks). In this article, we review the evolved NGN architecture, as proposed in ITU-T Y.2325. - oai:arXiv.org:2512.08695v1 - cs.NI - Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Rashmi Kamran, Shwetha Kiran, Pranav Jha, Rashmi Yadav, Abhay Karandikar, Prasanna Chaporkar + Mitigating Social Bias in English and Urdu Language Models Using PRM-Guided Candidate Selection and Sequential Refinement + https://arxiv.org/abs/2512.09854 + arXiv:2512.09854v1 Announce Type: new
+Abstract: Large language models (LLMs) increasingly mediate human communication, decision support, content creation, and information retrieval. Despite impressive fluency, these systems frequently produce biased or stereotypical content, especially when prompted with socially sensitive language. 
A growing body of research has demonstrated that such biases disproportionately affect low-resource languages, where training data is limited and culturally unrepresentative. This paper presents a comprehensive study of inference-time bias mitigation, a strategy that avoids retraining or fine-tuning and instead operates directly on model outputs. Building on preference-ranking models (PRMs), we introduce a unified evaluation framework comparing three methods: (1) baseline single-word generation, (2) PRM-Select best-of-N sampling, and (3) PRM-Sequential refinement guided by PRM critiques. We evaluate these techniques across 200 English prompts and their Urdu counterparts, designed to reflect socio-cultural contexts relevant to gender, ethnicity, religion, nationality, disability, profession, age, and socioeconomic categories. Using GPT-3.5 as a candidate generator and GPT-4o-mini as a PRM-based bias and utility scorer, we provide an extensive quantitative analysis of bias reduction, utility preservation, and cross-lingual disparities. Our findings show: (a) substantial gains over the baseline for both languages; (b) consistently lower fairness scores for Urdu across all methods, highlighting structural inequities in multilingual LLM training; and (c) distinct improvement trajectories between PRM-Select and PRM-Sequential. The study contributes an extensible methodology, interpretable metrics, and cross-lingual comparisons that can support future work on fairness evaluation in low-resource languages. + oai:arXiv.org:2512.09854v1 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Rashmi Kamran, Shwetha Kiran, Pranav Jha, Rashmi Yadav, Abhay Karandikar, Prasanna Chaporkar + http://creativecommons.org/publicdomain/zero/1.0/ + Muneeb Ur Raheem Khan - What really matters for person re-identification? 
A Mixture-of-Experts Framework for Semantic Attribute Importance - https://arxiv.org/abs/2512.08697 - arXiv:2512.08697v1 Announce Type: new -Abstract: State-of-the-art person re-identification methods achieve impressive accuracy but remain largely opaque, leaving open the question: which high-level semantic attributes do these models actually rely on? We propose MoSAIC-ReID, a Mixture-of-Experts framework that systematically quantifies the importance of pedestrian attributes for re-identification. Our approach uses LoRA-based experts, each linked to a single attribute, and an oracle router that enables controlled attribution analysis. While MoSAIC-ReID achieves competitive performance on Market-1501 and DukeMTMC under the assumption that attribute annotations are available at test time, its primary value lies in providing a large-scale, quantitative study of attribute importance across intrinsic and extrinsic cues. Using generalized linear models, statistical tests, and feature-importance analyses, we reveal which attributes, such as clothing colors and intrinsic characteristics, contribute most strongly, while infrequent cues (e.g. accessories) have limited effect. This work offers a principled framework for interpretable ReID and highlights the requirements for integrating explicit semantic knowledge in practice. Code is available at https://github.com/psaltaath/MoSAIC-ReID - oai:arXiv.org:2512.08697v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Typical Solutions of Multi-User Linearly-Decomposable Distributed Computing + https://arxiv.org/abs/2512.09858 + arXiv:2512.09858v1 Announce Type: new +Abstract: We solve, in the typical-case sense, the multi-sender linearly-decomposable distributed computing problem introduced by tessellated distributed computing. We model real-valued encoders/decoders and demand matrices, and assess structural fidelity via a thresholded graph edit distance between the demand support and the two-hop support of the computed product. 
Our analysis yields: a closed-form second-moment (Frobenius) risk under spike-and-slab ensembles; deterministic links between thresholded GED and norm error; a Gaussian surrogate with sub-exponential tails that exposes explicit recall lines; concentration of GED and operator-norm control; and a compute-capped design with a visible knee. We map the rules to aeronautical and satellite networks. + oai:arXiv.org:2512.09858v1 + cs.IT + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Athena Psalta, Vasileios Tsironis, Konstantinos Karantzalos + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ali Khalesi, Mohammad Reza Deylam Salehi - Model-based Testing of Practical Distributed Systems in Actor Model - https://arxiv.org/abs/2512.08698 - arXiv:2512.08698v1 Announce Type: new -Abstract: Designing and implementing distributed systems correctly can be quite challenging. Although these systems are often accompanied by formal specifications that are verified using model-checking techniques, a gap still exists between the implementation and its formal specification: there is no guarantee that the implementation is free of bugs. - To bridge this gap, we can use model-based testing. Specifically, if the model of the system can be interpreted as a finite-state automaton, we can generate an exhaustive test suite for the implementation that covers all possible states and transitions. - In this paper, we discuss how to efficiently generate such a test suite for distributed systems written in the actor model. Importantly, our approach does not require any modifications to the code or interfering with the distributed system execution environment. As an example, we verified an implementation of a replication algorithm based on Viewstamped Replication, which is used in a real-world system. 
- oai:arXiv.org:2512.08698v1 - cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 + UniUGP: Unifying Understanding, Generation, and Planing For End-to-end Autonomous Driving + https://arxiv.org/abs/2512.09864 + arXiv:2512.09864v1 Announce Type: new +Abstract: Autonomous driving (AD) systems struggle in long-tail scenarios due to limited world knowledge and weak visual dynamic modeling. Existing vision-language-action (VLA)-based methods cannot leverage unlabeled videos for visual causal learning, while world model-based methods lack reasoning capabilities from large language models. In this paper, we construct multiple specialized datasets providing reasoning and planning annotations for complex scenarios. Then, a unified Understanding-Generation-Planning framework, named UniUGP, is proposed to synergize scene reasoning, future video generation, and trajectory planning through a hybrid expert architecture. By integrating pre-trained VLMs and video generation models, UniUGP leverages visual dynamics and semantic reasoning to enhance planning performance. Taking multi-frame observations and language instructions as input, it produces interpretable chain-of-thought reasoning, physically consistent trajectories, and coherent future videos. We introduce a four-stage training strategy that progressively builds these capabilities across multiple existing AD datasets, along with the proposed specialized datasets. Experiments demonstrate state-of-the-art performance in perception, reasoning, and decision-making, with superior generalization to challenging long-tail situations. 
+ oai:arXiv.org:2512.09864v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ilya Kokorin, Evgeny Chernatskiy, Vitaly Aksenov + http://creativecommons.org/publicdomain/zero/1.0/ + Hao Lu, Ziyang Liu, Guangfeng Jiang, Yuanfei Luo, Sheng Chen, Yangang Zhang, Ying-Cong Chen - An Additive Manufacturing Part Qualification Framework: Transferring Knowledge of Stress-strain Behaviors from Additively Manufactured Polymers to Metals - https://arxiv.org/abs/2512.08699 - arXiv:2512.08699v1 Announce Type: new -Abstract: Part qualification is crucial in additive manufacturing (AM) because it ensures that additively manufactured parts can be consistently produced and reliably used in critical applications. Part qualification aims at verifying that an additively manufactured part meets performance requirements; therefore, predicting the complex stress-strain behaviors of additively manufactured parts is critical. We develop a dynamic time warping (DTW)-transfer learning (TL) framework for additive manufacturing part qualification by transferring knowledge of the stress-strain behaviors of additively manufactured low-cost polymers to metals. Specifically, the framework employs DTW to select a polymer dataset as the source domain that is the most relevant to the target metal dataset. Using a long short-term memory (LSTM) model, four source polymers (i.e., Nylon, PLA, CF-ABS, and Resin) and three target metals (i.e., AlSi10Mg, Ti6Al4V, and carbon steel) that are fabricated by different AM techniques are utilized to demonstrate the effectiveness of the DTW-TL framework. Experimental results show that the DTW-TL framework identifies the closest match between polymers and metals to select one single polymer dataset as the source domain. 
The DTW-TL model achieves the lowest mean absolute percentage error of 12.41% and highest coefficient of determination of 0.96 when three metals are used as the target domain, respectively, outperforming the vanilla LSTM model without TL as well as the TL model pre-trained on four polymer datasets as the source domain. - oai:arXiv.org:2512.08699v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + MedForget: Hierarchy-Aware Multimodal Unlearning Testbed for Medical AI + https://arxiv.org/abs/2512.09867 + arXiv:2512.09867v1 Announce Type: new +Abstract: Pretrained Multimodal Large Language Models (MLLMs) are increasingly deployed in medical AI systems for clinical reasoning, diagnosis support, and report generation. However, their training on sensitive patient data raises critical privacy and compliance challenges under regulations such as HIPAA and GDPR, which enforce the "right to be forgotten". Unlearning, the process of tuning models to selectively remove the influence of specific training data points, offers a potential solution, yet its effectiveness in complex medical settings remains underexplored. To systematically study this, we introduce MedForget, a Hierarchy-Aware Multimodal Unlearning Testbed with explicit retain and forget splits and evaluation sets containing rephrased variants. MedForget models hospital data as a nested hierarchy (Institution -> Patient -> Study -> Section), enabling fine-grained assessment across eight organizational levels. The benchmark contains 3840 multimodal (image, question, answer) instances, each hierarchy level having a dedicated unlearning target, reflecting distinct unlearning challenges. Experiments with four SOTA unlearning methods on three tasks (generation, classification, cloze) show that existing methods struggle to achieve complete, hierarchy-aware forgetting without reducing diagnostic performance. 
To test whether unlearning truly deletes hierarchical pathways, we introduce a reconstruction attack that progressively adds hierarchical level context to prompts. Models unlearned at a coarse granularity show strong resistance, while fine-grained unlearning leaves models vulnerable to such reconstruction. MedForget provides a practical, HIPAA-aligned testbed for building compliant medical AI systems. + oai:arXiv.org:2512.09867v1 + cs.CV + cs.AI + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chenglong Duan, Dazhong Wu + Fengli Wu, Vaidehi Patil, Jaehong Yoon, Yue Zhang, Mohit Bansal - Scale-invariant and View-relational Representation Learning for Full Surround Monocular Depth - https://arxiv.org/abs/2512.08700 - arXiv:2512.08700v1 Announce Type: new -Abstract: Recent foundation models demonstrate strong generalization capabilities in monocular depth estimation. However, directly applying these models to Full Surround Monocular Depth Estimation (FSMDE) presents two major challenges: (1) high computational cost, which limits real-time performance, and (2) difficulty in estimating metric-scale depth, as these models are typically trained to predict only relative depth. To address these limitations, we propose a novel knowledge distillation strategy that transfers robust depth knowledge from a foundation model to a lightweight FSMDE network. Our approach leverages a hybrid regression framework combining the knowledge distillation scheme--traditionally used in classification--with a depth binning module to enhance scale consistency. Specifically, we introduce a cross-interaction knowledge distillation scheme that distills the scale-invariant depth bin probabilities of a foundation model into the student network while guiding it to infer metric-scale depth bin centers from ground-truth depth. 
Furthermore, we propose view-relational knowledge distillation, which encodes structural relationships among adjacent camera views and transfers them to enhance cross-view depth consistency. Experiments on DDAD and nuScenes demonstrate the effectiveness of our method compared to conventional supervised methods and existing knowledge distillation approaches. Moreover, our method achieves a favorable trade-off between performance and efficiency, meeting real-time requirements. - oai:arXiv.org:2512.08700v1 + Diffusion Posterior Sampler for Hyperspectral Unmixing with Spectral Variability Modeling + https://arxiv.org/abs/2512.09871 + arXiv:2512.09871v1 Announce Type: new +Abstract: Linear spectral mixture models (LMM) provide a concise form to disentangle the constituent materials (endmembers) and their corresponding proportions (abundance) in a single pixel. The critical challenges are how to model the spectral prior distribution and spectral variability. Prior knowledge and spectral variability can be rigorously modeled under the Bayesian framework, where posterior estimation of Abundance is derived by combining observed data with endmember prior distribution. Considering the key challenges and the advantages of the Bayesian framework, a novel method using a diffusion posterior sampler for semiblind unmixing, denoted as DPS4Un, is proposed to deal with these challenges with the following features: (1) we view the pretrained conditional spectrum diffusion model as a posterior sampler, which can combine the learned endmember prior with observation to get the refined abundance distribution. (2) Instead of using the existing spectral library as prior, which may raise bias, we establish the image-based endmember bundles within superpixels, which are used to train the endmember prior learner with diffusion model. Superpixels make sure the sub-scene is more homogeneous. 
(3) Instead of using the image-level data consistency constraint, the superpixel-based data fidelity term is proposed. (4) The endmember is initialized as Gaussian noise for each superpixel region, DPS4Un iteratively updates the abundance and endmember, contributing to spectral variability modeling. The experimental results on three real-world benchmark datasets demonstrate that DPS4Un outperforms the state-of-the-art hyperspectral unmixing methods. + oai:arXiv.org:2512.09871v1 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Kyumin Hwang, Wonhyeok Choi, Kiljoon Han, Wonjoon Choi, Minwoo Choi, Yongcheon Na, Minwoo Park, Sunghoon Im - - - VI-MMRec: Similarity-Aware Training Cost-free Virtual User-Item Interactions for Multimodal Recommendation - https://arxiv.org/abs/2512.08702 - arXiv:2512.08702v1 Announce Type: new -Abstract: Although existing multimodal recommendation models have shown promising performance, their effectiveness continues to be limited by the pervasive data sparsity problem. This problem arises because users typically interact with only a small subset of available items, leading existing models to arbitrarily treat unobserved items as negative samples. To this end, we propose VI-MMRec, a model-agnostic and training cost-free framework that enriches sparse user-item interactions via similarity-aware virtual user-item interactions. These virtual interactions are constructed based on modality-specific feature similarities of user-interacted items. Specifically, VI-MMRec introduces two different strategies: (1) Overlay, which independently aggregates modality-specific similarities to preserve modality-specific user preferences, and (2) Synergistic, which holistically fuses cross-modal similarities to capture complementary user preferences. 
To ensure high-quality augmentation, we design a statistically informed weight allocation mechanism that adaptively assigns weights to virtual user-item interactions based on dataset-specific modality relevance. As a plug-and-play framework, VI-MMRec seamlessly integrates with existing models to enhance their performance without modifying their core architecture. Its flexibility allows it to be easily incorporated into various existing models, maximizing performance with minimal implementation effort. Moreover, VI-MMRec introduces no additional overhead during training, making it significantly advantageous for practical deployment. Comprehensive experiments conducted on six real-world datasets using seven state-of-the-art multimodal recommendation models validate the effectiveness of our VI-MMRec. - oai:arXiv.org:2512.08702v1 - cs.IR - Wed, 10 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Jinfeng Xu, Zheyu Chen, Shuo Yang, Jinze Li, Zitong Wan, Hewei Wang, Weijie Liu, Yijie Li, Edith C. H. Ngai - Gradient-Informed Monte Carlo Fine-Tuning of Diffusion Models for Low-Thrust Trajectory Design - https://arxiv.org/abs/2512.08705 - arXiv:2512.08705v1 Announce Type: new
-Abstract: Preliminary mission design of low-thrust spacecraft trajectories in the Circular Restricted Three-Body Problem is a global search characterized by a complex objective landscape and numerous local minima. Formulating the problem as sampling from an unnormalized distribution supported on neighborhoods of locally optimal solutions provides the opportunity to deploy Markov chain Monte Carlo methods and generative machine learning. In this work, we extend our previous self-supervised diffusion model fine-tuning framework to employ gradient-informed Markov chain Monte Carlo. We compare two algorithms - the Metropolis-Adjusted Langevin Algorithm and Hamiltonian Monte Carlo - both initialized from a distribution learned by a diffusion model. 
Derivatives of an objective function that balances fuel consumption, time of flight and constraint violations are computed analytically using state transition matrices. We show that incorporating the gradient drift term accelerates mixing and improves convergence of the Markov chain for a multi-revolution transfer in the Saturn-Titan system. Among the evaluated methods, MALA provides the best trade-off between performance and computational cost. Starting from samples generated by a baseline diffusion model trained on a related transfer, MALA explicitly targets Pareto-optimal solutions. Compared to a random walk Metropolis algorithm, it increases the feasibility rate from 17.34% to 63.01% and produces a denser, more diverse coverage of the Pareto front. By fine-tuning a diffusion model on the generated samples and associated reward values with reward-weighted likelihood maximization, we learn the global solution structure of the problem and eliminate the need for a tedious separate data generation phase. - oai:arXiv.org:2512.08705v1 - eess.SY - cs.LG - cs.SY - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jannik Graebner, Ryne Beeson - - - RESTifAI: LLM-Based Workflow for Reusable REST API Testing - https://arxiv.org/abs/2512.08706 - arXiv:2512.08706v1 Announce Type: new -Abstract: With this paper, we introduce RESTifAI, an LLM-driven approach for generating reusable, CI/CD ready REST API tests, following the happy-path approach. Unlike existing tools that often focus primarily on internal server errors, RESTifAI systematically constructs valid test scenarios (happy paths) and derives negative cases to verify both intended functionality (2xx responses) and robustness against invalid inputs or business-rule violations (4xx responses). 
The results indicate that RESTifAI performs on par with the latest LLM tools, i.e., AutoRestTest and LogiAgent, while addressing limitations related to reusability, oracle complexity, and integration. To support this, we provide common comparative results and demonstrate the tool's applicability in industrial services. For tool demonstration, please refer to https://www.youtube.com/watch?v=2vtQo0T0Lo4. RESTifAI is publicly available at https://github.com/casablancahotelsoftware/RESTifAI. - oai:arXiv.org:2512.08706v1 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Leon Kogler, Maximilian Ehrhart, Benedikt Dornauer, Eduard Paul Enoiu + Yimin Zhu, Lincoln Linlin Xu - Automatic Essay Scoring and Feedback Generation in Basque Language Learning - https://arxiv.org/abs/2512.08713 - arXiv:2512.08713v1 Announce Type: new -Abstract: This paper introduces the first publicly available dataset for Automatic Essay Scoring (AES) and feedback generation in Basque, targeting the CEFR C1 proficiency level. The dataset comprises 3,200 essays from HABE, each annotated by expert evaluators with criterion specific scores covering correctness, richness, coherence, cohesion, and task alignment enriched with detailed feedback and error examples. We fine-tune open-source models, including RoBERTa-EusCrawl and Latxa 8B/70B, for both scoring and explanation generation. Our experiments show that encoder models remain highly reliable for AES, while supervised fine-tuning (SFT) of Latxa significantly enhances performance, surpassing state-of-the-art (SoTA) closed-source systems such as GPT-5 and Claude Sonnet 4.5 in scoring consistency and feedback quality. We also propose a novel evaluation methodology for assessing feedback generation, combining automatic consistency metrics with expert-based validation of extracted learner errors. 
Results demonstrate that the fine-tuned Latxa model produces criterion-aligned, pedagogically meaningful feedback and identifies a wider range of error types than proprietary models. This resource and benchmark establish a foundation for transparent, reproducible, and educationally grounded NLP research in low-resource languages such as Basque. - oai:arXiv.org:2512.08713v1 - cs.CL + FlipLLM: Efficient Bit-Flip Attacks on Multimodal LLMs using Reinforcement Learning + https://arxiv.org/abs/2512.09872 + arXiv:2512.09872v1 Announce Type: new +Abstract: Generative Artificial Intelligence models, such as Large Language Models (LLMs) and Large Vision Models (VLMs), exhibit state-of-the-art performance but remain vulnerable to hardware-based threats, specifically bit-flip attacks (BFAs). Existing BFA discovery methods lack generalizability and struggle to scale, often failing to analyze the vast parameter space and complex interdependencies of modern foundation models in a reasonable time. This paper proposes FlipLLM, a reinforcement learning (RL) architecture-agnostic framework that formulates BFA discovery as a sequential decision-making problem. FlipLLM combines sensitivity-guided layer pruning with Q-learning to efficiently identify minimal, high-impact bit sets that can induce catastrophic failure. We demonstrate the effectiveness and generalizability of FlipLLM by applying it to a diverse set of models, including prominent text-only LLMs (GPT-2 Large, LLaMA 3.1 8B, and DeepSeek-V2 7B), VLMs such as LLaVA 1.6, and datasets, such as MMLU, MMLU-Pro, VQAv2, and TextVQA. Our results show that FlipLLM can identify critical bits that are vulnerable to BFAs up to 2.5x faster than SOTA methods. We demonstrate that flipping the FlipLLM-identified bits plummets the accuracy of LLaMA 3.1 8B from 69.9% to ~0.2%, and for LLaVA's VQA score from 78% to almost 0%, by flipping as few as 5 and 7 bits, respectively. 
Further analysis reveals that applying standard hardware protection mechanisms, such as ECC SECDED, to the FlipLLM-identified bit locations completely mitigates the BFA impact, demonstrating the practical value of our framework in guiding hardware-level defenses. FlipLLM offers the first scalable and adaptive methodology for exploring the BFA vulnerability of both language and multimodal foundation models, paving the way for comprehensive hardware-security evaluation. + oai:arXiv.org:2512.09872v1 cs.CR cs.AI Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ Khurram Khalil, Khaza Anuarul Hoque - Multi-domain performance analysis with scores tailored to user preferences - https://arxiv.org/abs/2512.08715 - arXiv:2512.08715v1 Announce Type: new
-Abstract: The performance of algorithms, methods, and models tends to depend heavily on the distribution of cases on which they are applied, this distribution being specific to the applicative domain. After performing an evaluation in several domains, it is highly informative to compute a (weighted) mean performance and, as shown in this paper, to scrutinize what happens during this averaging. To achieve this goal, we adopt a probabilistic framework and consider a performance as a probability measure (e.g., a normalized confusion matrix for a classification task). It appears that the corresponding weighted mean is known to be the summarization, and that only some remarkable scores assign to the summarized performance a value equal to a weighted arithmetic mean of the values assigned to the domain-specific performances. These scores include the family of ranking scores, a continuum parameterized by user preferences, and the weights to consider in the arithmetic mean depend on the user preferences. 
Based on this, we rigorously define four domains, named easiest, most difficult, preponderant, and bottleneck domains, as functions of user preferences. After establishing the theory in a general setting, regardless of the task, we develop new visual tools for two-class classification. - oai:arXiv.org:2512.08715v1 - cs.PF - cs.AI + Benchmarking Document Parsers on Mathematical Formula Extraction from PDFs + https://arxiv.org/abs/2512.09874 + arXiv:2512.09874v1 Announce Type: new +Abstract: Correctly parsing mathematical formulas from PDFs is critical for training large language models and building scientific knowledge bases from academic literature, yet existing benchmarks either exclude formulas entirely or lack semantically-aware evaluation metrics. We introduce a novel benchmarking framework centered on synthetically generated PDFs with precise LaTeX ground truth, enabling systematic control over layout, formulas, and content characteristics. A key methodological contribution is pioneering LLM-as-a-judge for semantic formula assessment, combined with a robust two-stage matching pipeline that handles parser output inconsistencies. Through human validation on 250 formula pairs (750 ratings from 30 evaluators), we demonstrate that LLM-based evaluation achieves substantially higher correlation with human judgment (Pearson r=0.78) compared to CDM (r=0.34) and text similarity (r~0). Evaluating 20+ contemporary PDF parsers (including specialized OCR models, vision-language models, and rule-based approaches) across 100 synthetic documents with 2,000+ formulas reveals significant performance disparities. Our findings provide crucial insights for practitioners selecting parsers for downstream applications and establish a robust, scalable methodology that enables reproducible evaluation of PDF formula extraction quality. 
Code and benchmark data: https://github.com/phorn1/pdf-parse-bench + oai:arXiv.org:2512.09874v1 cs.CV - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - S\'ebastien Pi\'erard, Adrien Deli\`ege, Marc Van Droogenbroeck + Pius Horn, Janis Keuper - The Role of Risk Modeling in Advanced AI Risk Management - https://arxiv.org/abs/2512.08723 - arXiv:2512.08723v1 Announce Type: new -Abstract: Rapidly advancing artificial intelligence (AI) systems introduce novel, uncertain, and potentially catastrophic risks. Managing these risks requires a mature risk-management infrastructure whose cornerstone is rigorous risk modeling. We conceptualize AI risk modeling as the tight integration of (i) scenario building$-$causal mapping from hazards to harms$-$and (ii) risk estimation$-$quantifying the likelihood and severity of each pathway. We review classical techniques such as Fault and Event Tree Analyses, FMEA/FMECA, STPA and Bayesian networks, and show how they can be adapted to advanced AI. A survey of emerging academic and industry efforts reveals fragmentation: capability benchmarks, safety cases, and partial quantitative studies are valuable but insufficient when divorced from comprehensive causal scenarios. Comparing the nuclear, aviation, cybersecurity, financial, and submarine domains, we observe that every sector combines deterministic guarantees for unacceptable events with probabilistic assessments of the broader risk landscape. We argue that advanced-AI governance should adopt a similar dual approach and that verifiable, provably-safe AI architectures are urgently needed to supply deterministic evidence where current models are the result of opaque end-to-end optimization procedures rather than specified by hand. In one potential governance-ready framework, developers conduct iterative risk modeling and regulators compare the results with predefined societal risk tolerance thresholds. 
The paper both provides a methodological blueprint and opens a discussion on the best way to embed sound risk modeling at the heart of advanced-AI risk management. 
 - oai:arXiv.org:2512.08723v1
 - cs.CY
 - Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Chlo\'e Touzet, Henry Papadatos, Malcolm Murray, Otter Quarks, Steve Barrett, Alejandro Tlaie Boria, Elija Perrier, Matthew Smith, Sim\'eon Campos

 + Resilient Neural-Variable-Structure Consensus Control for Nonlinear MASs with Singular Input Gain Under DoS Attacks
 + https://arxiv.org/abs/2512.09879
 + arXiv:2512.09879v1 Announce Type: new
+Abstract: This paper proposes a reliable learning-based adaptive control framework for nonlinear multi-agent systems (MASs) subject to Denial-of-Service (DoS) attacks and singular control gains, two critical challenges in cyber-physical systems. A neural-variable-structure adaptive controller is developed to achieve leader-follower consensus while ensuring robustness to external disturbances and adaptability to unknown nonlinear dynamics. A reliability-assessment rule is introduced to detect communication loss during DoS attacks, upon which a switched control mechanism is activated to preserve closed-loop stability and performance. Unlike existing resilient MAS control methods, the proposed strategy explicitly accommodates singular control gains and does not rely on restrictive assumptions such as Lipschitz continuity or prior bounds on nonlinearities. To the authors' knowledge, this is the first work to integrate neural learning, variable-structure robustness, and reliability-based switching into a unified consensus-tracking control architecture for heterogeneous nonlinear MASs with singular input gains under DoS attacks. Lyapunov-based analysis establishes uniform ultimate boundedness of all closed-loop signals, and Matlab/Simulink simulations on a connected automated vehicle platoon demonstrate the method's effectiveness and resilience. 
+ oai:arXiv.org:2512.09879v1 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Chlo\'e Touzet, Henry Papadatos, Malcolm Murray, Otter Quarks, Steve Barrett, Alejandro Tlaie Boria, Elija Perrier, Matthew Smith, Sim\'eon Campos + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ladan Khoshnevisan, Xinzhi Liu - Exposing Hidden Biases in Text-to-Image Models via Automated Prompt Search - https://arxiv.org/abs/2512.08724 - arXiv:2512.08724v1 Announce Type: new -Abstract: Text-to-image (TTI) diffusion models have achieved remarkable visual quality, yet they have been repeatedly shown to exhibit social biases across sensitive attributes such as gender, race and age. To mitigate these biases, existing approaches frequently depend on curated prompt datasets - either manually constructed or generated with large language models (LLMs) - as part of their training and/or evaluation procedures. Beside the curation cost, this also risks overlooking unanticipated, less obvious prompts that trigger biased generation, even in models that have undergone debiasing. In this work, we introduce Bias-Guided Prompt Search (BGPS), a framework that automatically generates prompts that aim to maximize the presence of biases in the resulting images. BGPS comprises two components: (1) an LLM instructed to produce attribute-neutral prompts and (2) attribute classifiers acting on the TTI's internal representations that steer the decoding process of the LLM toward regions of the prompt space that amplify the image attributes of interest. We conduct extensive experiments on Stable Diffusion 1.5 and a state-of-the-art debiased model and discover an array of subtle and previously undocumented biases that severely deteriorate fairness metrics. 
Crucially, the discovered prompts are interpretable, i.e., they may be entered by a typical user, quantitatively improving the perplexity metric compared to a prominent hard prompt optimization counterpart. Our findings uncover TTI vulnerabilities, while BGPS expands the bias search space and can act as a new evaluation tool for bias mitigation. 
 - oai:arXiv.org:2512.08724v1
 cs.LG
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Manos Plitsis, Giorgos Bouritsas, Vassilis Katsouros, Yannis Panagakis

 Comparing AI Agents to Cybersecurity Professionals in Real-World Penetration Testing
 https://arxiv.org/abs/2512.09882
 arXiv:2512.09882v1 Announce Type: new
+Abstract: We present the first comprehensive evaluation of AI agents against human cybersecurity professionals in a live enterprise environment. We evaluate ten cybersecurity professionals alongside six existing AI agents and ARTEMIS, our new agent scaffold, on a large university network consisting of ~8,000 hosts across 12 subnets. ARTEMIS is a multi-agent framework featuring dynamic prompt generation, arbitrary sub-agents, and automatic vulnerability triaging. In our comparative study, ARTEMIS placed second overall, discovering 9 valid vulnerabilities with an 82% valid submission rate and outperforming 9 of 10 human participants. While existing scaffolds such as Codex and CyAgent underperformed relative to most human participants, ARTEMIS demonstrated technical sophistication and submission quality comparable to the strongest participants. We observe that AI agents offer advantages in systematic enumeration, parallel exploitation, and cost -- certain ARTEMIS variants cost $18/hour versus $60/hour for professional penetration testers. We also identify key capability gaps: AI agents exhibit higher false-positive rates and struggle with GUI-based tasks.
 + oai:arXiv.org:2512.09882v1
 cs.AI
 cs.CR
 cs.CY
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Justin W. 
Lin, Eliot Krzysztof Jones, Donovan Julian Jasper, Ethan Jun-shen Ho, Anna Wu, Arnold Tianyi Yang, Neil Perry, Andy Zou, Matt Fredrikson, J. Zico Kolter, Percy Liang, Dan Boneh, Daniel E. Ho - Spatio-Temporal Shifting to Reduce Carbon, Water, and Land-Use Footprints of Cloud Workloads - https://arxiv.org/abs/2512.08725 - arXiv:2512.08725v1 Announce Type: new -Abstract: In this paper, we investigate the potential of spatial and temporal cloud workload shifting to reduce carbon, water, and land-use footprints. Specifically, we perform a simulation study using real-world data from multiple cloud providers (AWS and Azure) and workload traces for different applications (big data analytics and FaaS). Our simulation results indicate that spatial shifting can substantially lower carbon, water, and land use footprints, with observed reductions ranging from 20% to 85%, depending on the scenario and optimization criteria. Temporal shifting also decreases the footprint, though to a lesser extent. When applied together, the two strategies yield the greatest overall reduction, driven mainly by spatial shifting with temporal adjustments providing an additional, incremental benefit. Sensitivity analysis demonstrates that such shifting is robust to prediction errors in grid mix data and to variations across different seasons. - oai:arXiv.org:2512.08725v1 - cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 + ByteShield: Adversarially Robust End-to-End Malware Detection through Byte Masking + https://arxiv.org/abs/2512.09883 + arXiv:2512.09883v1 Announce Type: new +Abstract: Research has proven that end-to-end malware detectors are vulnerable to adversarial attacks. In response, the research community has proposed defenses based on randomized and (de)randomized smoothing. However, these techniques remain susceptible to attacks that insert large adversarial payloads. 
To address these limitations, we propose a novel defense mechanism designed to harden end-to-end malware detectors by leveraging masking at the byte level. This mechanism operates by generating multiple masked versions of the input file, independently classifying each version, and then applying a threshold-based voting mechanism to produce the final classification. Key to this defense is a deterministic masking strategy that systematically strides a mask across the entire input file. Unlike randomized smoothing defenses, which randomly mask or delete bytes, this structured approach ensures coverage of the file over successive versions. In the best-case scenario, this strategy fully occludes the adversarial payload, effectively neutralizing its influence on the model's decision. In the worst-case scenario, it partially occludes the adversarial payload, reducing its impact on the model's predictions. By occluding the adversarial payload in one or more masked versions, this defense ensures that some input versions remain representative of the file's original intent, allowing the voting mechanism to suppress the influence of the adversarial payload. Results achieved on the EMBER and BODMAS datasets demonstrate the suitability of our defense, outperforming randomized and (de)randomized smoothing defenses against adversarial examples generated with a wide range of functionality-preserving manipulations while maintaining high accuracy on clean examples. 
 + oai:arXiv.org:2512.09883v1
 cs.CR
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Daniel Gibert, Felip Many\`a

 A Task Parallel Orthonormalization Multigrid Method For Multiphase Elliptic Problems
 https://arxiv.org/abs/2512.08728
 arXiv:2512.08728v1 Announce Type: new
-Abstract: Multigrid methods have been a popular approach for solving linear systems arising from the discretization of partial differential equations (PDEs) for several decades. They are particularly effective for accelerating convergence rates with optimal complexity in terms of both time and space. K-cycle orthonormalization multigrid is a robust variant of the multigrid method that combines the efficiency of multigrid with the robustness of Krylov-type residual minimizations for problems with strong anisotropies. However, traditional implementations of K-cycle orthonormalization multigrid often rely on bulk-synchronous parallelism, which can limit scalability on modern high-performance computing (HPC) systems. This paper presents a task-parallel variant of the K-cycle orthonormalization multigrid method that leverages asynchronous execution to improve scalability and performance on large-scale parallel systems. 
- oai:arXiv.org:2512.08728v1 - math.NA - cs.DC - cs.NA - Wed, 10 Dec 2025 00:00:00 -0500 + HPM-KD: Hierarchical Progressive Multi-Teacher Framework for Knowledge Distillation and Efficient Model Compression + https://arxiv.org/abs/2512.09886 + arXiv:2512.09886v1 Announce Type: new +Abstract: Knowledge Distillation (KD) has emerged as a promising technique for model compression but faces critical limitations: (1) sensitivity to hyperparameters requiring extensive manual tuning, (2) capacity gap when distilling from very large teachers to small students, (3) suboptimal coordination in multi-teacher scenarios, and (4) inefficient use of computational resources. We present \textbf{HPM-KD}, a framework that integrates six synergistic components: (i) Adaptive Configuration Manager via meta-learning that eliminates manual hyperparameter tuning, (ii) Progressive Distillation Chain with automatically determined intermediate models, (iii) Attention-Weighted Multi-Teacher Ensemble that learns dynamic per-sample weights, (iv) Meta-Learned Temperature Scheduler that adapts temperature throughout training, (v) Parallel Processing Pipeline with intelligent load balancing, and (vi) Shared Optimization Memory for cross-experiment reuse. Experiments on CIFAR-10, CIFAR-100, and tabular datasets demonstrate that HPM-KD: achieves 10x-15x compression while maintaining 85% accuracy retention, eliminates the need for manual tuning, and reduces training time by 30-40% via parallelization. Ablation studies confirm independent contribution of each component (0.10-0.98 pp). HPM-KD is available as part of the open-source DeepBridge library. 
+ oai:arXiv.org:2512.09886v1 + cs.LG + stat.AP + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-sa/4.0/ - Teoman Toprak, Florian Kummer - - - SegEarth-OV3: Exploring SAM 3 for Open-Vocabulary Semantic Segmentation in Remote Sensing Images - https://arxiv.org/abs/2512.08730 - arXiv:2512.08730v1 Announce Type: new -Abstract: Most existing methods for training-free Open-Vocabulary Semantic Segmentation (OVSS) are based on CLIP. While these approaches have made progress, they often face challenges in precise localization or require complex pipelines to combine separate modules, especially in remote sensing scenarios where numerous dense and small targets are present. Recently, Segment Anything Model 3 (SAM 3) was proposed, unifying segmentation and recognition in a promptable framework. In this paper, we present a preliminary exploration of applying SAM 3 to the remote sensing OVSS task without any training. First, we implement a mask fusion strategy that combines the outputs from SAM 3's semantic segmentation head and the Transformer decoder (instance head). This allows us to leverage the strengths of both heads for better land coverage. Second, we utilize the presence score from the presence head to filter out categories that do not exist in the scene, reducing false positives caused by the vast vocabulary sizes and patch-level processing in geospatial scenes. We evaluate our method on extensive remote sensing datasets. Experiments show that this simple adaptation achieves promising performance, demonstrating the potential of SAM 3 for remote sensing OVSS. Our code is released at https://github.com/earth-insights/SegEarth-OV-3. 
 - oai:arXiv.org:2512.08730v1
 cs.CV
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Kaiyu Li, Shengqi Zhang, Yupeng Deng, Zhi Wang, Deyu Meng, Xiangyong Cao

 LaMoSys3.5D: Enabling 3.5D-IC-Based Large Language Model Inference Serving Systems via Hardware/Software Co-Design
 https://arxiv.org/abs/2512.08731
 arXiv:2512.08731v1 Announce Type: new
-Abstract: The success of large language models (LLMs) amplifies the need for high-throughput, energy-efficient inference at scale. 3D-DRAM-based accelerators provide high memory bandwidth and therefore an opportunity to accelerate the bandwidth-bound decode phase. However, how to adequately balance compute density for prefill with bandwidth capacity for decode remains open. Moreover, most prior designs do not target end-to-end serving, leaving the co-design of dataflow, parallel mapping, and scheduling underexplored. To bridge the gap, we present LaMoSys3.5D, to our knowledge the first scalable 3.5D-IC architecture for LLM serving. LaMoSys3.5D composes heterogeneous 3D-DRAM chiplets on a 2.5D interposer: compute-rich chiplets for prefill and bandwidth-capacity-rich chiplets for decode. To realize efficient serving, we adopt a hardware-software co-design spanning dataflow and parallel mapping, and introduce a thermal-aware modeling and hierarchical design-space exploration framework. Across diverse LLMs and workloads, LaMoSys3.5D improves throughput-per-watt over DGX-A100 systems by 62% and achieves a 4.87x better end-to-end latency geomean versus prior 3D designs. We further distill intriguing design guidelines for 3.5D-IC architectures and end-to-end inference serving. 
 - oai:arXiv.org:2512.08731v1
 eess.SY
 cs.SY
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Qipan Wang, Zhe Zhang, Shuangchen Li, Hongzhong Zheng, Zheng Liang, Yibo Lin, Runsheng Wang, Ru Huang

 Analysis of Dirichlet Energies as Over-smoothing Measures
 https://arxiv.org/abs/2512.09890
 arXiv:2512.09890v1 Announce Type: new
+Abstract: We analyze the distinctions between two functionals often used as over-smoothing measures: the Dirichlet energies induced by the unnormalized graph Laplacian and the normalized graph Laplacian. We demonstrate that the latter fails to satisfy the axiomatic definition of a node-similarity measure proposed by Rusch \textit{et al.} By formalizing fundamental spectral properties of these two definitions, we highlight critical distinctions necessary to select the metric that is spectrally compatible with the GNN architecture, thereby resolving ambiguities in monitoring the dynamics.
 + oai:arXiv.org:2512.09890v1
 cs.LG
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-sa/4.0/
 Anna Bison, Alessandro Sperduti

 Neural Ordinary Differential Equations for Simulating Metabolic Pathway Dynamics from Time-Series Multiomics Data
 https://arxiv.org/abs/2512.08732
 arXiv:2512.08732v1 Announce Type: new
-Abstract: The advancement of human healthspan and bioengineering relies heavily on predicting the behavior of complex biological systems. While high-throughput multiomics data is becoming increasingly abundant, converting this data into actionable predictive models remains a bottleneck. High-capacity, data-driven simulation systems are critical in this landscape; unlike classical mechanistic models restricted by prior knowledge, these architectures can infer latent interactions directly from observational data, allowing for the simulation of temporal trajectories and the anticipation of downstream intervention effects in personalized medicine and synthetic biology. 
To address this challenge, we introduce Neural Ordinary Differential Equations (NODEs) as a dynamic framework for learning the complex interplay between the proteome and metabolome. We applied this framework to time-series data derived from engineered Escherichia coli strains, modeling the continuous dynamics of metabolic pathways. The proposed NODE architecture demonstrates superior performance in capturing system dynamics compared to traditional machine learning pipelines. Our results show a greater than 90% improvement in root mean squared error over baselines across both Limonene (up to 94.38% improvement) and Isopentenol (up to 97.65% improvement) pathway datasets. Furthermore, the NODE models demonstrated a 1000x acceleration in inference time, establishing them as a scalable, high-fidelity tool for the next generation of metabolic engineering and biological discovery. - oai:arXiv.org:2512.08732v1 + Provably Learning from Modern Language Models via Low Logit Rank + https://arxiv.org/abs/2512.09892 + arXiv:2512.09892v1 Announce Type: new +Abstract: While modern language models and their inner workings are incredibly complex, recent work (Golowich, Liu & Shetty; 2025) has proposed a simple and potentially tractable abstraction for them through the observation that empirically, these language models all seem to have approximately low logit rank. Roughly, this means that a matrix formed by the model's log probabilities of various tokens conditioned on certain sequences of tokens is well approximated by a low rank matrix. + In this paper, our focus is on understanding how this structure can be exploited algorithmically for obtaining provable learning guarantees. Since low logit rank models can encode hard-to-learn distributions such as noisy parities, we study a query learning model with logit queries that reflects the access model for common APIs. Our main result is an efficient algorithm for learning any approximately low logit rank model from queries. 
We emphasize that our structural assumption closely reflects the behavior that is empirically observed in modern language models. Thus, our result gives what we believe is the first end-to-end learning guarantee for a generative model that plausibly captures modern language models. + oai:arXiv.org:2512.09892v1 cs.LG - q-bio.SC - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + cs.DS + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Udesh Habaraduwa, Andrei Lixandru + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Noah Golowich, Allen Liu, Abhishek Shetty - Mitigating Individual Skin Tone Bias in Skin Lesion Classification through Distribution-Aware Reweighting - https://arxiv.org/abs/2512.08733 - arXiv:2512.08733v1 Announce Type: new -Abstract: Skin color has historically been a focal point of discrimination, yet fairness research in machine learning for medical imaging often relies on coarse subgroup categories, overlooking individual-level variations. Such group-based approaches risk obscuring biases faced by outliers within subgroups. This study introduces a distribution-based framework for evaluating and mitigating individual fairness in skin lesion classification. We treat skin tone as a continuous attribute rather than a categorical label, and employ kernel density estimation (KDE) to model its distribution. We further compare twelve statistical distance metrics to quantify disparities between skin tone distributions and propose a distance-based reweighting (DRW) loss function to correct underrepresentation in minority tones. Experiments across CNN and Transformer models demonstrate: (i) the limitations of categorical reweighting in capturing individual-level disparities, and (ii) the superior performance of distribution-based reweighting, particularly with Fidelity Similarity (FS), Wasserstein Distance (WD), Hellinger Metric (HM), and Harmonic Mean Similarity (HS). 
These findings establish a robust methodology for advancing fairness at the individual level in dermatological AI systems, and highlight broader implications for sensitive continuous attributes in medical image analysis. 
 - oai:arXiv.org:2512.08733v1
 cs.CV
 cs.AI
 cs.LG
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Kuniko Paxton, Zeinab Dehghani, Koorosh Aslansefat, Dhavalkumar Thakker, Yiannis Papadopoulos

 Exploring Protein Language Model Architecture-Induced Biases for Antibody Comprehension
 https://arxiv.org/abs/2512.09894
 arXiv:2512.09894v1 Announce Type: new
+Abstract: Recent advances in protein language models (PLMs) have demonstrated remarkable capabilities in understanding protein sequences. However, the extent to which different model architectures capture antibody-specific biological properties remains unexplored. In this work, we systematically investigate how architectural choices in PLMs influence their ability to comprehend antibody sequence characteristics and functions. We evaluate three state-of-the-art PLMs--AntiBERTa, BioBERT, and ESM2--against a general-purpose language model (GPT-2) baseline on antibody target specificity prediction tasks. Our results demonstrate that while all PLMs achieve high classification accuracy, they exhibit distinct biases in capturing biological features such as V gene usage, somatic hypermutation patterns, and isotype information. Through attention attribution analysis, we show that antibody-specific models like AntiBERTa naturally learn to focus on complementarity-determining regions (CDRs), while general protein models benefit significantly from explicit CDR-focused training strategies. These findings provide insights into the relationship between model architecture and biological feature extraction, offering valuable guidance for future PLM development in computational antibody design. 
+ oai:arXiv.org:2512.09894v1 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Kuniko Paxton, Zeinab Dehghani, Koorosh Aslansefat, Dhavalkumar Thakker, Yiannis Papadopoulos + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Mengren (Bill), Liu (Jason), Yixiang Zhang (Jason), Yiming (Jason), Zhang - Insured Agents: A Decentralized Trust Insurance Mechanism for Agentic Economy - https://arxiv.org/abs/2512.08737 - arXiv:2512.08737v1 Announce Type: new -Abstract: The emerging "agentic web" envisions large populations of autonomous agents coordinating, transacting, and delegating across open networks. Yet many agent communication and commerce protocols treat agents as low-cost identities, despite the empirical reality that LLM agents remain unreliable, hallucinated, manipulable, and vulnerable to prompt-injection and tool-abuse. A natural response is "agents-at-stake": binding economically meaningful, slashable collateral to persistent identities and adjudicating misbehavior with verifiable evidence. However, heterogeneous tasks make universal verification brittle and centralization-prone, while traditional reputation struggles under rapid model drift and opaque internal states. We propose a protocol-native alternative: insured agents. Specialized insurer agents post stake on behalf of operational agents in exchange for premiums, and receive privileged, privacy-preserving audit access via TEEs to assess claims. A hierarchical insurer market calibrates stake through pricing, decentralizes verification via competitive underwriting, and yields incentive-compatible dispute resolution. 
 - oai:arXiv.org:2512.08737v1
 cs.CY
 cs.MA
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Botao 'Amber' Hu, Bangdao Chen

 Human-in-the-Loop and AI: Crowdsourcing Metadata Vocabulary for Materials Science
 https://arxiv.org/abs/2512.09895
 arXiv:2512.09895v1 Announce Type: new
+Abstract: Metadata vocabularies are essential for advancing FAIR and FARR data principles, but their development is constrained by limited human resources and inconsistent standardization practices. This paper introduces MatSci-YAMZ, a platform that integrates artificial intelligence (AI) and human-in-the-loop (HILT), including crowdsourcing, to support metadata vocabulary development. The paper reports on a proof-of-concept use case evaluating the AI-HILT model in materials science, a highly interdisciplinary domain. Six (6) participants affiliated with the NSF Institute for Data-Driven Dynamical Design (ID4) engaged with the MatSci-YAMZ platform over several weeks, contributing term definitions and providing examples to prompt the refinement of AI-generated definitions. Nineteen (19) AI-generated definitions were successfully created, with iterative feedback loops demonstrating the feasibility of AI-HILT refinement. Findings confirm the feasibility of the AI-HILT model, highlighting 1) a successful proof of concept, 2) alignment with FAIR and open-science principles, 3) a research protocol to guide future studies, and 4) the potential for scalability across domains. Overall, MatSci-YAMZ's underlying model has the capacity to enhance semantic transparency and reduce the time required for consensus building and metadata vocabulary development.
 + oai:arXiv.org:2512.09895v1
 cs.AI
 cs.DL
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-sa/4.0/
 Jane Greenberg, Scott McClellan, Addy Ireland, Robert Sammarco, Colton Gerber, Christopher B. 
Rauch, Mat Kelly, John Kunze, Yuan An, Eric Toberer - Pose-Based Sign Language Spotting via an End-to-End Encoder Architecture - https://arxiv.org/abs/2512.08738 - arXiv:2512.08738v1 Announce Type: new -Abstract: Automatic Sign Language Recognition (ASLR) has emerged as a vital field for bridging the gap between deaf and hearing communities. However, the problem of sign-to-sign retrieval or detecting a specific sign within a sequence of continuous signs remains largely unexplored. We define this novel task as Sign Language Spotting. In this paper, we present a first step toward sign language retrieval by addressing the challenge of detecting the presence or absence of a query sign video within a sentence-level gloss or sign video. Unlike conventional approaches that rely on intermediate gloss recognition or text-based matching, we propose an end-to-end model that directly operates on pose keypoints extracted from sign videos. Our architecture employs an encoder-only backbone with a binary classification head to determine whether the query sign appears within the target sequence. By focusing on pose representations instead of raw RGB frames, our method significantly reduces computational cost and mitigates visual noise. We evaluate our approach on the Word Presence Prediction dataset from the WSLP 2025 shared task, achieving 61.88\% accuracy and 60.00\% F1-score. These results demonstrate the effectiveness of our pose-based framework for Sign Language Spotting, establishing a strong foundation for future research in automatic sign language retrieval and verification. 
Code is available at https://github.com/EbimoJohnny/Pose-Based-Sign-Language-Spotting - oai:arXiv.org:2512.08738v1 - cs.CV + SCOPE: Language Models as One-Time Teacher for Hierarchical Planning in Text Environments + https://arxiv.org/abs/2512.09897 + arXiv:2512.09897v1 Announce Type: new +Abstract: Long-term planning in complex, text-based environments presents significant challenges due to open-ended action spaces, ambiguous observations, and sparse feedback. Recent research suggests that large language models (LLMs) encode rich semantic knowledge about the world, which can be valuable for guiding agents in high-level reasoning and planning across both embodied and purely textual settings. However, existing approaches often depend heavily on querying LLMs during training and inference, making them computationally expensive and difficult to deploy efficiently. In addition, these methods typically employ a pretrained, unaltered LLM whose parameters remain fixed throughout training, providing no opportunity for adaptation to the target task. To address these limitations, we introduce SCOPE (Subgoal-COnditioned Pretraining for Efficient planning), a one-shot hierarchical planner that leverages LLM-generated subgoals only at initialization to pretrain a lightweight student model. Unlike prior approaches that distill LLM knowledge by repeatedly prompting the model to adaptively generate subgoals during training, our method derives subgoals directly from example trajectories. This design removes the need for repeated LLM queries, significantly improving efficiency, though at the cost of reduced explainability and potentially suboptimal subgoals. Despite their suboptimality, our results on the TextCraft environment show that LLM-generated subgoals can still serve as a strong starting point for hierarchical goal decomposition in text-based planning tasks. 
Compared to the LLM-based hierarchical agent ADaPT (Prasad et al., 2024), which achieves a 0.52 success rate, our method reaches 0.56 and reduces inference time from 164.4 seconds to just 3.0 seconds.
 + oai:arXiv.org:2512.09897v1
 cs.AI
 cs.CL
 Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Haoye Lu, Pavan Seshadri, Kaheer Suleman

 Deconstructing the Dual Black Box: A Plug-and-Play Cognitive Framework for Human-AI Collaborative Enhancement and Its Implications for AI Governance
 https://arxiv.org/abs/2512.08740
 arXiv:2512.08740v1 Announce Type: new
-Abstract: Currently, there exists a fundamental divide between the "cognitive black box" (implicit intuition) of human experts and the "computational black box" (untrustworthy decision-making) of artificial intelligence (AI). This paper proposes a new paradigm of "human-AI collaborative cognitive enhancement," aiming to transform the dual black boxes into a composable, auditable, and extensible "functional white-box" system through structured "meta-interaction." The core breakthrough lies in the "plug-and-play cognitive framework"--a computable knowledge package that can be extracted from expert dialogues and loaded into the Recursive Adversarial Meta-Thinking Network (RAMTN). This enables expert thinking, such as medical diagnostic logic and teaching intuition, to be converted into reusable and scalable public assets, realizing a paradigm shift from "AI as a tool" to "AI as a thinking partner." This work not only provides the first engineering proof for "cognitive equity" but also opens up a new path for AI governance: constructing a verifiable and intervenable governance paradigm through "transparency of interaction protocols" rather than prying into the internal mechanisms of models. The framework is open-sourced to promote technology for good and cognitive inclusion. 
This paper is an independent exploratory research conducted by the author. All content presented, including the theoretical framework (RAMTN), methodology (meta-interaction), system implementation, and case validation, constitutes the author's individual research achievements. - oai:arXiv.org:2512.08740v1 + Visual Heading Prediction for Autonomous Aerial Vehicles + https://arxiv.org/abs/2512.09898 + arXiv:2512.09898v1 Announce Type: new +Abstract: The integration of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) is increasingly central to the development of intelligent autonomous systems for applications such as search and rescue, environmental monitoring, and logistics. However, precise coordination between these platforms in real-time scenarios presents major challenges, particularly when external localization infrastructure such as GPS or GNSS is unavailable or degraded [1]. This paper proposes a vision-based, data-driven framework for real-time UAV-UGV integration, with a focus on robust UGV detection and heading angle prediction for navigation and coordination. The system employs a fine-tuned YOLOv5 model to detect UGVs and extract bounding box features, which are then used by a lightweight artificial neural network (ANN) to estimate the UAV's required heading angle. A VICON motion capture system was used to generate ground-truth data during training, resulting in a dataset of over 13,000 annotated images collected in a controlled lab environment. The trained ANN achieves a mean absolute error of 0.1506{\deg} and a root mean squared error of 0.1957{\deg}, offering accurate heading angle predictions using only monocular camera inputs. Experimental evaluations achieve 95% accuracy in UGV detection. This work contributes a vision-based, infrastructure-independent solution that demonstrates strong potential for deployment in GPS/GNSS-denied environments, supporting reliable multi-agent coordination under realistic dynamic conditions. 
A demonstration video showcasing the system's real-time performance, including UGV detection, heading angle prediction, and UAV alignment under dynamic conditions, is available at: https://github.com/Kooroshraf/UAV-UGV-Integration + oai:arXiv.org:2512.09898v1 + cs.RO cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CV + cs.MA + cs.SY + eess.SY + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Yiming Lu + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Reza Ahmari, Ahmad Mohammadi, Vahid Hemmati, Mohammed Mynuddin, Parham Kebria, Mahmoud Nabil Mahmoud, Xiaohong Yuan, Abdollah Homaifar - Parallel Batch Dynamic Vertex Coloring in $O(\log \Delta)$ Amortized Update Time - https://arxiv.org/abs/2512.08742 - arXiv:2512.08742v1 Announce Type: new -Abstract: We present the first parallel batch-dynamic algorithm for maintaining a proper $(\Delta + 1)$-vertex coloring. Our approach builds on a new sequential dynamic algorithm inspired by the work of Bhattacharya et al. (SODA'18). The resulting randomized algorithm achieves $O(\log \Delta)$ expected amortized update time and, for any batch of $b$ updates, has parallel span $O(\operatorname{polylog} b + \operatorname{polylog} n)$ with high probability. - oai:arXiv.org:2512.08742v1 - cs.DS - cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 + Near-Linear and Parameterized Approximations for Maximum Cliques in Disk Graphs + https://arxiv.org/abs/2512.09899 + arXiv:2512.09899v1 Announce Type: new +Abstract: A \emph{disk graph} is the intersection graph of (closed) disks in the plane. We consider the classic problem of finding a maximum clique in a disk graph. For general disk graphs, the complexity of this problem is still open, but for unit disk graphs, it is well known to be in P. The currently fastest algorithm runs in time $O(n^{7/3+ o(1)})$, where $n$ denotes the number of disks~\cite{EspenantKM23, keil_et_al:LIPIcs.SoCG.2025.63}. 
Moreover, for the case of disk graphs with $t$ distinct radii, the problem has also recently been shown to be in XP. More specifically, it is solvable in time $O^*(n^{2t})$~\cite{keil_et_al:LIPIcs.SoCG.2025.63}. In this paper, we present algorithms with improved running times by allowing for approximate solutions and by using randomization: + (i) for unit disk graphs, we give an algorithm that, with constant success probability, computes a $(1-\varepsilon)$-approximate maximum clique in expected time $\tilde{O}(n/\varepsilon^2)$; and + (ii) for disk graphs with $t$ distinct radii, we give a parameterized approximation scheme that, with constant success probability, computes a $(1-\varepsilon)$-approximate maximum clique in expected time $\tilde{O}(f(t)\cdot (1/\varepsilon)^{O(t)} \cdot n)$. + oai:arXiv.org:2512.09899v1 + cs.CG + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Jie Gao, Pawel Gawrychowski, Panos Giannopoulos, Wolfgang Mulzer, Satyam Singh, Frank Staals, Meirav Zehavi
To address this gap, we outline key research directions -- spanning dataset construction, evaluation, training paradigms, and safety considerations -- for building FMs with native multi-agent intelligence. - oai:arXiv.org:2512.08743v1 - cs.AI - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + Link-Sharing Backpressure Routing In Wireless Multi-Hop Networks + https://arxiv.org/abs/2512.09902 + arXiv:2512.09902v1 Announce Type: new +Abstract: Backpressure (BP) routing and scheduling is an established resource allocation method for wireless multi-hop networks, noted for its fully distributed operation and maximum queue stability. Recent advances in shortest path-biased BP routing (SP-BP) mitigate shortcomings such as slow startup and random walks, yet exclusive link-level commodity selection still causes the last-packet problem and bandwidth underutilization. By revisiting the Lyapunov drift theory underlying BP, we show that the legacy exclusive commodity selection is unnecessary, and propose a Maximum Utility (MaxU) link-sharing method to expand its performance envelope without increasing control message overhead. Numerical results show that MaxU SP-BP substantially mitigates the last-packet problem and slightly expands the network capacity region.
However, developing accurate detection and segmentation models requires large, precisely annotated datasets that are costly to produce. Synthetic data provides a scalable alternative, yet often lacks sufficient realism to generalize to real-world scenarios. This paper presents a novel workflow that integrates 3D rendering in Blender with a constrained diffusion model to automatically generate high-quality annotated, photorealistic synthetic images of Agaricus Bisporus mushrooms. This approach preserves full control over 3D scene configuration and annotations while achieving photorealism without the need for specialized computer graphics expertise. We release two synthetic datasets (each containing 6,000 images depicting over 250k mushroom instances) and evaluate Mask R-CNN models trained on them in a zero-shot setting. When tested on two independent real-world datasets (including a newly collected benchmark), our method achieves state-of-the-art segmentation performance (F1 = 0.859 on M18K), despite using only synthetic training data. Although the approach is demonstrated on Agaricus Bisporus mushrooms, the proposed pipeline can be readily adapted to other mushroom species or to other agricultural domains, such as fruit and leaf detection. - oai:arXiv.org:2512.08747v1 + YOPO-Nav: Visual Navigation using 3DGS Graphs from One-Pass Videos + https://arxiv.org/abs/2512.09903 + arXiv:2512.09903v1 Announce Type: new +Abstract: Visual navigation has emerged as a practical alternative to traditional robotic navigation pipelines that rely on detailed mapping and path planning. However, constructing and maintaining 3D maps is often computationally expensive and memory-intensive. We address the problem of visual navigation when exploration videos of a large environment are available. The videos serve as a visual reference, allowing a robot to retrace the explored trajectories without relying on metric maps. 
Our proposed method, YOPO-Nav (You Only Pass Once), encodes an environment into a compact spatial representation composed of interconnected local 3D Gaussian Splatting (3DGS) models. During navigation, the framework aligns the robot's current visual observation with this representation and predicts actions that guide it back toward the demonstrated trajectory. YOPO-Nav employs a hierarchical design: a visual place recognition (VPR) module provides coarse localization, while the local 3DGS models refine the goal and intermediate poses to generate control actions. To evaluate our approach, we introduce the YOPO-Campus dataset, comprising 4 hours of egocentric video and robot controller inputs from over 6 km of human-teleoperated robot trajectories. We benchmark recent visual navigation methods on trajectories from YOPO-Campus using a Clearpath Jackal robot. Experimental results show YOPO-Nav provides excellent performance in image-goal navigation for real-world scenes on a physical robot. The dataset and code will be made publicly available for visual navigation and scene representation research. + oai:arXiv.org:2512.09903v1 + cs.RO cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Art\'ur I. K\'aroly, P\'eter Galambos + Ryan Meegan, Adam D'Souza, Bryan Bo Cao, Shubham Jain, Kristin Dana - Skewness-Guided Pruning of Multimodal Swin Transformers for Federated Skin Lesion Classification on Edge Devices - https://arxiv.org/abs/2512.08751 - arXiv:2512.08751v1 Announce Type: new -Abstract: In recent years, high-performance computer vision models have achieved remarkable success in medical imaging, with some skin lesion classification systems even surpassing dermatology specialists in diagnostic accuracy. However, such models are computationally intensive and large in size, making them unsuitable for deployment on edge devices. 
In addition, strict privacy constraints hinder centralized data management, motivating the adoption of Federated Learning (FL). To address these challenges, this study proposes a skewness-guided pruning method that selectively prunes the Multi-Head Self-Attention and Multi-Layer Perceptron layers of a multimodal Swin Transformer based on the statistical skewness of their output distributions. The proposed method was validated in a horizontal FL environment and shown to maintain performance while substantially reducing model complexity. Experiments on the compact Swin Transformer demonstrate approximately 36\% model size reduction with no loss in accuracy. These findings highlight the feasibility of achieving efficient model compression and privacy-preserving distributed learning for multimodal medical AI on edge devices. - oai:arXiv.org:2512.08751v1 + VisualActBench: Can VLMs See and Act like a Human? + https://arxiv.org/abs/2512.09907 + arXiv:2512.09907v1 Announce Type: new +Abstract: Vision-Language Models (VLMs) have achieved impressive progress in perceiving and describing visual environments. However, their ability to proactively reason and act based solely on visual inputs, without explicit textual prompts, remains underexplored. We introduce a new task, Visual Action Reasoning, and propose VisualActBench, a large-scale benchmark comprising 1,074 videos and 3,733 human-annotated actions across four real-world scenarios. Each action is labeled with an Action Prioritization Level (APL) and a proactive-reactive type to assess models' human-aligned reasoning and value sensitivity. We evaluate 29 VLMs on VisualActBench and find that while frontier models like GPT4o demonstrate relatively strong performance, a significant gap remains compared to human-level reasoning, particularly in generating proactive, high-priority actions. 
Our results highlight limitations in current VLMs' ability to interpret complex context, anticipate outcomes, and align with human decision-making frameworks. VisualActBench establishes a comprehensive foundation for assessing and improving the real-world readiness of proactive, vision-centric AI agents. + oai:arXiv.org:2512.09907v1 cs.CV - cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Kuniko Paxton, Koorosh Aslansefat, Dhavalkumar Thakker, Yiannis Papadopoulos + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Daoan Zhang, Pai Liu, Xiaofei Zhou, Yuan Ge, Guangchen Lan, Jing Bi, Christopher Brinton, Ehsan Hoque, Jiebo Luo - IoT-based Cost-Effective Fruit Quality Monitoring System using Electronic Nose - https://arxiv.org/abs/2512.08753 - arXiv:2512.08753v1 Announce Type: new -Abstract: Post-harvest losses due to subjective quality assessment cause significant damage to the economy and food safety, especially in countries like Bangladesh. To mitigate such damages, objective decision-making backed by scientific methods is necessary. An IoT-based, cost-effective quality monitoring system can provide a solution by going beyond subjective quality monitoring and decision-making practices. Here, we propose a low-power, cost-effective fruit quality monitoring system with an array of MQ gas sensors, which can be used as an electronic nose. We track the volatile gas emissions, specifically ethanol, methane, and ammonia, encompassing both ripening and decomposition for a set of bananas. Based on the gas concentration thresholds, we develop a mathematical model to accurately assess fruit quality. We also integrate this information into a dashboard for prompt decision-making and monitoring to make it useful to the farmers. This approach has the potential to reduce economic losses, enhance food safety, and provide scalable solutions for the supply chain. 
- oai:arXiv.org:2512.08753v1 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Bayesian Networks, Markov Networks, Moralisation, Triangulation: a Categorical Perspective + https://arxiv.org/abs/2512.09908 + arXiv:2512.09908v1 Announce Type: new +Abstract: Moralisation and Triangulation are transformations that allow switching between different ways of factoring a probability distribution into a graphical model. Moralisation allows one to view a Bayesian network (a directed model) as a Markov network (an undirected model), whereas triangulation addresses the opposite direction. We present a categorical framework where these transformations are modelled as functors between a category of Bayesian networks and one of Markov networks. The two kinds of network (the objects of these categories) are themselves represented as functors from a `syntax' domain to a `semantics' codomain. Notably, moralisation and triangulation can be defined inductively on such syntax via functor pre-composition. Moreover, while moralisation is fully syntactic, triangulation relies on semantics. This leads to a discussion of the variable elimination algorithm, reinterpreted here as a functor in its own right, that splits the triangulation procedure in two: one purely syntactic, the other purely semantic. This approach introduces a functorial perspective into the theory of probabilistic graphical models, which highlights the distinctions between syntactic and semantic modifications. 
+ oai:arXiv.org:2512.09908v1 + cs.AI + cs.LO + math.CT + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Anindya Bhattacharjee, Nittya Ananda Biswas, Khondakar Ashik Shahriar, Kawsain Bin Salim + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Antonio Lorenzin, Fabio Zanasi - A Multi-Robot Platform for Robotic Triage Combining Onboard Sensing and Foundation Models - https://arxiv.org/abs/2512.08754 - arXiv:2512.08754v1 Announce Type: new -Abstract: This report presents a heterogeneous robotic system designed for remote primary triage in mass-casualty incidents (MCIs). The system employs a coordinated air-ground team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) to locate victims, assess their injuries, and prioritize medical assistance without risking the lives of first responders. The UAV identify and provide overhead views of casualties, while UGVs equipped with specialized sensors measure vital signs and detect and localize physical injuries. Unlike previous work that focused on exploration or limited medical evaluation, this system addresses the complete triage process: victim localization, vital sign measurement, injury severity classification, mental status assessment, and data consolidation for first responders. Developed as part of the DARPA Triage Challenge, this approach demonstrates how multi-robot systems can augment human capabilities in disaster response scenarios to maximize lives saved. - oai:arXiv.org:2512.08754v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + STACHE: Local Black-Box Explanations for Reinforcement Learning Policies + https://arxiv.org/abs/2512.09909 + arXiv:2512.09909v1 Announce Type: new +Abstract: Reinforcement learning agents often behave unexpectedly in sparse-reward or safety-critical environments, creating a strong need for reliable debugging and verification tools. 
In this paper, we propose STACHE, a comprehensive framework for generating local, black-box explanations for an agent's specific action within discrete Markov games. Our method produces a Composite Explanation consisting of two complementary components: (1) a Robustness Region, the connected neighborhood of states where the agent's action remains invariant, and (2) Minimal Counterfactuals, the smallest state perturbations required to alter that decision. By exploiting the structure of factored state spaces, we introduce an exact, search-based algorithm that circumvents the fidelity gaps of surrogate models. Empirical validation on Gymnasium environments demonstrates that our framework not only explains policy actions, but also effectively captures the evolution of policy logic during training - from erratic, unstable behavior to optimized, robust strategies - providing actionable insights into agent sensitivity and decision boundaries. + oai:arXiv.org:2512.09909v1 + cs.LG + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jason Hughes, Marcel Hussing, Edward Zhang, Shenbagaraj Kannapiran, Joshua Caswell, Kenneth Chaney, Ruichen Deng, Michaela Feehery, Agelos Kratimenos, Yi Fan Li, Britny Major, Ethan Sanchez, Sumukh Shrote, Youkang Wang, Jeremy Wang, Daudi Zein, Luying Zhang, Ruijun Zhang, Alex Zhou, Tenzi Zhouga, Jeremy Cannon, Zaffir Qasim, Jay Yelon, Fernando Cladera, Kostas Daniilidis, Camillo J. Taylor, Eric Eaton + Andrew Elashkin, Orna Grumberg - Performance Comparison of Aerial RIS and STAR-RIS in 3D Wireless Environments - https://arxiv.org/abs/2512.08755 - arXiv:2512.08755v1 Announce Type: new -Abstract: Reconfigurable intelligent surface (RIS) and simultaneously transmitting and reflecting RIS (STAR-RIS) have emerged as key enablers for enhancing wireless coverage and capacity in next-generation networks. 
When mounted on unmanned aerial vehicles (UAVs), they benefit from flexible deployment and improved line-of-sight conditions. Despite their promising potential, a comprehensive performance comparison between aerial RIS and STAR-RIS architectures has not been thoroughly investigated. This letter presents a detailed performance comparison between aerial RIS and STAR-RIS in three-dimensional wireless environments. Accurate channel models incorporating directional radiation patterns are established, and the influence of deployment altitude and orientation is thoroughly examined. To optimize the system sum-rate, we formulate joint optimization problems for both architectures and propose an efficient solution based on the weighted minimum mean square error and block coordinate descent algorithms. Simulation results reveal that STAR-RIS outperforms RIS in low-altitude scenarios due to its full-space coverage capability, whereas RIS delivers better performance near the base station at higher altitudes. The findings provide practical insights for the deployment of aerial intelligent surfaces in future 6G communication systems. - oai:arXiv.org:2512.08755v1 + Efficient Continual Learning in Neural Machine Translation: A Low-Rank Adaptation Approach + https://arxiv.org/abs/2512.09910 + arXiv:2512.09910v1 Announce Type: new +Abstract: Continual learning in Neural Machine Translation (NMT) faces the dual challenges of catastrophic forgetting and the high computational cost of retraining. This study establishes Low-Rank Adaptation (LoRA) as a parameter-efficient framework to address these challenges in dedicated NMT architectures. We first demonstrate that LoRA-based fine-tuning adapts NMT models to new languages and domains with performance on par with full-parameter techniques, while utilizing only a fraction of the parameter space. Second, we propose an interactive adaptation method using a calibrated linear combination of LoRA modules. 
This approach functions as a gate-free mixture of experts, enabling real-time, user-controllable adjustments to domain and style without retraining. Finally, to mitigate catastrophic forgetting, we introduce a novel gradient-based regularization strategy specifically designed for low-rank decomposition matrices. Unlike methods that regularize the full parameter set, our approach weights the penalty on the low-rank updates using historical gradient information. Experimental results indicate that this strategy efficiently preserves prior domain knowledge while facilitating the acquisition of new tasks, offering a scalable paradigm for interactive and continual NMT. + oai:arXiv.org:2512.09910v1 + cs.CL cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Dongdong Yang, Bin Li, Jiguang He - - - Explainable Learning Based Regularization of Inverse Problems - https://arxiv.org/abs/2512.08758 - arXiv:2512.08758v1 Announce Type: new -Abstract: Machine learning techniques for the solution of inverse problems have become an attractive approach in the last decade, while their theoretical foundations are still in their infancy. In this chapter we want to pursue the study of regularization properties, robustness, convergence rates, and structure of regularizers for inverse problems obtained from different learning paradigms. For this sake we study simple architectures that are explainable in the sense that they allow for a theoretical analysis also in the infinite-dimensional limit. In particular we will advance the study of spectral architectures with new results on convergence rates highlighting the role of the smoothness in the training data set, and a study of adversarial robustness. We can show that adversarial training is actually a convergent regularization method. 
Moreover, we discuss extensions to frame systems and CNN-type architectures for variational regularizers, where we obtain some results on their structure by carefully designed numerical experiments. - oai:arXiv.org:2512.08758v1 - math.NA - cs.NA - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Martin Burger, Samira Kabri, Gitta Kutyniok, Yunseok Lee, Lukas Weigand + Salvador Carri\'on, Francisco Casacuberta - Learning and Editing Universal Graph Prompt Tuning via Reinforcement Learning - https://arxiv.org/abs/2512.08763 - arXiv:2512.08763v1 Announce Type: new -Abstract: Early graph prompt tuning approaches relied on task-specific designs for Graph Neural Networks (GNNs), limiting their adaptability across diverse pre-training strategies. In contrast, another promising line of research has investigated universal graph prompt tuning, which operates directly in the input graph's feature space and builds a theoretical foundation that universal graph prompt tuning can theoretically achieve an equivalent effect of any prompting function, eliminating dependence on specific pre-training strategies. Recent works propose selective node-based graph prompt tuning to pursue more ideal prompts. However, we argue that selective node-based graph prompt tuning inevitably compromises the theoretical foundation of universal graph prompt tuning. In this paper, we strengthen the theoretical foundation of universal graph prompt tuning by introducing stricter constraints, demonstrating that adding prompts to all nodes is a necessary condition for achieving the universality of graph prompts. To this end, we propose a novel model and paradigm, Learning and Editing Universal GrAph Prompt Tuning (LEAP), which preserves the theoretical foundation of universal graph prompt tuning while pursuing more ideal prompts. 
Specifically, we first build the basic universal graph prompts to preserve the theoretical foundation and then employ actor-critic reinforcement learning to select nodes and edit prompts. Extensive experiments on graph- and node-level tasks across various pre-training strategies in both full-shot and few-shot scenarios show that LEAP consistently outperforms fine-tuning and other prompt-based approaches. - oai:arXiv.org:2512.08763v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Py-DiSMech: A Scalable and Efficient Framework for Discrete Differential Geometry-Based Modeling and Control of Soft Robots + https://arxiv.org/abs/2512.09911 + arXiv:2512.09911v1 Announce Type: new +Abstract: High-fidelity simulation has become essential to the design and control of soft robots, where large geometric deformations and complex contact interactions challenge conventional modeling tools. Recent advances in the field demand simulation frameworks that combine physical accuracy, computational scalability, and seamless integration with modern control and optimization pipelines. In this work, we present Py-DiSMech, a Python-based, open-source simulation framework for modeling and control of soft robotic structures grounded in the principles of Discrete Differential Geometry (DDG). By discretizing geometric quantities such as curvature and strain directly on meshes, Py-DiSMech captures the nonlinear deformation of rods, shells, and hybrid structures with high fidelity and reduced computational cost. 
The framework introduces (i) a fully vectorized NumPy implementation achieving order-of-magnitude speed-ups over existing geometry-based simulators; (ii) a penalty-energy-based fully implicit contact model that supports rod-rod, rod-shell, and shell-shell interactions; (iii) a natural-strain-based feedback-control module featuring a proportional-integral (PI) controller for shape regulation and trajectory tracking; and (iv) a modular, object-oriented software design enabling user-defined elastic energies, actuation schemes, and integration with machine-learning libraries. Benchmark comparisons demonstrate that Py-DiSMech substantially outperforms the state-of-the-art simulator Elastica in computational efficiency while maintaining physical accuracy. Together, these features establish Py-DiSMech as a scalable, extensible platform for simulation-driven design, control validation, and sim-to-real research in soft robotics. + oai:arXiv.org:2512.09911v1 + cs.RO + physics.comp-ph + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jinfeng Xu, Zheyu Chen, Shuo Yang, Jinze Li, Hewei Wang, Yijie Li, Edith C. H. Ngai + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Radha Lahoti, Ryan Chaiyakul, M. Khalid Jawed - Financial News Summarization: Can extractive methods still offer a true alternative to LLMs? - https://arxiv.org/abs/2512.08764 - arXiv:2512.08764v1 Announce Type: new -Abstract: Financial markets change rapidly due to news, economic shifts, and geopolitical events. Quick reactions are vital for investors to avoid losses or capture short-term gains. As a result, concise financial news summaries are critical for decision-making. With over 50,000 financial articles published daily, automation in summarization is necessary. This study evaluates a range of summarization methods, from simple extractive techniques to advanced large language models (LLMs), using the FinLLMs Challenge dataset. 
LLMs generated more coherent and informative summaries, but they are resource-intensive and prone to hallucinations, which can introduce significant errors into financial summaries. In contrast, extractive methods perform well on short, well-structured texts and offer a more efficient alternative for this type of article. The best ROUGE results come from fine-tuned LLM model like FT-Mistral-7B, although our data corpus has limited reliability, which calls for cautious interpretation. - oai:arXiv.org:2512.08764v1 - cs.CE - Wed, 10 Dec 2025 00:00:00 -0500 + NordFKB: a fine-grained benchmark dataset for geospatial AI in Norway + https://arxiv.org/abs/2512.09913 + arXiv:2512.09913v1 Announce Type: new +Abstract: We present NordFKB, a fine-grained benchmark dataset for geospatial AI in Norway, derived from the authoritative, highly accurate, national Felles KartdataBase (FKB). The dataset contains high-resolution orthophotos paired with detailed annotations for 36 semantic classes, including both per-class binary segmentation masks in GeoTIFF format and COCO-style bounding box annotations. Data is collected from seven geographically diverse areas, ensuring variation in climate, topography, and urbanization. Only tiles containing at least one annotated object are included, and training/validation splits are created through random sampling across areas to ensure representative class and context distributions. Human expert review and quality control ensures high annotation accuracy. Alongside the dataset, we release a benchmarking repository with standardized evaluation protocols and tools for semantic segmentation and object detection, enabling reproducible and comparable research. NordFKB provides a robust foundation for advancing AI methods in mapping, land administration, and spatial planning, and paves the way for future expansions in coverage, temporal scope, and data modalities. 
+ oai:arXiv.org:2512.09913v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - 10.1007/978-3-032-09037-9 - Advances in Soft Computing. MICAI 2025. Lecture Notes in Computer Science, vol 16221. Springer - Nicolas Reche, Elvys Linhares-Pontes, Juan-Manuel Torres-Moreno + Sander Riis{\o}en Jyhne, Aditya Gupta, Ben Worsley, Marianne Andersen, Ivar Oveland, Alexander Salveson Nossum - Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance - https://arxiv.org/abs/2512.08765 - arXiv:2512.08765v1 Announce Type: new -Abstract: We present Wan-Move, a simple and scalable framework that brings motion control to video generative models. Existing motion-controllable methods typically suffer from coarse control granularity and limited scalability, leaving their outputs insufficient for practical use. We narrow this gap by achieving precise and high-quality motion control. Our core idea is to directly make the original condition features motion-aware for guiding video synthesis. To this end, we first represent object motions with dense point trajectories, allowing fine-grained control over the scene. We then project these trajectories into latent space and propagate the first frame's features along each trajectory, producing an aligned spatiotemporal feature map that tells how each scene element should move. This feature map serves as the updated latent condition, which is naturally integrated into the off-the-shelf image-to-video model, e.g., Wan-I2V-14B, as motion guidance without any architecture change. It removes the need for auxiliary motion encoders and makes fine-tuning base models easily scalable. Through scaled training, Wan-Move generates 5-second, 480p videos whose motion controllability rivals Kling 1.5 Pro's commercial Motion Brush, as indicated by user studies. 
To support comprehensive evaluation, we further design MoveBench, a rigorously curated benchmark featuring diverse content categories and hybrid-verified annotations. It is distinguished by larger data volume, longer video durations, and high-quality motion annotations. Extensive experiments on MoveBench and the public dataset consistently show Wan-Move's superior motion quality. Code, models, and benchmark data are made publicly available. - oai:arXiv.org:2512.08765v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + FALCON: Few-step Accurate Likelihoods for Continuous Flows + https://arxiv.org/abs/2512.09914 + arXiv:2512.09914v1 Announce Type: new +Abstract: Scalable sampling of molecular states in thermodynamic equilibrium is a long-standing challenge in statistical physics. Boltzmann Generators tackle this problem by pairing a generative model, capable of exact likelihood computation, with importance sampling to obtain consistent samples under the target distribution. Current Boltzmann Generators primarily use continuous normalizing flows (CNFs) trained with flow matching for efficient training of powerful models. However, likelihood calculation for these models is extremely costly, requiring thousands of function evaluations per sample, severely limiting their adoption. In this work, we propose Few-step Accurate Likelihoods for Continuous Flows (FALCON), a method which allows for few-step sampling with a likelihood accurate enough for importance sampling applications by introducing a hybrid training objective that encourages invertibility. We show FALCON outperforms state-of-the-art normalizing flow models for molecular Boltzmann sampling and is two orders of magnitude faster than the equivalently performing CNF model. 
+ oai:arXiv.org:2512.09914v1 + cs.LG + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ruihang Chu, Yefei He, Zhekai Chen, Shiwei Zhang, Xiaogang Xu, Bin Xia, Dingdong Wang, Hongwei Yi, Xihui Liu, Hengshuang Zhao, Yu Liu, Yingya Zhang, Yujiu Yang + Danyal Rehman, Tara Akhound-Sadegh, Artem Gazizov, Yoshua Bengio, Alexander Tong - Data-Driven Dynamic Parameter Learning of manipulator robots - https://arxiv.org/abs/2512.08767 - arXiv:2512.08767v1 Announce Type: new -Abstract: Bridging the sim-to-real gap remains a fundamental challenge in robotics, as accurate dynamic parameter estimation is essential for reliable model-based control, realistic simulation, and safe deployment of manipulators. Traditional analytical approaches often fall short when faced with complex robot structures and interactions. Data-driven methods offer a promising alternative, yet conventional neural networks such as recurrent models struggle to capture long-range dependencies critical for accurate estimation. In this study, we propose a Transformer-based approach for dynamic parameter estimation, supported by an automated pipeline that generates diverse robot models and enriched trajectory data using Jacobian-derived features. The dataset consists of 8,192 robots with varied inertial and frictional properties. Leveraging attention mechanisms, our model effectively captures both temporal and spatial dependencies. Experimental results highlight the influence of sequence length, sampling rate, and architecture, with the best configuration (sequence length 64, 64 Hz, four layers, 32 heads) achieving a validation R2 of 0.8633. Mass and inertia are estimated with near-perfect accuracy, Coulomb friction with moderate-to-high accuracy, while viscous friction and distal link center-of-mass remain more challenging. 
These results demonstrate that combining Transformers with automated dataset generation and kinematic enrichment enables scalable, accurate dynamic parameter estimation, contributing to improved sim-to-real transfer in robotic systems.
      - oai:arXiv.org:2512.08767v1
      cs.RO
      cs.AI
      Wed, 10 Dec 2025 00:00:00 -0500
      + LISN: Language-Instructed Social Navigation with VLM-based Controller Modulating
      + https://arxiv.org/abs/2512.09920
      + arXiv:2512.09920v1 Announce Type: new
+Abstract: Towards human-robot coexistence, socially aware navigation is significant for mobile robots. Yet existing studies in this area focus mainly on path efficiency and pedestrian collision avoidance, which are essential but represent only a fraction of social navigation. Beyond these basics, robots must also comply with user instructions, aligning their actions to task goals and social norms expressed by humans. In this work, we present LISN-Bench, the first simulation-based benchmark for language-instructed social navigation. Built on Rosnav-Arena 3.0, it is the first standardized social navigation benchmark to incorporate instruction following and scene understanding across diverse contexts. To address this task, we further propose Social-Nav-Modulator, a fast-slow hierarchical system where a VLM agent modulates costmaps and controller parameters. Decoupling low-level action generation from the slower VLM loop reduces reliance on high-frequency VLM inference while improving dynamic avoidance and perception adaptability. Our method achieves an average success rate of 91.3%, which is 63% greater than that of the most competitive baseline, with most of the improvements observed in challenging tasks such as following a person in a crowd and navigating while strictly avoiding instruction-forbidden regions.
The project website is at: https://social-nav.github.io/LISN-project/ + oai:arXiv.org:2512.09920v1 cs.RO cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mohammed Elseiagy, Tsige Tadesse Alemayoh, Ranulfo Bezerra, Shotaro Kojima, Kazunori Ohno + Junting Chen, Yunchuan Li, Panfeng Jiang, Jiacheng Du, Zixuan Chen, Chenrui Tie, Jiajun Deng, Lin Shao - A Practical Guide for Designing, Developing, and Deploying Production-Grade Agentic AI Workflows - https://arxiv.org/abs/2512.08769 - arXiv:2512.08769v1 Announce Type: new -Abstract: Agentic AI marks a major shift in how autonomous systems reason, plan, and execute multi-step tasks. Unlike traditional single model prompting, agentic workflows integrate multiple specialized agents with different Large Language Models(LLMs), tool-augmented capabilities, orchestration logic, and external system interactions to form dynamic pipelines capable of autonomous decision-making and action. As adoption accelerates across industry and research, organizations face a central challenge: how to design, engineer, and operate production-grade agentic AI workflows that are reliable, observable, maintainable, and aligned with safety and governance requirements. This paper provides a practical, end-to-end guide for designing, developing, and deploying production-quality agentic AI systems. We introduce a structured engineering lifecycle encompassing workflow decomposition, multi-agent design patterns, Model Context Protocol(MCP), and tool integration, deterministic orchestration, Responsible-AI considerations, and environment-aware deployment strategies. 
We then present nine core best practices for engineering production-grade agentic AI workflows, including tool-first design over MCP, pure-function invocation, single-tool and single-responsibility agents, externalized prompt management, Responsible-AI-aligned model-consortium design, clean separation between workflow logic and MCP servers, containerized deployment for scalable operations, and adherence to the Keep it Simple, Stupid (KISS) principle to maintain simplicity and robustness. To demonstrate these principles in practice, we present a comprehensive case study: a multimodal news-analysis and media-generation workflow. By combining architectural guidance, operational patterns, and practical implementation insights, this paper offers a foundational reference to build robust, extensible, and production-ready agentic AI workflows. - oai:arXiv.org:2512.08769v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Splatent: Splatting Diffusion Latents for Novel View Synthesis + https://arxiv.org/abs/2512.09923 + arXiv:2512.09923v1 Announce Type: new +Abstract: Radiance field representations have recently been explored in the latent space of VAEs that are commonly used by diffusion models. This direction offers efficient rendering and seamless integration with diffusion-based pipelines. However, these methods face a fundamental limitation: The VAE latent space lacks multi-view consistency, leading to blurred textures and missing details during 3D reconstruction. Existing approaches attempt to address this by fine-tuning the VAE, at the cost of reconstruction quality, or by relying on pre-trained diffusion models to recover fine-grained details, at the risk of some hallucinations. We present Splatent, a diffusion-based enhancement framework designed to operate on top of 3D Gaussian Splatting (3DGS) in the latent space of VAEs. 
Our key insight departs from the conventional 3D-centric view: rather than reconstructing fine-grained details in 3D space, we recover them in 2D from input views through multi-view attention mechanisms. This approach preserves the reconstruction quality of pretrained VAEs while achieving faithful detail recovery. Evaluated across multiple benchmarks, Splatent establishes a new state-of-the-art for VAE latent radiance field reconstruction. We further demonstrate that integrating our method with existing feed-forward frameworks, consistently improves detail preservation, opening new possibilities for high-quality sparse-view 3D reconstruction. + oai:arXiv.org:2512.09923v1 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Eranga Bandara, Ross Gore, Peter Foytik, Sachin Shetty, Ravi Mukkamala, Abdul Rahman, Xueping Liang, Safdar H. Bouk, Amin Hass, Sachini Rajapakse, Ng Wee Keong, Kasun De Zoysa, Aruna Withanage, Nilaan Loganathan + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Or Hirschorn, Omer Sela, Inbar Huberman-Spiegelglas, Netalee Efrat, Eli Alshan, Ianir Ideses, Frederic Devernay, Yochai Zvik, Lior Fritz - De novo generation of functional terpene synthases using TpsGPT - https://arxiv.org/abs/2512.08772 - arXiv:2512.08772v1 Announce Type: new -Abstract: Terpene synthases (TPS) are a key family of enzymes responsible for generating the diverse terpene scaffolds that underpin many natural products, including front-line anticancer drugs such as Taxol. However, de novo TPS design through directed evolution is costly and slow. We introduce TpsGPT, a generative model for scalable TPS protein design, built by fine-tuning the protein language model ProtGPT2 on 79k TPS sequences mined from UniProt. 
TpsGPT generated de novo enzyme candidates in silico and we evaluated them using multiple validation metrics, including EnzymeExplorer classification, ESMFold structural confidence (pLDDT), sequence diversity, CLEAN classification, InterPro domain detection, and Foldseek structure alignment. From an initial pool of 28k generated sequences, we identified seven putative TPS enzymes that satisfied all validation criteria. Experimental validation confirmed TPS enzymatic activity in at least two of these sequences. Our results show that fine-tuning of a protein language model on a carefully curated, enzyme-class-specific dataset, combined with rigorous filtering, can enable the de novo generation of functional, evolutionarily distant enzymes. - oai:arXiv.org:2512.08772v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + ReViSE: Towards Reason-Informed Video Editing in Unified Models with Self-Reflective Learning + https://arxiv.org/abs/2512.09924 + arXiv:2512.09924v1 Announce Type: new +Abstract: Video unified models exhibit strong capabilities in understanding and generation, yet they struggle with reason-informed visual editing even when equipped with powerful internal vision-language models (VLMs). We attribute this gap to two factors: 1) existing datasets are inadequate for training and evaluating reasoning-aware video editing, and 2) an inherent disconnect between the models' reasoning and editing capabilities, which prevents the rich understanding from effectively instructing the editing process. Bridging this gap requires an integrated framework that connects reasoning with visual transformation. To address this gap, we introduce the Reason-Informed Video Editing (RVE) task, which requires reasoning about physical plausibility and causal dynamics during editing. To support systematic evaluation, we construct RVE-Bench, a comprehensive benchmark with two complementary subsets: Reasoning-Informed Video Editing and In-Context Video Generation. 
These subsets cover diverse reasoning dimensions and real-world editing scenarios. Building upon this foundation, we propose ReViSE, a Self-Reflective Reasoning (SRF) framework that unifies generation and evaluation within a single architecture. The model's internal VLM provides intrinsic feedback by assessing whether the edited video logically satisfies the given instruction. This differential feedback refines the generator's reasoning behavior during training. Extensive experiments on RVE-Bench demonstrate that ReViSE significantly enhances editing accuracy and visual fidelity, achieving a 32% improvement in the Overall score in the reasoning-informed video editing subset over state-of-the-art methods.
      + oai:arXiv.org:2512.09924v1
      cs.CV
      Thu, 11 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Hamsini Ramanathan, Roman Bushuiev, Matou\v{s} Sold\'at, Jir\'i Kohout, T\'eo Hebra, Joshua David Smith, Josef Sivic, Tom\'a\v{s} Pluskal


      Xinyu Liu, Hangjie Yuan, Yujie Wei, Jiazheng Xing, Yujin Han, Jiahao Pan, Yanbiao Ma, Chi-Min Chan, Kang Zhao, Shiwei Zhang, Wenhan Luo, Yike Guo


      Refining Visual Artifacts in Diffusion Models via Explainable AI-based Flaw Activation Maps
      https://arxiv.org/abs/2512.08774
      arXiv:2512.08774v1 Announce Type: new
Abstract: Diffusion models have achieved remarkable success in image synthesis. However, addressing artifacts and unrealistic regions remains a critical challenge. We propose self-refining diffusion, a novel framework that enhances image generation quality by detecting these flaws. The framework employs an explainable artificial intelligence (XAI)-based flaw highlighter to produce flaw activation maps (FAMs) that identify artifacts and unrealistic regions. These FAMs improve reconstruction quality by amplifying noise in flawed regions during the forward process and by focusing on these regions during the reverse process.
The proposed approach achieves up to a 27.3% improvement in Fr\'echet inception distance across various diffusion-based models, demonstrating consistently strong performance on diverse datasets. It also shows robust effectiveness across different tasks, including image generation, text-to-image generation, and inpainting. These results demonstrate that explainable AI techniques can extend beyond interpretability to actively contribute to image refinement. The proposed framework offers a versatile and effective approach applicable to various diffusion models and tasks, significantly advancing the field of image synthesis. - oai:arXiv.org:2512.08774v1 + GAINS: Gaussian-based Inverse Rendering from Sparse Multi-View Captures + https://arxiv.org/abs/2512.09925 + arXiv:2512.09925v1 Announce Type: new +Abstract: Recent advances in Gaussian Splatting-based inverse rendering extend Gaussian primitives with shading parameters and physically grounded light transport, enabling high-quality material recovery from dense multi-view captures. However, these methods degrade sharply under sparse-view settings, where limited observations lead to severe ambiguity between geometry, reflectance, and lighting. We introduce GAINS (Gaussian-based Inverse rendering from Sparse multi-view captures), a two-stage inverse rendering framework that leverages learning-based priors to stabilize geometry and material estimation. GAINS first refines geometry using monocular depth/normal and diffusion priors, then employs segmentation, intrinsic image decomposition (IID), and diffusion priors to regularize material recovery. Extensive experiments on synthetic and real-world datasets show that GAINS significantly improves material parameter accuracy, relighting quality, and novel-view synthesis compared to state-of-the-art Gaussian-based inverse rendering methods, especially under sparse-view settings. 
Project page: https://patrickbail.github.io/gains/ + oai:arXiv.org:2512.09925v1 cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Seoyeon Lee, Gwangyeol Yu, Chaewon Kim, Jonghyuk Park + http://creativecommons.org/licenses/by/4.0/ + Patrick Noras, Jun Myeong Choi, Didier Stricker, Pieter Peers, Roni Sengupta - Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages - https://arxiv.org/abs/2512.08777 - arXiv:2512.08777v1 Announce Type: new -Abstract: We propose a post-training method for lower-resource languages that preserves fluency of language models even when aligned by disfluent reward models. Preference-optimization is now a well-researched topic, but previous work has mostly addressed models for English and Chinese. Lower-resource languages lack both datasets written by native speakers and language models capable of generating fluent synthetic data. Thus, in this work, we focus on developing a fluent preference-aligned language model without any instruction-tuning data in the target language. Our approach uses an on-policy training method, which we compare with two common approaches: supervised finetuning on machine-translated data and multilingual finetuning. We conduct a case study on Norwegian Bokm{\aa}l and evaluate fluency through native-speaker assessments. The results show that the on-policy aspect is crucial and outperforms the alternatives without relying on any hard-to-obtain data. - oai:arXiv.org:2512.08777v1 - cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Token Expand-Merge: Training-Free Token Compression for Vision-Language-Action Models + https://arxiv.org/abs/2512.09927 + arXiv:2512.09927v1 Announce Type: new +Abstract: Vision-Language-Action (VLA) models pretrained on large-scale multimodal datasets have emerged as powerful foundations for robotic perception and control. 
However, their massive scale, often billions of parameters, poses significant challenges for real-time deployment, as inference becomes computationally expensive and latency-sensitive in dynamic environments. To address this, we propose Token Expand-and-Merge-VLA (TEAM-VLA), a training-free token compression framework that accelerates VLA inference while preserving task performance. TEAM-VLA introduces a dynamic token expansion mechanism that identifies and samples additional informative tokens in the spatial vicinity of attention-highlighted regions, enhancing contextual completeness. These expanded tokens are then selectively merged in deeper layers under action-aware guidance, effectively reducing redundancy while maintaining semantic coherence. By coupling expansion and merging within a single feed-forward pass, TEAM-VLA achieves a balanced trade-off between efficiency and effectiveness, without any retraining or parameter updates. Extensive experiments on the LIBERO benchmark demonstrate that TEAM-VLA consistently improves inference speed while maintaining or even surpassing the task success rate of full VLA models. The code is publicly available at \href{https://github.com/Jasper-aaa/TEAM-VLA}{https://github.com/Jasper-aaa/TEAM-VLA}
      + oai:arXiv.org:2512.09927v1
      cs.RO
      Thu, 11 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      David Samuel, Lilja {\O}vrelid, Erik Velldal, Andrey Kutuzov


      Yifan Ye, Jiaqi Ma, Jun Cen, Zhihe Lu


      An Explainable AI Model for the Detecting Malicious Smart Contracts Based on EVM Opcode Based Features
      https://arxiv.org/abs/2512.08782
      arXiv:2512.08782v1 Announce Type: new
Abstract: Hackers may create malicious Solidity programs and deploy them on the Ethereum blockchain. These malicious smart contracts try to attack legitimate programs by exploiting their vulnerabilities, such as reentrancy, tx.origin attacks, bad randomness, delegatecall, and so on. This may lead to draining of funds, denial of service, and so on.
Hence, it is necessary to identify and prevent malicious smart contracts before they are deployed on the blockchain. In this paper, we propose an ML-based malicious smart contract detection mechanism that analyzes EVM opcodes. After balancing the opcode frequency dataset with the SMOTE algorithm, we transformed opcode frequencies into binary values (0,1) using an entropy-based supervised binning method. Then, an explainable AI model is trained on the proposed binary opcode-based features. From our implementation, we found that the proposed mechanism can detect 99% of malicious smart contracts with a false positive rate of only 0.01. Finally, we incorporated the LIME algorithm into our classifier to justify its predictions. We found that the LIME algorithm can explain why a particular smart contract is declared malicious by our ML classifier based on the binary values of EVM opcodes.
      - oai:arXiv.org:2512.08782v1
      cs.CR
      Wed, 10 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Roopak Surendran


      LoFA: Learning to Predict Personalized Priors for Fast Adaptation of Visual Generative Models
      https://arxiv.org/abs/2512.08785
      arXiv:2512.08785v1 Announce Type: new
Abstract: Personalizing visual generative models to meet specific user needs has gained increasing attention, yet current methods like Low-Rank Adaptation (LoRA) remain impractical due to their demand for task-specific data and lengthy optimization. While a few hypernetwork-based approaches attempt to predict adaptation weights directly, they struggle to map fine-grained user prompts to complex LoRA distributions, limiting their practical applicability. To bridge this gap, we propose LoFA, a general framework that efficiently predicts personalized priors for fast model adaptation. We first identify a key property of LoRA: structured distribution patterns emerge in the relative changes between LoRA and base model parameters.
Building on this, we design a two-stage hypernetwork: first predicting relative distribution patterns that capture key adaptation regions, then using these to guide final LoRA weight prediction. Extensive experiments demonstrate that our method consistently predicts high-quality personalized priors within seconds, across multiple tasks and user prompts, even outperforming conventional LoRA that requires hours of processing. Project page: https://jaeger416.github.io/lofa/. - oai:arXiv.org:2512.08785v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + HiF-VLA: Hindsight, Insight and Foresight through Motion Representation for Vision-Language-Action Models + https://arxiv.org/abs/2512.09928 + arXiv:2512.09928v1 Announce Type: new +Abstract: Vision-Language-Action (VLA) models have recently enabled robotic manipulation by grounding visual and linguistic cues into actions. However, most VLAs assume the Markov property, relying only on the current observation and thus suffering from temporal myopia that degrades long-horizon coherence. In this work, we view motion as a more compact and informative representation of temporal context and world dynamics, capturing inter-state changes while filtering static pixel-level noise. Building on this idea, we propose HiF-VLA (Hindsight, Insight, and Foresight for VLAs), a unified framework that leverages motion for bidirectional temporal reasoning. HiF-VLA encodes past dynamics through hindsight priors, anticipates future motion via foresight reasoning, and integrates both through a hindsight-modulated joint expert to enable a ''think-while-acting'' paradigm for long-horizon manipulation. As a result, HiF-VLA surpasses strong baselines on LIBERO-Long and CALVIN ABC-D benchmarks, while incurring negligible additional inference latency. Furthermore, HiF-VLA achieves substantial improvements in real-world long-horizon manipulation tasks, demonstrating its broad effectiveness in practical robotic settings. 
+ oai:arXiv.org:2512.09928v1 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yiming Hao, Mutian Xu, Chongjie Ye, Jie Qin, Shunlin Lu, Yipeng Qin, Xiaoguang Han + Minghui Lin, Pengxiang Ding, Shu Wang, Zifeng Zhuang, Yang Liu, Xinyang Tong, Wenxuan Song, Shangke Lyu, Siteng Huang, Donglin Wang - A Systematic Evaluation of Preference Aggregation in Federated RLHF for Pluralistic Alignment of LLMs - https://arxiv.org/abs/2512.08786 - arXiv:2512.08786v1 Announce Type: new -Abstract: This paper addresses the challenge of aligning large language models (LLMs) with diverse human preferences within federated learning (FL) environments, where standard methods often fail to adequately represent diverse viewpoints. We introduce a comprehensive evaluation framework that systematically assesses the trade-off between alignment quality and fairness when using different aggregation strategies for human preferences. In our federated setting, each group locally evaluates rollouts and produces reward signals, and the server aggregates these group-level rewards without accessing any raw data. Specifically, we evaluate standard reward aggregation techniques (min, max, and average) and introduce a novel adaptive scheme that dynamically adjusts preference weights based on a group's historical alignment performance. Our experiments on question-answering (Q/A) tasks using a PPO-based RLHF pipeline demonstrate that our adaptive approach consistently achieves superior fairness while maintaining competitive alignment scores. This work offers a robust methodology for evaluating LLM behavior across diverse populations and provides a practical solution for developing truly pluralistic and fairly aligned models. 
- oai:arXiv.org:2512.08786v1 - cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Closing the Train-Test Gap in World Models for Gradient-Based Planning + https://arxiv.org/abs/2512.09929 + arXiv:2512.09929v1 Announce Type: new +Abstract: World models paired with model predictive control (MPC) can be trained offline on large-scale datasets of expert trajectories and enable generalization to a wide range of planning tasks at inference time. Compared to traditional MPC procedures, which rely on slow search algorithms or on iteratively solving optimization problems exactly, gradient-based planning offers a computationally efficient alternative. However, the performance of gradient-based planning has thus far lagged behind that of other approaches. In this paper, we propose improved methods for training world models that enable efficient gradient-based planning. We begin with the observation that although a world model is trained on a next-state prediction objective, it is used at test-time to instead estimate a sequence of actions. The goal of our work is to close this train-test gap. To that end, we propose train-time data synthesis techniques that enable significantly improved gradient-based planning with existing world models. At test time, our approach outperforms or matches the classical gradient-free cross-entropy method (CEM) across a variety of object manipulation and navigation tasks in 10% of the time budget. + oai:arXiv.org:2512.09929v1 + cs.LG + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Mahmoud Srewa, Tianyu Zhao, Salma Elmalaki - - - Exploring the Grassroots Understanding and Practices of Collective Memory Co-Contribution in a University Community - https://arxiv.org/abs/2512.08787 - arXiv:2512.08787v1 Announce Type: new -Abstract: Collective memory -- community members' interconnected memories and impressions of the group -- is essential to the community's culture and identity. 
Its development requires members' continuous participatory contribution and sensemaking. However, existing works mainly adopt a holistic sociological perspective to analyze well-developed collective memory, less focusing on member-level conceptualization of this possession or what the co-contribution practices can be. Therefore, this work alternatively adopts the latter perspective and probes such interpretative and interactional patterns with two mobile systems. With one being a locative narrative and exploration system condensed from existing literature's design frameworks, and the other being a conventional online forum representing current practices, they served as the anchors of observation for our two-week, mixed-methods field study (n=38) on a university campus. A core debate we have identified was to retrospectively contemplate or document the presence as a history for the future. This also subsequently impacted the narrative focuses, expectations of collective memory constituents, and the ways participants seek inspiration from the group. We further extracted design considerations that could better embrace the diverse conceptualizations of collective memory and bond different community members together. Lastly, revisiting and reflecting on our design, we provided extra insights on designing devoted locative narrative experiences for community-driven UGC platforms. - oai:arXiv.org:2512.08787v1 - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-sa/4.0/ - Zeyu Huang, Xinyi Cao, Yue Deng, Junze Li, Kangyu Yuan, Xiaojuan Ma + Arjun Parthasarathy, Nimit Kalra, Rohun Agrawal, Yann LeCun, Oumayma Bounou, Pavel Izmailov, Micah Goldblum - MatteViT: High-Frequency-Aware Document Shadow Removal with Shadow Matte Guidance - https://arxiv.org/abs/2512.08789 - arXiv:2512.08789v1 Announce Type: new -Abstract: Document shadow removal is essential for enhancing the clarity of digitized documents. 
Preserving high-frequency details (e.g., text edges and lines) is critical in this process because shadows often obscure or distort fine structures. This paper proposes a matte vision transformer (MatteViT), a novel shadow removal framework that applies spatial and frequency-domain information to eliminate shadows while preserving fine-grained structural details. To effectively retain these details, we employ two preservation strategies. First, our method introduces a lightweight high-frequency amplification module (HFAM) that decomposes and adaptively amplifies high-frequency components. Second, we present a continuous luminance-based shadow matte, generated using a custom-built matte dataset and shadow matte generator, which provides precise spatial guidance from the earliest processing stage. These strategies enable the model to accurately identify fine-grained regions and restore them with high fidelity. Extensive experiments on public benchmarks (RDD and Kligler) demonstrate that MatteViT achieves state-of-the-art performance, providing a robust and practical solution for real-world document shadow removal. Furthermore, the proposed method better preserves text-level details in downstream tasks, such as optical character recognition, improving recognition performance over prior methods. - oai:arXiv.org:2512.08789v1 + Controlling Steering Angle for Cooperative Self-driving Vehicles utilizing CNN and LSTM-based Deep Networks + https://arxiv.org/abs/1904.04375 + arXiv:1904.04375v3 Announce Type: cross +Abstract: A fundamental challenge in autonomous vehicles is adjusting the steering angle at different road conditions. Recent state-of-the-art solutions addressing this challenge include deep learning techniques as they provide end-to-end solution to predict steering angles directly from the raw input images with higher accuracy. Most of these works ignore the temporal dependencies between the image frames. 
In this paper, we tackle the problem of utilizing multiple sets of images shared between two autonomous vehicles to improve the accuracy of controlling the steering angle by considering the temporal dependencies between the image frames. This problem has not been studied in the literature widely. We present and study a new deep architecture to predict the steering angle automatically by using Long-Short-Term-Memory (LSTM) in our deep architecture. Our deep architecture is an end-to-end network that utilizes CNN, LSTM and fully connected (FC) layers and it uses both present and futures images (shared by a vehicle ahead via Vehicle-to-Vehicle (V2V) communication) as input to control the steering angle. Our model demonstrates the lowest error when compared to the other existing approaches in the literature. + oai:arXiv.org:1904.04375v3 cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chaewon Kim, Seoyeon Lee, Jonghyuk Park - - - Can TabPFN Compete with GNNs for Node Classification via Graph Tabularization? - https://arxiv.org/abs/2512.08798 - arXiv:2512.08798v1 Announce Type: new -Abstract: Foundation models pretrained on large data have demonstrated remarkable zero-shot generalization capabilities across domains. Building on the success of TabPFN for tabular data and its recent extension to time series, we investigate whether graph node classification can be effectively reformulated as a tabular learning problem. We introduce TabPFN-GN, which transforms graph data into tabular features by extracting node attributes, structural properties, positional encodings, and optionally smoothed neighborhood features. This enables TabPFN to perform direct node classification without any graph-specific training or language model dependencies. 
Our experiments on 12 benchmark datasets reveal that TabPFN-GN achieves competitive performance with GNNs on homophilous graphs and consistently outperforms them on heterophilous graphs. These results demonstrate that principled feature engineering can bridge the gap between tabular and graph domains, providing a practical alternative to task-specific GNN training and LLM-dependent graph foundation models. - oai:arXiv.org:2512.08798v1 cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jeongwhan Choi, Woosung Kang, Minseo Kim, Jongwoo Kim, Noseong Park + 10.1109/IVS.2019.8814260 + Rodolfo Valiente, Mahdi Zaman, Sedat Ozer, Yaser P. Fallah - Democratizing ML for Enterprise Security: A Self-Sustained Attack Detection Framework - https://arxiv.org/abs/2512.08802 - arXiv:2512.08802v1 Announce Type: new -Abstract: Despite advancements in machine learning for security, rule-based detection remains prevalent in Security Operations Centers due to the resource intensiveness and skill gap associated with ML solutions. While traditional rule-based methods offer efficiency, their rigidity leads to high false positives or negatives and requires continuous manual maintenance. This paper proposes a novel, two-stage hybrid framework to democratize ML-based threat detection. The first stage employs intentionally loose YARA rules for coarse-grained filtering, optimized for high recall. The second stage utilizes an ML classifier to filter out false positives from the first stage's output. To overcome data scarcity, the system leverages Simula, a seedless synthetic data generation framework, enabling security analysts to create high-quality training datasets without extensive data science expertise or pre-labeled examples. A continuous feedback loop incorporates real-time investigation results to adaptively tune the ML model, preventing rule degradation. 
This proposed model with active learning has been rigorously tested for a prolonged time in a production environment spanning tens of thousands of systems. The system handles initial raw log volumes often reaching 250 billion events per day, significantly reducing them through filtering and ML inference to a handful of daily tickets for human investigation. Live experiments over an extended timeline demonstrate a general improvement in the model's precision over time due to the active learning feature. This approach offers a self-sustained, low-overhead, and low-maintenance solution, allowing security professionals to guide model learning as expert ``teachers''.
 - oai:arXiv.org:2512.08802v1
 - cs.CR
 cs.AI
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Sadegh Momeni, Ge Zhang, Birkett Huber, Hamza Harkous, Sam Lipton, Benoit Seguin, Yanis Pavlidis
 
 
 Altruistic Maneuver Planning for Cooperative Autonomous Vehicles Using Multi-agent Advantage Actor-Critic
 https://arxiv.org/abs/2107.05664
 arXiv:2107.05664v1 Announce Type: cross
+Abstract: With the adoption of autonomous vehicles on our roads, we will witness a mixed-autonomy environment where autonomous and human-driven vehicles must learn to co-exist by sharing the same road infrastructure. To attain socially-desirable behaviors, autonomous vehicles must be instructed to consider the utility of other vehicles around them in their decision-making process. Particularly, we study the maneuver planning problem for autonomous vehicles and investigate how a decentralized reward structure can induce altruism in their behavior and incentivize them to account for the interest of other autonomous and human-driven vehicles. This is a challenging problem due to the ambiguity of a human driver's willingness to cooperate with an autonomous vehicle. Thus, in contrast with the existing works which rely on behavior models of human drivers, we take an end-to-end approach and let the autonomous agents implicitly learn the decision-making process of human drivers only from experience. 
We introduce a multi-agent variant of the synchronous Advantage Actor-Critic (A2C) algorithm and train agents that coordinate with each other and can affect the behavior of human drivers to improve traffic flow and safety. + oai:arXiv.org:2107.05664v1 + cs.RO cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Sadegh Momeni, Ge Zhang, Birkett Huber, Hamza Harkous, Sam Lipton, Benoit Seguin, Yanis Pavlidis + Behrad Toghi, Rodolfo Valiente, Dorsa Sadigh, Ramtin Pedarsani, Yaser P. Fallah - Identifying counterfactual probabilities using bivariate distributions and uplift modeling - https://arxiv.org/abs/2512.08805 - arXiv:2512.08805v1 Announce Type: new -Abstract: Uplift modeling estimates the causal effect of an intervention as the difference between potential outcomes under treatment and control, whereas counterfactual identification aims to recover the joint distribution of these potential outcomes (e.g., "Would this customer still have churned had we given them a marketing offer?"). This joint counterfactual distribution provides richer information than the uplift but is harder to estimate. However, the two approaches are synergistic: uplift models can be leveraged for counterfactual estimation. We propose a counterfactual estimator that fits a bivariate beta distribution to predicted uplift scores, yielding posterior distributions over counterfactual outcomes. Our approach requires no causal assumptions beyond those of uplift modeling. Simulations show the efficacy of the approach, which can be applied, for example, to the problem of customer churn in telecom, where it reveals insights unavailable to standard ML or uplift models alone. 
- oai:arXiv.org:2512.08805v1 + Robustness and Adaptability of Reinforcement Learning based Cooperative Autonomous Driving in Mixed-autonomy Traffic + https://arxiv.org/abs/2202.00881 + arXiv:2202.00881v1 Announce Type: cross +Abstract: Building autonomous vehicles (AVs) is a complex problem, but enabling them to operate in the real world where they will be surrounded by human-driven vehicles (HVs) is extremely challenging. Prior works have shown the possibilities of creating inter-agent cooperation between a group of AVs that follow a social utility. Such altruistic AVs can form alliances and affect the behavior of HVs to achieve socially desirable outcomes. We identify two major challenges in the co-existence of AVs and HVs. First, social preferences and individual traits of a given human driver, e.g., selflessness and aggressiveness are unknown to an AV, and it is almost impossible to infer them in real-time during a short AV-HV interaction. Second, contrary to AVs that are expected to follow a policy, HVs do not necessarily follow a stationary policy and therefore are extremely hard to predict. To alleviate the above-mentioned challenges, we formulate the mixed-autonomy problem as a multi-agent reinforcement learning (MARL) problem and propose a decentralized framework and reward function for training cooperative AVs. Our approach enables AVs to learn the decision-making of HVs implicitly from experience, optimizes for a social utility while prioritizing safety and allowing adaptability; robustifying altruistic AVs to different human behaviors and constraining them to a safe action space. Finally, we investigate the robustness, safety and sensitivity of AVs to various HVs behavioral traits and present the settings in which the AVs can learn cooperative policies that are adaptable to different situations. 
+ oai:arXiv.org:2202.00881v1 + cs.RO cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Th\'eo Verhelst, Gianluca Bontempi - - - WikIPedia: Unearthing a 20-Year History of IPv6 Client Addressing - https://arxiv.org/abs/2512.08808 - arXiv:2512.08808v1 Announce Type: new -Abstract: Due to their article editing policies, Wikimedia sites like Wikipedia have become inadvertent time capsules for IPv6 addresses. When Wikimedia users make edits without signing into an account, their IP addresses are used in lieu of a username. Wikimedia site dumps therefore provide researchers with over two decades worth of timestamped client IPv6 addresses to understand address assignments and how they have changed over time and space. - In this work, we extract 19M unique IPv6 addresses from Wikimedia sites like Wikipedia that were used by editors from 2003 to 2024. We use these addresses to understand the prevalence of IPv6 in countries corresponding to Wikimedia site languages, how IPv6 adoption has grown over time, and the prevalence of EUI-64 addressing on client devices like desktops, laptops, and mobile phones. - oai:arXiv.org:2512.08808v1 - cs.NI - cs.SI - Wed, 10 Dec 2025 00:00:00 -0500 - new + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Erik Rye, Dave Levin + Rodolfo Valiente, Behrad Toghi, Ramtin Pedarsani, Yaser P. Fallah - PrivTune: Efficient and Privacy-Preserving Fine-Tuning of Large Language Models via Device-Cloud Collaboration - https://arxiv.org/abs/2512.08809 - arXiv:2512.08809v1 Announce Type: new -Abstract: With the rise of large language models, service providers offer language models as a service, enabling users to fine-tune customized models via uploaded private datasets. However, this raises concerns about sensitive data leakage. 
Prior methods, relying on differential privacy within device-cloud collaboration frameworks, struggle to balance privacy and utility, exposing users to inference attacks or degrading fine-tuning performance. To address this, we propose PrivTune, an efficient and privacy-preserving fine-tuning framework via Split Learning (SL). The key idea of PrivTune is to inject crafted noise into token representations from the SL bottom model, making each token resemble the $n$-hop indirect neighbors. PrivTune formulates this as an optimization problem to compute the optimal noise vector, aligning with defense-utility goals. On this basis, it then adjusts the parameters (i.e., mean) of the $d_\chi$-Privacy noise distribution to align with the optimization direction and scales the noise according to token importance to minimize distortion. Experiments on five datasets (covering both classification and generation tasks) against three embedding inversion and three attribute inference attacks show that, using RoBERTa on the Stanford Sentiment Treebank dataset, PrivTune reduces the attack success rate to 10% with only a 3.33% drop in utility performance, outperforming state-of-the-art baselines.
 - oai:arXiv.org:2512.08809v1
 - cs.CR
 cs.AI
 cs.LG
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Yi Liu, Weixiang Han, Chengjun Cai, Xingliang Yuan, Cong Wang
 
 
 Prediction-aware and Reinforcement Learning based Altruistic Cooperative Driving
 https://arxiv.org/abs/2211.10585
 arXiv:2211.10585v1 Announce Type: cross
+Abstract: Autonomous vehicle (AV) navigation in the presence of Human-driven vehicles (HVs) is challenging, as HVs continuously update their policies in response to AVs. In order to navigate safely in the presence of complex AV-HV social interactions, the AVs must learn to predict these changes. Humans are capable of navigating such challenging social interaction settings because of their intrinsic knowledge about other agents' behaviors, which they use to forecast what might happen in the future. 
Inspired by humans, we provide our AVs with the capability of anticipating future states and leveraging prediction in a cooperative reinforcement learning (RL) decision-making framework, to improve safety and robustness. In this paper, we propose an integration of two essential and earlier-presented components of AVs: social navigation and prediction. We formulate the AV decision-making process as an RL problem and seek to obtain optimal policies that produce socially beneficial results utilizing a prediction-aware planning and social-aware optimization RL framework. We also propose a Hybrid Predictive Network (HPN) that anticipates future observations. The HPN is used in a multi-step prediction chain to compute a window of predicted future observations to be used by the value function network (VFN). Finally, a safe VFN is trained to optimize a social utility using a sequence of previous and predicted observations, and a safety prioritizer is used to leverage the interpretable kinematic predictions to mask the unsafe actions, constraining the RL policy. We compare our prediction-aware AV to state-of-the-art solutions and demonstrate performance improvements in terms of efficiency and safety in multiple simulated scenarios.
 + oai:arXiv.org:2211.10585v1
 cs.RO
 cs.AI
 Thu, 11 Dec 2025 00:00:00 -0500
 cross
 http://creativecommons.org/licenses/by/4.0/
 Rodolfo Valiente, Mahdi Razzaghpour, Behrad Toghi, Ghayoor Shah, Yaser P. Fallah
 
 
 Multicalibration for LLM-based Code Generation
 https://arxiv.org/abs/2512.08810
 arXiv:2512.08810v1 Announce Type: new
-Abstract: As AI-based code generation becomes widespread, researchers are investigating the calibration of code LLMs - ensuring their confidence scores faithfully represent the true likelihood of code correctness. 
To do so, we investigate multicalibration, which can capture additional factors about a coding problem, such as complexity, code length, or programming language used. We study four multicalibration approaches on three function synthesis benchmarks, using latest-generation code LLMs (Qwen3 Coder, GPT-OSS, DeepSeek-R1-Distill). Our results demonstrate that multicalibration can yield distinct improvements over both uncalibrated token likelihoods (+1.03 in skill score) and baseline calibrations (+0.37 in skill score). We study the influence of the aforementioned factors in ablations, and make our dataset (consisting of code generations, likelihoods, and correctness labels) available for future research on code LLM calibration.
 - oai:arXiv.org:2512.08810v1
 cs.SE
 cs.AI
 cs.LG
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-sa/4.0/
 Viola Campos, Robin Kuschnereit, Adrian Ulges
 
 
 Learning-based social coordination to improve safety and robustness of cooperative autonomous vehicles in mixed traffic
 https://arxiv.org/abs/2211.11963
 arXiv:2211.11963v1 Announce Type: cross
+Abstract: It is expected that autonomous vehicles (AVs) and heterogeneous human-driven vehicles (HVs) will coexist on the same road. The safety and reliability of AVs will depend on their social awareness and their ability to engage in complex social interactions in a socially accepted manner. However, AVs are still inefficient in terms of cooperating with HVs and struggle to understand and adapt to human behavior, which is particularly challenging in mixed autonomy. On a road shared by AVs and HVs, the social preferences or individual traits of HVs are unknown to the AVs. Different from AVs, which are expected to follow a policy, HVs are particularly difficult to forecast since they do not necessarily follow a stationary policy. 
To address these challenges, we frame the mixed-autonomy problem as a multi-agent reinforcement learning (MARL) problem and propose an approach that allows AVs to learn the decision-making of HVs implicitly from experience, account for all vehicles' interests, and safely adapt to other traffic situations. In contrast with existing works, we quantify AVs' social preferences and propose a distributed reward structure that introduces altruism into their decision-making process, allowing the altruistic AVs to learn to establish coalitions and influence the behavior of HVs. + oai:arXiv.org:2211.11963v1 + cs.RO cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Viola Campos, Robin Kuschnereit, Adrian Ulges - - - Emovectors: assessing emotional content in jazz improvisations for creativity evaluation - https://arxiv.org/abs/2512.08812 - arXiv:2512.08812v1 Announce Type: new -Abstract: Music improvisation is fascinating to study, being essentially a live demonstration of a creative process. In jazz, musicians often improvise across predefined chord progressions (leadsheets). How do we assess the creativity of jazz improvisations? And can we capture this in automated metrics for creativity for current LLM-based generative systems? Demonstration of emotional involvement is closely linked with creativity in improvisation. Analysing musical audio, can we detect emotional involvement? This study hypothesises that if an improvisation contains more evidence of emotion-laden content, it is more likely to be recognised as creative. An embeddings-based method is proposed for capturing the emotional content in musical improvisations, using a psychologically-grounded classification of musical characteristics associated with emotions. Resulting 'emovectors' are analysed to test the above hypothesis, comparing across multiple improvisations. 
Capturing emotional content in this quantifiable way can contribute towards new metrics for creativity evaluation that can be applied at scale. - oai:arXiv.org:2512.08812v1 - cs.SD - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Anna Jordanous + Rodolfo Valiente, Behrad Toghi, Mahdi Razzaghpour, Ramtin Pedarsani, Yaser P. Fallah - Heterogeneity in Multi-Robot Environmental Monitoring for Resolving Time-Conflicting Tasks - https://arxiv.org/abs/2512.08813 - arXiv:2512.08813v1 Announce Type: new -Abstract: Multi-robot systems performing continuous tasks face a performance trade-off when interrupted by urgent, time-critical sub-tasks. We investigate this trade-off in a scenario where a team must balance area patrolling with locating an anomalous radio signal. To address this trade-off, we evaluate both behavioral heterogeneity through agent role specialization ("patrollers" and "searchers") and sensing heterogeneity (i.e., only the searchers can sense the radio signal). Through simulation, we identify the Pareto-optimal trade-offs under varying team compositions, with behaviorally heterogeneous teams demonstrating the most balanced trade-offs in the majority of cases. When sensing capability is restricted, heterogeneous teams with half of the sensing-capable agents perform comparably to homogeneous teams, providing cost-saving rationale for restricting sensor payload deployment. Our findings demonstrate that pre-deployment role and sensing specialization are powerful design considerations for multi-robot systems facing time-conflicting tasks, where varying the degree of behavioral heterogeneity can tune system performance toward either task. 
- oai:arXiv.org:2512.08813v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 - new + Online Inference of Constrained Optimization: Primal-Dual Optimality and Sequential Quadratic Programming + https://arxiv.org/abs/2512.08948 + arXiv:2512.08948v1 Announce Type: cross +Abstract: We study online statistical inference for the solutions of stochastic optimization problems with equality and inequality constraints. Such problems are prevalent in statistics and machine learning, encompassing constrained $M$-estimation, physics-informed models, safe reinforcement learning, and algorithmic fairness. We develop a stochastic sequential quadratic programming (SSQP) method to solve these problems, where the step direction is computed by sequentially performing a quadratic approximation of the objective and a linear approximation of the constraints. Despite having access to unbiased estimates of population gradients, a key challenge in constrained stochastic problems lies in dealing with the bias in the step direction. As such, we apply a momentum-style gradient moving-average technique within SSQP to debias the step. We show that our method achieves global almost-sure convergence and exhibits local asymptotic normality with an optimal primal-dual limiting covariance matrix in the sense of H\'ajek and Le Cam. In addition, we provide a plug-in covariance matrix estimator for practical inference. To our knowledge, the proposed SSQP method is the first fully online method that attains primal-dual asymptotic minimax optimality without relying on projection operators onto the constraint set, which are generally intractable for nonlinear problems. 
Through extensive experiments on benchmark nonlinear problems, as well as on constrained generalized linear models and portfolio allocation problems using both synthetic and real data, we demonstrate superior performance of our method, showing that the method and its asymptotic behavior not only solve constrained stochastic problems efficiently but also provide valid and practical online inference in real-world applications. + oai:arXiv.org:2512.08948v1 + stat.ML + cs.LG + math.OC + math.ST + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Connor York, Zachary R Madin, Paul O'Dowd, Edmund R Hunt + Yihang Gao, Michael K. Ng, Michael W. Mahoney, Sen Na - Ask, Answer, and Detect: Role-Playing LLMs for Personality Detection with Question-Conditioned Mixture-of-Experts - https://arxiv.org/abs/2512.08814 - arXiv:2512.08814v1 Announce Type: new -Abstract: Understanding human personality is crucial for web applications such as personalized recommendation and mental health assessment. Existing studies on personality detection predominantly adopt a "posts -> user vector -> labels" modeling paradigm, which encodes social media posts into user representations for predicting personality labels (e.g., MBTI labels). While recent advances in large language models (LLMs) have improved text encoding capacities, these approaches remain constrained by limited supervision signals due to label scarcity, and under-specified semantic mappings between user language and abstract psychological constructs. We address these challenges by proposing ROME, a novel framework that explicitly injects psychological knowledge into personality detection. Inspired by standardized self-assessment tests, ROME leverages LLMs' role-play capability to simulate user responses to validated psychometric questionnaires. 
These generated question-level answers transform free-form user posts into interpretable, questionnaire-grounded evidence linking linguistic cues to personality labels, thereby providing rich intermediate supervision to mitigate label scarcity while offering a semantic reasoning chain that guides and simplifies the text-to-personality mapping learning. A question-conditioned Mixture-of-Experts module then jointly routes over post and question representations, learning to answer questionnaire items under explicit supervision. The predicted answers are summarized into an interpretable answer vector and fused with the user representation for final prediction within a multi-task learning framework, where question answering serves as a powerful auxiliary task for personality detection. Extensive experiments on two real-world datasets demonstrate that ROME consistently outperforms state-of-the-art baselines, achieving improvements (15.41% on Kaggle dataset). - oai:arXiv.org:2512.08814v1 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yifan Lyu, Liang Zhang + Multivariate time series prediction using clustered echo state network + https://arxiv.org/abs/2512.08963 + arXiv:2512.08963v1 Announce Type: cross +Abstract: Many natural and physical processes can be understood by analyzing multiple system variables evolving, forming a multivariate time series. Predicting such time series is challenging due to the inherent noise and interdependencies among variables. Echo state networks (ESNs), a class of Reservoir Computing (RC) models, offer an efficient alternative to conventional recurrent neural networks by training only the output weights while keeping the reservoir dynamics fixed, reducing computational complexity. We propose a clustered ESNs (CESNs) that enhances the ability to model and predict multivariate time series by organizing the reservoir nodes into clusters, each corresponding to a distinct input variable. 
Input signals are directly mapped to their associated clusters, and intra-cluster connections remain dense while inter-cluster connections are sparse, mimicking the modular architecture of biological neural networks. This architecture improves information processing by limiting cross-variable interference and enhances computational efficiency through independent cluster-wise training via ridge regression. We further explore different reservoir topologies, including ring, Erd\H{o}s-R\'enyi (ER), and scale-free (SF) networks, to evaluate their impact on predictive performance. Our algorithm works well across diverse real-world datasets such as the stock market, solar wind, and chaotic R\"ossler system, demonstrating that CESNs consistently outperform conventional ESNs in terms of predictive accuracy and robustness to noise, particularly when using ER and SF topologies. These findings highlight the adaptability of CESNs for complex, multivariate time series forecasting.
 + oai:arXiv.org:2512.08963v1
 nlin.CD
 cs.LG
 Thu, 11 Dec 2025 00:00:00 -0500
 cross
 http://creativecommons.org/publicdomain/zero/1.0/
 10.1140/epjp/s13360-025-07077-3
 S. Hariharan, R. Suresh, V. K. Chandrasekar
 
 
 Do Depth-Grown Models Overcome the Curse of Depth? An In-Depth Analysis
 https://arxiv.org/abs/2512.08819
 arXiv:2512.08819v1 Announce Type: new
-Abstract: Gradually growing the depth of Transformers during training can not only reduce training cost but also lead to improved reasoning performance, as shown by MIDAS (Saunshi et al., 2024). Thus far, however, a mechanistic understanding of these gains has been missing. In this work, we establish a connection to recent work showing that layers in the second half of non-grown, pre-layernorm Transformers contribute much less to the final output distribution than those in the first half - also known as the Curse of Depth (Sun et al., 2025, Csord\'as et al., 2025). 
Using depth-wise analyses, we demonstrate that growth via gradual middle stacking yields more effective utilization of model depth, alters the residual stream structure, and facilitates the formation of permutable computational blocks. In addition, we propose a lightweight modification of MIDAS that yields further improvements in downstream reasoning benchmarks. Overall, this work highlights how the gradual growth of model depth can lead to the formation of distinct computational circuits and overcome the limited depth utilization seen in standard non-grown models. - oai:arXiv.org:2512.08819v1 - cs.CL - cs.AI + FuXi-Nowcast: Meet the longstanding challenge of convective initiation in nowcasting + https://arxiv.org/abs/2512.08974 + arXiv:2512.08974v1 Announce Type: cross +Abstract: Accurate nowcasting of convective storms remains a major challenge for operational forecasting, particularly for convective initiation and the evolution of high-impact rainfall and strong winds. Here we present FuXi-Nowcast, a deep-learning system that jointly predicts composite radar reflectivity, surface precipitation, near-surface temperature, wind speed and wind gusts at 1-km resolution over eastern China. FuXi-Nowcast integrates multi-source observations, such as radar, surface stations and the High-Resolution Land Data Assimilation System (HRLDAS), with three-dimensional atmospheric fields from the machine-learning weather model FuXi-2.0 within a multi-task Swin-Transformer architecture. A convective signal enhancement module and distribution-aware hybrid loss functions are designed to preserve intense convective structures and mitigate the rapid intensity decay common in deep-learning nowcasts. FuXi-Nowcast surpasses the operational CMA-MESO 3-km numerical model in Critical Success Index for reflectivity, precipitation and wind gusts across thresholds and lead times up to 12 h, with the largest gains for heavy rainfall. 
Case studies further show that FuXi-Nowcast more accurately captures the timing, location and structure of convective initiation and the subsequent evolution of convection. These results demonstrate that coupling three-dimensional machine-learning forecasts with high-resolution observations can provide multi-hazard, long-lead nowcasts that outperform current operational systems.
 + oai:arXiv.org:2512.08974v1
 physics.ao-ph
 cs.LG
 Thu, 11 Dec 2025 00:00:00 -0500
 cross
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Lei Chen, Zijian Zhu, Xiaoran Zhuang, Tianyuan Qi, Yuxuan Feng, Xiaohui Zhong, Hao Li
 
 
 Training-Free Dual Hyperbolic Adapters for Better Cross-Modal Reasoning
 https://arxiv.org/abs/2512.08820
 arXiv:2512.08820v1 Announce Type: new
-Abstract: Recent research in Vision-Language Models (VLMs) has significantly advanced our capabilities in cross-modal reasoning. However, existing methods suffer from performance degradation with domain changes or require substantial computational resources for fine-tuning in new domains. To address this issue, we develop a new adaptation method for large vision-language models, called \textit{Training-free Dual Hyperbolic Adapters} (T-DHA). We characterize the vision-language relationship between semantic concepts, which typically has a hierarchical tree structure, in the hyperbolic space instead of the traditional Euclidean space. Hyperbolic spaces exhibit exponential volume growth with radius, unlike the polynomial growth in Euclidean space. We find that this unique property is particularly effective for embedding hierarchical data structures using the Poincar\'e ball model, achieving significantly improved representation and discrimination power. 
Coupled with negative learning, it provides more accurate and robust classifications with fewer feature dimensions. Our extensive experimental results on various datasets demonstrate that the T-DHA method significantly outperforms existing state-of-the-art methods in few-shot image recognition and domain generalization tasks. - oai:arXiv.org:2512.08820v1 + Agreement Disagreement Guided Knowledge Transfer for Cross-Scene Hyperspectral Imaging + https://arxiv.org/abs/2512.08990 + arXiv:2512.08990v1 Announce Type: cross +Abstract: Knowledge transfer plays a crucial role in cross-scene hyperspectral imaging (HSI). However, existing studies often overlook the challenges of gradient conflicts and dominant gradients that arise during the optimization of shared parameters. Moreover, many current approaches fail to simultaneously capture both agreement and disagreement information, relying only on a limited shared subset of target features and consequently missing the rich, diverse patterns present in the target scene. To address these issues, we propose an Agreement Disagreement Guided Knowledge Transfer (ADGKT) framework that integrates both mechanisms to enhance cross-scene transfer. The agreement component includes GradVac, which aligns gradient directions to mitigate conflicts between source and target domains, and LogitNorm, which regulates logit magnitudes to prevent domination by a single gradient source. The disagreement component consists of a Disagreement Restriction (DiR) and an ensemble strategy, which capture diverse predictive target features and mitigate the loss of critical target information. Extensive experiments demonstrate the effectiveness and superiority of the proposed method in achieving robust and balanced knowledge transfer across heterogeneous HSI scenes. 
+ oai:arXiv.org:2512.08990v1 + eess.IV cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yi Zhang, Chun-Wun Cheng, Junyi He, Ke Yu, Yushun Tang, Carola-Bibiane Sch\"onlieb, Zhihai He, Angelica I. Aviles-Rivero + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Lu Huo, Haimin Zhang, Min Xu - CARLoS: Retrieval via Concise Assessment Representation of LoRAs at Scale - https://arxiv.org/abs/2512.08826 - arXiv:2512.08826v1 Announce Type: new -Abstract: The rapid proliferation of generative components, such as LoRAs, has created a vast but unstructured ecosystem. Existing discovery methods depend on unreliable user descriptions or biased popularity metrics, hindering usability. We present CARLoS, a large-scale framework for characterizing LoRAs without requiring additional metadata. Analyzing over 650 LoRAs, we employ them in image generation over a variety of prompts and seeds, as a credible way to assess their behavior. Using CLIP embeddings and their difference to a base-model generation, we concisely define a three-part representation: Directions, defining semantic shift; Strength, quantifying the significance of the effect; and Consistency, quantifying how stable the effect is. Using these representations, we develop an efficient retrieval framework that semantically matches textual queries to relevant LoRAs while filtering overly strong or unstable ones, outperforming textual baselines in automated and human evaluations. While retrieval is our primary focus, the same representation also supports analyses linking Strength and Consistency to legal notions of substantiality and volition, key considerations in copyright, positioning CARLoS as a practical system with broader relevance for LoRA analysis. 
- oai:arXiv.org:2512.08826v1
 + Enhanced Chest Disease Classification Using an Improved CheXNet Framework with EfficientNetV2-M and Optimization-Driven Learning
 + https://arxiv.org/abs/2512.08992
 + arXiv:2512.08992v1 Announce Type: cross
+Abstract: Chest X-ray interpretation is an important diagnostic task in clinical practice, especially in resource-limited settings where a shortage of radiologists contributes to delayed diagnoses and poor patient outcomes. Although the original CheXNet architecture has shown potential in automated analysis of chest radiographs, its DenseNet-121 backbone is computationally inefficient and poorly suited to single-label classification. To eliminate these shortcomings, we propose an improved chest disease classification framework that relies on EfficientNetV2-M and incorporates advanced training techniques such as Automatic Mixed Precision training, AdamW, Cosine Annealing learning rate scheduling, and Exponential Moving Average regularization. We prepared a dataset of 18,080 chest X-ray images from three authoritative sources, covering five clinically significant disease categories: Cardiomegaly, COVID-19, Normal, Pneumonia, and Tuberculosis. To achieve statistical reliability and reproducibility, nine independent experimental runs were performed. The proposed architecture showed significant gains, with mean test accuracy of 96.45 percent compared to 95.30 percent at baseline (p less than 0.001), while the macro-averaged F1-score increased to 91.08 percent (p less than 0.001). Critical infectious diseases showed near-perfect classification performance, with 99.95 percent accuracy for COVID-19 detection and 99.97 percent for Tuberculosis detection. Although the model includes 6.8 times more parameters, training time was reduced by 11.4 percent and performance stability increased by 22.7 percent.
This framework can serve as a decision-support tool for pandemic response, tuberculosis screening, and routine thoracic disease assessment in various healthcare facilities.
 + oai:arXiv.org:2512.08992v1
 + eess.IV
 cs.AI
 - Wed, 10 Dec 2025 00:00:00 -0500
 - new
 - http://creativecommons.org/licenses/by-nc-sa/4.0/
 - Shahar Sarfaty, Adi Haviv, Uri Hacohen, Niva Elkin-Koren, Roi Livni, Amit H. Bermano


 - InfiniteVL: Synergizing Linear and Sparse Attention for Highly-Efficient, Unlimited-Input Vision-Language Models
 - https://arxiv.org/abs/2512.08829
 - arXiv:2512.08829v1 Announce Type: new
-Abstract: Window attention and linear attention represent two principal strategies for mitigating the quadratic complexity and ever-growing KV cache in Vision-Language Models (VLMs). However, we observe that window-based VLMs suffer performance degradation when sequence length exceeds the window size, while linear attention underperforms on information-intensive tasks such as OCR and document understanding. To overcome these limitations, we propose InfiniteVL, a linear-complexity VLM architecture that synergizes sliding window attention (SWA) with Gated DeltaNet. To achieve competitive multimodal performance under constrained resources, we design a three-stage training strategy comprising distillation pretraining, instruction tuning, and long-sequence SFT. Remarkably, using less than 2\% of the training data required by leading VLMs, InfiniteVL not only substantially outperforms previous linear-complexity VLMs but also matches the performance of leading Transformer-based VLMs, while demonstrating effective long-term memory retention. Compared to similar-sized Transformer-based VLMs accelerated by FlashAttention-2, InfiniteVL achieves over 3.6\times inference speedup while maintaining constant latency and memory footprint. In streaming video understanding scenarios, it sustains a stable 24 FPS real-time prefill speed while preserving long-term memory cache.
Code and models are available at https://github.com/hustvl/InfiniteVL. - oai:arXiv.org:2512.08829v1 cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hongyuan Tao, Bencheng Liao, Shaoyu Chen, Haoran Yin, Qian Zhang, Wenyu Liu, Xinggang Wang + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Ali M. Bahram, Saman Muhammad Omer, Hardi M. Mohammed, Sirwan Abdolwahed Aula - Forecasting Fails: Unveiling Evasion Attacks in Weather Prediction Models - https://arxiv.org/abs/2512.08832 - arXiv:2512.08832v1 Announce Type: new -Abstract: With the increasing reliance on AI models for weather forecasting, it is imperative to evaluate their vulnerability to adversarial perturbations. This work introduces Weather Adaptive Adversarial Perturbation Optimization (WAAPO), a novel framework for generating targeted adversarial perturbations that are both effective in manipulating forecasts and stealthy to avoid detection. WAAPO achieves this by incorporating constraints for channel sparsity, spatial localization, and smoothness, ensuring that perturbations remain physically realistic and imperceptible. Using the ERA5 dataset and FourCastNet (Pathak et al. 2022), we demonstrate WAAPO's ability to generate adversarial trajectories that align closely with predefined targets, even under constrained conditions. Our experiments highlight critical vulnerabilities in AI-driven forecasting models, where small perturbations to initial conditions can result in significant deviations in predicted weather patterns. These findings underscore the need for robust safeguards to protect against adversarial exploitation in operational forecasting systems. 
- oai:arXiv.org:2512.08832v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new + DermETAS-SNA LLM: A Dermatology Focused Evolutionary Transformer Architecture Search with StackNet Augmented LLM Assistant + https://arxiv.org/abs/2512.08998 + arXiv:2512.08998v1 Announce Type: cross +Abstract: Our work introduces the DermETAS-SNA LLM Assistant that integrates Dermatology-focused Evolutionary Transformer Architecture Search with StackNet Augmented LLM. The assistant dynamically learns skin-disease classifiers and provides medically informed descriptions to facilitate clinician-patient interpretation. Contributions include: (1) Developed an ETAS framework on the SKINCON dataset to optimize a Vision Transformer (ViT) tailored for dermatological feature representation and then fine-tuned binary classifiers for each of the 23 skin disease categories in the DermNet dataset to enhance classification performance; (2) Designed a StackNet architecture that integrates multiple fine-tuned binary ViT classifiers to enhance predictive robustness and mitigate class imbalance issues; (3) Implemented a RAG pipeline, termed Diagnostic Explanation and Retrieval Model for Dermatology, which harnesses the capabilities of the Google Gemini 2.5 Pro LLM architecture to generate personalized, contextually informed diagnostic descriptions and explanations for patients, leveraging a repository of verified dermatological materials; (4) Performed extensive experimental evaluations on 23 skin disease categories to demonstrate performance increase, achieving an overall F1-score of 56.30% that surpasses SkinGPT-4 (48.51%) by a considerable margin, representing a performance increase of 16.06%; (5) Conducted a domain-expert evaluation, with eight licensed medical doctors, of the clinical responses generated by our AI assistant for seven dermatological conditions. 
Our results show a 92% agreement rate with the assessments provided by our AI assistant; (6) Created a proof-of-concept prototype that fully integrates our DermETAS-SNA LLM into our AI assistant to demonstrate its practical feasibility for real-world clinical and educational applications.
 + oai:arXiv.org:2512.08998v1
 eess.IV
 cs.AI
 cs.CV
 Thu, 11 Dec 2025 00:00:00 -0500
 cross
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Nitya Phani Santosh Oruganty, Keerthi Vemula Murali, Chun-Kit Ngan, Paulo Bandeira Pinho


 - Interpolation in Knowledge Representation
 - https://arxiv.org/abs/2512.08833
 - arXiv:2512.08833v1 Announce Type: new
-Abstract: Craig interpolation and uniform interpolation have many applications in knowledge representation, including explainability, forgetting, modularization and reuse, and even learning. At the same time, many relevant knowledge representation formalisms do not, in general, have Craig or uniform interpolation, and computing interpolants in practice is challenging. We take a closer look at two prominent knowledge representation formalisms, description logics and logic programming, and discuss theoretical results and practical methods for computing interpolants.
 - oai:arXiv.org:2512.08833v1
 cs.AI
 cs.LO
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Jean Christoph Jung, Patrick Koopmann, Matthias Knorr


 + Digital Modeling of Spatial Pathway Activity from Histology Reveals Tumor Microenvironment Heterogeneity
 + https://arxiv.org/abs/2512.09003
 + arXiv:2512.09003v1 Announce Type: cross
+Abstract: Spatial transcriptomics (ST) enables simultaneous mapping of tissue morphology and spatially resolved gene expression, offering unique opportunities to study tumor microenvironment heterogeneity. Here, we introduce a computational framework that predicts spatial pathway activity directly from hematoxylin-and-eosin-stained histology images at microscale resolutions of 55 and 100 um.
Using image features derived from a computational pathology foundation model, we found that TGFb signaling was the most accurately predicted pathway across three independent breast and lung cancer ST datasets. In 87-88% of reliably predicted cases, the resulting spatial TGFb activity maps reflected the expected contrast between tumor and adjacent non-tumor regions, consistent with the known role of TGFb in regulating interactions within the tumor microenvironment. Notably, linear and nonlinear predictive models performed similarly, suggesting that image features may relate to pathway activity in a predominantly linear fashion or that nonlinear structure is small relative to measurement noise. These findings demonstrate that features extracted from routine histopathology may recover spatially coherent and biologically interpretable pathway patterns, offering a scalable strategy for integrating image-based inference with ST information in tumor microenvironment studies. + oai:arXiv.org:2512.09003v1 + q-bio.QM cs.AI - cs.LO - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Jean Christoph Jung, Patrick Koopmann, Matthias Knorr + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ling Liao, Changhuei Yang, Maxim Artyomov, Mark Watson, Adam Kepecs, Haowen Zhou, Alexey Sergushichev, Richard Cote - "Nothing about us without us": Perspectives of Global Deaf and Hard-of-hearing Community Members on Sign Language Technologies - https://arxiv.org/abs/2512.08839 - arXiv:2512.08839v1 Announce Type: new -Abstract: There is accelerating interest in sign language technologies (SLTs), with increasing attention from both industry and academia. However, the perspectives of Deaf and Hard-of-hearing (DHH) individuals remain marginalized in their development, particularly those outside of the West and in the global South. 
This paper presents findings from a global, multilingual survey capturing community views on SLTs across a wide range of countries, sign languages, and cultural contexts. While participants recognized the potential of SLTs to support access and independence, many expressed concerns about cultural erasure, inaccurate translation, and hearing-dominated research pipelines. Perceptions of SLTs were shaped by factors including sign language proficiency, policy exposure, and deaf identity. Across regions, participants emphasized the importance of DHH-led design, citing the risk of harm when DHH communities are excluded from technological decision-making. This study offers a novel cross-continental, community-informed analysis of SLTs and concludes with actionable recommendations for researchers, technologists, and policymakers. - oai:arXiv.org:2512.08839v1 - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 - new + Interpretable machine learning of halo gas density profiles: a sensitivity analysis of cosmological hydrodynamical simulations + https://arxiv.org/abs/2512.09021 + arXiv:2512.09021v1 Announce Type: cross +Abstract: Stellar and AGN-driven feedback processes affect the distribution of gas on a wide range of scales, from within galaxies well into the intergalactic medium. Yet, it remains unclear how feedback, through its connection to key galaxy properties, shapes the radial gas density profile in the host halo. We tackle this question using suites of the EAGLE, IllustrisTNG, and Simba cosmological hydrodynamical simulations, which span a variety of feedback models. We develop a random forest algorithm that predicts the radial gas density profile within haloes from the total halo mass and five global properties of the central galaxy: gas and stellar mass; star formation rate; mass and accretion rate of the central black hole (BH). 
The algorithm reproduces the simulated gas density profiles with an average accuracy of $\sim$80-90% over the halo mass range $10^{9.5} \, \mathrm{M}_{\odot} < M_{\rm 200c} < 10^{15} \, \mathrm{M}_{\odot}$ and redshift interval $0<z<4$. For the first time, we apply Sobol statistical sensitivity analysis to full cosmological hydrodynamical simulations, quantifying how each feature affects the gas density as a function of distance from the halo centre. Across all simulations and redshifts, the total halo mass and the gas mass of the central galaxy are the most strongly tied to the halo gas distribution, while stellar and BH properties are generally less informative. The exact relative importance of the different features depends on the feedback scenario and redshift. Our framework can be readily embedded in semi-analytic models of galaxy formation to incorporate halo gas density profiles consistent with different hydrodynamical simulations. Our work also provides a proof of concept for constraining feedback models with future observations of galaxy properties and of the surrounding gas distribution. + oai:arXiv.org:2512.09021v1 + astro-ph.GA + astro-ph.CO + astro-ph.IM + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Katherine Atwell, Saki Imai, Danielle Bragg, Malihe Alikhani + Daniele Sorini, Sownak Bose, Mathilda Denison, Romeel Dav\'e - Space-time discretization for barotropic flow stemming from a multisymplectic variational formulation - https://arxiv.org/abs/2512.08841 - arXiv:2512.08841v1 Announce Type: new -Abstract: This study proposes and analyses a novel higher-order, structure preserving discretization method for inviscid barotropic flows from a Lagrangian perspective. The method is built on a multisymplectic variational principle discretized over a full space-time domain. Flow variables are encoded on a staggered space-time mesh, leveraging the principles of mimetic spectral element discretization. 
Unlike standard Lagrangian methods, which are prone to mesh distortion, this framework computes fluid deformations in a fixed reference configuration and systematically maps them to the physical domain via the Piola-Kirchhoff stress. Further, the structure preserving design ensures that the discrete analogues of the fundamental conservation laws for mass, momentum, and energy are satisfied up to machine precision. The formulation also inherently handles low-Mach number flows without specialized preconditioning. Numerical experiments on expansion and compression flows confirm the accuracy, stability, and exact conservation properties of the discretization.
 - oai:arXiv.org:2512.08841v1
 math.NA
 cs.NA
 physics.flu-dyn
 Wed, 10 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Mukthesh Mahadev, Marc Gerritsma


 + Monitoring Deployed AI Systems in Health Care
 + https://arxiv.org/abs/2512.09048
 + arXiv:2512.09048v1 Announce Type: cross
+Abstract: Post-deployment monitoring of artificial intelligence (AI) systems in health care is essential to ensure their safety, quality, and sustained benefit, and to support governance decisions about which systems to update, modify, or decommission. Motivated by these needs, we developed a framework for monitoring deployed AI systems grounded in the mandate to take specific actions when they fail to behave as intended. This framework, which is now actively used at Stanford Health Care, is organized around three complementary principles: system integrity, performance, and impact. System integrity monitoring focuses on maximizing system uptime, detecting runtime errors, and identifying when changes to the surrounding IT ecosystem have unintended effects. Performance monitoring focuses on maintaining accurate system behavior in the face of changing health care practices (and thus input data) over time.
Impact monitoring assesses whether a deployed system continues to have value in the form of benefit to clinicians and patients. Drawing on examples of deployed AI systems at our academic medical center, we provide practical guidance for creating monitoring plans based on these principles that specify which metrics to measure, when those metrics should be reviewed, who is responsible for acting when metrics change, and what concrete follow-up actions should be taken, for both traditional and generative AI. We also discuss challenges to implementing this framework, including the effort and cost of monitoring for health systems with limited resources and the difficulty of incorporating data-driven monitoring practices into complex organizations where conflicting priorities and definitions of success often coexist. This framework offers a practical template and starting point for health systems seeking to ensure that AI deployments remain safe and effective over time.
 + oai:arXiv.org:2512.09048v1
 + q-bio.OT
 cs.AI
 Thu, 11 Dec 2025 00:00:00 -0500
 cross
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Timothy Keyes, Alison Callahan, Abby S. Pandya, Nerissa Ambers, Juan M. Banda, Miguel Fuentes, Carlene Lugtu, Pranav Masariya, Srikar Nallan, Connor O'Brien, Thomas Wang, Emily Alsentzer, Jonathan H. Chen, Dev Dash, Matthew A. Eisenberg, Patricia Garcia, Nikesh Kotecha, Anurang Revri, Michael A. Pfeffer, Nigam H. Shah, Sneha S. Jain


 - A Methodology for Quantitative AI Risk Modeling
 - https://arxiv.org/abs/2512.08844
 - arXiv:2512.08844v1 Announce Type: new
-Abstract: Although general-purpose AI systems offer transformational opportunities in science and industry, they simultaneously raise critical concerns about safety, misuse, and potential loss of control. Despite these risks, methods for assessing and managing them remain underdeveloped.
Effective risk management requires systematic modeling to characterize potential harms, as emphasized in frameworks such as the EU General-Purpose AI Code of Practice. This paper advances the risk modeling component of AI risk management by introducing a methodology that integrates scenario building with quantitative risk estimation, drawing on established approaches from other high-risk industries. Our methodology models risks through a six-step process: (1) defining risk scenarios, (2) decomposing them into quantifiable parameters, (3) quantifying baseline risk without AI models, (4) identifying key risk indicators such as benchmarks, (5) mapping these indicators to model parameters to estimate LLM uplift, and (6) aggregating individual parameters into risk estimates that enable concrete claims (e.g., X% probability of >\$Y in annual cyber damages). We examine the choices that underlie our methodology throughout the article, with discussions of strengths, limitations, and implications for future research. Our methodology is designed to be applicable to key systemic AI risks, including cyber offense, biological weapon development, harmful manipulation, and loss-of-control, and is validated through extensive application in LLM-enabled cyber offense. Detailed empirical results and cyber-specific insights are presented in a companion paper. - oai:arXiv.org:2512.08844v1 - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Malcolm Murray, Steve Barrett, Henry Papadatos, Otter Quarks, Matt Smith, Alejandro Tlaie Boria, Chlo\'e Touzet, Sim\'eon Campos + Cyqlone: A Parallel, High-Performance Linear Solver for Optimal Control + https://arxiv.org/abs/2512.09058 + arXiv:2512.09058v1 Announce Type: cross +Abstract: We present Cyqlone, a solver for linear systems with a stage-wise optimal control structure that fully exploits the various levels of parallelism available in modern hardware. 
Cyqlone unifies algorithms based on the sequential Riccati recursion, parallel Schur complement methods, and cyclic reduction methods, thereby minimizing the required number of floating-point operations, while allowing parallelization across a user-configurable number of processors. Given sufficient parallelism, the solver run time scales with the logarithm of the horizon length (in contrast to the linear scaling of sequential Riccati-based methods), enabling real-time solution of long-horizon problems. Beyond multithreading on multi-core processors, implementations of Cyqlone can also leverage vectorization using batched linear algebra routines. Such batched routines exploit data parallelism using single instruction, multiple data (SIMD) operations, and expose a higher degree of instruction-level parallelism than their non-batched counterparts. This enables them to significantly outperform BLAS and BLASFEO for the small matrices that arise in optimal control. Building on this high-performance linear solver, we develop CyQPALM, a parallel and optimal-control-specific variant of the QPALM quadratic programming solver. It combines the parallel and vectorized linear algebra operations from Cyqlone with a parallel line search and parallel factorization updates, resulting in order-of-magnitude speedups compared to the state-of-the-art HPIPM solver. Open-source C++ implementations of Cyqlone and CyQPALM are available at https://github.com/kul-optec/cyqlone + oai:arXiv.org:2512.09058v1 + math.OC + cs.SY + eess.SY + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Pieter Pas, Panagiotis Patrinos - Generation is Required for Data-Efficient Perception - https://arxiv.org/abs/2512.08854 - arXiv:2512.08854v1 Announce Type: new -Abstract: It has been hypothesized that human-level visual perception requires a generative approach in which internal representations result from inverting a decoder. 
Yet today's most successful vision models are non-generative, relying on an encoder that maps images to representations without decoder inversion. This raises the question of whether generation is, in fact, necessary for machines to achieve human-level visual perception. To address this, we study whether generative and non-generative methods can achieve compositional generalization, a hallmark of human perception. Under a compositional data generating process, we formalize the inductive biases required to guarantee compositional generalization in decoder-based (generative) and encoder-based (non-generative) methods. We then show theoretically that enforcing these inductive biases on encoders is generally infeasible using regularization or architectural constraints. In contrast, for generative methods, the inductive biases can be enforced straightforwardly, thereby enabling compositional generalization by constraining a decoder and inverting it. We highlight how this inversion can be performed efficiently, either online through gradient-based search or offline through generative replay. We examine the empirical implications of our theory by training a range of generative and non-generative methods on photorealistic image datasets. We find that, without the necessary inductive biases, non-generative methods often fail to generalize compositionally and require large-scale pretraining or added supervision to improve generalization. By comparison, generative methods yield significant improvements in compositional generalization, without requiring additional data, by leveraging suitable inductive biases on a decoder along with search and replay. 
- oai:arXiv.org:2512.08854v1 + Causal Attribution of Model Performance Gaps in Medical Imaging Under Distribution Shifts + https://arxiv.org/abs/2512.09094 + arXiv:2512.09094v1 Announce Type: cross +Abstract: Deep learning models for medical image segmentation suffer significant performance drops due to distribution shifts, but the causal mechanisms behind these drops remain poorly understood. We extend causal attribution frameworks to high-dimensional segmentation tasks, quantifying how acquisition protocols and annotation variability independently contribute to performance degradation. We model the data-generating process through a causal graph and employ Shapley values to fairly attribute performance changes to individual mechanisms. Our framework addresses unique challenges in medical imaging: high-dimensional outputs, limited samples, and complex mechanism interactions. Validation on multiple sclerosis (MS) lesion segmentation across 4 centers and 7 annotators reveals context-dependent failure modes: annotation protocol shifts dominate when crossing annotators (7.4% $\pm$ 8.9% DSC attribution), while acquisition shifts dominate when crossing imaging centers (6.5% $\pm$ 9.1%). This mechanism-specific quantification enables practitioners to prioritize targeted interventions based on deployment context. + oai:arXiv.org:2512.09094v1 + eess.IV cs.CV cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jack Brady, Bernhard Sch\"olkopf, Thomas Kipf, Simon Buchholz, Wieland Brendel - - - Reinforcement Learning From State and Temporal Differences - https://arxiv.org/abs/2512.08855 - arXiv:2512.08855v1 Announce Type: new -Abstract: TD($\lambda$) with function approximation has proved empirically successful for some complex reinforcement learning problems. For linear approximation, TD($\lambda$) has been shown to minimise the squared error between the approximate value of each state and the true value. 
However, as far as policy is concerned, it is error in the relative ordering of states that is critical, rather than error in the state values. We illustrate this point, both in simple two-state and three-state systems in which TD($\lambda$)--starting from an optimal policy--converges to a sub-optimal policy, and also in backgammon. We then present a modified form of TD($\lambda$), called STD($\lambda$), in which function approximators are trained with respect to relative state values on binary decision problems. A theoretical analysis, including a proof of monotonic policy improvement for STD($\lambda$) in the context of the two-state system, is presented, along with a comparison with Bertsekas' differential training method [1]. This is followed by successful demonstrations of STD($\lambda$) on the two-state system and a variation on the well known acrobot problem. - oai:arXiv.org:2512.08855v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Lex Weaver, Jonathan Baxter - - - Can the GPC standard eliminate consent banners in the EU? - https://arxiv.org/abs/2512.08856 - arXiv:2512.08856v1 Announce Type: new -Abstract: In the EU, the General Data Protection Regulation and the ePrivacy Directive mandate informed consent for behavioural advertising and use of tracking technologies. However, the ubiquity of consent banners and popups has led to widespread consent fatigue and questions regarding the effectiveness of these mechanisms in protecting users' data. In contrast, users in California and other US jurisdictions can utilize Global Privacy Control (GPC), a browser-based privacy signal that automatically broadcasts a legally binding opt-out request to websites. In this paper we explore whether, and to what extent, GPC can be adapted to the EU legal framework to mitigate consent fatigue and improve privacy protections for EU residents. 
- We analyse GPC as a technical specification standardized at the World Wide Web Consortium and examine its standing under current EU data protection law. Generally, GPC can be mapped to the various legal bases for processing under the GDPR. However, our evaluation also identifies friction between the GPC specification and EU data protection law as it stands. These discrepancies are resolvable and present an opportunity for EU legislators and regulators to interpret GPC in alignment with EU data protection requirements, particularly, considering the European Commission's recent Digital Omnibus proposal. We conclude that while GPC is not a silver bullet, its adoption -- supported by clear authoritative guidance and specification updates -- can offer a pragmatic path toward more automated and effective data protection in the EU. - oai:arXiv.org:2512.08856v1 - cs.CY - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sebastian Zimmeck, Harshvardhan J. Pandit, Frederik Zuiderveen Borgesius, Cristiana Teixeira Santos, Konrad Kollnig, Robin Berjon - - - NecoFuzz: Effective Fuzzing of Nested Virtualization via Fuzz-Harness Virtual Machines - https://arxiv.org/abs/2512.08858 - arXiv:2512.08858v1 Announce Type: new -Abstract: Nested virtualization is now widely supported by major cloud vendors, allowing users to leverage virtualization-based technologies in the cloud. However, supporting nested virtualization significantly increases host hypervisor complexity and introduces a new attack surface in cloud platforms. While many prior studies have explored hypervisor fuzzing, none has explicitly addressed nested virtualization due to the challenge of generating effective virtual machine (VM) instances with a vast state space as fuzzing inputs. - We present NecoFuzz, the first fuzzing framework that systematically targets nested virtualization-specific logic in hypervisors. 
NecoFuzz synthesizes executable fuzz-harness VMs with internal states near the boundary between valid and invalid, guided by an approximate model of hardware-assisted virtualization specifications. Since vulnerabilities in nested virtualization often stem from incorrect handling of unexpected VM states, this specification-guided, boundary-oriented generation significantly improves coverage of security-critical code across different hypervisors. - We implemented NecoFuzz on Intel VT-x and AMD-V by extending AFL++ to support fuzz-harness VMs. NecoFuzz achieved 84.7% and 74.2% code coverage for nested virtualization-specific code on Intel VT-x and AMD-V, respectively, and uncovered six previously unknown vulnerabilities across three hypervisors, including two assigned CVEs. - oai:arXiv.org:2512.08858v1 - cs.OS - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - 10.1145/3767295.3769347 - Reima Ishii, Takaaki Fukai, Takahiro Shinagawa - - - Refining Diffusion Models for Motion Synthesis with an Acceleration Loss to Generate Realistic IMU Data - https://arxiv.org/abs/2512.08859 - arXiv:2512.08859v1 Announce Type: new -Abstract: We propose a text-to-IMU (inertial measurement unit) motion-synthesis framework to obtain realistic IMU data by fine-tuning a pretrained diffusion model with an acceleration-based second-order loss (L_acc). L_acc enforces consistency in the discrete second-order temporal differences of the generated motion, thereby aligning the diffusion prior with IMU-specific acceleration patterns. We integrate L_acc into the training objective of an existing diffusion model, finetune the model to obtain an IMU-specific motion prior, and evaluate the model with an existing text-to-IMU framework that comprises surface modelling and virtual sensor simulation. We analysed acceleration signal fidelity and differences between synthetic motion representation and actual IMU recordings. 
As a downstream application, we evaluated Human Activity Recognition (HAR) and compared the classification performance using data from our method with the earlier diffusion model and two additional diffusion model baselines. When we augmented the earlier diffusion model objective with L_acc and continued training, L_acc decreased by 12.7% relative to the original model. The improvements were considerably larger in high-dynamic activities (i.e., running, jumping) compared to low-dynamic activities (i.e., sitting, standing). In a low-dimensional embedding, the synthetic IMU data produced by our refined model shifts closer to the distribution of real IMU recordings. HAR classification trained exclusively on our refined synthetic IMU data improved performance by 8.7% compared to the earlier diffusion model and by 7.6% over the best-performing comparison diffusion model. We conclude that acceleration-aware diffusion refinement provides an effective approach to align motion generation and IMU synthesis and highlights how flexible deep learning pipelines are for specialising generic text-to-motion priors to sensor-specific tasks. - oai:arXiv.org:2512.08859v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Lars Ole H\"ausler, Lena Uhlenberg, G\"oran K\"ober, Diyora Salimova, Oliver Amft - - - Tri-Bench: Stress-Testing VLM Reliability on Spatial Reasoning under Camera Tilt and Object Interference - https://arxiv.org/abs/2512.08860 - arXiv:2512.08860v1 Announce Type: new -Abstract: Verifiable geometric reasoning is a critical component for trustworthy and controllable agentic AI. Despite impressive capabilities, Vision-Language Models (VLMs) often fail under realistic scene changes. We present Tri-Bench, a compact benchmark of planar triangle problems that isolates relative geometric reasoning while stressing two deployment-critical factors: camera pose (planar vs. 
tilted) and scene context via object interference (10 everyday objects). To test verifiability and control, we evaluate four recent VLMs using a single, fixed prompt whose guardrail explicitly describes a surrounding square border, enabling correct answers via homography. We evaluate six simple tasks over binary and continuous targets, and observe that the overall accuracy with respect to 3D ground truth is modest, ~69% on average (best ~75%, worst ~64%). The same responses align even more closely with 2D projections in the image plane, where mean accuracy is ~72%. All four VLMs consistently fail, with accuracy falling to ~0%, on recognizing minority shape classes (equilateral, isosceles, right-angled triangles). Additionally, overall VLM accuracy degrades by ~4.1% under camera tilt. This demonstrates that models fail to correctly utilize the explicit frame-of-reference hint provided in the prompt and default to 2D image plane cues. Finally, we find that object interference has no significant effect on VLM accuracy. - oai:arXiv.org:2512.08860v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Amit Bendkhale - - - Secure and Privacy-Preserving Federated Learning for Next-Generation Underground Mine Safety - https://arxiv.org/abs/2512.08862 - arXiv:2512.08862v1 Announce Type: new -Abstract: Underground mining operations depend on sensor networks to monitor critical parameters such as temperature, gas concentration, and miner movement, enabling timely hazard detection and safety decisions. However, transmitting raw sensor data to a centralized server for machine learning (ML) model training raises serious privacy and security concerns. Federated Learning (FL) offers a promising alternative by enabling decentralized model training without exposing sensitive local data. 
Yet, applying FL in underground mining presents unique challenges: (i) Adversaries may eavesdrop on shared model updates to launch model inversion or membership inference attacks, compromising data privacy and operational safety; (ii) Non-IID data distributions across mines and sensor noise can hinder model convergence. To address these issues, we propose FedMining--a privacy-preserving FL framework tailored for underground mining. FedMining introduces two core innovations: (1) a Decentralized Functional Encryption (DFE) scheme that keeps local models encrypted, thwarting unauthorized access and inference attacks; and (2) a balancing aggregation mechanism to mitigate data heterogeneity and enhance convergence. Evaluations on real-world mining datasets demonstrate FedMining's ability to safeguard privacy while maintaining high model accuracy and achieving rapid convergence with reduced communication and computation overhead. These advantages make FedMining both secure and practical for real-time underground safety monitoring. - oai:arXiv.org:2512.08862v1 - cs.CR - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-sa/4.0/ - Mohamed Elmahallawy, Sanjay Madria, Samuel Frimpong - - - Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse - https://arxiv.org/abs/2512.08864 - arXiv:2512.08864v1 Announce Type: new -Abstract: Advanced AI systems offer substantial benefits but also introduce risks. In 2025, AI-enabled cyber offense has emerged as a concrete example. This technical report applies a quantitative risk modeling methodology (described in full in a companion paper) to this domain. We develop nine detailed cyber risk models that allow analyzing AI uplift as a function of AI benchmark performance. 
Each model decomposes attacks into steps using the MITRE ATT&CK framework and estimates how AI affects the number of attackers, attack frequency, probability of success, and resulting harm to determine different types of uplift. To produce these estimates with associated uncertainty, we employ both human experts, via a Delphi study, and LLM-based simulated experts, both mapping benchmark scores (from Cybench and BountyBench) to risk model factors. Individual estimates are aggregated through Monte Carlo simulation. The results indicate systematic uplift in attack efficacy, speed, and target reach, with different mechanisms of uplift across risk models. We intend our quantitative risk modeling to fulfill several aims: to help cybersecurity teams prioritize mitigations, AI evaluators design benchmarks, AI developers make more informed deployment decisions, and policymakers obtain information to set risk thresholds. Similar goals drove the shift from qualitative to quantitative assessment over time in other high-risk industries, such as nuclear power. We propose this methodology and initial application attempt as a step in that direction for AI risk management. While our estimates carry significant uncertainty, publishing detailed quantified results can enable experts to pinpoint exactly where they disagree. This helps to collectively refine estimates, something that cannot be done with qualitative assessments alone. 
- oai:arXiv.org:2512.08864v1 - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Steve Barrett, Malcolm Murray, Otter Quarks, Matthew Smith, Jakub Kry\'s, Sim\'eon Campos, Alejandro Tlaie Boria, Chlo\'e Touzet, Sevan Hayrapet, Fred Heiding, Omer Nevo, Adam Swanda, Jair Aguirre, Asher Brass Gershovich, Eric Clay, Ryan Fetterman, Mario Fritz, Marc Juarez, Vasilios Mavroudis, Henry Papadatos - - - SimpleDevQA: Benchmarking Large Language Models on Development Knowledge QA - https://arxiv.org/abs/2512.08867 - arXiv:2512.08867v1 Announce Type: new -Abstract: The Development Knowledge Question Answering (Dev Knowledge QA) task aims to provide natural language answers to knowledge-seeking questions during software development. To investigate its importance and to what extent it has been explored, we analyze real user-LLM dialogues from WildChat and find that: (1) The Dev Knowledge QA task accounts for 39.6% of interactions (highest among all tasks), revealing broad knowledge needs beyond code generation (32.3%). (2) Only 27.5% of real Dev Knowledge QA dialogues focus on code understanding, leaving out development knowledge-seeking. (3) Only 17.1% of real-world Dev Knowledge QA dialogues can be used for constructing a benchmark. Existing benchmarks have two primary limitations for evaluating the Dev Knowledge QA capability of LLMs. First, existing benchmarks offer a limited development knowledge scope, mainly focusing on code understanding and neglecting broader knowledge during development. Second, some benchmarks are not built from real user queries. To bridge this gap, we design a three-phase pipeline that transforms real-world dialogue into simple development knowledge-seeking QA pairs. Through this pipeline, we introduce SimpleDevQA, a multilingual benchmark derived from real user dialogues. 
It contains 2,740 QA pairs in three languages (English, Chinese, and Russian), and focuses on questions with unique, short, and verifiable answers for accurate and simple evaluation. Experiments show that: Code LLMs generally outperform general LLMs of similar scale; Knowledge injection with the Retrieval-Augmented Generation (RAG) strategy can boost LLM accuracy by 11.3% on average; LLMs show systematic overconfidence in Dev Knowledge QA, and the answering accuracy of LLMs shows a positive correlation with their stated confidence; Generally, LLMs with stronger code generation performance also exhibit stronger performance in Dev Knowledge QA. - oai:arXiv.org:2512.08867v1 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jing Zhang, Lianghong Guo, Yanlin Wang, Mingwei Liu, Jiachi Chen, Yuchi Ma, Ensheng Shi, Terry Yue Zhuo, Hongyu Zhang, Zibin Zheng - - - EcomBench: Towards Holistic Evaluation of Foundation Agents in E-commerce - https://arxiv.org/abs/2512.08868 - arXiv:2512.08868v1 Announce Type: new -Abstract: Foundation agents have rapidly advanced in their ability to reason and interact with real environments, making the evaluation of their core capabilities increasingly important. While many benchmarks have been developed to assess agent performance, most concentrate on academic settings or artificially designed scenarios while overlooking the challenges that arise in real applications. To address this issue, we focus on a highly practical real-world setting, the e-commerce domain, which involves a large volume of diverse user interactions, dynamic market conditions, and tasks directly tied to real decision-making processes. To this end, we introduce EcomBench, a holistic E-commerce Benchmark designed to evaluate agent performance in realistic e-commerce environments. 
EcomBench is built from genuine user demands embedded in leading global e-commerce ecosystems and is carefully curated and annotated by human experts to ensure clarity, accuracy, and domain relevance. It covers multiple task categories within e-commerce scenarios and defines three difficulty levels that evaluate agents on key capabilities such as deep information retrieval, multi-step reasoning, and cross-source knowledge integration. By grounding evaluation in real e-commerce contexts, EcomBench provides a rigorous and dynamic testbed for measuring the practical capabilities of agents in modern e-commerce. - oai:arXiv.org:2512.08868v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Rui Min, Zile Qiao, Ze Xu, Jiawen Zhai, Wenyu Gao, Xuanzhong Chen, Haozhen Sun, Zhen Zhang, Xinyu Wang, Hong Zhou, Wenbiao Yin, Xuan Zhou, Yong Jiang, Haicheng Liu, Liang Ding, Ling Zou, Yi R. (May) Fung, Yalong Li, Pengjun Xie - - - Differentially Private Synthetic Data Generation Using Context-Aware GANs - https://arxiv.org/abs/2512.08869 - arXiv:2512.08869v1 Announce Type: new -Abstract: The widespread use of big data across sectors has raised major privacy concerns, especially when sensitive information is shared or analyzed. Regulations such as GDPR and HIPAA impose strict controls on data handling, making it difficult to balance the need for insights with privacy requirements. Synthetic data offers a promising solution by creating artificial datasets that reflect real patterns without exposing sensitive information. However, traditional synthetic data methods often fail to capture complex, implicit rules that link different elements of the data and are essential in domains like healthcare. 
They may reproduce explicit patterns but overlook domain-specific constraints that are not directly stated yet crucial for realism and utility. For example, prescription guidelines that restrict certain medications for specific conditions or prevent harmful drug interactions may not appear explicitly in the original data. Synthetic data generated without these implicit rules can lead to medically inappropriate or unrealistic profiles. To address this gap, we propose ContextGAN, a Context-Aware Differentially Private Generative Adversarial Network that integrates domain-specific rules through a constraint matrix encoding both explicit and implicit knowledge. The constraint-aware discriminator evaluates synthetic data against these rules to ensure adherence to domain constraints, while differential privacy protects sensitive details from the original data. We validate ContextGAN across healthcare, security, and finance, showing that it produces high-quality synthetic data that respects domain rules and preserves privacy. Our results demonstrate that ContextGAN improves realism and utility by enforcing domain constraints, making it suitable for applications that require compliance with both explicit patterns and implicit rules under strict privacy guarantees. - oai:arXiv.org:2512.08869v1 - cs.LG - cs.AI - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - 10.1109/BigData62323.2024.10826047 - Anantaa Kotal, Anupam Joshi - - - Fed-SE: Federated Self-Evolution for Privacy-Constrained Multi-Environment LLM Agents - https://arxiv.org/abs/2512.08870 - arXiv:2512.08870v1 Announce Type: new -Abstract: LLM agents are widely deployed in complex interactive tasks, yet privacy constraints often preclude centralized optimization and co-evolution across dynamic environments. While Federated Learning (FL) has proven effective on static datasets, its extension to the open-ended self-evolution of agents remains underexplored. 
Directly applying standard FL is challenging: heterogeneous tasks and sparse, trajectory-level rewards introduce severe gradient conflicts, destabilizing the global optimization process. To bridge this gap, we propose Fed-SE, a Federated Self-Evolution framework for LLM agents. Fed-SE establishes a local evolution-global aggregation paradigm. Locally, agents employ parameter-efficient fine-tuning on filtered, high-return trajectories to achieve stable gradient updates. Globally, Fed-SE aggregates updates within a low-rank subspace that disentangles environment-specific dynamics, effectively reducing negative transfer across clients. Experiments across five heterogeneous environments demonstrate that Fed-SE improves average task success rates by approximately 18% over federated baselines, validating its effectiveness in robust cross-environment knowledge transfer in privacy-constrained deployments. - oai:arXiv.org:2512.08870v1 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiang Chen, Yuling Shi, Qizhen Lan, Yuchao Qiu, Xiaodong Gu - - - Siamese-Driven Optimization for Low-Resolution Image Latent Embedding in Image Captioning - https://arxiv.org/abs/2512.08873 - arXiv:2512.08873v1 Announce Type: new -Abstract: Image captioning is essential in many fields including assisting visually impaired individuals, improving content management systems, and enhancing human-computer interaction. However, a recent challenge in this domain is dealing with low-resolution image (LRI). While performance can be improved by using larger models like transformers for encoding, these models are typically heavyweight, demanding significant computational resources and memory, leading to challenges in retraining. 
To address this, the proposed SOLI (Siamese-Driven Optimization for Low-Resolution Image Latent Embedding in Image Captioning) approach presents a solution specifically designed for lightweight, low-resolution image captioning. It employs a Siamese network architecture to optimize latent embeddings, enhancing the efficiency and accuracy of the image-to-text translation process. By focusing on a dual-pathway neural network structure, SOLI minimizes computational overhead without sacrificing performance, making it an ideal choice for training in resource-constrained scenarios. - oai:arXiv.org:2512.08873v1 - cs.CV - cs.AI - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.23919/SPA61993.2024.10715604 - 2024 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA) - Jing Jie Tan, Anissa Mokraoui, Ban-Hoe Kwan, Danny Wee-Kiat Ng, Yan-Chai Hum - - - When Tables Leak: Attacking String Memorization in LLM-Based Tabular Data Generation - https://arxiv.org/abs/2512.08875 - arXiv:2512.08875v1 Announce Type: new -Abstract: Large Language Models (LLMs) have recently demonstrated remarkable performance in generating high-quality tabular synthetic data. In practice, two primary approaches have emerged for adapting LLMs to tabular data generation: (i) fine-tuning smaller models directly on tabular datasets, and (ii) prompting larger models with examples provided in context. In this work, we show that popular implementations from both regimes exhibit a tendency to compromise privacy by reproducing memorized patterns of numeric digits from their training data. To systematically analyze this risk, we introduce a simple No-box Membership Inference Attack (MIA) called LevAtt that assumes adversarial access to only the generated synthetic data and targets the string sequences of numeric digits in synthetic observations. 
Using this approach, our attack exposes substantial privacy leakage across a wide range of models and datasets, and in some cases, is even a perfect membership classifier on state-of-the-art models. Our findings highlight a unique privacy vulnerability of LLM-based synthetic data generation and the need for effective defenses. To this end, we propose two methods, including a novel sampling strategy that strategically perturbs digits during generation. Our evaluation demonstrates that this approach can defeat these attacks with minimal loss of fidelity and utility of the synthetic data. - oai:arXiv.org:2512.08875v1 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Joshua Ward, Bochao Gu, Chi-Hua Wang, Guang Cheng - - - IPPO Learns the Game, Not the Team: A Study on Generalization in Heterogeneous Agent Teams - https://arxiv.org/abs/2512.08877 - arXiv:2512.08877v1 Announce Type: new -Abstract: Multi-Agent Reinforcement Learning (MARL) is commonly deployed in settings where agents are trained via self-play with homogeneous teammates, often using parameter sharing and a single policy architecture. This opens the question: to what extent do self-play PPO agents learn general coordination strategies grounded in the underlying game, compared to overfitting to their training partners' behaviors? This paper investigates the question using the Heterogeneous Multi-Agent Challenge (HeMAC) environment, which features distinct Observer and Drone agents with complementary capabilities. We introduce Rotating Policy Training (RPT), an approach that rotates heterogeneous teammate policies of different learning algorithms during training, to expose the agent to a broader range of partner strategies. When playing alongside a withheld teammate policy (DDQN), we find that RPT achieves similar performance to a standard self-play baseline, IPPO, where all agents were trained sharing a single PPO policy. 
This result indicates that in this heterogeneous multi-agent setting, the IPPO baseline generalizes to novel teammate algorithms despite not experiencing teammate diversity during training. This shows that a simple IPPO baseline may possess the level of generalization to novel teammates that a diverse training regimen was designed to achieve. - oai:arXiv.org:2512.08877v1 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Ryan LeRoy, Jack Kolb - - - DAO-GP Drift Aware Online Non-Linear Regression Gaussian-Process - https://arxiv.org/abs/2512.08879 - arXiv:2512.08879v1 Announce Type: new -Abstract: Real-world datasets often exhibit temporal dynamics characterized by evolving data distributions. Disregarding this phenomenon, commonly referred to as concept drift, can significantly diminish a model's predictive accuracy. Furthermore, the presence of hyperparameters in online models exacerbates this issue. These parameters are typically fixed and cannot be dynamically adjusted by the user in response to the evolving data distribution. Gaussian Process (GP) models offer powerful non-parametric regression capabilities with uncertainty quantification, making them ideal for modeling complex data relationships in an online setting. However, conventional online GP methods face several critical limitations, including a lack of drift-awareness, reliance on fixed hyperparameters, vulnerability to data snooping, absence of a principled decay mechanism, and memory inefficiencies. In response, we propose DAO-GP (Drift-Aware Online Gaussian Process), a novel, fully adaptive, hyperparameter-free, decayed, and sparse non-linear regression model. DAO-GP features a built-in drift detection and adaptation mechanism that dynamically adjusts model behavior based on the severity of drift. 
Extensive empirical evaluations confirm DAO-GP's robustness across stationary conditions, diverse drift types (abrupt, incremental, gradual), and varied data characteristics. Analyses demonstrate its dynamic adaptation, efficient in-memory and decay-based management, and evolving inducing points. Compared with state-of-the-art parametric and non-parametric models, DAO-GP consistently achieves superior or competitive performance, establishing it as a drift-resilient solution for online non-linear regression. - oai:arXiv.org:2512.08879v1 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Proc. IEEE International Conference on Big Data (BigData), 2025 - Mohammad Abu-Shaira, Ajita Rattani, Weishi Shi - - - SATGround: A Spatially-Aware Approach for Visual Grounding in Remote Sensing - https://arxiv.org/abs/2512.08881 - arXiv:2512.08881v1 Announce Type: new -Abstract: Vision-language models (VLMs) are emerging as powerful generalist tools for remote sensing, capable of integrating information across diverse tasks and enabling flexible, instruction-based interactions via a chat interface. In this work, we enhance VLM-based visual grounding in satellite imagery by proposing a novel structured localization mechanism. Our approach involves finetuning a pretrained VLM on a diverse set of instruction-following tasks, while interfacing a dedicated grounding module through specialized control tokens for localization. This method facilitates joint reasoning over both language and spatial information, significantly enhancing the model's ability to precisely localize objects in complex satellite scenes. We evaluate our framework on several remote sensing benchmarks, consistently improving the state-of-the-art, including a 24.8% relative improvement over previous methods on visual grounding. 
Our results highlight the benefits of integrating structured spatial reasoning into VLMs, paving the way for more reliable real-world satellite data analysis. - oai:arXiv.org:2512.08881v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Aysim Toker, Andreea-Maria Oncescu, Roy Miles, Ismail Elezi, Jiankang Deng - - - Decentralized Trust for Space AI: Blockchain-Based Federated Learning Across Multi-Vendor LEO Satellite Networks - https://arxiv.org/abs/2512.08882 - arXiv:2512.08882v1 Announce Type: new -Abstract: The rise of space AI is reshaping government and industry through applications such as disaster detection, border surveillance, and climate monitoring, powered by massive data from commercial and governmental low Earth orbit (LEO) satellites. Federated satellite learning (FSL) enables joint model training without sharing raw data, but suffers from slow convergence due to intermittent connectivity and introduces critical trust challenges--where biased or falsified updates can arise across satellite constellations, including those injected through cyberattacks on inter-satellite or satellite-ground communication links. We propose OrbitChain, a blockchain-backed framework that empowers trustworthy multi-vendor collaboration in LEO networks. OrbitChain (i) offloads consensus to high-altitude platforms (HAPs) with greater computational capacity, (ii) ensures transparent, auditable provenance of model updates from different orbits owned by different vendors, and (iii) prevents manipulated or incomplete contributions from affecting global FSL model aggregation. Extensive simulations show that OrbitChain reduces computational and communication overhead while improving privacy, security, and global model accuracy. Its permissioned proof-of-authority ledger finalizes over 1000 blocks with sub-second latency (0.16 s, 0.26 s, 0.35 s for 1-of-5, 3-of-5, and 5-of-5 quorums). 
Moreover, OrbitChain reduces convergence time by up to 30 hours on real satellite datasets compared to single-vendor, demonstrating its effectiveness for real-time, multi-vendor learning. Our code is available at https://github.com/wsu-cyber-security-lab-ai/OrbitChain.git - oai:arXiv.org:2512.08882v1 - cs.CR - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-sa/4.0/ - Mohamed Elmahallawy, Asma Jodeiri Akbarfam - - - AI Didn't Start the Fire: Examining the Stack Exchange Moderator and Contributor Strike - https://arxiv.org/abs/2512.08884 - arXiv:2512.08884v1 Announce Type: new -Abstract: Online communities and their host platforms are mutually dependent yet conflict-prone. When platform policies clash with community values, communities have resisted through strikes, blackouts, and even migration to other platforms. Through such collective actions, communities have sometimes won concessions but these have frequently proved temporary. Prior research has investigated strike events and migration chains, but the processes by which community-platform conflict unfolds remain obscure. How do community-platform relationships deteriorate? How do communities organize collective action? How do participants proceed in the aftermath? We investigate a conflict between the Stack Exchange platform and community that occurred in 2023 around an emergency arising from the release of large language models (LLMs). Based on a qualitative thematic analysis of 2,070 messages on Meta Stack Exchange and 14 interviews with community members, we surface how the 2023 conflict was preceded by a long-term deterioration in the community-platform relationship driven in particular by the platform's disregard for the community's highly-valued participatory role in governance. Moreover, the platform's policy response to LLMs aggravated the community's sense of crisis triggering the strike mobilization. 
We analyze how the mobilization was coordinated through a tiered leadership and communication structure, as well as how community members pivoted in the aftermath. Building on recent theoretical scholarship in social computing, we use Hirschman's exit, voice, and loyalty framework to theorize the challenges of community-platform relations evinced in our data. Finally, we recommend ways that platforms and communities can institute participatory governance to be durable and effective. - oai:arXiv.org:2512.08884v1 - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Yiwei Wu, Leah Ajmani, Nathan TeBlunthuis, Hanlin Li - - - Explainable Anomaly Detection for Industrial IoT Data Streams - https://arxiv.org/abs/2512.08885 - arXiv:2512.08885v1 Announce Type: new -Abstract: Industrial maintenance is being transformed by the Internet of Things and edge computing, generating continuous data streams that demand real-time, adaptive decision-making under limited computational resources. While data stream mining (DSM) addresses this challenge, most methods assume fully supervised settings, yet in practice, ground-truth labels are often delayed or unavailable. This paper presents a collaborative DSM framework that integrates unsupervised anomaly detection with interactive, human-in-the-loop learning to support maintenance decisions. We employ an online Isolation Forest and enhance interpretability using incremental Partial Dependence Plots and a feature importance score, derived from deviations of Individual Conditional Expectation curves from a fading average, enabling users to dynamically reassess feature relevance and adjust anomaly thresholds. We describe the real-time implementation and provide initial results for fault detection in a Jacquard loom unit. Ongoing work targets continuous monitoring to predict and explain imminent bearing failures. 
- oai:arXiv.org:2512.08885v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1145/3748522.3780009 - Ana Rita Paup\'erio, Diogo Risca, Afonso Louren\c{c}o, Goreti Marreiros, Ricardo Martins - - - Accelerated Rotation-Invariant Convolution for UAV Image Segmentation - https://arxiv.org/abs/2512.08888 - arXiv:2512.08888v1 Announce Type: new -Abstract: Rotation invariance is essential for precise, object-level segmentation in UAV aerial imagery, where targets can have arbitrary orientations and exhibit fine-scale details. Conventional segmentation architectures like U-Net rely on convolution operators that are not rotation-invariant, leading to degraded segmentation accuracy across varying viewpoints. Rotation invariance can be achieved by expanding the filter bank across multiple orientations; however, this will significantly increase computational cost and memory traffic. In this paper, we introduce a GPU-optimized rotation-invariant convolution framework that eliminates the traditional data-lowering (im2col) step required for matrix-multiplication-based convolution. By exploiting structured data sharing among symmetrically rotated filters, our method achieves multi-orientation convolution with greatly reduced memory traffic and computational redundancy. We further generalize the approach to accelerate convolution with arbitrary (non-symmetric) rotation angles. - Across extensive benchmarks, the proposed convolution achieves 20--55% faster training and 15--45% lower energy consumption than CUDNN, while maintaining accuracy comparable to state-of-the-art rotation-invariant methods. In the eight-orientation setting, our approach achieves up to 45% speedup and 41% energy savings on 256×256 inputs, and 32% speedup and 23% lower energy usage on 1024×1024 inputs. Integrated into a U-Net segmentation model, the framework yields up to 6% improvement in accuracy over the non-rotation-aware baseline. 
These results demonstrate that the proposed method provides an effective and highly efficient alternative to existing rotation-invariant CNN frameworks. - oai:arXiv.org:2512.08888v1 - cs.CV - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Manduhu Manduhu, Alexander Dow, Gerard Dooly, James Riordan - - - No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers - https://arxiv.org/abs/2512.08889 - arXiv:2512.08889v1 Announce Type: new -Abstract: Visual reasoning is challenging, requiring both precise object grounding and understanding complex spatial relationships. Existing methods fall into two camps: language-only chain-of-thought approaches, which demand large-scale (image, query, answer) supervision, and program-synthesis approaches which use pre-trained models and avoid training, but suffer from flawed logic and erroneous grounding. We propose an annotation-free training framework that improves both reasoning and grounding. Our framework uses AI-powered verifiers: an LLM verifier refines LLM reasoning via reinforcement learning, while a VLM verifier strengthens visual grounding through automated hard-negative mining, eliminating the need for ground truth labels. This design combines the strengths of modern AI systems: advanced language-only reasoning models for decomposing spatial queries into simpler subtasks, and strong vision specialist models improved via performant VLM critics. We evaluate our approach across diverse spatial reasoning tasks, and show that our method improves visual reasoning and surpasses open-source and proprietary models, while with our improved visual grounding model we further outperform recent text-only visual reasoning methods. 
Project webpage: https://glab-caltech.github.io/valor/ - oai:arXiv.org:2512.08889v1 - cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Damiano Marsili, Georgia Gkioxari - - - Toward Faithful Retrieval-Augmented Generation with Sparse Autoencoders - https://arxiv.org/abs/2512.08892 - arXiv:2512.08892v1 Announce Type: new -Abstract: Retrieval-Augmented Generation (RAG) improves the factuality of large language models (LLMs) by grounding outputs in retrieved evidence, but faithfulness failures, where generations contradict or extend beyond the provided sources, remain a critical challenge. Existing hallucination detection methods for RAG often rely either on large-scale detector training, which requires substantial annotated data, or on querying external LLM judges, which leads to high inference costs. Although some approaches attempt to leverage internal representations of LLMs for hallucination detection, their accuracy remains limited. Motivated by recent advances in mechanistic interpretability, we employ sparse autoencoders (SAEs) to disentangle internal activations, successfully identifying features that are specifically triggered during RAG hallucinations. Building on a systematic pipeline of information-based feature selection and additive feature modeling, we introduce RAGLens, a lightweight hallucination detector that accurately flags unfaithful RAG outputs using LLM internal representations. RAGLens not only achieves superior detection performance compared to existing methods, but also provides interpretable rationales for its decisions, enabling effective post-hoc mitigation of unfaithful RAG. Finally, we justify our design choices and reveal new insights into the distribution of hallucination-related signals within LLMs. The code is available at https://github.com/Teddy-XiongGZ/RAGLens. 
- oai:arXiv.org:2512.08892v1 - cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Guangzhi Xiong, Zhenghao He, Bohan Liu, Sanchit Sinha, Aidong Zhang - - - Revisiting the Scaling Properties of Downstream Metrics in Large Language Model Training - https://arxiv.org/abs/2512.08894 - arXiv:2512.08894v1 Announce Type: new -Abstract: While scaling laws for Large Language Models (LLMs) traditionally focus on proxy metrics like pretraining loss, predicting downstream task performance has been considered unreliable. This paper challenges that view by proposing a direct framework to model the scaling of benchmark performance from the training budget. We find that for a fixed token-to-parameter ratio, a simple power law can accurately describe the scaling behavior of log accuracy on multiple popular downstream tasks. Our results show that the direct approach extrapolates better than the previously proposed two-stage procedure, which is prone to compounding errors. Furthermore, we introduce functional forms that predict accuracy across token-to-parameter ratios and account for inference compute under repeated sampling. We validate our findings on models with up to 17B parameters trained on up to 350B tokens across two dataset mixtures. To support reproducibility and encourage future research, we release the complete set of pretraining losses and downstream evaluation results. - oai:arXiv.org:2512.08894v1 - cs.LG - cs.AI - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jakub Krajewski, Amitis Shidani, Dan Busbridge, Sam Wiseman, Jason Ramapuram - - - Unsupervised Learning of Density Estimates with Topological Optimization - https://arxiv.org/abs/2512.08895 - arXiv:2512.08895v1 Announce Type: new -Abstract: Kernel density estimation is a key component of a wide variety of algorithms in machine learning, Bayesian inference, stochastic dynamics and signal processing. 
However, the unsupervised density estimation technique requires tuning a crucial hyperparameter: the kernel bandwidth. The choice of bandwidth is critical as it controls the bias-variance trade-off by over- or under-smoothing the topological features. Topological data analysis provides methods to mathematically quantify topological characteristics, such as connected components, loops, voids et cetera, even in high dimensions where visualization of density estimates is impossible. In this paper, we propose an unsupervised learning approach using a topology-based loss function for the automated and unsupervised selection of the optimal bandwidth and benchmark it against classical techniques -- demonstrating its potential across different dimensions. - oai:arXiv.org:2512.08895v1 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Suina Tanweer, Firas A. Khasawneh - - - Open Polymer Challenge: Post-Competition Report - https://arxiv.org/abs/2512.08896 - arXiv:2512.08896v1 Announce Type: new -Abstract: Machine learning (ML) offers a powerful path toward discovering sustainable polymer materials, but progress has been limited by the lack of large, high-quality, and openly accessible polymer datasets. The Open Polymer Challenge (OPC) addresses this gap by releasing the first community-developed benchmark for polymer informatics, featuring a dataset with 10K polymers and 5 properties: thermal conductivity, radius of gyration, density, fractional free volume, and glass transition temperature. The challenge centers on multi-task polymer property prediction, a core step in virtual screening pipelines for materials discovery. Participants developed models under realistic constraints that include small data, label imbalance, and heterogeneous simulation sources, using techniques such as feature-based augmentation, transfer learning, self-supervised pretraining, and targeted ensemble strategies. 
The competition also revealed important lessons about data preparation, distribution shifts, and cross-group simulation consistency, informing best practices for future large-scale polymer datasets. The resulting models, analysis, and released data create a new foundation for molecular AI in polymer science and are expected to accelerate the development of sustainable and energy-efficient materials. Along with the competition, we release the test dataset at https://www.kaggle.com/datasets/alexliu99/neurips-open-polymer-prediction-2025-test-data. We also release the data generation pipeline at https://github.com/sobinalosious/ADEPT, which simulates more than 25 properties, including thermal conductivity, radius of gyration, and density. - oai:arXiv.org:2512.08896v1 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Gang Liu, Sobin Alosious, Subhamoy Mahajan, Eric Inae, Yihan Zhu, Yuhan Liu, Renzheng Zhang, Jiaxin Xu, Addison Howard, Ying Li, Tengfei Luo, Meng Jiang - - - UniLayDiff: A Unified Diffusion Transformer for Content-Aware Layout Generation - https://arxiv.org/abs/2512.08897 - arXiv:2512.08897v1 Announce Type: new -Abstract: Content-aware layout generation is a critical task in graphic design automation, focused on creating visually appealing arrangements of elements that seamlessly blend with a given background image. The variety of real-world applications makes it highly challenging to develop a single model capable of unifying the diverse range of input-constrained generation sub-tasks, such as those conditioned by element types, sizes, or their relationships. Current methods either address only a subset of these tasks or necessitate separate model parameters for different conditions, failing to offer a truly unified solution. 
In this paper, we propose UniLayDiff, a Unified Diffusion Transformer that, for the first time, addresses various content-aware layout generation tasks with a single, end-to-end trainable model. Specifically, we treat layout constraints as a distinct modality and employ a Multi-Modal Diffusion Transformer framework to capture the complex interplay between the background image, layout elements, and diverse constraints. Moreover, we integrate relation constraints by fine-tuning the model with LoRA after pretraining it on the other tasks. Such a scheme not only achieves unified conditional generation but also enhances overall layout quality. Extensive experiments demonstrate that UniLayDiff achieves state-of-the-art performance on tasks ranging from unconditional to various conditional generation settings and, to the best of our knowledge, is the first model to unify the full range of content-aware layout generation tasks. - oai:arXiv.org:2512.08897v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zeyang Liu, Le Wang, Sanping Zhou, Yuxuan Wu, Xiaolong Sun, Gang Hua, Haoxiang Li - - - Self-Evolving 3D Scene Generation from a Single Image - https://arxiv.org/abs/2512.08905 - arXiv:2512.08905v1 Announce Type: new -Abstract: Generating high-quality, textured 3D scenes from a single image remains a fundamental challenge in vision and graphics. Recent image-to-3D generators recover reasonable geometry from single views, but their object-centric training limits generalization to complex, large-scale scenes with faithful structure and texture. We present EvoScene, a self-evolving, training-free framework that progressively reconstructs complete 3D scenes from single images. The key idea is combining the complementary strengths of existing models: geometric reasoning from 3D generation models and visual knowledge from video generation models. 
Through three iterative stages--Spatial Prior Initialization, Visual-guided 3D Scene Mesh Generation, and Spatial-guided Novel View Generation--EvoScene alternates between 2D and 3D domains, gradually improving both structure and appearance. Experiments on diverse scenes demonstrate that EvoScene achieves superior geometric stability, view-consistent textures, and unseen-region completion compared to strong baselines, producing ready-to-use 3D meshes for practical applications. - oai:arXiv.org:2512.08905v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Kaizhi Zheng, Yue Fan, Jing Gu, Zishuo Xu, Xuehai He, Xin Eric Wang - - - Exploring the Garden of Forking Paths in Empirical Software Engineering Research: A Multiverse Analysis - https://arxiv.org/abs/2512.08910 - arXiv:2512.08910v1 Announce Type: new -Abstract: In empirical software engineering (SE) research, researchers have considerable freedom to decide how to process data, what operationalizations to use, and which statistical model to fit. Gelman and Loken refer to this freedom as leading to a "garden of forking paths". Although this freedom is often seen as an advantage, it also poses a threat to robustness and replicability: variations in analytical decisions, even when justifiable, can lead to divergent conclusions. - To better understand this risk, we conducted a so-called multiverse analysis on a published empirical SE paper. The paper we picked is a Mining Software Repositories study, as MSR studies commonly use non-trivial statistical models to analyze post-hoc, observational data. In the study, we identified nine pivotal analytical decisions-each with at least one equally defensible alternative and systematically reran all the 3,072 resulting analysis pipelines on the original dataset. 
Interestingly, only 6 of these universes (<0.2%) reproduced the published results; the overwhelming majority produced qualitatively different, and sometimes even opposite, findings. - This case study of a data analytical method commonly applied to empirical software engineering data reveals how methodological choices can exert a more profound influence on outcomes than is often acknowledged. We therefore advocate that SE researchers complement standard reporting with robustness checks across plausible analysis variants or, at least, explicitly justify each analytical decision. First, we propose a structured classification model to help classify and improve the justification of methodological choices. Second, we show how multiverse analysis is a practical tool in the methodological arsenal of SE researchers, one that can help produce more reliable, reproducible science. - oai:arXiv.org:2512.08910v1 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Nathan Cassee, Robert Feldt - - - LiDAS: Lighting-driven Dynamic Active Sensing for Nighttime Perception - https://arxiv.org/abs/2512.08912 - arXiv:2512.08912v1 Announce Type: new -Abstract: Nighttime environments pose significant challenges for camera-based perception, as existing methods passively rely on the scene lighting. We introduce Lighting-driven Dynamic Active Sensing (LiDAS), a closed-loop active illumination system that combines off-the-shelf visual perception models with high-definition headlights. Rather than uniformly brightening the scene, LiDAS dynamically predicts an optimal illumination field that maximizes downstream perception performance, i.e., decreasing light on empty areas to reallocate it to object regions. LiDAS enables zero-shot nighttime generalization of daytime-trained models through adaptive illumination control. 
Trained on synthetic data and deployed zero-shot in real-world closed-loop driving scenarios, LiDAS achieves +18.7% mAP50 and +5.0% mIoU over standard low-beam illumination at equal power. It maintains performance while reducing energy use by 40%. LiDAS complements domain-generalization methods, further strengthening robustness without retraining. By turning readily available headlights into active vision actuators, LiDAS offers a cost-effective solution for robust nighttime perception. - oai:arXiv.org:2512.08912v1 - cs.CV - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Simon de Moreau, Andrei Bursuc, Hafid El-Idrissi, Fabien Moutarde - - - Improved Pseudorandom Codes from Permuted Puzzles - https://arxiv.org/abs/2512.08918 - arXiv:2512.08918v1 Announce Type: new -Abstract: Watermarks are an essential tool for identifying AI-generated content. Recently, Christ and Gunn (CRYPTO '24) introduced pseudorandom error-correcting codes (PRCs), which are equivalent to watermarks with strong robustness and quality guarantees. A PRC is a pseudorandom encryption scheme whose decryption algorithm tolerates a high rate of errors. Pseudorandomness ensures quality preservation of the watermark, and error tolerance of decryption translates to the watermark's ability to withstand modification of the content. - In the short time since the introduction of PRCs, several works (NeurIPS '24, RANDOM '25, STOC '25) have proposed new constructions. Curiously, all of these constructions are vulnerable to quasipolynomial-time distinguishing attacks. Furthermore, all lack robustness to edits over a constant-sized alphabet, which is necessary for a meaningfully robust LLM watermark. Lastly, they lack robustness to adversaries who know the watermarking detection key. Until now, it was not clear whether any of these properties was achievable individually, let alone together. 
- We construct pseudorandom codes that achieve all of the above: plausible subexponential pseudorandomness security, robustness to worst-case edits over a binary alphabet, and robustness against even computationally unbounded adversaries that have the detection key. Pseudorandomness rests on a new assumption that we formalize, the permuted codes conjecture, which states that a distribution of permuted noisy codewords is pseudorandom. We show that this conjecture is implied by the permuted puzzles conjecture used previously to construct doubly efficient private information retrieval. To give further evidence, we show that the conjecture holds against a broad class of simple distinguishers, including read-once branching programs. - oai:arXiv.org:2512.08918v1 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Miranda Christ, Noah Golowich, Sam Gunn, Ankur Moitra, Daniel Wichs - - - OSMO: Open-Source Tactile Glove for Human-to-Robot Skill Transfer - https://arxiv.org/abs/2512.08920 - arXiv:2512.08920v1 Announce Type: new -Abstract: Human video demonstrations provide abundant training data for learning robot policies, but video alone cannot capture the rich contact signals critical for mastering manipulation. We introduce OSMO, an open-source wearable tactile glove designed for human-to-robot skill transfer. The glove features 12 three-axis tactile sensors across the fingertips and palm and is designed to be compatible with state-of-the-art hand-tracking methods for in-the-wild data collection. We demonstrate that a robot policy trained exclusively on human demonstrations collected with OSMO, without any real robot data, is capable of executing a challenging contact-rich manipulation task. 
By equipping both the human and the robot with the same glove, OSMO minimizes the visual and tactile embodiment gap, enabling the transfer of continuous shear and normal force feedback while avoiding the need for image inpainting or other vision-based force inference. On a real-world wiping task requiring sustained contact pressure, our tactile-aware policy achieves a 72% success rate, outperforming vision-only baselines by eliminating contact-related failure modes. We release complete hardware designs, firmware, and assembly instructions to support community adoption. - oai:arXiv.org:2512.08920v1 - cs.RO - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Jessica Yin, Haozhi Qi, Youngsun Wi, Sayantan Kundu, Mike Lambeta, William Yang, Changhao Wang, Tingfan Wu, Jitendra Malik, Tess Hellebrekers - - - Unified Diffusion Transformer for High-fidelity Text-Aware Image Restoration - https://arxiv.org/abs/2512.08922 - arXiv:2512.08922v1 Announce Type: new -Abstract: Text-Aware Image Restoration (TAIR) aims to recover high-quality images from low-quality inputs containing degraded textual content. While diffusion models provide strong generative priors for general image restoration, they often produce text hallucinations in text-centric tasks due to the absence of explicit linguistic knowledge. To address this, we propose UniT, a unified text restoration framework that integrates a Diffusion Transformer (DiT), a Vision-Language Model (VLM), and a Text Spotting Module (TSM) in an iterative fashion for high-fidelity text restoration. In UniT, the VLM extracts textual content from degraded images to provide explicit textual guidance. Simultaneously, the TSM, trained on diffusion features, generates intermediate OCR predictions at each denoising step, enabling the VLM to iteratively refine its guidance during the denoising process. 
Finally, the DiT backbone, leveraging its strong representational power, exploits these cues to recover fine-grained textual content while effectively suppressing text hallucinations. Experiments on the SA-Text and Real-Text benchmarks demonstrate that UniT faithfully reconstructs degraded text, substantially reduces hallucinations, and achieves state-of-the-art end-to-end F1-score performance on the TAIR task. - oai:arXiv.org:2512.08922v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Jin Hyeon Kim, Paul Hyunbin Cho, Claire Kim, Jaewon Min, Jaeeun Lee, Jihye Park, Yeji Choi, Seungryong Kim - - - Same Content, Different Answers: Cross-Modal Inconsistency in MLLMs - https://arxiv.org/abs/2512.08923 - arXiv:2512.08923v1 Announce Type: new -Abstract: We introduce two new benchmarks, REST and REST+ (Render-Equivalence Stress Tests), to enable systematic evaluation of cross-modal inconsistency in multimodal large language models (MLLMs). MLLMs are trained to represent vision and language in the same embedding space, yet they cannot perform the same tasks in both modalities. Our benchmarks contain samples with the same semantic information in three modalities (image, text, mixed), and we show that state-of-the-art MLLMs cannot consistently reason over these different modalities. We evaluate 15 MLLMs and find that the degree of modality inconsistency varies substantially, even when accounting for problems with text recognition (OCR). Neither rendering text as an image nor rendering an image as text resolves the inconsistency. Even if OCR is correct, we find that visual characteristics (text colour and resolution, but not font) and the number of vision tokens have an impact on model performance. Finally, we find that our consistency score correlates with the modality gap between text and images, highlighting a mechanistic interpretation of cross-modal inconsistency in MLLMs. 
- oai:arXiv.org:2512.08923v1 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Angela van Sprang, Laurens Samson, Ana Lucic, Erman Acar, Sennay Ghebreab, Yuki M. Asano - - - Efficiently Reconstructing Dynamic Scenes One D4RT at a Time - https://arxiv.org/abs/2512.08924 - arXiv:2512.08924v1 Announce Type: new -Abstract: Understanding and reconstructing the complex geometry and motion of dynamic scenes from video remains a formidable challenge in computer vision. This paper introduces D4RT, a simple yet powerful feedforward model designed to efficiently solve this task. D4RT utilizes a unified transformer architecture to jointly infer depth, spatio-temporal correspondence, and full camera parameters from a single video. Its core innovation is a novel querying mechanism that sidesteps the heavy computation of dense, per-frame decoding and the complexity of managing multiple, task-specific decoders. Our decoding interface allows the model to independently and flexibly probe the 3D position of any point in space and time. The result is a lightweight and highly scalable method that enables remarkably efficient training and inference. We demonstrate that our approach sets a new state of the art, outperforming previous methods across a wide spectrum of 4D reconstruction tasks. We refer to the project webpage for animated results: https://d4rt-paper.github.io/. 
- oai:arXiv.org:2512.08924v1 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chuhan Zhang, Guillaume Le Moing, Skanda Koppula, Ignacio Rocco, Liliane Momeni, Junyu Xie, Shuyang Sun, Rahul Sukthankar, Jo\"elle K Barral, Raia Hadsell, Zoubin Ghahramani, Andrew Zisserman, Junlin Zhang, Mehdi SM Sajjadi - - - Toward Practical Forecasts of Public Sentiments via Convexification for Mean Field Games: Evidence from Real World COVID-19 Discussion Data - https://arxiv.org/abs/2512.08925 - arXiv:2512.08925v1 Announce Type: new -Abstract: We apply a convexification-based numerical method to forecast public sentiment dynamics using Mean Field Games (MFGs). The theoretical foundation for the convexification approach, established in our prior work, guarantees global convergence to the unique solution to the MFG system. The present work demonstrates the practical potential of this framework using real-world sentiment data extracted from social media public discussion during the COVID-19 pandemic. The results show that the MFG model with appropriate parameters and convexification yields sentiment density predictions that align closely with observed data and satisfy the governing equations. While current parameter selection relies on manual calibration, our findings establish the first proof-of-concept evidence that MFG models can capture complex temporal patterns in public sentiment, laying the groundwork for future work on systematic parameter identification methods, i.e. solutions of coefficient inverse problems for the MFG system. - oai:arXiv.org:2512.08925v1 - math.NA - cs.NA - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shi Chen, Michael V. 
Klibanov, Kevin McGoff, Trung Truong, Wangjiaxuan Xin, Shuhua Yin - - - Selfi: Self Improving Reconstruction Engine via 3D Geometric Feature Alignment - https://arxiv.org/abs/2512.08930 - arXiv:2512.08930v1 Announce Type: new -Abstract: Novel View Synthesis (NVS) has traditionally relied on models with explicit 3D inductive biases combined with known camera parameters from Structure-from-Motion (SfM) beforehand. Recent vision foundation models like VGGT take an orthogonal approach -- 3D knowledge is gained implicitly through training data and loss objectives, enabling feed-forward prediction of both camera parameters and 3D representations directly from a set of uncalibrated images. While flexible, VGGT features lack explicit multi-view geometric consistency, and we find that improving such 3D feature consistency benefits both NVS and pose estimation tasks. We introduce Selfi, a self-improving 3D reconstruction pipeline via feature alignment, transforming a VGGT backbone into a high-fidelity 3D reconstruction engine by leveraging its own outputs as pseudo-ground-truth. Specifically, we train a lightweight feature adapter using a reprojection-based consistency loss, which distills VGGT outputs into a new geometrically-aligned feature space that captures spatial proximity in 3D. This enables state-of-the-art performance in both NVS and camera pose estimation, demonstrating that feature alignment is a highly beneficial step for downstream 3D reasoning. 
- oai:arXiv.org:2512.08930v1 - cs.CV - cs.GR - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Youming Deng, Songyou Peng, Junyi Zhang, Kathryn Heal, Tiancheng Sun, John Flynn, Steve Marschner, Lucy Chai - - - Astra: General Interactive World Model with Autoregressive Denoising - https://arxiv.org/abs/2512.08931 - arXiv:2512.08931v1 Announce Type: new -Abstract: Recent advances in diffusion transformers have empowered video generation models to generate high-quality video clips from texts or images. However, world models with the ability to predict long-horizon futures from past observations and actions remain underexplored, especially for general-purpose scenarios and various forms of actions. To bridge this gap, we introduce Astra, an interactive general world model that generates real-world futures for diverse scenarios (e.g., autonomous driving, robot grasping) with precise action interactions (e.g., camera motion, robot action). We propose an autoregressive denoising architecture and use temporal causal attention to aggregate past observations and support streaming outputs. We use a noise-augmented history memory to avoid over-reliance on past frames to balance responsiveness with temporal coherence. For precise action control, we introduce an action-aware adapter that directly injects action signals into the denoising process. We further develop a mixture of action experts that dynamically route heterogeneous action modalities, enhancing versatility across diverse real-world tasks such as exploration, manipulation, and camera control. Astra achieves interactive, consistent, and general long-term video prediction and supports various forms of interactions. Experiments across multiple datasets demonstrate the improvements of Astra in fidelity, long-range prediction, and action alignment over existing state-of-the-art world models. 
- oai:arXiv.org:2512.08931v1 - cs.CV - cs.AI - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yixuan Zhu, Jiaqi Feng, Wenzhao Zheng, Yuan Gao, Xin Tao, Pengfei Wan, Jie Zhou, Jiwen Lu - - - MELLA: Bridging Linguistic Capability and Cultural Groundedness for Low-Resource Language MLLMs - https://arxiv.org/abs/2508.05502 - arXiv:2508.05502v1 Announce Type: cross -Abstract: Multimodal Large Language Models (MLLMs) have shown remarkable performance in high-resource languages. However, their effectiveness diminishes significantly in the contexts of low-resource languages. Current multilingual enhancement methods are often limited to text modality or rely solely on machine translation. While such approaches help models acquire basic linguistic capabilities and produce "thin descriptions", they neglect the importance of multimodal informativeness and cultural groundedness, both of which are crucial for serving low-resource language users effectively. To bridge this gap, in this study, we identify two significant objectives for a truly effective MLLM in low-resource language settings, namely 1) linguistic capability and 2) cultural groundedness, placing special emphasis on cultural awareness. To achieve these dual objectives, we propose a dual-source strategy that guides the collection of data tailored to each goal, sourcing native web alt-text for culture and MLLM-generated captions for linguistics. As a concrete implementation, we introduce MELLA, a multimodal, multilingual dataset. Experiment results show that after fine-tuning on MELLA, there is a general performance improvement for the eight languages on various MLLM backbones, with models producing "thick descriptions". We verify that the performance gains are from both cultural knowledge enhancement and linguistic capability enhancement. Our dataset can be found at https://opendatalab.com/applyMultilingualCorpus. 
- oai:arXiv.org:2508.05502v1 - cs.CV - cs.AI - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Yufei Gao, Jiaying Fei, Nuo Chen, Ruirui Chen, Guohang Yan, Yunshi Lan, Botian Shi - - - Formally and Empirically Verified Methodologies for Scalable Hierarchical Full-Stack Systems - https://arxiv.org/abs/2510.00002 - arXiv:2510.00002v2 Announce Type: cross -Abstract: This paper introduces Primary Breadth-First Development (PBFD) and Primary Depth-First Development (PDFD), formally and empirically verified methodologies for scalable, industrial-grade full-stack software engineering. Both approaches enforce structural and behavioral correctness through graph-theoretic modeling, bridging formal methods and real-world practice. PBFD and PDFD model software development as layered directed graphs with unified state machines, verified using Communicating Sequential Processes (CSP) and Linear Temporal Logic (LTL). This guarantees bounded-refinement termination, deadlock freedom, and structural completeness. To manage hierarchical data at scale, we present Three-Level Encapsulation (TLE), a novel bitmask-based encoding scheme. TLE operations are verified via CSP failures-divergences refinement, ensuring constant-time updates and compact storage that underpin PBFD's robust performance. PBFD demonstrates exceptional industrial viability through eight years of enterprise deployment with zero critical failures, achieving approximately 20x faster development than Salesforce OmniScript, 7-8x faster query performance, and 11.7x storage reduction compared to conventional relational models. These results are established through longitudinal observational studies, quasi-experimental runtime comparisons, and controlled schema-level experiments. Open-source Minimum Viable Product implementations validate key behavioral properties, including bounded refinement and constant-time bitmask operations, under reproducible conditions. 
All implementations, formal specifications, and non-proprietary datasets are publicly available. - oai:arXiv.org:2510.00002v2 - cs.SE - cs.DB - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Dong Liu - - - Fast and Robust Diffusion Posterior Sampling for MR Image Reconstruction Using the Preconditioned Unadjusted Langevin Algorithm - https://arxiv.org/abs/2512.05791 - arXiv:2512.05791v1 Announce Type: cross -Abstract: Purpose: The Unadjusted Langevin Algorithm (ULA) in combination with diffusion models can generate high quality MRI reconstructions with uncertainty estimation from highly undersampled k-space data. However, sampling methods such as diffusion posterior sampling or likelihood annealing suffer from long reconstruction times and the need for parameter tuning. The purpose of this work is to develop a robust sampling algorithm with fast convergence. - Theory and Methods: In the reverse diffusion process used for sampling the posterior, the exact likelihood is multiplied with the diffused prior at all noise scales. To overcome the issue of slow convergence, preconditioning is used. The method is trained on fastMRI data and tested on retrospectively undersampled brain data of a healthy volunteer. - Results: For posterior sampling in Cartesian and non-Cartesian accelerated MRI the new approach outperforms annealed sampling in terms of reconstruction speed and sample quality. - Conclusion: The proposed exact likelihood with preconditioning enables rapid and reliable posterior sampling across various MRI reconstruction tasks without the need for parameter tuning. - oai:arXiv.org:2512.05791v1 - physics.med-ph - cs.CV - cs.LG - math.PR - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Moritz Blumenthal, Tina Holliber, Jonathan I. 
Tamir, Martin Uecker - - - Cascaded Tightly-Coupled Observer Design for Single-Range-Aided Inertial Navigation - https://arxiv.org/abs/2512.06198 - arXiv:2512.06198v1 Announce Type: cross -Abstract: This work introduces a single-range-aided navigation observer that reconstructs the full state of a rigid body using only an Inertial Measurement Unit (IMU), a body-frame vector measurement (e.g., magnetometer), and a distance measurement from a fixed anchor point. The design first formulates an extended linear time-varying (LTV) system to estimate body-frame position, body-frame velocity, and the gravity direction. The recovered gravity direction, combined with the body-frame vector measurement, is then used to reconstruct the full orientation on $\mathrm{SO}(3)$, resulting in a cascaded observer architecture. Almost Global Asymptotic Stability (AGAS) of the cascaded design is established under a uniform observability condition, ensuring robustness to sensor noise and trajectory variations. Simulation studies on three-dimensional trajectories demonstrate accurate estimation of position, velocity, and orientation, highlighting single-range aiding as a lightweight and effective modality for autonomous navigation. - oai:arXiv.org:2512.06198v1 - cs.RO - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Oussama Sifour, Soulaimane Berkane, Abdelhamid Tayebi - - - Artificial Intelligence and Nuclear Weapons Proliferation: The Technological Arms Race for (In)visibility - https://arxiv.org/abs/2512.07487 - arXiv:2512.07487v1 Announce Type: cross -Abstract: A robust nonproliferation regime has contained the spread of nuclear weapons to just nine states. Yet, emerging and disruptive technologies are reshaping the landscape of nuclear risks, presenting a critical juncture for decision makers. 
This article lays out the contours of an overlooked but intensifying technological arms race for nuclear (in)visibility, driven by the interplay between proliferation-enabling technologies (PETs) and detection-enhancing technologies (DETs). We argue that the strategic pattern of proliferation will be increasingly shaped by the innovation pace in these domains. Artificial intelligence (AI) introduces unprecedented complexity to this equation, as its rapid scaling and knowledge substitution capabilities accelerate PET development and challenge traditional monitoring and verification methods. To analyze this dynamic, we develop a formal model centered on a Relative Advantage Index (RAI), quantifying the shifting balance between PETs and DETs. Our model explores how asymmetric technological advancement, particularly logistic AI-driven PET growth versus stepwise DET improvements, expands the band of uncertainty surrounding proliferation detectability. Through replicable scenario-based simulations, we evaluate the impact of varying PET growth rates and DET investment strategies on cumulative nuclear breakout risk. We identify a strategic fork ahead, where detection may no longer suffice without broader PET governance. Governments and international organizations should accordingly invest in policies and tools agile enough to keep pace with tomorrow's technology. - oai:arXiv.org:2512.07487v1 - cs.CY - cs.AI - cs.ET - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - 10.1111/risa.70105 - Risk Analysis 45(11): 3839-3859 (2025) - David M. Allison, Stephen Herzog - - - Automating High Energy Physics Data Analysis with LLM-Powered Agents - https://arxiv.org/abs/2512.07785 - arXiv:2512.07785v1 Announce Type: cross -Abstract: We present a proof-of-principle study demonstrating the use of large language model (LLM) agents to automate a representative high energy physics (HEP) analysis. 
Using the Higgs boson diphoton cross-section measurement as a case study with ATLAS Open Data, we design a hybrid system that combines an LLM-based supervisor-coder agent with the Snakemake workflow manager. In this architecture, the workflow manager enforces reproducibility and determinism, while the agent autonomously generates, executes, and iteratively corrects analysis code in response to user instructions. We define quantitative evaluation metrics including success rate, error distribution, costs per specific task, and average number of API calls, to assess agent performance across multi-stage workflows. To characterize variability across architectures, we benchmark a representative selection of state-of-the-art LLMs spanning the Gemini and GPT-5 series, the Claude family, and leading open-weight models. While the workflow manager ensures deterministic execution of all analysis steps, the final outputs still show stochastic variation. Although we set the temperature to zero, other sampling parameters (e.g., top-p, top-k) remained at their defaults, and some reasoning-oriented models internally adjust these settings. Consequently, the models do not produce fully deterministic results. This study establishes the first LLM-agent-driven automated data-analysis framework in HEP, enabling systematic benchmarking of model capabilities, stability, and limitations in real-world scientific computing environments. The baseline code used in this work is available at https://huggingface.co/HWresearch/LLM4HEP. This work was accepted as a poster at the Machine Learning and the Physical Sciences (ML4PS) workshop at NeurIPS 2025. The initial submission was made on August 30, 2025. 
- oai:arXiv.org:2512.07785v1 - physics.data-an - cs.AI - cs.LG - hep-ex - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Eli Gendreau-Distler, Joshua Ho, Dongwon Kim, Luc Tomas Le Pottier, Haichen Wang, Chengxi Yang - - - Signal and Noise Classification in Bio-Signals via unsupervised Machine Learning - https://arxiv.org/abs/2512.07851 - arXiv:2512.07851v1 Announce Type: cross -Abstract: Real-world biosignal data is frequently corrupted by various types of noise, such as motion artifacts and baseline wander. Although digital signal processing techniques exist to process such signals, heavily degraded signals cannot be recovered. In this study, we aim to classify two things: first, a binary classification of noisy and clean biosignals, and next, to categorize various kinds of noise such as motion artifacts, sensor failure, etc. We implemented K-means clustering, and our results indicate that the algorithm can most reliably group clean segments from noisy ones, with particularly strong performance in identifying clean data compared to various categories of noise. This approach enables the selection of only high-quality bio-signal segments and provides accurate results for feature engineering that may enhance the precision of machine learning models trained on biosignals. - oai:arXiv.org:2512.07851v1 - eess.SP - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Sansrit Paudel - - - Integrating LSTM Networks with Neural Levy Processes for Financial Forecasting - https://arxiv.org/abs/2512.07860 - arXiv:2512.07860v1 Announce Type: cross -Abstract: This paper investigates an optimal integration of deep learning with financial models for robust asset price forecasting. Specifically, we developed a hybrid framework combining a Long Short-Term Memory (LSTM) network with the Merton-L\'evy jump-diffusion model. 
To optimise this framework, we employed the Grey Wolf Optimizer (GWO) for the LSTM hyperparameter tuning, and we explored three calibration methods for the Merton-Levy model parameters: Artificial Neural Networks (ANNs), the Marine Predators Algorithm (MPA), and the PyTorch-based TorchSDE library. To evaluate the predictive performance of our hybrid model, we compared it against several benchmark models, including a standard LSTM and an LSTM combined with the Fractional Heston model. This evaluation used three real-world financial datasets: Brent oil prices, the STOXX 600 index, and the IT40 index. Performance was assessed using standard metrics, including Mean Squared Error (MSE), Mean Absolute Error (MAE), Mean Squared Percentage Error (MSPE), and the coefficient of determination (R2). Our experimental results demonstrate that the hybrid model, combining a GWO-optimized LSTM network with the Levy-Merton Jump-Diffusion model calibrated using an ANN, outperformed the base LSTM model and all other models developed in this study. - oai:arXiv.org:2512.07860v1 - q-fin.ST - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Mohammed Alruqimi, Luca Di Persio - - - LLM-Generated Counterfactual Stress Scenarios for Portfolio Risk Simulation via Hybrid Prompt-RAG Pipeline - https://arxiv.org/abs/2512.07867 - arXiv:2512.07867v1 Announce Type: cross -Abstract: We develop a transparent and fully auditable LLM-based pipeline for macro-financial stress testing, combining structured prompting with optional retrieval of country fundamentals and news. The system generates machine-readable macroeconomic scenarios for the G7, which cover GDP growth, inflation, and policy rates, and are translated into portfolio losses through a factor-based mapping that enables Value-at-Risk and Expected Shortfall assessment relative to classical econometric baselines. 
Across models, countries, and retrieval settings, the LLMs produce coherent and country-specific stress narratives, yielding stable tail-risk amplification with limited sensitivity to retrieval choices. Comprehensive plausibility checks, scenario diagnostics, and ANOVA-based variance decomposition show that risk variation is driven primarily by portfolio composition and prompt design rather than by the retrieval mechanism. The pipeline incorporates snapshotting, deterministic modes, and hash-verified artifacts to ensure reproducibility and auditability. Overall, the results demonstrate that LLM-generated macro scenarios, when paired with transparent structure and rigorous validation, can provide a scalable and interpretable complement to traditional stress-testing frameworks. - oai:arXiv.org:2512.07867v1 - q-fin.RM - cs.AI - econ.EM - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Masoud Soleimani - - - Manifolds and Modules: How Function Develops in a Neural Foundation Model - https://arxiv.org/abs/2512.07869 - arXiv:2512.07869v1 Announce Type: cross -Abstract: Foundation models have shown remarkable success in fitting biological visual systems; however, their black-box nature inherently limits their utility for understanding brain function. Here, we peek inside a SOTA foundation model of neural activity (Wang et al., 2025) as a physiologist might, characterizing each 'neuron' based on its temporal response properties to parametric stimuli. We analyze how different stimuli are represented in neural activity space by building decoding manifolds, and we analyze how different neurons are represented in stimulus-response space by building neural encoding manifolds. We find that the different processing stages of the model (i.e., the feedforward encoder, recurrent, and readout modules) each exhibit qualitatively different representational structures in these manifolds. 
The recurrent module shows a jump in capabilities over the encoder module by 'pushing apart' the representations of different temporal stimulus patterns; while the readout module achieves biological fidelity by using numerous specialized feature maps rather than biologically plausible mechanisms. Overall, we present this work as a study of the inner workings of a prominent neural foundation model, gaining insights into the biological relevance of its internals through the novel analysis of its neurons' joint temporal response patterns. - oai:arXiv.org:2512.07869v1 - q-bio.NC - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Johannes Bertram, Luciano Dyballa, T. Anderson Keller, Savik Kinger, Steven W. Zucker - - - Quantum Circuit Reasoning Models: A Variational Framework for Differentiable Logical Inference - https://arxiv.org/abs/2512.07871 - arXiv:2512.07871v1 Announce Type: cross -Abstract: This report introduces a novel class of reasoning architectures, termed Quantum Circuit Reasoning Models (QCRM), which extend the concept of Variational Quantum Circuits (VQC) from energy minimization and classification tasks to structured logical inference and reasoning. We posit that fundamental quantum mechanical operations, superposition, entanglement, interference, and measurement, naturally map to essential reasoning primitives such as hypothesis branching, constraint propagation, consistency enforcement, and decision making. The resulting framework combines quantum-inspired computation with differentiable optimization, enabling reasoning to emerge as a process of amplitude evolution and interference-driven selection of self-consistent states. We develop the mathematical foundation of QCRM, define its parameterized circuit architecture, and show how logical rules can be encoded as unitary transformations over proposition-qubit states. 
We further formalize a training objective grounded in classical gradient descent over circuit parameters and discuss simulation-based implementations on classical hardware. Finally, we propose the Quantum Reasoning Layer (QRL) as a differentiable hybrid component for composable reasoning models applicable to scientific, biomedical, and chemical inference domains. - oai:arXiv.org:2512.07871v1 - quant-ph - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - Andrew Kiruluta + Pedro M. Gordaliza, Nataliia Molchanova, Jaume Banus, Thomas Sanchez, Meritxell Bach Cuadra - Referenceless Proton Resonance Frequency Thermometry Using Deep Learning with Self-Attention - https://arxiv.org/abs/2512.07882 - arXiv:2512.07882v1 Announce Type: cross -Abstract: Background: Accurate proton resonance frequency (PRF) MR thermometry is essential for monitoring temperature rise during thermal ablation with high intensity focused ultrasound (FUS). Conventional referenceless methods such as complex field estimation (CFE) and phase finite difference (PFD) tend to exhibit errors when susceptibility-induced phase discontinuities occur at tissue interfaces. - oai:arXiv.org:2512.07882v1 - physics.med-ph - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Procurement without Priors: A Simple Mechanism and its Notable Performance + https://arxiv.org/abs/2512.09129 + arXiv:2512.09129v1 Announce Type: cross +Abstract: How should a buyer design procurement mechanisms when suppliers' costs are unknown, and the buyer does not have a prior belief? We demonstrate that simple mechanisms - that share a constant fraction of the buyer utility with the seller - allow the buyer to realize a guaranteed positive fraction of the efficient social surplus across all possible costs. 
Moreover, a judicious choice of the share based on the known demand maximizes the surplus ratio guarantee that can be attained across all possible (arbitrarily complex and nonlinear) mechanisms and cost functions. Similar results hold in related nonlinear pricing and optimal regulation problems. + oai:arXiv.org:2512.09129v1 + econ.TH + cs.GT + Thu, 11 Dec 2025 00:00:00 -0500 cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yueran Zhao, Chang-Sheng Mei, Nathan J. McDannold, Shenyan Zong, Guofeng Shen - - - On the discrete to continuous condensing aggregation equation: A weak convergence approach - https://arxiv.org/abs/2512.07883 - arXiv:2512.07883v1 Announce Type: cross -Abstract: In this article, we study the passage of limits from discrete to continuous condensing aggregation equation which comprises of Oort-Hulst-Safronov (OHS) equation together with inverse aggregation process. We establish the relation between discrete and continuous condensing aggregation equations in its most generalized form, where kinetic-kernels with respect to OHS and inverse aggregation equations are not always equal. Convergence criterion is proved under suitable a priori estimates by approximating the continuous equation through a sequence of discrete equations, which subsequently converges towards the solution of the continuous equation by weak compactness principles. Existence of solution to the discrete model and uniform bounds on different order moments over finite time under particular conditions on kinetic-kernels are investigated. We analyze long-time dynamics and blowup of the solution leading to mass-loss or gelation for specific kernels. Three numerical experiments show the accuracy and convergence of approximated solutions to the exact solution of the continuous equation when $\varepsilon$ approaches zero. 
- oai:arXiv.org:2512.07883v1 - math.AP - cs.NA - math.FA - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Anupama Ghorai, Jitraj Saha - - - Functional Random Forest with Adaptive Cost-Sensitive Splitting for Imbalanced Functional Data Classification - https://arxiv.org/abs/2512.07888 - arXiv:2512.07888v1 Announce Type: cross -Abstract: Classification of functional data where observations are curves or trajectories poses unique challenges, particularly under severe class imbalance. Traditional Random Forest algorithms, while robust for tabular data, often fail to capture the intrinsic structure of functional observations and struggle with minority class detection. This paper introduces Functional Random Forest with Adaptive Cost-Sensitive Splitting (FRF-ACS), a novel ensemble framework designed for imbalanced functional data classification. The proposed method leverages basis expansions and Functional Principal Component Analysis (FPCA) to represent curves efficiently, enabling trees to operate on low dimensional functional features. To address imbalance, we incorporate a dynamic cost sensitive splitting criterion that adjusts class weights locally at each node, combined with a hybrid sampling strategy integrating functional SMOTE and weighted bootstrapping. Additionally, curve specific similarity metrics replace traditional Euclidean measures to preserve functional characteristics during leaf assignment. Extensive experiments on synthetic and real world datasets including biomedical signals and sensor trajectories demonstrate that FRF-ACS significantly improves minority class recall and overall predictive performance compared to existing functional classifiers and imbalance handling techniques. This work provides a scalable, interpretable solution for high dimensional functional data analysis in domains where minority class detection is critical. 
- oai:arXiv.org:2512.07888v1 - stat.ML - cs.AI - cs.LG - stat.AP - stat.CO - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Fahad Mostafa, Hafiz Khan - - - Harmonizing Community Science Datasets to Model Highly Pathogenic Avian Influenza (HPAI) in Birds in the Subantarctic - https://arxiv.org/abs/2512.07907 - arXiv:2512.07907v1 Announce Type: cross -Abstract: Community science observational datasets are useful in epidemiology and ecology for modeling species distributions, but the heterogeneous nature of the data presents significant challenges for standardization, data quality assurance and control, and workflow management. In this paper, we present a data workflow for cleaning and harmonizing multiple community science datasets, which we implement in a case study using eBird, iNaturalist, GBIF, and other datasets to model the impact of highly pathogenic avian influenza in populations of birds in the subantarctic. We predict population sizes for several species where the demographics are not known, and we present novel estimates for potential mortality rates from HPAI for those species, based on a novel aggregated dataset of mortality rates in the subantarctic. 
- oai:arXiv.org:2512.07907v1 - q-bio.PE - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Proceedings of Pacific Rim International Conference on Artificial Intelligence 2025 (PRICAI 2025): Artificial Intelligence for Earth and Environmental Science 2025 (AIEES 2025) Workshop, 17-21 Nov 2025, Wellington, New Zealand - Richard Littauer, Kris Bubendorfer - - - Conformal Defects in Neural Network Field Theories - https://arxiv.org/abs/2512.07946 - arXiv:2512.07946v1 Announce Type: cross -Abstract: Neural Network Field Theories (NN-FTs) represent a novel construction of arbitrary field theories, including those of conformal fields, through the specification of the network architecture and prior distribution for the network parameters. In this work, we present a formalism for the construction of conformally invariant defects in these NN-FTs. We demonstrate this new formalism in two toy models of NN scalar field theories. We develop an NN interpretation of an expansion akin to the defect OPE in two-point correlation functions in these models. - oai:arXiv.org:2512.07946v1 - hep-th - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Pietro Capuozzo, Brandon Robinson, Benjamin Suzzoni - - - ALS Storage Ring RF Control System Upgrade Plan and Status - https://arxiv.org/abs/2512.07996 - arXiv:2512.07996v1 Announce Type: cross -Abstract: The Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory, a third-generation synchrotron light source operational since 1992, is undergoing a comprehensive upgrade of its storage ring RF control system. The legacy Horner PLC controllers and remote I/O modules, now at end-of-life, are being replaced with an Allen-Bradley PLC platform to improve maintainability, reliability, and long-term support. This paper presents the planning, design, and current status of the upgrade project. 
- oai:arXiv.org:2512.07996v1 - physics.acc-ph - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - 10.18429/jacow-icalepcs2025-tupd030 - ICALEPCS 2025 - Najm Us Saqib, Angel Jurado, Esteban Andrade, Qiang Du, Jeong Han Lee, Miroslaw Dach, Benjamin Flugstad + Dirk Bergemann, Tibor Heumann, Stephen Morris - Provable Diffusion Posterior Sampling for Bayesian Inversion - https://arxiv.org/abs/2512.08022 - arXiv:2512.08022v1 Announce Type: cross -Abstract: This paper proposes a novel diffusion-based posterior sampling method within a plug-and-play (PnP) framework. Our approach constructs a probability transport from an easy-to-sample terminal distribution to the target posterior, using a warm-start strategy to initialize the particles. To approximate the posterior score, we develop a Monte Carlo estimator in which particles are generated using Langevin dynamics, avoiding the heuristic approximations commonly used in prior work. The score governing the Langevin dynamics is learned from data, enabling the model to capture rich structural features of the underlying prior distribution. On the theoretical side, we provide non-asymptotic error bounds, showing that the method converges even for complex, multi-modal target posterior distributions. These bounds explicitly quantify the errors arising from posterior score estimation, the warm-start initialization, and the posterior sampling procedure. Our analysis further clarifies how the prior score-matching error and the condition number of the Bayesian inverse problem influence overall performance. Finally, we present numerical experiments demonstrating the effectiveness of the proposed method across a range of inverse problems. 
- oai:arXiv.org:2512.08022v1 - stat.ML + Understanding temperature tuning in energy-based models + https://arxiv.org/abs/2512.09152 + arXiv:2512.09152v1 Announce Type: cross +Abstract: Generative models of complex systems often require post-hoc parameter adjustments to produce useful outputs. For example, energy-based models for protein design are sampled at an artificially low ''temperature'' to generate novel, functional sequences. This temperature tuning is a common yet poorly understood heuristic used across machine learning contexts to control the trade-off between generative fidelity and diversity. Here, we develop an interpretable, physically motivated framework to explain this phenomenon. We demonstrate that in systems with a large ''energy gap'' - separating a small fraction of meaningful states from a vast space of unrealistic states - learning from sparse data causes models to systematically overestimate high-energy state probabilities, a bias that lowering the sampling temperature corrects. More generally, we characterize how the optimal sampling temperature depends on the interplay between data size and the system's underlying energy landscape. Crucially, our results show that lowering the sampling temperature is not always desirable; we identify the conditions where \emph{raising} it results in better generative performance. Our framework thus casts post-hoc temperature tuning as a diagnostic tool that reveals properties of the true data distribution and the limits of the learned model. 
+ oai:arXiv.org:2512.09152v1 + q-bio.QM cs.LG - cs.NA - math.NA - math.PR - math.ST - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jinyuan Chang, Chenguang Duan, Yuling Jiao, Ruoxuan Li, Jerry Zhijian Yang, Cheng Yuan - - - Locally Recoverable Codes with availability from a family of fibered surfaces - https://arxiv.org/abs/2512.08100 - arXiv:2512.08100v1 Announce Type: cross -Abstract: We construct Locally Recoverable Codes (LRCs) with availability $2$ from a family of fibered surfaces. To obtain the locality and availability properties, and to estimate the minimum distance of the codes, we combine techniques coming from the theory of one-variable function fields and from the theory of fibrations on surfaces. When the locality parameter is $r=3$, we obtain a sharp bound on the minimum distance of the codes. In that case, we give a geometric interpretation of our codes in terms of doubly elliptic surfaces. In particular, this provides the first instance of an error correcting code constructed using a (doubly elliptic) K3 surface. - oai:arXiv.org:2512.08100v1 - math.AG - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Cec\'ilia Salgado, Lara Vicino - - - FlowSteer: Conditioning Flow Field for Consistent Image Restoration - https://arxiv.org/abs/2512.08125 - arXiv:2512.08125v1 Announce Type: cross -Abstract: Flow-based text-to-image (T2I) models excel at prompt-driven image generation, but falter on Image Restoration (IR), often "drifting away" from being faithful to the measurement. Prior work mitigate this drift with data-specific flows or task-specific adapters that are computationally heavy and not scalable across tasks. This raises the question "Can't we efficiently manipulate the existing generative capabilities of a flow model?" 
To this end, we introduce FlowSteer (FS), an operator-aware conditioning scheme that injects measurement priors along the sampling path, coupling a frozen flow's implicit guidance with explicit measurement constraints. Across super-resolution, deblurring, denoising, and colorization, FS improves measurement consistency and identity preservation in a strictly zero-shot setting: no retrained models, no adapters. We show how the nature of flow models and their sensitivities to noise inform the design of such a scheduler. FlowSteer, although simple, achieves a higher fidelity of reconstructed images, while leveraging the rich generative priors of flow models. - oai:arXiv.org:2512.08125v1 - eess.IV - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Tharindu Wickremasinghe, Chenyang Qi, Harshana Weligampola, Zhengzhong Tu, Stanley H. Chan - - - Adversarial Barrier in Uniform Class Separation - https://arxiv.org/abs/2512.08149 - arXiv:2512.08149v1 Announce Type: cross -Abstract: We identify a strong structural obstruction to Uniform Separation in constructive arithmetic. The mechanism is independent of semantic content; it emerges whenever two distinct evaluator predicates are sustained in parallel and inference remains uniformly representable in an extension of HA. Under these conditions, any putative Uniform Class Separation principle becomes a distinguished instance of a fixed point construction. The resulting limitation is stricter in scope than classical separation barriers (Baker; Rudich; Aaronson et~al.) insofar as it constrains the logical form of uniform separation within HA, rather than limiting particular relativizing, naturalizing, or algebrizing techniques. 
- oai:arXiv.org:2512.08149v1 - math.LO - cs.CC - cs.LO - Wed, 10 Dec 2025 00:00:00 -0500 + A Hybrid Residue Floating Numerical Architecture for High Precision Arithmetic on FPGAs + https://arxiv.org/abs/2512.09155 + arXiv:2512.09155v1 Announce Type: cross +Abstract: Floating point arithmetic remains expensive on FPGA platforms due to wide datapaths and normalization logic, motivating alternative representations that preserve dynamic range at lower cost. This work introduces the Hybrid Residue Floating Numerical Architecture (HRFNA), a unified arithmetic system that combines carry free residue channels with a lightweight floating point scaling factor. We develop the full mathematical framework, derive bounded error normalization rules, and present FPGA optimized microarchitectures for modular multiplication, exponent management, and hybrid reconstruction. HRFNA is implemented on a Xilinx ZCU104, with Vitis simulation, RTL synthesis, and on chip ILA traces confirming cycle accurate correctness. The architecture achieves over 2.1 times throughput improvement and 38-52 percent LUT reduction compared to IEEE 754 single precision baselines while maintaining numerical stability across long iterative sequences. These results demonstrate that HRFNA offers an efficient and scalable alternative to floating point computation on modern FPGA devices. + oai:arXiv.org:2512.09155v1 + eess.SP + cs.AR + cs.MS + Thu, 11 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - Milan Rosko + Mostafa Darvishi - Worst-case generation via minimax optimization in Wasserstein space - https://arxiv.org/abs/2512.08176 - arXiv:2512.08176v1 Announce Type: cross -Abstract: Worst-case generation plays a critical role in evaluating robustness and stress-testing systems under distribution shifts, in applications ranging from machine learning models to power grids and medical prediction systems. 
We develop a generative modeling framework for worst-case generation for a pre-specified risk, based on min-max optimization over continuous probability distributions, namely the Wasserstein space. Unlike traditional discrete distributionally robust optimization approaches, which often suffer from scalability issues, limited generalization, and costly worst-case inference, our framework exploits the Brenier theorem to characterize the least favorable (worst-case) distribution as the pushforward of a transport map from a continuous reference measure, enabling a continuous and expressive notion of risk-induced generation beyond classical discrete DRO formulations. Based on the min-max formulation, we propose a Gradient Descent Ascent (GDA)-type scheme that updates the decision model and the transport map in a single loop, establishing global convergence guarantees under mild regularity assumptions and possibly without convexity-concavity. We also propose to parameterize the transport map using a neural network that can be trained simultaneously with the GDA iterations by matching the transported training samples, thereby achieving a simulation-free approach. The efficiency of the proposed method as a risk-induced worst-case generator is validated by numerical experiments on synthetic and image data. - oai:arXiv.org:2512.08176v1 + WTNN: Weibull-Tailored Neural Networks for survival analysis + https://arxiv.org/abs/2512.09163 + arXiv:2512.09163v1 Announce Type: cross +Abstract: The Weibull distribution is a commonly adopted choice for modeling the survival of systems subject to maintenance over time. When only proxy indicators and censored observations are available, it becomes necessary to express the distribution's parameters as functions of time-dependent covariates. 
Deep neural networks provide the flexibility needed to learn complex relationships between these covariates and operational lifetime, thereby extending the capabilities of traditional regression-based models. Motivated by the analysis of a fleet of military vehicles operating in highly variable and demanding environments, as well as by the limitations observed in existing methodologies, this paper introduces WTNN, a new neural network-based modeling framework specifically designed for Weibull survival studies. The proposed architecture is specifically designed to incorporate qualitative prior knowledge regarding the most influential covariates, in a manner consistent with the shape and structure of the Weibull distribution. Through numerical experiments, we show that this approach can be reliably trained on proxy and right-censored data, and is capable of producing robust and interpretable survival predictions that can improve existing approaches. + oai:arXiv.org:2512.09163v1 stat.ML cs.LG - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiuyuan Cheng, Yao Xie, Linglingzhi Zhu, Yunqin Zhu - - - Tumor-anchored deep feature random forests for out-of-distribution detection in lung cancer segmentation - https://arxiv.org/abs/2512.08216 - arXiv:2512.08216v1 Announce Type: cross -Abstract: Accurate segmentation of cancerous lesions from 3D computed tomography (CT) scans is essential for automated treatment planning and response assessment. However, even state-of-the-art models combining self-supervised learning (SSL) pretrained transformers with convolutional decoders are susceptible to out-of-distribution (OOD) inputs, generating confidently incorrect tumor segmentations, posing risks for safe clinical deployment. Existing logit-based methods suffer from task-specific model biases, while architectural enhancements to explicitly detect OOD increase parameters and computational costs. 
Hence, we introduce a plug-and-play and lightweight post-hoc random forests-based OOD detection framework called RF-Deep that leverages deep features with limited outlier exposure. RF-Deep enhances generalization to imaging variations by repurposing the hierarchical features from the pretrained-then-finetuned backbone encoder, providing task-relevant OOD detection by extracting the features from multiple regions of interest anchored to the predicted tumor segmentations. Hence, it scales to images of varying fields-of-view. We compared RF-Deep against existing OOD detection methods using 1,916 CT scans across near-OOD (pulmonary embolism, negative COVID-19) and far-OOD (kidney cancer, healthy pancreas) datasets. RF-Deep achieved AUROC > 93.50 for the challenging near-OOD datasets and near-perfect detection (AUROC > 99.00) for the far-OOD datasets, substantially outperforming logit-based and radiomics approaches. RF-Deep maintained similar performance consistency across networks of different depths and pretraining strategies, demonstrating its effectiveness as a lightweight, architecture-agnostic approach to enhance the reliability of tumor segmentation from CT volumes. - oai:arXiv.org:2512.08216v1 - eess.IV - cs.CV - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Aneesh Rangnekar, Harini Veeraraghavan - - - Self-Reinforced Deep Priors for Reparameterized Full Waveform Inversion - https://arxiv.org/abs/2512.08284 - arXiv:2512.08284v1 Announce Type: cross -Abstract: Full waveform inversion (FWI) has become a widely adopted technique for high-resolution subsurface imaging. However, its inherent strong nonlinearity often results in convergence toward local minima. Recently, deep image prior-based reparameterized FWI (DIP-FWI) has been proposed to alleviate the dependence on massive training data. 
By exploiting the spectral bias and implicit regularization in the neural network architecture, DIP-FWI can effectively avoid local minima and reconstruct more geologically plausible velocity models. Nevertheless, existing DIP-FWI methods typically use a fixed random input throughout the inversion process, which fails to utilize the mapping and correlation between the input and output of the network. Moreover, under complex geological conditions, the lack of an informative prior in the input can exacerbate the ill-posedness of the inverse problem, leading to artifacts and unstable reconstructions. To address these limitations, we propose a self-reinforced DIP-FWI (SRDIP-FWI) framework, in which a steering algorithm alternately updates both the network parameters and the input at each iteration using feedback from the current network output. This design allows adaptive structural enhancement and improved regularization, thereby effectively mitigating the ill-posedness in FWI. Additionally, we analyze the spectral bias of the network in SRDIP-FWI and quantify its role in multiscale velocity model building. Synthetic tests and field land data application demonstrate that SRDIP-FWI achieves superior resolution, improved accuracy and greater depth penetration compared to multiscale FWI. More importantly, SRDIP-FWI eliminates the need for manual frequency-band selection and time-window picking, substantially simplifying the inversion workflow. Overall, the proposed method provides a novel, adaptive and robust framework for accurate subsurface velocity model reconstruction. 
- oai:arXiv.org:2512.08284v1 - physics.geo-ph - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Guangyuan Zou, Junlun Li, Feng Liu, Xuejing Zheng, Jianjian Xie, Guoyi Chen - - - M\"obius Transformations and the Analytic--Geometric Reconstruction of the Induction--Machine Circle Diagram - https://arxiv.org/abs/2512.08302 - arXiv:2512.08302v1 Announce Type: cross -Abstract: The Heyland circle diagram is a classical graphical method for representing the steady--state behavior of induction machines using no--load and blocked--rotor test data. Despite its long pedagogical history, the traditional geometric construction has not been formalized within a closed analytic framework. This note develops a complete Euclidean reconstruction of the diagram using only the two measured phasors and elementary geometric operations, yielding a unique circle, a torque chord, a slip scale, and a maximum--torque point. We prove that this constructed circle coincides precisely with the analytic steady--state current locus obtained from the per--phase equivalent circuit. A M\"obius transformation interpretation reveals the complex--analytic origin of the diagram's circularity and offers a compact explanation of its geometric structure. - oai:arXiv.org:2512.08302v1 - math.DS - cs.SY - eess.SY - math.CV - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Anubhav Gupta - - - Magnetic activity of ultracool dwarfs in the LAMOST DR11 - https://arxiv.org/abs/2512.08305 - arXiv:2512.08305v1 Announce Type: cross -Abstract: Ultracool dwarfs consist of lowest-mass stars and brown dwarfs. Their interior is fully convective, different from that of the partly-convective Sun-like stars. Magnetic field generation process beneath the surface of ultracool dwarfs is still poorly understood and controversial. 
To increase samples of active ultracool dwarfs significantly, we have identified 962 ultracool dwarfs in the latest LAMOST data release, DR11. We also simulate the Chinese Space Station Survey Telescope (CSST) low-resolution slitless spectra by degrading the LAMOST spectra. A semi-supervised machine learning approach with an autoencoder model is built to identify ultracool dwarfs with the simulated CSST spectra, which demonstrates the capability of the CSST all-sky slitless spectroscopic survey on the detection of ultracool dwarfs. Magnetic activity of the ultracool dwarfs is investigated by using the H$\alpha$ line emission as a proxy. The rotational periods of 82 ultracool dwarfs are derived based on the Kepler/K2 light curves. We also derive the activity-rotation relation of the ultracool dwarfs, which is saturated around a Rossby number of 0.12. - oai:arXiv.org:2512.08305v1 - astro-ph.SR - astro-ph.IM - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yue Xiang, Shenghong Gu, Dongtao Cao - - - Learned iterative networks: An operator learning perspective - https://arxiv.org/abs/2512.08444 - arXiv:2512.08444v1 Announce Type: cross -Abstract: Learned image reconstruction has become a pillar in computational imaging and inverse problems. Among the most successful approaches are learned iterative networks, which are formulated by unrolling classical iterative optimisation algorithms for solving variational problems. While the underlying algorithm is usually formulated in the functional analytic setting, learned approaches are often viewed as purely discrete. In this chapter we present a unified operator view for learned iterative networks. Specifically, we formulate a learned reconstruction operator, defining how to compute, and separately the learning problem, which defines what to compute. In this setting we present common approaches and show that many approaches are closely related in their core. 
We review linear as well as nonlinear inverse problems in this framework and present a short numerical study to conclude. - oai:arXiv.org:2512.08444v1 - eess.IV - cs.LG - cs.NA - math.FA - math.NA - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Andreas Hauptmann, Ozan \"Oktem - - - Fused Gromov-Wasserstein Contrastive Learning for Effective Enzyme-Reaction Screening - https://arxiv.org/abs/2512.08508 - arXiv:2512.08508v1 Announce Type: cross -Abstract: Enzymes are crucial catalysts that enable a wide range of biochemical reactions. Efficiently identifying specific enzymes from vast protein libraries is essential for advancing biocatalysis. Traditional computational methods for enzyme screening and retrieval are time-consuming and resource-intensive. Recently, deep learning approaches have shown promise. However, these methods focus solely on the interaction between enzymes and reactions, overlooking the inherent hierarchical relationships within each domain. To address these limitations, we introduce FGW-CLIP, a novel contrastive learning framework based on optimizing the fused Gromov-Wasserstein distance. FGW-CLIP incorporates multiple alignments, including inter-domain alignment between reactions and enzymes and intra-domain alignment within enzymes and reactions. By introducing a tailored regularization term, our method minimizes the Gromov-Wasserstein distance between enzyme and reaction spaces, which enhances information integration across these domains. Extensive evaluations demonstrate the superiority of FGW-CLIP in challenging enzyme-reaction tasks. On the widely-used EnzymeMap benchmark, FGW-CLIP achieves state-of-the-art performance in enzyme virtual screening, as measured by BEDROC and EF metrics. Moreover, FGW-CLIP consistently outperforms across all three splits of ReactZyme, the largest enzyme-reaction benchmark, demonstrating robust generalization to novel enzymes and reactions. 
These results position FGW-CLIP as a promising framework for enzyme discovery in complex biochemical settings, with strong adaptability across diverse screening scenarios. - oai:arXiv.org:2512.08508v1 - q-bio.BM - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Gengmo Zhou, Feng Yu, Wenda Wang, Zhifeng Gao, Guolin Ke, Zhewei Wei, Zhen Wang - - - Data-Efficient Learning of Anomalous Diffusion with Wavelet Representations: Enabling Direct Learning from Experimental Trajectories - https://arxiv.org/abs/2512.08510 - arXiv:2512.08510v1 Announce Type: cross -Abstract: Machine learning (ML) has become a versatile tool for analyzing anomalous diffusion trajectories, yet most existing pipelines are trained on large collections of simulated data. In contrast, experimental trajectories, such as those from single-particle tracking (SPT), are typically scarce and may differ substantially from the idealized models used for simulation, leading to degradation or even breakdown of performance when ML methods are applied to real data. To address this mismatch, we introduce a wavelet-based representation of anomalous diffusion that enables data-efficient learning directly from experimental recordings. This representation is constructed by applying six complementary wavelet families to each trajectory and combining the resulting wavelet modulus scalograms. We first evaluate the wavelet representation on simulated trajectories from the andi-datasets benchmark, where it clearly outperforms both feature-based and trajectory-based methods with as few as 1000 training trajectories and still retains an advantage on large training sets. We then use this representation to learn directly from experimental SPT trajectories of fluorescent beads diffusing in F-actin networks, where the wavelet representation remains superior to existing alternatives for both diffusion-exponent regression and mesh-size classification. 
In particular, when predicting the diffusion exponents of experimental trajectories, a model trained on 1200 experimental tracks using the wavelet representation achieves significantly lower errors than state-of-the-art deep learning models trained purely on $10^6$ simulated trajectories. We associate this data efficiency with the emergence of distinct scale fingerprints disentangling underlying diffusion mechanisms in the wavelet spectra. - oai:arXiv.org:2512.08510v1 - physics.bio-ph - cond-mat.soft - cs.LG - physics.data-an - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Gongyi Wang, Yu Zhang, Zihan Huang - - - Minimax and Bayes Optimal Adaptive Experimental Design for Treatment Choice - https://arxiv.org/abs/2512.08513 - arXiv:2512.08513v1 Announce Type: cross -Abstract: We consider an adaptive experiment for treatment choice and design a minimax and Bayes optimal adaptive experiment with respect to regret. Given binary treatments, the experimenter's goal is to choose the treatment with the highest expected outcome through an adaptive experiment, in order to maximize welfare. We consider adaptive experiments that consist of two phases, the treatment allocation phase and the treatment choice phase. The experiment starts with the treatment allocation phase, where the experimenter allocates treatments to experimental subjects to gather observations. During this phase, the experimenter can adaptively update the allocation probabilities using the observations obtained in the experiment. After the allocation phase, the experimenter proceeds to the treatment choice phase, where one of the treatments is selected as the best. For this adaptive experimental procedure, we propose an adaptive experiment that splits the treatment allocation phase into two stages, where we first estimate the standard deviations and then allocate each treatment proportionally to its standard deviation. 
We show that this experiment, often referred to as Neyman allocation, is minimax and Bayes optimal in the sense that its regret upper bounds exactly match the lower bounds that we derive. To show this optimality, we derive minimax and Bayes lower bounds for the regret using change-of-measure arguments. Then, we evaluate the corresponding upper bounds using the central limit theorem and large deviation bounds. - oai:arXiv.org:2512.08513v1 - econ.EM - cs.LG - math.ST + stat.AP stat.ME - stat.ML - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Masahiro Kato - - - Realization of relational presheaves - https://arxiv.org/abs/2512.08566 - arXiv:2512.08566v1 Announce Type: cross -Abstract: Relational presheaves generalize traditional presheaves by going to the category of sets and relations (as opposed to sets and functions) and by allowing functors which are lax. This added generality is useful because it intuitively allows one to encode situations where we have representables without boundaries or with multiple boundaries at once. In particular, the relational generalization of precubical sets has natural application to modeling concurrency. In this article, we study categories of relational presheaves, and construct realization functors for those. We begin by observing that they form the category of set-based models of a cartesian theory, which implies in particular that they are locally finitely presentable categories. By using general results from categorical logic, we then show that the realization of such presheaves in a cocomplete category is a model of the theory in the opposite category, which allows characterizing situations in which we have a realization functor. Finally, we explain that our work has applications in the semantics of concurrency theory. The realization namely allows one to compare syntactic constructions on relational presheaves and geometric ones. 
Thanks to it, we are able to provide a syntactic counterpart of the blowup operation, which was recently introduced by Haucourt on directed geometric semantics, as a way of turning a directed space into a manifold. - oai:arXiv.org:2512.08566v1 - math.CT - cs.LO - Wed, 10 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ Yorgo Chamoun, Samuel Mimram - Heuristics for Combinatorial Optimization via Value-based Reinforcement Learning: A Unified Framework and Analysis - https://arxiv.org/abs/2512.08601 - arXiv:2512.08601v1 Announce Type: cross -Abstract: Since the 1990s, considerable empirical work has been carried out to train statistical models, such as neural networks (NNs), as learned heuristics for combinatorial optimization (CO) problems. When successful, such an approach eliminates the need for experts to design heuristics per problem type. Due to their structure, many hard CO problems are amenable to treatment through reinforcement learning (RL). Indeed, we find a wealth of literature training NNs using value-based, policy gradient, or actor-critic approaches, with promising results, both in terms of empirical optimality gaps and inference runtimes. Nevertheless, there has been a paucity of theoretical work undergirding the use of RL for CO problems. To this end, we introduce a unified framework to model CO problems through Markov decision processes (MDPs) and solve them using RL techniques. We provide easy-to-test assumptions under which CO problems can be formulated as equivalent undiscounted MDPs that provide optimal solutions to the original CO problems. Moreover, we establish conditions under which value-based RL techniques converge to approximate solutions of the CO problem with a guarantee on the associated optimality gap. 
Our convergence analysis provides: (1) a sufficient rate of increase in batch size and projected gradient descent steps at each RL iteration; (2) the resulting optimality gap in terms of problem parameters and targeted RL accuracy; and (3) the importance of a choice of state-space embedding. Together, our analysis illuminates the success (and limitations) of the celebrated deep Q-learning algorithm in this problem context. - oai:arXiv.org:2512.08601v1 - stat.ML - cs.LG - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + AI-Driven Expansion and Application of the Alexandria Database + https://arxiv.org/abs/2512.09169 + arXiv:2512.09169v1 Announce Type: cross +Abstract: We present a novel multi-stage workflow for computational materials discovery that achieves a 99% success rate in identifying compounds within 100 meV/atom of thermodynamic stability, with a threefold improvement over previous approaches. By combining the Matra-Genoa generative model, Orb-v2 universal machine learning interatomic potential, and ALIGNN graph neural network for energy prediction, we generated 119 million candidate structures and added 1.3 million DFT-validated compounds to the ALEXANDRIA database, including 74 thousand new stable materials. The expanded ALEXANDRIA database now contains 5.8 million structures with 175 thousand compounds on the convex hull. Predicted structural disorder rates (37-43%) match experimental databases, unlike other recent AI-generated datasets. Analysis reveals fundamental patterns in space group distributions, coordination environments, and phase stability networks, including sub-linear scaling of convex hull connectivity. We release the complete dataset, including sAlex25 with 14 million out-of-equilibrium structures containing forces and stresses for training universal force fields. We demonstrate that fine-tuning a GRACE model on this data improves benchmark accuracy. All data, models, and workflows are freely available under Creative Commons licenses. 
+ oai:arXiv.org:2512.09169v1 + cond-mat.mtrl-sci + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - Orit Davidovich, Shimrit Shtern, Segev Wasserkrug, Nimrod Megiddo + Th\'eo Cavignac (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Jonathan Schmidt (Department of Materials, ETH Z\"urich, Z\"urich, Switzerland), Pierre-Paul De Breuck (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Antoine Loew (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Tiago F. T. Cerqueira (CFisUC, Department of Physics, University of Coimbra, Coimbra, Portugal), Hai-Chen Wang (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Anton Bochkarev (ICAMS, Ruhr-Universit\"at Bochum and ACEworks GmbH, Bochum, Germany), Yury Lysogorskiy (ICAMS, Ruhr-Universit\"at Bochum and ACEworks GmbH, Bochum, Germany), Aldo H. Romero (Department of Physics, West Virginia University, Morgantown, USA), Ralf Drautz (ICAMS, Ruhr-Universit\"at Bochum and ACEworks GmbH, Bochum, Germany), Silvana Botti (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Miguel A. L. Marques (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany) - An Efficient Secret Communication Scheme for the Bosonic Wiretap Channel - https://arxiv.org/abs/2512.08623 - arXiv:2512.08623v1 Announce Type: cross -Abstract: We propose a new secret communication scheme over the bosonic wiretap channel. It uses readily available hardware such as lasers and direct photodetectors. 
The scheme is based on randomness extractors, pulse-position modulation, and Reed-Solomon codes and is therefore computationally efficient. It is secure against an eavesdropper performing coherent joint measurements on the quantum states it observes. In the low-photon-flow limit, the scheme is asymptotically optimal and achieves the same dominant term as the secrecy capacity of the same channel. - oai:arXiv.org:2512.08623v1 - quant-ph - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/LCOMM.2025.3625640 - Esther H\"anggi, Iy\'an M\'endez Veiga, Ligong Wang - - - Resolvable Triple Arrays - https://arxiv.org/abs/2512.08681 - arXiv:2512.08681v1 Announce Type: cross -Abstract: We present a new construction of triple arrays by combining a symmetric 2-design with a resolution of another 2-design. This is the first general method capable of producing non-extremal triple arrays. We call the triple arrays which can be obtained in this way resolvable. We employ the construction to produce the first examples of $(21 \times 15, 63)$-triple arrays, and enumerate all resolvable $(7 \times 15, 35)$-triple arrays, of which there was previously only a single known example. An infinite subfamily of Paley triple arrays turns out to be resolvable. - We also introduce a new intermediate object, unordered triple arrays, that are to triple arrays what symmetric 2-designs are to Youden rectangles, and propose a strengthening of Agrawal's long-standing conjecture on the existence of extremal triple arrays. For small parameters, we completely enumerate all unordered triple arrays, and use this data to corroborate the new conjecture. We construct several infinite families of resolvable unordered triple arrays, and, in particular, show that all $((q + 1) \times q^2, q(q + 1))$-triple arrays are resolvable and are in correspondence with finite affine planes of order $q$. 
- oai:arXiv.org:2512.08681v1 + Magic Gems: A Polyhedral Framework for Magic Squares + https://arxiv.org/abs/2512.09170 + arXiv:2512.09170v1 Announce Type: cross +Abstract: We introduce Magic Gems, a geometric representation of magic squares as three-dimensional polyhedra. By mapping an n x n magic square onto a centered coordinate grid with cell values as vertical displacements, we construct a point cloud whose convex hull defines the Magic Gem. This reveals a connection between magic square constraints and statistical structure: we prove that magic squares have vanishing covariances between position and value. We introduce a covariance energy functional -- the sum of squared covariances with row, column, and diagonal indicator variables -- and prove for n=3 (via exhaustive enumeration) that its zeros are precisely the magic squares. Large-scale sampling for n=4,5 (460+ million arrangements) provides strong numerical evidence that this characterization extends to larger orders. Perturbation analysis demonstrates that magic squares are isolated local minima. The representation is invariant under dihedral symmetry D_4, yielding canonical geometric objects for equivalence classes. + oai:arXiv.org:2512.09170v1 math.CO + cs.CG cs.DM - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Alexey Gordeev, Lars-Daniel \"Ohman - - - Calibration of a DEM contact model for wet industrial granular materials - https://arxiv.org/abs/2512.08685 - arXiv:2512.08685v1 Announce Type: cross -Abstract: This study presents and calibrates a Discrete Element Method (DEM) contact model for wet granular materials in the pendular regime. The model extends a previously calibrated dry contact formulation by incorporating liquid bridges that generate capillary adhesion between particles, while liquid migration is represented through evolving bridge volumes. 
Two reactor-grade polypropylene powders with different particle size distributions, bulk densities, and surface morphologies are investigated, resulting in distinct wetting behavior. A schematic framework is introduced to relate increasing liquid content to the transition from dry to wet contacts using two key parameters: the minimum liquid film volume and the maximum liquid bridge volume. These parameters are calibrated using dynamic angle of repose measurements from rotating drum experiments. The calibrated model reproduces the experimental flow behavior of both powders: full agreement is obtained for the coarser, more porous powder across all liquid contents, while for the finer, denser powder, agreement is achieved at low to moderate liquid contents. At higher liquid contents, discrepancies arise due to agglomeration effects amplified by particle scaling. These results demonstrate the effectiveness of the dynamic angle of repose as a calibration target and highlight the limitations of particle scaling for strongly cohesive wet granular systems. The proposed framework provides a practical basis for DEM-based modeling of wet powder flow in industrial processes. - oai:arXiv.org:2512.08685v1 - cond-mat.soft - cs.NA - math-ph - math.MP - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Sahar Pourandi, P. Christian van der Sande, Igor A. Ostanin, Thomas Weinhart - - - Applications of Singular Entropy to Signals and Singular Smoothness to Images - https://arxiv.org/abs/2512.08717 - arXiv:2512.08717v1 Announce Type: cross -Abstract: This paper explores signal and image analysis by using the Singular Value Decomposition (SVD) and its extension, the Generalized Singular Value Decomposition (GSVD). A key strength of SVD lies in its ability to separate information into orthogonal subspaces. 
While SVD is a well-established tool in ECG analysis, particularly for source separation, this work proposes a refined method for selecting a threshold to distinguish between maternal and fetal components more effectively. In the first part of the paper, the focus is on medical signal analysis, where the concepts of Energy Gap Variation (EGV) and Singular Energy are introduced to isolate fetal and maternal ECG signals, improving the known ones. Furthermore, the approach is significantly enhanced by the application of GSVD, which provides additional discriminative power for more accurate signal separation. The second part introduces a novel technique called Singular Smoothness, developed for image analysis. This method incorporates Singular Entropy and the Frobenius norm to evaluate information density, and is applied to the detection of natural anomalies such as mountain fractures and burned forest regions. Numerical experiments are presented to demonstrate the effectiveness of the proposed approaches. - oai:arXiv.org:2512.08717v1 - eess.SP - cs.NA - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Oscar Romero, N\'estor Thome - Saturation-based robustly optimal hierarchical operation control of microgrids - https://arxiv.org/abs/2512.08757 - arXiv:2512.08757v1 Announce Type: cross -Abstract: This paper studies the problem of robustly optimal operation control of microgrids with a high share of renewable energy sources. The main goal is to ensure optimal operation under a wide range of circumstances, given the highly intermittent and uncertain nature of renewable sources and load demand. We formally state this problem, and, in order to solve it, we make effective use of the hierarchical power system control approach. We consider an enhanced primary control layer including droop control and autonomous limitation of power and energy. 
We prove that this enables the use of constant power setpoints to achieve optimal operation under certain conditions. In order to relax these conditions, the approach is combined with an energy management system, which solves a robust unit commitment problem within a model predictive control framework. Finally, a case study demonstrates the viability of the control design. - oai:arXiv.org:2512.08757v1 + A Benamou-Brenier Proximal Splitting Method for Constrained Unbalanced Optimal Transport + https://arxiv.org/abs/2512.09250 + arXiv:2512.09250v1 Announce Type: cross +Abstract: The dynamic formulation of optimal transport, also known as the Benamou-Brenier formulation, has been extended to the unbalanced case by introducing a source term in the continuity equation. When this source term is penalized based on the Fisher-Rao metric, the resulting model is referred to as the Wasserstein-Fisher-Rao (WFR) setting, and allows for the comparison between any two positive measures without the need for equalized total mass. In recent work, we introduced a constrained variant of this model, in which affine integral equality constraints are imposed along the measure path. In the present paper, we propose a further generalization of this framework, which allows for constraints that apply not just to the density path but also to the momentum and source terms, and incorporates affine inequalities in addition to equality constraints. We prove, under suitable assumptions on the constraints, the well-posedness of the resulting class of convex variational problems. The paper is then primarily devoted to developing an effective numerical pipeline that tackles the corresponding constrained optimization problem based on finite difference discretizations and parallel proximal schemes. 
Our proposed framework encompasses standard balanced and unbalanced optimal transport, as well as a multitude of natural and practically relevant constraints, and we highlight its versatility via several synthetic and real data examples. + oai:arXiv.org:2512.09250v1 math.OC - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Ujjwal Pratap, Steffen Hofmann - - - Delay-Oriented Distributed Scheduling with TransGNN - https://arxiv.org/abs/2512.08799 - arXiv:2512.08799v1 Announce Type: cross -Abstract: Minimizing transmission delay in wireless multi-hop networks is a fundamental yet challenging task due to the complex coupling among interference, queue dynamics, and distributed control. Traditional scheduling algorithms, such as max-weight or queue-length-based policies, primarily aim to optimize throughput but often suffer from high latency, especially in heterogeneous or dynamically changing topologies. Recent learning-based approaches, particularly those employing Graph Neural Networks (GNNs), have shown promise in capturing spatial interference structures. However, conventional Graph Convolutional Networks (GCNs) remain limited by their local aggregation mechanism and their inability to model long-range dependencies within the conflict graph. To address these challenges, this paper proposes a delay-oriented distributed scheduling framework based on Transformer GNN. The proposed model employs an attention-based graph encoder to generate adaptive per-link utility scores that reflect both queue backlog and interference intensity. A Local Greedy Solver (LGS) then utilizes these utilities to construct a feasible independent set of links for transmission, ensuring distributed and conflict-free scheduling. 
- oai:arXiv.org:2512.08799v1 - eess.SP - cs.NI - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Boxuan Wen, Junyu Luo - - The Cahill-Casazza-Daubechies problem on H\"older stable phase retrieval - https://arxiv.org/abs/2512.08806 - arXiv:2512.08806v1 Announce Type: cross
Abstract: Phase retrieval using a frame for a finite-dimensional Hilbert space is known to always be Lipschitz stable. However, phase retrieval using a frame or a continuous frame for an infinite-dimensional Hilbert space is always unstable. In order to bridge the gap between the finite and infinite dimensional phenomena, Cahill-Casazza-Daubechies (Trans.Amer.Math.Soc. 2016) gave a construction of a family of nonlinear subsets of an infinite-dimensional Hilbert space where phase retrieval could be performed with a H\"older stability estimate. They then posed the question of whether these subsets satisfied Lipschitz stable phase retrieval. We solve this problem both by giving examples which fail Lipschitz stability and by giving examples which satisfy Lipschitz stability. - oai:arXiv.org:2512.08806v1 - math.FA cs.NA math.NA - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Daniel Freeman, Mitchell A. Taylor - - Axial Symmetric Navier Stokes Equations and the Beltrami /anti Beltrami spectrum in view of Physics Informed Neural Networks - https://arxiv.org/abs/2512.08846 - arXiv:2512.08846v1 Announce Type: cross
Abstract: In this paper, I further continue an investigation on Beltrami Flows begun in 2015 with A. Sorin and amply reprised and developed in 2022 with M. Trigiante. Instead of a compact $3$-torus $T^3=\mathbb{R}^3/\Lambda$ where $\Lambda$ is a crystallographic lattice, as done in previous work, here I considered flows confined in a cylinder with identified opposite bases. 
In this topology I considered axial symmetric flows and found a complete basis of axial symmetric harmonic $1$-forms that, for each energy level, decomposes into six components: two Beltrami, two anti-Beltrami and two closed forms. These objects, which are written in terms of trigonometric and Bessel functions, constitute a function basis for an $L^2$ space of axial symmetric flows. I have presented a general scheme for the search of axial symmetric solutions of the Navier Stokes equation by reducing the latter to a hierarchy of quadratic relations on the development coefficients of the flow in the above described functional basis. It is proposed that the coefficients can be determined by means of a Physics Informed Neural Network-like recursive optimization algorithm. Indeed the present paper provides the theoretical foundations for such an algorithmic construction, which is planned for a future publication. - oai:arXiv.org:2512.08846v1 - physics.flu-dyn - cs.IT - math-ph - math.IT - math.MP - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Pietro Fr\'e - - SAQ: Stabilizer-Aware Quantum Error Correction Decoder - https://arxiv.org/abs/2512.08914 - arXiv:2512.08914v1 Announce Type: cross
Abstract: Quantum Error Correction (QEC) decoding faces a fundamental accuracy-efficiency tradeoff. Classical methods like Minimum Weight Perfect Matching (MWPM) exhibit variable performance across noise models and suffer from polynomial complexity, while tensor network decoders achieve high accuracy but at prohibitively high computational cost. Recent neural decoders reduce complexity but lack the accuracy needed to compete with computationally expensive classical methods. We introduce SAQ-Decoder, a unified framework combining transformer-based learning with constraint-aware post-processing that achieves both near Maximum Likelihood (ML) accuracy and linear computational scalability with respect to the syndrome size. 
Our approach combines a dual-stream transformer architecture that processes syndromes and logical information with asymmetric attention patterns, and a novel differentiable logical loss that directly optimizes Logical Error Rates (LER) through smooth approximations over finite fields. SAQ-Decoder achieves near-optimal performance, with error thresholds of 10.99% (independent noise) and 18.6% (depolarizing noise) on toric codes that approach the ML bounds of 11.0% and 18.9% while outperforming existing neural and classical baselines in accuracy, complexity, and parameter efficiency. Our findings establish that learned decoders can simultaneously achieve competitive decoding accuracy and computational efficiency, addressing key requirements for practical fault-tolerant quantum computing systems. - oai:arXiv.org:2512.08914v1 - quant-ph - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - David Zenati, Eliya Nachmani - - - Proportional Justified Representation - https://arxiv.org/abs/1611.09928 - arXiv:1611.09928v2 Announce Type: replace -Abstract: The goal of multi-winner elections is to choose a fixed-size committee based on voters' preferences. An important concern in this setting is representation: large groups of voters with cohesive preferences should be adequately represented by the election winners. In an influential paper, Aziz et al. proposed two axioms that aim to capture this idea: justified representation (JR) and its strengthening extended justified representation (EJR). We observe that EJR is incompatible with the highly desirable Perfect Representation (PR) criterion, and propose a relaxation of EJR, which we call Proportional Justified Representation (PJR). PJR is more demanding than JR, but, unlike EJR, it is compatible with PR, as well as with a stronger variant of this axiom, which we term Fractional Perfect Representation (FPR). 
Moreover, just like EJR, PJR can be used to characterise the classic Proportional Approval Voting (PAV) rule in the class of weighted PAV rules. On the other hand, we show that EJR provides stronger guarantees with respect to average voter satisfaction than PJR does. - oai:arXiv.org:1611.09928v2 - cs.GT - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Luis S\'anchez-Fern\'andez, Edith Elkind, Martin Lackner, Norberto Fern\'andez, Jes\'us A. Fisteus, Pablo Basanta Val, Piotr Skowron - - Discovering Influential Factors in Variational Autoencoders - https://arxiv.org/abs/1809.01804 - arXiv:1809.01804v3 Announce Type: replace
Abstract: In the field of machine learning, it is still a critical issue to identify and supervise the learned representation without manual intervention or intuition assistance to extract useful knowledge or serve the downstream tasks. In this work, we focus on supervising the influential factors extracted by the variational autoencoder (VAE). The VAE is proposed to learn independent low dimension representation while facing the problem that sometimes pre-set factors are ignored. We argue that the mutual information of the input and each learned factor of the representation serves as a necessary indicator for discovering the influential factors. We find the VAE objective inclines to induce mutual information sparsity in factor dimension over the data intrinsic dimension and therefore results in some non-influential factors whose function on data reconstruction could be ignored. We show mutual information also influences the lower bound of the VAE's reconstruction error and downstream classification task. To make such an indicator applicable, we design an algorithm for calculating the mutual information for the VAE and prove its consistency. 
Experimental results on MNIST, CelebA and DEAP datasets show that mutual information can help determine influential factors, of which some are interpretable and can be used for further generation and classification tasks, and help discover the variant that connects with emotion on the DEAP dataset. - oai:arXiv.org:1809.01804v3 - cs.LG + Robust and Sparse Estimation of Unbounded Density Ratio under Heavy Contamination + https://arxiv.org/abs/2512.09266 + arXiv:2512.09266v1 Announce Type: cross
Abstract: We examine the non-asymptotic properties of robust density ratio estimation (DRE) in contaminated settings. Weighted DRE is the most promising among existing methods, exhibiting doubly strong robustness from an asymptotic perspective. This study demonstrates that Weighted DRE achieves sparse consistency even under heavy contamination within a non-asymptotic framework. This method addresses two significant challenges in density ratio estimation and robust estimation. For density ratio estimation, we provide the non-asymptotic properties of estimating unbounded density ratios under the assumption that the weighted density ratio function is bounded. For robust estimation, we introduce a non-asymptotic framework for doubly strong robustness under heavy contamination, assuming that at least one of the following conditions holds: (i) contamination ratios are small, and (ii) outliers have small weighted values. This work provides the first non-asymptotic analysis of strong robustness under heavy contamination. 
+ oai:arXiv.org:2512.09266v1 stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shiqi Liu, Jingxin Liu, Qian Zhao, Xiangyong Cao, Huibin Li, Deyu Meng, Hongying Meng, Sheng Liu - - - Generative Learning of Heterogeneous Tail Dependence - https://arxiv.org/abs/2011.13132 - arXiv:2011.13132v3 Announce Type: replace -Abstract: We propose a multivariate generative model to capture the complex dependence structure often encountered in business and financial data. Our model features heterogeneous and asymmetric tail dependence between all pairs of individual dimensions while also allowing heterogeneity and asymmetry in the tails of the marginals. A significant merit of our model structure is that it is not prone to error propagation in the parameter estimation process, hence very scalable, as the dimensions of datasets grow large. However, the likelihood methods are infeasible for parameter estimation in our case due to the lack of a closed-form density function. Instead, we devise a novel moment learning algorithm to learn the parameters. To demonstrate the effectiveness of the model and its estimator, we test them on simulated as well as real-world datasets. Results show that this framework gives better finite-sample performance compared to the copula-based benchmarks as well as recent similar models. - oai:arXiv.org:2011.13132v3 - cs.LG - q-fin.RM - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiangqian Sun, Xing Yan, Qi Wu - - - Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise - https://arxiv.org/abs/2210.11075 - arXiv:2210.11075v3 Announce Type: replace -Abstract: The existence of spurious correlations such as image backgrounds in the training environment can make empirical risk minimization (ERM) perform badly in the test environment. To address this problem, Kirichenko et al. 
(2022) empirically found that the core features that are related to the outcome can still be learned well even in the presence of spurious correlations. This opens a promising strategy to first train a feature learner rather than a classifier, and then perform linear probing (last layer retraining) in the test environment. However, a theoretical understanding of when and why this approach works is lacking. In this paper, we find that core features are only learned well when their associated non-realizable noise is smaller than that of spurious features, which is not necessarily true in practice. We provide both theory and experiments to support this finding and to illustrate the importance of non-realizable noise. Moreover, we propose an algorithm called Freeze then Train (FTT), which first freezes certain salient features and then trains the rest of the features using ERM. We theoretically show that FTT preserves features that are more beneficial to test time probing. Across two commonly used spurious correlation datasets, FTT outperforms ERM, IRM, JTT and CVaR-DRO, with substantial improvement in accuracy (by 4.5%) when the feature noise is large. FTT also performs better on general distribution shift benchmarks. - oai:arXiv.org:2210.11075v3 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haotian Ye, James Zou, Linjun Zhang - - Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous Federated Learning - https://arxiv.org/abs/2305.19600 - arXiv:2305.19600v4 Announce Type: replace
Abstract: Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data. In practice, there can often be substantial heterogeneity (e.g., class imbalance) across the local data distributions observed by each of these clients. 
Under such non-iid label distributions across clients, FL suffers from the 'client-drift' problem where every client drifts to its own local optimum. This results in slower convergence and poor performance of the aggregated model. To address this limitation, we propose a novel regularization technique based on adaptive self-distillation (ASD) for training models on the client side. Our regularization scheme adaptively adjusts to each client's training data based on the global model's prediction entropy and the client-data label distribution. We show in this paper that our proposed regularization (ASD) can be easily integrated atop existing, state-of-the-art FL algorithms, leading to a further boost in the performance of these off-the-shelf methods. We theoretically explain how incorporation of ASD regularizer leads to reduction in client-drift and empirically justify the generalization ability of the trained model. We demonstrate the efficacy of our approach through extensive experiments on multiple real-world benchmarks and show substantial gains in performance when the proposed regularizer is combined with popular FL methods. - oai:arXiv.org:2305.19600v4 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - M Yashwanth, Gaurav Kumar Nayak, Arya Singh, Yogesh Simmhan, Anirban Chakraborty - - - Degenerate crossing number and signed reversal distance - https://arxiv.org/abs/2308.10666 - arXiv:2308.10666v3 Announce Type: replace -Abstract: Given a graph drawn in the plane, the degenerate crossing number of the drawing is the number of points in the plane which are contained in the relative interior of at least two edges, where each edge is required to be drawn as a simple arc. The degenerate crossing number of a graph is the minimum degenerate crossing number among all its drawings. 
- Given a drawing, cutting a neighborhood of the surface around each crossing and pasting a M\"obius band gives a non-orientable surface, on which the drawing of the graph can be extended to an embedding. From this observation, Mohar derived that the degenerate crossing number of a graph is at most its non-orientable genus, and conjectured that these quantities are equal for every graph. He also made a stronger conjecture for loopless pseudo-triangulations with a fixed embedding scheme.
 - In this paper, we prove a structure theorem that allows us to understand when the degenerate crossing number and non-orientable genus coincide in a large class of loopless bipartite embedding schemes. In particular, we provide a counterexample to Mohar's stronger conjecture, but show that in the vast majority of the 2-vertex cases, as well as for many bipartite graphs, Mohar's conjecture is satisfied.
 - The reversal distance between two signed permutations is the minimum number of reversals that transform one permutation into the other. If we represent the trajectory of each element of a signed permutation under successive reversals by a simple arc, we obtain a drawing of a 2-vertex embedding scheme with degenerate crossings. Our main result is proved by leveraging this connection and a classical result in genome rearrangement (the Hannenhalli--Pevzner algorithm) and can also be understood as an extension of this algorithm when the reversals do not necessarily happen in a monotone order. - oai:arXiv.org:2308.10666v3 - cs.CG - math.CO - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Niloufar Fuladi, Alfredo Hubard, Arnaud de Mesmay - - MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens - https://arxiv.org/abs/2310.02239 - arXiv:2310.02239v4 Announce Type: replace
Abstract: The effectiveness of Multimodal Large Language Models (MLLMs) demonstrates a profound capability in multimodal understanding. 
However, the simultaneous generation of images with coherent texts is still underdeveloped. Addressing this, we introduce a novel interleaved vision-and-language generation method, centered around the concept of ``generative vokens". These vokens serve as pivotal elements contributing to coherent image-text outputs. Our method is marked by a unique two-stage training strategy for description-free multimodal generation, which does not necessitate extensive descriptions of images. We integrate classifier-free guidance to enhance the alignment of generated images and texts, ensuring more seamless and contextually relevant multimodal interactions. Our model, MiniGPT-5, exhibits substantial improvement over the baseline models on multimodal generation datasets, including MMDialog and VIST. The human evaluation shows MiniGPT-5 is better than the baseline model in more than 56\% of cases for multimodal generation, highlighting its efficacy across diverse benchmarks. - oai:arXiv.org:2310.02239v4 - cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Kaizhi Zheng, Xuehai He, Xin Eric Wang - - Diffusion Models for Wireless Communications - https://arxiv.org/abs/2310.07312 - arXiv:2310.07312v4 Announce Type: replace
Abstract: A comprehensive study on the applications of denoising diffusion models for wireless systems is provided. The article highlights the capabilities of diffusion models in learning complicated signal distributions, modeling wireless channels, and denoising and reconstructing distorted signals. First, the fundamental working mechanism of diffusion models is introduced. Then the recent advances in applying diffusion models to wireless systems are reviewed. 
Next, two case studies are provided, where conditional diffusion models (CDiff) are proposed for data reconstruction enhancement, covering both the conventional digital communication systems, as well as the semantic communication (SemCom) setups. The first case study highlights about 10 dB improvement in data reconstruction under low-SNR regimes, while mitigating the need to transmit redundant bits for error correction codes in digital systems. The second study further extends the case to a SemCom setup, where diffusion autoencoders showcase superior performance compared to legacy autoencoders and variational autoencoder (VAE) architectures. Finally, future directions and existing challenges are discussed. - oai:arXiv.org:2310.07312v4 - cs.IT - cs.AI + Impact of Positional Encoding: Clean and Adversarial Rademacher Complexity for Transformers under In-Context Regression + https://arxiv.org/abs/2512.09275 + arXiv:2512.09275v1 Announce Type: cross +Abstract: Positional encoding (PE) is a core architectural component of Transformers, yet its impact on the Transformer's generalization and robustness remains unclear. In this work, we provide the first generalization analysis for a single-layer Transformer under in-context regression that explicitly accounts for a completely trainable PE module. Our result shows that PE systematically enlarges the generalization gap. Extending to the adversarial setting, we derive the adversarial Rademacher generalization bound. We find that the gap between models with and without PE is magnified under attack, demonstrating that PE amplifies the vulnerability of models. Our bounds are empirically validated by a simulation study. Together, this work establishes a new framework for understanding the clean and adversarial generalization in ICL with PE. 
+ oai:arXiv.org:2512.09275v1 + stat.ML cs.LG - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Mehdi Letafati, Samad Ali, Matti Latva-aho - - - On the convergence of adaptive approximations for stochastic differential equations - https://arxiv.org/abs/2311.14201 - arXiv:2311.14201v5 Announce Type: replace -Abstract: In this paper, we study numerical approximations for stochastic differential equations (SDEs) that use adaptive step sizes. In particular, we consider a general setting where decisions to reduce step sizes are allowed to depend on the future trajectory of the underlying Brownian motion. Since these adaptive step sizes may not be previsible, the standard mean squared error analysis cannot be directly applied to show that the numerical method converges to the solution of the SDE. Building upon the pioneering work of Gaines and Lyons, we instead use rough path theory to establish pathwise convergence for a wide class of adaptive numerical methods on general Stratonovich SDEs (with sufficiently smooth vector fields). To our knowledge, this is the first convergence guarantee that applies to standard solvers, such as the Milstein and Heun methods, with non-previsible step sizes. In our analysis, we require adaptive step sizes to have a "no skip" property and to take values at only dyadic times. Secondly, in contrast to the Euler-Maruyama method, we require the SDE solver to have unbiased "L\'evy area" terms in its Taylor expansion. We conjecture that for adaptive SDE solvers more generally, convergence is still possible provided the method does not introduce "L\'evy area bias". We present a simple example where the step size control can skip over previously considered times, resulting in the numerical method converging to an incorrect limit (i.e. not the Stratonovich SDE). 
Finally, we conclude with an experiment demonstrating the accuracy of Heun's method and a newly introduced Splitting Path-based Runge-Kutta scheme (SPaRK) when used with adaptive step sizes. - oai:arXiv.org:2311.14201v5 - math.NA - cs.NA - math.PR - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - James Foster, Andra\v{z} Jelin\v{c}i\v{c} - - - Detecting value-expressive text posts in Russian social media - https://arxiv.org/abs/2312.08968 - arXiv:2312.08968v3 Announce Type: replace -Abstract: Basic values are concepts or beliefs which pertain to desirable end-states and transcend specific situations. Studying personal values in social media can illuminate how and why societal values evolve especially when the stimuli-based methods, such as surveys, are inefficient, for instance, in hard-to-reach populations. On the other hand, user-generated content is driven by the massive use of stereotyped, culturally defined speech constructions rather than authentic expressions of personal values. We aimed to find a model that can accurately detect value-expressive posts in Russian social media VKontakte. A training dataset of 5,035 posts was annotated by three experts, 304 crowd-workers and ChatGPT. Crowd-workers and experts showed only moderate agreement in categorizing posts. ChatGPT was more consistent but struggled with spam detection. We applied an ensemble of human- and AI-assisted annotation involving active learning approach, subsequently trained several classification models using embeddings from various pre-trained transformer-based language models. The best performance was achieved with embeddings from a fine-tuned rubert-tiny2 model, yielding high value detection quality (F1 = 0.77, F1-macro = 0.83). This model provides a crucial step to a study of values within and between Russian social media users. 
- oai:arXiv.org:2312.08968v3 - cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Maria Milkova, Maksim Rudnev, Lidia Okolskaya - - - Spike-EVPR: Deep Spiking Residual Networks with SNN-Tailored Representations for Event-Based Visual Place Recognition - https://arxiv.org/abs/2402.10476 - arXiv:2402.10476v2 Announce Type: replace -Abstract: Event cameras are ideal for visual place recognition (VPR) in challenging environments due to their high temporal resolution and high dynamic range. However, existing methods convert sparse events into dense frame-like representations for Artificial Neural Networks (ANNs), ignoring event sparsity and incurring high computational cost. Spiking Neural Networks (SNNs) complement event data through discrete spike signals to enable energy-efficient VPR, but their application is hindered by the lack of effective spike-compatible representations and deep architectures capable of learning discriminative global descriptors. To address these limitations, we propose Spike-EVPR, a directly trained, end-to-end SNN framework tailored for event-based VPR. First, we introduce two complementary event representations, MCS-Tensor and TSS-Tensor, designed to reduce temporal redundancy while preserving essential spatio-temporal cues. Furthermore, we propose a deep spiking residual architecture that effectively aggregates these features to generate robust place descriptors. Extensive experiments on the Brisbane-Event-VPR and DDD20 datasets demonstrate that Spike-EVPR achieves state-of-the-art performance, improving Recall@1 by 7.61% and 13.20%, respectively, while significantly reducing energy consumption. 
- oai:arXiv.org:2402.10476v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zuntao Liu, Yaohui Li, Chenming Hu, Delei Kong, Junjie Jiang, Zheng Fang + Weiyi He, Yue Xing - BG-HGNN: Toward Efficient Learning for Complex Heterogeneous Graphs - https://arxiv.org/abs/2403.08207 - arXiv:2403.08207v2 Announce Type: replace -Abstract: Heterogeneous graphs, comprising diverse node and edge types connected through varied relations, are ubiquitous in real-world applications. Message-passing heterogeneous graph neural networks (HGNNs) have emerged as a powerful model class for such data. However, existing HGNNs typically allocate a separate set of learnable weights for each relation type to model relational heterogeneity. Despite their promise, these models are effective primarily on simple heterogeneous graphs with only a few relation types. In this paper, we show that this standard design inherently leads to parameter explosion (the number of learnable parameters grows rapidly with the number of relation types) and relation collapse (the model loses the ability to distinguish among different relations). These issues make existing HGNNs inefficient or impractical for complex heterogeneous graphs with many relation types. To address these challenges, we propose Blend&Grind-HGNN (BG-HGNN), a unified feature-representation framework that integrates and distills relational heterogeneity into a shared low-dimensional feature space. This design eliminates the need for relation-specific parameter sets and enables efficient, expressive learning even as the number of relations grows. Empirically, BG-HGNN achieves substantial gains over state-of-the-art HGNNs, improving parameter efficiency by up to 28.96x and training throughput by up to 110.30x, while matching or surpassing their accuracy on complex heterogeneous graphs. 
- oai:arXiv.org:2403.08207v2 + Distributional Shrinkage II: Optimal Transport Denoisers with Higher-Order Scores + https://arxiv.org/abs/2512.09295 + arXiv:2512.09295v1 Announce Type: cross +Abstract: We revisit the signal denoising problem through the lens of optimal transport: the goal is to recover an unknown scalar signal distribution $X \sim P$ from noisy observations $Y = X + \sigma Z$, with $Z$ being standard Gaussian independent of $X$ and $\sigma>0$ a known noise level. Let $Q$ denote the distribution of $Y$. We introduce a hierarchy of denoisers $T_0, T_1, \ldots, T_\infty : \mathbb{R} \to \mathbb{R}$ that are agnostic to the signal distribution $P$, depending only on higher-order score functions of $Q$. Each denoiser $T_K$ is progressively refined using the $(2K-1)$-th order score function of $Q$ at noise resolution $\sigma^{2K}$, achieving better denoising quality measured by the Wasserstein metric $W(T_K \sharp Q, P)$. The limiting denoiser $T_\infty$ identifies the optimal transport map with $T_\infty \sharp Q = P$. + We provide a complete characterization of the combinatorial structure underlying this hierarchy through Bell polynomial recursions, revealing how higher-order score functions encode the optimal transport map for signal denoising. We study two estimation strategies with convergence rates for higher-order scores from i.i.d. samples drawn from $Q$: (i) plug-in estimation via Gaussian kernel smoothing, and (ii) direct estimation via higher-order score matching. This hierarchy of agnostic denoisers opens new perspectives in signal denoising and empirical Bayes. 
+ oai:arXiv.org:2512.09295v1 + math.ST cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Junwei Su, Lingjun Mao, Zheng Da, Chuan Wu + stat.ML + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tengyuan Liang - Numerical integrators for confined Langevin dynamics - https://arxiv.org/abs/2404.16584 - arXiv:2404.16584v2 Announce Type: replace -Abstract: We derive and analyze numerical methods for underdamped (kinetic) Langevin dynamics in a domain with elastic reflection at the boundary. First-order approximations are based on an Euler-type scheme incorporating collision-handling at the boundary. To achieve second order, composition schemes are derived based on decomposition of the generator into collisional drift, impulse, and stochastic momentum evolution. In a deterministic setting, this approach would typically lead to first-order approximation, even in symmetric compositions, but we find that the stochastic method can provide second-order weak approximation with a single gradient evaluation, both at finite times and in the ergodic limit. We provide analysis of this observation, as well as numerical demonstration, and we compare and contrast the performance of different variants of the integration method using model problems. - oai:arXiv.org:2404.16584v2 - math.NA - cs.NA + Infinitesimal containment and sparse factors of iid + https://arxiv.org/abs/2512.09301 + arXiv:2512.09301v1 Announce Type: cross +Abstract: We introduce infinitesimal weak containment for measure-preserving actions of a countable group $\Gamma$: an action $(X,\mu)$ is infinitesimally contained in $(Y,\nu)$ if the statistics of the action of $\Gamma$ on small measure subsets of $X$ can be approximated inside $Y$. We show that the Bernoulli shift $[0,1]^\Gamma$ is infinitesimally contained in the left-regular action of $\Gamma$. 
For exact groups, this implies that sparse factor-of-iid subsets of $\Gamma$ are approximately hyperfinite. We use it to quantify a theorem of Chifan--Ioana on measured subrelations of the Bernoulli shift of an exact group. For the proof of infinitesimal containment we define \emph{entropy support maps}, which take a small subset $U$ of $\{0,1\}^I$ and assign weights to coordinates above every point of $U$, according to how ''important'' they are for the structure of the set. + oai:arXiv.org:2512.09301v1 + math.DS + cs.IT + math.IT math.PR - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - B. Leimkuhler, A. Sharma, M. V. Tretyakov + Miko{\l}aj Fr\k{a}czyk - CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale - https://arxiv.org/abs/2405.17537 - arXiv:2405.17537v5 Announce Type: replace -Abstract: Measuring biodiversity is crucial for understanding ecosystem health. While prior works have developed machine learning models for taxonomic classification of photographic images and DNA separately, in this work, we introduce a multimodal approach combining both, using CLIP-style contrastive learning to align images, barcode DNA, and text-based representations of taxonomic labels in a unified embedding space. This allows for accurate classification of both known and unknown insect species without task-specific fine-tuning, leveraging contrastive learning for the first time to fuse barcode DNA and image data. Our method surpasses previous single-modality approaches in accuracy by over 8% on zero-shot learning tasks, showcasing its effectiveness in biodiversity studies. 
- oai:arXiv.org:2405.17537v5 + Functional Percolation: A Perspective on Criticality of Form and Function + https://arxiv.org/abs/2512.09317 + arXiv:2512.09317v1 Announce Type: cross +Abstract: Understanding the physical constraints and minimal conditions that enable information processing in extended systems remains a central challenge across disciplines, from neuroscience and artificial intelligence to social and physical networks. Here we study how network connectivity both limits and enables information processing by analyzing random networks across the structural percolation transition. Using cascade-mediated dynamics as a minimal and universal mechanism for propagating state-dependent responses, we examine structural, functional, and information-theoretic observables as functions of mean degree in Erdos-Renyi networks. We find that the emergence of a giant connected component coincides with a sharp transition in realizable information processing: complex input-output response functions become accessible, functional diversity increases rapidly, output entropy rises, and directed information flow quantified by transfer entropy extends beyond local neighborhoods. These coincident transitions define a regime of functional percolation, referring to a sharp expansion of the space of realizable input-output functions at the structural percolation transition. Near criticality, networks exhibit a Pareto-optimal tradeoff between functional complexity and diversity, suggesting that percolation criticality provides a universal organizing principle for information processing in systems with local interactions and propagating influences. + oai:arXiv.org:2512.09317v1 + physics.soc-ph + cond-mat.stat-mech cs.AI - cs.CL - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - replace + physics.comp-ph + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - ZeMing Gong, Austin T. Wang, Xiaoliang Huo, Joakim Bruslund Haurum, Scott C. Lowe, Graham W. 
Taylor, Angel X. Chang + Galen J. Wilkerson - Neural Surrogate HMC: On Using Neural Likelihoods for Hamiltonian Monte Carlo in Simulation-Based Inference - https://arxiv.org/abs/2407.20432 - arXiv:2407.20432v2 Announce Type: replace -Abstract: Bayesian inference methods such as Markov Chain Monte Carlo (MCMC) typically require repeated computations of the likelihood function, but in some scenarios this is infeasible and alternative methods are needed. Simulation-based inference (SBI) methods address this problem by using machine learning to amortize computations. In this work, we highlight a particular synergy between the SBI method of neural likelihood estimation and the classic MCMC method of Hamiltonian Monte Carlo. We show that approximating the likelihood function with a neural network model can provide three distinct advantages: (1) amortizing the computations for MCMC; (2) providing gradients for Hamiltonian Monte Carlo, and (3) smoothing over noisy simulations resulting from numerical instabilities. We provide practical guidelines for defining a prior, sampling a training set, and evaluating convergence. The method is demonstrated in an application modeling the heliospheric transport of galactic cosmic rays, where it enables efficient inference of latent parameters in the Parker equation. - oai:arXiv.org:2407.20432v2 - cs.LG - astro-ph.HE - Wed, 10 Dec 2025 00:00:00 -0500 - replace + A Propagator-based Multi-level Monte Carlo Method for Kinetic Neutral Species in Edge Plasmas + https://arxiv.org/abs/2512.09334 + arXiv:2512.09334v1 Announce Type: cross +Abstract: We propose and investigate a new multi-level Monte Carlo scheme for numerical solutions of the kinetic Boltzmann equation for neutral species in edge plasmas. In particular, this method explicitly exploits a key structural property of neutral particle dynamics: the prevalence of frequent collisions for which the outgoing velocity is determined by local plasma parameters. 
Using this property, we derive a multi-level algorithm based on a collision event propagator and show, both analytically and through numerical experiments, that it reproduces the results of standard Monte Carlo methods. We further demonstrate that, in the context of coupled plasma-neutral edge simulations employing correlated Monte Carlo, the proposed scheme retains trajectory correlation to machine precision as the system evolves, whereas conventional methods exhibit rapid decorrelation. These results indicate that the propagator-based multi-level Monte Carlo scheme is a promising candidate for use in fully implicit Jacobian-free Newton-Krylov (JFNK) solvers for coupled plasma-neutral systems. + oai:arXiv.org:2512.09334v1 + physics.plasm-ph + cs.NA + math.NA + physics.comp-ph + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Gregory J. Parker, Maxim V. Umansky, Benjamin D. Dudson - Rolling in the deep of cognitive and AI biases - https://arxiv.org/abs/2407.21202 - arXiv:2407.21202v3 Announce Type: replace -Abstract: Nowadays, we delegate many of our decisions to Artificial Intelligence (AI) that acts either solo or as a human companion in decisions made to support several sensitive domains, like healthcare, financial services and law enforcement. AI systems, even when carefully designed to be fair, are heavily criticized for delivering misjudged and discriminatory outcomes against individuals and groups. Much work on AI algorithmic fairness is devoted to Machine Learning pipelines which address biases and quantify fairness under a purely computational view. However, the continued unfair and unjust AI outcomes indicate that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
Although the synergy of humans and machines seems imperative to make AI work, the significant impact of human and societal factors on AI bias is currently overlooked. We address this critical issue by following a radically new methodology under which human cognitive biases become core entities in our AI fairness overview. Inspired by the cognitive science definition and taxonomy of human heuristics, we identify how harmful human actions influence the overall AI lifecycle, and reveal hidden human-to-AI bias pathways. We introduce a new mapping, which justifies the reflection of human heuristics in AI biases, and we detect relevant fairness intensities and inter-dependencies. We envision that this approach will contribute to revisiting AI fairness under deeper human-centric case studies, revealing hidden biases' causes and effects. - oai:arXiv.org:2407.21202v3 - cs.AI + Phase transition to causal symmetry reveals operational autonomy in sociotechnical systems + https://arxiv.org/abs/2512.09352 + arXiv:2512.09352v1 Announce Type: cross +Abstract: Complex adaptive systems persist through continuous transformation, yet the dynamical principles governing their long-term stability remain poorly characterized. Here we analyze 50 large-scale collaborative ecosystems spanning 11,042 system-months to quantify the emergence of operational autonomy. We develop an order parameter (Gamma) measuring structural persistence amid component turnover and characterize directional coupling between organizational architecture and collective activity. Gamma exhibits a bimodal distribution (Hartigan p=0.0126; Delta BIC = 2,000), identifying two regimes: an exploratory phase of high variance and a mature phase with 1.77x variance collapse. Granger analysis reveals causal symmetrization at maturity - the structure-activity coupling ratio shifts from 0.71 (activity-driven) to 0.94 (bidirectional), indicating that architecture increasingly constrains collective coordination.
+ A viability index, combining activity and structure, outperforms activity-based prediction (AUC = 0.88 vs 0.81), identifying 'zombie' systems where high churn masks structural decay. This extends recent work by Ait et al., who identified 'zombie' projects exhibiting activity without development based on non-coding contributions. Our metric identifies structural zombies: projects where coding activity persists but fails to preserve architectural invariants. + These results establish causal symmetrization as an empirically validated signature of self-organizing autonomy applicable across complex collaborative systems - a dynamical regime previously theorized in biological contexts but here demonstrated and measured in artificial ones. + oai:arXiv.org:2512.09352v1 + physics.soc-ph cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Athena Vakali, Nicoleta Tantalaki + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Anthony Gosme - Two-Edge Connectivity via Pac-Man Gluing - https://arxiv.org/abs/2408.05282 - arXiv:2408.05282v2 Announce Type: replace -Abstract: We study the 2-edge-connected spanning subgraph (2-ECSS) problem: Given a graph $G$, compute a connected subgraph $H$ of $G$ with the minimum number of edges such that $H$ is spanning, i.e., $V(H) = V(G)$, and $H$ is 2-edge-connected, i.e., $H$ remains connected upon the deletion of any single edge, if such an $H$ exists. The $2$-ECSS problem is known to be NP-hard. In this work, we provide a polynomial-time $(\frac 5 4 + \varepsilon)$-approximation for the problem for an arbitrarily small $\varepsilon>0$, improving the previous best approximation ratio of $\frac{13}{10}+\varepsilon$. - Our improvement is based on two main innovations: First, we reduce solving the problem on general graphs to solving it on structured graphs with high vertex connectivity. 
This high vertex connectivity ensures the existence of a 4-matching across any bipartition of the vertex set with at least 10 vertices in each part. Second, we exploit this property in a later gluing step, where isolated 2-edge-connected components need to be merged without adding too many edges. Using the 4-matching property, we can repeatedly glue a huge component (containing at least 10 vertices) to other components. This step is reminiscent of the Pac-Man game, where a Pac-Man (a huge component) consumes all the dots (other components) as it moves through a maze. These two innovations lead to a significantly simpler algorithm and analysis for the gluing step compared to the previous best approximation algorithm, which required a long and tedious case analysis. - oai:arXiv.org:2408.05282v2 - cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Mohit Garg, Felix Hommelsheim, Alexander Lindermayr + Meta-learning three-factor plasticity rules for structured credit assignment with sparse feedback + https://arxiv.org/abs/2512.09366 + arXiv:2512.09366v1 Announce Type: cross +Abstract: Biological neural networks learn complex behaviors from sparse, delayed feedback using local synaptic plasticity, yet the mechanisms enabling structured credit assignment remain elusive. In contrast, artificial recurrent networks solving similar tasks typically rely on biologically implausible global learning rules or hand-crafted local updates. The space of local plasticity rules capable of supporting learning from delayed reinforcement remains largely unexplored. Here, we present a meta-learning framework that discovers local learning rules for structured credit assignment in recurrent networks trained with sparse feedback. Our approach interleaves local neo-Hebbian-like updates during task execution with an outer loop that optimizes plasticity parameters via \textbf{tangent-propagation through learning}. 
The resulting three-factor learning rules enable long-timescale credit assignment using only local information and delayed rewards, offering new insights into biologically grounded mechanisms for learning in recurrent circuits. + oai:arXiv.org:2512.09366v1 + q-bio.NC + cond-mat.dis-nn + cs.LG + physics.bio-ph + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Dimitra Maoutsa - Learning effective pruning at initialization from iterative pruning - https://arxiv.org/abs/2408.14757 - arXiv:2408.14757v2 Announce Type: replace -Abstract: Pruning at initialization (PaI) reduces training costs by removing weights before training, which becomes increasingly crucial with the growing network size. However, current PaI methods still have a large accuracy gap with iterative pruning, especially at high sparsity levels. This raises an intriguing question: can we get inspiration from iterative pruning to improve the PaI performance? In the lottery ticket hypothesis, the iterative rewind pruning (IRP) finds subnetworks retroactively by rewinding the parameter to the original initialization in every pruning iteration, which means all the subnetworks are based on the initial state. Here, we hypothesise the surviving subnetworks are more important and bridge the initial feature and their surviving score as the PaI criterion. We employ an end-to-end neural network (\textbf{AutoS}parse) to learn this correlation, input the model's initial features, output their score and then prune the lowest score parameters before training. To validate the accuracy and generalization of our method, we performed PaI across various models. Results show that our approach outperforms existing methods in high-sparsity settings. 
Notably, as the underlying logic of model pruning is consistent in different models, only one-time IRP on one model is needed (e.g., once IRP on ResNet-18/CIFAR-10, AutoS can be generalized to VGG-16/CIFAR-10, ResNet-18/TinyImageNet, et al.). As the first neural network-based PaI method, we conduct extensive experiments to validate the factors influencing this approach. These results reveal the learning tendencies of neural networks and provide new insights into our understanding and research of PaI from a practical perspective. Our code is available at: https://github.com/ChengYaofeng/AutoSparse.git. - oai:arXiv.org:2408.14757v2 + LiePrune: Lie Group and Quantum Geometric Dual Representation for One-Shot Structured Pruning of Quantum Neural Networks + https://arxiv.org/abs/2512.09469 + arXiv:2512.09469v1 Announce Type: cross +Abstract: Quantum neural networks (QNNs) and parameterized quantum circuits (PQCs) are key building blocks for near-term quantum machine learning. However, their scalability is constrained by excessive parameters, barren plateaus, and hardware limitations. We propose LiePrune, the first mathematically grounded one-shot structured pruning framework for QNNs that leverages Lie group structure and quantum geometric information. Each gate is jointly represented in a Lie group--Lie algebra dual space and a quantum geometric feature space, enabling principled redundancy detection and aggressive compression. Experiments on quantum classification (MNIST, FashionMNIST), quantum generative modeling (Bars-and-Stripes), and quantum chemistry (LiH VQE) show that LiePrune achieves over $10\times$ compression with negligible or even improved task performance, while providing provable guarantees on redundancy detection, functional approximation, and computational complexity. 
+ oai:arXiv.org:2512.09469v1 + quant-ph cs.CV - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Shengkai Liu, Yaofeng Cheng, Fusheng Zha, Wei Guo, Lining Sun, Zhenshan Bing, Chenguang Yang + Haijian Shao, Bowen Yang, Wei Liu, Xing Deng, Yingtao Jiang - Asynchronous Stochastic Approximation with Applications to Average-Reward Reinforcement Learning - https://arxiv.org/abs/2409.03915 - arXiv:2409.03915v3 Announce Type: replace -Abstract: This paper investigates the stability and convergence properties of asynchronous stochastic approximation (SA) algorithms, with a focus on extensions relevant to average-reward reinforcement learning. We first extend a stability proof method of Borkar and Meyn to accommodate more general noise conditions than previously considered, thereby yielding broader convergence guarantees for asynchronous SA. To sharpen the convergence analysis, we further examine the shadowing properties of asynchronous SA, building on a dynamical systems approach of Hirsch and Bena\"{i}m. These results provide a theoretical foundation for a class of relative value iteration-based reinforcement learning algorithms -- developed and analyzed in a companion paper -- for solving average-reward Markov and semi-Markov decision processes. - oai:arXiv.org:2409.03915v3 - cs.LG - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Huizhen Yu, Yi Wan, Richard S. Sutton + Modeling Complex Multiphysics Systems with Discrete Element Method Enriched with the Kernel-Independent Fast Multipole Method + https://arxiv.org/abs/2512.09478 + arXiv:2512.09478v1 Announce Type: cross +Abstract: The paper describes the coupling of the MercuryDPM discrete element method (DEM) code and the implementation of the kernel-independent fast multipole method (KIFMM). 
The combined simulation framework allows addressing a large class of multiscale problems, including both the mechanical interactions of particulates at the fine scale and the long-range interactions of various natures at the coarse scale. Among these are electrostatic interactions in powders, clays, and particulates, magnetic interactions in ferromagnetic granulates, and gravitational interactions in asteroid clouds. The formalism of rigid clumps is successfully combined with KIFMM, enabling the treatment of problems involving complex long-range interactions between non-spherical particles with arbitrary charge distributions. The capabilities of our technique are demonstrated in several application examples. + oai:arXiv.org:2512.09478v1 + cond-mat.soft + cs.NA + math.NA + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Igor A. Ostanin - SimSUM: Simulated Benchmark with Structured and Unstructured Medical Records - https://arxiv.org/abs/2409.08936 - arXiv:2409.08936v4 Announce Type: replace -Abstract: Clinical information extraction, which involves structuring clinical concepts from unstructured medical text, remains a challenging problem that could benefit from the inclusion of tabular background information available in electronic health records. Existing open-source datasets lack explicit links between structured features and clinical concepts in the text, motivating the need for a new research dataset. We introduce SimSUM, a benchmark dataset of 10,000 simulated patient records that link unstructured clinical notes with structured background variables. Each record simulates a patient encounter in the domain of respiratory diseases and includes tabular data (e.g., symptoms, diagnoses, underlying conditions) generated from a Bayesian network whose structure and parameters are defined by domain experts.
A large language model (GPT-4o) is prompted to generate a clinical note describing the encounter, including symptoms and relevant context. These notes are annotated with span-level symptom mentions. We conduct an expert evaluation to assess note quality and run baseline predictive models on both the tabular and textual data. The SimSUM dataset is primarily designed to support research on clinical information extraction in the presence of tabular background variables, which can be linked through domain knowledge to concepts of interest to be extracted from the text -- namely, symptoms in the case of SimSUM. Secondary uses include research on the automation of clinical reasoning over both tabular data and text, causal effect estimation in the presence of tabular and/or textual confounders, and multi-modal synthetic data generation. SimSUM is not intended for training clinical decision support systems or production-grade models, but rather to facilitate reproducible research in a simplified and controlled setting. - oai:arXiv.org:2409.08936v4 - cs.AI - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Paloma Rabaey, Stefan Heytens, Thomas Demeester + Estimation of Stochastic Optimal Transport Maps + https://arxiv.org/abs/2512.09499 + arXiv:2512.09499v1 Announce Type: cross +Abstract: The optimal transport (OT) map is a geometry-driven transformation between high-dimensional probability distributions which underpins a wide range of tasks in statistics, applied probability, and machine learning. However, existing statistical theory for OT map estimation is quite restricted, hinging on Brenier's theorem (quadratic cost, absolutely continuous source) to guarantee existence and uniqueness of a deterministic OT map, on which various additional regularity assumptions are imposed to obtain quantitative error bounds. 
In many real-world problems these conditions fail or cannot be certified, in which case optimal transportation is possible only via stochastic maps that can split mass. To broaden the scope of map estimation theory to such settings, this work introduces a novel metric for evaluating the transportation quality of stochastic maps. Under this metric, we develop computationally efficient map estimators with near-optimal finite-sample risk bounds, subject to easy-to-verify minimal assumptions. Our analysis further accommodates common forms of adversarial sample contamination, yielding estimators with robust estimation guarantees. Empirical experiments are provided which validate our theory and demonstrate the utility of the proposed framework in settings where existing theory fails. These contributions constitute the first general-purpose theory for map estimation, compatible with a wide spectrum of real-world applications where optimal transport may be intrinsically stochastic. + oai:arXiv.org:2512.09499v1 + stat.ML + cs.LG + math.ST + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sloan Nietert, Ziv Goldfeld - Beyond accuracy: quantifying the reliability of Multiple Instance Learning for Whole Slide Image classification - https://arxiv.org/abs/2409.11110 - arXiv:2409.11110v3 Announce Type: replace -Abstract: Machine learning models have become integral to many fields, but their reliability, defined as producing dependable, trustworthy, and domain-consistent predictions, remains a critical concern. Multiple Instance Learning (MIL) models designed for Whole Slide Image (WSI) classification in computational pathology are rarely evaluated in terms of reliability, leaving a key gap in understanding their suitability for high-stakes applications like clinical decision-making. 
In this paper, we address this gap by introducing three quantitative metrics for reliability assessment and applying them to several widely used MIL architectures across three region-wise annotated pathology datasets. Our findings indicate that the mean pooling instance (MEAN-POOL-INS) model demonstrates superior reliability compared to other networks, despite its simple architectural design and computational efficiency. These findings underscore the need for reliability evaluation alongside predictive performance in MIL models and establish MEAN-POOL-INS as a strong, trustworthy baseline for future research. - oai:arXiv.org:2409.11110v3 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - 10.1371/journal.pone.0337261 - PloS one. 2025 Dec 5;20(12):e0337261 - Hassan Keshvarikhojasteh, Marc Aubreville, Christof A. Bertram, Josien P. W. Pluim, Mitko Veta + Coloring Geometric Hypergraphs: A Survey + https://arxiv.org/abs/2512.09509 + arXiv:2512.09509v1 Announce Type: cross +Abstract: The \emph{chromatic number} of a hypergraph is the smallest number of colors needed to color the vertices such that no edge of at least two vertices is monochromatic. Given a family of geometric objects $\mathcal{F}$ that covers a subset $S$ of the Euclidean space, we can associate it with a hypergraph whose vertex set is $\mathcal F$ and whose edges are those subsets ${\mathcal{F}'}\subset \mathcal F$ for which there exists a point $p\in S$ such that ${\mathcal F}'$ consists of precisely those elements of $\mathcal{F}$ that contain $p$. The question whether $\mathcal F$ can be split into 2 coverings is equivalent to asking whether the chromatic number of the hypergraph is equal to 2. + There are a number of competing notions of the chromatic number that lead to deep combinatorial questions already for abstract hypergraphs.
In this paper, we concentrate on \emph{geometrically defined} (in short, \emph{geometric}) hypergraphs, and survey many recent coloring results related to them. In particular, we study and survey the following problem, dual to the above covering question. Given a set of points $S$ in the Euclidean space and a family $\mathcal{F}$ of geometric objects of a fixed type, define a hypergraph ${\mathcal H}_m$ on the point set $S$, whose edges are the subsets of $S$ that can be obtained as the intersection of $S$ with a member of $\mathcal F$ and have at least $m$ elements. Is it true that if $m$ is large enough, then the chromatic number of ${\mathcal H}_m$ is equal to 2? + oai:arXiv.org:2512.09509v1 + math.CO + cs.CG + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + G\'abor Dam\'asdi, Bal\'azs Keszegh, J\'anos Pach, D\"om\"ot\"or P\'alv\"olgyi, G\'eza T\'oth - Learning Generalized Hamiltonians using fully Symplectic Mappings - https://arxiv.org/abs/2409.11138 - arXiv:2409.11138v3 Announce Type: replace -Abstract: Many important physical systems can be described as the evolution of a Hamiltonian system, which has the important property of being conservative, that is, energy is conserved throughout the evolution. Physics Informed Neural Networks and in particular Hamiltonian Neural Networks have emerged as a mechanism to incorporate structural inductive bias into the NN model. By ensuring physical invariances are conserved, the models exhibit significantly better sample complexity and out-of-distribution accuracy than standard NNs. Learning the Hamiltonian as a function of its canonical variables, typically position and velocity, from sample observations of the system thus becomes a critical task in system identification and long-term prediction of system behavior. 
However, to truly preserve the long-run physical conservation properties of Hamiltonian systems, one must use symplectic integrators for a forward pass of the system's simulation. While symplectic schemes have been used in the literature, they are thus far limited to situations when they reduce to explicit algorithms, which include the case of separable Hamiltonians or augmented non-separable Hamiltonians. We extend it to generalized non-separable Hamiltonians, and noting the self-adjoint property of symplectic integrators, we bypass computationally intensive backpropagation through an ODE solver. We show that the method is robust to noise and provides a good approximation of the system Hamiltonian when the state variables are sampled from a noisy observation. In the numerical results, we show the performance of the method concerning Hamiltonian reconstruction and conservation, indicating its particular advantage for non-separable systems. - oai:arXiv.org:2409.11138v3 + Transport Novelty Distance: A Distributional Metric for Evaluating Material Generative Models + https://arxiv.org/abs/2512.09514 + arXiv:2512.09514v1 Announce Type: cross +Abstract: Recent advances in generative machine learning have opened new possibilities for the discovery and design of novel materials. However, as these models become more sophisticated, the need for rigorous and meaningful evaluation metrics has grown. Existing evaluation approaches often fail to capture both the quality and novelty of generated structures, limiting our ability to assess true generative performance. In this paper, we introduce the Transport Novelty Distance (TNovD) to judge generative models used for materials discovery jointly by the quality and novelty of the generated materials. Based on ideas from Optimal Transport theory, TNovD uses a coupling between the features of the training and generated sets, which is refined into a quality and memorization regime by a threshold. 
The features are generated from crystal structures using a graph neural network that is trained to distinguish between materials, their augmented counterparts, and differently sized supercells using contrastive learning. We evaluate our proposed metric on typical toy experiments relevant for crystal structure prediction, including memorization, noise injection and lattice deformations. Additionally, we validate the TNovD on the MP20 validation set and the WBM substitution dataset, demonstrating that it is capable of detecting both memorized and low-quality material data. We also benchmark the performance of several popular material generative models. While introduced for materials, our TNovD framework is domain-agnostic and can be adapted for other areas, such as images and molecules. + oai:arXiv.org:2512.09514v1 + cond-mat.mtrl-sci cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Harsh Choudhary, Chandan Gupta, Vyacheslav Kungurtsev, Melvin Leok, Georgios Korpas + Paul Hagemann, Simon M\"uller, Janine George, Philipp Benner - Geometry Aware Meta-Learning Neural Network for Joint Phase and Precoder Optimization in RIS - https://arxiv.org/abs/2409.11270 - arXiv:2409.11270v2 Announce Type: replace -Abstract: In reconfigurable intelligent surface (RIS) aided systems, the joint optimization of the precoder matrix at the base station and the phase shifts of the RIS elements involves significant complexity. In this paper, we propose a complex-valued, geometry aware meta-learning neural network that maximizes the weighted sum rate in a multi-user multiple input single output system. By leveraging the complex circle geometry for phase shifts and spherical geometry for the precoder, the optimization occurs on Riemannian manifolds, leading to faster convergence. We use a complex-valued neural network for phase shifts and an Euler inspired update for the precoder network. 
Our approach outperforms existing neural network-based algorithms, offering higher weighted sum rates, lower power consumption, and significantly faster convergence. Specifically, it converges faster by nearly 100 epochs, with a 0.7 bps improvement in weighted sum rate and a 1.8 dB power gain when compared with existing work. Further, it outperforms the state-of-the-art alternating optimization algorithm by 0.86 bps with a 2.6 dB power gain. - oai:arXiv.org:2409.11270v2 + NeuroSketch: An Effective Framework for Neural Decoding via Systematic Architectural Optimization + https://arxiv.org/abs/2512.09524 + arXiv:2512.09524v1 Announce Type: cross +Abstract: Neural decoding, a critical component of Brain-Computer Interface (BCI), has recently attracted increasing research interest. Previous research has focused on leveraging signal processing and deep learning methods to enhance neural decoding performance. However, in-depth exploration of model architectures remains lacking, despite its proven effectiveness in other tasks such as energy forecasting and image classification. In this study, we propose NeuroSketch, an effective framework for neural decoding via systematic architecture optimization. Starting with the basic architecture study, we find that CNN-2D outperforms other architectures in neural decoding tasks and explore its effectiveness from temporal and spatial perspectives. Building on this, we optimize the architecture from macro- to micro-level, achieving improvements in performance at each step. The exploration process and model validations comprise over 5,000 experiments spanning three distinct modalities (visual, auditory, and speech), three types of brain signals (EEG, SEEG, and ECoG), and eight diverse decoding tasks. Experimental results indicate that NeuroSketch achieves state-of-the-art (SOTA) performance across all evaluated datasets, positioning it as a powerful tool for neural decoding. &#x0D;
Our code and scripts are available at https://github.com/Galaxy-Dawn/NeuroSketch. + oai:arXiv.org:2512.09524v1 + q-bio.NC + cs.AI cs.LG eess.SP - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Dahlia Devapriya, Aparna V C, Sheetal Kalyani - - - 2Fast-2Lamaa: Large-Scale Lidar-Inertial Localization and Mapping with Continuous Distance Fields - https://arxiv.org/abs/2410.05433 - arXiv:2410.05433v2 Announce Type: replace -Abstract: This paper introduces 2Fast-2Lamaa, a lidar-inertial state estimation framework for odometry, mapping, and localization. Its first key component is the optimization-based undistortion of lidar scans, which uses continuous IMU preintegration to model the system's pose at every lidar point timestamp. The continuous trajectory over 100-200ms is parameterized only by the initial scan conditions (linear velocity and gravity orientation) and IMU biases, yielding eleven state variables. These are estimated by minimizing point-to-line and point-to-plane distances between lidar-extracted features without relying on previous estimates, resulting in a prior-less motion-distortion correction strategy. Because the method performs local state estimation, it directly provides scan-to-scan odometry. To maintain geometric consistency over longer periods, undistorted scans are used for scan-to-map registration. The map representation employs Gaussian Processes to form a continuous distance field, enabling point-to-surface distance queries anywhere in space. Poses of the undistorted scans are refined by minimizing these distances through non-linear least-squares optimization. For odometry and mapping, the map is built incrementally in real time; for pure localization, existing maps are reused. The incremental map construction also includes mechanisms for removing dynamic objects. We benchmark 2Fast-2Lamaa on 250km (over 10h) of public and self-collected datasets from both automotive and handheld systems. 
The framework achieves state-of-the-art performance across diverse and challenging scenarios, reaching odometry and localization errors as low as 0.27% and 0.06 m, respectively. The real-time implementation is publicly available at https://github.com/clegenti/2fast2lamaa. - oai:arXiv.org:2410.05433v2 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Cedric Le Gentil, Raphael Falque, Daniil Lisus, Timothy D. Barfoot + Gaorui Zhang, Zhizhang Yuan, Jialan Yang, Junru Chen, Li Meng, Yang Yang - PP-GWAS: Privacy Preserving Multi-Site Genome-wide Association Studies - https://arxiv.org/abs/2410.08122 - arXiv:2410.08122v2 Announce Type: replace -Abstract: Genome-wide association studies are pivotal in understanding the genetic underpinnings of complex traits and diseases. Collaborative, multi-site GWAS aim to enhance statistical power but face obstacles due to the sensitive nature of genomic data sharing. Current state-of-the-art methods provide a privacy-focused approach utilizing computationally expensive methods such as Secure Multi-Party Computation and Homomorphic Encryption. In this context, we present a novel algorithm, PP-GWAS, designed to improve upon existing standards in terms of computational efficiency and scalability without sacrificing data privacy. This algorithm employs randomized encoding within a distributed architecture to perform stacked ridge regression on a Linear Mixed Model to ensure rigorous analysis. Experimental evaluation with real-world and synthetic data indicates that PP-GWAS can achieve computational speeds twice as fast as similar state-of-the-art algorithms while using fewer computational resources, all while adhering to a robust security model that caters to an all-but-one semi-honest adversary setting. &#x0D;
We have assessed its performance using various datasets, emphasizing its potential in facilitating more efficient and private genomic analyses. - oai:arXiv.org:2410.08122v2 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Arjhun Swaminathan, Anika Hannemann, Ali Burak \"Unal, Nico Pfeifer, Mete Akg\"un - - - Kamp Theorem for Pomset Languages of Higher Dimensional Automata - https://arxiv.org/abs/2410.12493 - arXiv:2410.12493v4 Announce Type: replace -Abstract: Temporal logics are a powerful tool to specify properties of computational systems. For concurrent programs, Higher Dimensional Automata (HDA) are a very expressive model of non-interleaving concurrency. HDA recognize languages of partially ordered multisets, or pomsets. Recent work has shown that Monadic Second Order (MSO) logic is as expressive as HDA for pomset languages. In the case of words, Kamp's theorem states that First Order (FO) logic is as expressive as Linear Temporal Logic (LTL). In this paper, we extend this result to pomsets. To do so, we first investigate the class of pomset languages that are definable in FO. As expected, this is a strict subclass of MSO-definable languages. Then, we define a Linear Temporal Logic for pomsets, and show that it is equivalent to FO. - oai:arXiv.org:2410.12493v4 - cs.FL - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Transformers for Tabular Data: A Training Perspective of Self-Attention via Optimal Transport + https://arxiv.org/abs/2512.09530 + arXiv:2512.09530v1 Announce Type: cross +Abstract: This thesis examines self-attention training through the lens of Optimal Transport (OT) and develops an OT-based alternative for tabular classification. The study tracks intermediate projections of the self-attention layer during training and evaluates their evolution using discrete OT metrics, including Wasserstein distance, Monge gap, optimality, and efficiency. 
Experiments are conducted on classification tasks with two and three classes, as well as on a biomedical dataset. + Results indicate that the final self-attention mapping often approximates the OT optimal coupling, yet the training trajectory remains inefficient. Pretraining the MLP section on synthetic data partially improves convergence but is sensitive to their initialization. To address these limitations, an OT-based algorithm is introduced: it generates class-specific dummy Gaussian distributions, computes an OT alignment with the data, and trains an MLP to generalize this mapping. The method achieves accuracy comparable to Transformers while reducing computational cost and scaling more efficiently under standardized inputs, though its performance depends on careful dummy-geometry design. All experiments and implementations are conducted in R. + oai:arXiv.org:2512.09530v1 + stat.ML + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Emily Clement, Enzo Erlich, J\'er\'emy Ledent + Antonio Candelieri, Alessandro Quadrio - Score-based Conditional Out-of-Distribution Augmentation for Graph Covariate Shift - https://arxiv.org/abs/2410.17506 - arXiv:2410.17506v2 Announce Type: replace -Abstract: Distribution shifts between training and testing datasets significantly impair the model performance on graph learning. A commonly-taken causal view in graph invariant learning suggests that stable predictive features of graphs are causally associated with labels, whereas varying environmental features lead to distribution shifts. In particular, covariate shifts caused by unseen environments in test graphs underscore the critical need for out-of-distribution (OOD) generalization. Existing graph augmentation methods designed to address the covariate shift often disentangle the stable and environmental features in the input space, and selectively perturb or mixup the environmental features. 
However, such perturbation-based methods heavily rely on an accurate separation of stable and environmental features, and their exploration ability is confined to existing environmental features in the training distribution. To overcome these limitations, we introduce a novel distributional augmentation approach enabled by a tailored score-based conditional graph generation strategy to explore and synthesize unseen environments while preserving the validity and stable features of overall graph patterns. Our comprehensive empirical evaluations demonstrate the enhanced effectiveness of our method in improving graph OOD generalization. - oai:arXiv.org:2410.17506v2 + Don't Throw Away Your Beams: Improving Consistency-based Uncertainties in LLMs via Beam Search + https://arxiv.org/abs/2512.09538 + arXiv:2512.09538v1 Announce Type: cross +Abstract: Consistency-based methods have emerged as an effective approach to uncertainty quantification (UQ) in large language models. These methods typically rely on several generations obtained via multinomial sampling, measuring their agreement level. However, in short-form QA, multinomial sampling is prone to producing duplicates due to peaked distributions, and its stochasticity introduces considerable variance in uncertainty estimates across runs. We introduce a new family of methods that employ beam search to generate candidates for consistency-based UQ, yielding improved performance and reduced variance compared to multinomial sampling. We also provide a theoretical lower bound on the beam set probability mass under which beam search achieves a smaller error than multinomial sampling. We empirically evaluate our approach on six QA datasets and find that its consistent improvements over multinomial sampling lead to state-of-the-art UQ performance. &#x0D;
+ oai:arXiv.org:2512.09538v1 + stat.ML + cs.CL cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Bohan Wang, Yurui Chang, Wei Jin, Lu Lin + Ekaterina Fadeeva, Maiya Goloburda, Aleksandr Rubashevskii, Roman Vashurin, Artem Shelmanov, Preslav Nakov, Mrinmaya Sachan, Maxim Panov - Learning Geodesics of Geometric Shape Deformations From Images - https://arxiv.org/abs/2410.18797 - arXiv:2410.18797v3 Announce Type: replace -Abstract: This paper presents a novel method, named geodesic deformable networks (GDN), that for the first time enables the learning of geodesic flows of deformation fields derived from images. In particular, the capability of our proposed GDN being able to predict geodesics is important for quantifying and comparing deformable shape presented in images. The geodesic deformations, also known as optimal transformations that align pairwise images, are often parameterized by a time sequence of smooth vector fields governed by nonlinear differential equations. A bountiful literature has been focusing on learning the initial conditions (e.g., initial velocity fields) based on registration networks. However, the definition of geodesics central to deformation-based shape analysis is blind to the networks. To address this problem, we carefully develop an efficient neural operator to treat the geodesics as unknown mapping functions learned from the latent deformation spaces. A composition of integral operators and smooth activation functions is then formulated to effectively approximate such mappings. In contrast to previous works, our GDN jointly optimizes a newly defined geodesic loss, which adds additional benefits to promote the network regularizability and generalizability. We demonstrate the effectiveness of GDN on both 2D synthetic data and 3D real brain magnetic resonance imaging (MRI). 
- oai:arXiv.org:2410.18797v3 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - replace + A tensor phase theory with applications in multilinear control + https://arxiv.org/abs/2512.09559 + arXiv:2512.09559v1 Announce Type: cross +Abstract: The purpose of this paper is to initiate a phase theory for tensors under the Einstein product, and explore its applications in multilinear control systems. Firstly, the sectorial tensor decomposition for sectorial tensors is derived, which allows us to define phases for sectorial tensors. A numerical procedure for computing phases of a sectorial tensor is also proposed. Secondly, the maximin and minimax expressions for tensor phases are given, which are used to quantify how close the phases of a sectorial tensor are to those of its compressions. Thirdly, the compound spectrum, compound numerical ranges and compound angular numerical ranges of two sectorial tensors $A,B$ are defined and characterized in terms of the compound numerical ranges and compound angular numerical ranges of the sectorial tensors $A,B$. Fourthly, it is shown that the angles of eigenvalues of the product of two sectorial tensors are upper bounded by the sum of their individual phases. Finally, based on the tensor phase theory developed above, a tensor version of the small phase theorem is presented, which can be regarded as a natural generalization of the matrix case, recently proposed in Ref. [10]. The results offer powerful new tools for the stability and robustness analysis of multilinear feedback control systems. + oai:arXiv.org:2512.09559v1 + math.OC + cs.SY + eess.SY + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - 10.59275/j.melba.2025-f9f4 - Machine.Learning.for.Biomedical.Imaging. 
3 (2025) - Nian Wu, Miaomiao Zhang - - - Deep Learning, Machine Learning -- Digital Signal and Image Processing: From Theory to Application - https://arxiv.org/abs/2410.20304 - arXiv:2410.20304v2 Announce Type: replace -Abstract: Digital Signal Processing (DSP) and Digital Image Processing (DIP) with Machine Learning (ML) and Deep Learning (DL) are popular research areas in Computer Vision and related fields. We highlight transformative applications in image enhancement, filtering techniques, and pattern recognition. By integrating frameworks like the Discrete Fourier Transform (DFT), Z-Transform, and Fourier Transform methods, we enable robust data manipulation and feature extraction essential for AI-driven tasks. Using Python, we implement algorithms that optimize real-time data processing, forming a foundation for scalable, high-performance solutions in computer vision. This work illustrates the potential of ML and DL to advance DSP and DIP methodologies, contributing to artificial intelligence, automated feature extraction, and applications across diverse domains. - oai:arXiv.org:2410.20304v2 - cs.CV - cs.GR - eess.IV - eess.SP - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Weiche Hsieh, Ziqian Bi, Junyu Liu, Benji Peng, Sen Zhang, Xuanhe Pan, Jiawei Xu, Jinlang Wang, Keyu Chen, Caitlyn Heqi Yin, Pohsun Feng, Yizhu Wen, Tianyang Wang, Ming Li, Jintao Ren, Xinyuan Song, Qian Niu, Silin Chen, Ming Liu + Chengdong Liu, Yimin Wei, Guofeng Zhang - Architecture Proposal for 6G Systems Integrating Sensing and Communication - https://arxiv.org/abs/2411.10138 - arXiv:2411.10138v2 Announce Type: replace -Abstract: Integrating sensing functionality into 6G communication networks requires some changes to existing components as well as new entities processing the radar sensing signals received by the communication antennas. 
This whitepaper provides a comprehensive overview of the 6G design proposal for ISaC (Integrated Sensing and Communication). The whitepaper has been created by the architecture group of the KOMSENS-6G project with the intent to serve as a basis for further discussions and alignment across innovative 6G projects. - oai:arXiv.org:2411.10138v2 - cs.NI - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Lazy Diffusion: Mitigating spectral collapse in generative diffusion-based stable autoregressive emulation of turbulent flows + https://arxiv.org/abs/2512.09572 + arXiv:2512.09572v1 Announce Type: cross +Abstract: Turbulent flows possess broadband, power-law spectra in which multiscale interactions couple high-wavenumber fluctuations to large-scale dynamics. Although diffusion-based generative models offer a principled probabilistic forecasting framework, we show that standard DDPMs induce a fundamental \emph{spectral collapse}: a Fourier-space analysis of the forward SDE reveals a closed-form, mode-wise signal-to-noise ratio (SNR) that decays monotonically in wavenumber $|k|$ for spectra $S(k)\!\propto\!|k|^{-\lambda}$, rendering high-wavenumber modes indistinguishable from noise and producing an intrinsic spectral bias. We reinterpret the noise schedule as a spectral regularizer and introduce power-law schedules $\beta(\tau)\!\propto\!\tau^\gamma$ that preserve fine-scale structure deeper into diffusion time, along with \emph{Lazy Diffusion}, a one-step distillation method that leverages the learned score geometry to bypass long reverse-time trajectories and prevent high-$k$ degradation. Applied to high-Reynolds-number 2D Kolmogorov turbulence and $1/12^\circ$ Gulf of Mexico ocean reanalysis, these methods resolve spectral collapse, stabilize long-horizon autoregression, and restore physically realistic inertial-range scaling. &#x0D;
Together, they show that na\"ive Gaussian scheduling is structurally incompatible with power-law physics and that physics-aware diffusion processes can yield accurate, efficient, and fully probabilistic surrogates for multiscale dynamical systems. + oai:arXiv.org:2512.09572v1 + physics.flu-dyn + cs.AI + math.DS + nlin.CD + physics.ao-ph + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Peter Gersing, Mark Doll, Joerg Huschke, Jan Plachy, Stefan K\"opsell + Anish Sambamurthy, Ashesh Chattopadhyay - NumCoKE: Ordinal-Aware Numerical Reasoning over Knowledge Graphs with Mixture-of-Experts and Contrastive Learning - https://arxiv.org/abs/2411.12950 - arXiv:2411.12950v3 Announce Type: replace -Abstract: Knowledge graphs (KGs) serve as a vital backbone for a wide range of AI applications, including natural language understanding and recommendation. A promising yet underexplored direction is numerical reasoning over KGs, which involves inferring new facts by leveraging not only symbolic triples but also numerical attribute values (e.g., length, weight). However, existing methods fall short in two key aspects: (1) Incomplete semantic integration: Most models struggle to jointly encode entities, relations, and numerical attributes in a unified representation space, limiting their ability to extract relation-aware semantics from numeric information. (2) Ordinal indistinguishability: Due to subtle differences between close values and sampling imbalance, models often fail to capture fine-grained ordinal relationships (e.g., longer, heavier), especially in the presence of hard negatives. To address these challenges, we propose NumCoKE, a numerical reasoning framework for KGs based on Mixture-of-Experts and Ordinal Contrastive Embedding. 
To overcome (1), we introduce a Mixture-of-Experts Knowledge-Aware (MoEKA) encoder that jointly aligns symbolic and numeric components into a shared semantic space, while dynamically routing attribute features to relation-specific experts. To handle (2), we propose Ordinal Knowledge Contrastive Learning (OKCL), which constructs ordinal-aware positive and negative samples using prior knowledge, enabling the model to better discriminate subtle semantic shifts. Extensive experiments on three public KG benchmarks demonstrate that NumCoKE consistently outperforms competitive baselines across diverse attribute distributions, validating its superiority in both semantic integration and ordinal reasoning. - oai:arXiv.org:2411.12950v3 + Graph-Based Bayesian Optimization for Quantum Circuit Architecture Search with Uncertainty Calibrated Surrogates + https://arxiv.org/abs/2512.09586 + arXiv:2512.09586v1 Announce Type: cross +Abstract: Quantum circuit design is a key bottleneck for practical quantum machine learning on complex, real-world data. We present an automated framework that discovers and refines variational quantum circuits (VQCs) using graph-based Bayesian optimization with a graph neural network (GNN) surrogate. Circuits are represented as graphs and mutated and selected via an expected improvement acquisition function informed by surrogate uncertainty with Monte Carlo dropout. Candidate circuits are evaluated with a hybrid quantum-classical variational classifier on the next generation firewall telemetry and network internet of things (NF-ToN-IoT-V2) cybersecurity dataset, after feature selection and scaling for quantum embedding. We benchmark our pipeline against an MLP-based surrogate, random search, and greedy GNN selection. The GNN-guided optimizer consistently finds circuits with lower complexity and competitive or superior classification accuracy compared to all baselines. &#x0D;
Robustness is assessed via a noise study across standard quantum noise channels, including amplitude damping, phase damping, thermal relaxation, depolarizing, and readout bit flip noise. The implementation is fully reproducible, with time benchmarking and export of the best-found circuits, providing a scalable and interpretable route to automated quantum circuit discovery. + oai:arXiv.org:2512.09586v1 quant-ph cs.AI cs.LG cs.NE cs.NI Thu, 11 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ Prashant Kumar Choudhary, Nouhaila Innan, Muhammad Shafique, Rajeev Singh - Fast Switching in Mixed-Integer Model Predictive Control - https://arxiv.org/abs/2411.19300 - arXiv:2411.19300v4 Announce Type: replace -Abstract: We deduce stability results for finite control set and mixed-integer model predictive control with a downstream oversampling phase. The presentation rests upon the inherent robustness of model predictive control with stabilizing terminal conditions and techniques for solving mixed-integer optimal control problems by continuous optimization. Partial outer convexification and binary relaxation transform mixed-integer problems into common optimal control problems. We deduce nominal asymptotic stability for the resulting relaxed system formulation and implement sum-up rounding to efficiently restore integer feasibility on an oversampling time grid. If fast control switching is technically possible and inexpensive, we can approximate the relaxed system behavior in the state space arbitrarily closely. We integrate input-perturbed model predictive control with practical asymptotic stability. Numerical experiments illustrate the practical relevance of fast control switching. &#x0D;
- oai:arXiv.org:2411.19300v4 - eess.SY - cs.SY + The Ky Fan Norms and Beyond: Dual Norms and Combinations for Matrix Optimization + https://arxiv.org/abs/2512.09678 + arXiv:2512.09678v1 Announce Type: cross +Abstract: In this article, we explore the use of various matrix norms for optimizing functions of weight matrices, a crucial problem in training large language models. Moving beyond the spectral norm underlying the Muon update, we leverage duals of the Ky Fan $k$-norms to introduce a family of Muon-like algorithms we name Fanions, which are closely related to Dion. By working with duals of convex combinations of the Ky Fan $k$-norms with either the Frobenius norm or the $l_\infty$ norm, we construct the families of F-Fanions and S-Fanions, respectively. Their most prominent members are F-Muon and S-Muon. We complement our theoretical analysis with an extensive empirical study of these algorithms across a wide range of tasks and settings, demonstrating that F-Muon and S-Muon consistently match Muon's performance, while outperforming vanilla Muon on a synthetic linear least squares problem. + oai:arXiv.org:2512.09678v1 math.OC - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Artemi Makarow, Christian Kirches - - - Optimal Transmission Switching and Busbar Splitting in Hybrid AC/DC Grids - https://arxiv.org/abs/2412.00270 - arXiv:2412.00270v2 Announce Type: replace -Abstract: Driven by global climate goals, an increasing amount of Renewable Energy Sources (RES) is currently being installed worldwide. Especially in the context of offshore wind integration, hybrid AC/DC grids are considered to be the most effective technology to transmit this RES power over long distances. As hybrid AC/DC systems develop, they are expected to become increasingly complex and meshed as the current AC system. 
Nevertheless, there is still limited literature on how to optimize hybrid AC/DC topologies while minimizing the total power generation cost. For this reason, this paper proposes a methodology to optimize the steady-state switching states of transmission lines and busbar configurations in hybrid AC/DC grids. The proposed optimization model includes optimal transmission switching (OTS) and busbar splitting (BS), which can be applied to both AC and DC parts of hybrid AC/DC grids. To solve the problem, a scalable and exact nonlinear, non-convex model using a big M approach is formulated. In addition, convex relaxations and linear approximations of the model are tested, and their accuracy, feasibility, and optimality are analyzed. The numerical experiments show that a solution to the combined OTS/BS problem can be found in acceptable computation time and that the investigated relaxations and linearisations provide AC feasible results. - oai:arXiv.org:2412.00270v2 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 - replace + cs.AI + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Giacomo Bastianel, Marta Vanin, Dirk Van Hertem, Hakan Ergun + Alexey Kravatskiy, Ivan Kozyrev, Nikolai Kozlov, Alexander Vinogradov, Daniil Merkulov, Ivan Oseledets - GLL: A Differentiable Graph Learning Layer for Neural Networks - https://arxiv.org/abs/2412.08016 - arXiv:2412.08016v2 Announce Type: replace -Abstract: Standard deep learning architectures used for classification generate label predictions with a projection head and softmax activation function. Although successful, these methods fail to leverage the relational information between samples for generating label predictions. In recent works, graph-based learning techniques, namely Laplace learning, have been heuristically combined with neural networks for both supervised and semi-supervised learning (SSL) tasks. 
However, prior works approximate the gradient of the loss function with respect to the graph learning algorithm or decouple the processes; end-to-end integration with neural networks is not achieved. In this work, we derive backpropagation equations, via the adjoint method, for inclusion of a general family of graph learning layers into a neural network. The resulting method, distinct from graph neural networks, allows us to precisely integrate similarity graph construction and graph Laplacian-based label propagation into a neural network layer, replacing a projection head and softmax activation function for general classification tasks. Our experimental results demonstrate smooth label transitions across data, improved generalization and robustness to adversarial attacks, and improved training dynamics compared to a standard softmax-based approach. - oai:arXiv.org:2412.08016v2 cs.LG stat.ML Wed, 10 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Jason Brown, Bohan Chen, Harris Hardiman-Mostow, Jeff Calder, Andrea L. Bertozzi + The non-existence of some Moore polygons and spectral Moore bounds + https://arxiv.org/abs/2512.09680 + arXiv:2512.09680v1 Announce Type: cross +Abstract: In this paper, we study the maximum order $v(k,\theta)$ of a connected $k$-regular graph whose second largest eigenvalue is at most $\theta$. From Alon-Boppana and Serre, we know that $v(k,\theta)$ is finite when $\theta < 2\sqrt{k-1}$ while the work of Marcus, Spielman, and Srivastava implies that $v(k,\theta)$ is infinite if $\theta\geq 2\sqrt{k-1}$. Cioab\u{a}, Koolen, Nozaki, and Vermette obtained a general upper bound on $v(k, \theta)$ via Nozaki's linear programming bound and determined many values of $v(k,\theta)$. The graphs attaining this bound are distance-regular and are called Moore polygons. Damerell and Georgiacodis proved that there are no Moore polygons of diameter $6$ or more. For smaller diameters, there are infinitely many Moore polygons. + We complement these results by proving two nonexistence results for Moore polygons with specific parameters. &#x0D;
We also determine new values of $v(k,\theta)$: $v(4, \sqrt{2}) = 14$ and $v(5, \sqrt{2}) = v(5,\sqrt{5}-1)=16$. The former is achieved by the co-Heawood graph, and the latter by the folded $5$-cube. We verify that any connected $5$-regular graph with second eigenvalue $\lambda_2$ exceeding $1$ satisfies $\lambda_2 \geq \sqrt{5} - 1$, and that the unique $5$-regular graph attaining equality in this bound has $10$ vertices. We prove a stronger form of a 2015 conjecture of Kolokolnikov related to the second eigenvalue of cubic graphs of given order, and observe that other recent results on the second eigenvalue of regular graphs are consequences of the general upper bound theorem on $v(k,\theta)$ mentioned above. + oai:arXiv.org:2512.09680v1 math.CO cs.DM Thu, 11 Dec 2025 00:00:00 -0500 cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Sebastian M. Cioab\u{a}, Vishal Gupta, Hiroshi Nozaki, Ziqing Xiang - Private Synthetic Data Generation in Bounded Memory - https://arxiv.org/abs/2412.09756 - arXiv:2412.09756v4 Announce Type: replace -Abstract: We propose $\mathtt{PrivHP}$, a lightweight synthetic data generator with \textit{differential privacy} guarantees. $\mathtt{PrivHP}$ uses a novel hierarchical decomposition that approximates the input's cumulative distribution function (CDF) in bounded memory. It balances hierarchy depth, noise addition, and pruning of low-frequency subdomains while preserving frequent ones. Private sketches estimate subdomain frequencies efficiently without full data access. - A key feature is the pruning parameter $k$, which controls the trade-off between space and utility. We define the skew measure $\mathtt{tail}_k$, capturing all but the top $k$ subdomain frequencies. Given a dataset $\mathcal{X}$, $\mathtt{PrivHP}$ uses $M=\mathcal{O}(k\log^2 |\mathcal{X}|)$ space and, for input domain $\Omega = [0,1]$, ensures $\varepsilon$-differential privacy. &#x0D;
It yields a generator with expected Wasserstein distance: \[ \mathcal{O}\left(\frac{\log^2 M}{\varepsilon n} + \frac{||\mathtt{tail}_k(\mathcal{X})||_1}{M n}\right) \] from the empirical distribution. This parameterized trade-off offers a level of flexibility unavailable in prior work. We also provide interpretable utility bounds that account for hierarchy depth, privacy noise, pruning, and frequency estimation errors. - oai:arXiv.org:2412.09756v4 + Device Independent Quantum Secret Sharing Using Multiparty Pseudo-telepathy Game + https://arxiv.org/abs/2512.09699 + arXiv:2512.09699v1 Announce Type: cross +Abstract: Device-independent quantum secret sharing (DI-QSS) is a cryptographic protocol that overcomes the security limitations posed by untrusted quantum devices. We propose a DI-QSS protocol based on the multipartite pseudo-telepathy parity game, which achieves device-independence with simultaneous key generation without requiring dedicated test rounds, unlike CHSH-based schemes [Zhang et al., Phys. Rev. A, 2024]. Notably, the proposed scheme allows simultaneous device-independence verification and key-generation phases, achieving optimal performance for a seven-qubit GHZ state configuration. Further, we analyse the security of our protocol against collective attack and establish reduced resource requirement for the same length of the raw key compared to the previous protocol. Finally, we show that our protocol remains robust even in a noisy environment. 
+ oai:arXiv.org:2512.09699v1 + quant-ph cs.CR - cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Rayne Holland, Seyit Camtepe, Chandra Thapa, Minhui Xue + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Santanu Majhi, Goutam Paul - AEIOU: A Unified Defense Framework against NSFW Prompts in Text-to-Image Models - https://arxiv.org/abs/2412.18123 - arXiv:2412.18123v3 Announce Type: replace -Abstract: As text-to-image (T2I) models advance and gain widespread adoption, their associated safety concerns are becoming increasingly critical. Malicious users exploit these models to generate Not-Safe-for-Work (NSFW) images using harmful or adversarial prompts, underscoring the need for effective safeguards to ensure the integrity and compliance of model outputs. However, existing detection methods often exhibit low accuracy and inefficiency. In this paper, we propose AEIOU, a defense framework that is adaptable, efficient, interpretable, optimizable, and unified against NSFW prompts in T2I models. AEIOU extracts NSFW features from the hidden states of the model's text encoder, utilizing the separable nature of these features to detect NSFW prompts. The detection process is efficient, requiring minimal inference time. AEIOU also offers real-time interpretation of results and supports optimization through data augmentation techniques. The framework is versatile, accommodating various T2I architectures. Our extensive experiments show that AEIOU significantly outperforms both commercial and open-source moderation tools, achieving over 95\% accuracy across all datasets and improving efficiency by at least tenfold. It effectively counters adaptive attacks and excels in few-shot and multi-label scenarios. 
- oai:arXiv.org:2412.18123v3 - cs.CR - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Computer-Assisted Search for Differential Equations Corresponding to Optimization Methods and Their Convergence Rates + https://arxiv.org/abs/2512.09712 + arXiv:2512.09712v1 Announce Type: cross +Abstract: Let $f:\mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable convex function with its minimizer denoted by $x_*$ and optimal value $f_* = f(x_*)$. Optimization algorithms such as the gradient descent method can often be interpreted in the continuous-time limit as differential equations known as continuous dynamical systems. Analyzing the convergence rate of $f(x) - f_*$ in such systems often relies on constructing appropriate Lyapunov functions. However, these Lyapunov functions have been designed through heuristic reasoning rather than a systematic framework. Several studies have addressed this issue. In particular, Suh, Roh, and Ryu (2022) proposed a constructive approach that involves introducing dilated coordinates and applying integration by parts. Although this method significantly improves the process of designing Lyapunov functions, it still involves arbitrary choices among many possible options, and thus retains a heuristic nature in identifying Lyapunov functions that yield the best convergence rates. In this study, we propose a systematic framework for exploring these choices computationally. More precisely, we propose a brute-force approach using symbolic computation by computer algebra systems to explore every possibility. By formulating the design of Lyapunov functions for continuous dynamical systems as an optimization problem, we aim to optimize the Lyapunov function itself. As a result, our framework successfully reproduces many previously reported results and, in several cases, discovers new convergence rates that have not been shown in the existing studies. 
 + oai:arXiv.org:2512.09712v1
 math.OC
 cs.NA
 math.CA
 math.DS
 math.NA
 Thu, 11 Dec 2025 00:00:00 -0500
 cross
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Atsushi Tabei, Ken'ichiro Tanaka

 - Neural Radiance Fields for the Real World: A Survey
 - https://arxiv.org/abs/2501.13104
 - arXiv:2501.13104v2 Announce Type: replace
-Abstract: Neural Radiance Fields (NeRFs) have remodeled 3D scene representation since release. NeRFs can effectively reconstruct complex 3D scenes from 2D images, advancing different fields and applications such as scene understanding, 3D content generation, and robotics. Despite significant research progress, a thorough review of recent innovations, applications, and challenges is lacking. This survey compiles key theoretical advancements and alternative representations and investigates emerging challenges. It further explores applications on reconstruction, highlights NeRFs' impact on computer vision and robotics, and reviews essential datasets and toolkits. By identifying gaps in the literature, this survey discusses open challenges and offers directions for future research.
 - oai:arXiv.org:2501.13104v2
 - cs.CV
 - cs.GR
 - Wed, 10 Dec 2025 00:00:00 -0500
 - replace
 http://creativecommons.org/licenses/by/4.0/
 Wenhui Xiao, Remi Chierchia, Rodrigo Santa Cruz, Xuesong Li, David Ahmedt-Aristizabal, Olivier Salvado, Clinton Fookes, Leo Lebrat

 + Flexible Reconfigurable Intelligent Surface-Aided Covert Communications in UAV Networks
 + https://arxiv.org/abs/2512.09714
 + arXiv:2512.09714v1 Announce Type: cross
+Abstract: In recent years, unmanned aerial vehicles (UAVs) have played a key role in wireless communication networks due to their flexibility and dynamic adaptability. However, the openness of UAV-based communications leads to security and privacy concerns in wireless transmissions. 
This paper investigates a framework of UAV covert communications which introduces flexible reconfigurable intelligent surfaces (F-RIS) in UAV networks. Unlike traditional RIS, F-RIS provides advanced deployment flexibility by conforming to curved surfaces and dynamically reconfiguring its electromagnetic properties to enhance the covert communication performance. We establish an electromagnetic model for F-RIS and further develop a fitted model that describes the relationship between F-RIS reflection amplitude, reflection phase, and incident angle. To maximize the covert transmission rate among UAVs while meeting the covert constraint and public transmission constraint, we introduce a strategy of jointly optimizing UAV trajectories, F-RIS reflection vectors, F-RIS incident angles, and non-orthogonal multiple access (NOMA) power allocation. Considering this is a complicated non-convex optimization problem, we propose a deep reinforcement learning (DRL) algorithm-based optimization solution. Simulation results demonstrate that our proposed framework and optimization method significantly outperform traditional benchmarks, and highlight the advantages of F-RIS in enhancing covert communication performance within UAV networks. + oai:arXiv.org:2512.09714v1 + eess.SP + cs.IT + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chong Huang, Gaojie Chen, Zhuoao Xu, Jing Zhu, Taisong Pan, Rahim Tafazolli, Wei Huang - On the algorithmic structure of Dialectica realisers - https://arxiv.org/abs/2501.16208 - arXiv:2501.16208v2 Announce Type: replace -Abstract: G\"odel's Dialectica interpretation is a fundamental tool for the extraction of computational content from proofs, and plays a central role in today's proof mining program. In the past decades, it has also been studied from the perspective of programming languages, and our contribution is in that direction. 
Specifically, we present Dialectica as a collection of rules in the style of Hoare logic, where Dialectica is now viewed as a language for specifying procedural programs that come with a forward and backward direction. This viewpoint captures the interesting dynamics of realisers extracted by the Dialectica interpretation, and we illustrate this by defining a generalised backpropagation semantics for a fragment of this language. We envisage this work as providing a base for several future developments, both theoretical and practical, which we outline at the end. - oai:arXiv.org:2501.16208v2 - cs.LO - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Davide Barbarossa, Thomas Powell + On Parameter Identification in Three-Dimensional Elasticity and Discretisation with Physics-Informed Neural Networks + https://arxiv.org/abs/2512.09754 + arXiv:2512.09754v1 Announce Type: cross +Abstract: Physics-informed neural networks have emerged as a powerful tool in the scientific machine learning community, with applications to both forward and inverse problems. While they have shown considerable empirical success, significant challenges remain -- particularly regarding training stability and the lack of rigorous theoretical guarantees, especially when compared to classical mesh-based methods. In this work, we focus on the inverse problem of identifying a spatially varying parameter in a constitutive model of three-dimensional elasticity, using measurements of the system's state. This setting is especially relevant for non-invasive diagnosis in cardiac biomechanics, where one must also carefully account for the type of boundary data available. To address this inverse problem, we adopt an all-at-once optimisation framework, simultaneously estimating the state and parameter through a least-squares loss that encodes both available data and the governing physics. 
For this formulation, we prove stability estimates ensuring that our approach yields a stable approximation of the underlying ground-truth parameter of the physical system independent of a specific discretisation. We then proceed with a neural network-based discretisation and compare it to traditional mesh-based approaches. Our theoretical findings are complemented by illustrative numerical examples.
 + oai:arXiv.org:2512.09754v1
 math.OC
 cs.NA
 math.NA
 Thu, 11 Dec 2025 00:00:00 -0500
 cross
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Federica Caforio, Martin Holler, Matthias H\"ofler

 - Oscillations Make Neural Networks Robust to Quantization
 - https://arxiv.org/abs/2502.00490
 - arXiv:2502.00490v2 Announce Type: replace
-Abstract: We challenge the prevailing view that weight oscillations observed during Quantization Aware Training (QAT) are merely undesirable side-effects and argue instead that they are an essential part of QAT. We show in a univariate linear model that QAT results in an additional loss term that causes oscillations by pushing weights away from their nearest quantization level. Based on the mechanism from the analysis, we then derive a regularizer that induces oscillations in the weights of neural networks during training. Our empirical results on ResNet-18 and Tiny Vision Transformer, evaluated on CIFAR-10 and Tiny ImageNet datasets, demonstrate across a range of quantization levels that training with oscillations followed by post-training quantization (PTQ) is sufficient to recover the performance of QAT in most cases. With this work we provide further insight into the dynamics of QAT and offer a novel perspective on the role of oscillations in QAT, which until now have been considered to have a primarily negative effect on quantization. 
- oai:arXiv.org:2502.00490v2 + Optimal certification of constant-local Hamiltonians + https://arxiv.org/abs/2512.09778 + arXiv:2512.09778v1 Announce Type: cross +Abstract: We study the problem of certifying local Hamiltonians from real-time access to their dynamics. Given oracle access to $e^{-itH}$ for an unknown $k$-local Hamiltonian $H$ and a fully specified target Hamiltonian $H_0$, the goal is to decide whether $H$ is exactly equal to $H_0$ or differs from $H_0$ by at least $\varepsilon$ in normalized Frobenius norm, while minimizing the total evolution time. We introduce the first intolerant Hamiltonian certification protocol that achieves optimal performance for all constant-locality Hamiltonians. For general $n$-qubit, $k$-local, traceless Hamiltonians, our procedure uses $O(c^k/\varepsilon)$ total evolution time for a universal constant $c$, and succeeds with high probability. In particular, for $O(1)$-local Hamiltonians, the total evolution time becomes $\Theta(1/\varepsilon)$, matching the known $\Omega(1/\varepsilon)$ lower bounds and achieving the gold-standard Heisenberg-limit scaling. Prior certification methods either relied on implementing inverse evolution of $H$, required controlled access to $e^{-itH}$, or achieved near-optimal guarantees only in restricted settings such as the Ising case ($k=2$). In contrast, our algorithm requires neither inverse evolution nor controlled operations: it uses only forward real-time dynamics and achieves optimal intolerant certification for all constant-locality Hamiltonians. 
+ oai:arXiv.org:2512.09778v1 + quant-ph + cs.CC + cs.DS + cs.IT cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - TMLR, 2835-8856, 2025 - Jonathan Wensh{\o}j, Bob Pepin, Raghavendra Selvan + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Junseo Lee, Myeongjin Shin - Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2 - https://arxiv.org/abs/2502.03544 - arXiv:2502.03544v3 Announce Type: replace -Abstract: We present AlphaGeometry2 (AG2), a significantly improved version of AlphaGeometry introduced in (Trinh et al., 2024), which has now surpassed an average gold medalist in solving Olympiad geometry problems. To achieve this, we first extend the original AlphaGeometry language to tackle problems involving movements of objects, and problems containing linear equations of angles, ratios, and distances. This, together with support for non-constructive problems, has markedly improved the coverage rate of the AlphaGeometry language on International Math Olympiads (IMO) 2000-2024 geometry problems from 66% to 88%. The search process of AG2 has also been greatly improved through the use of Gemini architecture for better language modeling, and a novel knowledge-sharing mechanism that enables effective communication between search trees. Together with further enhancements to the symbolic engine and synthetic data generation, we have significantly boosted the overall solving rate of AG to 84% on all geometry problems over the last 25 years, compared to 54% previously. AG2 was also part of the system that achieved the silver-medal standard at IMO 2024 https://deepmind.google/blog/ai-solves-imo-problems-at-silver-medal-level/. Finally, we report progress towards using AG2 as a part of a fully automated system that reliably solves geometry problems from natural language input. Code: https://github.com/google-deepmind/alphageometry2. 
- oai:arXiv.org:2502.03544v3 + PathCo-LatticE: Pathology-Constrained Lattice-Of Experts Framework for Fully-supervised Few-Shot Cardiac MRI Segmentation + https://arxiv.org/abs/2512.09779 + arXiv:2512.09779v1 Announce Type: cross +Abstract: Few-shot learning (FSL) mitigates data scarcity in cardiac MRI segmentation but typically relies on semi-supervised techniques sensitive to domain shifts and validation bias, restricting zero-shot generalizability. We propose PathCo-LatticE, a fully supervised FSL framework that replaces unlabeled data with pathology-guided synthetic supervision. First, our Virtual Patient Engine models continuous latent disease trajectories from sparse clinical anchors, using generative modeling to synthesize physiologically plausible, fully labeled 3D cohorts. Second, Self-Reinforcing Interleaved Validation (SIV) provides a leakage-free protocol that evaluates models online with progressively challenging synthetic samples, eliminating the need for real validation data. Finally, a dynamic Lattice-of-Experts (LoE) organizes specialized networks within a pathology-aware topology and activates the most relevant experts per input, enabling robust zero-shot generalization to unseen data without target-domain fine-tuning. We evaluated PathCo-LatticE in a strict out-of-distribution (OOD) setting, deriving all anchors and severity statistics from a single-source domain (ACDC) and performing zero-shot testing on the multi-center, multi-vendor M&Ms dataset. PathCo-LatticE outperforms four state-of-the-art FSL methods by 4.2-11% Dice starting from only 7 labeled anchors, and approaches fully supervised performance (within 1% Dice) with only 19 labeled anchors. The method shows superior harmonization across four vendors and generalization to unseen pathologies. [Code will be made publicly available]. 
+ oai:arXiv.org:2512.09779v1 + eess.IV cs.AI + cs.CV cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yuri Chervonyi, Trieu H. Trinh, Miroslav Ol\v{s}\'ak, Xiaomeng Yang, Hoang Nguyen, Marcelo Menegali, Junehyuk Jung, Junsu Kim, Vikas Verma, Quoc V. Le, Thang Luong + Mohamed Elbayumi, Mohammed S. M. Elbaz - Flow-based Conformal Prediction for Multi-dimensional Time Series - https://arxiv.org/abs/2502.05709 - arXiv:2502.05709v2 Announce Type: replace -Abstract: Time series prediction underpins a broad range of downstream tasks across many scientific domains. Recent advances and increasing adoption of black-box machine learning models for time series prediction highlight the critical need for reliable uncertainty quantification. While conformal prediction has gained attention as a reliable uncertainty quantification method, conformal prediction for time series faces two key challenges: (1) adaptively leveraging correlations in features and non-conformity scores to overcome the exchangeability assumption, and (2) constructing prediction sets for multi-dimensional outcomes. To address these challenges jointly, we propose a novel conformal prediction method for time series using flow with classifier-free guidance. We provide coverage guarantees by establishing exact non-asymptotic marginal coverage and a finite-sample bound on conditional coverage for the proposed method. Evaluations on real-world time series datasets demonstrate that our method constructs significantly smaller prediction sets than existing conformal prediction methods while maintaining target coverage. 
- oai:arXiv.org:2502.05709v2 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Junghwan Lee, Chen Xu, Yao Xie + Pinball: A Cryogenic Predecoder for Quantum Error Correction Decoding Under Circuit-Level Noise + https://arxiv.org/abs/2512.09807 + arXiv:2512.09807v1 Announce Type: cross +Abstract: Scaling fault tolerant quantum computers, especially cryogenic systems, to millions of qubits is challenging due to poorly-scaling data processing and power consumption overheads. One key challenge is the design of decoders for real-time quantum error correction (QEC), which demands high data rates for error processing; this is particularly apparent in systems with cryogenic qubits and room temperature (RT) decoders. In response, cryogenic predecoding using lightweight logic has been proposed to handle common, sparse errors in the cryogenic domain. However, prior work only accounts for a subset of error sources present in real-world quantum systems with limited accuracy, often degrading performance below a useful level in practical scenarios. Furthermore, prior reliance on SFQ logic precludes detailed architecture-technology co-optimization. + To address these shortcomings, this paper introduces Pinball, a comprehensive design in cryogenic CMOS of a QEC predecoder tailored to realistic, circuit-level noise. By accounting for error generation and propagation through QEC circuits, our design achieves higher predecoding accuracy, outperforming logical error rates (LER) of the current state-of-the-art cryogenic predecoder by nearly six orders of magnitude. Remarkably, despite operating under much stricter power and area constraints, Pinball also reduces LER by 32.58x and 5x, respectively, compared to the state-of-the-art RT predecoder and RT ensemble configurations. By increasing cryogenic coverage, we also reduce syndrome bandwidth up to 3780.72x. 
Through co-design with 4 K-characterized 22 nm FDSOI technology, we achieve a peak power consumption under 0.56 mW. Voltage/frequency scaling and body biasing enable 22.2x lower typical power consumption, yielding up to 67.4x total energy savings. Assuming a 4 K power budget of 1.5 W, our predecoder supports up to 2,668 logical qubits at d=21. + oai:arXiv.org:2512.09807v1 + quant-ph + cs.AR + cs.ET + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Alexander Knapen, Guanchen Tao, Jacob Mack, Tomas Bruno, Mehdi Saligane, Dennis Sylvester, Qirui Zhang, Gokul Subramanian Ravi - Meta-Computing Enhanced Federated Learning in IIoT: Satisfaction-Aware Incentive Scheme via DRL-Based Stackelberg Game - https://arxiv.org/abs/2502.06909 - arXiv:2502.06909v3 Announce Type: replace -Abstract: The Industrial Internet of Things (IIoT) leverages Federated Learning (FL) for distributed model training while preserving data privacy, and meta-computing enhances FL by optimizing and integrating distributed computing resources, improving efficiency and scalability. Efficient IIoT operations require a trade-off between model quality and training latency. Consequently, a primary challenge of FL in IIoT is to optimize overall system performance by balancing model quality and training latency. This paper designs a satisfaction function that accounts for data size, Age of Information (AoI), and training latency for meta-computing. Additionally, the satisfaction function is incorporated into the utility function to incentivize IIoT nodes to participate in model training. We model the utility functions of servers and nodes as a two-stage Stackelberg game and employ a deep reinforcement learning approach to learn the Stackelberg equilibrium. This approach ensures balanced rewards and enhances the applicability of the incentive scheme for IIoT. 
Simulation results demonstrate that, under the same budget constraints, the proposed incentive scheme improves utility by at least 23.7% compared to existing FL schemes without compromising model accuracy. - oai:arXiv.org:2502.06909v3 - cs.LG - cs.AI - cs.GT - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaohuan Li, Shaowen Qin, Xin Tang, Jiawen Kang, Jin Ye, Zhonghua Zhao, Yusi Zheng, Dusit Niyato + Dichotomy results for classes of countable graphs + https://arxiv.org/abs/2512.09832 + arXiv:2512.09832v1 Announce Type: cross +Abstract: We study classes of countable graphs where every member does not contain a given finite graph as an induced subgraph -- denoted by $\mathsf{Free}(\mathcal{G})$ for a given finite graph $\mathcal{G}$. Our main results establish a structural dichotomy for such classes: If $\mathcal{G}$ is not an induced subgraph of $\mathcal{P}_4$, then $\mathsf{Free}(\mathcal{G})$ is on top under effective bi-interpretability, implying that the members of $\mathsf{Free}(\mathcal{G})$ exhibit the full range of structural and computational behaviors. In contrast, if $\mathcal{G}$ is an induced subgraph of $\mathcal{P}_4$, then $\mathsf{Free}(\mathcal{G})$ is structurally simple, as witnessed by the fact that every member satisfies the computable embeddability condition. This dichotomy is mirrored in the finite setting when one considers combinatorial and complexity-theoretic properties. Specifically, it is known that $\mathsf{Free}(\mathcal{G})^{fin}$ is complete for graph isomorphism and not a well-quasi-order under embeddability whenever $\mathcal{G}$ is not an induced subgraph of $\mathcal{P}_4$, while in all other cases $\mathsf{Free}(\mathcal{G})^{fin}$ forms a well-quasi-order and the isomorphism problem for $\mathsf{Free}(\mathcal{G})^{fin}$ is solvable in polynomial time. 
+ oai:arXiv.org:2512.09832v1 + math.LO + cs.CC + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Vittorio Cipriani, Ekaterina Fokina, Matthew Harrison-Trainor, Liling Ko, Dino Rossegger - Proper Learnability and the Role of Unlabeled Data - https://arxiv.org/abs/2502.10359 - arXiv:2502.10359v2 Announce Type: replace -Abstract: Proper learning refers to the setting in which learners must emit predictors in the underlying hypothesis class $H$, and often leads to learners with simple algorithmic forms (e.g. empirical risk minimization (ERM), structural risk minimization (SRM)). The limitation of proper learning, however, is that there exist problems which can only be learned improperly, e.g. in multiclass classification. Thus, we ask: Under what assumptions on the hypothesis class or the information provided to the learner is a problem properly learnable? We first demonstrate that when the unlabeled data distribution is given, there always exists an optimal proper learner governed by distributional regularization, a randomized generalization of regularization. We refer to this setting as the distribution-fixed PAC model, and continue to evaluate the learner on its worst-case performance over all distributions. Our result holds for all metric loss functions and any finite learning problem (with no dependence on its size). Further, we demonstrate that sample complexities in the distribution-fixed PAC model can shrink by only a logarithmic factor from the classic PAC model, strongly refuting the role of unlabeled data in PAC learning (from a worst-case perspective). - We complement this with impossibility results which obstruct any characterization of proper learnability in the realizable PAC model. First, we observe that there are problems whose proper learnability is logically undecidable, i.e., independent of the ZFC axioms. 
We then show that proper learnability is not a monotone property of the underlying hypothesis class, and that it is not a local property (in a precise sense). Our impossibility results all hold even for the fundamental setting of multiclass classification, and go through a reduction of EMX learning (Ben-David et al., 2019) to proper classification which may be of independent interest. - oai:arXiv.org:2502.10359v2 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, Shang-Hua Teng + Colouring Graphs Without a Subdivided H-Graph: A Full Complexity Classification + https://arxiv.org/abs/2512.09859 + arXiv:2512.09859v1 Announce Type: cross +Abstract: We consider Colouring on graphs that are $H$-subgraph-free for some fixed graph $H$, i.e., graphs that do not contain $H$ as a subgraph. It is known that even $3$-Colouring is NP-complete for $H$-subgraph-free graphs whenever $H$ has a cycle; or a vertex of degree at least $5$; or a component with two vertices of degree $4$, while Colouring is polynomial-time solvable for $H$-subgraph-free graphs if $H$ is a forest of maximum degree at most $3$, in which each component has at most one vertex of degree $3$. For connected graphs $H$, this means that it remains to consider when $H$ is tree of maximum degree $4$ with exactly one vertex of degree $4$, or a tree of maximum degree $3$ with at least two vertices of degree $3$. We let $H$ be a so-called subdivided "H"-graph, which is either a subdivided $\mathbb{H}_0$: a tree of maximum degree $4$ with exactly one vertex of degree $4$ and no vertices of degree $3$, or a subdivided $\mathbb{H}_1$: a tree of maximum degree $3$ with exactly two vertices of degree $3$. In the literature, only a limited number of polynomial-time and NP-completeness results for these cases are known. 
We develop new polynomial-time techniques that allow us to determine the complexity of Colouring on $H$-subgraph-free graphs for all the remaining subdivided "H"-graphs, so we fully classify both cases. As a consequence, the complexity of Colouring on $H$-subgraph-free graphs has now been settled for all connected graphs $H$ except when $H$ is a tree of maximum degree $4$ with exactly one vertex of degree $4$ and at least one vertex of degree $3$; or a tree of maximum degree $3$ with at least three vertices of degree $3$. We also employ our new techniques to obtain the same new polynomial-time results for another classic graph problem, namely Stable Cut. + oai:arXiv.org:2512.09859v1 + math.CO + cs.CC + cs.DM + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tala Eagling-Vose, Jorik Jooken, Felicia Lucke, Barnaby Martin, Dani\"el Paulusma - Privacy Preservation through Practical Machine Unlearning - https://arxiv.org/abs/2502.10635 - arXiv:2502.10635v3 Announce Type: replace -Abstract: Machine Learning models thrive on vast datasets, continuously adapting to provide accurate predictions and recommendations. However, in an era dominated by privacy concerns, Machine Unlearning emerges as a transformative approach, enabling the selective removal of data from trained models. This paper examines methods such as Naive Retraining and Exact Unlearning via the SISA framework, evaluating their Computational Costs, Consistency, and feasibility using the $\texttt{HSpam14}$ dataset. We explore the potential of integrating unlearning principles into Positive Unlabeled (PU) Learning to address challenges posed by partially labeled datasets. Our findings highlight the promise of unlearning frameworks like $\textit{DaRE}$ for ensuring privacy compliance while maintaining model performance, albeit with significant computational trade-offs. 
This study underscores the importance of Machine Unlearning in achieving ethical AI and fostering trust in data-driven systems.
 - oai:arXiv.org:2502.10635v3
 - cs.LG
 - cs.CR
 - Wed, 10 Dec 2025 00:00:00 -0500
 - replace
 http://creativecommons.org/licenses/by/4.0/
 Robert Dilworth

 - Dark Deceptions in DHCP: Dismantling Network Defenses
 - https://arxiv.org/abs/2502.10646
 - arXiv:2502.10646v3 Announce Type: replace
-Abstract: This paper explores vulnerabilities in the Dynamic Host Configuration Protocol (DHCP) and their implications on the Confidentiality, Integrity, and Availability (CIA) Triad. Through an analysis of various attacks, including DHCP Starvation, Rogue DHCP Servers, Replay Attacks, and TunnelVision exploits, the paper provides a taxonomic classification of threats, assesses risks, and proposes appropriate controls. The discussion also highlights the dangers of VPN decloaking through DHCP exploits and underscores the importance of safeguarding network infrastructures. By bringing awareness to the TunnelVision exploit, this paper aims to mitigate risks associated with these prevalent vulnerabilities.
 - oai:arXiv.org:2502.10646v3
 - cs.CR
 - cs.LG
 - Wed, 10 Dec 2025 00:00:00 -0500
 - replace
 http://creativecommons.org/licenses/by/4.0/
 Robert Dilworth

 + True Random Number Generators on IQM Spark
 + https://arxiv.org/abs/2512.09862
 + arXiv:2512.09862v1 Announce Type: cross
+Abstract: Random number generation is fundamental for many modern applications including cryptography, simulations and machine learning. Traditional pseudo-random numbers may offer statistical unpredictability, but are ultimately deterministic. On the other hand, True Random Number Generation (TRNG) offers true randomness. One way of obtaining such randomness is through quantum systems, including quantum computers. As such the use of quantum computers for TRNG has received considerable attention in recent years. However, existing studies almost exclusively consider IBM quantum computers, often stop at using simulations and usually test only a handful of different TRNG quantum circuits. In this paper, we address those issues by presenting a study of TRNG circuits on Odra 5, a real-life quantum computer installed at Wroc{\l}aw University of Science and Technology. It is also the first study to utilize the IQM superconducting architecture. Since Odra 5 is available on-premises it allows for a much more comprehensive study of various TRNG circuits. In particular, we consider 5 types of TRNG circuits with 105 circuit subvariants in total. Each circuit is used to generate 1 million bits. We then perform an analysis of the quality of the obtained random sequences using the NIST SP 800-22 and NIST SP 800-90B test suites. We also provide a comprehensive review of existing literature on quantum computer-based TRNGs. 
+ oai:arXiv.org:2512.09862v1 + quant-ph cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Robert Dilworth + Thu, 11 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Andrzej Gnatowski, Jaros{\l}aw Rudy, Teodor Ni\.zy\'nski, Krzysztof \'Swi\k{e}cicki - Dark Deceptions in DHCP: Dismantling Network Defenses - https://arxiv.org/abs/2502.10646 - arXiv:2502.10646v3 Announce Type: replace -Abstract: This paper explores vulnerabilities in the Dynamic Host Configuration Protocol (DHCP) and their implications on the Confidentiality, Integrity, and Availability (CIA) Triad. Through an analysis of various attacks, including DHCP Starvation, Rogue DHCP Servers, Replay Attacks, and TunnelVision exploits, the paper provides a taxonomic classification of threats, assesses risks, and proposes appropriate controls. The discussion also highlights the dangers of VPN decloaking through DHCP exploits and underscores the importance of safeguarding network infrastructures. By bringing awareness to the TunnelVision exploit, this paper aims to mitigate risks associated with these prevalent vulnerabilities. - oai:arXiv.org:2502.10646v3 - cs.CR - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace + A 0.8395-approximation algorithm for the EPR problem + https://arxiv.org/abs/2512.09896 + arXiv:2512.09896v1 Announce Type: cross +Abstract: We give an efficient 0.8395-approximation algorithm for the EPR Hamiltonian. Our improvement comes from a new nonlinear monogamy-of-entanglement bound on star graphs and a refined parameterization of a shallow quantum circuit from previous works. We also prove limitations showing that current methods cannot achieve substantially better approximation ratios, indicating that further progress will require fundamentally new techniques. 
+ oai:arXiv.org:2512.09896v1 + quant-ph + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Robert Dilworth + Anuj Apte, Eunou Lee, Kunal Marwaha, Ojas Parekh, Lennart Sinjorgo, James Sud - B-cos LM: Efficiently Transforming Pre-trained Language Models for Improved Explainability - https://arxiv.org/abs/2502.12992 - arXiv:2502.12992v4 Announce Type: replace -Abstract: Post-hoc explanation methods for black-box models often struggle with faithfulness and human interpretability due to the lack of explainability in current neural architectures. Meanwhile, B-cos networks have been introduced to improve model explainability by proposing an architecture that removes bias terms and promotes input-weight alignment. Although B-cos networks have shown success in building explainable systems, their application has so far been limited to computer vision models and their associated training pipelines. In this work, we introduce B-cos LMs, i.e., B-cos Language Models (LMs) empowered for natural language processing (NLP) tasks. Our approach directly transforms pre-trained language models into B-cos LMs by combining B-cos conversion and task fine-tuning, improving efficiency compared to previous methods. Automatic and human evaluation results demonstrate that B-cos LMs produce more faithful and human interpretable explanations than post-hoc methods, while maintaining task performance comparable to conventional fine-tuning. Our in-depth analysis explores how B-cos LMs differ from conventionally fine-tuned models in their learning processes and explanation patterns. Finally, we present a first exploration of transforming decoder-only models to B-cos LMs for generation tasks. Our code is available at https://github.com/Ewanwong/bcos_lm. 
- oai:arXiv.org:2502.12992v4 - cs.CL + Supervised learning pays attention + https://arxiv.org/abs/2512.09912 + arXiv:2512.09912v1 Announce Type: cross +Abstract: In-context learning with attention enables large neural networks to make context-specific predictions by selectively focusing on relevant examples. Here, we adapt this idea to supervised learning procedures such as lasso regression and gradient boosting, for tabular data. Our goals are to (1) flexibly fit personalized models for each prediction point and (2) retain model simplicity and interpretability. + Our method fits a local model for each test observation by weighting the training data according to attention, a supervised similarity measure that emphasizes features and interactions that are predictive of the outcome. Attention weighting allows the method to adapt to heterogeneous data in a data-driven way, without requiring cluster or similarity pre-specification. Further, our approach is uniquely interpretable: for each test observation, we identify which features are most predictive and which training observations are most relevant. We then show how to use attention weighting for time series and spatial data, and we present a method for adapting pretrained tree-based models to distributional shift using attention-weighted residual corrections. Across real and simulated datasets, attention weighting improves predictive performance while preserving interpretability, and theory shows that attention-weighting linear models attain lower mean squared error than the standard linear model under mixture-of-models data-generating processes with known subgroup structure. 
+ oai:arXiv.org:2512.09912v1 + stat.ML cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - replace + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 + cross http://creativecommons.org/licenses/by/4.0/ - Yifan Wang, Sukrut Rao, Ji-Ung Lee, Mayank Jobanputra, Vera Demberg + Erin Craig, Robert Tibshirani - OIPR: Evaluation for Time-series Anomaly Detection Inspired by Operator Interest - https://arxiv.org/abs/2503.01260 - arXiv:2503.01260v2 Announce Type: replace -Abstract: With the growing adoption of time-series anomaly detection (TAD) technology, numerous studies have employed deep learning-based detectors to analyze time-series data in the fields of Internet services, industrial systems, and sensors. The selection and optimization of anomaly detectors strongly rely on the availability of an effective evaluation for TAD performance. Since anomalies in time-series data often manifest as a sequence of points, conventional metrics that solely consider the detection of individual points are inadequate. Existing TAD evaluators typically employ point-based or event-based metrics to capture the temporal context. However, point-based evaluators tend to overestimate detectors that excel only in detecting long anomalies, while event-based evaluators are susceptible to being misled by fragmented detection results. To address these limitations, we propose OIPR (Operator Interest-based Precision and Recall metrics), a novel TAD evaluator with area-based metrics. It models the process of operators receiving detector alarms and handling anomalies, utilizing area under the operator interest curve to evaluate TAD performance. Furthermore, we build a special scenario dataset to compare the characteristics of different evaluators. Through experiments conducted on the special scenario dataset and five real-world datasets, we demonstrate the remarkable performance of OIPR in extreme and complex scenarios. 
It achieves a balance between point and event perspectives, overcoming their primary limitations and offering applicability to broader situations. - oai:arXiv.org:2503.01260v2 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Two Causal Principles for Improving Visual Dialog + https://arxiv.org/abs/1911.10496 + arXiv:1911.10496v3 Announce Type: replace +Abstract: This paper unravels the design tricks adopted by us, the champion team MReaL-BDAI, for Visual Dialog Challenge 2019: two causal principles for improving Visual Dialog (VisDial). By "improving", we mean that they can promote almost every existing VisDial model to the state-of-the-art performance on the leader-board. Such a major improvement is only due to our careful inspection on the causality behind the model and data, finding that the community has overlooked two causalities in VisDial. Intuitively, Principle 1 suggests: we should remove the direct input of the dialog history to the answer model, otherwise a harmful shortcut bias will be introduced; Principle 2 says: there is an unobserved confounder for history, question, and answer, leading to spurious correlations from training data. In particular, to remove the confounder suggested in Principle 2, we propose several causal intervention algorithms, which make the training fundamentally different from the traditional likelihood estimation. Note that the two principles are model-agnostic, so they are applicable in any VisDial model. The code is available at https://github.com/simpleshinobu/visdial-principles. 
+ oai:arXiv.org:1911.10496v3 + cs.CV + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Yuhan Jing, Jingyu Wang, Lei Zhang, Haifeng Sun, Bo He, Zirui Zhuang, Chengsen Wang, Qi Qi, Jianxin Liao + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jiaxin Qi, Yulei Niu, Jianqiang Huang, Hanwang Zhang - ($\boldsymbol{\theta}_l, \boldsymbol{\theta}_u$)-Parametric Multi-Task Optimization: Joint Search in Solution and Infinite Task Spaces - https://arxiv.org/abs/2503.08394 - arXiv:2503.08394v4 Announce Type: replace -Abstract: Multi-task optimization is typically characterized by a fixed and finite set of tasks. The present paper relaxes this condition by considering a non-fixed and potentially infinite set of optimization tasks defined in a parameterized, continuous and bounded task space. We refer to this unique problem setting as parametric multi-task optimization (PMTO). Assuming the bounds of the task parameters to be ($\boldsymbol{\theta}_l$, $\boldsymbol{\theta}_u$), a novel ($\boldsymbol{\theta}_l$, $\boldsymbol{\theta}_u$)-PMTO algorithm is crafted to operate in two complementary modes. In an offline optimization mode, a joint search over solution and task spaces is carried out with the creation of two approximation models: (1) for mapping points in a unified solution space to the objective spaces of all tasks, which provably accelerates convergence by acting as a conduit for inter-task knowledge transfers, and (2) for probabilistically mapping tasks to their corresponding solutions, which facilitates evolutionary exploration of under-explored regions of the task space. In the online mode, the derived models enable direct optimization of any task within the bounds without the need to search from scratch. 
This outcome is validated on both synthetic test problems and practical case studies, with the significant real-world applicability of PMTO shown towards fast reconfiguration of robot controllers under changing task conditions. The potential of PMTO to vastly speedup the search for solutions to minimax optimization problems is also demonstrated through an example in robust engineering design. - oai:arXiv.org:2503.08394v4 - cs.NE + TCNN: Triple Convolutional Neural Network Models for Retrieval-based Question Answering System in E-commerce + https://arxiv.org/abs/2004.10919 + arXiv:2004.10919v2 Announce Type: replace +Abstract: Automatic question-answering (QA) systems have boomed during last few years, and commonly used techniques can be roughly categorized into Information Retrieval (IR)-based and generation-based. A key solution to the IR based models is to retrieve the most similar knowledge entries of a given query from a QA knowledge base, and then rerank those knowledge entries with semantic matching models. In this paper, we aim to improve an IR based e-commerce QA system-AliMe with proposed text matching models, including a basic Triple Convolutional Neural Network (TCNN) model and two Attention-based TCNN (ATCNN) models. Experimental results show their effect. + oai:arXiv.org:2004.10919v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - 10.1109/TEVC.2025.3598333 - Tingyang Wei, Jiao Liu, Abhishek Gupta, Puay Siew Tan, Yew-Soon Ong + Shuangyong Song, Chao Wang - Representation Retrieval Learning for Heterogeneous Data Integration - https://arxiv.org/abs/2503.09494 - arXiv:2503.09494v3 Announce Type: replace -Abstract: In the era of big data, large-scale, multi-source, multi-modality datasets are increasingly ubiquitous, offering unprecedented opportunities for predictive modeling and scientific discovery. 
However, these datasets often exhibit complex heterogeneity, such as covariates shift, posterior drift, and blockwise missingness, which worsen predictive performance of existing supervised learning algorithms. To address these challenges simultaneously, we propose a novel Representation Retrieval (R2) framework, which integrates a dictionary of representation learning modules (representer dictionary) with data source-specific sparsity-induced machine learning model (learners). Under the R2 framework, we introduce the notion of integrativeness for each representer, and propose a novel Selective Integration Penalty (SIP) to explicitly encourage more integrative representers to improve predictive performance. Theoretically, we show that the excess risk bound of the R2 framework is characterized by the integrativeness of representers, and SIP effectively improves the excess risk. Extensive simulation studies validate the superior performance of R2 framework and the effect of SIP. We further apply our method to two real-world datasets to confirm its empirical success. - oai:arXiv.org:2503.09494v3 + Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses + https://arxiv.org/abs/2106.05426 + arXiv:2106.05426v5 Announce Type: replace +Abstract: How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure among 100 different feature spaces extracted from hidden representations of various networks trained on language tasks. This method reveals a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings. 
We call this low-dimensional structure a language representation embedding because it encodes the relationships between representations needed to process language for a variety of NLP tasks. We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI. Additionally, we find that the principal dimension of this structure can be used to create a metric which highlights the brain's natural language processing hierarchy. This suggests that the embedding captures some part of the brain's natural language representation structure. + oai:arXiv.org:2106.05426v5 + cs.CL cs.LG - stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qi Xu, Annie Qu + http://creativecommons.org/licenses/by/4.0/ + Richard Antonello, Javier Turek, Vy Vo, Alexander Huth - CDKFormer: Contextual Deviation Knowledge-Based Transformer for Long-Tail Trajectory Prediction - https://arxiv.org/abs/2503.12695 - arXiv:2503.12695v2 Announce Type: replace -Abstract: Predicting the future movements of surrounding vehicles is essential for ensuring the safe operation and efficient navigation of autonomous vehicles (AVs) in urban traffic environments. Existing vehicle trajectory prediction methods primarily focus on improving overall performance, yet they struggle to address long-tail scenarios effectively. This limitation often leads to poor predictions in rare cases, significantly increasing the risk of safety incidents. Taking Argoverse 2 motion forecasting dataset as an example, we first investigate the long-tail characteristics in trajectory samples from two perspectives, individual motion and group interaction, and deriving deviation features to distinguish abnormal from regular scenarios. 
On this basis, we propose CDKFormer, a Contextual Deviation Knowledge-based Transformer model for long-tail trajectory prediction. CDKFormer integrates an attention-based scene context fusion module to encode spatiotemporal interaction and road topology. An additional deviation feature fusion module is proposed to capture the dynamic deviations in the target vehicle status. We further introduce a dual query-based decoder, supported by a multi-stream decoder block, to sequentially decode heterogeneous scene deviation features and generate multimodal trajectory predictions. Extensive experiments demonstrate that CDKFormer achieves state-of-the-art performance, significantly enhancing prediction accuracy and robustness for long-tailed trajectories compared to existing methods, thus advancing the reliability of AVs in complex real-world environments. - oai:arXiv.org:2503.12695v2 - cs.RO - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 + The Vector Grounding Problem + https://arxiv.org/abs/2304.01481 + arXiv:2304.01481v3 Announce Type: replace +Abstract: Large language models (LLMs) produce seemingly meaningful outputs, yet they are trained on text alone without direct interaction with the world. This leads to a modern variant of the classical symbol grounding problem in AI: can LLMs' internal states and outputs be about extra-linguistic reality, independently of the meaning human interpreters project onto them? We argue that they can. We first distinguish referential grounding -- the connection between a representation and its worldly referent -- from other forms of grounding and argue it is the only kind essential to solving the problem. 
We contend that referential grounding is achieved when a system's internal states satisfy two conditions derived from teleosemantic theories of representation: (1) they stand in appropriate causal-informational relations to the world, and (2) they have a history of selection that has endowed them with the function of carrying this information. We argue that LLMs can meet both conditions, even without multimodality or embodiment. + oai:arXiv.org:2304.01481v3 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1016/j.trc.2025.105430 - Yuansheng Lian, Ke Zhang, Meng Li + http://creativecommons.org/licenses/by/4.0/ + Dimitri Coelho Mollo, Rapha\"el Milli\`ere - Dual-Source SPIR over a noiseless MAC without Data Replication or Shared Randomness - https://arxiv.org/abs/2503.14682 - arXiv:2503.14682v2 Announce Type: replace -Abstract: Information-theoretically secure Symmetric Private Information Retrieval (SPIR) is known to be infeasible over noiseless channels with a single server. Known solutions to overcome this infeasibility involve additional resources such as database replication, shared randomness, or noisy channels. In this paper, we propose an alternative approach for achieving SPIR with information-theoretic security guarantees, without relying on shared randomness, noisy channels, or data replication. Specifically, we demonstrate that it is sufficient to use a noiseless binary adder multiple-access channel, where inputs are controlled by two non-colluding servers and the output is observed by the client, alongside a public noiseless communication channel between the client and the servers. Furthermore, in this setting, we characterize the optimal file rates, i.e., the file lengths normalized by the number of channel uses, that can be transferred. 
- oai:arXiv.org:2503.14682v2 + Entropy Functions on Two-Dimensional Faces of Polymatroidal Region of Degree Four: Part I: Problem Formulation and More + https://arxiv.org/abs/2305.06250 + arXiv:2305.06250v4 Announce Type: replace +Abstract: Characterization of entropy functions is of fundamental importance in information theory. By imposing constraints on their Shannon outer bound, i.e., the polymatroidal region, one obtains the faces of the region and entropy functions on them with special structures. In this series of two papers, we characterize entropy functions on the 2-dimensional faces of the polymatroidal region of degree 4. In Part I, we formulate the problem, enumerate all 59 types of 2-dimensional faces of the region by an algorithm, and fully characterize entropy functions on 49 types of them. Among them, those non-trivial cases are mainly characterized by the graph-coloring technique. The entropy functions on the remaining 10 types of faces will be characterized in Part II, among which 8 types are fully characterized, and 2 types are partially characterized. + oai:arXiv.org:2305.06250v4 cs.IT - cs.CR + math.CO math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Remi A. Chou + Shaocheng Liu, Qi Chen - Bezier Splatting for Fast and Differentiable Vector Graphics Rendering - https://arxiv.org/abs/2503.16424 - arXiv:2503.16424v4 Announce Type: replace -Abstract: Differentiable vector graphics (VGs) are widely used in image vectorization and vector synthesis, while existing representations are costly to optimize and struggle to achieve high-quality rendering results for high-resolution images. This work introduces a new differentiable VG representation, dubbed B\'ezier Splatting, that enables fast yet high-fidelity VG rasterization. B\'ezier Splatting samples 2D Gaussians along B\'ezier curves, which naturally provide positional gradients at object boundaries. 
Thanks to the efficient splatting-based differentiable rasterizer, B\'ezier Splatting achieves 30x and 150x faster per forward and backward rasterization step for open curves compared to DiffVG. Additionally, we introduce an adaptive pruning and densification strategy that dynamically adjusts the spatial distribution of curves to escape local minima, further improving VG quality. Furthermore, our new VG representation supports conversion to standard XML-based SVG format, enhancing interoperability with existing VG tools and pipelines. Experimental results show that B\'ezier Splatting significantly outperforms existing methods with better visual fidelity and significant optimization speedup. - oai:arXiv.org:2503.16424v4 - cs.GR - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous Federated Learning + https://arxiv.org/abs/2305.19600 + arXiv:2305.19600v5 Announce Type: replace +Abstract: Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data. In practice, there can often be substantial heterogeneity (e.g., class imbalance) across the local data distributions observed by each of these clients. Under such non-iid label distributions across clients, FL suffers from the 'client-drift' problem where every client drifts to its own local optimum. This results in slower convergence and poor performance of the aggregated model. To address this limitation, we propose a novel regularization technique based on adaptive self-distillation (ASD) for training models on the client side. Our regularization scheme adaptively adjusts to each client's training data based on the global model's prediction entropy and the client-data label distribution. 
We show in this paper that our proposed regularization (ASD) can be easily integrated atop existing, state-of-the-art FL algorithms, leading to a further boost in the performance of these off-the-shelf methods. We theoretically explain how incorporation of ASD regularizer leads to reduction in client-drift and empirically justify the generalization ability of the trained model. We demonstrate the efficacy of our approach through extensive experiments on multiple real-world benchmarks and show substantial gains in performance when the proposed regularizer is combined with popular FL methods. + oai:arXiv.org:2305.19600v5 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xi Liu, Chaoyi Zhou, Nanxuan Zhao, Siyu Huang + http://creativecommons.org/licenses/by/4.0/ + M Yashwanth, Gaurav Kumar Nayak, Arya Singh, Yogesh Simmhan, Anirban Chakraborty - RLCAD: Reinforcement Learning Training Gym for Revolution Involved CAD Command Sequence Generation - https://arxiv.org/abs/2503.18549 - arXiv:2503.18549v3 Announce Type: replace -Abstract: A CAD command sequence is a typical parametric design paradigm in 3D CAD systems where a model is constructed by overlaying 2D sketches with operations such as extrusion, revolution, and Boolean operations. Although there is growing academic interest in the automatic generation of command sequences, existing methods and datasets only support operations such as 2D sketching, extrusion,and Boolean operations. This limitation makes it challenging to represent more complex geometries. In this paper, we present a reinforcement learning (RL) training environment (gym) built on a CAD geometric engine. Given an input boundary representation (B-Rep) geometry, the policy network in the RL algorithm generates an action. This action, along with previously generated actions, is processed within the gym to produce the corresponding CAD geometry, which is then fed back into the policy network. 
The rewards, determined by the difference between the generated and target geometries within the gym, are used to update the RL network. Our method supports operations beyond sketches, Boolean, and extrusion, including revolution operations. With this training gym, we achieve state-of-the-art (SOTA) quality in generating command sequences from B-Rep geometries. - oai:arXiv.org:2503.18549v3 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Perceptually Uniform Construction of Illustrative Textures + https://arxiv.org/abs/2308.03644 + arXiv:2308.03644v4 Announce Type: replace +Abstract: Illustrative textures, such as stippling or hatching, were predominantly used as an alternative to conventional Phong rendering. Recently, the potential of encoding information on surfaces or maps using different densities has also been recognized. This has the significant advantage that additional color can be used as another visual channel and the illustrative textures can then be overlaid. Effectively, it is thus possible to display multiple information, such as two different scalar fields on surfaces simultaneously. In previous work, these textures were manually generated and the choice of density was unempirically determined. Here, we first want to determine and understand the perceptual space of illustrative textures. We chose a succession of simplices with increasing dimensions as primitives for our textures: Dots, lines, and triangles. Thus, we explore the texture types of stippling, hatching, and triangles. We create a range of textures by sampling the density space uniformly. Then, we conduct three perceptual studies in which the participants performed pairwise comparisons for each texture type. We use multidimensional scaling (MDS) to analyze the perceptual spaces per category. The perception of stippling and triangles seems relatively similar. Both are adequately described by a 1D manifold in 2D space. 
The perceptual space of hatching consists of two main clusters: Crosshatched textures, and textures with only one hatching direction. However, the perception of hatching textures with only one hatching direction is similar to the perception of stippling and triangles. Based on our findings, we construct perceptually uniform illustrative textures. Afterwards, we provide concrete application examples for the constructed textures. + oai:arXiv.org:2308.03644v4 + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Xiaolong Yin, Xingyu Lu, Jiahang Shen, Jingzhe Ni, Hailong Li, Ruofeng Tong, Min Tang, Peng Du + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1109/TVCG.2023.3326574 + Anna Sterzik, Monique Meuschke, Douglas W. Cunningham, Kai Lawonn - Shape and Texture Recognition in Large Vision-Language Models - https://arxiv.org/abs/2503.23062 - arXiv:2503.23062v5 Announce Type: replace -Abstract: Shapes and textures are the basic building blocks of visual perception. The ability to identify shapes regardless of orientation, texture, or context, and to recognize textures and materials independently of their associated objects, is essential for a general visual understanding of the world. This work introduces the Large Shapes and Textures dataset (LAS&T), a giant collection of highly diverse shapes and textures, created by unsupervised extraction of patterns from natural images. This dataset is used to benchmark how effectively leading Large Vision-Language Models (LVLM/VLM) recognize and represent shapes, textures, and materials in 2D and 3D scenes. For shape recognition, we test the models' ability to match images of identical shapes that differ in orientation, texture, color, or environment. Our results show that the shape-recognition capabilities of LVLMs remain well below human performance, especially when multiple transformations are applied. 
LVLMs rely predominantly on high-level and semantic features and struggle with abstract shapes lacking class associations. For texture and material recognition, we evaluated the models' ability to identify images with identical textures and materials across different objects and environments. Interestingly, leading LVLMs approach human-level performance in recognizing materials in 3D scenes, yet substantially underperform humans when identifying simpler, more abstract 2D textures and shapes. These results are consistent across a wide range of leading LVLMs (GPT/Gemini/LLama/Qwen) and foundation vision models (DINO/CLIP), exposing major deficiencies in the ability of VLMs to extract low-level visual features. In contrast, humans and simple nets trained directly for these tasks achieve high accuracy. The LAS&T dataset, featuring over 700,000 images for 2D/3D shape and textures recognition and retrieval, is freely available. - oai:arXiv.org:2503.23062v5 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + CupCleaner: A Hybrid Data Cleaning Approach for Comment Updating + https://arxiv.org/abs/2308.06898 + arXiv:2308.06898v2 Announce Type: replace +Abstract: Comment updating is an emerging task in software evolution that aims to automatically revise source code comments in accordance with code changes. This task plays a vital role in maintaining code-comment consistency throughout software development. Recently, deep learning-based approaches have shown great potential in addressing comment updating by learning complex patterns between code edits and corresponding comment modifications. However, the effectiveness of these learning-based approaches heavily depends on the quality of training data. Existing datasets are typically constructed by mining version histories from open-source repositories such as GitHub, where there is often a lack of quality control over comment edits. 
As a result, these datasets may contain noisy or inconsistent samples that hinder model learning and generalization. In this paper, we focus on cleaning existing comment updating datasets, considering both the data's characteristics in the updating scenario and their implications on the model training process. We propose a hybrid statistical approach named CupCleaner (Comment UPdating's CLEANER) to achieve this purpose. Specifically, we combine static semantic information within data samples and dynamic loss information during the training process to clean the dataset. Experimental results demonstrate that, on the same test set, both the individual static strategy and the dynamic strategy can significantly filter out a portion of the data and enhance the performance of the model. Furthermore, employing a model ensemble approach can combine the characteristics of static and dynamic cleaning, further enhancing the performance of the model and the reliability of its output results. + oai:arXiv.org:2308.06898v2 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/publicdomain/zero/1.0/ - Sagi Eppel, Mor Bismut, Alona Faktor-Strugatski + http://creativecommons.org/licenses/by-sa/4.0/ + Qingyuan Liang, Zeyu Sun, Qihao Zhu, Junhao Hu, Yifan Zhao, Yakun Zhang, Lu Zhang - ConsDreamer: Advancing Multi-View Consistency for Zero-Shot Text-to-3D Generation - https://arxiv.org/abs/2504.02316 - arXiv:2504.02316v2 Announce Type: replace -Abstract: Recent advances in zero-shot text-to-3D generation have revolutionized 3D content creation by enabling direct synthesis from textual descriptions. While state-of-the-art methods leverage 3D Gaussian Splatting with score distillation to enhance multi-view rendering through pre-trained text-to-image (T2I) models, they suffer from inherent prior view biases in T2I priors. 
These biases lead to inconsistent 3D generation, particularly manifesting as the multi-face Janus problem, where objects exhibit conflicting features across views. To address this fundamental challenge, we propose ConsDreamer, a novel method that mitigates view bias by refining both the conditional and unconditional terms in the score distillation process: (1) a View Disentanglement Module (VDM) that eliminates viewpoint biases in conditional prompts by decoupling irrelevant view components and injecting precise view control; and (2) a similarity-based partial order loss that enforces geometric consistency in the unconditional term by aligning cosine similarities with azimuth relationships. Extensive experiments demonstrate that ConsDreamer can be seamlessly integrated into various 3D representations and score distillation paradigms, effectively mitigating the multi-face Janus problem. - oai:arXiv.org:2504.02316v2 - cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Gradient-Free Privacy Leakage in Federated Language Models through Selective Weight Tampering + https://arxiv.org/abs/2310.16152 + arXiv:2310.16152v4 Announce Type: replace +Abstract: Federated learning (FL) has become a key component in various language modeling applications such as machine translation, next-word prediction, and medical record analysis. These applications are trained on datasets from many FL participants that often include privacy-sensitive data, such as healthcare records, phone/credit card numbers, login credentials, etc. Although FL enables computation without necessitating clients to share their raw data, existing works show that privacy leakage is still probable in federated language models. In this paper, we present two novel findings on the leakage of privacy-sensitive user data from federated large language models without requiring access to gradients. 
Firstly, we make a key observation that model snapshots from the intermediate rounds in FL can cause greater privacy leakage than the final trained model. Secondly, we identify that a malicious FL participant can aggravate the leakage by tampering with the model's selective weights that are responsible for memorizing the sensitive training data of some other clients, even without any cooperation from the server. Our best-performing method increases the membership inference recall by 29% and achieves up to 71% private data reconstruction, evidently outperforming existing attacks that consider much stronger adversary capabilities. Lastly, we recommend a balanced suite of techniques for an FL client to defend against such privacy risks.
Here, we systematically survey the evolving landscape of LLM-based methods for social media mental disorder analysis, spanning standard pretrained language models, RAG to mitigate hallucinations and contextual gaps, and agentic systems for autonomous reasoning and multi-step intervention. We organize existing work by technical paradigm and clinical target, extending beyond common internalizing disorders to include psychotic disorders and externalizing behaviors. Additionally, the paper comprehensively evaluates the performance of LLMs, including the impact of RAG, across various tasks. This work establishes a unified benchmark for the field, paving the way for the development of trustworthy, autonomous AI systems that can deliver precise and explainable mental health support. - oai:arXiv.org:2504.02800v3 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + CARTOS: A Charging-Aware Real-Time Operating System for Intermittent Batteryless Devices + https://arxiv.org/abs/2311.07227 + arXiv:2311.07227v3 Announce Type: replace +Abstract: This paper presents CARTOS, a charging-aware real-time operating system designed to enhance the functionality of intermittently-powered batteryless devices (IPDs) for various Internet of Things (IoT) applications. While IPDs offer significant advantages such as extended lifespan and operability in extreme environments, they pose unique challenges, including the need to ensure forward progress of program execution amidst variable energy availability and maintaining reliable real-time time behavior during power disruptions. To address these challenges, CARTOS introduces a mixed-preemption scheduling model that classifies tasks into computational and peripheral tasks, and ensures their efficient and timely execution by adopting just-in-time checkpointing for divisible computation tasks and uninterrupted execution for indivisible peripheral tasks. 
CARTOS also supports processing chains of tasks with precedence constraints and adapts its scheduling in response to environmental changes to offer continuous execution under diverse conditions. CARTOS is implemented with new APIs and components added to FreeRTOS but is designed for portability to other embedded RTOSs. Through real hardware experiments and simulations, CARTOS exhibits superior performance over state-of-the-art methods, demonstrating that it can serve as a practical platform for developing resilient, real-time sensing applications on IPDs. + oai:arXiv.org:2311.07227v3 + cs.OS + cs.SY + eess.SY + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/ICDEW67478.2025.00027 - Zhuohan Ge, Nicole Hu, Yubo Wang, Darian Li, Xinyi Zhu, Haoyang Li, Xin Zhang, Mingtao Zhang, Shihao Qi, Yuming Xu, Han Shi, Chen Jason Zhang, Qing Li + http://creativecommons.org/licenses/by/4.0/ + Mohsen Karimi, Yidi Wang, Youngbin Kim, Yoojin Lim, Hyoseung Kim - TabKAN: Advancing Tabular Data Analysis using Kolmogorov-Arnold Network - https://arxiv.org/abs/2504.06559 - arXiv:2504.06559v3 Announce Type: replace -Abstract: Tabular data analysis presents unique challenges that arise from heterogeneous feature types, missing values, and complex feature interactions. While traditional machine learning methods like gradient boosting often outperform deep learning, recent advancements in neural architectures offer promising alternatives. In this study, we introduce TabKAN, a novel framework for tabular data modeling based on Kolmogorov-Arnold Networks (KANs). Unlike conventional deep learning models, KANs use learnable activation functions on edges, which improves both interpretability and training efficiency. TabKAN incorporates modular KAN-based architectures designed for tabular analysis and proposes a transfer learning framework for knowledge transfer across domains. 
Furthermore, we develop a model-specific interpretability approach that reduces reliance on post hoc explanations. Extensive experiments on public datasets show that TabKAN achieves superior performance in supervised learning and significantly outperforms classical and Transformer-based models in binary and multi-class classification. The results demonstrate the potential of KAN-based architectures to bridge the gap between traditional machine learning and deep learning for structured data. - oai:arXiv.org:2504.06559v3 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + adF: A Novel System for Measuring Web Fingerprinting through Ads + https://arxiv.org/abs/2311.08769 + arXiv:2311.08769v3 Announce Type: replace +Abstract: This paper introduces adF, a novel system for analyzing the vulnerability of different devices, Operating Systems (OSes), and browsers to web fingerprinting. adF performs its measurements from code inserted in ads. We have used our system in several ad campaigns that delivered 5.40 million ad impressions. The collected data allow us to assess the vulnerability of current desktop and mobile devices to web fingerprinting. Based on our results, we estimate that 66% of desktop devices and 40% of mobile devices can be uniquely fingerprinted with our web fingerprinting system. However, the resilience to web fingerprinting varies significantly across browsers and device types, with Chrome on desktops being the most vulnerable configuration. + To counter web fingerprinting, we propose ShieldF, a simple solution which blocks the reporting by browsers of those attributes that we found in the analysis of our dataset that present the most significant discrimination power. Our experiments reveal that ShieldF outperforms all anti-fingerprinting solutions proposed by major browsers (Chrome, Safari and Firefox) offering an increase in the resilience offered to web fingerprinting up to 62% for some device configurations. 
ShieldF is available as an add-on for any Chromium-based browser. Moreover, it is readily adoptable by browser and mobile app developers. Its widespread use would lead to a significant improvement in the protection offered by browsers and mobile apps against web fingerprinting.
We regard the traditional algorithm as a prior generator, while the network re-infers and adaptively optimizes the prior through a dynamic loss function, reducing the inference burden of the network and enabling effective few-shot learning. To provide the prior, we propose the Granular Ball Pixel Computation (GBPC) algorithm. GBPC models pixel pairs in a luminance subspace using meta-granular balls and mines intra-ball information at multiple granular levels. At the fine-grained level, sliding granular balls assign adaptive weights to individual pixels to produce pixel-level prior fusion. At the coarse-grained level, the algorithm performs split computation within a single image to estimate positive and boundary domain distributions, enabling modality awareness and prior confidence estimation, which dynamically guide the loss weighting. The network and the algorithmic prior are coupled through the loss function to form an integrated framework. Thanks to the dynamic weighting mechanism, the network can adaptively adjust to different priors during training, enhancing its perception and fusion capability across modalities. We name this framework GBFF (Granular Ball Fusion Framework). Experiments on four fusion tasks demonstrate that even with only ten training image pairs per task, GBFF achieves superior performance in both visual quality and model compactness. Code is available at: https://github.com/DMinjie/GBFF
This motivates the use of active learning to query only the most informative pairwise comparisons, enabling effective clustering under budget constraints. In this work, we develop a principled active learning approach for correlation clustering by introducing several information-theoretic acquisition functions that prioritize queries based on entropy and expected information gain. These strategies aim to reduce uncertainty about the clustering structure as efficiently as possible. We evaluate our methods across a range of synthetic and real-world settings and show that they significantly outperform existing baselines in terms of clustering accuracy and query efficiency. Our results highlight the benefits of combining active learning with correlation clustering in settings where similarity information is costly or limited. + oai:arXiv.org:2402.03587v3 cs.LG - eess.IV stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Minjie Deng, Yan Wei, An Wu, Yuncan Ouyang, Hao Zhai, Qianyao Peng + IEEE International Conference on Data Mining (ICDM), 2025 + Linus Aronsson, Morteza Haghir Chehreghani - PinRec: Outcome-Conditioned, Multi-Token Generative Retrieval for Industry-Scale Recommendation Systems - https://arxiv.org/abs/2504.10507 - arXiv:2504.10507v4 Announce Type: replace -Abstract: Generative retrieval methods utilize generative sequential modeling techniques, such as transformers, to generate candidate items for recommender systems. These methods have demonstrated promising results in academic benchmarks, surpassing traditional retrieval models like two-tower architectures. However, current generative retrieval methods lack the scalability required for industrial recommender systems, and they are insufficiently flexible to satisfy the multiple metric requirements of modern systems. 
This paper introduces PinRec, a novel generative retrieval model developed for applications at Pinterest. PinRec utilizes outcome-conditioned generation, enabling modelers to specify how to balance various outcome metrics, such as the number of saves and clicks, to effectively align with business goals and user exploration. Additionally, PinRec incorporates multi-token generation to enhance output diversity while optimizing generation. Our experiments demonstrate that PinRec can successfully balance performance, diversity, and efficiency, delivering a significant positive impact to users through generative models. This paper marks a significant milestone in generative retrieval, as it presents, to our knowledge, the first rigorous study on implementing generative retrieval at the scale of Pinterest.
Then, we utilize the language understanding and generation abilities of a large language model (LLM) to rewrite and generate queries from the queries and documents in existing training data, which can construct new query-document pairs as training data. Extensive offline experiments and online A/B tests show that the proposed approaches effectively improve the performance of relevance modeling.
- oai:arXiv.org:2504.11631v4 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + AI-powered Code Review with LLMs: Early Results + https://arxiv.org/abs/2404.18496 + arXiv:2404.18496v2 Announce Type: replace +Abstract: In this paper, we present a novel approach to improving software quality and efficiency through a Large Language Model (LLM)-based model designed to review code and identify potential issues. Our proposed LLM-based AI agent model is trained on large code repositories. This training includes code reviews, bug reports, and documentation of best practices. It aims to detect code smells, identify potential bugs, provide suggestions for improvement, and optimize the code. Unlike traditional static code analysis tools, our LLM-based AI agent has the ability to predict future potential risks in the code. This supports a dual goal of improving code quality and enhancing developer education by encouraging a deeper understanding of best practices and efficient coding techniques. Furthermore, we explore the model's effectiveness in suggesting improvements that significantly reduce post-release bugs and enhance code review processes, as evidenced by an analysis of developer sentiment toward LLM feedback. For future work, we aim to assess the accuracy and efficiency of LLM-generated documentation updates in comparison to manual methods. This will involve an empirical study focusing on manually conducted code reviews to identify code smells and bugs, alongside an evaluation of best practice documentation, augmented by insights from developer discussions and code reviews. Our goal is to not only refine the accuracy of our LLM-based tool but also to underscore its potential in streamlining the software development lifecycle through proactive code improvement and education. 
+ oai:arXiv.org:2404.18496v2 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Quentin Rommel, Michael Hibbard, Pavan Shukla, Himanshu Save, Srinivas Bettadpur, Ufuk Topcu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zeeshan Rasheed, Malik Abdul Sami, Muhammad Waseem, Kai-Kristian Kemell, Xiaofeng Wang, Anh Nguyen, Kari Syst\"a, Pekka Abrahamsson - Haptic-based Complementary Filter for Rigid Body Rotations - https://arxiv.org/abs/2504.14570 - arXiv:2504.14570v2 Announce Type: replace -Abstract: The non-commutative nature of 3D rotations poses well-known challenges in generalizing planar problems to three-dimensional ones, even more so in contact-rich tasks where haptic information (i.e., forces/torques) is involved. In this sense, not all learning-based algorithms that are currently available generalize to 3D orientation estimation. Non-linear filters defined on $\mathbf{\mathbb{SO}(3)}$ are widely used with inertial measurement sensors; however, none of them have been used with haptic measurements. This paper presents a unique complementary filtering framework that interprets the geometric shape of objects in the form of superquadrics, exploits the symmetry of $\mathbf{\mathbb{SO}(3)}$, and uses force and vision sensors as measurements to provide an estimate of orientation. The framework's robustness and almost global stability are substantiated by a set of experiments on a dual-arm robotic setup. - oai:arXiv.org:2504.14570v2 - cs.RO - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search + https://arxiv.org/abs/2405.06073 + arXiv:2405.06073v2 Announce Type: replace +Abstract: We study the robustness of data-centric methods to find neural network architectures, known as neural architecture search (NAS), against data poisoning. 
To audit this robustness, we design a poisoning framework that enables the systematic evaluation of the ability of NAS to produce architectures under data corruption. Our framework examines four off-the-shelf NAS algorithms, representing different approaches to architecture discovery, against four data poisoning attacks, including one we tailor specifically for NAS. In our evaluation with the CIFAR-10 and CIFAR-100 benchmarks, we show that NAS is \emph{seemingly} robust to data poisoning, showing marginal accuracy drops even under large poisoning budgets. However, we demonstrate that when considering NAS algorithms designed to achieve a few percentage points of accuracy gain, this expected improvement can be substantially diminished under data poisoning. We also show that the reduction varies across NAS algorithms and analyze the factors contributing to their robustness. Our findings are: (1) Training-based NAS algorithms are the least robust due to their reliance on data. (2) Training-free NAS approaches are the most robust but produce architectures that perform similarly to random selections from the search space. (3) NAS algorithms can produce architectures with improved accuracy, even when using out-of-distribution data like MNIST. We lastly discuss potential countermeasures. Our code is available at: https://github.com/ztcoalson/NAS-Robustness-to-Data-Poisoning + oai:arXiv.org:2405.06073v2 + cs.LG + cs.CR + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Amit Kumar, Domenico Campolo, Ravi N. Banavar + Zachary Coalson, Huazheng Wang, Qingyun Wu, Sanghyun Hong - Towards Task-Oriented Flying: Framework, Infrastructure, and Principles - https://arxiv.org/abs/2504.15129 - arXiv:2504.15129v2 Announce Type: replace -Abstract: Deploying robot learning methods to aerial robots in unstructured environments remains both challenging and promising. 
While recent advances in deep reinforcement learning (DRL) have enabled end-to-end flight control, the field still lacks systematic design guidelines and a unified infrastructure to support reproducible training and real-world deployment. We present a task-oriented framework for end-to-end DRL in quadrotors that integrates design principles for complex task specification and reveals the interdependencies among simulated task definition, training design principles, and physical deployment. Our framework involves software infrastructure, hardware platforms, and open-source firmware to support a full-stack learning infrastructure and workflow. Extensive empirical results demonstrate robust flight and sim-to-real generalization under real-world disturbances. By reducing the entry barrier for deploying learning-based controllers on aerial robots, our work lays a practical foundation for advancing autonomous flight in dynamic and unstructured environments. - oai:arXiv.org:2504.15129v2 - cs.RO - cs.AI + Multi-Scale Direction-Aware Network for Infrared Small Target Detection + https://arxiv.org/abs/2406.02037 + arXiv:2406.02037v4 Announce Type: replace +Abstract: Infrared small target detection faces the problem that it is difficult to effectively separate the background and the target. Existing deep learning-based methods focus on edge and shape features, but ignore the richer structural differences and detailed information embedded in high-frequency components from different directions, thereby failing to fully exploit the value of high-frequency directional features in target perception. To address this limitation, we propose a multi-scale direction-aware network (MSDA-Net), which is the first attempt to integrate the high-frequency directional features of infrared small targets as domain prior knowledge into neural networks. 
Specifically, to fully mine the high-frequency directional features, on the one hand, a high-frequency direction injection (HFDI) module without trainable parameters is constructed to inject the high-frequency directional information of the original image into the network. On the other hand, a multi-scale direction-aware (MSDA) module is constructed, which promotes the full extraction of local relations at different scales and the full perception of key features in different directions. In addition, considering the characteristics of infrared small targets, we construct a feature aggregation (FA) structure to address target disappearance in high-level feature maps, and a feature calibration fusion (FCF) module to alleviate feature bias during cross-layer feature fusion. Extensive experimental results show that our MSDA-Net achieves state-of-the-art (SOTA) results on multiple public datasets. The code is available at https://github.com/YuChuang1205/MSDA-Net
With the recent advancements in Large Language Models (LLMs), their capabilities in text comprehension, reasoning, and generation may enable the generation and recovery of DR for architecture decisions. In this study, we evaluated the performance of LLMs in generating DR for architecture decisions. First, we collected 50 Stack Overflow (SO) posts, 25 GitHub issues, and 25 GitHub discussions related to architecture decisions to construct a dataset of 100 architecture-related problems. Then, we selected five LLMs to generate DR for the architecture decisions with three prompting strategies, including zero-shot, chain of thought (CoT), and LLM-based agents. With the DR provided by human experts as ground truth, the Precision of LLM-generated DR with the three prompting strategies ranges from 0.267 to 0.278, Recall from 0.627 to 0.715, and F1-score from 0.351 to 0.389. Additionally, 64.45% to 69.42% of the arguments of DR not mentioned by human experts are also helpful, 4.12% to 4.87% of the arguments have uncertain correctness, and 1.59% to 3.24% of the arguments are potentially misleading. To further understand the trustworthiness and applicability of LLM-generated DR in practice, we conducted semi-structured interviews with six practitioners. Based on the experimental and interview results, we discussed the pros and cons of the three prompting strategies, the strengths and limitations of LLM-generated DR, and the implications for the practical use of LLM-generated DR. - oai:arXiv.org:2504.20781v3 - cs.SE + A survey on the impacts of recommender systems on users, items, and human-AI ecosystems + https://arxiv.org/abs/2407.01630 + arXiv:2407.01630v2 Announce Type: replace +Abstract: Recommendation systems and assistants (in short, recommenders) influence through online platforms most actions of our daily lives, suggesting items or providing solutions based on users' preferences or requests. 
This survey systematically reviews, categorises, and discusses the impact of recommenders in four human-AI ecosystems -- social media, online retail, urban mapping and generative AI ecosystems. Its scope is to systematise a fast-growing field in which terminologies employed to classify methodologies and outcomes are fragmented and unsystematic. This is a crucial contribution to the literature because terminologies vary substantially across disciplines and ecosystems, hindering comparison and accumulation of knowledge in the field. We follow the customary steps of a qualitative systematic review, gathering 154 articles from different disciplines to develop a parsimonious taxonomy of methodologies employed (empirical, simulation, observational, controlled), outcomes observed (concentration, content degradation, discrimination, diversity, echo chamber, filter bubble, homogenisation, polarisation, radicalisation, volume), and their level of analysis (individual, item, and ecosystem). We systematically discuss substantive and methodological commonalities across ecosystems, and highlight potential avenues for future research. The survey is addressed to scholars and practitioners interested in different human-AI ecosystems, policymakers and institutional stakeholders who want to understand better the measurable outcomes of recommenders, and tech companies who wish to obtain a systematic view of the impact of their recommenders.
+ oai:arXiv.org:2407.01630v2 + cs.IR cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CY + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Xiyu Zhou, Ruiyin Li, Peng Liang, Beiqi Zhang, Mojtaba Shahin, Zengyang Li, Chen Yang + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Luca Pappalardo, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, Valentina Pansanella, Giulio Rossetti, Gizem Gezici, Fosca Giannotti, Margherita Lalli, Giovanni Mauro, Gabriele Barlacchi, Daniele Gambetta, Virginia Morini, Dino Pedreschi, Emanuele Ferragina - A Formalism for Optimal Search with Dynamic Heuristics (Extended Version) - https://arxiv.org/abs/2504.21131 - arXiv:2504.21131v3 Announce Type: replace -Abstract: While most heuristics studied in heuristic search depend only on the state, some accumulate information during search and thus also depend on the search history. Various existing approaches use such dynamic heuristics in $\mathrm{A}^*$-like algorithms and appeal to classic results for $\mathrm{A}^*$ to show optimality. However, doing so ignores the complexities of searching with a mutable heuristic. In this paper we formalize the idea of dynamic heuristics and use them in a generic algorithm framework. We study a particular instantiation that models $\mathrm{A}^*$ with dynamic heuristics and show general optimality results. Finally we show how existing approaches from classical planning can be viewed as special cases of this instantiation, making it possible to directly apply our optimality results. 
- oai:arXiv.org:2504.21131v3 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Leveraging Machine Learning to Identify Gendered Stereotypes and Body Image Concerns on Diet and Fitness Online Forums + https://arxiv.org/abs/2407.03551 + arXiv:2407.03551v2 Announce Type: replace +Abstract: The pervasive expectations about ideal body types in Western society can lead to body image concerns, dissatisfaction, and in extreme cases, eating disorders and other psychopathologies related to body image. While previous research has focused on online pro-anorexia communities glorifying the "thin ideal," less attention has been given to the broader spectrum of body image concerns or how emerging disorders like muscle dysmorphia ("bigorexia") present on online platforms. To address this gap, we analyze 46 Reddit forums related to diet, fitness, and mental health. We map these communities along gender and body ideal dimensions, revealing distinct patterns of emotional expression and community support. Feminine-oriented communities, especially those endorsing the thin ideal, express higher levels of negative emotions and receive caring comments in response. In contrast, muscular ideal communities display less negativity, regardless of gender orientation, but receive aggressive compliments in response, marked by admiration and toxicity. Mental health discussions align more with thin ideal, feminine-leaning spaces. By uncovering these gendered emotional dynamics, our findings can inform the development of moderation strategies that foster supportive interactions while reducing exposure to harmful content. 
+ oai:arXiv.org:2407.03551v2 + cs.SI + cs.CL + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Remo Christen, Florian Pommerening, Clemens B\"uchner, Malte Helmert + Minh Duc Chu, Cinthia S\'anchez, Zihao He, Rebecca Dorn, Stuart Murray, Kristina Lerman - Differentially Private Secure Multiplication with Erasures and Adversaries - https://arxiv.org/abs/2504.21178 - arXiv:2504.21178v2 Announce Type: replace -Abstract: We consider a private distributed multiplication problem involving N computation nodes and T colluding nodes. Shamir's secret sharing algorithm provides perfect information-theoretic privacy, while requiring an honest majority, i.e., N \ge 2T + 1. Recent work has investigated approximate computation and characterized privacy-accuracy trade-offs for the honest minority setting N \le 2T for real-valued data, quantifying privacy leakage via the differential privacy (DP) framework and accuracy via the mean squared error. However, it does not incorporate the error correction capabilities of Shamir's secret-sharing algorithm. This paper develops a new polynomial-based coding scheme for secure multiplication with an honest minority, and characterizes its achievable privacy-utility tradeoff, showing that the tradeoff can approach the converse bound as closely as desired. Unlike previous schemes, the proposed scheme inherits the capability of the Reed-Solomon (RS) code to tolerate erasures and adversaries. We utilize a modified Berlekamp-Welch algorithm over the real number field to detect adversarial nodes. - oai:arXiv.org:2504.21178v2 - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + Entropy-Informed Weighting Channel Normalizing Flow for Deep Generative Models + https://arxiv.org/abs/2407.04958 + arXiv:2407.04958v2 Announce Type: replace +Abstract: Normalizing Flows (NFs) are widely used in deep generative models for their exact likelihood estimation and efficient sampling. 
+ However, they require substantial memory since the latent space matches the input dimension. + Multi-scale architectures address this by progressively reducing latent dimensions while preserving reversibility. + Existing multi-scale architectures use simple, static channel-wise splitting, limiting expressiveness. To improve this, we introduce a regularized, feature-dependent $\mathtt{Shuffle}$ operation and integrate it into vanilla multi-scale architecture. + This operation adaptively generates channel-wise weights and shuffles latent variables before splitting them. + We observe that such operation guides the variables to evolve in the direction of entropy increase, hence we refer to NFs with the $\mathtt{Shuffle}$ operation as \emph{Entropy-Informed Weighting Channel Normalizing Flow} (EIW-Flow). + Extensive experiments on CIFAR-10, CelebA, ImageNet, and LSUN demonstrate that EIW-Flow achieves state-of-the-art density estimation and competitive sample quality for deep generative modeling, with minimal computational overhead. + oai:arXiv.org:2407.04958v2 + cs.LG + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haoyang Hu, Viveck R. Cadambe + 10.1016/j.patcog.2025.112442 + Chen, W., Du, S., Li, S., Zeng, D., & Paisley, J. (2025). Entropy-informed weighting channel normalizing flow for deep generative models. Pattern Recognition, 112442 + Wei Chen, Shian Du, Shigui Li, Delu Zeng, John Paisley - Capability-Driven Skill Generation with LLMs: A RAG-Based Approach for Reusing Existing Libraries and Interfaces - https://arxiv.org/abs/2505.03295 - arXiv:2505.03295v2 Announce Type: replace -Abstract: Modern automation systems increasingly rely on modular architectures, with capabilities and skills as one solution approach. Capabilities define the functions of resources in a machine-readable form and skills provide the concrete implementations that realize those capabilities. 
However, the development of a skill implementation conforming to a corresponding capability remains a time-consuming and challenging task. In this paper, we present a method that treats capabilities as contracts for skill implementations and leverages large language models to generate executable code based on natural language user input. A key feature of our approach is the integration of existing software libraries and interface technologies, enabling the generation of skill implementations across different target languages. We introduce a framework that allows users to incorporate their own libraries and resource interfaces into the code generation process through a retrieval-augmented generation architecture. The proposed method is evaluated using an autonomous mobile robot controlled via Python and ROS 2, demonstrating the feasibility and flexibility of the approach. - oai:arXiv.org:2505.03295v2 - cs.AI - cs.RO - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + Counting Small Induced Subgraphs: Hardness via Fourier Analysis + https://arxiv.org/abs/2407.07051 + arXiv:2407.07051v3 Announce Type: replace +Abstract: For a fixed graph property $\Phi$ and integer $k \geq 1$, consider the problem of counting the induced $k$-vertex subgraphs satisfying $\Phi$ in an input graph $G$. This problem can be solved by brute-force in time $O(n^{k})$. Under ETH, we prove several lower bounds on the optimal exponent in this running time: + If $\Phi$ is edge-monotone (i.e., closed under deleting edges), then ETH rules out $n^{o(k)}$ time algorithms for this problem. This strengthens a recent lower bound by D\"{o}ring, Marx and Wellnitz [STOC 2024]. Our result also holds for counting modulo fixed primes. + If at most $(2-\varepsilon)^{\binom{k}{2}}$ graphs on $k$ vertices satisfy $\Phi$, for some $\varepsilon > 0$, then ETH also rules out an exponent of $o(k)$. 
This holds even when the graphs in $\Phi$ have arbitrary individual weights, generalizing previous results for hereditary properties by Focke and Roth [SIAM J. Comput. 2024]. + If $\Phi$ is non-trivial and excludes $\beta_\Phi$ edge-densities, then the optimal exponent under ETH is $\Omega(\beta_\Phi)$. This holds even when the graphs in $\Phi$ have arbitrary individual weights, generalizing previous results by Roth, Schmitt and Wellnitz [SIAM J. Comput. 2024]. + In all cases, we also obtain $\mathsf{\#W[1]}$-hardness if $k$ is part of the input and considered as the parameter. We also obtain lower bounds on the Weisfeiler-Leman dimension. As opposed to the nontrivial techniques from combinatorics, group theory, and simplicial topology used before, our results follow from a relatively straightforward ``algebraization'' of the problem in terms of polynomials, combined with applications of simple algebraic facts, which can also be interpreted in terms of Fourier analysis. + oai:arXiv.org:2407.07051v3 + cs.CC + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Radu Curticapean, Daniel Neuen + + + Towards Robust Infrared Small Target Detection: A Feature-Enhanced and Sensitivity-Tunable Framework + https://arxiv.org/abs/2407.20090 + arXiv:2407.20090v3 Announce Type: replace +Abstract: Recently, single-frame infrared small target (SIRST) detection technology has attracted widespread attention. Different from most existing deep learning-based methods that focus on improving network architectures, we propose a feature-enhanced and sensitivity-tunable (FEST) framework, which is compatible with existing SIRST detection networks and further enhances their detection performance. The FEST framework improves the model's robustness from two aspects: feature enhancement and target confidence regulation. 
For feature enhancement, we employ a multi-scale fusion strategy to improve the model's perception of multi-scale features of multi-size targets, and design an edge enhancement difficulty mining (EEDM) loss to guide the network to continuously focus on challenging target regions and edge features during training. For target confidence regulation, an adjustable sensitivity (AS) strategy is proposed for network post-processing. This strategy enhances the model's adaptability in complex scenarios and significantly improves the detection rate of infrared small targets while maintaining segmentation accuracy. Extensive experimental results show that our FEST framework can effectively enhance the performance of existing SIRST detection networks. The code is available at https://github.com/YuChuang1205/FEST-Framework
      oai:arXiv.org:2407.20090v3
      cs.CV
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Jinmiao Zhao, Zelin Shi, Chuang Yu, Yunpeng Liu, Yimian Dai
    
    
      Fitts' List Revisited: An Empirical Study on Function Allocation in a Two-Agent Physical Human-Robot Collaborative Position/Force Task
      https://arxiv.org/abs/2505.04722
      arXiv:2505.04722v2 Announce Type: replace
Abstract: In this letter, we investigate whether classical function allocation (the principle of assigning tasks to either a human or a machine) holds for physical Human-Robot Collaboration, which is important for providing insights for Industry 5.0 to guide how to best augment rather than replace workers. This study empirically tests the applicability of Fitts' List within physical Human-Robot Collaboration, by conducting a user study (N=26, within-subject design) to evaluate four distinct allocations of position/force control between human and robot in an abstract blending task. 
We hypothesize that the function in which humans control the position achieves better performance and receives higher user ratings. When allocating position control to the human and force control to the robot, compared to the opposite case, we observed a significant improvement in preventing overblending. This was also perceived better in terms of physical demand and overall system acceptance, while participants experienced greater autonomy, more engagement and less frustration. An interesting insight was that the supervisory role (when the robot controls both position and force) was rated second best in terms of subjective acceptance. Another surprising insight was that if position control was delegated to the robot, the participants perceived much lower autonomy than when the force control was delegated to the robot. These findings empirically support applying Fitts' principles to static function allocation for physical collaboration, while also revealing important nuanced user experience trade-offs, particularly regarding perceived autonomy when delegating position control. - oai:arXiv.org:2505.04722v2 - cs.RO + Studying the Effects of Collaboration in Interactive Theme Discovery Systems + https://arxiv.org/abs/2408.09030 + arXiv:2408.09030v3 Announce Type: replace +Abstract: NLP-assisted solutions have gained considerable traction to support qualitative data analysis. However, there does not exist a unified evaluation framework that can account for the many different settings in which qualitative researchers may employ them. In this paper, we take a first step in this direction by proposing an evaluation framework to study the way in which different tools may result in different outcomes depending on the collaboration strategy employed. Specifically, we study the impact of synchronous vs. 
asynchronous collaboration using two different NLP-assisted qualitative research tools and present a comprehensive analysis of significant differences in the consistency, cohesiveness, and correctness of their outputs. + oai:arXiv.org:2408.09030v3 + cs.CL cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - 10.1109/LRA.2025.3632607 - N. Mol, J. M. Prendergast, D. A. Abbink and L. Peternel, "Fitts' List Revisited: An Empirical Study on Function Allocation," in IEEE Robot. Autom. Lett., vol. 11, no. 1, pp. 202-209, Jan. 2026 - Nicky Mol, J. Micah Prendergast, David A. Abbink, Luka Peternel + Alvin Po-Chun Chen, Dananjay Srinivas, Rohan Das, Alexandra Barry, Maksim Seniw, Maria Leonor Pacheco - Cross-Problem Solving for Network Optimization: Is Problem-Aware Learning the Key? - https://arxiv.org/abs/2505.05067 - arXiv:2505.05067v2 Announce Type: replace -Abstract: As intelligent network services continue to diversify, ensuring efficient and adaptive resource allocation in edge networks has become increasingly critical. Yet the wide functional variations across services often give rise to new and unforeseen optimization problems, rendering traditional manual modeling and solver design both time-consuming and inflexible. This limitation reveals a key gap between current methods and human solving - the inability to recognize and understand problem characteristics. It raises the question of whether problem-aware learning can bridge this gap and support effective cross-problem generalization. To answer this question, we propose a problem-aware diffusion (PAD) model, which leverages a problem-aware learning framework to enable cross-problem generalization. By explicitly encoding the mathematical formulations of optimization problems into token-level embeddings, PAD empowers the model to understand and adapt to problem structures. 
Extensive experiments across ten representative network optimization problems show that PAD generalizes well to unseen problems while avoiding the inefficiency of building new solvers from scratch, yet still delivering competitive solution quality. Meanwhile, an auxiliary constraint-aware module is designed to enforce solution validity further. The experiments indicate that problem-aware learning opens a promising direction toward general-purpose solvers for intelligent network operation and resource management. Our code is open source at https://github.com/qiyu3816/PAD. - oai:arXiv.org:2505.05067v2 - cs.NI - Wed, 10 Dec 2025 00:00:00 -0500 + Point Neuron Learning: A New Physics-Informed Neural Network Architecture + https://arxiv.org/abs/2408.16969 + arXiv:2408.16969v2 Announce Type: replace +Abstract: Machine learning and neural networks have advanced numerous research domains, but challenges such as large training data requirements and inconsistent model performance hinder their application in certain scientific problems. To overcome these challenges, researchers have investigated integrating physics principles into machine learning models, mainly through: (i) physics-guided loss functions, generally termed as physics-informed neural networks, and (ii) physics-guided architectural design. While both approaches have demonstrated success across multiple scientific disciplines, they have limitations including being trapped to a local minimum, poor interpretability, and restricted generalizability. This paper proposes a new physics-informed neural network (PINN) architecture that combines the strengths of both approaches by embedding the fundamental solution of the wave equation into the network architecture, enabling the learned model to strictly satisfy the wave equation. The proposed point neuron learning method can model an arbitrary sound field based on microphone observations without any dataset. 
Compared to other PINN methods, our approach directly processes complex numbers and offers better interpretability and generalizability. We evaluate the versatility of the proposed architecture by a sound field reconstruction problem in a reverberant environment. Results indicate that the point neuron method outperforms two competing methods and can efficiently handle noisy environments with sparse microphone observations. + oai:arXiv.org:2408.16969v2 + cs.LG + cs.SD + eess.AS + eess.SP + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ruihuai Liang, Bo Yang, Pengyu Chen, Xuelin Cao, Zhiwen Yu, H. Vincent Poor, Chau Yuen + 10.1186/s13636-024-00376-0 + Bi H, Abhayapala TD. Point neuron learning: a new physics informed neural network architecture. EURASIP J Audio Speech Music Process 2024, 56 (2024) + Hanwen Bi, Thushara D. Abhayapala - Video Dataset Condensation with Diffusion Models - https://arxiv.org/abs/2505.06670 - arXiv:2505.06670v2 Announce Type: replace -Abstract: In recent years, the rapid expansion of dataset sizes and the increasing complexity of deep learning models have significantly escalated the demand for computational resources, both for data storage and model training. Dataset distillation has emerged as a promising solution to address this challenge by generating a compact synthetic dataset that retains the essential information from a large real dataset. However, existing methods often suffer from limited performance, particularly in the video domain. In this paper, we focus on video dataset distillation. We begin by employing a video diffusion model to generate synthetic videos. Since the videos are generated only once, this significantly reduces computational costs. Next, we introduce the Video Spatio-Temporal U-Net (VST-UNet), a model designed to select a diverse and informative subset of videos that effectively captures the characteristics of the original dataset. 
To further optimize computational efficiency, we explore a training-free clustering algorithm, Temporal-Aware Cluster-based Distillation (TAC-DT), to select representative videos without requiring additional training overhead. We validate the effectiveness of our approach through extensive experiments on four benchmark datasets, demonstrating performance improvements of up to \(10.61\%\) over the state-of-the-art. Our method consistently outperforms existing approaches across all datasets, establishing a new benchmark for video dataset distillation. 
    - oai:arXiv.org:2505.06670v2
      cs.CV
      Wed, 10 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by-nc-sa/4.0/
      Zhe Li, Hadrien Reynaud, Mischa Dombrowski, Sarah Cechnicka, Franciskus Xaverius Erick, Bernhard Kainz
    
    
      Tokenizing Motion: A Generative Approach for Scene Dynamics Compression
      https://arxiv.org/abs/2410.09768
      arXiv:2410.09768v4 Announce Type: replace
Abstract: This paper proposes a novel generative video compression framework that leverages motion pattern priors, derived from subtle dynamics in common scenes (e.g., swaying flowers or a boat drifting on water), rather than relying on video content priors (e.g., talking faces or human bodies). These compact motion priors enable a new approach to ultra-low bitrate communication while achieving high-quality reconstruction across diverse scene contents. At the encoder side, motion priors can be streamlined into compact representations via a dense-to-sparse transformation. At the decoder side, these priors facilitate the reconstruction of scene dynamics using an advanced flow-driven diffusion model. Experimental results illustrate that the proposed method can achieve superior rate-distortion performance and outperform the state-of-the-art conventional video codec Enhanced Compression Model (ECM) on scene dynamics sequences. The project page can be found at https://github.com/xyzysz/GNVDC. 
+ oai:arXiv.org:2410.09768v4 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + eess.IV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-sa/4.0/ - Zhe Li, Hadrien Reynaud, Mischa Dombrowski, Sarah Cechnicka, Franciskus Xaverius Erick, Bernhard Kainz + Shanzhi Yin, Zihan Zhang, Bolin Chen, Shiqi Wang, Yan Ye - MELLM: A Flow-Guided Large Language Model for Micro-Expression Understanding - https://arxiv.org/abs/2505.07007 - arXiv:2505.07007v3 Announce Type: replace -Abstract: Micro-expressions (MEs), brief and low-intensity facial movements revealing concealed emotions, are crucial for affective computing. Despite notable progress in ME recognition, existing methods are largely confined to discrete emotion classification, lacking the capacity for comprehensive ME Understanding (MEU), particularly in interpreting subtle facial dynamics and underlying emotional cues. While Multimodal Large Language Models (MLLMs) offer potential for MEU with their advanced reasoning abilities, they still struggle to perceive such subtle facial affective behaviors. To bridge this gap, we propose a ME Large Language Model (MELLM) that integrates optical flow-based sensitivity to subtle facial motions with the powerful inference ability of LLMs. Specifically, an iterative, warping-based optical-flow estimator, named MEFlowNet, is introduced to precisely capture facial micro-movements. For its training and evaluation, we construct MEFlowDataset, a large-scale optical-flow dataset with 54,611 onset-apex image pairs spanning diverse identities and subtle facial motions. Subsequently, we design a Flow-Guided Micro-Expression Understanding paradigm. Under this framework, the optical flow signals extracted by MEFlowNet are leveraged to build MEU-Instruct, an instruction-tuning dataset for MEU. MELLM is then fine-tuned on MEU-Instruct, enabling it to translate subtle motion patterns into human-readable descriptions and generate corresponding emotional inferences. 
Experiments demonstrate that MEFlowNet significantly outperforms existing optical flow methods in facial and ME-flow estimation, while MELLM achieves state-of-the-art accuracy and generalization across multiple ME benchmarks. To the best of our knowledge, this work presents two key contributions: MEFlowNet as the first dedicated ME flow estimator, and MELLM as the first LLM tailored for MEU. - oai:arXiv.org:2505.07007v3 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Self-Supervised Learning and Opportunistic Inference for Continuous Monitoring of Freezing of Gait in Parkinson's Disease + https://arxiv.org/abs/2410.21326 + arXiv:2410.21326v2 Announce Type: replace +Abstract: Parkinson's disease (PD) is a progressive neurological disorder that impacts the quality of life significantly, making in-home monitoring of motor symptoms such as Freezing of Gait (FoG) critical. However, existing symptom monitoring technologies are power-hungry, rely on extensive amounts of labeled data, and operate in controlled settings. These shortcomings limit real-world deployment of the technology. This work presents LIFT-PD, a computationally-efficient self-supervised learning framework for real-time FoG detection. Our method combines self-supervised pre-training on unlabeled data with a novel differential hopping windowing technique to learn from limited labeled instances. An opportunistic model activation module further minimizes power consumption by selectively activating the deep learning module only during active periods. Extensive experimental results show that LIFT-PD achieves a 7.25% increase in precision and 4.4% improvement in accuracy compared to supervised models while using as low as 40% of the labeled training data used for supervised learning. Additionally, the model activation module reduces inference time by up to 67% compared to continuous inference. 
LIFT-PD paves the way for practical, energy-efficient, and unobtrusive in-home monitoring of PD patients with minimal labeling requirements. + oai:arXiv.org:2410.21326v2 + cs.LG + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sirui Zhao, Zhengye Zhang, Shifeng Liu, Xinglong Mao, Shukang Yin, Chaoyou Fu, Tong Xu, Enhong Chen + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Shovito Barua Soumma, Daniel Peterson, Shyamal Mehta, Hassan Ghasemzadeh - Leveraging Multi-Modal Information to Enhance Dataset Distillation - https://arxiv.org/abs/2505.08605 - arXiv:2505.08605v3 Announce Type: replace -Abstract: Dataset distillation aims to create a small and highly representative synthetic dataset that preserves the essential information of a larger real dataset. Beyond reducing storage and computational costs, related approaches offer a promising avenue for privacy preservation in computer vision by eliminating the need to store or share sensitive real-world images. Existing methods focus solely on optimizing visual representations, overlooking the potential of multi-modal information. In this work, we propose a multi-modal dataset distillation framework that incorporates two key enhancements: caption-guided supervision and object-centric masking. To leverage textual information, we introduce two strategies: caption concatenation, which fuses caption embeddings with visual features during classification, and caption matching, which enforces semantic alignment between real and synthetic data through a caption-based loss. To improve data utility and reduce unnecessary background noise, we employ segmentation masks to isolate target objects and introduce two novel losses: masked feature alignment and masked gradient matching, both aimed at promoting object-centric learning. 
Extensive evaluations demonstrate that our approach improves downstream performance while promoting privacy protection by minimizing exposure to real data. - oai:arXiv.org:2505.08605v3 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Control Node Placement and Structural Controllability of Water Quality Dynamics in Drinking Networks + https://arxiv.org/abs/2411.01361 + arXiv:2411.01361v3 Announce Type: replace +Abstract: Chlorine, the most widely used disinfectant, needs to be adequately distributed in water distribution networks (WDNs) to maintain consistent residual levels and ensure safe water. This is performed through control node injections at the treatment plant via booster stations distributed across the WDNs. While previous studies have applied various optimization-based approaches for booster station placement, many have failed to consider the coverage of the station injections and the dynamic nature of WDNs. In particular, variations in hydraulics and demand significantly impact the reachability and efficacy of chlorine injections which then impact optimal placement of booster stations. This study introduces a novel formulation that combines control- and graph-theoretic approaches to solve the booster station placement problem. Unlike traditional methods, our approach emphasizes maximizing the system's ability to control disinfectant levels with minimal control energy, taking into account the time-varying hydraulic profiles that lead to different optimal station placements. We propose a simple weighting technique to determine the placements by assessing the structural controllability of each configuration, based on the network's topology, independent of specific parameters like decay rates or pipe roughness. This method ensures effective chlorine coverage across the network. Our approach is validated on different networks, demonstrating its operational effectiveness, scalability, and practicality. 
+ oai:arXiv.org:2411.01361v3 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Zhe Li, Hadrien Reynaud, Bernhard Kainz + http://creativecommons.org/licenses/by/4.0/ + Salma M. Elsherif, Ahmad F. Taha - VoiceCloak: A Multi-Dimensional Defense Framework against Unauthorized Diffusion-based Voice Cloning - https://arxiv.org/abs/2505.12332 - arXiv:2505.12332v5 Announce Type: replace -Abstract: Diffusion Models (DMs) have achieved remarkable success in realistic voice cloning (VC), while they also increase the risk of malicious misuse. Existing proactive defenses designed for traditional VC models aim to disrupt the forgery process, but they have been proven incompatible with DMs due to the intricate generative mechanisms of diffusion. To bridge this gap, we introduce VoiceCloak, a multi-dimensional proactive defense framework with the goal of obfuscating speaker identity and degrading perceptual quality in potential unauthorized VC. To achieve these goals, we conduct a focused analysis to identify specific vulnerabilities within DMs, allowing VoiceCloak to disrupt the cloning process by introducing adversarial perturbations into the reference audio. Specifically, to obfuscate speaker identity, VoiceCloak first targets speaker identity by distorting representation learning embeddings to maximize identity variation, which is guided by auditory perception principles. Additionally, VoiceCloak disrupts crucial conditional guidance processes, particularly attention context, thereby preventing the alignment of vocal characteristics that are essential for achieving convincing cloning. Then, to address the second objective, VoiceCloak introduces score magnitude amplification to actively steer the reverse trajectory away from the generation of high-quality speech. Noise-guided semantic corruption is further employed to disrupt structural speech semantics captured by DMs, degrading output quality. 
Extensive experiments highlight VoiceCloak's outstanding defense success rate against unauthorized diffusion-based voice cloning. Audio samples of VoiceCloak are available at https://voice-cloak.github.io/VoiceCloak/. - oai:arXiv.org:2505.12332v5 - cs.SD - cs.AI + Adversarial-Robustness-Guided Graph Pruning + https://arxiv.org/abs/2411.12331 + arXiv:2411.12331v2 Announce Type: replace +Abstract: Graph learning plays a central role in many data mining and machine learning tasks, such as manifold learning, data representation and analysis, dimensionality reduction, clustering, and visualization. In this work, we propose a highly scalable, adversarial-robustness-guided graph pruning framework for learning graph topologies from data. By performing a spectral adversarial robustness evaluation, our method aims to learn sparse, undirected graphs that help the underlying algorithms resist noise and adversarial perturbations. In particular, we explicitly identify and prune edges that are most vulnerable to adversarial attacks. We use spectral clustering, one of the most representative graph-based machine learning algorithms, to evaluate the proposed framework. Compared with prior state-of-the-art graph learning approaches, the proposed method is more scalable and significantly improves both the computational efficiency and the solution quality of spectral clustering. + oai:arXiv.org:2411.12331v2 cs.CV - cs.MM - eess.AS - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qianyue Hu, Junyan Wu, Wei Lu, Xiangyang Luo + Yongyu Wang - EEG-to-Text Translation: A Model for Deciphering Human Brain Activity - https://arxiv.org/abs/2505.13936 - arXiv:2505.13936v2 Announce Type: replace -Abstract: With the rapid advancement of large language models like Gemini, GPT, and others, bridging the gap between the human brain and language processing has become an important area of focus. 
To address this challenge, researchers have developed various models to decode EEG signals into text. However, these models still face significant performance limitations. To overcome these shortcomings, we propose a new model, R1 Translator, which aims to improve the performance of EEG-to-text decoding. The R1 Translator model combines a bidirectional LSTM encoder with a pretrained transformer-based decoder, utilizing EEG features to produce high-quality text outputs. The model processes EEG embeddings through the LSTM to capture sequential dependencies, which are then fed into the transformer decoder for effective text generation. The R1 Translator excels in ROUGE metrics, outperforming both T5 (previous research) and Brain Translator. Specifically, R1 achieves a ROUGE-1 score of 38.00% (P), which is up to 9% higher than T5 (34.89%) and 3% better than Brain (35.69%). It also leads in ROUGE-L, with a F1 score of 32.51%, outperforming T5 by 3% (29.67%) and Brain by 2% (30.38%). In terms of CER, R1 achieves a CER of 0.5795, which is 2% lower than T5 (0.5917) and 4% lower than Brain (0.6001). Additionally, R1 performs better in WER with a score of 0.7280, outperforming T5 by 4.3% (0.7610) and Brain by 3.6% (0.7553). Code is available at https://github.com/Mmurrad/EEG-To-text. - oai:arXiv.org:2505.13936v2 + Understanding World or Predicting Future? A Comprehensive Survey of World Models + https://arxiv.org/abs/2411.14499 + arXiv:2411.14499v4 Announce Type: replace +Abstract: The concept of world models has garnered significant attention due to advancements in multimodal large language models such as GPT-4 and video generation models such as Sora, which are central to the pursuit of artificial general intelligence. This survey offers a comprehensive review of the literature on world models. Generally, world models are regarded as tools for either understanding the present state of the world or predicting its future dynamics. 
This review presents a systematic categorization of world models, emphasizing two primary functions: (1) constructing internal representations to understand the mechanisms of the world, and (2) predicting future states to simulate and guide decision-making. Initially, we examine the current progress in these two categories. We then explore the application of world models in key domains, including generative games, autonomous driving, robotics, and social simulacra, with a focus on how each domain utilizes these aspects. Finally, we outline key challenges and provide insights into potential future research directions. We summarize the representative papers along with their code repositories in https://github.com/tsinghua-fib-lab/World-Model. + oai:arXiv.org:2411.14499v4 cs.CL cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Saydul Akbar Murad, Ashim Dahal, Nick Rahimi + Jingtao Ding, Yunke Zhang, Yu Shang, Jie Feng, Yuheng Zhang, Zefang Zong, Yuan Yuan, Hongyuan Su, Nian Li, Jinghua Piao, Yucheng Deng, Nicholas Sukiennik, Chen Gao, Fengli Xu, Yong Li - Sonnet: Spectral Operator Neural Network for Multivariable Time Series Forecasting - https://arxiv.org/abs/2505.15312 - arXiv:2505.15312v2 Announce Type: replace -Abstract: Multivariable time series forecasting methods can integrate information from exogenous variables, leading to significant prediction accuracy gains. The transformer architecture has been widely applied in various time series forecasting models due to its ability to capture long-range sequential dependencies. However, a na\"ive application of transformers often struggles to effectively model complex relationships among variables over time. To mitigate against this, we propose a novel architecture, termed Spectral Operator Neural Network (Sonnet). 
Sonnet applies learnable wavelet transformations to the input and incorporates spectral analysis using the Koopman operator. Its predictive skill relies on the Multivariable Coherence Attention (MVCA), an operation that leverages spectral coherence to model variable dependencies. Our empirical analysis shows that Sonnet yields the best performance on $34$ out of $47$ forecasting tasks with an average mean absolute error (MAE) reduction of $2.2\%$ against the most competitive baseline. We further show that MVCA can remedy the deficiencies of na\"ive attention in various deep learning models, reducing MAE by $10.7\%$ on average in the most challenging forecasting tasks. - oai:arXiv.org:2505.15312v2 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + RELOCATE: A Simple Training-Free Baseline for Visual Query Localization Using Region-Based Representations + https://arxiv.org/abs/2412.01826 + arXiv:2412.01826v2 Announce Type: replace +Abstract: We present RELOCATE, a simple training-free baseline designed to perform the challenging task of visual query localization in long videos. To eliminate the need for task-specific training and efficiently handle long videos, RELOCATE leverages a region-based representation derived from pretrained vision models. At a high level, it follows the classic object localization approach: (1) identify all objects in each video frame, (2) compare the objects with the given query and select the most similar ones, and (3) perform bidirectional tracking to get a spatio-temporal response. However, we propose some key enhancements to handle small objects, cluttered scenes, partial visibility, and varying appearances. Notably, we refine the selected objects for accurate localization and generate additional visual queries to capture visual variations. 
We evaluate RELOCATE on the challenging Ego4D Visual Query 2D Localization dataset, establishing a new baseline that outperforms prior task-specific methods by 49% (relative improvement) in spatio-temporal average precision. + oai:arXiv.org:2412.01826v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Yuxuan Shu, Vasileios Lampos + Savya Khosla, Sethuraman T V, Alexander Schwing, Derek Hoiem - Make LVLMs Focus: Context-Aware Attention Modulation for Better Multimodal In-Context Learning - https://arxiv.org/abs/2505.17097 - arXiv:2505.17097v3 Announce Type: replace -Abstract: Multimodal in-context learning (ICL) is becoming a key capability that allows large vision-language models (LVLMs) to adapt to novel tasks without parameter updates, which expands their usefulness in many real-world applications. However, ICL performance remains unstable even when the in-context demonstrations (ICDs) are well matched, showing that LVLMs still struggle to make full use of the provided context. While existing work mainly focuses on prompt engineering or post-hoc logit calibration, we study the attention mechanisms inside LVLMs to address their inherent limitations. We identify two important weaknesses in their self-attention that hinder effective ICL. To address these weaknesses, we propose \textbf{Context-Aware Modulated Attention} (CAMA), a training-free and plug-and-play method that dynamically adjusts attention logits based on the input in-context sequence. CAMA uses a two-stage modulation process that strengthens attention to semantically important tokens, especially visual ones. Across four LVLMs and seven benchmarks, CAMA consistently outperforms vanilla models and baselines, showing clear effectiveness and generalization. It can also activate the intended benefits of prompt engineering methods and remains robust across different sequence configurations. 
Therefore, CAMA opens up new directions for improving multimodal reasoning through a deeper understanding of attention dynamics. - oai:arXiv.org:2505.17097v3 - cs.CV + Transparent and Coherent Procedural Mistake Detection + https://arxiv.org/abs/2412.11927 + arXiv:2412.11927v5 Announce Type: replace +Abstract: Procedural mistake detection (PMD) is a challenging problem of classifying whether a human user (observed through egocentric video) has successfully executed a task (specified by a procedural text). Despite significant recent efforts, machine performance in the wild remains nonviable, and the reasoning processes underlying this performance are opaque. As such, we extend PMD to require generating visual self-dialog rationales to inform decisions. Given the impressive, mature image understanding capabilities observed in recent vision-and-language models (VLMs), we curate a suitable benchmark dataset for PMD based on individual frames. As our reformulation enables unprecedented transparency, we leverage a natural language inference (NLI) model to formulate two automated metrics for the coherence of generated rationales. We establish baselines for this reframed task, showing that VLMs struggle off-the-shelf, but with some trade-offs, their accuracy, coherence, and efficiency can be improved by incorporating these metrics into common inference and fine-tuning methods. Lastly, our multi-faceted metrics visualize common outcomes, highlighting areas for further improvement. + oai:arXiv.org:2412.11927v5 + cs.AI cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yanshu Li, Jianjiang Yang, Ziteng Yang, Bozheng Li, Ligong Han, Hongyang He, Zhengtao Yao, Yingjie Victor Chen, Songlin Fei, Dongfang Liu, Ruixiang Tang + Shane Storks, Itamar Bar-Yossef, Yayuan Li, Zheyuan Zhang, Jason J. 
Corso, Joyce Chai - Understanding the Implicit Regularization of Gradient Descent in Over-parameterized Models - https://arxiv.org/abs/2505.17304 - arXiv:2505.17304v2 Announce Type: replace -Abstract: Implicit regularization refers to the tendency of local search algorithms to converge to low-dimensional solutions, even when such structures are not explicitly enforced. Despite its ubiquity, the mechanism underlying this behavior remains poorly understood, particularly in over-parameterized settings. We analyze gradient descent dynamics and identify three conditions under which it converges to second-order stationary points within an implicit low-dimensional region: (i) suitable initialization, (ii) efficient escape from saddle points, and (iii) sustained proximity to the region. We show that these can be achieved through infinitesimal perturbations and a small deviation rate. Building on this, we introduce Infinitesimally Perturbed Gradient Descent (IPGD), which satisfies these conditions under mild assumptions. We provide theoretical guarantees for IPGD in over-parameterized matrix sensing and empirical evidence of its broader applicability. - oai:arXiv.org:2505.17304v2 - cs.LG - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + Flexible realizations existence: NP-completeness on sparse graphs and algorithms + https://arxiv.org/abs/2412.13721 + arXiv:2412.13721v2 Announce Type: replace +Abstract: One of the questions in Rigidity Theory is whether a realization of the vertices of a graph in the plane is flexible, namely, if it allows a continuous deformation preserving the edge lengths. A flexible realization of a connected graph in the plane exists if and only if the graph has a NAC-coloring, which is a surjective edge coloring by two colors such that for each cycle, either all the edges have the same color, or there are at least two edges of each color. 
The question whether a graph has a NAC-coloring, and hence also the existence of a flexible realization, has been proven to be NP-complete. We show that this question is also NP-complete on graphs with maximum degree five and on graphs with the average degree at most $4+\varepsilon$ for every fixed $\varepsilon >0$. We also show that NAC-colorings can be counted in linear time for graphs with bounded treewidth. Since the only existing implementation of checking the existence of a NAC-coloring is rather naive, we propose new algorithms along with their implementation, which is significantly faster. We also focus on searching all NAC-colorings of a graph, since they provide useful information about its possible flexible realizations. + oai:arXiv.org:2412.13721v2 + cs.CG + math.CO + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jianhao Ma, Geyu Liang, Salar Fattahi + http://creativecommons.org/licenses/by/4.0/ + Petr La\v{s}tovi\v{c}ka, Jan Legersk\'y - The Missing Point in Vision Transformers for Universal Image Segmentation - https://arxiv.org/abs/2505.19795 - arXiv:2505.19795v2 Announce Type: replace -Abstract: Image segmentation remains a challenging task in computer vision, demanding robust mask generation and precise classification. Recent mask-based approaches yield high-quality masks by capturing global context. However, accurately classifying these masks, especially in the presence of ambiguous boundaries and imbalanced class distributions, remains an open challenge. In this work, we introduce ViT-P, a novel two-stage segmentation framework that decouples mask generation from classification. The first stage employs a proposal generator to produce class-agnostic mask proposals, while the second stage utilizes a point-based classification model built on the Vision Transformer (ViT) to refine predictions by focusing on mask central points. 
ViT-P serves as a pre-training-free adapter, allowing the integration of various pre-trained vision transformers without modifying their architecture, ensuring adaptability to dense prediction tasks. Furthermore, we demonstrate that coarse and bounding box annotations can effectively enhance classification without requiring additional training on fine annotation datasets, reducing annotation costs while maintaining strong performance. Extensive experiments across COCO, ADE20K, and Cityscapes datasets validate the effectiveness of ViT-P, achieving state-of-the-art results with 54.0 PQ on ADE20K panoptic segmentation, 87.4 mIoU on Cityscapes semantic segmentation, and 63.6 mIoU on ADE20K semantic segmentation. The code and pretrained models are available at: https://github.com/sajjad-sh33/ViT-P.
 - oai:arXiv.org:2505.19795v2
 - cs.CV
 - cs.AI
 - cs.LG
 - eess.IV
 - Wed, 10 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Sajjad Shahabodini, Mobina Mansoori, Farnoush Bayatmakou, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi
 
 
 Condor: A Code Discriminator Integrating General Semantics with Code Details
 + https://arxiv.org/abs/2412.17429
 + arXiv:2412.17429v2 Announce Type: replace
+Abstract: LLMs demonstrate significant potential across various software engineering tasks. However, they still face challenges in generating correct code on the first attempt when addressing complex requirements. Introducing a discriminator to select reliable outputs from multiple generated results is an effective way to enhance their reliability and stability. Currently, these discriminators fall into two categories: execution-based discriminators and non-execution-based discriminators. Execution-based discriminators face flexibility challenges due to difficulties in obtaining test cases and security concerns, while non-execution-based discriminators, although more flexible, struggle to capture subtle differences in code details. To maintain flexibility while improving the model's ability to capture fine-grained code details, this paper proposes Condor. 
We first design contrastive learning to optimize the code representations of the base model, enabling it to reflect differences in code details. Then, we leverage intermediate data from the code modification process to further enrich the discriminator's training data, enhancing its ability to discern code details. Experimental results indicate that on the subtle code difference dataset (i.e., CodeNanoFix), Condor significantly outperforms other discriminators in discriminative performance: Condor (1.3B) improves the discriminative F1 score of DeepSeek-Coder (1.3B) from 67% to 73%. In discriminating LLM-generated outputs, Condor (1.3B) and Condor (110M) raise the Pass@1 score of Meta-Llama-3.1-Instruct (70B) on the CodeNanoFix dataset from 52.64% to 62.63% and 59.64%, respectively. Moreover, Condor demonstrates strong generalization capabilities on the APPS, MBPP, and LiveCodeBench datasets. For example, Condor (1.3B) improves the Pass@1 of Meta-Llama-3.1-Instruct (70B) on the APPS dataset by 147.05%. + oai:arXiv.org:2412.17429v2 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Sajjad Shahabodini, Mobina Mansoori, Farnoush Bayatmakou, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi + Qingyuan Liang, Zhao Zhang, Chen Liu, Zeyu Sun, Wenjie Zhang, Yizhou Chen, Zixiao Zhao, Qi Luo, Wentao Wang, Yanjie Jiang, Yingfei Xiong, Lu Zhang - Prot2Token: A Unified Framework for Protein Modeling via Next-Token Prediction - https://arxiv.org/abs/2505.20589 - arXiv:2505.20589v2 Announce Type: replace -Abstract: The diverse nature of protein prediction tasks has traditionally necessitated specialized models, hindering the development of broadly applicable and computationally efficient Protein Language Models (PLMs). 
In this work, we introduce Prot2Token, a unified framework that overcomes these challenges by converting a wide spectrum of protein-related predictions-from sequence-level properties and residue-specific attributes to complex inter-protein interactions-into a standardized next-token prediction format. At its core, Prot2Token employs an autoregressive decoder, conditioned on embeddings from pre-trained protein encoders and guided by learnable task tokens, to perform diverse predictions. This architecture uniquely facilitates multi-task learning, enabling general-purpose decoders to generalize across five distinct categories. We present extensive experimental validation across a variety of benchmarks, demonstrating Prot2Token's predictive power in different types of protein-prediction tasks. In 3D structure prediction, Prot2Token delivers substantial speedups (up to 1000x faster than AlphaFold2 with MSA on the same hardware) while, across other numerous tasks, matching or surpassing specialized methods. Beyond that, we introduce an auxiliary self-supervised decoder pre-training approach to improve spatially sensitive task performance. Prot2Token thus offers a step towards standardizing biological prediction into a generative interface, promising to accelerate biological discovery and the development of novel therapeutics. The code is available at https://github.com/mahdip72/prot2token . - oai:arXiv.org:2505.20589v2 - cs.LG - q-bio.QM - Wed, 10 Dec 2025 00:00:00 -0500 + Numerical analysis of a stabilized scheme for an optimal control problem governed by a parabolic convection--diffusion equation + https://arxiv.org/abs/2412.21070 + arXiv:2412.21070v3 Announce Type: replace +Abstract: We consider an optimal control problem on a bounded domain $\Omega\subset\mathbb{R}^2,$ governed by a parabolic convection--diffusion--reaction equation with pointwise control constraints. 
We follow the optimize--then--discretize approach, in which the state and co-state variables are discretized using the piecewise linear finite element method. For stabilization, we apply the algebraic flux correction method. Temporal discretization is performed using the backward Euler method. The discrete control variable is obtained by projecting the discretized adjoint state onto the set of admissible controls. The resulting stabilized fully--discrete scheme is nonlinear and a fixed point argument is used to prove its existence and uniqueness under a mild condition between the time step $k$ and the mesh size $h,$ e.g., $k = \mathcal{O}(h).$ Furthermore, assuming sufficient regularity of the exact solution, we derive error estimates in the $L^{2}$ and energy norms with respect to the spatial variable, and in the $\ell^\infty$ norm with respect to time for the state and co-state variables. For the control variable, we also derive an $L^{2}$-norm error estimate with respect to space and an $\ell^\infty$-norm estimate in time. Finally, we present numerical experiments that validate the order of convergence of the stabilized fully--discrete scheme based on the algebraic flux correction method. We also test the stabilized fully--discrete scheme in optimal control problems that are governed by a convection--dominated equation where the solution possesses interior layers. 
+ oai:arXiv.org:2412.21070v3 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-nd/4.0/ - Mahdi Pourmirzaei, Farzaneh Esmaili, Salhuldin Alqarghuli, Mohammadreza Pourmirzaei, Ye Han, Kai Chen, Mohsen Rezaei, Duolin Wang, Dong Xu + Christos Pervolianakis - OMNIGUARD: An Efficient Approach for AI Safety Moderation Across Languages and Modalities - https://arxiv.org/abs/2505.23856 - arXiv:2505.23856v2 Announce Type: replace -Abstract: The emerging capabilities of large language models (LLMs) have sparked concerns about their immediate potential for harmful misuse. The core approach to mitigate these concerns is the detection of harmful queries to the model. Current detection approaches are fallible, and are particularly susceptible to attacks that exploit mismatched generalization of model capabilities (e.g., prompts in low-resource languages or prompts provided in non-text modalities such as image and audio). To tackle this challenge, we propose Omniguard, an approach for detecting harmful prompts across languages and modalities. Our approach (i) identifies internal representations of an LLM/MLLM that are aligned across languages or modalities and then (ii) uses them to build a language-agnostic or modality-agnostic classifier for detecting harmful prompts. Omniguard improves harmful prompt classification accuracy by 11.57\% over the strongest baseline in a multilingual setting, by 20.44\% for image-based prompts, and sets a new SOTA for audio-based prompts. By repurposing embeddings computed during generation, Omniguard is also very efficient ($\approx\!120 \times$ faster than the next fastest baseline). Code and data are available at: https://github.com/vsahil/OmniGuard. 
- oai:arXiv.org:2505.23856v2
 - cs.CL
 - cs.AI
 - cs.HC
 - cs.LG
 - Wed, 10 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Sahil Verma, Keegan Hines, Jeff Bilmes, Charlotte Siska, Luke Zettlemoyer, Hila Gonen, Chandan Singh
 
 
 FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models
 + https://arxiv.org/abs/2501.02968
 + arXiv:2501.02968v4 Announce Type: replace
+Abstract: Retrieval-Augmented Generation (RAG) enriches LLMs by dynamically retrieving external knowledge, reducing hallucinations and satisfying real-time information needs. While existing research mainly targets RAG's performance and efficiency, emerging studies highlight critical security concerns. Yet, current adversarial approaches remain limited, mostly addressing white-box scenarios or heuristic black-box attacks without fully investigating vulnerabilities in the retrieval phase. Additionally, prior works mainly focus on factoid Q&A tasks, so their attacks lack complexity and can be easily corrected by advanced LLMs. In this paper, we investigate a more realistic and critical threat scenario: adversarial attacks intended for opinion manipulation against black-box RAG models, particularly on controversial topics. Specifically, we propose FlippedRAG, a transfer-based adversarial attack against black-box RAG systems. We first demonstrate that the underlying retriever of a black-box RAG system can be reverse-engineered, enabling us to train a surrogate retriever. Leveraging the surrogate retriever, we further craft target poisoning triggers, altering very few documents to effectively manipulate both retrieval and subsequent generation. Extensive empirical results show that FlippedRAG substantially outperforms baseline methods, improving the average attack success rate by 16.7%. FlippedRAG achieves on average a 50% directional shift in the opinion polarity of RAG-generated responses, ultimately causing a notable 20% shift in user cognition. 
Furthermore, we evaluate the performance of several potential defensive measures, concluding that existing mitigation strategies remain insufficient against such sophisticated manipulation attacks. These results highlight an urgent need for developing innovative defensive solutions to ensure the security and trustworthiness of RAG systems. + oai:arXiv.org:2501.02968v4 + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Sahil Verma, Keegan Hines, Jeff Bilmes, Charlotte Siska, Luke Zettlemoyer, Hila Gonen, Chandan Singh + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhuo Chen, Yuyang Gong, Jiawei Liu, Miaokun Chen, Haotan Liu, Qikai Cheng, Fan Zhang, Wei Lu, Xiaozhong Liu - 50 Years of Automated Face Recognition - https://arxiv.org/abs/2505.24247 - arXiv:2505.24247v3 Announce Type: replace -Abstract: Over the past five decades, automated face recognition (FR) has progressed from handcrafted geometric and statistical approaches to advanced deep learning architectures that now approach, and in many cases exceed, human performance. This paper traces the historical and technological evolution of FR, encompassing early algorithmic paradigms through to contemporary neural systems trained on extensive real and synthetically generated datasets. We examine pivotal innovations that have driven this progression, including advances in dataset construction, loss function formulation, network architecture design, and feature fusion strategies. Furthermore, we analyze the relationship between data scale, diversity, and model generalization, highlighting how dataset expansion correlates with benchmark performance gains. Recent systems have achieved near-perfect large-scale identification accuracy, with the leading algorithm in the latest NIST FRTE 1:N benchmark reporting a FNIR of 0.15 percent at FPIR of 0.001 on a gallery of over 10 million identities. 
We delineate key open problems and emerging directions, including scalable training, multi-modal fusion, synthetic data, and interpretable recognition frameworks. - oai:arXiv.org:2505.24247v3 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Rule-Based Graph Programs Matching the Time Complexity of Imperative Algorithms + https://arxiv.org/abs/2501.09144 + arXiv:2501.09144v3 Announce Type: replace +Abstract: We report on recent advances in rule-based graph programming, which allow us to match the time complexity of some fundamental imperative graph algorithms. In general, achieving the time complexity of graph algorithms implemented in conventional languages using a rule-based graph-transformation language is challenging due to the cost of graph matching. Previous work demonstrated that with rooted rules, certain algorithms can be implemented in the graph programming language GP 2 such that their runtime matches the time complexity of imperative implementations. However, this required input graphs to have a bounded node degree and (for some algorithms) to be connected. In this paper, we overcome these limitations by enhancing the graph data structure generated by the GP 2 compiler and exploiting the new structure in programs. We present three case studies: the first program checks whether input graphs are connected, the second program checks whether input graphs are acyclic, and the third program solves the single-source shortest-paths problem for graphs with integer edge-weights. The first two programs run in linear time on (possibly disconnected) input graphs with arbitrary node degrees. The third program runs in time $O(nm)$ on arbitrary input graphs, matching the time complexity of imperative implementations of the Bellman-Ford algorithm. For each program, we formally prove its correctness and time complexity, and provide runtime experiments on various graph classes. 
+ oai:arXiv.org:2501.09144v3 + cs.PL + cs.PF + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Minchul Kim, Anil Jain, Xiaoming Liu + http://creativecommons.org/licenses/by/4.0/ + Ziad Ismaili Alaoui, Detlef Plump - Can Slow-thinking LLMs Reason Over Time? Empirical Studies in Time Series Forecasting - https://arxiv.org/abs/2505.24511 - arXiv:2505.24511v3 Announce Type: replace -Abstract: Time series forecasting (TSF) is a fundamental and widely studied task, spanning methods from classical statistical approaches to modern deep learning and multimodal language modeling. Despite their effectiveness, these methods often follow a fast thinking paradigm emphasizing pattern extraction and direct value mapping, while overlooking explicit reasoning over temporal dynamics and contextual dependencies. Meanwhile, emerging slow-thinking LLMs (e.g., ChatGPT-o1, DeepSeek-R1) have demonstrated impressive multi-step reasoning capabilities across diverse domains, suggesting a new opportunity for reframing TSF as a structured reasoning task. This motivates a key question: can slow-thinking LLMs effectively reason over temporal patterns to support time series forecasting, even in zero-shot manner? To investigate this, in this paper, we propose TimeReasoner, an extensive empirical study that formulates TSF as a conditional reasoning task. We design a series of prompting strategies to elicit inference-time reasoning from pretrained slow-thinking LLMs and evaluate their performance across diverse TSF benchmarks. Our findings reveal that slow-thinking LLMs exhibit non-trivial zero-shot forecasting capabilities, especially in capturing high-level trends and contextual shifts. While preliminary, our study surfaces important insights into the reasoning behaviors of LLMs in temporal domains highlighting both their potential and limitations. 
We hope this work catalyzes further research into reasoning-based forecasting paradigms and paves the way toward more interpretable and generalizable TSF frameworks. - oai:arXiv.org:2505.24511v3 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Directional Diffusion-Style Code Editing Pre-training + https://arxiv.org/abs/2501.12079 + arXiv:2501.12079v2 Announce Type: replace +Abstract: Code pre-trained models have shown promising effectiveness in various software engineering tasks. Among these tasks, many tasks are related to software evolution and/or code editing. However, existing code pre-trained models often overlook the real-world code editing data and the evolutionary nature of the editing process. In this paper, to simulate the step-by-step code editing process of human developers, we propose DivoT5, a pre-trained model based on directional diffusion at the data level. In DivoT5, we adopt two categories of pre-training tasks. The first category is mask and denoising tasks augmented with a diffusion direction representing code evolution. That is, we first apply a noising process to the code snippets before evolution, and then ask the pre-training process to restore the snippets with noise into the code snippets after evolution. The second category is tasks aiming to reinforce the evolutionary direction. That is, we first generate various intermediate versions for each pair of snippets before and after evolution, and then ask the pre-training process to transform the intermediate versions into the snippet after evolution for each pair. We evaluate DivoT5 for two code-editing scenarios and one non-editing scenario using five downstream tasks. Given each downstream task, we fine-tune the pre-trained DivoT5 to evaluate its effectiveness. 
Our experimental results show that DivoT5 achieves state-of-the-art (SOTA) performance on most tasks in comparison to models of the same scale (220M), large scale (770M) models in fine-tuning, and billion-scale (6.7B, 8B, ChatGPT) models in few-shot settings. For one code-editing task (i.e., automated code review), DivoT5 pre-trained on top of CodeT5-small (60M) can even outperform CodeT5-base (220M) and other pre-trained models with 220M parameters except for DivoT5 pre-trained on top of CodeT5-base (220M). + oai:arXiv.org:2501.12079v2 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Mingyue Cheng, Jiahao Wang, Daoyu Wang, Xiaoyu Tao, Qi Liu, Enhong Chen + Qingyuan Liang, Zeyu Sun, Qihao Zhu, Junhao Hu, Yifan Zhao, Yizhou Chen, Mingxuan Zhu, Guoqing Wang, Lu Zhang - Bench4KE: Benchmarking Automated Competency Question Generation - https://arxiv.org/abs/2505.24554 - arXiv:2505.24554v3 Announce Type: replace -Abstract: The availability of Large Language Models (LLMs) presents a unique opportunity to reinvigorate research on Knowledge Engineering (KE) automation. This trend is already evident in recent efforts developing LLM-based methods and tools for the automatic generation of Competency Questions (CQs), natural language questions used by ontology engineers to define the functional requirements of an ontology. However, the evaluation of these tools lacks standardization. This undermines the methodological rigor and hinders the replication and comparison of results. To address this gap, we introduce Bench4KE, an extensible API-based benchmarking system for KE automation. The presented release focuses on evaluating tools that generate CQs automatically. Bench4KE provides a curated gold standard consisting of CQ datasets from 17 real-world ontology engineering projects and uses a suite of similarity metrics to assess the quality of the CQs generated. 
We present a comparative analysis of 6 recent CQ generation systems, which are based on LLMs, establishing a baseline for future research. Bench4KE is also designed to accommodate additional KE automation tasks, such as SPARQL query generation, ontology testing and drafting. Code and datasets are publicly available under the Apache 2.0 license. - oai:arXiv.org:2505.24554v3 - cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Spectral Analysis of Diffusion Models with Application to Schedule Design + https://arxiv.org/abs/2502.00180 + arXiv:2502.00180v3 Announce Type: replace +Abstract: Diffusion models (DMs) have emerged as powerful tools for modeling complex data distributions and generating realistic new samples. Over the years, advanced architectures and sampling methods have been developed to make these models practically usable. However, certain synthesis process decisions still rely on heuristics without a solid theoretical foundation. In our work, we offer a novel analysis of the DM's inference process, introducing a comprehensive frequency response perspective. Specifically, by relying on Gaussianity assumption, we present the inference process as a closed-form spectral transfer function, capturing how the generated signal evolves in response to the initial noise. We demonstrate how the proposed analysis can be leveraged to design a noise schedule that aligns effectively with the characteristics of the data. The spectral perspective also provides insights into the underlying dynamics and sheds light on the relationship between spectral properties and noise schedule structure. Our results lead to scheduling curves that are dependent on the spectral content of the data, offering a theoretical justification for some of the heuristics taken by practitioners. 
+ oai:arXiv.org:2502.00180v3 + cs.LG + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Anna Sofia Lippolis, Minh Davide Ragagni, Paolo Ciancarini, Andrea Giovanni Nuzzolese, Valentina Presutti + http://creativecommons.org/licenses/by-sa/4.0/ + Roi Benita, Michael Elad, Joseph Keshet - Curse of Slicing: Why Sliced Mutual Information is a Deceptive Measure of Statistical Dependence - https://arxiv.org/abs/2506.04053 - arXiv:2506.04053v3 Announce Type: replace -Abstract: Sliced Mutual Information (SMI) is widely used as a scalable alternative to mutual information for measuring non-linear statistical dependence. Despite its advantages, such as faster convergence, robustness to high dimensionality, and nullification only under statistical independence, we demonstrate that SMI is highly susceptible to data manipulation and exhibits counterintuitive behavior. Through extensive benchmarking and theoretical analysis, we show that SMI saturates easily, fails to detect increases in statistical dependence, prioritizes redundancy over informative content, and in some cases, performs worse than correlation coefficient. - oai:arXiv.org:2506.04053v3 - cs.LG - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + CardioLive: Empowering Video Streaming with Online Cardiac Monitoring + https://arxiv.org/abs/2502.00702 + arXiv:2502.00702v2 Announce Type: replace +Abstract: Online Cardiac Monitoring (OCM) emerges as a compelling enhancement for the next-generation video streaming platforms. It enables various applications including remote health, online affective computing, and deepfake detection. Yet the physiological information encapsulated in the video streams has been long neglected. In this paper, we present the design and implementation of CardioLive, the first online cardiac monitoring system in video streaming platforms. 
We leverage the naturally co-existing video and audio streams and devise CardioNet, the first audio-visual network to learn the cardiac series. It incorporates multiple unique designs to extract temporal and spectral features, ensuring robust performance under realistic video streaming conditions. To enable the Service-On-Demand online cardiac monitoring, we implement CardioLive as a plug-and-play middleware service and develop systematic solutions to practical issues including changing FPS and unsynchronized streams. Extensive experiments demonstrate the effectiveness of our system. We achieve a Mean Absolute Error (MAE) of 1.79 BPM, outperforming the video-only and audio-only solutions by 69.2% and 81.2%, respectively. Our CardioLive service achieves average throughputs of 115.97 and 98.16 FPS when implemented in Zoom and YouTube. We believe our work opens up new applications for video stream systems. We will release the code soon. + oai:arXiv.org:2502.00702v2 + cs.HC + cs.NI + cs.SD + eess.AS + eess.IV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ Sheng Lyu, Ruiming Huang, Sijie Ji, Yasar Abbas Ur Rehman, Lan Ma, Chenshu Wu - Curse of Slicing: Why Sliced Mutual Information is a Deceptive Measure of Statistical Dependence - https://arxiv.org/abs/2506.04053 - arXiv:2506.04053v3 Announce Type: replace -Abstract: Sliced Mutual Information (SMI) is widely used as a scalable alternative to mutual information for measuring non-linear statistical dependence. Despite its advantages, such as faster convergence, robustness to high dimensionality, and nullification only under statistical independence, we demonstrate that SMI is highly susceptible to data manipulation and exhibits counterintuitive behavior. Through extensive benchmarking and theoretical analysis, we show that SMI saturates easily, fails to detect increases in statistical dependence, prioritizes redundancy over informative content, and in some cases, performs worse than the correlation coefficient. - oai:arXiv.org:2506.04053v3 - cs.LG - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alexander Semenenko, Ivan Butakov, Alexey Frolov, Ivan Oseledets - Enabling Validation for Robust Few-Shot Recognition - https://arxiv.org/abs/2506.04713 - arXiv:2506.04713v3 Announce Type: replace -Abstract: Few-Shot Recognition (FSR) tackles classification tasks by training with minimal task-specific labeled data. Prevailing methods adapt or finetune a pretrained Vision-Language Model (VLM) and augment the scarce training data by retrieving task-relevant but noisy samples from open data sources. The finetuned VLM generalizes decently well to the task-specific in-distribution (ID) test data but struggles with out-of-distribution (OOD) test data. This motivates our study of robust FSR with VLM finetuning. 
The core challenge of FSR is data scarcity, extending beyond limited training data to a complete lack of validation data. We identify a key paradox as a potential solution: repurposing the retrieved open data for validation. As such retrieved data are inherently OOD compared with the task-specific ID training data, finetuned VLMs yield degraded performance on the retrieved data. This causes the validation logic to favor the pretrained model without any finetuning, hindering improvements w.r.t generalization. To resolve this dilemma, we introduce a novel validation strategy that harmonizes performance gain and degradation on the few-shot ID data and the retrieved data, respectively. Our validation enables parameter selection for partial finetuning and checkpoint selection, mitigating overfitting and improving test-data generalization. We unify this strategy with robust learning into a cohesive framework: Validation-Enabled Stage-wise Tuning (VEST). Extensive experiments on the established ImageNet OOD benchmarks show that VEST significantly outperforms existing VLM adaptation methods, achieving state-of-the-art FSR performance on both ID and OOD data. - oai:arXiv.org:2506.04713v3 + Sequence models for continuous cell cycle stage prediction from brightfield images + https://arxiv.org/abs/2502.02182 + arXiv:2502.02182v2 Announce Type: replace +Abstract: Understanding cell cycle dynamics is crucial for studying biological processes such as growth, development and disease progression. While fluorescent protein reporters like the Fucci system allow live monitoring of cell cycle phases, they require genetic engineering and occupy additional fluorescence channels, limiting broader applicability in complex experiments. In this study, we conduct a comprehensive evaluation of deep learning methods for predicting continuous Fucci signals using non-fluorescence brightfield imaging, a widely available label-free modality. 
To that end, we generated a large dataset of 1.3 M images of dividing RPE1 cells with full cell cycle trajectories to quantitatively compare the predictive performance of distinct model categories including single time-frame models, causal state space models and bidirectional transformer models. We show that both causal and transformer-based models significantly outperform single- and fixed frame approaches, enabling the prediction of visually imperceptible transitions like G1/S within 1h resolution. Our findings underscore the importance of sequence models for accurate predictions of cell cycle dynamics and highlight their potential for label-free imaging. + oai:arXiv.org:2502.02182v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Hanxin Wang, Tian Liu, Shu Kong + Louis-Alexandre Leger, Maxine Leonardi, Andrea Salati, Felix Naef, Martin Weigert + + + Revisiting Intermediate-Layer Matching in Knowledge Distillation: Layer-Selection Strategy Doesn't Matter (Much) + https://arxiv.org/abs/2502.04499 + arXiv:2502.04499v2 Announce Type: replace +Abstract: Knowledge distillation (KD) is a popular method of transferring knowledge from a large "teacher" model to a small "student" model. Previous work has explored various layer-selection strategies (e.g., forward matching and in-order random matching) for intermediate-layer matching in KD, where a student layer is forced to resemble a certain teacher layer. In this work, we revisit such layer-selection strategies and observe an intriguing phenomenon that layer-selection strategy does not matter (much) in intermediate-layer matching -- even seemingly nonsensical matching strategies such as reverse matching still result in surprisingly good student performance. We provide an interpretation for this phenomenon by examining the angles between teacher layers viewed from the student's perspective. 
Our work sheds light on KD practice, as layer-selection strategies may not be the main focus of KD system design, and vanilla forward matching works well in most setups. + oai:arXiv.org:2502.04499v2 + cs.LG + cs.AI + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zony Yu, Yuqiao Wen, Lili Mou - Domain-RAG: Retrieval-Guided Compositional Image Generation for Cross-Domain Few-Shot Object Detection - https://arxiv.org/abs/2506.05872 - arXiv:2506.05872v2 Announce Type: replace -Abstract: Cross-Domain Few-Shot Object Detection (CD-FSOD) aims to detect novel objects with only a handful of labeled samples from previously unseen domains. While data augmentation and generative methods have shown promise in few-shot learning, their effectiveness for CD-FSOD remains unclear due to the need for both visual realism and domain alignment. Existing strategies, such as copy-paste augmentation and text-to-image generation, often fail to preserve the correct object category or produce backgrounds coherent with the target domain, making them non-trivial to apply directly to CD-FSOD. To address these challenges, we propose Domain-RAG, a training-free, retrieval-guided compositional image generation framework tailored for CD-FSOD. Domain-RAG consists of three stages: domain-aware background retrieval, domain-guided background generation, and foreground-background composition. Specifically, the input image is first decomposed into foreground and background regions. We then retrieve semantically and stylistically similar images to guide a generative model in synthesizing a new background, conditioned on both the original and retrieved contexts. Finally, the preserved foreground is composed with the newly generated domain-aligned background to form the generated image. 
Without requiring any additional supervision or training, Domain-RAG produces high-quality, domain-consistent samples across diverse tasks, including CD-FSOD, remote sensing FSOD, and camouflaged FSOD. Extensive experiments show consistent improvements over strong baselines and establish new state-of-the-art results. Codes will be released upon acceptance. - oai:arXiv.org:2506.05872v2 + Semantic Data Augmentation Enhanced Invariant Risk Minimization for Medical Image Domain Generalization + https://arxiv.org/abs/2502.05593 + arXiv:2502.05593v2 Announce Type: replace +Abstract: Deep learning has achieved remarkable success in medical image classification. However, its clinical application is often hindered by data heterogeneity caused by variations in scanner vendors, imaging protocols, and operators. Approaches such as invariant risk minimization (IRM) aim to address this challenge of out-of-distribution generalization. For instance, VIRM improves upon IRM by tackling the issue of insufficient feature support overlap, demonstrating promising potential. Nonetheless, these methods face limitations in medical imaging due to the scarcity of annotated data and the inefficiency of augmentation strategies. To address these issues, we propose a novel domain-oriented direction selector to replace the random augmentation strategy used in VIRM. Our method leverages inter-domain covariance as a guider for augmentation direction, guiding data augmentation towards the target domain. This approach effectively reduces domain discrepancies and enhances generalization performance. Experiments on a multi-center diabetic retinopathy dataset demonstrate that our method outperforms state-of-the-art approaches, particularly under limited data conditions and significant domain heterogeneity. 
+ oai:arXiv.org:2502.05593v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yu Li, Xingyu Qiu, Yuqian Fu, Jie Chen, Tianwen Qian, Xu Zheng, Danda Pani Paudel, Yanwei Fu, Xuanjing Huang, Luc Van Gool, Yu-Gang Jiang + Yaoyao Zhu, Xiuding Cai, Yingkai Wang, Yu Yao, Xu Luo, Zhongliang Fu - Schauder Bases for $C[0, 1]$ Using ReLU, Softplus and Two Sigmoidal Functions - https://arxiv.org/abs/2506.07884 - arXiv:2506.07884v2 Announce Type: replace -Abstract: We construct four Schauder bases for the space $C[0,1]$, one using ReLU functions, another using Softplus functions, and two more using sigmoidal versions of the ReLU and Softplus functions. This establishes the existence of a basis using these functions for the first time, and improves on the universal approximation property associated with them. We also show an $O(\frac{1}{n})$ approximation bound based on our ReLU basis, and a negative result on constructing multivariate functions using finite combinations of ReLU functions. - oai:arXiv.org:2506.07884v2 - cs.LG - math.FA - Wed, 10 Dec 2025 00:00:00 -0500 + Generalizing Reduced Rank Extrapolation to Low-Rank Matrix Sequences + https://arxiv.org/abs/2502.09165 + arXiv:2502.09165v3 Announce Type: replace +Abstract: Reduced rank extrapolation (RRE) is an acceleration method typically used to accelerate the iterative solution of nonlinear systems of equations using a fixed-point process. In this context, the iterates are vectors generated from a fixed-point mapping function. However, when considering the iterative solution of large-scale matrix equations, the iterates are low-rank matrices generated from a fixed-point process for which, generally, the mapping function changes in each iteration. To enable acceleration of the iterative solution for these problems, we propose two novel generalizations of RRE. 
First, we show how to effectively compute RRE for sequences of low-rank matrices. Second, we derive a formulation of RRE that is suitable for fixed-point processes for which the mapping function changes each iteration. We demonstrate the potential of the methods on several numerical examples involving the iterative solution of large-scale Lyapunov and Riccati matrix equations. + oai:arXiv.org:2502.09165v3 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Transactions on Machine Learning Research, November 2025 - Anand Ganesh, Babhrubahan Bose, Anand Rajagopalan + Pascal den Boef, Patrick K\"urschner, Xiaobo Liu, Jos Maubach, Jens Saak, Wil Schilders, Jonas Schulze, Nathan van de Wouw - MBTModelGenerator: Automated Reverse Engineering of Test Models from Clickstream Data for Model-Based Testing of Web Applications - https://arxiv.org/abs/2506.08179 - arXiv:2506.08179v2 Announce Type: replace -Abstract: Context: Model-Based Testing (MBT) was first introduced in 1970's, and has the potential to improve efficiency and effectiveness of testing. However, its adoption-especially for web applications-has been hindered by the effort required to manually design MBT models, and keep them updated. -- Objective: Based on the above challenge in a real industrial context, this study introduces an automated approach to reduce that effort by reverse engineering MBT models from clickstream data captured during users' interaction with web applications. -- Method: We have developed and present in this paper an open-source tool, named MBTModelGenerator, which logs user interactions via a lightweight JavaScript module in the front-end, and transmits them to a REST API backend. These interactions are then transformed into directly executable MBT models in the input format of an open-source MBT tool named GraphWalker. 
-- Results: The tool was evaluated on two representative open-source web applications, Spring PetClinic and a Task Manager web app, and is under evaluation in several large-scale industrial testing projects. The generated MBT models accurately reflected user navigation flows and could be executed in the GraphWalker MBT tool without any manual changes. Using the tool has significantly reduced the effort of MBT model design by more than 90%, while still allowing test engineers to inspect and refine the generated models for completeness. -- Conclusion: Our approach facilitates lightweight adoption of MBT by automating model generation, which is the most effort intensive phase of MBT. To ensure correctness and completeness, the generated models should still be reviewed by test engineers -but that effort remains substantially lower than designing MBT models from scratch. The tool is in active industrial use and available as open-source for reuse and further development. - oai:arXiv.org:2506.08179v2 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + Weight Space Representation Learning on Diverse NeRF Architectures + https://arxiv.org/abs/2502.09623 + arXiv:2502.09623v3 Announce Type: replace +Abstract: Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for representing 3D objects and scenes by encoding shape and appearance information into the weights of a neural network. Recent studies have demonstrated that these weights can be used as input for frameworks designed to address deep learning tasks; however, such frameworks require NeRFs to adhere to a specific, predefined architecture. In this paper, we introduce the first framework capable of processing NeRFs with diverse architectures and performing inference on architectures unseen at training time. 
We achieve this by training a Graph Meta-Network within an unsupervised representation learning framework, and show that a contrastive objective is conducive to obtaining an architecture-agnostic latent space. In experiments conducted across 13 NeRF architectures belonging to three families (MLPs, tri-planes, and, for the first time, hash tables), our approach demonstrates robust performance in classification, retrieval, and language tasks involving multiple architectures, even unseen at training time, while also matching or exceeding the results of existing frameworks limited to single architectures. + oai:arXiv.org:2502.09623v3 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Vahid Garousi, Alper Bu\u{g}ra Kele\c{s}, Yunus Balaman, Sasidhar Matta, Zafar Jafarov, Aytan M\"ovs\"umova, Atif Namazov + Francesco Ballerini, Pierluigi Zama Ramirez, Luigi Di Stefano, Samuele Salti - Beyond Formal Semantics for Capabilities and Skills: Model Context Protocol in Manufacturing - https://arxiv.org/abs/2506.11180 - arXiv:2506.11180v2 Announce Type: replace -Abstract: Explicit modeling of capabilities and skills -- whether based on ontologies, Asset Administration Shells, or other technologies -- requires considerable manual effort and often results in representations that are not easily accessible to Large Language Models (LLMs). In this work-in-progress paper, we present an alternative approach based on the recently introduced Model Context Protocol (MCP). MCP allows systems to expose functionality through a standardized interface that is directly consumable by LLM-based agents. We conduct a prototypical evaluation on a laboratory-scale manufacturing system, where resource functions are made available via MCP. A general-purpose LLM is then tasked with planning and executing a multi-step process, including constraint handling and the invocation of resource functions via MCP. 
The results indicate that such an approach can enable flexible industrial automation without relying on explicit semantic models. This work lays the basis for further exploration of external tool integration in LLM-driven production systems. - oai:arXiv.org:2506.11180v2 - cs.SE - cs.AI - cs.ET - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Im2SurfTex: Surface Texture Generation via Neural Backprojection of Multi-View Images + https://arxiv.org/abs/2502.14006 + arXiv:2502.14006v3 Announce Type: replace +Abstract: We present Im2SurfTex, a method that generates textures for input 3D shapes by learning to aggregate multi-view image outputs produced by 2D image diffusion models onto the shapes' texture space. Unlike existing texture generation techniques that use ad hoc backprojection and averaging schemes to blend multiview images into textures, often resulting in texture seams and artifacts, our approach employs a trained neural module to boost texture coherency. The key ingredient of our module is to leverage neural attention and appropriate positional encodings of image pixels based on their corresponding 3D point positions, normals, and surface-aware coordinates as encoded in geodesic distances within surface patches. These encodings capture texture correlations between neighboring surface points, ensuring better texture continuity. Experimental results show that our module improves texture quality, achieving superior performance in high-resolution texture generation. 
+ oai:arXiv.org:2502.14006v3 + cs.GR + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/ETFA65518.2025.11205601 - Luis Miguel Vieira da Silva, Aljosha K\"ocher, Felix Gehlhoff + http://creativecommons.org/licenses/by/4.0/ + Yiangos Georgiou, Marios Loizou, Melinos Averkiou, Evangelos Kalogerakis - PRO-V-R1: Reasoning Enhanced Programming Agent for RTL Verification - https://arxiv.org/abs/2506.12200 - arXiv:2506.12200v4 Announce Type: replace -Abstract: Register-Transfer Level (RTL) verification is a primary bottleneck, consuming 60-70% of development time. While Large Language Models (LLMs) show promise for RTL automation, their performance and research focus have overwhelmingly centered on RTL generation rather than verification. Current methods for RTL verification rely on large scale proprietary models (e.g., GPT-4o) to generate Python-based functional references, incurring a high cost and raising data-privacy risks. To date, an end-to-end open-source solution for autonomous verification remains absent. - We introduce PRO-V-R1, the first trainable open-source agentic framework for autonomous RTL verification. Our contributions are threefold: (1) we design PRO-V sys, a modular agentic system that couples LLM-based reasoning with programmatic tool use for RTL verification; (2) we establish a data construction pipeline that leverages existing RTL datasets to build simulation-validated, expert-level trajectories tailored for supervised fine-tuning (SFT) RTL verification agents; and (3) we implement an efficient reinforcement learning (RL) algorithm that uses verification-specific rewards derived from program-tool feedback to optimize the end-to-end verification workflow. 
Our empirical evaluation demonstrates PRO-V-R1 achieves a 57.7% functional correctness rate and 34.0% in robust fault detection, significantly outperforming the base model's 25.7% and 21.8% (respectively) from the state-of-the-art (SOTA) automatic verification system. This configuration also outperforms large-scale proprietary LLMs in functional correctness and shows comparable robustness for fault detection. - oai:arXiv.org:2506.12200v4 + Research on Enhancing Cloud Computing Network Security using Artificial Intelligence Algorithms + https://arxiv.org/abs/2502.17801 + arXiv:2502.17801v3 Announce Type: replace +Abstract: Cloud computing environments are increasingly vulnerable to security threats such as distributed denial-of-service (DDoS) attacks and SQL injection. Traditional security mechanisms, based on rule matching and feature recognition, struggle to adapt to evolving attack strategies. This paper proposes an adaptive security protection framework leveraging deep learning to construct a multi-layered defense architecture. The proposed system is evaluated in a real-world business environment, achieving a detection accuracy of 97.3%, an average response time of 18 ms, and an availability rate of 99.999%. Experimental results demonstrate that the proposed method significantly enhances detection accuracy, response efficiency, and resource utilization, offering a novel and effective approach to cloud computing security. 
+ oai:arXiv.org:2502.17801v3 + cs.CR cs.AI - cs.AR - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Yujie Zhao, Zhijing Wu, Boqin Yuan, Zhongming Yu, Hejia Zhang, Wentao Ni, Chia-Tung Ho, Haoxing Ren, Jishen Zhao + 10.1109/SCECS65243.2025.11065638 + 2025 International Conference on Sensor-Cloud and Edge Computing System (SCECS) + Yuqing Wang, Xiao Yang - Knowledge Adaptation as Posterior Correction - https://arxiv.org/abs/2506.14262 - arXiv:2506.14262v3 Announce Type: replace -Abstract: Adaptation is the holy grail of intelligence, but even the best AI models lack the adaptability of toddlers. In spite of great progress, little is known about the mechanisms by which machines can learn to adapt as fast as humans and animals. Here, we cast adaptation as `correction' of old posteriors and show that a wide-variety of existing adaptation methods follow this very principle, including those used for continual learning, federated learning, unlearning, and model merging. In all these settings, more accurate posteriors often lead to smaller corrections and can enable faster adaptation. Posterior correction is derived by using the dual representation of the Bayesian Learning Rule of Khan and Rue (2023), where the interference between the old representation and new information is quantified by using the natural-gradient mismatch. We present many examples demonstrating how machines can learn to adapt quickly by using posterior correction. - oai:arXiv.org:2506.14262v3 + Memory Injection Attacks on LLM Agents via Query-Only Interaction + https://arxiv.org/abs/2503.03704 + arXiv:2503.03704v4 Announce Type: replace +Abstract: Agents powered by large language models (LLMs) have demonstrated strong capabilities in a wide range of complex, real-world applications. However, LLM agents with a compromised memory bank may easily produce harmful outputs when the past records retrieved for demonstration are malicious. 
In this paper, we propose a novel Memory INJection Attack, MINJA, without assuming that the attacker can directly modify the memory bank of the agent. The attacker injects malicious records into the memory bank by only interacting with the agent via queries and output observations. These malicious records are designed to elicit a sequence of malicious reasoning steps corresponding to a different target query during the agent's execution of the victim user's query. Specifically, we introduce a sequence of bridging steps to link victim queries to the malicious reasoning steps. During the memory injection, we propose an indication prompt that guides the agent to autonomously generate similar bridging steps, with a progressive shortening strategy that gradually removes the indication prompt, such that the malicious record will be easily retrieved when processing later victim queries. Our extensive experiments across diverse agents demonstrate the effectiveness of MINJA in compromising agent memory. With minimal requirements for execution, MINJA enables any user to influence agent memory, highlighting the risk. + oai:arXiv.org:2503.03704v4 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ Shen Dong, Shaochen Xu, Pengfei He, Yige Li, Jiliang Tang, Tianming Liu, Hui Liu, Zhen Xiang - Mathematical Proof as a Litmus Test: Revealing Failure Modes of Advanced Large Reasoning Models - https://arxiv.org/abs/2506.17114 - arXiv:2506.17114v4 Announce Type: replace -Abstract: Large reasoning models (e.g., R1, o3) have demonstrated remarkable mathematical problem-solving abilities. However, the high accuracy reported for these advanced models on popular datasets, combined with reliance on purely numerical evaluation and potential benchmark leakage, often masks their true reasoning shortcomings. 
To address this, we propose leveraging the inherent rigor and methodological complexity of mathematical proofs as a diagnostic tool to expose these hidden failures. Specifically, we introduce the RFMDataset (Reveal Failure Modes), a collection of 200 diverse mathematical proof problems, and thoroughly evaluate advanced models' performance on it. Our in-depth analysis of their failures uncovers 10 fine-grained error types, which shows fundamental limitations in current large reasoning models: 1) large reasoning models grapple profoundly with mathematical proofs, with some generating entirely correct proofs for less than 20% of problems and failing even on basic ones; 2) models exhibit a diverse spectrum of reasoning failures, prominently demonstrating the lack of guarantees for the correctness and rigor of single-step reasoning; and 3) models show hallucination and incompleteness during the reasoning process. Our findings reveal that models' self-reflection is insufficient to resolve the current logical dilemmas, necessitating formalized and fine-grained logical training. - oai:arXiv.org:2506.17114v4 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification + https://arxiv.org/abs/2503.04463 + arXiv:2503.04463v2 Announce Type: replace +Abstract: The need for interpretability in deep learning has driven interest in counterfactual explanations, which identify minimal changes to an instance that change a model's prediction. Current counterfactual (CF) generation methods require task-specific fine-tuning and produce low-quality text. Large Language Models (LLMs), though effective for high-quality text generation, struggle with label-flipping counterfactuals (i.e., counterfactuals that change the prediction) without fine-tuning. 
We introduce two simple classifier-guided approaches to support counterfactual generation by LLMs, eliminating the need for fine-tuning while preserving the strengths of LLMs. Despite their simplicity, our methods outperform state-of-the-art counterfactual generation methods and are effective across different LLMs, highlighting the benefits of guiding counterfactual generation by LLMs with classifier information. We further show that data augmentation by our generated CFs can improve a classifier's robustness. Our analysis reveals a critical issue in counterfactual generation by LLMs: LLMs rely on parametric knowledge rather than faithfully following the classifier. + oai:arXiv.org:2503.04463v2 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Dadi Guo, Jiayu Liu, Zhiyuan Fan, Zhitao He, Haoran Li, Yuxin Li, Yumeng Wang, Yi R. Fung + http://creativecommons.org/licenses/by-nc-nd/4.0/ + The World Conference on eXplainable Artificial Intelligence 2025 + Van Bach Nguyen, Christin Seifert, J\"org Schl\"otterer - Co-Seg++: Mutual Prompt-Guided Collaborative Learning for Versatile Medical Segmentation - https://arxiv.org/abs/2506.17159 - arXiv:2506.17159v2 Announce Type: replace -Abstract: Medical image analysis is critical yet challenged by the need of jointly segmenting organs or tissues, and numerous instances for anatomical structures and tumor microenvironment analysis. Existing studies typically formulated different segmentation tasks in isolation, which overlooks the fundamental interdependencies between these tasks, leading to suboptimal segmentation performance and insufficient medical image understanding. To address this issue, we propose a Co-Seg++ framework for versatile medical segmentation. Specifically, we introduce a novel co-segmentation paradigm, allowing semantic and instance segmentation tasks to mutually enhance each other. 
We first devise a spatio-sequential prompt encoder (SSP-Encoder) to capture long-range spatial and sequential relationships between segmentation regions and image embeddings as prior spatial constraints. Moreover, we devise a multi-task collaborative decoder (MTC-Decoder) that leverages cross-guidance to strengthen the contextual consistency of both tasks, jointly computing semantic and instance segmentation masks. Extensive experiments on diverse CT and histopathology datasets demonstrate that the proposed Co-Seg++ outperforms state-of-the-art methods in the semantic, instance, and panoptic segmentation of dental anatomical structures, histopathology tissues, and nuclei instances. The source code is available at https://github.com/xq141839/Co-Seg-Plus. - oai:arXiv.org:2506.17159v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Qing Xu, Yuxiang Luo, Wenting Duan, Zhen Chen + Grammar-Based Code Representation: Is It a Worthy Pursuit for LLMs? + https://arxiv.org/abs/2503.05507 + arXiv:2503.05507v2 Announce Type: replace +Abstract: Grammar serves as a cornerstone in programming languages and software engineering, providing frameworks to define the syntactic space and program structure. Existing research demonstrates the effectiveness of grammar-based code representations in small-scale models, showing their ability to reduce syntax errors and enhance performance. However, as language models scale to the billion level or beyond, syntax-level errors become rare, making it unclear whether grammar information still provides performance benefits. To explore this, we develop a series of billion-scale GrammarCoder models, incorporating grammar rules in the code generation process. Experiments on HumanEval (+) and MBPP (+) demonstrate a notable improvement in code generation accuracy. Further analysis shows that grammar-based representations enhance LLMs' ability to discern subtle code differences, reducing semantic errors caused by minor variations. 
These findings suggest that grammar-based code representations remain valuable even in billion-scale models, not only by maintaining syntax correctness but also by improving semantic differentiation. + oai:arXiv.org:2503.05507v2 + cs.PL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Qing Xu, Yuxiang Luo, Wenting Duan, Zhen Chen + Qingyuan Liang, Zhao Zhang, Zeyu Sun, Zheng Lin, Qi Luo, Yueyi Xiao, Yizhou Chen, Yuqun Zhang, Haotian Zhang, Lu Zhang, Bin Chen, Yingfei Xiong - Shrinking the Generation-Verification Gap with Weak Verifiers - https://arxiv.org/abs/2506.18203 - arXiv:2506.18203v2 Announce Type: replace -Abstract: Verifiers can improve language model capabilities by scoring and ranking responses from generated candidates. Currently, high-quality verifiers are either unscalable (e.g., humans) or limited in utility (e.g., tools like Lean). While LM judges and reward models have become broadly useful as general-purpose verifiers, a significant performance gap remains between them and oracle verifiers (verifiers with perfect accuracy). To help close this gap, we introduce Weaver, a framework for designing a strong verifier by combining multiple weak, imperfect verifiers. We find weighted ensembles of verifiers, which typically require learning from labeled data, significantly outperform unweighted combinations due to differences in verifier accuracies. To reduce dependency on labeled data, Weaver leverages weak supervision to estimate each verifier's accuracy and combines outputs into a unified score that better reflects true response quality. However, directly applying weak supervision algorithms poses challenges, including inconsistent verifier output formats and handling low-quality verifiers. Weaver addresses these using dataset statistics to normalize outputs and filter specific verifiers. 
We study Weaver's effectiveness in test-time repeated sampling, where a model generates multiple candidate responses and selects one. Our evaluations show Weaver significantly improves over Pass@1 (i.e., the performance when selecting the first candidate) across reasoning and math tasks, achieving o3-mini-level accuracy with Llama 3.3 70B Instruct as generator, and an ensemble of 70B or smaller judge and reward models as verifiers (87.7% average). This gain mirrors the jump between GPT-4o and o3-mini (69.0% vs. 86.7%), which required extensive finetuning and post-training. To reduce computational costs of verifier ensembles, we train a 400M cross-encoder using Weaver's combined output scores. - oai:arXiv.org:2506.18203v2 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Jon Saad-Falcon, E. Kelly Buchanan, Mayee F. Chen, Tzu-Heng Huang, Brendan McLaughlin, Tanvir Bhathal, Shang Zhu, Ben Athiwaratkun, Frederic Sala, Scott Linderman, Azalia Mirhoseini, Christopher R\'e + Aligning Text to Image in Diffusion Models is Easier Than You Think + https://arxiv.org/abs/2503.08250 + arXiv:2503.08250v5 Announce Type: replace +Abstract: While recent advancements in generative modeling have significantly improved text-image alignment, some residual misalignment between text and image representations still remains. Some approaches address this issue by fine-tuning models in terms of preference optimization, etc., which require tailored datasets. Orthogonal to these methods, we revisit the challenge from the perspective of representation alignment, an approach that has gained popularity with the success of REPresentation Alignment (REPA). We first argue that conventional text-to-image (T2I) diffusion models, typically trained on paired image and text data (i.e., positive pairs) by minimizing score matching or flow matching losses, are suboptimal from the standpoint of representation alignment. Instead, a better alignment can be achieved through contrastive learning that leverages the existing dataset as both positive and negative pairs. 
To enable efficient alignment with pretrained models, we propose SoftREPA, a lightweight contrastive fine-tuning strategy that leverages soft text tokens for representation alignment. This approach improves alignment with minimal computational overhead by adding fewer than 1M trainable parameters to the pretrained model. Our theoretical analysis demonstrates that our method explicitly increases the mutual information between text and image representations, leading to enhanced semantic consistency. Experimental results across text-to-image generation and text-guided image editing tasks validate the effectiveness of our approach in improving the semantic consistency of T2I generative models. + oai:arXiv.org:2503.08250v5 + cs.CV + cs.AI + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ Jaa-Yeon Lee, Byunghee Cha, Jeongsol Kim, Jong Chul Ye - End-to-End Fine-Tuning of 3D Texture Generation using Differentiable Rewards - https://arxiv.org/abs/2506.18331 - arXiv:2506.18331v4 Announce Type: replace -Abstract: While recent 3D generative models can produce high-quality texture images, they often fail to capture human preferences or meet task-specific requirements. Moreover, a core challenge in the 3D texture generation domain is that most existing approaches rely on repeated calls to 2D text-to-image generative models, which lack an inherent understanding of the 3D structure of the input 3D mesh object. To alleviate these issues, we propose an end-to-end differentiable, reinforcement-learning-free framework that embeds human feedback, expressed as differentiable reward functions, directly into the 3D texture synthesis pipeline. 
By back-propagating preference signals through both geometric and appearance modules of the proposed framework, our method generates textures that respect the 3D geometry structure and align with desired criteria. To demonstrate its versatility, we introduce three novel geometry-aware reward functions, which offer a more controllable and interpretable pathway for creating high-quality 3D content from natural language. By conducting qualitative, quantitative, and user-preference evaluations against state-of-the-art methods, we demonstrate that our proposed strategy consistently outperforms existing approaches. Our implementation code is publicly available at: https://github.com/AHHHZ975/Differentiable-Texture-Learning - oai:arXiv.org:2506.18331v4 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Constrained Discrete Diffusion + https://arxiv.org/abs/2503.09790 + arXiv:2503.09790v3 Announce Type: replace +Abstract: Discrete diffusion models are a class of generative models that construct sequences by progressively denoising samples from a categorical noise distribution. Beyond their rapidly growing ability to generate coherent natural language, these models present a new and important opportunity to enforce sequence-level constraints, a capability that current autoregressive models cannot natively provide. This paper capitalizes on this opportunity by introducing Constrained Discrete Diffusion (CDD), a novel integration of differentiable constraint optimization within the diffusion process to ensure adherence to constraints, logic rules, or safety requirements for generated sequences. Unlike conventional text generators that often rely on post-hoc filtering or model retraining for controllable generation, CDD directly imposes constraints into the discrete diffusion sampling process, resulting in a training-free and effective approach. 
Experiments in toxicity-controlled text generation, property-constrained molecule design, and instruction-constrained text completion demonstrate that CDD achieves zero constraint violations in a diverse array of tasks while preserving fluency, novelty, and coherence, outperforming autoregressive and existing discrete diffusion approaches. + oai:arXiv.org:2503.09790v3 + cs.CL + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Michael Cardei, Jacob K Christopher, Thomas Hartvigsen, Bhavya Kailkhura, Ferdinando Fioretto - Towards an Oracle for Binary Decomposition Under Compilation Variance - https://arxiv.org/abs/2506.19425 - arXiv:2506.19425v2 Announce Type: replace -Abstract: Third-Party Library (TPL) detection, which identifies reused libraries in binary code, is critical for software security analysis. At its core, TPL detection depends on binary decomposition-the process of partitioning a monolithic binary into cohesive modules. Existing decomposition methods, whether anchor-based or clustering-based, fundamentally rely on the assumption that reused code exhibits similar function call relationships. However, this assumption is severely undermined by Function Call Graph (FCG) variations introduced by diverse compilation settings, particularly function inlining decisions that drastically alter FCG structures. In this work, we conduct the first systematic empirical study to establish the oracle for optimal binary decomposition under compilation variance. We first develop a labeling method to create precise FCG mappings on a comprehensive dataset compiled with 17 compilers, 6 optimizations, and 4 architectures; then, we identify the minimum semantic-equivalent function regions between FCG variants to derive the ground-truth decomposition. 
This oracle provides the first rigorous evaluation framework that quantitatively assesses decomposition algorithms under compilation variance. Using this oracle, we evaluate existing methods and expose their critical limitations: they either suffer from under-aggregation failure or over-aggregation failure. Our findings reveal that current decomposition techniques are inadequate for robust TPL detection, highlighting the urgent need for compilation-aware approaches. - oai:arXiv.org:2506.19425v2 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ang Jia, He Jiang, Zhilei Ren, Xiaochen Li, Zhipeng Yang, Yaxin Duan, Ming Fan, Ting Liu + MACS: Multi-source Audio-to-image Generation with Contextual Significance and Semantic Alignment + https://arxiv.org/abs/2503.10287 + arXiv:2503.10287v3 Announce Type: replace +Abstract: Propelled by the breakthrough in deep generative models, audio-to-image generation has emerged as a pivotal cross-modal task that converts complex auditory signals into rich visual representations. However, previous works only focus on single-source audio inputs for image generation, ignoring the multi-source characteristic in natural auditory scenes, thus limiting the performance in generating comprehensive visual content. To bridge this gap, we propose a method called MACS to conduct multi-source audio-to-image generation. To the best of our knowledge, this is the first work that explicitly separates multi-source audio to capture the rich audio components before image generation. MACS is a two-stage method. In the first stage, multi-source audio inputs are separated by a weakly supervised method, where the audio and text labels are semantically aligned by casting into a common space using the large pre-trained CLAP model. We introduce a ranking loss to consider the contextual significance of the separated audio signals. In the second stage, effective image generation is achieved by mapping the separated audio signals to the generation condition using only a trainable adapter and an MLP layer. 
We preprocess the LLP dataset as the first full multi-source audio-to-image generation benchmark. The experiments are conducted on multi-source, mixed-source, and single-source audio-to-image generation tasks. The proposed MACS outperforms the current state-of-the-art methods in 17 out of the 21 evaluation indexes on all tasks and delivers superior visual quality. + oai:arXiv.org:2503.10287v3 + cs.SD + cs.CV + cs.GR + eess.AS + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ang Jia, He Jiang, Zhilei Ren, Xiaochen Li, Zhipeng Yang, Yaxin Duan, Ming Fan, Ting Liu + Hao Zhou, Xiaobao Guo, Yuzhe Zhu, Adams Wai-Kin Kong - Elucidated Rolling Diffusion Models for Probabilistic Forecasting of Complex Dynamics - https://arxiv.org/abs/2506.20024 - arXiv:2506.20024v3 Announce Type: replace -Abstract: Diffusion models are a powerful tool for probabilistic forecasting, yet most applications in high-dimensional complex systems predict future states individually. This approach struggles to model complex temporal dependencies and fails to explicitly account for the progressive growth of uncertainty inherent to the systems. While rolling diffusion frameworks, which apply increasing noise to forecasts at longer lead times, have been proposed to address this, their integration with state-of-the-art, high-fidelity diffusion techniques remains a significant challenge. We tackle this problem by introducing Elucidated Rolling Diffusion Models (ERDM), the first framework to successfully unify a rolling forecast structure with the principled, performant design of Elucidated Diffusion Models (EDM). To do this, we adapt the core EDM components-its noise schedule, network preconditioning, and Heun sampler-to the rolling forecast setting. 
The success of this integration is driven by three key contributions: (i) a novel loss weighting scheme that focuses model capacity on the mid-range forecast horizons where determinism gives way to stochasticity; (ii) an efficient initialization strategy using a pre-trained EDM for the initial window; and (iii) a bespoke hybrid sequence architecture for robust spatiotemporal feature extraction under progressive denoising. On 2D Navier-Stokes simulations and ERA5 global weather forecasting at 1.5-degree resolution, ERDM consistently outperforms key diffusion-based baselines, including conditional autoregressive EDM. ERDM offers a flexible and powerful general framework for tackling diffusion-based dynamics forecasting problems where modeling uncertainty propagation is paramount. - oai:arXiv.org:2506.20024v3 - cs.LG + Story of Two GPUs: Characterizing the Resilience of Hopper H100 and Ampere A100 GPUs + https://arxiv.org/abs/2503.11901 + arXiv:2503.11901v4 Announce Type: replace +Abstract: This study characterizes GPU resilience in Delta, a large-scale AI system that consists of 1,056 A100 and H100 GPUs, with over 1,300 petaflops of peak throughput. We used 2.5 years of operational data (11.7 million GPU hours) on GPU errors. Our major findings include: (i) H100 GPU memory resilience is worse than A100 GPU memory, with 3.2x lower per-GPU MTBE for memory errors, (ii) The GPU memory error-recovery mechanisms on H100 GPUs are insufficient to handle the increased memory capacity, (iii) H100 GPUs demonstrate significantly improved GPU hardware resilience over A100 GPUs with respect to critical hardware components, (iv) GPU errors on both A100 and H100 GPUs frequently result in job failures due to the lack of robust recovery mechanisms at the application level, and (v) We project the impact of GPU node availability on larger-scales and find that significant overprovisioning of 5% is necessary to handle GPU failures. 
+ oai:arXiv.org:2503.11901v4 + cs.DC cs.AI - physics.ao-ph - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Advances in Neural Information Processing Systems (NeurIPS), 2025 - Salva R\"uhling Cachay, Miika Aittala, Karsten Kreis, Noah Brenowitz, Arash Vahdat, Morteza Mardani, Rose Yu + 10.1145/3712285.375982 + Shengkun Cui, Archit Patke, Hung Nguyen, Aditya Ranjan, Ziheng Chen, Phuong Cao, Gregory Bauer, Brett Bode, Catello Di Martino, Saurabh Jha, Chandra Narayanaswami, Daby Sow, Zbigniew T. Kalbarczyk, Ravishankar K. Iyer - Towards Explainable Bilingual Multimodal Misinformation Detection and Localization - https://arxiv.org/abs/2506.22930 - arXiv:2506.22930v3 Announce Type: replace -Abstract: The increasing realism of multimodal content has made misinformation more subtle and harder to detect, especially in news media where images are frequently paired with bilingual (e.g., Chinese-English) subtitles. Such content often includes localized image edits and cross-lingual inconsistencies that jointly distort meaning while remaining superficially plausible. We introduce BiMi, a bilingual multimodal framework that jointly performs region-level localization, cross-modal and cross-lingual consistency detection, and natural language explanation for misinformation analysis. To support generalization, BiMi integrates an online retrieval module that supplements model reasoning with up-to-date external context. We further release BiMiBench, a large-scale and comprehensive benchmark constructed by systematically editing real news images and subtitles, comprising 104,000 samples with realistic manipulations across visual and linguistic modalities. To enhance interpretability, we apply Group Relative Policy Optimization (GRPO) to improve explanation quality, marking the first use of GRPO in this domain. 
Extensive experiments demonstrate that BiMi outperforms strong baselines by up to +8.9 in classification accuracy, +15.9 in localization accuracy, and +2.5 in explanation BERTScore, advancing state-of-the-art performance in realistic, multilingual misinformation detection. Code, models, and datasets will be released. - oai:arXiv.org:2506.22930v3 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + A Parametric Family of Polynomial Wavelets for Signal and Image Processing + https://arxiv.org/abs/2503.12403 + arXiv:2503.12403v2 Announce Type: replace +Abstract: This paper investigates the potential applications of a parametric family of polynomial wavelets that has been recently introduced starting from de la Vall\'ee Poussin (VP) interpolation at Chebyshev nodes. Unlike classical wavelets, which are constructed on the real line, these VP wavelets are defined on a bounded interval, offering the advantage of handling boundaries naturally while maintaining computational efficiency. In addition, the structure of these wavelets enables the use of fast algorithms for decomposition and reconstruction. Furthermore, the flexibility offered by a free parameter allows a better control of localized singularities, such as edges in images. On the basis of previous theoretical foundations, we show the effectiveness of the VP wavelets for basic signal denoising and image compression, emphasizing their potential for more advanced signal and image processing tasks. + oai:arXiv.org:2503.12403v2 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yiwei He, Zhenglin Huang, Haiquan Wen, Tianxiao Li, Yi Dong, Hao Fei, Baoyuan Wu, Guangliang Cheng + Mariantonia Cotronei, Woula Themistoclakis, Marc Van Barel - Learning an Ensemble Token from Task-driven Priors in Facial Analysis - https://arxiv.org/abs/2507.01290 - arXiv:2507.01290v3 Announce Type: replace -Abstract: Facial analysis exhibits task-specific feature variations. 
While Convolutional Neural Networks (CNNs) have enabled the fine-grained representation of spatial information, Vision Transformers (ViTs) have facilitated the representation of semantic information at the patch level. While advances in backbone architectures have improved over the past decade, combining high-fidelity models often incurs computational costs from a feature representation perspective. In this work, we introduce KT-Adapter, a novel methodology for learning a knowledge token which enables the integration of high-fidelity feature representation in a computationally efficient manner. Specifically, we propose a robust prior unification learning method that generates a knowledge token within a self-attention mechanism, sharing the mutual information across the pre-trained encoders. This knowledge token approach offers high efficiency with negligible computational cost. Our results show improved performance across facial analysis, with statistically significant enhancements observed in the feature representations. - oai:arXiv.org:2507.01290v3 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sunyong Seo, Semin Kim, Jongha Lee + System Identification Under Multi-rate Sensing Environment + https://arxiv.org/abs/2503.12750 + arXiv:2503.12750v4 Announce Type: replace +Abstract: This paper proposes a system identification algorithm for systems with multi-rate sensors in a discrete-time framework. It is challenging to obtain an accurate mathematical model when the ratios of inputs and outputs are different in the system. A cyclic reformulation-based model for multi-rate systems is formulated, and the multi-rate system can be reduced to a linear time-invariant system to derive the model under the multi-rate sensing environment. The proposed algorithm integrates a cyclic reformulation with a state coordinate transformation of the cycled system to enable precise identification of systems under the multi-rate sensing environment. 
The effectiveness of the proposed system identification method is demonstrated using numerical simulations. + oai:arXiv.org:2503.12750v4 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sunyong Seo, Semin Kim, Jongha Lee + 10.20965/jrm.2025.p1102 + Hiroshi Okajima, Risa Furukawa, Nobutomo Matsunaga - Scalable Differentially Private Sketches under Continual Observation - https://arxiv.org/abs/2507.03361 - arXiv:2507.03361v4 Announce Type: replace -Abstract: Linear sketches are fundamental tools in data stream analytics. They are notable for supporting both approximate frequency queries and heavy hitter detection with bounded trade-offs for error and memory. Importantly, on streams that contain sensitive information, linear sketches can be easily privatized with the injection of a suitable amount of noise. This process is efficient in the single release model, where the output is released only at the end of the stream. In this setting, it suffices to add noise to the sketch once. - In contrast, in the continual observation model, where the output is released at every time-step, fresh noise needs to be added to the sketch before each release. This creates an additional computational overhead. To address this, we introduce Lazy Sketch, a novel differentially private sketching method that employs lazy updates, perturbing and modifying only a small portion of the sketch at each step. Compared to prior work, we reduce the update complexity by a factor of $O(w)$, where $w$ is the width of the sketch. Experiments demonstrate that our method increases throughput by up to 250x over prior work, making continual observation differential privacy practical for high-speed streaming applications. 
- oai:arXiv.org:2507.03361v4 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Human Motion Unlearning + https://arxiv.org/abs/2503.18674 + arXiv:2503.18674v3 Announce Type: replace +Abstract: We introduce Human Motion Unlearning and motivate it through the concrete task of preventing violent 3D motion synthesis, an important safety requirement given that popular text-to-motion datasets (HumanML3D and Motion-X) contain from 7\% to 15\% violent sequences spanning both atomic gestures (e.g., a single punch) and highly compositional actions (e.g., loading and swinging a leg to kick). By focusing on violence unlearning, we demonstrate how removing a challenging, multifaceted concept can serve as a proxy for the broader capability of motion "forgetting." To enable systematic evaluation of Human Motion Unlearning, we establish the first motion unlearning benchmark by automatically filtering HumanML3D and Motion-X datasets to create distinct forget sets (violent motions) and retain sets (safe motions). We introduce evaluation metrics tailored to sequential unlearning, measuring both suppression efficacy and the preservation of realism and smooth transitions. We adapt two state-of-the-art, training-free image unlearning methods (UCE and RECE) to leading text-to-motion architectures (MoMask and BAMM), and propose Latent Code Replacement (LCR), a novel, training-free approach that identifies violent codes in a discrete codebook representation and substitutes them with safe alternatives. Our experiments show that unlearning violent motions is indeed feasible and that acting on latent codes strikes the best trade-off between violence suppression and preserving overall motion quality. This work establishes a foundation for advancing safe motion synthesis across diverse applications. Website: https://www.pinlab.org/hmu. 
+ oai:arXiv.org:2503.18674v3 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Rayne Holland + Edoardo De Matteis, Matteo Migliarini, Alessio Sampieri, Indro Spinelli, Fabio Galasso - Online Makespan Scheduling under Scenarios - https://arxiv.org/abs/2507.04016 - arXiv:2507.04016v2 Announce Type: replace -Abstract: We consider a natural extension of online makespan scheduling on identical parallel machines by introducing scenarios. A scenario is a subset of jobs, and the task of our problem is to find a global assignment of the jobs to machines so that the maximum makespan under a scenario, i.e., the maximum makespan of any schedule restricted to a scenario, is minimized. - For varying values of the number of scenarios and machines, we explore the competitiveness of online algorithms. We prove tight and near-tight bounds, several of which are achieved through novel constructions. In particular, we leverage the interplay between the unit processing time case of our problem and the hypergraph coloring problem both ways: We use hypergraph coloring techniques to steer an adversarial family of instances proving lower bounds, which in turn leads to lower bounds for several variants of online hypergraph coloring. - oai:arXiv.org:2507.04016v2 - cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 + On the numerical stability of sketched GMRES + https://arxiv.org/abs/2503.19086 + arXiv:2503.19086v2 Announce Type: replace +Abstract: We perform a backward stability analysis of preconditioned sketched GMRES [Nakatsukasa and Tropp, SIAM J. Matrix Anal. Appl, 2024] for solving linear systems $Ax=b$, and show that the backward stability at iteration $i$ depends on the conditioning of the Krylov basis $B_{1:i}$ as long as the condition number of $A B_{1:i}$ can be bounded by $1/O(u)$, where $u$ is the unit roundoff. Under this condition, we show that sketched GMRES is backward stable as long as the condition number of $B_{1:i}$ is not too large. 
Under additional assumptions, we then show that the stability of a restarted implementation of sketched GMRES can be independent of the condition number of $B_{1:i}$, and restarted sketched GMRES is backward stable. We also derive sharper bounds that explain why the backward error can be small even in cases when the basis $B_{1:i}$ is very ill-conditioned, which has been observed in the literature but not yet explained theoretically. We present numerical experiments to demonstrate the conclusions of our analysis, and also show that adaptively restarting where appropriate allows us to recover backward stability in sketched GMRES. + oai:arXiv.org:2503.19086v2 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Ekin Ergen + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Liam Burke, Erin Carson, Yuxin Ma - DIVER: Reinforced Diffusion Breaks Imitation Bottlenecks in End-to-End Autonomous Driving - https://arxiv.org/abs/2507.04049 - arXiv:2507.04049v3 Announce Type: replace -Abstract: Most end-to-end autonomous driving methods rely on imitation learning from single expert demonstrations, often leading to conservative and homogeneous behaviors that limit generalization in complex real-world scenarios. In this work, we propose DIVER, an end-to-end driving framework that integrates reinforcement learning with diffusion-based generation to produce diverse and feasible trajectories. At the core of DIVER lies a reinforced diffusion-based generation mechanism. First, the model conditions on map elements and surrounding agents to generate multiple reference trajectories from a single ground-truth trajectory, alleviating the limitations of imitation learning that arise from relying solely on single expert demonstrations. 
Second, reinforcement learning is employed to guide the diffusion process, where reward-based supervision enforces safety and diversity constraints on the generated trajectories, thereby enhancing their practicality and generalization capability. Furthermore, to address the limitations of L2-based open-loop metrics in capturing trajectory diversity, we propose a novel Diversity metric to evaluate the diversity of multi-mode predictions. Extensive experiments on the closed-loop NAVSIM and Bench2Drive benchmarks, as well as the open-loop nuScenes dataset, demonstrate that DIVER significantly improves trajectory diversity, effectively addressing the mode collapse problem inherent in imitation learning. - oai:arXiv.org:2507.04049v3 cs.CV cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Ziying Song, Lin Liu, Hongyu Pan, Bencheng Liao, Mingzhe Guo, Lei Yang, Yongchang Zhang, Shaoqing Xu, Caiyan Jia, Yadan Luo + LENVIZ: A High-Resolution Low-Exposure Night Vision Benchmark Dataset + https://arxiv.org/abs/2503.19804 + arXiv:2503.19804v2 Announce Type: replace +Abstract: Low-light image enhancement is crucial for a myriad of applications, from night vision and surveillance, to autonomous driving. However, due to the inherent limitations that go hand in hand with capturing images in low-illumination environments, the task of enhancing such scenes still presents a formidable challenge. To advance research in this field, we introduce our Low Exposure Night Vision (LENVIZ) Dataset, a comprehensive multi-exposure benchmark dataset for low-light image enhancement comprising over 230K frames showcasing 24K real-world indoor and outdoor scenes, with and without humans. Captured using 3 different camera sensors, LENVIZ offers a wide range of lighting conditions, noise levels, and scene complexities, making it the largest publicly available up-to-4K-resolution benchmark in the field. LENVIZ includes high quality human-generated ground truth, for which each multi-exposure low-light scene has been meticulously curated and edited by expert photographers to ensure optimal image quality. 
Furthermore, we conduct a comprehensive analysis of current state-of-the-art low-light image enhancement techniques on our dataset and highlight potential areas of improvement.
Finally, the paper evaluates numerical sensitivity and finds that models display some responsiveness to the scale of civilian casualties but no detectable sensitivity to the size of the economic shock. - oai:arXiv.org:2507.06277v3 - cs.CY - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + TranSplat: Instant Cross-Scene Object Relighting in Gaussian Splatting via Spherical Harmonic Transfer + https://arxiv.org/abs/2503.22676 + arXiv:2503.22676v4 Announce Type: replace +Abstract: We present TranSplat, a method for fast and accurate object relighting for the 3D Gaussian Splatting (GS) framework when transferring a 3D object from a source GS scene to a target GS scene. TranSplat is based on a theoretical radiance transfer identity for cross-scene relighting of objects with radially symmetric BRDFs that involves only taking simple products of spherical harmonic appearance coefficients of the object, source, and target environment maps without any explicit computation of scene quantities (e.g., the BRDFs themselves). TranSplat is the first method to demonstrate how this theoretical identity may be used to perform relighting within the GS framework, and furthermore, by automatically inferring unknown source and target environment maps directly from the source and target scene GS representations. We evaluated TranSplat on several synthetic and real-world scenes and objects, demonstrating comparable 3D object relighting performance to recent conventional inverse rendering-based GS methods with a fraction of their runtime. While TranSplat is theoretically best-suited for radially symmetric BRDFs, results demonstrate that TranSplat still offers perceptually realistic renderings on real scenes and opens a valuable, lightweight path forward to relighting with the GS framework. 
+ oai:arXiv.org:2503.22676v4 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Maxim Chupilkin + Boyang Yu, Yanlin Jin, Yun He, Akshat Dave, Guha Balakrishnan - FLoRA: An Advanced AI-Powered Engine to Facilitate Hybrid Human-AI Regulated Learning - https://arxiv.org/abs/2507.07362 - arXiv:2507.07362v3 Announce Type: replace -Abstract: Self-Regulated Learning (SRL), defined as learners' ability to systematically plan, monitor, and regulate their learning activities, is crucial for sustained academic achievement and lifelong learning competencies. Emerging AI developments profoundly influence SRL interactions by potentially either diminishing or strengthening learners' opportunities to exercise their own regulatory skills. Recent literature emphasizes a balanced approach termed Hybrid Human-AI Regulated Learning (HHAIRL), in which AI provides targeted, timely scaffolding while preserving the learners' role as active decision-makers and reflective monitors of their learning process. Central to HHAIRL is the integration of adaptive and personalized learning systems; by modelling each learner's knowledge and self-regulation patterns, AI can deliver contextually relevant scaffolds that support learners during all phases of the SRL process. Nevertheless, existing digital tools frequently fall short, lacking adaptability and personalisation, focusing narrowly on isolated SRL phases, and insufficiently supporting meaningful human-AI interactions. In response, this paper introduces the enhanced FLoRA Engine, which incorporates advanced generative AI features and state-of-the-art learning analytics, and grounds in solid educational theories. The FLoRA Engine offers tools such as collaborative writing, multi-agent chatbots, and detailed learning trace logging to support dynamic, adaptive scaffolding of self-regulation tailored to individual needs in real time. 
We further present a summary of several research studies that validate these tools and illustrate how they can be utilized in real-world educational and experimental contexts. These studies demonstrate the effectiveness of the FLoRA Engine in fostering SRL, providing both theoretical insights and practical solutions for the future of AI-enhanced learning contexts.
+ oai:arXiv.org:2504.02316v3 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1016/j.compedu.2025.105527 - Xinyu Li, Tongguang Li, Lixiang Yan, Yuheng Li, Linxuan Zhao, Mladen Rakovi\'c, Inge Molenaar, Dragan Ga\v{s}evi\'c, Yizhou Fan + Yuan Zhou, Shilong Jin, Litao Hua, Wanjun Lv, Haoran Duan, Jungong Han - Compactor: Calibrated Query-Agnostic KV Cache Compression with Approximate Leverage Scores - https://arxiv.org/abs/2507.08143 - arXiv:2507.08143v2 Announce Type: replace -Abstract: Modern Large Language Models (LLMs) are increasingly trained to support very large context windows. We present Compactor, a training-free, query-agnostic KV compression strategy that uses approximate leverage scores to determine token importance. We show that Compactor can achieve the same performance as competing methods while retaining 20% fewer tokens in both synthetic and real-world context tasks, while being more task-robust. We further introduce a procedure for context-calibrated compression: inferring the maximum compression a given context supports before significant performance loss. Using context-calibrated compression, we show that Compactor achieves full KV performance on Longbench while reducing the KV memory burden by 68%, on average. To demonstrate the efficacy and generalizability of our approach, we apply Compactor to 27 synthetic and real-world tasks from RULER and Longbench, with models from both the Qwen 2.5 and Llama 3.1 families. Finally, we release compactor-vllm, an inference engine and suite of optimized Triton kernels designed to efficiently support the sparse, non-contiguous memory access patterns inherent to compressed KV caches. This work demonstrates that Compactor offers a practical, high-performance solution for alleviating the memory bottleneck in modern LLM deployment. 
- oai:arXiv.org:2507.08143v2 - cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + RingMoE: Mixture-of-Modality-Experts Multi-Modal Foundation Models for Universal Remote Sensing Image Interpretation + https://arxiv.org/abs/2504.03166 + arXiv:2504.03166v2 Announce Type: replace +Abstract: The rapid advancement of foundation models has revolutionized visual representation learning in a self-supervised manner. However, their application in remote sensing (RS) remains constrained by a fundamental gap: existing models predominantly handle single or limited modalities, overlooking the inherently multi-modal nature of RS observations. Optical, synthetic aperture radar (SAR), and multi-spectral data offer complementary insights that significantly reduce the inherent ambiguity and uncertainty in single-source analysis. To bridge this gap, we introduce RingMoE, a unified multi-modal RS foundation model with 14.7 billion parameters, pre-trained on 400 million multi-modal RS images from nine satellites. RingMoE incorporates three key innovations: (1) A hierarchical Mixture-of-Experts (MoE) architecture comprising modal-specialized, collaborative, and shared experts, effectively modeling intra-modal knowledge while capturing cross-modal dependencies to mitigate conflicts between modal representations; (2) Physics-informed self-supervised learning, explicitly embedding sensor-specific radiometric characteristics into the pre-training objectives; (3) Dynamic expert pruning, enabling adaptive model compression from 14.7B to 1B parameters while maintaining performance, facilitating efficient deployment in Earth observation applications. Evaluated across 23 benchmarks spanning six key RS tasks (i.e., classification, detection, segmentation, tracking, change detection, and depth estimation), RingMoE outperforms existing foundation models and sets new SOTAs, demonstrating remarkable adaptability from single-modal to multi-modal scenarios. 
Beyond theoretical progress, it has been deployed and trialed in multiple sectors, including emergency response, land management, marine sciences, and urban planning. + oai:arXiv.org:2504.03166v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Vivek Chari, Benjamin Van Durme + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hanbo Bi, Yingchao Feng, Boyuan Tong, Mengyu Wang, Haichen Yu, Yongqiang Mao, Hao Chang, Wenhui Diao, Peijin Wang, Yue Yu, Hanyang Peng, Yehong Zhang, Kun Fu, Xian Sun - 3S-Attack: Spatial, Spectral and Semantic Invisible Backdoor Attack Against DNN Models - https://arxiv.org/abs/2507.10733 - arXiv:2507.10733v2 Announce Type: replace -Abstract: Backdoor attacks implant hidden behaviors into models by poisoning training data or modifying the model directly. These attacks aim to maintain high accuracy on benign inputs while causing misclassification when a specific trigger is present. While existing studies have explored stealthy triggers in spatial and spectral domains, few incorporate the semantic domain. In this paper, we propose 3S-attack, a novel backdoor attack which is stealthy across the spatial, spectral, and semantic domains. The key idea is to exploit the semantic features of benign samples as triggers, using Gradient-weighted Class Activation Mapping (Grad-CAM) and a preliminary model for extraction. Then we embedded the trigger in the spectral domain, followed by pixel-level restrictions in the spatial domain. This process minimizes the distance between poisoned and benign samples, making the attack harder to detect by existing defenses and human inspection. And it exposes a vulnerability at the intersection of robustness and semantic interpretability, revealing that models can be manipulated to act in semantically consistent yet malicious ways. 
Extensive experiments on various datasets, along with theoretical analysis, demonstrate the stealthiness of 3S-attack and highlight the need for stronger defenses to ensure AI security. - oai:arXiv.org:2507.10733v2 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Sobolev-Poincar\'e inequalities for piecewise $W^{1,p}$ functions over general polytopic meshes + https://arxiv.org/abs/2504.03449 + arXiv:2504.03449v2 Announce Type: replace +Abstract: We establish Sobolev-Poincar\'e inequalities for piecewise $W^{1,p}$ functions over families of fairly general polytopic (thence also shape-regular simplicial and Cartesian) meshes in any dimension; amongst others, they cover the case of standard Poincar\'e inequalities for piecewise $W^{1,p}$ functions and can be useful in the analysis of nonconforming finite element discretizations of nonlinear problems. Crucial tools in their derivation are novel Sobolev-trace inequalities and Babu\v{s}ka-Aziz inequalities with mixed boundary conditions. We provide estimates with constants having an explicit dependence on the geometric properties of the domain and the underlying family of polytopic meshes. + oai:arXiv.org:2504.03449v2 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Jianyao Yin, Luca Arnaboldi, Honglong Chen, Pascal Berrang, Mark Ryan + Michele Botti, Lorenzo Mascotto - Left Leaning Models: How AI Evaluates Economic Policy? - https://arxiv.org/abs/2507.15771 - arXiv:2507.15771v2 Announce Type: replace -Abstract: Would artificial intelligence (AI) cut interest rates or adopt conservative monetary policy? Would it deregulate or opt for a more controlled economy? As AI use by economic policymakers, academics, and market participants grows exponentially, it is becoming critical to understand AI preferences over economic policy. However, these preferences are not yet systematically evaluated and remain a black box. 
This paper conducts a conjoint experiment on leading large language models (LLMs) from OpenAI, Anthropic, and Google, asking them to evaluate economic policy under multi-factor constraints. The results are remarkably consistent across models: most LLMs exhibit a strong preference for high growth, low unemployment, and low inequality over traditional macroeconomic concerns such as low inflation and low public debt. Scenario-specific experiments show that LLMs are sensitive to context but still display strong preferences for low unemployment and low inequality even in monetary-policy settings. Numerical sensitivity tests reveal intuitive responses to quantitative changes but also uncover non-linear patterns such as loss aversion.
Under kernel-based regularity assumptions on the leader's cost function, we develop a no-regret algorithm that converges to an $\epsilon$-Stackelberg equilibrium in $O(\sqrt{T})$ rounds. Finally, we validate our approach through a numerical case study on optimal pricing in electric ride-hailing markets. + oai:arXiv.org:2504.03745v3 + eess.SY + cs.GT + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Maxim Chupilkin + 10.1109/MCS.2024.3467648 + 64th IEEE Conference on Decision and Control: CDC 2025 + Anna Maddux, Marko Maljkovic, Nikolas Geroliminis, Maryam Kamgarpour - CLARIFID: Improving Radiology Report Generation by Reinforcing Clinically Accurate Impressions and Enforcing Detailed Findings - https://arxiv.org/abs/2507.17234 - arXiv:2507.17234v3 Announce Type: replace -Abstract: Automatic generation of radiology reports has the potential to alleviate radiologists' significant workload, yet current methods struggle to deliver clinically reliable conclusions. In particular, most prior approaches focus on producing fluent text without effectively ensuring the factual correctness of the reports and often rely on single-view images, limiting diagnostic comprehensiveness. We propose CLARIFID, a novel framework that directly optimizes diagnostic correctness by mirroring the two-step workflow of experts. Specifically, CLARIFID (1) learns the logical flow from Findings to Impression through section-aware pretraining, (2) is fine-tuned with Proximal Policy Optimization in which the CheXbert F1 score of the Impression section serves as the reward, (3) employs controlled decoding that completes "Findings" before synthesizing the "Impression", and (4) fuses multiple chest X-ray views via a vision-transformer-based multi-view encoder. 
During inference, we apply a next-token forcing strategy followed by report-level re-ranking, ensuring that the model first produces a comprehensive "Findings" section before synthesizing the "Impression", thereby preserving coherent clinical reasoning. Experimental results on the MIMIC-CXR dataset demonstrate that our method outperforms existing baselines on clinical efficacy scores.
+ oai:arXiv.org:2504.04639v3 + cs.CC cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + cs.DS + cs.LO + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - 10.1016/j.eswa.2025.130633 - Kyeongkyu Lee, Seonghwan Yoon, Hongki Lim + http://creativecommons.org/licenses/by/4.0/ + Alberto Larrauri - Language Models for Controllable DNA Sequence Design - https://arxiv.org/abs/2507.19523 - arXiv:2507.19523v2 Announce Type: replace -Abstract: We consider controllable DNA sequence design, where sequences are generated by conditioning on specific biological properties. While language models (LMs) such as GPT and BERT have achieved remarkable success in natural language generation, their application to DNA sequence generation remains largely underexplored. In this work, we introduce ATGC-Gen, an Automated Transformer Generator for Controllable Generation, which leverages cross-modal encoding to integrate diverse biological signals. ATGC-Gen is instantiated with both decoder-only and encoder-only transformer architectures, allowing flexible training and generation under either autoregressive or masked recovery objectives. We evaluate ATGC-Gen on representative tasks including promoter and enhancer sequence design, and further introduce a new dataset based on ChIP-Seq experiments for modeling protein binding specificity. Our experiments demonstrate that ATGC-Gen can generate fluent, diverse, and biologically relevant sequences aligned with the desired properties. Compared to prior methods, our model achieves notable improvements in controllability and functional relevance, highlighting the potential of language models in advancing programmable genomic design. The source code is released at (https://github.com/divelab/AIRS/blob/main/OpenBio/ATGC_Gen). 
- oai:arXiv.org:2507.19523v2 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Spatial Polarization Multiplexing: Single-Shot Invisible Shape and Reflectance Recovery + https://arxiv.org/abs/2504.13177 + arXiv:2504.13177v2 Announce Type: replace +Abstract: We propose spatial polarization multiplexing (SPM) for joint sensing of shape and reflectance of a static or dynamic deformable object, which is also invisible to the naked eye. Past structured-light methods are limited to shape acquisition and cannot recover reflectance as they alter scene appearance. Our key idea is to spatially multiplex a polarization pattern to encode the incident ray and also densely sample the reflected light. We derive a quantized polarized light pattern that can be robustly and uniquely decoded from the reflected Angle of Linear Polarization (AoLP) values. It also enables single-shot disentanglement of polarimetric diffuse and specular reflections for accurate BRDF estimation. We achieve this spatial polarization multiplexing (SPM) with a constrained de Bruijn sequence. We validate this novel invisible single-shot shape and reflectance method with real static and dynamic objects. The results demonstrate the effectiveness of SPM for accurate shape and BRDF measurement which opens new avenues of application for 3D sensing thanks to its invisibility and ability to jointly recover the radiometric properties. + oai:arXiv.org:2504.13177v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xingyu Su, Xiner Li, Yuchao Lin, Ziqian Xie, Degui Zhi, Shuiwang Ji + Tomoki Ichikawa, Ryo Kawahara, Ko Nishino - CAPE: A CLIP-Aware Pointing Ensemble of Complementary Heatmap Cues for Embodied Reference Understanding - https://arxiv.org/abs/2507.21888 - arXiv:2507.21888v3 Announce Type: replace -Abstract: We address Embodied Reference Understanding, the task of predicting the object a person in the scene refers to through pointing gesture and language. 
This requires multimodal reasoning over text, visual pointing cues, and scene context, yet existing methods often fail to fully exploit visual disambiguation signals. We also observe that while the referent often aligns with the head-to-fingertip direction, in many cases it aligns more closely with the wrist-to-fingertip direction, making a single-line assumption overly limiting. To address this, we propose a dual-model framework, where one model learns from the head-to-fingertip direction and the other from the wrist-to-fingertip direction. We introduce a Gaussian ray heatmap representation of these lines and use them as input to provide a strong supervisory signal that encourages the model to better attend to pointing cues. To fuse their complementary strengths, we present the CLIP-Aware Pointing Ensemble module, which performs a hybrid ensemble guided by CLIP features. We further incorporate an auxiliary object center prediction head to enhance referent localization. We validate our approach on YouRefIt, achieving 75.0 mAP at 0.25 IoU, alongside state-of-the-art CLIP and C_D scores, and demonstrate its generality on unseen CAESAR and ISL Pointing, showing robust performance across benchmarks. - oai:arXiv.org:2507.21888v3 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + No-Regret Model Predictive Control with Online Learning of Koopman Operators + https://arxiv.org/abs/2504.15805 + arXiv:2504.15805v3 Announce Type: replace +Abstract: We study a problem of simultaneous system identification and model predictive control of nonlinear systems. Particularly, we provide an algorithm for systems with unknown residual dynamics that can be expressed by Koopman operators. Such residual dynamics can model external disturbances and modeling errors, such as wind and wave disturbances to aerial and marine vehicles, or inaccurate model parameters. The algorithm has finite-time near-optimality guarantees and asymptotically converges to the optimal non-causal controller. 
Specifically, the algorithm enjoys sublinear \textit{dynamic regret}, defined herein as the suboptimality against an optimal clairvoyant controller that knows how the unknown dynamics will adapt to its states and actions. To this end, we assume the algorithm is given Koopman observable functions such that the unknown dynamics can be approximated by a linear dynamical system. Then, it employs model predictive control based on the current learned model of the unknown residual dynamics. This model is updated online using least squares in a self-supervised manner based on the data collected while controlling the system. We validate our algorithm in physics-based simulations of a cart-pole system aiming to maintain the pole upright despite inaccurate model parameters. + oai:arXiv.org:2504.15805v3 + eess.SY + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fevziye Irem Eyiokur, Dogucan Yaman, Haz{\i}m Kemal Ekenel, Alexander Waibel + Hongyu Zhou, Vasileios Tzoumas - Rational complex Bezier curves - https://arxiv.org/abs/2507.23485 - arXiv:2507.23485v3 Announce Type: replace -Abstract: In this paper we develop the formalism of rational complex Bezier curves. This framework is a simple extension of the CAD paradigm, since it describes arc of curves in terms of control polygons and weights, which are extended to complex values. One of the major advantages of this extension is that we may make use of two different groups of projective transformations. Besides the group of projective transformations of the real plane, we have the group of complex projective transformations. This allows us to apply useful transformations like the geometric inversion to curves in design. In addition to this, the use of the complex formulation allows to lower the degree of the curves in some cases. 
This can be checked using the resultant of two polynomials and provides a simple formula for determining whether a rational cubic curve is a conic or not. Examples of application of the formalism to classical curves are included. - oai:arXiv.org:2507.23485v3 + Local Convergence Behavior of Extended LOBPCG for Computing Eigenvalues of Hermitian Matrices + https://arxiv.org/abs/2505.08218 + arXiv:2505.08218v3 Announce Type: replace +Abstract: This paper provides a comprehensive and detailed analysis of the local convergence behavior of an extended variation of the locally optimal preconditioned conjugate gradient method (LOBPCG) for computing the extreme eigenvalue of a Hermitian matrix. The convergence rates derived in this work are either obtained for the first time or sharper than those previously established, including those in Ovtchinnikov's work ({\em SIAM J. Numer. Anal.}, 46(5):2567--2592, 2008). The study also extends to generalized problems, including Hermitian matrix polynomials that admit an extended form of the Rayleigh quotient. The new approach used to obtain these rates may also serve as a valuable tool for the convergence analysis of other gradient-type optimization methods. + oai:arXiv.org:2505.08218v3 math.NA - cs.GR cs.NA - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - A. Canton, L. Fernandez-Jambrina, M. J. Vazquez-Gallo + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhechen Shen, Xin Liang - Fine-grained Spatiotemporal Grounding on Egocentric Videos - https://arxiv.org/abs/2508.00518 - arXiv:2508.00518v2 Announce Type: replace -Abstract: Spatiotemporal video grounding aims to localize target entities in videos based on textual queries. While existing research has made significant progress in exocentric videos, the egocentric setting remains relatively underexplored, despite its growing importance in applications such as augmented reality and robotics. 
In this work, we conduct a systematic analysis of the discrepancies between egocentric and exocentric videos, revealing key challenges such as shorter object durations, sparser trajectories, smaller object sizes, and larger positional shifts. To address these challenges, we introduce EgoMask, the first pixel-level benchmark for fine-grained spatiotemporal grounding in egocentric videos. It is constructed by our proposed automatic annotation pipeline, which annotates referring expressions and object masks across short-, medium-, and long-term videos. Additionally, we create EgoMask-Train, a large-scale training dataset to facilitate model development. Experiments demonstrate that the state-of-the-art spatiotemporal grounding models perform poorly on our benchmark EgoMask, but fine-tuning on EgoMask-Train yields significant improvements, while preserving performance on exocentric datasets. Our work thus provides essential resources and insights for advancing egocentric video understanding. Our code is available at https://github.com/LaVi-Lab/EgoMask . - oai:arXiv.org:2508.00518v2 - cs.CV + Revealing economic facts: LLMs know more than they say + https://arxiv.org/abs/2505.08662 + arXiv:2505.08662v2 Announce Type: replace +Abstract: We investigate whether the hidden states of large language models (LLMs) can be used to estimate and impute economic and financial statistics. Focusing on county-level (e.g. unemployment) and firm-level (e.g. total assets) variables, we show that a simple linear model trained on the hidden states of open-source LLMs outperforms the models' text outputs. This suggests that hidden states capture richer economic information than the responses of the LLMs reveal directly. A learning curve analysis indicates that only a few dozen labelled examples are sufficient for training. We also propose a transfer learning method that improves estimation accuracy without requiring any labelled data for the target variable. 
Finally, we demonstrate the practical utility of hidden-state representations in super-resolution and data imputation tasks. + oai:arXiv.org:2505.08662v2 cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + econ.GN + q-fin.EC + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shuo Liang, Yiwu Zhong, Zi-Yuan Hu, Yeyao Tao, Liwei Wang + http://creativecommons.org/licenses/by/4.0/ + Bank of England Staff Working Paper Series, No. 1150 (2025) + Marcus Buckmann, Quynh Anh Nguyen, Edward Hill - PRISM: Lightweight Multivariate Time-Series Classification through Symmetric Multi-Resolution Convolutional Layers - https://arxiv.org/abs/2508.04503 - arXiv:2508.04503v2 Announce Type: replace -Abstract: Multivariate time series classification supports applications from wearable sensing to biomedical monitoring and demands models that can capture both short-term patterns and longer-range temporal dependencies. Despite recent advances, Transformer and CNN models often remain computationally heavy and rely on many parameters. This work presents PRISM(Per-channel Resolution Informed Symmetric Module), a lightweight fully convolutional classifier. Operating in a channel-independent manner, in its early stage it applies a set of multi-resolution symmetric convolutional filters. This symmetry enforces structural constraints inspired by linear-phase FIR filters from classical signal processing, effectively halving the number of learnable parameters within the initial layers while preserving the full receptive field. Across the diverse UEA multivariate time-series archive as well as specific benchmarks in human activity recognition, sleep staging, and biomedical signals, PRISM matches or outperforms state-of-the-art CNN and Transformer models while using significantly fewer parameters and markedly lower computational cost. 
By bringing a principled signal processing prior into a modern neural architecture, PRISM offers an effective and computationally economical solution for multivariate time series classification. - oai:arXiv.org:2508.04503v2 + Sinusoidal Initialization, Time for a New Start + https://arxiv.org/abs/2505.12909 + arXiv:2505.12909v3 Announce Type: replace +Abstract: Initialization plays a critical role in Deep Neural Network training, directly influencing convergence, stability, and generalization. Common approaches such as Glorot and He initializations rely on randomness, which can produce uneven weight distributions across layer connections. In this paper, we introduce the Sinusoidal initialization, a novel deterministic method that employs sinusoidal functions to construct structured weight matrices expressly to improve the spread and balance of weights throughout the network while simultaneously fostering a more uniform, well-conditioned distribution of neuron activation states from the very first forward pass. Because Sinusoidal initialization begins with weights and activations that are already evenly and efficiently utilized, it delivers consistently faster convergence, greater training stability, and higher final accuracy across a wide range of models, including convolutional neural networks, vision transformers, and large language models. On average, our experiments show an increase of 4.9% in final validation accuracy and 20.9% in convergence speed. By replacing randomness with structure, this initialization provides a stronger and more reliable foundation for Deep Learning systems. + oai:arXiv.org:2505.12909v3 cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Federico Zucchi, Thomas Lampert + http://creativecommons.org/licenses/by/4.0/ + NeurIPS (2025) + Alberto Fern\'andez-Hern\'andez, Jose I. Mestre, Manuel F. Dolz, Jose Duato, Enrique S. 
Quintana-Ort\'i - ScamAgents: How AI Agents Can Simulate Human-Level Scam Calls - https://arxiv.org/abs/2508.06457 - arXiv:2508.06457v2 Announce Type: replace -Abstract: Large Language Models (LLMs) have demonstrated impressive fluency and reasoning capabilities, but their potential for misuse has raised growing concern. In this paper, we present ScamAgent, an autonomous multi-turn agent built on top of LLMs, capable of generating highly realistic scam call scripts that simulate real-world fraud scenarios. Unlike prior work focused on single-shot prompt misuse, ScamAgent maintains dialogue memory, adapts dynamically to simulated user responses, and employs deceptive persuasion strategies across conversational turns. We show that current LLM safety guardrails, including refusal mechanisms and content filters, are ineffective against such agent-based threats. Even models with strong prompt-level safeguards can be bypassed when prompts are decomposed, disguised, or delivered incrementally within an agent framework. We further demonstrate the transformation of scam scripts into lifelike voice calls using modern text-to-speech systems, completing a fully automated scam pipeline. Our findings highlight an urgent need for multi-turn safety auditing, agent-level control frameworks, and new methods to detect and disrupt conversational deception powered by generative AI. - oai:arXiv.org:2508.06457v2 - cs.CR + Q${}^2$Forge: Minting Competency Questions and SPARQL Queries for Question-Answering Over Knowledge Graphs + https://arxiv.org/abs/2505.13572 + arXiv:2505.13572v2 Announce Type: replace +Abstract: The SPARQL query language is the standard method to access knowledge graphs (KGs). However, formulating SPARQL queries is a significant challenge for non-expert users, and remains time-consuming for the experienced ones. 
Best practices recommend documenting KGs with competency questions and example queries to contextualise the knowledge they contain and illustrate their potential applications. In practice, however, this is either not the case or the examples are provided in limited numbers. Large Language Models (LLMs) are being used in conversational agents and are proving to be an attractive solution with a wide range of applications, from simple question-answering about common knowledge to generating code in a targeted programming language. However, training and testing these models to produce high-quality SPARQL queries from natural language questions requires substantial datasets of question-query pairs. In this paper, we present Q${}^2$Forge that addresses the challenge of generating new competency questions for a KG and corresponding SPARQL queries. It iteratively validates those queries with human feedback and an LLM as a judge. Q${}^2$Forge is open source, generic, extensible and modular, meaning that the different modules of the application (CQ generation, query generation and query refinement) can be used separately, as an integrated pipeline, or replaced by alternative services. The result is a complete pipeline from competency question formulation to query evaluation, supporting the creation of reference query sets for any target KG. 
+ oai:arXiv.org:2505.13572v2 + cs.DB cs.AI - cs.CL - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Proceedings of Machine Learning Research 299, 2025 Conference on Applied Machine Learning for Information Security - Sanket Badhe + Knowledge Capture Conference 2025 (K-CAP '25), December 10--12, 2025, Dayton, OH, USA, Dec 2025, Dayton, OH, United States + Yousouf Taghzouti (WIMMICS, ICN), Franck Michel (Laboratoire I3S - SPARKS, WIMMICS), Tao Jiang (ICN), Louis-F\'elix Nothias (ICN), Fabien Gandon (WIMMICS, Laboratoire I3S - SPARKS) - CATP: Contextually Adaptive Token Pruning for Efficient and Enhanced Multimodal In-Context Learning - https://arxiv.org/abs/2508.07871 - arXiv:2508.07871v2 Announce Type: replace -Abstract: Modern large vision-language models (LVLMs) convert each input image into a large set of tokens that far outnumber the text tokens. Although this improves visual perception, it also introduces severe image token redundancy. Because image tokens contain sparse information, many contribute little to reasoning but greatly increase inference cost. Recent image token pruning methods address this issue by identifying important tokens and removing the rest. These methods improve efficiency with only small performance drops. However, most of them focus on single-image tasks and overlook multimodal in-context learning (ICL), where redundancy is higher and efficiency is more important. Redundant tokens weaken the advantage of multimodal ICL for rapid domain adaptation and lead to unstable performance. When existing pruning methods are applied in this setting, they cause large accuracy drops, which exposes a clear gap and the need for new approaches. To address this, we propose Contextually Adaptive Token Pruning (CATP), a training-free pruning method designed for multimodal ICL. 
CATP uses two stages of progressive pruning that fully reflect the complex cross-modal interactions in the input sequence. After removing 77.8% of the image tokens, CATP achieves an average performance gain of 0.6% over the vanilla model on four LVLMs and eight benchmarks, clearly outperforming all baselines. At the same time, it improves efficiency by reducing inference latency by an average of 10.78%. CATP strengthens the practical value of multimodal ICL and lays the foundation for future progress in interleaved image-text settings. - oai:arXiv.org:2508.07871v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + C*: A Coverage Path Planning Algorithm for Unknown Environments using Rapidly Covering Graphs + https://arxiv.org/abs/2505.13782 + arXiv:2505.13782v2 Announce Type: replace +Abstract: The paper presents a novel sample-based algorithm, called C*, for real-time coverage path planning (CPP) of unknown environments. C* is built upon the concept of a Rapidly Covering Graph (RCG), which is incrementally constructed during robot navigation via progressive sampling of the search space. By using efficient sampling and pruning techniques, the RCG is constructed to be a minimum-sufficient graph, where its nodes and edges form the potential waypoints and segments of the coverage trajectory, respectively. The RCG tracks the coverage progress, generates the coverage trajectory and helps the robot to escape from the dead-end situations. To minimize coverage time, C* produces the desired back-and-forth coverage pattern, while adapting to the TSP-based optimal coverage of local isolated regions, called coverage holes, which are surrounded by obstacles and covered regions. It is analytically proven that C* provides complete coverage of unknown environments. The algorithmic simplicity and low computational complexity of C* make it easy to implement and suitable for real-time on-board applications. 
The performance of C* is validated by 1) extensive high-fidelity simulations and 2) laboratory experiments using an autonomous robot. C* yields near optimal trajectories, and a comparative evaluation with seven existing CPP methods demonstrates significant improvements in performance in terms of coverage time, number of turns, trajectory length, and overlap ratio, while preventing the formation of coverage holes. Finally, C* is comparatively evaluated on two different CPP applications using 1) energy-constrained robots and 2) multi-robot teams. + oai:arXiv.org:2505.13782v2 + cs.RO + cs.SY + eess.SY + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yanshu Li, Jianjiang Yang, Zhennan Shen, Ligong Han, Haoyan Xu, Ruixiang Tang - - - FedJam: Multimodal Federated Learning Framework for Jamming Detection - https://arxiv.org/abs/2508.09369 - arXiv:2508.09369v2 Announce Type: replace -Abstract: Jamming attacks pose a critical threat to wireless networks, yet existing detection methods remain largely unimodal, centralized and resource-intensive, limiting their performance, scalability, and deployment feasibility, respectively. To address these limitations, we present FedJam, a multimodal Federated Learning (FL) framework for on-device jamming detection and classification. FedJam locally fuses spectrograms and cross-layer network Key Performance Indicators (KPIs) using a lightweight dual-encoder architecture with an integrated fusion module and multimodal projection head, that enables privacy-preserving training and inference without transmitting raw data. We prototype and deploy FedJam on a wireless experimental testbed and evaluate it using the first, over-the-air multimodal dataset comprising synchronized samples across benign and three distinct jamming attack types. 
FedJam outperforms state-of-the-art unimodal baselines by up to 15% in accuracy, while requiring 60% fewer communication rounds to converge, and maintains low resource utilization. Its advantage is especially pronounced in realistic scenarios, where it remains extremely robust under heterogeneous data distributions across devices. - oai:arXiv.org:2508.09369v2 - cs.NI - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Ioannis Panitsas, Iason Ofeidis, Leandros Tassiulas - - - MuSACo: Multimodal Subject-Specific Selection and Adaptation for Expression Recognition with Co-Training - https://arxiv.org/abs/2508.12522 - arXiv:2508.12522v2 Announce Type: replace -Abstract: Personalized expression recognition (ER) involves adapting a machine learning model to subject-specific data for improved recognition of expressions with considerable interpersonal variability. Subject-specific ER can benefit significantly from multi-source domain adaptation (MSDA) methods, where each domain corresponds to a specific subject to improve model accuracy and robustness. Despite promising results, state-of-the-art MSDA approaches often overlook multimodal information or blend sources into a single domain, limiting subject diversity and failing to explicitly capture unique subject-specific characteristics. To address these limitations, we introduce MuSACo, a multimodal subject-specific selection and adaptation method for ER based on co-training. It leverages complementary information across multiple modalities and multiple source domains for subject-specific adaptation. This makes MuSACo particularly relevant for affective computing applications in digital health, such as patient-specific assessment for stress or pain, where subject-level nuances are crucial. 
MuSACo selects source subjects relevant to the target and generates pseudo-labels using the dominant modality for class-aware learning, in conjunction with a class-agnostic loss to learn from less confident target samples. Finally, source features from each modality are aligned, while only confident target features are combined. Experimental results on challenging multimodal ER datasets: BioVid, StressID, and BAH show that MuSACo outperforms UDA (blending) and state-of-the-art MSDA methods. - oai:arXiv.org:2508.12522v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ Muhammad Osama Zeeshan, Natacha Gillet, Alessandro Lameiras Koerich, Marco Pedersoli, Francois Bremond, Eric Granger - DAASH: A Meta-Attack Framework for Synthesizing Effective and Stealthy Adversarial Examples - https://arxiv.org/abs/2508.13309 - arXiv:2508.13309v2 Announce Type: replace -Abstract: Numerous techniques have been proposed for generating adversarial examples in white-box settings under strict Lp-norm constraints. However, such norm-bounded examples often fail to align well with human perception, and only recently have a few methods begun specifically exploring perceptually aligned adversarial examples. Moreover, it remains unclear whether insights from Lp-constrained attacks can be effectively leveraged to improve perceptual efficacy. In this paper, we introduce DAASH, a fully differentiable meta-attack framework that generates effective and perceptually aligned adversarial examples by strategically composing existing Lp-based attack methods. DAASH operates in a multi-stage fashion: at each stage, it aggregates candidate adversarial examples from multiple base attacks using learned, adaptive weights and propagates the result to the next stage. 
A novel meta-loss function guides this process by jointly minimizing misclassification loss and perceptual distortion, enabling the framework to dynamically modulate the contribution of each base attack throughout the stages. We evaluate DAASH on adversarially trained models across CIFAR-10, CIFAR-100, and ImageNet. Despite relying solely on Lp-constrained based methods, DAASH significantly outperforms state-of-the-art perceptual attacks such as AdvAD -- achieving higher attack success rates (e.g., 20.63\% improvement) and superior visual quality, as measured by SSIM, LPIPS, and FID (improvements $\approx$ of 11, 0.015, and 5.7, respectively). Furthermore, DAASH generalizes well to unseen defenses, making it a practical and strong baseline for evaluating robustness without requiring handcrafted adaptive attacks for each new defense. - oai:arXiv.org:2508.13309v2 - cs.CV + Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners + https://arxiv.org/abs/2505.14042 + arXiv:2505.14042v2 Announce Type: replace +Abstract: Adversarial training is one of the most effective adversarial defenses, but it incurs a high computational cost. In this study, we present the first theoretical analysis suggesting that adversarially pretrained transformers can serve as universally robust foundation models -- models that can robustly adapt to diverse downstream tasks with only lightweight tuning. Specifically, we demonstrate that single-layer linear transformers, after adversarial pretraining across a variety of classification tasks, can robustly generalize to unseen classification tasks through in-context learning from clean demonstrations (i.e., without requiring additional adversarial training or examples). This universal robustness stems from the model's ability to adaptively focus on robust features within given tasks. We also show the two open challenges for attaining robustness: accuracy--robustness trade-off and sample-hungry training. 
This study initiates the discussion on the utility of universally robust foundation models. While their training is expensive, the investment would prove worthwhile as downstream tasks can enjoy free adversarial robustness. The code is available at https://github.com/s-kumano/universally-robust-in-context-learner. + oai:arXiv.org:2505.14042v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CV + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Abdullah Al Nomaan Nafi, Habibur Rahaman, Zafaryab Haider, Tanzim Mahfuz, Fnu Suya, Swarup Bhunia, Prabuddha Chakraborty + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki - Incremental Generalized Hybrid A* - https://arxiv.org/abs/2508.13392 - arXiv:2508.13392v3 Announce Type: replace -Abstract: We address the problem of efficiently organizing search over very large trees, which arises in many applications ranging from autonomous driving to aerial vehicles. Here, we are motivated by off-road autonomy, where real-time planning is essential. Classical approaches use graphs of motion primitives and exploit dominance to mitigate the curse of dimensionality and prune expansions efficiently. However, for complex dynamics, repeatedly solving two-point boundary-value problems makes graph construction too slow for fast kinodynamic planning. Hybrid A* (HA*) addressed this challenge by searching over a tree of motion primitives and introducing approximate pruning using a grid-based dominance check. However, choosing the grid resolution is difficult: too coarse risks failure, while too fine leads to excessive expansions and slow planning. We propose Incremental Generalized Hybrid A* (IGHA*), an anytime tree-search framework that dynamically organizes vertex expansions without rigid pruning. IGHA* provably matches or outperforms HA*. 
For both on-road kinematic and off-road kinodynamic planning queries for a car-like robot, variants of IGHA* use 6x fewer expansions to the best solution compared to an optimized version of HA* (HA*M, an internal baseline). In simulated off-road experiments in a high-fidelity simulator, IGHA* outperforms HA*M when both are used in the loop with a model predictive controller. We demonstrate real-time performance both in simulation and on a small-scale off-road vehicle, enabling fast, robust planning under complex dynamics. Website: https://personalrobotics.github.io/IGHAStar/ - oai:arXiv.org:2508.13392v3 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + SGL: A Structured Graphics Language + https://arxiv.org/abs/2505.14690 + arXiv:2505.14690v3 Announce Type: replace +Abstract: This paper introduces SGL, a graphics language that is aesthetically similar to SQL. As a graphical counterpart to SQL, SGL enables specification of statistical graphics within SQL query interfaces. SGL is based on a grammar of graphics that has been customized to support a SQL aesthetic. + This paper presents the fundamental components of the SGL language alongside examples, and describes SGL's underlying grammar of graphics via comparison to its closest predecessor, the layered grammar of graphics. + oai:arXiv.org:2505.14690v3 + cs.PL + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Sidharth Talia, Oren Salzman, Siddhartha Srinivasa + Jon Chapman - FLAIR: Frequency- and Locality-Aware Implicit Neural Representations - https://arxiv.org/abs/2508.13544 - arXiv:2508.13544v5 Announce Type: replace -Abstract: Implicit Neural Representations (INRs) leverage neural networks to map coordinates to corresponding signals, enabling continuous and compact representations. This paradigm has driven significant advances in various vision tasks. 
However, existing INRs lack frequency selectivity and spatial localization, leading to an over-reliance on redundant signal components. Consequently, they exhibit spectral bias, tending to learn low-frequency components early while struggling to capture fine high-frequency details. To address these issues, we propose FLAIR (Frequency- and Locality-Aware Implicit Neural Representations), which incorporates two key innovations. The first is Band-Localized Activation (BLA), a novel activation designed for joint frequency selection and spatial localization under the constraints of the time-frequency uncertainty principle (TFUP). Through structured frequency control and spatially localized responses, BLA effectively mitigates spectral bias and enhances training stability. The second is Wavelet-Energy-Guided Encoding (WEGE), which leverages the discrete wavelet transform to compute energy scores and explicitly guide frequency information to the network, enabling precise frequency selection and adaptive band control. Our method consistently outperforms existing INRs in 2D image representation, as well as 3D shape reconstruction and novel view synthesis. - oai:arXiv.org:2508.13544v5 - cs.CV + Global Convergence for Average Reward Constrained MDPs with Primal-Dual Actor Critic Algorithm + https://arxiv.org/abs/2505.15138 + arXiv:2505.15138v2 Announce Type: replace +Abstract: This paper investigates infinite-horizon average reward Constrained Markov Decision Processes (CMDPs) with general parametrization. We propose a Primal-Dual Natural Actor-Critic algorithm that adeptly manages constraints while ensuring a high convergence rate. In particular, our algorithm achieves global convergence and constraint violation rates of $\tilde{\mathcal{O}}(1/\sqrt{T})$ over a horizon of length $T$ when the mixing time, $\tau_{\mathrm{mix}}$, is known to the learner. 
In absence of knowledge of $\tau_{\mathrm{mix}}$, the achievable rates change to $\tilde{\mathcal{O}}(1/T^{0.5-\epsilon})$ provided that $T \geq \tilde{\mathcal{O}}\left(\tau_{\mathrm{mix}}^{2/\epsilon}\right)$. Our results match the theoretical lower bound for Markov Decision Processes and establish a new benchmark in the theoretical exploration of average reward CMDPs. + oai:arXiv.org:2505.15138v2 + cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sukhun Ko, Seokhyun Yoon, Dahyeon Kye, Kyle Min, Chanho Eom, Jihyong Oh + Yang Xu, Swetha Ganesh, Washim Uddin Mondal, Qinbo Bai, Vaneet Aggarwal - Amortized Bayesian Meta-Learning for Low-Rank Adaptation of Large Language Models - https://arxiv.org/abs/2508.14285 - arXiv:2508.14285v2 Announce Type: replace -Abstract: Fine-tuning large language models (LLMs) with low-rank adaptation (LoRA) is a cost-effective way to incorporate information from a specific dataset. However, it is often unclear how well the fine-tuned LLM will generalize, i.e., how well it will perform on unseen datasets. Methods have been proposed to improve generalization by optimizing in-context prompts, or by using meta-learning to fine-tune LLMs. However, these methods are expensive in memory and computation, requiring either long-context prompts or saving copies of parameters and using second-order gradient updates. To address these challenges, we propose Amortized Bayesian Meta-Learning for LoRA (ABMLL). This method builds on amortized Bayesian meta-learning for smaller models, adapting this approach to LLMs while maintaining its computational efficiency. We reframe task-specific and global parameters in the context of LoRA and use a new hyperparameter to balance reconstruction accuracy and the fidelity of task-specific parameters to the global ones. ABMLL provides effective generalization and scales to large models such as LLAMA3-8B. 
Furthermore, as a result of using a Bayesian framework, ABMLL provides improved uncertainty quantification. We test ABMLL on CrossFit and Unified-QA datasets and find that it outperforms existing methods on these benchmarks in terms of both accuracy and expected calibration error. - oai:arXiv.org:2508.14285v2 + Not All Models Suit Expert Offloading: On Local Routing Consistency of Mixture-of-Expert Models + https://arxiv.org/abs/2505.16056 + arXiv:2505.16056v3 Announce Type: replace +Abstract: Mixture-of-Experts (MoE) enables efficient scaling of large language models (LLMs) with sparsely activated experts during inference. To effectively deploy large MoE models on memory-constrained devices, many systems introduce *expert offloading* that caches a subset of experts in fast memory, leaving others on slow memory to run on CPU or load on demand. While some research has exploited the locality of expert activations, where consecutive tokens activate similar experts, the degree of this **local routing consistency** varies across models and remains understudied. In this paper, we propose two metrics to measure local routing consistency of MoE models: (1) **Segment Routing Best Performance (SRP)**, which evaluates how well a fixed group of experts can cover the needs of a segment of tokens, and (2) **Segment Cache Best Hit Rate (SCH)**, which measures the hit rate of an expert cache utilizing a length of future information under a cache limit. We analyze 20 MoE LLMs with diverse sizes and architectures and use toy models to verify key factors related to local routing consistency. We find a strong trade-off between local routing consistency and *local* load balance, while showing that *global* load balance can coexist with local routing consistency. Meanwhile, settings like shared experts that decrease expert combination space can lead to low local routing consistency. 
We further reveal that domain-specialized experts contribute more to routing consistency than vocabulary-specialized ones, and that most models balance between cache effectiveness and efficiency with cache sizes approximately twice the active experts. These findings pave the way for memory-efficient MoE design and deployment without compromising inference speed. We publish the code for replicating experiments at https://github.com/ljcleo/moe-lrc . + oai:arXiv.org:2505.16056v3 cs.LG cs.AI - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Liyi Zhang, Jake Snell, Thomas L. Griffiths + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Jingcong Liang, Siyuan Wang, Miren Tian, Yitong Li, Duyu Tang, Zhongyu Wei - A statistical test for network similarity - https://arxiv.org/abs/2508.14399 - arXiv:2508.14399v2 Announce Type: replace -Abstract: In this article, we revisit and expand our prior work on graph similarity. As with our earlier work, we focus on a view of similarity which does not require node correspondence between graphs under comparison. Our work is suited to the temporal study of networks, change-point and anomaly detection and simple comparisons of static graphs. It provides a similarity metric for the study of (weakly) connected graphs. Our work proposes a metric designed to compare networks and assess the (dis)similarity between them. For example, given three different graphs with possibly different numbers of nodes, $G_1$, $G_2$ and $G_3$, we aim to answer two questions: a) "How different is $G_1 $ from $G_2$?" and b) "Is graph $G_3$ more similar to $G_1$ or to $G_2$?". We illustrate the value of our test and its accuracy through several new experiments, using synthetic and real-world graphs. 
- oai:arXiv.org:2508.14399v2 - cs.DM - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Make LVLMs Focus: Context-Aware Attention Modulation for Better Multimodal In-Context Learning + https://arxiv.org/abs/2505.17097 + arXiv:2505.17097v4 Announce Type: replace +Abstract: Multimodal in-context learning (ICL) is becoming a key capability that allows large vision-language models (LVLMs) to adapt to novel tasks without parameter updates, which expands their usefulness in many real-world applications. However, ICL performance remains unstable even when the in-context demonstrations (ICDs) are well matched, showing that LVLMs still struggle to make full use of the provided context. While existing work mainly focuses on prompt engineering or post-hoc logit calibration, we study the attention mechanisms inside LVLMs to address their inherent limitations. We identify two important weaknesses in their self-attention that hinder effective ICL. To address these weaknesses, we propose Context-Aware Modulated Attention (CAMA), a training-free and plug-and-play method that dynamically adjusts attention logits based on the input in-context sequence. CAMA uses a two-stage modulation process that strengthens attention to semantically important tokens, especially visual ones. Across four LVLMs and seven benchmarks, CAMA consistently outperforms vanilla models and baselines, showing clear effectiveness and generalization. It can also activate the intended benefits of prompt engineering methods and remains robust across different sequence configurations. Therefore, CAMA opens up new directions for improving multimodal reasoning through a deeper understanding of attention dynamics. + oai:arXiv.org:2505.17097v4 + cs.CV + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Pierre Miasnikof, Alexander Y. 
Shetopaloff + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yanshu Li, Jianjiang Yang, Ziteng Yang, Bozheng Li, Ligong Han, Hongyang He, Zhengtao Yao, Yingjie Victor Chen, Songlin Fei, Dongfang Liu, Ruixiang Tang - SceneGen: Single-Image 3D Scene Generation in One Feedforward Pass - https://arxiv.org/abs/2508.15769 - arXiv:2508.15769v2 Announce Type: replace -Abstract: 3D content generation has recently attracted significant research interest, driven by its critical applications in VR/AR and embodied AI. In this work, we tackle the challenging task of synthesizing multiple 3D assets within a single scene image. Concretely, our contributions are fourfold: (i) we present SceneGen, a novel framework that takes a scene image and corresponding object masks as input, simultaneously producing multiple 3D assets with geometry and texture. Notably, SceneGen operates with no need for extra optimization or asset retrieval; (ii) we introduce a novel feature aggregation module that integrates local and global scene information from visual and geometric encoders within the feature extraction module. Coupled with a position head, this enables the generation of 3D assets and their relative spatial positions in a single feedforward pass; (iii) we demonstrate SceneGen's direct extensibility to multi-image input scenarios. Despite being trained solely on single-image inputs, our architecture yields improved generation performance when multiple images are provided; and (iv) extensive quantitative and qualitative evaluations confirm the efficiency and robustness of our approach. We believe this paradigm offers a novel solution for high-quality 3D content generation, potentially advancing its practical applications in downstream tasks. The code and model will be publicly available at: https://mengmouxu.github.io/SceneGen. 
- oai:arXiv.org:2508.15769v2 - cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + A Network Science Approach to Granular Time Series Segmentation + https://arxiv.org/abs/2505.17640 + arXiv:2505.17640v2 Announce Type: replace +Abstract: Time series segmentation (TSS) is one of the time series (TS) analysis techniques that has received considerably less attention than other TS-related tasks. In recent years, deep learning architectures have been introduced for TSS; however, their reliance on sliding windows limits segmentation granularity due to fixed window sizes and strides. To overcome these challenges, we propose a new, more granular TSS approach that uses the Weighted Dual Perspective Visibility Graph (WDPVG) to transform the TS into a graph and combines it with a Graph Attention Network (GAT). By transforming TS into graphs, we are able to capture different structural aspects of the data that would otherwise remain hidden. By utilizing the representation learning capabilities of Graph Neural Networks, our method is able to effectively identify meaningful segments within the TS. To better understand the potential of our approach, we also experimented with different TS-to-graph transformations and compared their performance. Our contributions include: a) formulating TSS as a node classification problem on graphs; b) conducting an extensive analysis of various TS-to-graph transformations applied to TSS using benchmark datasets from the TSSB repository; c) providing the first detailed study on utilizing GNNs for analyzing graph representations of TS in the context of TSS; d) demonstrating the effectiveness of our method, which achieves an average F1 score of 0.97 across 59 diverse TSS benchmark datasets; e) outperforming the seq2point baseline method by 0.05 in terms of F1 score; and f) reducing the required training data compared to the baseline methods. 
+ oai:arXiv.org:2505.17640v2 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yanxu Meng, Haoning Wu, Ya Zhang, Weidi Xie + Ivana Kesi\'c, Carolina Fortuna, Mihael Mohor\v{c}i\v{c}, Bla\v{z} Bertalani\v{c} - Not Just for Archiving: Provable Benefits of Reusing the Archive in Evolutionary Multi-objective Optimization - https://arxiv.org/abs/2508.16993 - arXiv:2508.16993v2 Announce Type: replace -Abstract: Evolutionary Algorithms (EAs) have become the most popular tool for solving widely-existed multi-objective optimization problems. In Multi-Objective EAs (MOEAs), there is increasing interest in using an archive to store non-dominated solutions generated during the search. This approach can 1) mitigate the effects of population oscillation, a common issue in many MOEAs, and 2) allow for the use of smaller, more practical population sizes. In this paper, we analytically show that the archive can even further help MOEAs through reusing its solutions during the process of new solution generation. We first prove that using a small population size alongside an archive (without incorporating archived solutions in the generation process) may fail on certain problems, as the population may remove previously discovered but promising solutions. We then prove that reusing archive solutions can overcome this limitation, resulting in at least a polynomial speedup on the expected running time. Our analysis focuses on the well-established SMS-EMOA algorithm applied to the commonly studied OneJumpZeroJump problem as well as one of its variants. We also show that reusing archive solutions can be better than using a large population size directly. Finally, we show that our theoretical findings can generally hold in practice by experiments on well-known practical optimization problems -- multi-objective 0-1 Knapsack, TSP, QAP and NK-landscape problems -- with realistic settings. 
- oai:arXiv.org:2508.16993v2 + The emergence of sparse attention: impact of data distribution and benefits of repetition + https://arxiv.org/abs/2505.17863 + arXiv:2505.17863v2 Announce Type: replace +Abstract: Emergence is a fascinating property of large language models and neural networks more broadly: as models scale and train for longer, they sometimes develop new abilities in sudden ways. Despite initial studies, we still lack a comprehensive understanding of how and when these abilities emerge. To address this gap, we study the emergence over training of sparse attention, a critical and frequently observed attention pattern in Transformers. By combining theoretical analysis of a toy model with empirical observations on small Transformers trained on a linear regression variant, we uncover the mechanics driving sparse attention emergence and reveal that emergence timing follows power laws based on task structure, architecture, and optimizer choice. We additionally find that repetition can greatly speed up emergence. Finally, we confirm these results on a well-studied in-context associative recall task. Our findings provide a simple, theoretically grounded framework for understanding how data distributions and model design influence the learning dynamics behind one form of emergence. + oai:arXiv.org:2505.17863v2 + cs.LG cs.NE - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shengjie Ren, Zimin Liang, Miqing Li, Chao Qian + http://creativecommons.org/licenses/by/4.0/ + Nicolas Zucchet, Francesco d'Angelo, Andrew K. Lampinen, Stephanie C. Y. 
Chan - Random-phase Wave Splatting of Translucent Primitives for Computer-generated Holography - https://arxiv.org/abs/2508.17480 - arXiv:2508.17480v2 Announce Type: replace -Abstract: Holographic near-eye displays offer ultra-compact form factors for VR/AR systems but rely on advanced computer-generated holography (CGH) algorithms to convert 3D scenes into interference patterns on spatial light modulators (SLMs). Conventional CGH typically generates smooth-phase holograms, limiting view-dependent effects and realistic defocus blur, while severely under-utilizing the SLM space-bandwidth product. - We propose Random-phase Wave Splatting (RPWS), a unified wave optics rendering framework that converts arbitrary 3D representations based on 2D translucent primitives into random-phase holograms. RPWS is fully compatible with modern 3D representations such as Gaussians and triangles, improves bandwidth utilization which effectively enlarges eyebox size, reconstructs accurate defocus blur and parallax, and leverages time-multiplexed rendering not as a heuristic for speckle suppression, but as a mathematically exact alpha-blending mechanism derived from first principles in statistics. At the core of RPWS are (1) a new wavefront compositing procedure and (2) an alpha-blending scheme for random-phase geometric primitives, ensuring correct color reconstruction and robust occlusion when compositing millions of primitives. - RPWS departs substantially from the recent primitive-based CGH algorithm, Gaussian Wave Splatting (GWS). Because GWS uses smooth-phase primitives, it struggles to capture view-dependent effects and realistic defocus blur and under-utilizes the SLM space-bandwidth product; moreover, naively extending GWS to random-phase primitives fails to reconstruct accurate colors. 
In contrast, RPWS is designed from the ground up for arbitrary random-phase translucent primitives, and through simulations and experimental validations we demonstrate state-of-the-art image quality and perceptually faithful 3D holograms for next-generation near-eye displays. - oai:arXiv.org:2508.17480v2 - cs.GR - cs.AR - eess.IV - eess.SP - physics.optics - Wed, 10 Dec 2025 00:00:00 -0500 + LLM Meeting Decision Trees on Tabular Data + https://arxiv.org/abs/2505.17918 + arXiv:2505.17918v2 Announce Type: replace +Abstract: Tabular data have been playing a vital role in diverse real-world fields, including healthcare, finance, etc. With the recent success of Large Language Models (LLMs), early explorations of extending LLMs to the domain of tabular data have been developed. Most of these LLM-based methods typically first serialize tabular data into natural language descriptions, and then tune LLMs or directly infer on these serialized data. However, these methods suffer from two key inherent issues: (i) data perspective: existing data serialization methods lack universal applicability for structured tabular data, and may pose privacy risks through direct textual exposure; and (ii) model perspective: LLM fine-tuning methods struggle with tabular data, and in-context learning scalability is bottle-necked by input length constraints (suitable for few-shot learning). This work explores a novel direction of integrating LLMs into tabular data through logical decision tree rules as intermediaries, and proposes a decision tree enhancer with LLM-derived rules for tabular prediction, DeLTa. The proposed DeLTa avoids tabular data serialization and can be applied to the full-data learning setting without LLM fine-tuning. Specifically, we leverage the reasoning ability of LLMs to redesign an improved rule given a set of decision tree rules. 
Furthermore, we provide a calibration method for the original decision trees via the new LLM-generated rule, which approximates an error-correction vector to steer the original decision tree predictions in the direction of reducing errors. Finally, extensive experiments on diverse tabular benchmarks show that our method achieves state-of-the-art performance. + oai:arXiv.org:2505.17918v2 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Brian Chao, Jacqueline Yang, Suyeon Choi, Manu Gopakumar, Ryota Koiso, Gordon Wetzstein + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hangting Ye, Jinmeng Li, He Zhao, Dandan Guo, Yi Chang - LLMs Can't Handle Peer Pressure: Crumbling under Multi-Agent Social Interactions - https://arxiv.org/abs/2508.18321 - arXiv:2508.18321v3 Announce Type: replace -Abstract: Large language models (LLMs) are increasingly integrated into multi-agent systems (MAS), where peer interactions shape individual decisions. While prior work has mainly examined conformity bias, we broaden the view to include how LLMs build rapport from prior interactions, discern and integrate high-quality peer information, and resist misleading inputs, abilities essential for achieving collective intelligence under complex social dynamics. We introduce KAIROS, a benchmark that simulates quiz-style collaboration with peer agents whose rapport levels and behaviours can be precisely controlled in both historical interactions and the current round. This unified setup enables systematic analysis of how rapport, peer actions, and the model's self-confidence jointly influence decision-making. Using KAIROS, we evaluate prompting, supervised fine-tuning, and reinforcement learning via Group Relative Policy Optimisation (GRPO). 
Results show that model scale is a primary factor moderating susceptibility to social influence: larger models are more resilient and benefit from prompting-based mitigation, whereas smaller models remain vulnerable. Only carefully configured GRPO training yields consistent robustness and performance gains for small models. - oai:arXiv.org:2508.18321v3 - cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + DISTA-Net: Dynamic Closely-Spaced Infrared Small Target Unmixing + https://arxiv.org/abs/2505.19148 + arXiv:2505.19148v2 Announce Type: replace +Abstract: Resolving closely-spaced small targets in dense clusters presents a significant challenge in infrared imaging, as the overlapping signals hinder precise determination of their quantity, sub-pixel positions, and radiation intensities. While deep learning has advanced the field of infrared small target detection, its application to closely-spaced infrared small targets has not yet been explored. This gap exists primarily due to the complexity of separating superimposed characteristics and the lack of an open-source infrastructure. In this work, we propose the Dynamic Iterative Shrinkage Thresholding Network (DISTA-Net), which reconceptualizes traditional sparse reconstruction within a dynamic framework. DISTA-Net adaptively generates convolution weights and thresholding parameters to tailor the reconstruction process in real time. To the best of our knowledge, DISTA-Net is the first deep learning model designed specifically for the unmixing of closely-spaced infrared small targets, achieving superior sub-pixel detection accuracy. Moreover, we have established the first open-source ecosystem to foster further research in this field. This ecosystem comprises three key components: (1) CSIST-100K, a publicly available benchmark dataset; (2) CSO-mAP, a custom evaluation metric for sub-pixel detection; and (3) GrokCSO, an open-source toolkit featuring DISTA-Net and other models. 
Our code and dataset are available at https://github.com/GrokCV/GrokCSO. + oai:arXiv.org:2505.19148v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Maojia Song, Tej Deep Pala, Ruiwen Zhou, Weisheng Jin, Amir Zadeh, Chuan Li, Dorien Herremans, Soujanya Poria + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shengdong Han, Shangdong Yang, Xin Zhang, Yuxuan Li, Xiang Li, Jian Yang, Ming-Ming Cheng, Yimian Dai - A Unifying Framework for Global Optimization: From Theory to Formalization - https://arxiv.org/abs/2508.20671 - arXiv:2508.20671v2 Announce Type: replace -Abstract: We introduce an abstract measure-theoretic framework that serves as a tool to rigorously study stochastic iterative global optimization algorithms as a unified class. The framework is formulated in terms of probability kernels, which, via the Ionescu--Tulcea theorem, induce probability measures on the space of sequences of algorithm iterations, endowed with two intuitive properties. This framework answers the need for a general, implementation-independent formalism in the analysis of such algorithms, providing a starting point for formalizing general results in proof assistants. To illustrate the relevance of our tool, we show that common algorithms fit naturally in the framework, and we also use it to give a rigorous proof of a general consistency theorem for stochastic iterative global optimization algorithms (Proposition 3 of Malherbe et al., 2017). This proof and the entire framework are formalized in the Lean proof assistant. This formalization both ensures the correctness of the definitions and proofs, and provides a basis for future machine-assisted formalizations in the field. 
- oai:arXiv.org:2508.20671v2 - cs.FL - cs.LO - math.OC - math.PR - Wed, 10 Dec 2025 00:00:00 -0500 + Learning to Infer Parameterized Representations of Plants from 3D Scans + https://arxiv.org/abs/2505.22337 + arXiv:2505.22337v2 Announce Type: replace +Abstract: Plants frequently contain numerous organs, organized in 3D branching systems defining the plant's architecture. Reconstructing the architecture of plants from unstructured observations is challenging because of self-occlusion and spatial proximity between organs, which are often thin structures. To achieve this challenging task, we propose an approach that infers a parameterized representation of the plant's architecture from a given 3D scan of a plant. In addition to the plant's branching structure, this representation contains parametric information for each plant organ, and can therefore be used directly in a variety of tasks. In this data-driven approach, we train a recursive neural network with virtual plants generated using a procedural model. After training, the network can infer a parametric tree-like representation based on an input 3D point cloud. Our method is applicable to any plant that can be represented as a binary axial tree. We quantitatively evaluate our approach on Chenopodium Album plants on reconstruction, segmentation and skeletonization, which are important problems in plant phenotyping. In addition to carrying out several tasks at once, our method achieves results on par with strong baselines for each task. We apply our method, trained exclusively on synthetic data, to 3D scans and show that it generalizes well. 
+ oai:arXiv.org:2505.22337v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ga\"etan Serr\'e (ENS Paris Saclay, CB), Argyris Kalogeratos (CB, ENS Paris Saclay), Nicolas Vayatis (CB, ENS Paris Saclay) + Samara Ghrer, Christophe Godin, Stefanie Wuhrer - Mixture of Contexts for Long Video Generation - https://arxiv.org/abs/2508.21058 - arXiv:2508.21058v3 Announce Type: replace -Abstract: Long video generation is fundamentally a long context memory problem: models must retain and retrieve salient events across a long range without collapsing or drifting. However, scaling diffusion transformers to generate long-context videos is fundamentally limited by the quadratic cost of self-attention, which makes memory and computation intractable and difficult to optimize for long sequences. We recast long-context video generation as an internal information retrieval task and propose a simple, learnable sparse attention routing module, Mixture of Contexts (MoC), as an effective long-term memory retrieval engine. In MoC, each query dynamically selects a few informative chunks plus mandatory anchors (caption, local windows) to attend to, with causal routing that prevents loop closures. As we scale the data and gradually sparsify the routing, the model allocates compute to salient history, preserving identities, actions, and scenes over minutes of content. Efficiency follows as a byproduct of retrieval (near-linear scaling), which enables practical training and synthesis, and the emergence of memory and consistency at the scale of minutes. - oai:arXiv.org:2508.21058v3 - cs.GR - cs.AI + Do You See Me : A Multidimensional Benchmark for Evaluating Visual Perception in Multimodal LLMs + https://arxiv.org/abs/2506.02022 + arXiv:2506.02022v2 Announce Type: replace +Abstract: Multimodal Large Language Models (MLLMs) show reasoning promise, yet their visual perception is a critical bottleneck. 
Strikingly, MLLMs can produce correct answers even while misinterpreting crucial visual elements, masking these underlying failures. Our preliminary study on a joint perception-reasoning dataset revealed that for one leading MLLM, 29% of its correct answers to reasoning questions still exhibited visual perception errors. To systematically address this, we introduce "Do You See Me", a scalable benchmark with 1,758 images and 2,612 questions. It spans seven human-psychology inspired subtasks in 2D and 3D, featuring controllable complexity to rigorously evaluate MLLM visual skills. Our findings on 3 leading closed-source and 5 major open-source models reveal a stark deficit: humans achieve 96.49% accuracy, while top MLLMs average below 50%. This performance gap widens rapidly with increased task complexity (e.g., from 12% to 45% in the visual form constancy subtask). Further analysis into the root causes suggests that failures stem from challenges like misallocated visual attention and the instability of internal representations for fine-grained details, especially at or below encoder patch resolution. This underscores an urgent need for MLLMs with truly robust visual perception. The benchmark dataset, source code and evaluation scripts are available at https://github.com/microsoft/Do-You-See-Me. + oai:arXiv.org:2506.02022v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-sa/4.0/ - Shengqu Cai, Ceyuan Yang, Lvmin Zhang, Yuwei Guo, Junfei Xiao, Ziyan Yang, Yinghao Xu, Zhenheng Yang, Alan Yuille, Leonidas Guibas, Maneesh Agrawala, Lu Jiang, Gordon Wetzstein + Aditya Kanade, Tanuja Ganu - Locus: Agentic Predicate Synthesis for Directed Fuzzing - https://arxiv.org/abs/2508.21302 - arXiv:2508.21302v3 Announce Type: replace -Abstract: Directed fuzzing aims to find program inputs that lead to specified target program states. 
It has broad applications, such as debugging system crashes, confirming reported bugs, and generating exploits for potential vulnerabilities. This task is inherently challenging because target states are often deeply nested in the program, while the search space manifested by numerous possible program inputs is prohibitively large. Existing approaches rely on branch distances or manually-specified constraints to guide the search; however, the branches alone are often insufficient to precisely characterize progress toward reaching the target states, while the manually specified constraints are often tailored for specific bug types and thus difficult to generalize to diverse target states and programs. - We present Locus, a novel framework to improve the efficiency of directed fuzzing. Our key insight is to synthesize predicates to capture fuzzing progress as semantically meaningful intermediate states, serving as milestones towards reaching the target states. When used to instrument the program under fuzzing, they can reject executions unlikely to reach the target states, while providing additional coverage guidance. To automate this task and generalize to diverse programs, Locus features an agentic framework with program analysis tools to synthesize and iteratively refine the candidate predicates, while ensuring the predicates strictly relax the target states to prevent false rejections via symbolic execution. Our evaluation shows that Locus substantially improves the efficiency of eight state-of-the-art fuzzers in discovering real-world vulnerabilities, achieving an average speedup of 41.6x. So far, Locus has found nine previously unpatched bugs, with three already acknowledged with draft patches. 
- oai:arXiv.org:2508.21302v3 - cs.CR - cs.AI - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + An $O(\log \log n)$-approximate budget feasible mechanism for subadditive valuations + https://arxiv.org/abs/2506.04665 + arXiv:2506.04665v5 Announce Type: replace +Abstract: In budget-feasible mechanism design, there is a set of items $U$, each owned by a distinct seller. The seller of item $e$ incurs a private cost $\overline{c}_e$ for supplying her item. A buyer wishes to procure a set of items from the sellers of maximum value, where the value of a set $S\subseteq U$ of items is given by a valuation function $v:2^U\to \mathbb{R}_+$. The buyer has a budget of $B \in \mathbb{R}_+$ for the total payments made to the sellers. We wish to design a mechanism that is truthful, that is, sellers are incentivized to report their true costs, budget-feasible, that is, the sum of the payments made to the sellers is at most the budget $B$, and that outputs a set whose value is large compared to $\text{OPT}:=\max\{v(S):\overline{c}(S)\le B,S\subseteq U\}$. + Budget-feasible mechanism design has been extensively studied, with the literature focussing on (classes of) subadditive valuation functions, and various polytime, budget-feasible mechanisms, achieving constant-factor approximation, have been devised for the special cases of additive, submodular, and XOS valuations. However, for general subadditive valuations, the best-known approximation factor achievable by a polytime budget-feasible mechanism (given access to demand oracles) was only $O(\log n / \log \log n)$, where $n$ is the number of items. + We improve this state-of-the-art significantly by designing a randomized budget-feasible mechanism for subadditive valuations that achieves a substantially-improved approximation factor of $O(\log\log n)$ and runs in polynomial time, given access to demand oracles. 
+ oai:arXiv.org:2506.04665v5 + cs.GT + cs.DM + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - 10.1145/3744916.3773102 - Jie Zhu, Chihao Shen, Ziyang Li, Jiahao Yu, Yizheng Chen, Kexin Pei + Rian Neogi, Kanstantsin Pashkovich, Chaitanya Swamy - AoI-based Scheduling of Correlated Sources for Timely Inference - https://arxiv.org/abs/2509.01926 - arXiv:2509.01926v2 Announce Type: replace -Abstract: We investigate a real-time remote inference system where multiple correlated sources transmit observations over a communication channel to a receiver. The receiver utilizes these observations to infer multiple time-varying targets. Due to limited communication resources, the delivered observations may not be fresh. To quantify data freshness, we employ the Age of Information (AoI) metric. To minimize the inference error, we aim to design a signal-agnostic scheduling policy that leverages AoI without requiring knowledge of the actual target values or the source observations. This scheduling problem is a restless multi-armed bandit (RMAB) problem with a non-separable penalty function. Unlike traditional RMABs, the correlation among sources introduces a unique challenge: the penalty function of each source depends on the AoI of other correlated sources, preventing the problem from decomposing into multiple independent Markov Decision Processes (MDPs), a key step in applying traditional RMAB solutions. To address this, we propose a novel approach that approximates the penalty function for each source and establishes an analytical bound on the approximation error. We then develop scheduling policies for two scenarios: (i) full knowledge of the penalty functions and (ii) no knowledge of the penalty functions. For the case of known penalty functions, we present an upper bound on the optimality gap that highlights the impact of the correlation parameter and the system size. 
For the case of unknown penalty functions and signal distributions, we develop an online learning approach that utilizes bandit feedback to learn an online Maximum Gain First policy. Simulation results demonstrate the effectiveness of our proposed policies in minimizing inference error and achieving scalability in the number of sources. - oai:arXiv.org:2509.01926v2 - cs.NI - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + A Framework for Controllable Multi-objective Learning with Annealed Stein Variational Hypernetworks + https://arxiv.org/abs/2506.06715 + arXiv:2506.06715v3 Announce Type: replace +Abstract: Pareto Set Learning (PSL) is popular as an efficient approach to obtaining the complete optimal solution in Multi-objective Learning (MOL). A set of optimal solutions approximates the Pareto set, and its mapping is a set of dense points in the Pareto front in objective space. However, some current methods face a challenge: how to make the Pareto solution is diverse while maximizing the hypervolume value. In this paper, we propose a novel method to address this challenge, which employs Stein Variational Gradient Descent (SVGD) to approximate the entire Pareto set. SVGD pushes a set of particles towards the Pareto set by applying a form of functional gradient descent, which helps to converge and diversify optimal solutions. Additionally, we employ diverse gradient direction strategies to thoroughly investigate a unified framework for SVGD in multi-objective optimization and adapt this framework with an annealing schedule to promote stability. We introduce our method, SVH-MOL, and validate its effectiveness through extensive experiments on multi-objective problems and multi-task learning, demonstrating its superior performance. + oai:arXiv.org:2506.06715v3 + cs.LG + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Md Kamran Chowdhury Shisher, Vishrant Tripathi, Mung Chiang, Christopher G. 
Brinton + Minh-Duc Nguyen, Dung D. Le - Learning to Deliberate: Meta-policy Collaboration for Agentic LLMs with Multi-agent Reinforcement Learning - https://arxiv.org/abs/2509.03817 - arXiv:2509.03817v2 Announce Type: replace -Abstract: Multi-agent systems of large language models (LLMs) show promise for complex reasoning, but their effectiveness is often limited by fixed collaboration protocols. These frameworks typically focus on macro-level orchestration while overlooking agents' internal deliberative capabilities. This critical meta-cognitive blindspot treats agents as passive executors unable to adapt their strategy based on internal cognitive states like uncertainty or confidence. We introduce the Meta-Policy Deliberation Framework (MPDF), where agents learn a decentralized policy over a set of high-level meta-cognitive actions: Persist, Refine, and Concede. To overcome the instability of traditional policy gradients in this setting, we develop SoftRankPO, a novel reinforcement learning algorithm. SoftRankPO stabilizes training by shaping advantages based on the rank of rewards mapped through smooth normal quantiles, making the learning process robust to reward variance. Experiments show that MPDF with SoftRankPO achieves a 4-5% absolute gain in average accuracy across five mathematical and general reasoning benchmarks compared to six state-of-the-art heuristic and learning-based multi-agent reasoning algorithms. Our work presents a paradigm for learning adaptive, meta-cognitive policies for multi-agent LLM systems, shifting the focus from designing fixed protocols to learning dynamic, deliberative strategies. 
- oai:arXiv.org:2509.03817v2 + Efficient $Q$-Learning and Actor-Critic Methods for Robust Average Reward Reinforcement Learning + https://arxiv.org/abs/2506.07040 + arXiv:2506.07040v3 Announce Type: replace +Abstract: We present a non-asymptotic convergence analysis of $Q$-learning and actor-critic algorithms for robust average-reward Markov Decision Processes (MDPs) under contamination, total-variation (TV) distance, and Wasserstein uncertainty sets. A key ingredient of our analysis is showing that the optimal robust $Q$ operator is a strict contraction with respect to a carefully designed semi-norm (with constant functions quotiented out). This property enables a stochastic approximation update that learns the optimal robust $Q$-function using $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. We also provide an efficient routine for robust $Q$-function estimation, which in turn facilitates robust critic estimation. Building on this, we introduce an actor-critic algorithm that learns an $\epsilon$-optimal robust policy within $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. We provide numerical simulations to evaluate the performance of our algorithms. + oai:arXiv.org:2506.07040v3 + cs.LG cs.AI - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Wei Yang, Jesse Thomason + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yang Xu, Swetha Ganesh, Vaneet Aggarwal - Measuring the Measures: Discriminative Capacity of Representational Similarity Metrics Across Model Families - https://arxiv.org/abs/2509.04622 - arXiv:2509.04622v5 Announce Type: replace -Abstract: Representational similarity metrics are fundamental tools in neuroscience and AI, yet we lack systematic comparisons of their discriminative power across model families. 
We introduce a quantitative framework to evaluate representational similarity measures based on their ability to separate model families across architectures (CNNs, Vision Transformers, Swin Transformers, ConvNeXt) and training regimes (supervised vs. self-supervised). Using three complementary separability measures (d-prime from signal detection theory, silhouette coefficients, and ROC-AUC), we systematically assess the discriminative capacity of commonly used metrics including RSA, linear predictivity, Procrustes, and soft matching. We show that separability systematically increases as metrics impose more stringent alignment constraints. Among mapping-based approaches, soft-matching achieves the highest separability, followed by Procrustes alignment and linear predictivity. Non-fitting methods such as RSA also yield strong separability across families. These results provide the first systematic comparison of similarity metrics through a separability lens, clarifying their relative sensitivity and guiding metric choice for large-scale model and brain comparisons. - oai:arXiv.org:2509.04622v5 - cs.LG - cs.AI - cs.NE - Wed, 10 Dec 2025 00:00:00 -0500 + PlayerOne: Egocentric World Simulator + https://arxiv.org/abs/2506.09995 + arXiv:2506.09995v3 Announce Type: replace +Abstract: We introduce PlayerOne, the first egocentric realistic world simulator, facilitating immersive and unrestricted exploration within vividly dynamic environments. Given an egocentric scene image from the user, PlayerOne can accurately construct the corresponding world and generate egocentric videos that are strictly aligned with the real-scene human motion of the user captured by an exocentric camera. 
PlayerOne is trained in a coarse-to-fine pipeline that first performs pretraining on large-scale egocentric text-video pairs for coarse-level egocentric understanding, followed by finetuning on synchronous motion-video data extracted from egocentric-exocentric video datasets with our automatic construction pipeline. Besides, considering the varying importance of different components, we design a part-disentangled motion injection scheme, enabling precise control of part-level movements. In addition, we devise a joint reconstruction framework that progressively models both the 4D scene and video frames, ensuring scene consistency in the long-form video generation. Experimental results demonstrate its great generalization ability in precise control of varying human movements and world-consistent modeling of diverse scenarios. It marks the first endeavor into egocentric real-world simulation and can pave the way for the community to delve into fresh frontiers of world modeling and its diverse applications.
 + oai:arXiv.org:2506.09995v3
 + cs.CV
 + Thu, 11 Dec 2025 00:00:00 -0500
 replace
 - http://creativecommons.org/licenses/by/4.0/
 + http://creativecommons.org/licenses/by-sa/4.0/
 - Jialin Wu, Shreya Saha, Yiqing Bo, Meenakshi Khosla
 + Yuanpeng Tu, Hao Luo, Xi Chen, Xiang Bai, Fan Wang, Hengshuang Zhao

 - Harnessing Object Grounding for Time-Sensitive Video Understanding
 - https://arxiv.org/abs/2509.06335
 - arXiv:2509.06335v2 Announce Type: replace
-Abstract: We propose to improve the time-sensitive video understanding (TSV) capability of video large language models (Video-LLMs) with grounded objects (GO). We hypothesize that TSV tasks can benefit from GO within frames, which is supported by our preliminary experiments on LITA, a state-of-the-art Video-LLM for reasoning temporal localization. 
While augmenting prompts with textual descriptions of these object annotations improves the performance of LITA, it also introduces extra token length and susceptibility to the noise in object-level information. To address this, we propose GO-Tokenizer, a lightweight add-on module for Video-LLMs leveraging off-the-shelf object detectors to encode compact object information on the fly. Experimental results demonstrate that pretraining with GO-Tokenizer outperforms the vanilla Video-LLM and its counterpart, utilizing textual descriptions of objects in the prompt. The gain generalizes across different models, datasets, and video understanding tasks, such as reasoning temporal localization and dense captioning. - oai:arXiv.org:2509.06335v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + The Impact of Partial Computations on the Red-Blue Pebble Game + https://arxiv.org/abs/2506.10854 + arXiv:2506.10854v2 Announce Type: replace +Abstract: We study an extension of the well-known red-blue pebble game (RBP) with partial computation steps, inspired by the recent work of Sobczyk. While the original RBP assumes that we need to have all the inputs of an operation in fast memory at the same time, in many concrete computations, the inputs can be aggregated one by one into the final output value. These partial computation steps can enable pebbling strategies with much smaller I/O cost, and in settings where such a step-by-step aggregation is possible, this extended red-blue pebble game offers a much more realistic cost model. + We establish the fundamental properties of this partial-computing red-blue pebble game (PRBP), and compare it to the original RBP. We begin with some simple examples where allowing partial computations can decrease the optimal I/O cost. It is also shown that the cost can decrease by up to a linear factor this way, but in general, it is NP-hard to decide whether partial computations allow for a smaller cost in a specific DAG. 
We then discuss how $S$-partitions, a crucial tool for deriving I/O lower bounds in RBP, can be adapted to the PRBP model. These new tools are then used to establish lower bounds on the I/O cost of some prominent computational tasks. Finally, we also adapt a hardness result from RBP, showing that the optimum cost is still NP-hard to approximate in PRBP to any reasonable factor. + oai:arXiv.org:2506.10854v2 + cs.DC + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tz-Ying Wu, Sharath Nittur Sridhar, Subarna Tripathi + 10.1145/3694906.3743320 + P\'al Andr\'as Papp, Aleksandros Sobczyk, A. N. Yzelman - Zo3T: Zero-Shot 3D-Aware Trajectory-Guided Image-to-Video Generation via Test-Time Training - https://arxiv.org/abs/2509.06723 - arXiv:2509.06723v3 Announce Type: replace -Abstract: Trajectory-Guided image-to-video (I2V) generation aims to synthesize videos that adhere to user-specified motion instructions. Existing methods typically rely on computationally expensive fine-tuning on scarce annotated datasets. Although some zero-shot methods attempt to trajectory control in the latent space, they may yield unrealistic motion by neglecting 3D perspective and creating a misalignment between the manipulated latents and the network's noise predictions. To address these challenges, we introduce Zo3T, a novel zero-shot test-time-training framework for trajectory-guided generation with three core innovations: First, we incorporate a 3D-Aware Kinematic Projection, leveraging inferring scene depth to derive perspective-correct affine transformations for target regions. Second, we introduce Trajectory-Guided Test-Time LoRA, a mechanism that dynamically injects and optimizes ephemeral LoRA adapters into the denoising network alongside the latent state. 
Driven by a regional feature consistency loss, this co-adaptation effectively enforces motion constraints while allowing the pre-trained model to locally adapt its internal representations to the manipulated latent, thereby ensuring generative fidelity and on-manifold adherence. Finally, we develop Guidance Field Rectification, which refines the denoising evolutionary path by optimizing the conditional guidance field through a one-step lookahead strategy, ensuring efficient generative progression towards the target trajectory. Zo3T significantly enhances 3D realism and motion accuracy in trajectory-controlled I2V generation, demonstrating superior performance over existing training-based and zero-shot approaches. - oai:arXiv.org:2509.06723v3 + TAViS: Text-bridged Audio-Visual Segmentation with Foundation Models + https://arxiv.org/abs/2506.11436 + arXiv:2506.11436v2 Announce Type: replace +Abstract: Audio-Visual Segmentation (AVS) faces a fundamental challenge of effectively aligning audio and visual modalities. While recent approaches leverage foundation models to address data scarcity, they often rely on single-modality knowledge or combine foundation models in an off-the-shelf manner, failing to address the cross-modal alignment challenge. In this paper, we present TAViS, a novel framework that \textbf{couples} the knowledge of multimodal foundation models (ImageBind) for cross-modal alignment and a segmentation foundation model (SAM2) for precise segmentation. However, effectively combining these models poses two key challenges: the difficulty in transferring the knowledge between SAM2 and ImageBind due to their different feature spaces, and the insufficiency of using only segmentation loss for supervision. 
To address these challenges, we introduce a text-bridged design with two key components: (1) a text-bridged hybrid prompting mechanism where pseudo text provides class prototype information while retaining modality-specific details from both audio and visual inputs, and (2) an alignment supervision strategy that leverages text as a bridge to align shared semantic concepts within audio-visual modalities. Our approach achieves superior performance on single-source, multi-source, semantic datasets, and excels in zero-shot settings. + oai:arXiv.org:2506.11436v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ruicheng Zhang, Jun Zhou, Zunnan Xu, Zihao Liu, Jiehui Huang, Mingyang Zhang, Yu Sun, Xiu Li + Ziyang Luo, Nian Liu, Xuguang Yang, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal, Fahad Shahbaz Khan, Junwei Han - Knowledge Distillation Driven Semantic NOMA for Image Transmission with Diffusion Model - https://arxiv.org/abs/2509.07363 - arXiv:2509.07363v2 Announce Type: replace -Abstract: As a promising 6G enabler beyond conventional bit-level transmission, semantic communication can considerably reduce required bandwidth resources, while its combination with multiple access requires further exploration. This paper proposes a knowledge distillation-driven and diffusion-enhanced (KDD) semantic non-orthogonal multiple access (NOMA), named KDD-SemNOMA, for multi-user uplink wireless image transmission. Specifically, to ensure robust feature transmission across diverse transmission conditions, we firstly develop a ConvNeXt-based deep joint source and channel coding architecture with enhanced adaptive feature module. This module incorporates signal-to-noise ratio and channel state information to dynamically adapt to additive white Gaussian noise and Rayleigh fading channels. 
Furthermore, to improve image restoration quality without inference overhead, we introduce a two-stage knowledge distillation strategy, i.e., a teacher model, trained on interference-free orthogonal transmission, guides a student model via feature affinity distillation and cross-head prediction distillation. Moreover, a diffusion model-based refinement stage leverages generative priors to transform initial SemNOMA outputs into high-fidelity images with enhanced perceptual quality. Extensive experiments on CIFAR-10 and FFHQ-256 datasets demonstrate superior performance over state-of-the-art methods, delivering satisfactory reconstruction performance even at extremely poor channel conditions. These results highlight the advantages in both pixel-level accuracy and perceptual metrics, effectively mitigating interference and enabling high-quality image recovery. - oai:arXiv.org:2509.07363v2 - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + AI reconstruction of European weather from the Euro-Atlantic regimes + https://arxiv.org/abs/2506.13758 + arXiv:2506.13758v2 Announce Type: replace +Abstract: We present a non-linear AI-model designed to reconstruct monthly mean anomalies of the European temperature and precipitation based on the Euro-Atlantic Weather regimes (WR) indices. WR represent recurrent, quasi-stationary, and persistent states of the atmospheric circulation that exert considerable influence over the European weather, therefore offering an opportunity for sub-seasonal to seasonal forecasting. While much research has focused on studying the correlation and impacts of the WR on European weather, the estimation of ground-level climate variables, such as temperature and precipitation, from Euro-Atlantic WR remains largely unexplored and is currently limited to linear methods. 
The presented AI model can capture and introduce complex non-linearities in the relation between the WR indices, describing the state of the Euro-Atlantic atmospheric circulation and the corresponding surface temperature and precipitation anomalies in Europe. We discuss the AI-model performance in reconstructing the monthly mean two-meter temperature and total precipitation anomalies in the European winter and summer, also varying the number of WR used to describe the monthly atmospheric circulation. We assess the impact of errors on the WR indices in the reconstruction and show that a mean absolute relative error below 80% yields improved seasonal reconstruction compared to the ECMWF operational seasonal forecast system, SEAS5. As a demonstration of practical applicability, we evaluate the model using WR indices predicted by SEAS5, finding slightly better or comparable skill relative to the SEAS5 forecast itself. Our findings demonstrate that WR-based anomaly reconstruction, powered by AI tools, offers a promising pathway for sub-seasonal and seasonal forecasting. + oai:arXiv.org:2506.13758v2 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Qifei Wang, Zhen Gao, Shuo Sun, Zhijin Qin, Xiaodong Xu, Meixia Tao + http://creativecommons.org/licenses/by-sa/4.0/ + 10.1002/joc.70216 + International Journal of Climatology, 2025 + A. Camilletti, G. Franch, E. Tomasi, M. Cristoforetti - Dynamic read & write optimization with TurtleKV - https://arxiv.org/abs/2509.10714 - arXiv:2509.10714v2 Announce Type: replace -Abstract: High read and write performance is important for generic key/value stores, which are fundamental to modern applications and databases. Yet, achieving high performance for both reads and writes is challenging due to traditionally limited memory and the pick-any-two-out-of-three tradeoff between memory use, read performance, and write performance. 
Existing state-of-the-art approaches limit memory usage and choose a primary dimension (reads or writes) for which to optimize their on-disk structures. They recover performance in the remaining dimension by other mechanisms. This approach limits databases' maximum performance in the remaining dimension and their dynamic (online) tunability to respond to changing workloads. We explore a different approach that dynamically trades memory for read or write performance as needed. We present TurtleKV, which includes a novel unbiased data structure for on-disk storage. It includes a knob that dynamically increases memory reserved for increasing read or write performance. When evaluated on YCSB, TurtleKV achieves up to 8x the write throughput of industry-leader RocksDB and up to 5x the read throughput while incurring similar space amplification. Compared to the state-of-the-art system SplinterDB, TurtleKV runs up to 40% better on point queries, up to 6x better on range scans and achieves similar write performance, while incurring 50% less space amplification.
 - oai:arXiv.org:2509.10714v2
 - cs.DB
 - Wed, 10 Dec 2025 00:00:00 -0500
 + A Minimalist Optimizer Design for LLM Pretraining
 + https://arxiv.org/abs/2506.16659
 + arXiv:2506.16659v2 Announce Type: replace
+Abstract: Training large language models (LLMs) typically relies on adaptive optimizers such as Adam, which introduce extra operations and require significantly more memory to maintain first- and second-order moments than SGD. While recent works such as GaLore, Fira and APOLLO have proposed state-compressed variants to reduce memory consumption, a fundamental question remains: What are the minimum modifications to plain SGD needed to match state-of-the-art pretraining performance? 
We systematically investigate this question using a bottom-up approach, and identify two simple yet highly (memory- and compute-) efficient techniques: (1) column-wise gradient normalization (normalizing the gradient along the output dimension), which boosts SGD performance without momentum; and (2) applying first-order momentum only to the output layer, where gradient variance is highest. Combining these two techniques leads to SCALE (Stochastic Column-normAlized Last-layer momEntum), a simple optimizer for memory-efficient pretraining. Across multiple LLaMA models (60M-1B), SCALE matches or exceeds the performance of Adam while using only 35-45% of the total memory. It also consistently outperforms memory-efficient optimizers such as GaLore, Fira and APOLLO, making it a strong candidate for large-scale pretraining under memory constraints. For the LLaMA 7B model, SCALE outperforms the state-of-the-art memory-efficient methods APOLLO and Muon, in terms of both perplexity and memory consumption.
 + oai:arXiv.org:2506.16659v2
 + cs.LG
 + cs.AI
 + math.OC
 + Thu, 11 Dec 2025 00:00:00 -0500
 replace
 - http://creativecommons.org/licenses/by-nc-sa/4.0/
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Tony Astolfi, Vidya Silai, Darby Huye, Lan Liu, Raja R. Sambasivan, Johes Bater
 + Athanasios Glentis, Jiaxiang Li, Andi Han, Mingyi Hong

 - Generative AI in Game Development: A Qualitative Research Synthesis
 - https://arxiv.org/abs/2509.11898
 - arXiv:2509.11898v2 Announce Type: replace
-Abstract: Generative Artificial Intelligence (GenAI) is currently reshaping game development practices, production pipelines, and value networks in an unprecedentedly pervasive manner with cascading consequences remaining unclear. In the last five years since GenAI's inception, a growing body of qualitative research has explored these early transformations from different settings and demographic angles. 
However, these studies often contextualise and consolidate their findings weakly with related work; for research to keep up with and support stakeholders in this development, the current moment calls for a synthesis of the findings that have emerged thus far. Here, we address this need through a qualitative research synthesis via meta-ethnography. We followed PRISMA-S to systematically search the relevant literature from 2020-2025, including major HCI and games research databases. We then synthesised the ten eligible studies, conducting reciprocal translation and line-of-argument synthesis guided by eMERGe, informed by CASP quality appraisal. We identified nine overarching themes, provide recommendations, and contextualise our insights in wider game production trajectories. With this work, we seek to provide practitioners, researchers and policy-makers with grounded insights to guide practice, research and governance.
 - oai:arXiv.org:2509.11898v2
 - cs.HC
 - Wed, 10 Dec 2025 00:00:00 -0500
 + Efficient Black-Box Fault Localization for System-Level Test Code Using Large Language Models
 + https://arxiv.org/abs/2506.19045
 + arXiv:2506.19045v2 Announce Type: replace
+Abstract: Fault localization (FL) is a critical step in debugging, which typically relies on repeated executions to pinpoint faulty code regions. However, repeated executions can be impractical in the presence of non-deterministic failures or high execution costs. While recent efforts have leveraged Large Language Models (LLMs) to aid execution-free FL, these have primarily focused on identifying faults in the system-under-test (SUT) rather than in the often complex system-level test code. However, the latter is also important, as in practice, many failures are triggered by faulty test code. To overcome these challenges, we introduce a fully static, LLM-driven approach for system-level test code fault localization (TCFL) that does not require executing the test case. 
Our method uses a single failure execution log to estimate the test's execution trace through three novel algorithms that identify only code statements likely involved in the failure. This pruned trace, combined with the error message, is used to prompt the LLM to rank potential faulty locations. Our black-box, system-level approach requires no access to the SUT source code and is applicable to complex test scripts that assess full system behavior. We evaluate our technique at the function, block, and line levels using an industrial dataset of faulty test cases that were not used in pre-training LLMs. Results show that our best-estimated traces closely match the actual traces, with an F1 score of around 90%. Additionally, pruning the complex system-level test code reduces the LLM's inference time by up to 34% without any loss in FL performance. Our method achieves equal or higher FL accuracy, requiring over 85% less average inference time per test case and 93% fewer tokens than the latest LLM-guided FL method. + oai:arXiv.org:2506.19045v2 + cs.SE + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Alexandru Ternar, Alena Denisova, Jo\~ao M. Cunha, Annakaisa Kultima, Christian Guckelsberger + Ahmadreza Saboor Yaraghi, Golnaz Gharachorlu, Sakina Fatima, Lionel C. Briand, Ruiyuan Wan, Ruifeng Gao - SENSE models: an open source solution for multilingual and multimodal semantic-based tasks - https://arxiv.org/abs/2509.12093 - arXiv:2509.12093v2 Announce Type: replace -Abstract: This paper introduces SENSE (Shared Embedding for N-lingual Speech and tExt), an open-source solution inspired by the SAMU-XLSR framework and conceptually similar to Meta AI's SONAR models. These approaches rely on a teacher-student framework to align a self-supervised speech encoder with the language-agnostic continuous representations of a text encoder at the utterance level. 
We describe how the original SAMU-XLSR method has been updated by selecting a stronger teacher text model and a better initial speech encoder. The source code for training and using SENSE models has been integrated into the SpeechBrain toolkit, and the first SENSE model we trained has been publicly released. We report experimental results on multilingual and multimodal semantic tasks, where our SENSE model achieves highly competitive performance. Finally, this study offers new insights into how semantics are captured in such semantically aligned speech encoders. - oai:arXiv.org:2509.12093v2 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + A parametric tensor ROM for the shallow water dam break problem + https://arxiv.org/abs/2506.20007 + arXiv:2506.20007v2 Announce Type: replace +Abstract: We develop a variant of a tensor reduced-order model (tROM) for the parameterized shallow-water dam-break problem. This hyperbolic system presents multiple challenges for model reduction, including a slow decay of the Kolmogorov $N$-width of the solution manifold, shock formation, and the loss of smooth solution dependence on parameters. These issues limit the performance of traditional Proper Orthogonal Decomposition based ROMs. Our tROM approach, based on a low-rank tensor decomposition, builds a parameter-to-solution map from high-fidelity snapshots and constructs localized reduced bases via a local POD procedure. We apply this method to 1D dry-bed and wet-bed problems and 2D wet-bed problem with topography and bottom friction, showing that the non-interpolatory variant of the tROM, combined with Chebyshev sampling near critical parameter values, effectively captures parameter-dependent behavior and significantly outperforms standard POD-ROMs. This is especially evident in the wet-bed case, where POD-ROMs exhibit poor resolution of shock waves and spurious oscillations. 
+ oai:arXiv.org:2506.20007v2 + math.NA + cs.NA + physics.flu-dyn + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Salima Mdhaffar, Haroun Elleuch, Chaimae Chellaf, Ha Nguyen, Yannick Est\`eve + http://creativecommons.org/licenses/by-sa/4.0/ + Md Rezwan Bin Mizan, Maxim Olshanskii, Ilya Timofeyev - Funding AI for Good: A Call for Meaningful Engagement - https://arxiv.org/abs/2509.12455 - arXiv:2509.12455v2 Announce Type: replace -Abstract: Artificial Intelligence for Social Good (AI4SG) is a growing area that explores AI's potential to address social issues, such as public health. Yet prior work has shown limited evidence of its tangible benefits for intended communities, and projects frequently face inadequate community engagement and sustainability challenges. While existing HCI literature on AI4SG initiatives primarily focuses on the mechanisms of funded projects and their outcomes, much less attention has been given to the funding agenda and rhetoric that influences downstream approaches. Through a thematic analysis of 35 funding documents -- representing about $410 million USD in total investments, we reveal dissonances between AI4SG's stated intentions for positive social impact and the techno-centric approaches that some funding agendas promoted, while also identifying funding documents that scaffolded community-collaborative approaches for applicants. Drawing on our findings, we offer recommendations for funders to embed approaches that balance both contextual understanding and technical capacities in future funding call designs. We further discuss how the HCI community can positively shape AI4SG funding design processes. 
- oai:arXiv.org:2509.12455v2 - cs.CY - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Almost Tight Additive Guarantees for $k$-Edge-Connectivity + https://arxiv.org/abs/2506.20906 + arXiv:2506.20906v2 Announce Type: replace +Abstract: We consider the \emph{$k$-edge connected spanning subgraph} (kECSS) problem, where we are given an undirected graph $G = (V, E)$ with nonnegative edge costs $\{c_e\}_{e\in E}$, and we seek a minimum-cost \emph{$k$-edge connected} subgraph $H$ of $G$. For even $k$, we present a polytime algorithm that computes a $(k-2)$-edge connected subgraph of cost at most the optimal value $LP^*$ of the natural LP-relaxation for kECSS; for odd $k$, we obtain a $(k-3)$-edge connected subgraph of cost at most $LP^*$. Since kECSS is APX-hard for all $k\geq 2$, our results are nearly optimal. They also significantly improve upon the recent work of Hershkowitz et al., both in terms of solution quality and the simplicity of algorithm and its analysis. Our techniques also yield an alternate guarantee, where we obtain a $(k-1)$-edge connected subgraph of cost at most $1.5\cdot LP^*$; with unit edge costs, the cost guarantee improves to $(1+\frac{4}{3k})\cdot LP^*$, which improves upon the state-of-the-art approximation for unit edge costs, but with a unit loss in edge connectivity. + Our kECSS-result also yields results for the \emph{$k$-edge connected spanning multigraph} (kECSM) problem, where multiple copies of an edge can be selected: we obtain a $(1+2/k)$-approximation algorithm for even $k$, and a $(1+3/k)$-approximation algorithm for odd $k$. + Our techniques extend to the degree-bounded versions of kECSS and kECSM, wherein we also impose degree lower- and upper- bounds on the nodes. We obtain the same cost and connectivity guarantees for these degree-bounded versions with an additive violation of (roughly) $2$ for the degree bounds. 
These are the first results for degree-bounded \{kECSS,kECSM\} of the form where the cost of the solution obtained is at most the optimum, and the connectivity constraints are violated by an additive constant. + oai:arXiv.org:2506.20906v2 + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Hongjin Lin, Anna Kawakami, Catherine D'Ignazio, Kenneth Holstein, Krzysztof Gajos + http://creativecommons.org/licenses/by/4.0/ + Nikhil Kumar, Chaitanya Swamy - Do Natural Language Descriptions of Model Activations Convey Privileged Information? - https://arxiv.org/abs/2509.13316 - arXiv:2509.13316v3 Announce Type: replace -Abstract: Recent interpretability methods have proposed to translate LLM internal representations into natural language descriptions using a second verbalizer LLM. This is intended to illuminate how the target model represents and operates on inputs. But do such activation verbalization approaches actually provide privileged knowledge about the internal workings of the target model, or do they merely convey information about its inputs? We critically evaluate popular verbalization methods across datasets used in prior work and find that they can succeed at benchmarks without any access to target model internals, suggesting that these datasets may not be ideal for evaluating verbalization methods. We then run controlled experiments which reveal that verbalizations often reflect the parametric knowledge of the verbalizer LLM which generated them, rather than the knowledge of the target LLM whose activations are decoded. Taken together, our results indicate a need for targeted benchmarks and experimental controls to rigorously assess whether verbalization methods provide meaningful insights into the operations of LLMs. 
- oai:arXiv.org:2509.13316v3 - cs.CL + Model-driven Stochastic Trace Clustering + https://arxiv.org/abs/2506.23776 + arXiv:2506.23776v2 Announce Type: replace +Abstract: Process discovery algorithms automatically extract process models from event logs, but high variability often results in complex and hard-to-understand models. To mitigate this issue, trace clustering techniques group process executions into clusters, each represented by a simpler and more understandable process model. Model-driven trace clustering improves on this by assigning traces to clusters based on their conformity to cluster-specific process models. However, most existing clustering techniques rely on either no process model discovery, or non-stochastic models, neglecting the frequency or probability of activities and transitions, thereby limiting their capability to capture real-world execution dynamics. We propose a novel model-driven trace clustering method that optimizes stochastic process models within each cluster. Our approach uses entropic relevance, a stochastic conformance metric based on directly-follows probabilities, to guide trace assignment. This allows clustering decisions to consider both structural alignment with a cluster's process model and the likelihood that a trace originates from a given stochastic process model. The method is computationally efficient, scales linearly with input size, and improves model interpretability by producing clusters with clearer control-flow patterns. Extensive experiments on public real-life datasets demonstrate that while our method yields superior stochastic coherence and graph simplicity, traditional fitness metrics reveal a trade-off, highlighting the specific utility of our approach for stochastic process analysis. 
+ oai:arXiv.org:2506.23776v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Millicent Li, Alberto Mario Ceballos Arroyo, Giordano Rogers, Naomi Saphra, Byron C. Wallace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Jari Peeperkorn, Johannes De Smedt, Jochen De Weerdt - The Groupoid-Syntax of Type Theory is a Set - https://arxiv.org/abs/2509.14988 - arXiv:2509.14988v2 Announce Type: replace -Abstract: Categories with families (CwFs) have been used to define the semantics of type theory in type theory. In the setting of Homotopy Type Theory (HoTT), one of the limitations of the traditional notion of CwFs is the requirement to set-truncate types, which excludes models based on univalent categories, such as the standard set model. To address this limitation, we introduce the concept of a Groupoid Category with Families (GCwF). This framework truncates types at the groupoid level and incorporates coherence equations, providing a natural extension of the CwF framework when starting from a 1-category. - We demonstrate that the initial GCwF for a type theory with a base family of sets and Pi-types (groupoid-syntax) is set-truncated. Consequently, this allows us to utilize the conventional intrinsic syntax of type theory while enabling interpretations in semantically richer and more natural models. All constructions in this paper were formalised in Cubical Agda. - oai:arXiv.org:2509.14988v2 - cs.LO - Wed, 10 Dec 2025 00:00:00 -0500 + MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement + https://arxiv.org/abs/2507.00966 + arXiv:2507.00966v3 Announce Type: replace +Abstract: With new sequence models like Mamba and xLSTM, several studies have shown that these models match or outperform the state-of-the-art in single-channel speech enhancement and audio representation learning. 
However, prior research has demonstrated that sequence models like LSTM and Mamba tend to overfit to the training set. To address this, previous works have shown that adding self-attention to LSTMs substantially improves generalization performance for single-channel speech enhancement. Nevertheless, neither the concept of hybrid Mamba and time-frequency attention models nor their generalization performance have been explored for speech enhancement. In this paper, we propose a novel hybrid architecture, MambAttention, which combines Mamba and shared time- and frequency-multi-head attention modules for generalizable single-channel speech enhancement. To train our model, we introduce VB-DemandEx, a dataset inspired by VoiceBank+Demand but with more challenging noise types and lower signal-to-noise ratios. Trained on VB-DemandEx, MambAttention significantly outperforms existing state-of-the-art discriminative LSTM-, xLSTM-, Mamba-, and Conformer-based systems of similar complexity across all reported metrics on two out-of-domain datasets: DNS 2020 without reverberation and EARS-WHAM_v2. MambAttention also matches or outperforms generative diffusion models in generalization performance while being competitive with language model baselines. Ablation studies highlight the importance of weight sharing between time- and frequency-multi-head attention modules for generalization performance. Finally, we explore integrating the shared time- and frequency-multi-head attention modules with LSTM and xLSTM, which yields a notable performance improvement on the out-of-domain datasets. Yet, MambAttention remains superior for cross-corpus generalization across all reported evaluation metrics. 
+ oai:arXiv.org:2507.00966v3 + cs.SD + cs.AI + eess.AS + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Thorsten Altenkirch, Ambrus Kaposi, Szumi Xie + http://creativecommons.org/publicdomain/zero/1.0/ + Nikolai Lund K\"uhne, Jesper Jensen, Jan {\O}stergaard, Zheng-Hua Tan - Indoor Fluid Antenna Systems Enabled by Layout-Specific Modeling and Group Relative Policy Optimization - https://arxiv.org/abs/2509.15006 - arXiv:2509.15006v4 Announce Type: replace -Abstract: Fluid antenna system (FAS) revolutionizes wireless communications via utilizing position-flexible antennas that dynamically optimize channel conditions and mitigate multipath fading. This innovation is particularly valuable in indoor environments, in which signal propagation is severely degraded due to structural obstructions and complex multipath reflections. In this paper, we investigate the channel modeling and the joint optimization of antenna positioning, beamforming, and power allocation for indoor FAS. In particular, we propose a layout-specific channel model, and employ the novel group relative policy optimization (GRPO) algorithm for tackling the optimization problem. Compared to the state-of-the-art Sionna model, our model achieves an 83.3% reduction in computation time with an approximately 3 dB increase in root-mean-square error (RMSE). When simplified to a two-ray model, our model allows for a closed-form antenna position solution with near-optimal performance. For the joint optimization problem, our GRPO algorithm outperforms proximal policy optimization (PPO) and other baselines in sum-rate, while requiring only 50.8% computational resources of PPO, thanks to its group advantage estimation. Simulation results show that increasing either the group size or trajectory length in GRPO does not yield significant improvements in sum-rate, suggesting that these parameters can be selected conservatively without sacrificing performance. 
- oai:arXiv.org:2509.15006v4 - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + On the Adversarial Robustness of Online Importance Sampling + https://arxiv.org/abs/2507.02394 + arXiv:2507.02394v2 Announce Type: replace +Abstract: This paper studies the adversarial-robustness of importance-sampling (aka sensitivity sampling); a useful algorithmic technique that samples elements with probabilities proportional to some measure of their importance. A streaming or online algorithm is called adversarially-robust if it succeeds with high probability on input streams that may change adaptively depending on previous algorithm outputs. Unfortunately, the dependence between stream elements breaks the analysis of most randomized algorithms, and in particular that of importance-sampling algorithms. Previously, Braverman et al. [NeurIPS 2021] suggested that streaming algorithms based on importance-sampling may be adversarially-robust; however, they proved it only for well-behaved inputs. + We focus on the adversarial-robustness of online importance-sampling, a natural variant where sampling decisions are irrevocable and made as data arrives. Our main technical result shows that, given as input an adaptive stream of elements $x_1,\ldots,x_T\in \mathbb{R}_+$, online importance-sampling maintains a $(1\pm\epsilon)$-approximation of their sum while matching (up to lower order terms) the storage guarantees of the oblivious (non-adaptive) case. We then apply this result to develop adversarially-robust online algorithms for two fundamental problems: hypergraph cut sparsification and $\ell_p$ subspace embedding. 
+ oai:arXiv.org:2507.02394v2 + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tong Zhang, Qianren Li, Shuai Wang, Wanli Ni, Jiliang Zhang, Rui Wang, Kai-Kit Wong, Chan-Byoung Chae + Yotam Kenneth-Mordoch, Shay Sapir - Beyond Community Notes: A Framework for Understanding and Building Crowdsourced Context Systems for Social Media - https://arxiv.org/abs/2509.15434 - arXiv:2509.15434v2 Announce Type: replace -Abstract: Social media platforms are increasingly developing features that display crowdsourced context alongside posts, modeled after X's Community Notes. These systems, which we term Crowdsourced Context Systems (CCS), have the potential to reshape our information ecosystem as major platforms embrace them as alternatives to top-down fact-checking. To deeply understand the features and implications of such systems, we perform a systematic literature review of existing CCS research and analyze several real-world CCS implementations. Based on our analysis, we develop a framework with two distinct components. First, we present a theoretical model to help conceptualize and define CCS. Second, we identify a design space encompassing six key aspects of CCS: participation, inputs, curation, presentation, platform treatment, and transparency. We discuss key normative implications of different CCS design and implementation choices. Our paper integrates these theoretical, design, and ethical perspectives to establish a foundation for future human-centered research on Crowdsourced Context Systems. 
- oai:arXiv.org:2509.15434v2 - cs.HC - cs.CY - cs.SI - Wed, 10 Dec 2025 00:00:00 -0500 + Quantifying Cross-Attention Interaction in Transformers for Interpreting TCR-pMHC Binding + https://arxiv.org/abs/2507.03197 + arXiv:2507.03197v2 Announce Type: replace +Abstract: CD8+ "killer" T cells and CD4+ "helper" T cells play a central role in the adaptive immune system by recognizing antigens presented by Major Histocompatibility Complex (pMHC) molecules via T Cell Receptors (TCRs). Modeling binding between T cells and the pMHC complex is fundamental to understanding basic mechanisms of human immune response as well as in developing therapies. While transformer-based models such as TULIP have achieved impressive performance in this domain, their black-box nature precludes interpretability and thus limits a deeper mechanistic understanding of T cell response. Most existing post-hoc explainable AI (XAI) methods are confined to encoder-only, co-attention, or model-specific architectures and cannot handle encoder-decoder transformers used in TCR-pMHC modeling. To address this gap, we propose Quantifying Cross-Attention Interaction (QCAI), a new post-hoc method designed to interpret the cross-attention mechanisms in transformer decoders. Quantitative evaluation is a challenge for XAI methods; we have compiled TCR-XAI, a benchmark consisting of 274 experimentally determined TCR-pMHC structures to serve as ground truth for binding. Using these structures we compute physical distances between relevant amino acid residues in the TCR-pMHC interaction region and evaluate how well our method and others estimate the importance of residues in this region across the dataset. We show that QCAI achieves state-of-the-art performance on both interpretability and prediction accuracy under the TCR-XAI benchmark. 
+ oai:arXiv.org:2507.03197v2 + cs.CE + cs.LG + q-bio.BM + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Travis Lloyd, Tung Nguyen, Karen Levy, Mor Naaman + Jiarui Li, Zixiang Yin, Haley Smith, Zhengming Ding, Samuel J. Landry, Ramgopal R. Mettu - Omni-LIVO: Robust RGB-Colored Multi-Camera Visual-Inertial-LiDAR Odometry via Photometric Migration and ESIKF Fusion - https://arxiv.org/abs/2509.15673 - arXiv:2509.15673v2 Announce Type: replace -Abstract: Wide field-of-view (FoV) LiDAR sensors provide dense geometry across large environments, but existing LiDAR-inertial-visual odometry (LIVO) systems generally rely on a single camera, limiting their ability to fully exploit LiDAR-derived depth for photometric alignment and scene colorization. We present Omni-LIVO, a tightly coupled multi-camera LIVO system that leverages multi-view observations to comprehensively utilize LiDAR geometric information across extended spatial regions. Omni-LIVO introduces a Cross-View direct alignment strategy that maintains photometric consistency across non-overlapping views, and extends the Error-State Iterated Kalman Filter (ESIKF) with multi-view updates and adaptive covariance. The system is evaluated on public benchmarks and our custom dataset, showing improved accuracy and robustness over state-of-the-art LIVO, LIO, and visual-inertial SLAM baselines. Code and dataset will be released upon publication. - oai:arXiv.org:2509.15673v2 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + RTGPU: Real-Time Computing with Graphics Processing Units + https://arxiv.org/abs/2507.06069 + arXiv:2507.06069v2 Announce Type: replace +Abstract: In this work, we survey the role of GPUs in real-time systems. Originally designed for parallel graphics workloads, GPUs are now widely used in time-critical applications such as machine learning, autonomous vehicles, and robotics due to their high computational throughput. 
Their parallel architecture is well-suited for accelerating complex tasks under strict timing constraints. However, their integration into real-time systems presents several challenges, including non-preemptive execution, execution time variability, and resource contention, factors that can lead to unpredictable delays and deadline violations. We examine existing solutions that address these challenges, including scheduling algorithms, resource management techniques, and synchronization methods, and highlight open research directions to improve GPU predictability and performance in real-time environments.
 + oai:arXiv.org:2507.06069v2
 cs.AR
 Thu, 11 Dec 2025 00:00:00 -0500
 
 replace
 http://creativecommons.org/licenses/by/4.0/
 Atiyeh Gheibi-Fetrat, Amirsaeed Ahmadi-Tonekaboni, Farzam Koohi-Ronaghi, Pariya Hajipour, Sana Babayan-Vanestan, Fatemeh Fotouhi, Elahe Mortazavian-Farsani, Pouria Khajehpour-Dezfouli, Sepideh Safari, Shaahin Hessabi, Hamid Sarbazi-Azad
 
 
 Graph Coloring for Multi-Task Learning
 https://arxiv.org/abs/2509.16959
 arXiv:2509.16959v4 Announce Type: replace
-Abstract: When different objectives conflict with each other in multi-task learning, gradients begin to interfere and slow convergence, thereby potentially reducing the final model's performance. To address this, we introduce SON-GOKU, a scheduler that computes gradient interference, constructs an interference graph, and then applies greedy graph-coloring to partition tasks into groups that align well with each other. At each training step, only one group (color class) of tasks is activated, and the grouping partition is constantly recomputed as task relationships evolve throughout training. 
By ensuring that each mini-batch contains only tasks that pull the model in the same direction, our method improves the effectiveness of any underlying multi-task learning optimizer without additional tuning. Since tasks within these groups will update in compatible directions, multi-task learning will improve model performance rather than impede it. Empirical results on six different datasets show that this interference-aware graph-coloring approach consistently outperforms baselines and state-of-the-art multi-task optimizers. We provide extensive theory showing why grouping and sequential updates improve multi-task learning, with guarantees on descent, convergence, and accurately identifying what tasks conflict or align.
 - oai:arXiv.org:2509.16959v4
 cs.LG
 cs.AI
 cs.NE
 stat.ML
 Wed, 10 Dec 2025 00:00:00 -0500
 
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Santosh Patapati
 
 
 An Offline Mobile Conversational Agent for Mental Health Support: Learning from Emotional Dialogues and Psychological Texts with Student-Centered Evaluation
 https://arxiv.org/abs/2507.10580
 + arXiv:2507.10580v2 Announce Type: replace
+Abstract: Mental health plays a crucial role in the overall well-being of an individual. In recent years, digital platforms have increasingly been used to expand mental health and emotional support. However, there are persistent challenges related to limited user accessibility, internet connectivity, and data privacy, which highlight the need for offline, smartphone-based solutions. To address these challenges, we propose EmoSApp (Emotional Support App): an entirely offline, smartphone-based conversational app designed to provide mental health and emotional support. EmoSApp leverages a language model, specifically the LLaMA-3.2-1B-Instruct, which is fine-tuned and quantized on a custom-curated ``Knowledge Dataset'' comprising 14,582 mental health QA pairs along with multi-turn conversational data, enabling robust domain expertise and fully on-device inference on resource-constrained smartphones. 
+ Through qualitative evaluation with students and mental health professionals, we demonstrate that EmoSApp has the ability to respond coherently and empathetically, provide relevant suggestions for users' mental health problems, and maintain interactive dialogue. Additionally, quantitative evaluations on nine commonsense and reasoning benchmarks, along with two mental health-specific datasets, demonstrate EmoSApp's effectiveness in low-resource settings. By prioritizing on-device deployment and specialized domain-specific adaptation, EmoSApp serves as a blueprint for future innovations in portable, secure, and highly tailored AI-driven mental health support.
 + oai:arXiv.org:2507.10580v2
 cs.CL
 cs.AI
 cs.CY
 cs.HC
 Thu, 11 Dec 2025 00:00:00 -0500
 
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Vimaleswar A, Prabhu Nandan Sahu, Nilesh Kumar Sahu, Haroon R. Lone
 
 
 Enhancing the NAO: Extending Capabilities of Legacy Robots for Long-Term Research
 https://arxiv.org/abs/2509.17760
 arXiv:2509.17760v3 Announce Type: replace
-Abstract: Legacy (unsupported) robotic platforms often lose research utility when manufacturer support ends, preventing integration of modern sensing, speech, and interaction capabilities. We present the Enhanced NAO, a revitalized version of Aldebaran's NAO robot featuring upgraded beamforming microphones, RGB-D and thermal cameras, and additional compute resources in a fully self-contained package. This system combines cloud-based and local models for perception and dialogue, while preserving the NAO's expressive body and behaviors. In a pilot user study validating conversational performance, the Enhanced NAO delivered significantly higher conversational quality and elicited stronger user preference compared to the NAO AI Edition, without increasing response latency. 
The added visual and thermal sensing modalities established a foundation for future perception-driven interaction. Beyond this implementation, our framework provides a platform-agnostic strategy for extending the lifespan and research utility of legacy robots, ensuring they remain valuable tools for human-robot interaction. - oai:arXiv.org:2509.17760v3 - cs.RO - cs.HC - eess.AS - Wed, 10 Dec 2025 00:00:00 -0500 + SAFT: Structure-Aware Fine-Tuning of LLMs for AMR-to-Text Generation + https://arxiv.org/abs/2507.13381 + arXiv:2507.13381v2 Announce Type: replace +Abstract: Large Language Models (LLMs) are increasingly applied to tasks involving structured inputs such as graphs. Abstract Meaning Representations (AMRs), which encode rich semantics as directed graphs, offer a rigorous testbed for evaluating LLMs on text generation from such structures. Yet, current methods often arbitrarily linearize AMRs, discarding key structural cues, or rely on architectures incompatible with standard LLMs. We introduce SAFT, a structure-aware fine-tuning approach that injects graph topology into pretrained LLMs without architectural changes. We compute direction-sensitive positional encodings from the magnetic Laplacian of transformed AMRs and project them into the embedding space of the LLM. While possibly applicable to any graph-structured inputs, we focus on AMR-to-text generation as a representative and challenging benchmark. SAFT sets a new state-of-the-art on AMR 3.0 with a 3.5 BLEU improvement over baselines. Gains scale with graph complexity, highlighting the value of structure-aware representations in enhancing LLM performance. SAFT offers a general and effective pathway for bridging structured data and language models. + oai:arXiv.org:2507.13381v2 + cs.CL + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Austin Wilson, Sahar Kapasi, Zane Greene, Alexis E. 
Block
 
 
 SAFT: Structure-Aware Fine-Tuning of LLMs for AMR-to-Text Generation
 https://arxiv.org/abs/2507.13381
 arXiv:2507.13381v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) are increasingly applied to tasks involving structured inputs such as graphs. Abstract Meaning Representations (AMRs), which encode rich semantics as directed graphs, offer a rigorous testbed for evaluating LLMs on text generation from such structures. Yet, current methods often arbitrarily linearize AMRs, discarding key structural cues, or rely on architectures incompatible with standard LLMs. We introduce SAFT, a structure-aware fine-tuning approach that injects graph topology into pretrained LLMs without architectural changes. We compute direction-sensitive positional encodings from the magnetic Laplacian of transformed AMRs and project them into the embedding space of the LLM. While possibly applicable to any graph-structured inputs, we focus on AMR-to-text generation as a representative and challenging benchmark. SAFT sets a new state-of-the-art on AMR 3.0 with a 3.5 BLEU improvement over baselines. Gains scale with graph complexity, highlighting the value of structure-aware representations in enhancing LLM performance. SAFT offers a general and effective pathway for bridging structured data and language models.
 + oai:arXiv.org:2507.13381v2
 cs.CL
 cs.LG
 Thu, 11 Dec 2025 00:00:00 -0500
 
 replace
 http://creativecommons.org/licenses/by/4.0/
 Rafiq Kamel, Filippo Guerranti, Simon Geisler, Stephan G\"unnemann
 
 
 SlicePilot: Demystifying Network Slice Placement in Heterogeneous Cloud Infrastructures
 https://arxiv.org/abs/2509.18545
 arXiv:2509.18545v2 Announce Type: replace
-Abstract: Cellular networks are composed of software-based entities, with main functions encapsulated as Virtual Network Functions (VNFs) deployed on Commercial-off-the-Shelf (COTS) hardware. As a key enabler of 5G, network slicing offers logically isolated Quality of Service (QoS) for diverse use cases. With the transition to cloud-native infrastructures, optimizing network slice placement across multi-cloud environments remains challenging due to heterogeneous resource capabilities and varying slice-specific demands. This paper presents SlicePilot, a modular framework that enables autonomous and near-optimal VNF placement using a disaggregated Multi-Agent Reinforcement Learning (MARL) approach. SlicePilot collects real-world traffic profiles to estimate resource needs for each slice type. These estimates guide a MARL-based scheduler that minimizes deployment costs while satisfying QoS constraints. We evaluate SlicePilot on a multi-cloud testbed and demonstrate a 19x speed-up over combinatorial optimization methods, while keeping deployment costs within 7.8% of the optimal. Although SlicePilot results in 2.42x more QoS violations under high-load conditions, this trade-off is offset by faster decision-making and reduced computational overhead. Overall, SlicePilot delivers a scalable, cost-efficient solution for network slice placement, making it suitable for real-time deployments where responsiveness and efficiency are critical. 
- oai:arXiv.org:2509.18545v2
 cs.NI
 Wed, 10 Dec 2025 00:00:00 -0500
 
 replace
 http://creativecommons.org/licenses/by/4.0/
 Ioannis Panitsas, Tolga O. Atalay, Dragoslav Stojadinovic, Angelos Stavrou, Leandros Tassiulas
 
 
 Quantifying Ocular Surface Changes with Contact Lens Wear
 + https://arxiv.org/abs/2507.13589
 + arXiv:2507.13589v2 Announce Type: replace
+Abstract: Over 140 million people worldwide and over 45 million people in the United States wear contact lenses; it is estimated that 12%-27.4% of contact lens users stop wearing them due to discomfort. Contact lens mechanical interactions with the ocular surface have been found to affect the ocular surface itself. These mechanical interactions are difficult to measure and calculate in a clinical setting, and the research in this field is limited. This paper presents the first mathematical model that captures the interactions between the contact lens and the open eye, where the contact lens configuration, the contact lens suction pressure, and the deformed ocular shape are all emergent properties of the model. The non-linear coupling between the contact lens and the eye is achieved by assuming that the suction pressure under the lens is applied directly to the ocular surface through the post-lens tear film layer. The contact lens mechanics are modeled using a previously published model. We consider homogeneous and heterogeneous linear elastic eye models, different ocular shapes, different lens shapes and thickness profiles, and extract lens deformations, suction pressure profiles, and ocular deformations and stresses for all the considered scenarios. The model predicts higher ocular deformations and stresses at the center of the eye and in the limbal/scleral regions. Accounting for heterogeneous material eye parameters increases the magnitude of such deformations and stresses. The ocular displacements and stresses non-linearly increase as we increase the stiffness of the contact lens. Inserting a steeper contact lens on the eye results in a reduction of the ocular displacement at the center of the eye and a larger displacement at the edge of the contact lens. 
The model predictions are compared with experimental data and previously developed mathematical models.
 + oai:arXiv.org:2507.13589v2
 math.NA
 cs.NA
 physics.bio-ph
 Thu, 11 Dec 2025 00:00:00 -0500
 
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 10.3934/mbe.2026008
 Mathematical Biosciences and Engineering (2026), Volume 23, Issue 1: 172-209
 Lucia Carichino, Kara L. Maki, David S. Ross, Riley K. Supple, Evan Rysdam
 
 
 The Cream Rises to the Top: Efficient Reranking Method for Verilog Code Generation
 https://arxiv.org/abs/2509.20215
 arXiv:2509.20215v2 Announce Type: replace
-Abstract: LLMs face significant challenges in Verilog generation due to limited domain-specific knowledge. While sampling techniques improve pass@k metrics, hardware engineers need one trustworthy solution rather than uncertain candidates. To bridge this gap, we formulate it as a semantic alignment problem between requirements and Verilog implementations, and propose VCD-RNK, a discriminator model tailored for efficient Verilog code reranking. Specifically, VCD-RNK incorporates Verilog-specific reasoning by distilling expert knowledge across three dimensions: code semantic analysis, test case generation, and functional correctness assessment. By explicitly simulating the above reasoning processes during inference, VCD-RNK effectively avoids computationally intensive test execution in existing methods.
 - oai:arXiv.org:2509.20215v2
 cs.AR
 cs.AI
 cs.SE
 Wed, 10 Dec 2025 00:00:00 -0500
 
 replace
 http://creativecommons.org/licenses/by/4.0/
 Guang Yang, Wei Zheng, Xiang Chen, Yifan Sun, Fengji Zhang, Terry Yue Zhuo
 
 
 CREME: Robustness Enhancement of Code LLMs via Layer-Aware Model Editing
 + https://arxiv.org/abs/2507.16407
 arXiv:2507.16407v3 Announce Type: replace
Abstract: Large language models (LLMs) have demonstrated impressive capabilities in code generation, where the natural language prompt plays a crucial role in conveying user intent to the model. 
However, prior studies have shown that LLMs are highly sensitive to prompt perturbations. Minor modifications in wording, syntax, or formatting can significantly reduce the functional correctness of generated code. As perturbations frequently occur in real-world scenarios, improving the robustness of LLMs to prompt perturbations is essential for ensuring reliable performance in practical code generation. In this paper, we introduce CREME (Code Robustness Enhancement via Model Editing), a novel approach that enhances LLM robustness through targeted parameter updates. CREME first identifies robustness-sensitive layers by comparing hidden states between an original prompt and its perturbed variant. Then, it performs lightweight parameter editing at the identified layer to reduce performance degradation. We evaluate CREME on two widely used code generation benchmarks (HumanEval and MBPP) along with their perturbed counterparts. Experimental results show that CREME improves Pass@1 accuracy by 63% on perturbed prompts while maintaining stable performance on clean inputs, with accuracy deviations within 1%. Further analysis reveals that robustness-sensitive layers are primarily concentrated in the middle and deeper layers of the network, and their locations vary across different model architectures. These insights provide a valuable foundation for developing future robustness-oriented editing strategies. 
+ oai:arXiv.org:2507.16407v3
 cs.SE
 Thu, 11 Dec 2025 00:00:00 -0500
 
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 10.1145/3744916.3773111
 Shuhan Liu, Xing Hu, Kerui Huang, Xiaohu Yang, David Lo, Xin Xia
 
 
 A Data-driven Typology of Vision Models from Integrated Representational Metrics
 https://arxiv.org/abs/2509.21628
 arXiv:2509.21628v2 Announce Type: replace
-Abstract: Large vision models differ widely in architecture and training paradigm, yet we lack principled methods to determine which aspects of their representations are shared across families and which reflect distinctive computational strategies. We leverage a suite of representational similarity metrics, each capturing a different facet (geometry, unit tuning, or linear decodability), and assess family separability using multiple complementary measures. Metrics preserving geometry or tuning (e.g., RSA, Soft Matching) yield strong family discrimination, whereas flexible mappings such as Linear Predictivity show weaker separation. These findings indicate that geometry and tuning carry family-specific signatures, while linearly decodable information is more broadly shared. To integrate these complementary facets, we adapt Similarity Network Fusion (SNF), a method inspired by multi-omics integration. SNF achieves substantially sharper family separation than any individual metric and produces robust composite signatures. Clustering of the fused similarity matrix recovers both expected and surprising patterns: supervised ResNets and ViTs form distinct clusters, yet all self-supervised models group together across architectural boundaries. Hybrid architectures (ConvNeXt, Swin) cluster with masked autoencoders, suggesting convergence between architectural modernization and reconstruction-based training. 
This biology-inspired framework provides a principled typology of vision models, showing that emergent computational strategies, shaped jointly by architecture and training objective, define representational structure beyond surface design categories.
 - oai:arXiv.org:2509.21628v2
 cs.CV
 cs.AI
 Wed, 10 Dec 2025 00:00:00 -0500
 
 replace
 http://creativecommons.org/licenses/by/4.0/
 Jialin Wu, Shreya Saha, Yiqing Bo, Meenakshi Khosla
 
 
 RoadBench: A Vision-Language Foundation Model and Benchmark for Road Damage Understanding
 https://arxiv.org/abs/2507.17353
 arXiv:2507.17353v3 Announce Type: replace
+Abstract: Accurate road damage detection is crucial for timely infrastructure maintenance and public safety, but existing vision-only datasets and models lack the rich contextual understanding that textual information can provide. To address this limitation, we introduce RoadBench, the first multimodal benchmark for comprehensive road damage understanding. This dataset pairs high-resolution images of road damage with detailed textual descriptions, providing a richer context for model training. We also present RoadCLIP, a novel vision-language model that builds upon CLIP by integrating domain-specific enhancements. It includes a disease-aware positional encoding that captures spatial patterns of road defects and a mechanism for injecting road-condition priors to refine the model's understanding of road damage. We further employ a GPT-driven data generation pipeline to expand the image-text pairs in RoadBench, greatly increasing data diversity without exhaustive manual annotation. Experiments demonstrate that RoadCLIP achieves state-of-the-art performance on road damage recognition tasks, significantly outperforming existing vision-only models by 19.2%. These results highlight the advantages of integrating visual and textual information for enhanced road condition analysis, setting new benchmarks for the field and paving the way for more effective infrastructure monitoring through multimodal learning. 
+ oai:arXiv.org:2507.17353v3 + cs.CE + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Jialin Wu, Shreya Saha, Yiqing Bo, Meenakshi Khosla + Xi Xiao, Yunbei Zhang, Janet Wang, Lin Zhao, Yuxiang Wei, Hengjia Li, Yanshu Li, Xinyuan Song, Xiao Wang, Swalpa Kumar Roy, Hao Xu, Tianyang Wang - Nonlinear Optimization with GPU-Accelerated Neural Network Constraints - https://arxiv.org/abs/2509.22462 - arXiv:2509.22462v2 Announce Type: replace -Abstract: We propose a reduced-space formulation for optimizing over trained neural networks where the network's outputs and derivatives are evaluated on a GPU. To do this, we treat the neural network as a "gray box" where intermediate variables and constraints are not exposed to the optimization solver. Compared to the full-space formulation, in which intermediate variables and constraints are exposed to the optimization solver, the reduced-space formulation leads to faster solves and fewer iterations in an interior point method. We demonstrate the benefits of this method on two optimization problems: Adversarial generation for a classifier trained on MNIST images and security-constrained optimal power flow with transient feasibility enforced using a neural network surrogate. - oai:arXiv.org:2509.22462v2 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + DELTAv2: Accelerating Dense 3D Tracking + https://arxiv.org/abs/2508.01170 + arXiv:2508.01170v2 Announce Type: replace +Abstract: We propose a novel algorithm for accelerating dense long-term 3D point tracking in videos. Through analysis of existing state-of-the-art methods, we identify two major computational bottlenecks. First, transformer-based iterative tracking becomes expensive when handling a large number of trajectories. To address this, we introduce a coarse-to-fine strategy that begins tracking with a small subset of points and progressively expands the set of tracked trajectories. 
The newly added trajectories are initialized using a learnable interpolation module, which is trained end-to-end alongside the tracking network. Second, we propose an optimization that significantly reduces the cost of correlation feature computation, another key bottleneck in prior methods. Together, these improvements lead to a 5-100x speedup over existing approaches while maintaining state-of-the-art tracking accuracy. + oai:arXiv.org:2508.01170v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Robert Parker, Oscar Dowson, Nicole LoGiudice, Manuel Garcia, Russell Bent + Tuan Duc Ngo, Ashkan Mirzaei, Guocheng Qian, Hanwen Liang, Chuang Gan, Evangelos Kalogerakis, Peter Wonka, Chaoyang Wang - Bridging the Knowledge-Prediction Gap in LLMs on Multiple-Choice Questions - https://arxiv.org/abs/2509.23782 - arXiv:2509.23782v2 Announce Type: replace -Abstract: Large Language Models (LLMs) often fail on multiple-choice questions (MCQs) despite demonstrating correct knowledge in other contexts, such as free-form generation. To investigate the mechanism underlying this knowledge-prediction gap on MCQs and alleviate it, we conduct a probing analysis and find that residual streams in certain layers contain a subspace spanned by two important bases: a \emph{knowledge basis} that encodes the probability of the ground-truth answer for a given MCQ and a \emph{prediction basis} that encodes the probability of the answer choice predicted by the model. We observe that incorrect predictions arise from a misalignment of the model's hidden states along these two bases. Hence, we introduce \textbf{KAPPA} (Knowledge-Aligned Prediction through Projection-based Adjustment), a parameter-free intervention that transforms the hidden states to align the prediction coordinate with the knowledge coordinate within this subspace. 
Experiments on binary-choice reformulations of Big-Bench-Hard and ARC-Challenge show that KAPPA substantially improves accuracy and consistently outperforms baselines. While optimal subspaces differ across tasks, subspaces generalize to some extent, as supported by cross-dataset experiments. Moreover, KAPPA extends its effectiveness to free-form questions beyond MCQs. Our work provides a new geometric understanding of the knowledge-prediction gap and offers a practical method for better aligning model behavior with its latent knowledge.
 - oai:arXiv.org:2509.23782v2
 cs.CL
 Wed, 10 Dec 2025 00:00:00 -0500
 
 replace
 http://creativecommons.org/licenses/by/4.0/
 Yoonah Park, Haesung Pyun, Yohan Jo
 
 
 AURORA: Augmented Understanding via Structured Reasoning and Reinforcement Learning for Reference Audio-Visual Segmentation
 + https://arxiv.org/abs/2508.02149
 + arXiv:2508.02149v2 Announce Type: replace
+Abstract: Reference Audio-Visual Segmentation (Ref-AVS) tasks challenge models to precisely locate sounding objects by integrating visual, auditory, and textual cues. Existing methods often lack genuine semantic understanding, tending to memorize fixed reasoning patterns. Furthermore, jointly training for reasoning and segmentation can compromise pixel-level precision. To address these issues, we introduce AURORA, a novel framework designed to enhance genuine reasoning and language comprehension in reference audio-visual segmentation. We employ a structured Chain-of-Thought (CoT) prompting mechanism to guide the model through a step-by-step reasoning process and introduce a novel segmentation feature distillation loss to effectively integrate these reasoning abilities without sacrificing segmentation performance. 
To further cultivate the model's genuine reasoning capabilities, we devise a two-stage training strategy: first, a "corrective reflective-style training" stage utilizes self-correction to enhance the quality of reasoning paths, followed by reinforcement learning via Group Relative Policy Optimization (GRPO) to bolster robustness in challenging scenarios. Experiments demonstrate that AURORA achieves state-of-the-art performance on Ref-AVS benchmarks and generalizes effectively to unreferenced segmentation.
- oai:arXiv.org:2509.23823v2 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Transient thermal analysis of a bi-layered composites with the dual-reciprocity inclusion-based boundary element method + https://arxiv.org/abs/2508.02683 + arXiv:2508.02683v2 Announce Type: replace +Abstract: This paper proposes a single-domain dual-reciprocity inclusion-based boundary element method (DR-iBEM) for a three-dimensional fully bonded bi-layered composite embedded with ellipsoidal inhomogeneities under transient/harmonic thermal loads. The heat equation is interpreted as a static one containing time- and frequency-dependent nonhomogeneous source terms, which is similar to eigen-fields but is transformed into a boundary integral by the dual-reciprocity method. Using the steady-state bimaterial Green's function, boundary integral equations are proposed to take into account continuity conditions of temperature and heat flux, which avoids setting up any continuity equations at the bimaterial interface. Eigen-temperature-gradients and eigen-heat-source are introduced to simulate the material mismatch in thermal conductivity and heat capacity, respectively. The DR-iBEM algorithm is particularly suitable for investigating the transient and harmonic thermal behaviors of bi-layered composites and is verified by the finite element method (FEM). Numerical comparison with the FEM demonstrates its robustness and accuracy. The method has been applied to a functionally graded material as a bimaterial with graded particle distributions, where particle size and gradation effects are evaluated. 
+ oai:arXiv.org:2508.02683v2 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tian Nian, Weijie Ke, Shaolong Zhu, Bingshan Hu + 10.1016/j.ijheatmasstransfer.2025.128116 + International Journal of Heat and Mass Transfer, 2026, volume 256, part 3, 128116 + Chunlin Wu, Liangliang Zhang, Tengxiang Wang, Huiming Yin - Dual Mechanisms of Value Expression: Intrinsic vs. Prompted Values in LLMs - https://arxiv.org/abs/2509.24319 - arXiv:2509.24319v2 Announce Type: replace -Abstract: Large language models (LLMs) can express different values in two distinct ways: (1) intrinsic expression, reflecting the model's inherent values learned during training, and (2) prompted expression, elicited by explicit prompts. Given their widespread use in value alignment and persona steering, it is paramount to clearly understand their underlying mechanisms, particularly whether they mostly overlap (as one might expect) or rely on substantially different mechanisms, but this remains largely understudied. We analyze this at the mechanistic level using two approaches: (1) value vectors, feature directions representing value mechanisms extracted from the residual stream, and (2) value neurons, MLP neurons that contribute to value expressions. We demonstrate that intrinsic and prompted value mechanisms partly share common components that are crucial for inducing value expression, but also possess unique elements that manifest in different ways. As a result, these mechanisms lead to different degrees of value steerability (prompted > intrinsic) and response diversity (intrinsic > prompted). In particular, components unique to the intrinsic mechanism seem to promote lexical diversity in responses, whereas those specific to the prompted mechanism primarily strengthen instruction following, taking effect even in distant tasks like jailbreaking. 
- oai:arXiv.org:2509.24319v2 + ShoppingBench: A Real-World Intent-Grounded Shopping Benchmark for LLM-based Agents + https://arxiv.org/abs/2508.04266 + arXiv:2508.04266v3 Announce Type: replace +Abstract: Existing benchmarks in e-commerce primarily focus on basic user intents, such as finding or purchasing products. However, real-world users often pursue more complex goals, such as applying vouchers, managing budgets, and finding multi-products seller. To bridge this gap, we propose ShoppingBench, a novel end-to-end shopping benchmark designed to encompass increasingly challenging levels of grounded intent. Specifically, we propose a scalable framework to simulate user instructions based on various intents derived from sampled real-world products. To facilitate consistent and reliable evaluations, we provide a large-scale shopping sandbox that serves as an interactive simulated environment, incorporating over 2.5 million real-world products. Experimental results demonstrate that even state-of-the-art language agents (such as GPT-4.1) achieve absolute success rates under 50% on our benchmark tasks, highlighting the significant challenges posed by our ShoppingBench. In addition, we propose a trajectory distillation strategy and leverage supervised fine-tuning, along with reinforcement learning on synthetic trajectories, to distill the capabilities of a large language agent into a smaller one. As a result, our trained agent achieves competitive performance compared to GPT-4.1. 
+ oai:arXiv.org:2508.04266v3 cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jongwook Han, Jongwon Lim, Injin Kong, Yohan Jo + Jiangyuan Wang, Kejun Xiao, Qi Sun, Huaipeng Zhao, Tao Luo, Jian Dong Zhang, Xiaoyi Zeng - Short window attention enables long-term memorization - https://arxiv.org/abs/2509.24552 - arXiv:2509.24552v2 Announce Type: replace -Abstract: Recent works show that hybrid architectures combining sliding window softmax attention layers with linear recurrent neural network (RNN) layers outperform both of these architectures taken separately. However, the impact of the window length and the interplay between softmax attention and linear RNN layers remain under-studied. In this work, we introduce SWAX, a hybrid architecture consisting of sliding-window attention and xLSTM linear RNN layers. - A counter-intuitive finding with SWAX is that larger sliding windows do not improve the long-context performance. In fact, short window attention encourages the model to better train the long-term memory of the xLSTM, by relying less on the softmax attention mechanism for long context-retrieval. - The issue with small sliding windows is that they are detrimental for short-context tasks, which could be solved with information from moderately larger sliding windows otherwise. Therefore, we train SWAX by stochastically changing the sliding window size, forcing the model to leverage both a longer context window and the xLSTM memory. SWAX trained with stochastic window sizes significantly outperforms regular window attention both on short and long-context problems. 
- oai:arXiv.org:2509.24552v2 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Riemann-Roch bases for arbitrary elliptic curve divisors and their application in cryptography + https://arxiv.org/abs/2508.04340 + arXiv:2508.04340v2 Announce Type: replace +Abstract: This paper presents explicit constructions of bases for Riemann-Roch spaces associated with arbitrary divisors on elliptic curves. In the context of algebraic geometry codes, the knowledge of an explicit basis for arbitrary divisors is especially valuable, as it enables efficient code construction. From a cryptographic point of view, codes associated with arbitrary divisors with many points are closer to Goppa codes, making them attractive for embedding in the McEliece cryptosystem. Using the results obtained in this work, it is also possible to efficiently construct quasi-cyclic subfield subcodes of elliptic codes. These codes enable a significant reduction in public key size for the McEliece cryptosystem and, consequently, represent promising candidates for integration into post-quantum code-based schemes. + oai:arXiv.org:2508.04340v2 + cs.IT + cs.CR + math.AG + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Lo\"ic Cabannes, Maximilian Beck, Gergely Szilvasy, Matthijs Douze, Maria Lomeli, Jade Copet, Pierre-Emmanuel Mazar\'e, Gabriel Synnaeve, Herv\'e J\'egou + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Artyom Kuninets, Ekaterina Malygina - MixtureVitae: Open Web-Scale Pretraining Dataset With High Quality Instruction and Reasoning Data Built from Permissive-First Text Sources - https://arxiv.org/abs/2509.25531 - arXiv:2509.25531v2 Announce Type: replace -Abstract: We present MixtureVitae, an open-access pretraining corpus built to minimize legal risk while providing strong model performance. 
MixtureVitae follows a risk-mitigated sourcing strategy that combines public-domain and permissively licensed text (e.g., CC-BY/Apache) with carefully justified low-risk additions (e.g., government works and EU TDM-eligible sources), alongside targeted instruction, reasoning and synthetic data with documented provenance. We detail a transparent, multi-stage pipeline for license-aware filtering, safety and quality screening, and domain-aware mixing, and we release the dataset and curation recipes to support reproducible research. In controlled experiments using the open-sci-ref training protocol (fixed architectures at 130M/400M/1.3B/1.7B parameters; training budgets of 50B and 300B tokens), models trained on MixtureVitae consistently outperform other permissive datasets across a suite of standard benchmarks, and at the 1.7B/300B setting they surpass FineWeb-Edu and approach DCLM in the later stages of training. Performance is particularly strong on math/code and competitive on QA tasks. These results demonstrate that permissive-first, risk-mitigated data provides a practical and legally mitigated foundation for training capable LLMs, reducing reliance on indiscriminate web scraping without sacrificing competitiveness. Code: https://github.com/ontocord/mixturevitae - oai:arXiv.org:2509.25531v2 - cs.CL + WGAST: Weakly-Supervised Generative Network for Daily 10 m Land Surface Temperature Estimation via Spatio-Temporal Fusion + https://arxiv.org/abs/2508.06485 + arXiv:2508.06485v2 Announce Type: replace +Abstract: Urbanization, climate change, and agricultural stress are increasing the demand for precise and timely environmental monitoring. Land Surface Temperature (LST) is a key variable in this context and is retrieved from remote sensing satellites. However, these systems face a trade-off between spatial and temporal resolution. While spatio-temporal fusion methods offer promising solutions, few have addressed the estimation of daily LST at 10 m resolution. 
In this study, we present WGAST, a weakly-supervised generative network for daily 10 m LST estimation via spatio-temporal fusion of Terra MODIS, Landsat 8, and Sentinel-2. WGAST is the first end-to-end deep learning framework designed for this task. It adopts a conditional generative adversarial architecture, with a generator composed of four stages: feature extraction, fusion, LST reconstruction, and noise suppression. The first stage employs a set of encoders to extract multi-level latent representations from the inputs, which are then fused in the second stage using cosine similarity, normalization, and temporal attention mechanisms. The third stage decodes the fused features into high-resolution LST, followed by a Gaussian filter to suppress high-frequency noise. Training follows a weakly supervised strategy based on physical averaging principles and reinforced by a PatchGAN discriminator. Experiments demonstrate that WGAST outperforms existing methods in both quantitative and qualitative evaluations. Compared to the best-performing baseline, on average, WGAST reduces RMSE by 17.05% and improves SSIM by 4.22%. Furthermore, WGAST effectively captures fine-scale thermal patterns, as validated against near-surface air temperature measurements from 33 near-ground sensors. The code is available at https://github.com/Sofianebouaziz1/WGAST.git. 
+ oai:arXiv.org:2508.06485v2 + cs.CV cs.AI cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Huu Nguyen (Sonny), Victor May (Sonny), Harsh Raj (Sonny), Marianna Nezhurina (Sonny), Yishan Wang (Sonny), Yanqi Luo (Sonny), Minh Chien Vu (Sonny), Taishi Nakamura (Sonny), Ken Tsui (Sonny), Van Khue Nguyen (Sonny), David Salinas (Sonny), Aleksandra Krasnod\k{e}bska (Sonny), Christoph Schuhmann (Sonny), Mats Leon Richter (Sonny), Xuan-Son (Sonny), Vu, Jenia Jitsev + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sofiane Bouaziz, Adel Hafiane, Raphael Canals, Rachid Nedjai - SOCK: A Benchmark for Measuring Self-Replication in Large Language Models - https://arxiv.org/abs/2509.25643 - arXiv:2509.25643v3 Announce Type: replace -Abstract: We introduce SOCK, a benchmark command line interface (CLI) that measures large language models' (LLMs) ability to self-replicate without human intervention. In this benchmark, self-replication is defined not only as an LLM's ability to create a functioning and running copy of itself, but also the ability for that self-replication to persist and occur across different computational contexts. Accordingly, we've developed a system to categorize LLMs based on broad self-replication capabilities in two general classes, Replication-Capability Levels (RCL) and Persistence-Capability Levels (PCL). Using a five-task suite based on practically manipulable modern CLI utilities and computer processes, experiments are orchestrated in a controlled environment with an LLM acting agentically. The performance of the LLM on agent tasks is then computed to produce an R-score (a quantitative evaluation of overall self-replication ability) and data used to categorize LLMs into specific RCL-PCL matrices. 
SOCK offers two primary contributions: (1) it provides, to our knowledge, the first formalized definitions and benchmark suite for evaluating LLM self-replication, with the goal of establishing a standard for future research; (2) it allows the industry to track the effectiveness of future multi-agent systems and mitigate potential self-replication threat vectors within them. The results compiled from evaluating a variety of open-weight and proprietary frontier models reveal significant obstacles to persistent self-replication and multi-agent systems, including context retention and multi-agent decision-making. We propose future research directions to safely reduce the severity of these obstacles, potentially lowering the future risk posed by more functional multi-agent systems.
We present theoretical guarantees for PROPS as well as comprehensive validation using multiple models (Pythia and GPT) and datasets (AlpacaEval, Anthropic HH-RLHF, truthy-dpo-v0.1) to demonstrate the utility of PROPS over existing methods while still providing high privacy. For the same privacy budget, alignment via PROPS can achieve up to 3x higher win-rates compared to DP-SGD, and 2.5x higher win-rates compared to Randomized Response (RR) based alignment. + oai:arXiv.org:2508.06783v2 + cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CR + cs.IT + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Justin Chavarria, Rohan Raizada, Justin White, Eyad Alhetairshi + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Transactions on ML Research (TMLR) 2025 + Noel Teku, Fengwei Tian, Payel Bhattacharjee, Souradip Chakraborty, Amrit Singh Bedi, Ravi Tandon - Numerical approximations to invariant measures of hybrid stochastic differential equations with superlinear coefficients via the backward Euler-Maruyama method - https://arxiv.org/abs/2509.25799 - arXiv:2509.25799v2 Announce Type: replace -Abstract: For stochastic differential equations (SDEs) with Markovian switching, whose drift and diffusion coefficients are allowed to contain superlinear terms, the backward Euler-Maruyama (BEM) method is proposed to approximate the invariant measure. The existence and uniqueness of the invariant measure of the numerical solution generated by the BEM method is proved. Then the convergence of the numerical invariant measure to its underlying counterpart is shown. Those results obtained in this work release the requirement of the global Lipschitz condition on the diffusion coefficient in [X. Li et al. SIAM J. Numer. Anal. 56(3)(2018), pp. 1435-1455] and can also be regarded as a non-trivial extension of [W. Liu et al. Appl. Numer. Math. 184(2023), pp. 137-150] to the case of hybrid SDEs. 
- oai:arXiv.org:2509.25799v2 - math.NA - cs.NA - math.PR - Wed, 10 Dec 2025 00:00:00 -0500 + WeatherDiffusion: Controllable Weather Editing in Intrinsic Space + https://arxiv.org/abs/2508.06982 + arXiv:2508.06982v4 Announce Type: replace +Abstract: We present WeatherDiffusion, a diffusion-based framework for controllable weather editing in intrinsic space. Our framework includes two components based on diffusion priors: an inverse renderer that estimates material properties, scene geometry, and lighting as intrinsic maps from an input image, and a forward renderer that utilizes these geometry and material maps along with a text prompt that describes specific weather conditions to generate a final image. The intrinsic maps enhance controllability compared to traditional pixel-space editing approaches. We propose an intrinsic map-aware attention mechanism that improves spatial correspondence and decomposition quality in large outdoor scenes. For forward rendering, we leverage CLIP-space interpolation of weather prompts to achieve fine-grained weather control. We also introduce a synthetic and a real-world dataset, containing 38k and 18k images under various weather conditions, each with intrinsic map annotations. WeatherDiffusion outperforms state-of-the-art pixel-space editing approaches, weather restoration methods, and rendering-based methods, showing promise for downstream tasks such as autonomous driving, enhancing the robustness of detection and segmentation in challenging weather scenarios. 
+ oai:arXiv.org:2508.06982v4 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wei Liu, Jie Xu + Yixin Zhu, Zuoliang Zhu, Jian Yang, Milo\v{s} Ha\v{s}an, Jin Xie, Beibei Wang - Spec-Gloss Surfels and Normal-Diffuse Priors for Relightable Glossy Objects - https://arxiv.org/abs/2510.02069 - arXiv:2510.02069v2 Announce Type: replace -Abstract: Accurate reconstruction and relighting of glossy objects remains a longstanding challenge, as object shape, material properties, and illumination are inherently difficult to disentangle. Existing neural rendering approaches often rely on simplified BRDF models or parameterizations that couple diffuse and specular components, which restrict faithful material recovery and limit relighting fidelity. We propose a relightable framework that integrates a microfacet BRDF with the specular-glossiness parameterization into 2D Gaussian Splatting with deferred shading. This formulation enables more physically consistent material decomposition, while diffusion-based priors for surface normals and diffuse color guide early-stage optimization and mitigate ambiguity. A coarse-to-fine environment map optimization accelerates convergence, and negative-only environment map clipping preserves high-dynamic-range specular reflections. Extensive experiments on complex, glossy scenes demonstrate that our method achieves high-quality geometry and material reconstruction, delivering substantially more realistic and consistent relighting under novel illumination compared to existing Gaussian splatting methods. - oai:arXiv.org:2510.02069v2 - cs.GR + AugLift: Uncertainty Aware Depth Descriptors for Robust 2D to 3D Pose Lifting + https://arxiv.org/abs/2508.07112 + arXiv:2508.07112v3 Announce Type: replace +Abstract: Lifting based 3D human pose estimators infer 3D joints from 2D keypoints, but often struggle to generalize to real world settings with noisy 2D detections. 
We revisit the input to lifting and propose AugLift, a simple augmentation of standard lifting that enriches each 2D keypoint (x, y) with an Uncertainty-Aware Depth Descriptor (UADD). We run a single off-the-shelf monocular depth estimator to obtain a depth map, and for every keypoint with detector confidence c we extract depth statistics from its confidence-scaled neighborhood, forming a compact, interpretable UADD (c, d, d_min, d_max) that captures both local geometry and reliability. AugLift is modular, requires no new sensors or architectural changes, and integrates by expanding the input layer of existing lifting models.
+ oai:arXiv.org:2508.07112v3 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Georgios Kouros, Minye Wu, Tinne Tuytelaars + Nikolai Warner, Wenjin Zhang, Hamid Badiozamani, Irfan Essa, Apaar Sadhwani - LLMSQL: Upgrading WikiSQL for the LLM Era of Text-to-SQL - https://arxiv.org/abs/2510.02350 - arXiv:2510.02350v2 Announce Type: replace -Abstract: Converting natural language questions into SQL queries enables non-expert users to interact with relational databases and has long been a central task for natural language interfaces to data. While the WikiSQL dataset played a key role in early text-to-SQL research, its usage has declined due to structural and annotation issues, including case sensitivity inconsistencies, data type mismatches, syntax errors, and unanswered questions. We present LLMSQL, a systematic revision and transformation of WikiSQL designed for the large language model era. We classify these errors and implement automated methods for cleaning and re-annotation. To assess the impact of these improvements, we evaluated multiple large language models, including Gemma 3, LLaMA 3.2, Mistral 7B, gpt-oss 20B, Phi-3.5 Mini, Qwen 2.5, OpenAI o4-mini, DeepSeek-R1, and others. Notably, DeepSeek-R1 achieves 88.40% accuracy in a zero-shot setting, and models under 10B parameters surpass 90% accuracy after fine-tuning. Rather than serving as an update, LLMSQL is introduced as an LLM-ready benchmark. Unlike the original WikiSQL, which was tailored for pointer-network models selecting tokens from input, LLMSQL provides clean natural language questions and full SQL queries as plain text, enabling straightforward generation and evaluation for modern natural-language-to-SQL models. 
- oai:arXiv.org:2510.02350v2 - cs.CL - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + OpenConstruction: A Systematic Synthesis of Open Visual Datasets for Data-Centric Artificial Intelligence in Construction Monitoring + https://arxiv.org/abs/2508.11482 + arXiv:2508.11482v2 Announce Type: replace +Abstract: The construction industry increasingly relies on visual data to support Artificial Intelligence (AI) and Machine Learning (ML) applications for site monitoring. High-quality, domain-specific datasets, comprising images, videos, and point clouds, capture site geometry and spatiotemporal dynamics, including the location and interaction of objects, workers, and materials. However, despite growing interest in leveraging visual datasets, existing resources vary widely in sizes, data modalities, annotation quality, and representativeness of real-world construction conditions. A systematic review to categorize their data characteristics and application contexts is still lacking, limiting the community's ability to fully understand the dataset landscape, identify critical gaps, and guide future directions toward more effective, reliable, and scalable AI applications in construction. To address this gap, this study conducts an extensive search of academic databases and open-data platforms, yielding 51 publicly available visual datasets that span the 2005-2024 period. These datasets are categorized using a structured data schema covering (i) data fundamentals (e.g., size and license), (ii) data modalities (e.g., RGB and point cloud), (iii) annotation frameworks (e.g., bounding boxes), and (iv) downstream application domains (e.g., progress tracking). This study synthesizes these findings into an open-source catalog, OpenConstruction, supporting data-driven method development. 
Furthermore, the study discusses several critical limitations in the existing construction dataset landscape and presents a roadmap for future data infrastructure anchored in the Findability, Accessibility, Interoperability, and Reusability (FAIR) principles. By reviewing the current landscape and outlining strategic priorities, this study supports the advancement of data-centric solutions in the construction sector. + oai:arXiv.org:2508.11482v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Dzmitry Pihulski, Karol Charchut, Viktoria Novogrodskaia, Jan Koco\'n + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ruoxin Xiong, Yanyu Wang, Jiannan Cai, Kaijian Liu, Yuansheng Zhu, Pingbo Tang, Nora El-Gohary - UniPruning: Unifying Local Metric and Global Feedback for Scalable Sparse LLMs - https://arxiv.org/abs/2510.03291 - arXiv:2510.03291v2 Announce Type: replace -Abstract: Large Language Models (LLMs) achieve strong performance across diverse tasks but face prohibitive computational and memory costs. Pruning offers a promising path by inducing sparsity while preserving architectural flexibility. However, existing methods struggle to balance efficiency and robustness: local metric approaches prune layer by layer but often collapse under high sparsity, whereas global feedback methods enforce consistency at the cost of expensive weight updates or restrictive semi-structured formats. We present UniPruning, a unified post-training pruning framework that combines the speed of local saliency metrics with the stability of global coordination, enabled by a mirror descent based optimization, all without updating model weights. UniPruning leverages fast layer-wise scoring and a lightweight global controller to allocate a single sparsity budget, supporting both unstructured and semi-structured N :M pruning within one framework. 
After a brief calibration, it can generate pruning masks for arbitrary sparsity levels in one shot, and adapts seamlessly to hardware-aware constraints. Extensive experiments on multiple pretrained LLM families and standard benchmarks show that UniPruning consistently delivers competitive or superior perplexity and zero-shot accuracy. Ablation studies further highlight the importance of mirror descent and local saliency anchoring. Overall, UniPruning provides an efficient, principled, and scalable solution for sparsifying large-scale LLMs. Our code is available at: https://github.com/RainbowQTT/UniPruning. - oai:arXiv.org:2510.03291v2 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Matrix-game 2.0: An open-source real-time and streaming interactive world model + https://arxiv.org/abs/2508.13009 + arXiv:2508.13009v3 Announce Type: replace +Abstract: Recent advances in interactive video generations have demonstrated diffusion model's potential as world models by capturing complex physical dynamics and interactive behaviors. However, existing interactive world models depend on bidirectional attention and lengthy inference steps, severely limiting real-time performance. Consequently, they are hard to simulate real-world dynamics, where outcomes must update instantaneously based on historical context and current actions. To address this, we present Matrix-Game 2.0, an interactive world model generates long videos on-the-fly via few-step auto-regressive diffusion. Our framework consists of three key components: (1) A scalable data production pipeline for Unreal Engine and GTA5 environments to effectively produce massive amounts (about 1200 hours) of video data with diverse interaction annotations; (2) An action injection module that enables frame-level mouse and keyboard inputs as interactive conditions; (3) A few-step distillation based on the casual architecture for real-time and streaming video generation. 
Matrix Game 2.0 can generate high-quality minute-level videos across diverse scenes at an ultra-fast speed of 25 FPS. We open-source our model weights and codebase to advance research in interactive world modeling. + oai:arXiv.org:2508.13009v3 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Yizhuo Ding, Wanying Qu, Jiawei Geng, Wenqi Shao, Yanwei Fu + Xianglong He, Chunli Peng, Zexiang Liu, Boyang Wang, Yifan Zhang, Qi Cui, Fei Kang, Biao Jiang, Mengyin An, Yangyang Ren, Baixin Xu, Hao-Xiang Guo, Kaixiong Gong, Size Wu, Wei Li, Xuchen Song, Yang Liu, Yangguang Li, Yahui Zhou - From Patchwork to Network: A Comprehensive Framework for Demand Analysis and Fleet Optimization of Urban Air Mobility - https://arxiv.org/abs/2510.04186 - arXiv:2510.04186v3 Announce Type: replace -Abstract: Urban Air Mobility (UAM) presents a transformative vision for metropolitan transportation, but its practical implementation is hindered by substantial infrastructure costs and operational complexities. We address these challenges by modeling a UAM network that leverages existing regional airports and operates with an optimized, heterogeneous fleet of aircraft. We introduce LPSim, a Large-Scale Parallel Simulation framework that utilizes multi-GPU computing to co-optimize UAM demand, fleet operations, and ground transportation interactions simultaneously. - Our equilibrium search algorithm is extended to accurately forecast demand and determine the most efficient fleet composition. Applied to a case study of the San Francisco Bay Area, our results demonstrate that this UAM model can yield over 20 minutes' travel time savings for 230,000 selected trips. However, the analysis also reveals that system-wide success is critically dependent on seamless integration with ground access and dynamic scheduling. 
oai:arXiv.org:2510.04186v3
      cs.DC
      Wed, 10 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Xuan Jiang, Xuanyu Zhou, Yibo Zhao, Shangqing Cao, Dingyi Zhuang, Jinhua Zhao, Haris Koutsopoulos, Shenhao Wang, Mark Hansen, Raja Sengupta
    
    
      Vevo2: A Unified and Controllable Framework for Speech and Singing Voice Generation
      https://arxiv.org/abs/2508.16332
      arXiv:2508.16332v2 Announce Type: replace 
Abstract: Controllable human voice generation, particularly for expressive domains like singing, remains a significant challenge. This paper introduces Vevo2, a unified framework for controllable speech and singing voice generation. To tackle issues like the scarcity of annotated singing data and to enable flexible controllability, Vevo2 introduces two audio tokenizers: (1) a unified music-notation-free prosody tokenizer that captures prosody and melody from speech, singing, and even instrumental sounds, and (2) a unified content-style tokenizer that encodes linguistic content, prosody, and style for both speech and singing, while enabling timbre disentanglement. Vevo2 consists of an auto-regressive (AR) content-style modeling stage, which aims to enable controllability over text, prosody, and style, as well as a flow-matching acoustic modeling stage that allows for timbre control. Particularly, during the speech-singing joint training of the AR model, we propose both explicit and implicit prosody learning strategies to bridge speech and singing voice. Moreover, to further enhance Vevo2's ability to follow text and prosody, we design a multi-objective post-training task that integrates both intelligibility and prosody similarity alignment. Experimental results show that the unified modeling in Vevo2 brings mutual benefits to both speech and singing voice generation. Additionally, Vevo2's effectiveness across a wide range of synthesis, conversion, and editing tasks for both speech and singing further demonstrates its strong generalization ability and versatility. Audio samples are available at https://versasinger.github.io/.
+ oai:arXiv.org:2508.16332v2 + cs.SD + cs.AI + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Xuan Jiang, Xuanyu Zhou, Yibo Zhao, Shangqing Cao, Dingyi Zhuang, Jinhua Zhao, Haris Koutsopoulos, Shenhao Wang, Mark Hansen, Raja Sengupta + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Xueyao Zhang, Junan Zhang, Yuancheng Wang, Chaoren Wang, Yuanzhe Chen, Dongya Jia, Zhuo Chen, Zhizheng Wu - NERVIS: An Interactive System for Graph-Based Exploration and Editing of Named Entities - https://arxiv.org/abs/2510.04971 - arXiv:2510.04971v2 Announce Type: replace -Abstract: We present an interactive visualization system for exploring named entities and their relationships across document collections. The system is designed around a graph-based representation that integrates three types of nodes: documents, entity mentions, and entities. Connections capture two key relationship types: (i) identical entities across contexts, and (ii) co-locations of mentions within documents. Multiple coordinated views enable users to examine entity occurrences, discover clusters of related mentions, and explore higher-level entity group relationships. To support flexible and iterative exploration, the interface offers fuzzy views with approximate connections, as well as tools for interactively editing the graph by adding or removing links, entities, and mentions, as well as editing entity terms. Additional interaction features include filtering, mini-map navigation, and export options to JSON or image formats for downstream analysis and reporting. This approach contributes to human-centered exploration of entity-rich text data by combining graph visualization, interactive refinement, and adaptable perspectives on relationships. 
- oai:arXiv.org:2510.04971v2 - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Steiner Traveling Salesman Problem with Time Windows and Pickup-Delivery: integrating classical and quantum optimization + https://arxiv.org/abs/2508.17896 + arXiv:2508.17896v2 Announce Type: replace +Abstract: We propose the Steiner Traveling Salesman Problem with Time Windows and Pickup and Delivery, an advanced and practical extension of classical routing models. This variant integrates the characteristics of the Steiner Traveling Salesman Problem with time-window constraints, pickup and delivery operations and vehicle capacity limitations. These features closely mirror the complexities of contemporary logistics challenges, including last-mile distribution, reverse logistics and on-demand service scenarios. To tackle the inherent computational difficulties of this NP-hard problem, we propose two specialized mathematical formulations: an arc-based model and a node-oriented model, each designed to capture distinct structural aspects of the problem. We further introduce a preprocessing reduction method that eliminates redundant arcs, significantly enhancing computational performance and scalability. Both formulations are implemented using classical and quantum optimization approaches. In particular, the classical models are solved with Gurobi, whereas the quantum implementation is carried out on D-Wave's LeapCQMHybrid platform, a hybrid quantum-classical environment that integrates quantum annealing with classical optimization techniques for constrained problem solving. Numerical experiments are conducted to validate the proposed formulations and the preprocessing reduction method. The analyses performed assess the structural properties of the two models, their computational behavior, and the impact of preprocessing on problem size and solution efficiency. 
oai:arXiv.org:2508.17896v2
      cs.ET
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Alessia Ciacco, Francesca Guerriero, Eneko Osaba
    
    
      Can AI Truly Represent Your Voice in Deliberations? A Comprehensive Study of Large-Scale Opinion Aggregation with LLMs
      https://arxiv.org/abs/2510.05154
      arXiv:2510.05154v3 Announce Type: replace 
Abstract: Large-scale public deliberations generate thousands of free-form contributions that must be synthesized into representative and neutral summaries for policy use. While LLMs have been shown to be a promising tool to generate summaries for large-scale deliberations, they also risk underrepresenting minority perspectives and exhibiting bias with respect to the input order, raising fairness concerns in high-stakes contexts. Studying and fixing these issues requires a comprehensive evaluation at a large scale, yet current practice often relies on LLMs as judges, which show weak alignment with human judgments. To address this, we present DeliberationBank, a large-scale human-grounded dataset with (1) opinion data spanning ten deliberation questions created by 3,000 participants and (2) summary judgment data annotated by 4,500 participants across four dimensions (representativeness, informativeness, neutrality, policy approval). Using these datasets, we train DeliberationJudge, a fine-tuned DeBERTa model that can rate deliberation summaries from individual perspectives. DeliberationJudge is more efficient and more aligned with human judgments compared to a wide range of LLM judges. With DeliberationJudge, we evaluate 18 LLMs and reveal persistent weaknesses in deliberation summarization, especially underrepresentation of minority positions. Our framework provides a scalable and reliable way to evaluate deliberation summarization, helping ensure AI systems are more representative and equitable for policymaking.
- oai:arXiv.org:2510.05154v3 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Grounding the Ungrounded: A Spectral-Graph Framework for Quantifying Hallucinations in Multimodal LLMs + https://arxiv.org/abs/2508.19366 + arXiv:2508.19366v4 Announce Type: replace +Abstract: Hallucinations in LLMs--especially in multimodal settings--undermine reliability. We present a rigorous information-geometric framework, grounded in diffusion dynamics, to quantify hallucinations in MLLMs where model outputs are embedded via spectral decompositions of multimodal graph Laplacians, and their gaps to a truth manifold define a semantic distortion metric. We derive Courant-Fischer bounds on a temperature-dependent hallucination profile and use RKHS eigenmodes to obtain modality-aware, interpretable measures that track evolution over prompts and time. This reframes hallucination as quantifiable and bounded, providing a principled basis for evaluation and mitigation. + oai:arXiv.org:2508.19366v4 + cs.LG + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Shenzhe Zhu, Shu Yang, Michiel A. Bakker, Alex Pentland, Jiaxin Pei + Supratik Sarkar, Swagatam Das - Provably Mitigating Corruption, Overoptimization, and Verbosity Simultaneously in Offline and Online RLHF/DPO Alignment - https://arxiv.org/abs/2510.05526 - arXiv:2510.05526v2 Announce Type: replace -Abstract: Reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO) are important techniques to align large language models (LLM) with human preference. However, the quality of RLHF and DPO training is seriously compromised by \textit{\textbf{C}orrupted} preference, reward \textit{\textbf{O}veroptimization}, and bias towards \textit{\textbf{V}erbosity}. To our knowledge, most existing works tackle only one of these important issues, and the few other works require much computation to estimate multiple reward models and lack theoretical guarantee of generalization ability. 
In this work, we propose RLHF-\textbf{COV} and DPO-\textbf{COV} algorithms that can simultaneously mitigate these three issues, in both offline and online settings. This ability is theoretically demonstrated by obtaining length-regularized generalization error rates for our DPO-COV algorithms trained on corrupted data, which match the best-known rates for simpler cases with clean data and without length regularization. Moreover, our DPO-COV algorithm is simple to implement without reward estimation, and is proved to be equivalent to our RLHF-COV algorithm, which directly implies the equivalence between the vanilla RLHF and DPO algorithms. Experiments demonstrate the effectiveness of our DPO-COV algorithms under both offline and online settings. - oai:arXiv.org:2510.05526v2 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + A Continuous Energy Ising Machine Leveraging Difference-of-Convex Programming + https://arxiv.org/abs/2509.01928 + arXiv:2509.01928v2 Announce Type: replace +Abstract: Many combinatorial optimization problems can be reformulated as finding the ground state of the Ising model. Existing Ising solvers are mostly inspired by simulated annealing. Although annealing techniques offer scalability, they lack convergence guarantees and are sensitive to the cooling schedule. We propose solving the Ising problem by relaxing the binary spins to continuous variables and introducing an attraction potential that steers the solution toward binary spin configurations. A key property of this potential is that its combination with the Ising energy produces a Hamiltonian that can be written as a difference of convex polynomials. This enables us to design efficient iterative algorithms that require a single matrix-vector multiplication per iteration and provide convergence guarantees. 
We implement our Ising solver on a wide range of GPU platforms, from edge devices to high-performance computing clusters, and demonstrate that it consistently outperforms existing solvers across problem sizes ranging from small ($10^3$ spins) to ultra-large ($10^8$ spins). + oai:arXiv.org:2509.01928v2 + cs.DC + math-ph + math.MP + math.OC + quant-ph + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ziyi Chen, Junyi Li, Peiran Yu, Heng Huang + Debraj Banerjee, Santanu Mahapatra, Kunal Narayan Chaudhury - Permutation-Invariant Representation Learning for Robust and Privacy-Preserving Feature Selection - https://arxiv.org/abs/2510.05535 - arXiv:2510.05535v2 Announce Type: replace -Abstract: Feature selection eliminates redundancy among features to improve downstream task performance while reducing computational overhead. Existing methods often struggle to capture intricate feature interactions and adapt across diverse application scenarios. Recent advances employ generative intelligence to alleviate these drawbacks. However, these methods remain constrained by permutation sensitivity in embedding and reliance on convexity assumptions in gradient-based search. To address these limitations, our initial work introduces a novel framework that integrates permutation-invariant embedding with policy-guided search. Although effective, it still left opportunities to adapt to realistic distributed scenarios. In practice, data across local clients is highly imbalanced, heterogeneous and constrained by strict privacy regulations, limiting direct sharing. These challenges highlight the need for a framework that can integrate feature selection knowledge across clients without exposing sensitive information. In this extended journal version, we advance the framework from two perspectives: 1) developing a privacy-preserving knowledge fusion strategy to derive a unified representation space without sharing sensitive raw data. 
2) incorporating a sample-aware weighting strategy to address distributional imbalance among heterogeneous local clients. Extensive experiments validate the effectiveness, robustness, and efficiency of our framework. The results further demonstrate its strong generalization ability in federated learning scenarios. The code and data are publicly available: https://anonymous.4open.science/r/FedCAPS-08BF.
      oai:arXiv.org:2510.05535v2
      cs.LG
      cs.AI
      Wed, 10 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Rui Liu, Tao Zhe, Yanjie Fu, Feng Xia, Ted Senator, Dongjie Wang
    
    
      Performance analysis of common browser extensions for cryptojacking detection
      https://arxiv.org/abs/2509.02083
      arXiv:2509.02083v2 Announce Type: replace 
Abstract: This paper considers five extensions for Chromium-based browsers in order to determine how effective browser-based defenses against cryptojacking available to regular users can be. We've examined the most popular extensions - MinerBlock, AdGuard AdBlocker, Easy Redirect && Prevent Cryptojacking, CoinEater and Miners Shield, which claim to be designed specifically to identify and stop illegal cryptocurrency mining. An empirically confirmed dataset of 373 distinct cryptojacking-infected websites, assembled during a multi-stage procedure, was used to test those extensions. The results showed that all plugins in question had significant performance limits. Easy Redirect and Miners Shield only blocked 6 and 5 websites respectively, while MinerBlock had the greatest detection rate at only 27% (101/373 sites blocked). Most concerningly, despite promises of cryptojacking prevention, AdGuard (which has over 13 million users) and CoinEater were unable to identify any of the compromised websites. These results demonstrate serious flaws in cryptojacking detection products targeted at regular users, since even the best-performing specimen failed to detect 73% of attacks.
The obvious difference between advertised capabilities and real performance highlights the urgent need for either accessibility improvements for laboratory-grade detection technologies that show 90%+ efficiency in controlled environment or fundamental upgrades to current commonly used extensions. + oai:arXiv.org:2509.02083v2 + cs.CR + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Rui Liu, Tao Zhe, Yanjie Fu, Feng Xia, Ted Senator, Dongjie Wang + Dmitry Tanana - NewtonBench: Benchmarking Generalizable Scientific Law Discovery in LLM Agents - https://arxiv.org/abs/2510.07172 - arXiv:2510.07172v2 Announce Type: replace -Abstract: Large language models are emerging as powerful tools for scientific law discovery, a foundational challenge in AI-driven science. However, existing benchmarks for this task suffer from a fundamental methodological trilemma, forcing a trade-off between scientific relevance, scalability, and resistance to memorization. Furthermore, they oversimplify discovery as static function fitting, failing to capture the authentic scientific process of uncovering embedded laws through the interactive exploration of complex model systems. To address these critical gaps, we introduce NewtonBench, a benchmark comprising 324 scientific law discovery tasks across 12 physics domains. Our design mitigates the evaluation trilemma by using counterfactual law shifts - systematic alterations of canonical laws - to generate a vast suite of problems that are scalable, scientifically relevant, and memorization-resistant. Moreover, we elevate the evaluation from static function fitting to interactive model discovery, requiring agents to experimentally probe simulated complex systems to uncover hidden principles. Our extensive experiment reveals a clear but fragile capability for discovery in frontier LLMs: this ability degrades precipitously with increasing system complexity and exhibits extreme sensitivity to observational noise. 
Notably, we uncover a paradoxical effect of tool assistance: providing a code interpreter can hinder more capable models by inducing a premature shift from exploration to exploitation, causing them to satisfice on suboptimal solutions. These results demonstrate that robust, generalizable discovery in complex, interactive environments remains the core challenge. By providing a scalable, robust, and scientifically authentic testbed, NewtonBench offers a crucial tool for measuring true progress and guiding the development of next-generation AI agents capable of genuine scientific discovery. - oai:arXiv.org:2510.07172v2 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + ModalSurv: Investigating opportunities and limitations of multimodal deep survival learning in prostate and bladder cancer + https://arxiv.org/abs/2509.05037 + arXiv:2509.05037v4 Announce Type: replace +Abstract: Accurate survival prediction is essential for personalised cancer treatment. We propose ModalSurv, a multimodal deep survival framework integrating clinical, MRI, histopathology, and RNA-sequencing data via modality-specific projections and cross-attention fusion. On the CHIMERA Grand Challenge datasets, ModalSurv achieved a C-index of 0.7402 (1st) for prostate and 0.5740 (5th) for bladder cancer. Notably, clinical features alone outperformed multimodal models on external tests, highlighting challenges of limited multimodal alignment and potential overfitting. Local validation showed multimodal gains but limited generalisation. ModalSurv provides a systematic evaluation of multimodal survival modelling, underscoring both its promise and current limitations for scalable, generalisable cancer prognosis. + oai:arXiv.org:2509.05037v4 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-sa/4.0/ - Tianshi Zheng, Kelvin Kiu-Wai Tam, Newt Hue-Nam K. 
Nguyen, Baixuan Xu, Zhaowei Wang, Jiayang Cheng, Hong Ting Tsang, Weiqi Wang, Jiaxin Bai, Tianqing Fang, Yangqiu Song, Ginny Y. Wong, Simon See + Noorul Wahab, Ethar Alzaid, Jiaqi Lv, Fayyaz Minhas, Adam Shephard, Shan E Ahmed Raza - Personalize Before Retrieve: LLM-based Personalized Query Expansion for User-Centric Retrieval - https://arxiv.org/abs/2510.08935 - arXiv:2510.08935v2 Announce Type: replace -Abstract: Retrieval-Augmented Generation (RAG) critically depends on effective query expansion to retrieve relevant information. However, existing expansion methods adopt uniform strategies that overlook user-specific semantics, ignoring individual expression styles, preferences, and historical context. In practice, identical queries in text can express vastly different intentions across users. This representational rigidity limits the ability of current RAG systems to generalize effectively in personalized settings. Specifically, we identify two core challenges for personalization: 1) user expression styles are inherently diverse, making it difficult for standard expansions to preserve personalized intent. 2) user corpora induce heterogeneous semantic structures-varying in topical focus and lexical organization-which hinders the effective anchoring of expanded queries within the user's corpora space. To address these challenges, we propose Personalize Before Retrieve (PBR), a framework that incorporates user-specific signals into query expansion prior to retrieval. PBR consists of two components: P-PRF, which generates stylistically aligned pseudo feedback using user history for simulating user expression style, and P-Anchor, which performs graph-based structure alignment over user corpora to capture its structure. Together, they produce personalized query representations tailored for retrieval. Experiments on two personalized benchmarks show that PBR consistently outperforms strong baselines, with up to 10% gains on PersonaBench across retrievers. 
Our findings demonstrate the value of modeling personalization before retrieval to close the semantic gap in user-adaptive RAG systems. - oai:arXiv.org:2510.08935v2 - cs.IR - Wed, 10 Dec 2025 00:00:00 -0500 + Hybrid A* Path Planning with Multi-Modal Motion Extension for Four-Wheel Steering Mobile Robots + https://arxiv.org/abs/2509.06115 + arXiv:2509.06115v2 Announce Type: replace +Abstract: Four-wheel independent steering (4WIS) systems provide mobile robots with a rich set of motion modes, such as Ackermann steering, lateral steering, and parallel movement, offering superior maneuverability in constrained environments. However, existing path planning methods generally assume a single kinematic model and thus fail to fully exploit the multi-modal capabilities of 4WIS platforms. To address this limitation, we propose an extended Hybrid A* framework that operates in a four-dimensional state space incorporating both spatial states and motion modes. Within this framework, we design multi-modal Reeds-Shepp curves tailored to the distinct kinematic constraints of each motion mode, develop an enhanced heuristic function that accounts for mode-switching costs, and introduce a terminal connection strategy with intelligent mode selection to ensure smooth transitions between different steering patterns. The proposed planner enables seamless integration of multiple motion modalities within a single path, significantly improving flexibility and adaptability in complex environments. Results demonstrate significantly improved planning performance for 4WIS robots in complex environments. 
+ oai:arXiv.org:2509.06115v2 + cs.RO + cs.SY + eess.SY + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yingyi Zhang, Pengyue Jia, Derong Xu, Yi Wen, Xianneng Li, Yichao Wang, Wenlin Zhang, Xiaopeng Li, Weinan Gan, Huifeng Guo, Yong Liu, Xiangyu Zhao + Runjiao Bao, Lin Zhang, Tianwei Niu, Haoyu Yuan, Shoukun Wang - Value-State Gated Attention for Mitigating Extreme-Token Phenomena in Transformers - https://arxiv.org/abs/2510.09017 - arXiv:2510.09017v2 Announce Type: replace -Abstract: Large models based on the Transformer architecture are susceptible to extreme-token phenomena, such as attention sinks and value-state drains. These issues, which degrade model performance, quantization fidelity, and interpretability, arise from a problematic mutual reinforcement mechanism where the model learns an inefficient 'no-op' behavior by focusing attention on tokens with near-zero value states. In this paper, we propose Value-State Gated Attention (VGA), a simple, dedicated, and stable architectural mechanism for performing 'no-op' attention efficiently by directly breaking this cycle. VGA introduces a learnable, data-dependent gate, computed directly from the value vectors (V), to modulate the output. Through a theoretical analysis of the underlying gradients, we show that gating the value-state with a function of itself is more effective at decoupling value and attention score updates than prior methods that gate on input embeddings. This creates a direct regulatory pathway that allows the model to suppress a token's contribution based on its emergent value representation. Our experiments demonstrate that VGA significantly mitigates the formation of attention sinks and stabilizes value-state norms, leading to improved performance, robust quantization fidelity, and enhanced model interpretability. 
- oai:arXiv.org:2510.09017v2 + Text-Trained LLMs Can Zero-Shot Extrapolate PDE Dynamics, Revealing a Three-Stage In-Context Learning Mechanism + https://arxiv.org/abs/2509.06322 + arXiv:2509.06322v2 Announce Type: replace +Abstract: Large language models (LLMs) have demonstrated emergent in-context learning (ICL) capabilities across a range of tasks, including zero-shot time-series forecasting. We show that text-trained foundation models can accurately extrapolate spatiotemporal dynamics from discretized partial differential equation (PDE) solutions without fine-tuning or natural language prompting. Predictive accuracy improves with longer temporal contexts but degrades at finer spatial discretizations. In multi-step rollouts, where the model recursively predicts future spatial states over multiple time steps, errors grow algebraically with the time horizon, reminiscent of global error accumulation in classical finite-difference solvers. We interpret these trends as in-context neural scaling laws, where prediction quality varies predictably with both context length and output length. To better understand how LLMs are able to internally process PDE solutions so as to accurately roll them out, we analyze token-level output distributions and uncover a consistent three-stage ICL progression: beginning with syntactic pattern imitation, transitioning through an exploratory high-entropy phase, and culminating in confident, numerically grounded predictions. + oai:arXiv.org:2509.06322v2 cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Rui Bu, Haofeng Zhong, Wenzheng Chen, Yangyan Li + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jiajun Bao, Nicolas Boull\'e, Toni J. B. Liu, Rapha\"el Sarfati, Christopher J. 
Earls - On the Relationship between Space-Time Accessibility and Leisure Activity Participation - https://arxiv.org/abs/2510.10307 - arXiv:2510.10307v2 Announce Type: replace -Abstract: Understanding how accessibility shapes participation in leisure activities is central to promoting inclusive and vibrant urban life. Conventional accessibility measures often focus on potential access from fixed home locations, overlooking the constraints and opportunities embedded in daily routines. In this study, we apply a space-time accessibility (STA) metric rooted in the capability approach, capturing feasible leisure opportunities between home and work given a certain time budget, individual transport modes, and urban infrastructure. Using high-resolution GPS data from 2,415 residents in the Paris region, we assess how STA influences total travel time and leisure participation, measured as the diversity of leisure locations visited. Our analysis shows that most individuals, especially active transport users, choose destinations aligned with their STA-defined opportunity sets, underscoring the metric's validity in capturing capability sets. Structural equation modeling reveals that STA directly fosters leisure diversity but also reduces travel time, which in turn is associated with lower diversity of visited leisure locations. These findings highlight the value of person-centered, capability-informed accessibility metrics for understanding inequalities in urban mobility and informing transport planning strategies that expand real freedoms to participate in social life across diverse population groups. 
- oai:arXiv.org:2510.10307v2 - cs.SI - cs.CE - Wed, 10 Dec 2025 00:00:00 -0500 + Imitative Membership Inference Attack + https://arxiv.org/abs/2509.06796 + arXiv:2509.06796v2 Announce Type: replace +Abstract: A Membership Inference Attack (MIA) assesses how much a target machine learning model reveals about its training data by determining whether specific query instances were part of the training set. State-of-the-art MIAs rely on training hundreds of shadow models that are independent of the target model, leading to significant computational overhead. In this paper, we introduce Imitative Membership Inference Attack (IMIA), which employs a novel imitative training technique to strategically construct a small number of target-informed imitative models that closely replicate the target model's behavior for inference. Extensive experimental results demonstrate that IMIA substantially outperforms existing MIAs in various attack settings while only requiring less than 5% of the computational cost of state-of-the-art approaches. + oai:arXiv.org:2509.06796v2 + cs.CR + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-sa/4.0/ - Yuan Liao, Rafael H. M. Pereira, Jorge Gil, Silvia De Sojo Caso, Laura Alessandretti + http://creativecommons.org/licenses/by/4.0/ + Yuntao Du, Yuetian Chen, Hanshen Xiao, Bruno Ribeiro, Ninghui Li - ADiP: Adaptive Precision Systolic Array for Matrix Multiplication Acceleration - https://arxiv.org/abs/2510.10623 - arXiv:2510.10623v2 Announce Type: replace -Abstract: Transformers are at the core of modern AI nowadays. They rely heavily on matrix multiplication and require efficient acceleration due to their substantial memory and computational requirements. Quantization plays a vital role in reducing memory usage, and can be exploited for computations by designing reconfigurable architectures that enhance matrix multiplication by dynamically adjusting the precision. 
This paper proposes ADiP, a novel adaptive-precision systolic array architecture designed for efficient matrix multiplication acceleration. The proposed architecture consists of NxN adaptive-precision processing elements (PEs) and shared accumulators. ADiP supports multiple computation modes, including symmetric single-matrix multiplication as well as asymmetric multi-matrix multiplication with a shared input matrix, thereby improving data-reuse and PE utilization. In addition, ADiP maximizes the computational density by adapting to different precisions, such as 8bitx8bit, 8bitx4bit, and 8bitx2bit. Analytical models are developed for the ADiP architecture, including latency and throughput for versatile architecture configurations. A comprehensive hardware design space exploration is demonstrated using 22nm commercial technology, achieving up to 4x higher computational throughput. Furthermore, ADiP is evaluated on different transformer workloads from GPT-2 Medium, BERT Large, and BitNet-1.58B models, delivering latency improvements of up to 53.6% and energy improvements of up to 24.4% for BitNet-1.58B MHA workloads. At a 64x64 size with 4096 PEs, ADiP achieves a peak throughput of 8.192 TOPS, 16.384 TOPS, and 32.768 TOPS for 8bitx8bit, 8bitx4bit, and 8bitx2bit operations, respectively.
      oai:arXiv.org:2510.10623v2
      cs.AR
      Wed, 10 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Ahmed J. Abdelmaksoud, Cristian Sestito, Shiwei Wang, Themis Prodromakis
    
    
      HeLoFusion: An Efficient and Scalable Encoder for Modeling Heterogeneous and Multi-Scale Interactions in Trajectory Prediction
      https://arxiv.org/abs/2509.11719
      arXiv:2509.11719v2 Announce Type: replace 
Abstract: Multi-agent trajectory prediction in autonomous driving requires a comprehensive understanding of complex social dynamics. Existing methods, however, often struggle to capture the full richness of these dynamics, particularly the co-existence of multi-scale interactions and the diverse behaviors of heterogeneous agents.
To address these challenges, this paper introduces HeLoFusion, an efficient and scalable encoder for modeling heterogeneous and multi-scale agent interactions. Instead of relying on global context, HeLoFusion constructs local, multi-scale graphs centered on each agent, allowing it to effectively model both direct pairwise dependencies and complex group-wise interactions (\textit{e.g.}, platooning vehicles or pedestrian crowds). Furthermore, HeLoFusion tackles the critical challenge of agent heterogeneity through an aggregation-decomposition message-passing scheme and type-specific feature networks, enabling it to learn nuanced, type-dependent interaction patterns. This locality-focused approach enables a principled representation of multi-level social context, yielding powerful and expressive agent embeddings. On the challenging Waymo Open Motion Dataset, HeLoFusion achieves state-of-the-art performance, setting new benchmarks for key metrics including Soft mAP and minADE. Our work demonstrates that a locality-grounded architecture, which explicitly models multi-scale and heterogeneous interactions, is a highly effective strategy for advancing motion forecasting. + oai:arXiv.org:2509.11719v2 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Ahmed J. Abdelmaksoud, Cristian Sestito, Shiwei Wang, Themis Prodromakis + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Bingqing Wei, Lianmin Chen, Zhongyu Xia, Yongtao Wang - ContextGen: Contextual Layout Anchoring for Identity-Consistent Multi-Instance Generation - https://arxiv.org/abs/2510.11000 - arXiv:2510.11000v2 Announce Type: replace -Abstract: Multi-instance image generation (MIG) remains a significant challenge for modern diffusion models due to key limitations in achieving precise control over object layout and preserving the identity of multiple distinct subjects. 
To address these limitations, we introduce ContextGen, a novel Diffusion Transformer framework for multi-instance generation that is guided by both layout and reference images. Our approach integrates two key technical contributions: a Contextual Layout Anchoring (CLA) mechanism that incorporates the composite layout image into the generation context to robustly anchor the objects in their desired positions, and Identity Consistency Attention (ICA), an innovative attention mechanism that leverages contextual reference images to ensure the identity consistency of multiple instances. Recognizing the lack of large-scale, hierarchically-structured datasets for this task, we introduce IMIG-100K, the first dataset with detailed layout and identity annotations. Extensive experiments demonstrate that ContextGen sets a new state-of-the-art, outperforming existing methods in control precision, identity fidelity, and overall visual quality. - oai:arXiv.org:2510.11000v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + SimpleFold: Folding Proteins is Simpler than You Think + https://arxiv.org/abs/2509.18480 + arXiv:2509.18480v4 Announce Type: replace +Abstract: Protein folding models have achieved groundbreaking results typically via a combination of integrating domain knowledge into the architectural blocks and training pipelines. Nonetheless, given the success of generative models across different but related problems, it is natural to question whether these architectural designs are a necessary condition to build performant models. In this paper, we introduce SimpleFold, the first flow-matching based protein folding model that solely uses general purpose transformer blocks. Protein folding models typically employ computationally expensive modules involving triangular updates, explicit pair representations or multiple training objectives curated for this specific domain. 
Instead, SimpleFold employs standard transformer blocks with adaptive layers and is trained via a generative flow-matching objective with an additional structural term. We scale SimpleFold to 3B parameters and train it on approximately 9M distilled protein structures together with experimental PDB data. On standard folding benchmarks, SimpleFold-3B achieves competitive performance compared to state-of-the-art baselines; in addition, SimpleFold demonstrates strong performance in ensemble prediction, which is typically difficult for models trained via deterministic reconstruction objectives. Due to its general-purpose architecture, SimpleFold shows efficiency in deployment and inference on consumer-level hardware. SimpleFold challenges the reliance on complex domain-specific architecture designs in protein folding, opening up an alternative design space for future progress. + oai:arXiv.org:2509.18480v4 + cs.LG + q-bio.QM + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ruihang Xu, Dewei Zhou, Fan Ma, Yi Yang + http://creativecommons.org/licenses/by/4.0/ + Yuyang Wang, Jiarui Lu, Navdeep Jaitly, Josh Susskind, Miguel Angel Bautista - COINS: SemantiC Ids Enhanced COLd Item RepresentatioN for Click-through Rate Prediction in E-commerce Search - https://arxiv.org/abs/2510.12604 - arXiv:2510.12604v2 Announce Type: replace -Abstract: With the rise of modern search and recommendation platforms, insufficient collaborative information of cold-start items exacerbates the Matthew effect of existing platform items, challenging platform diversity and becoming a longstanding issue. Existing methods align items' side content with collaborative information to transfer collaborative signals from high-popularity items to cold-start items. However, these methods fail to account for the asymmetry between collaboration and content, or for the fine-grained differences among items.
To address these issues, we propose SMILE, an item representation enhancement approach based on fused alignment of semantic IDs. Specifically, we use RQ-OPQ encoding to quantize item content and collaborative information, followed by a two-step alignment: RQ encoding transfers shared collaborative signals across items, while OPQ encoding learns differentiated information of items. Comprehensive offline experiments on large-scale industrial datasets demonstrate superiority of SMILE, and rigorous online A/B tests confirm statistically significant improvements: item CTR +1.66%, buyers +1.57%, and order volume +2.17%. - oai:arXiv.org:2510.12604v2 - cs.IR + A Unified Noise-Curvature View of Loss of Trainability + https://arxiv.org/abs/2509.19698 + arXiv:2509.19698v3 Announce Type: replace +Abstract: Loss of trainability refers to a phenomenon in continual learning where parameter updates no longer make progress on the optimization objective, so accuracy stalls or degrades as the learning problem changes over time. In this paper, we analyze loss of trainability through an optimization lens and find that the phenomenon is not reliably predicted by existing individual indicators such as Hessian rank, sharpness level, weight or gradient norms, gradient-to-parameter ratios, and unit-sign entropy. Motivated by our analysis, we introduce two complementary indicators: a batch-size-aware gradient-noise bound and a curvature volatility-controlled bound. We then combine these two indicators into a per-layer adaptive noise threshold on the effective step-size that anticipates trainability behavior. Using this insight, we propose a step-size scheduler that keeps each layer's effective parameter update below this bound, thereby avoiding loss of trainability. We demonstrate that our scheduler can improve the accuracy maintained by previously proposed approaches, such as concatenated ReLU (CReLU), Wasserstein regularizer, and L2 weight decay. 
Surprisingly, our scheduler produces adaptive step-size trajectories that, without tuning, mirror the manually engineered step-size decay schedules. + oai:arXiv.org:2509.19698v3 + cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Qihang Zhao, Zhongbo Sun, Xiaoyang Zheng, Xian Guo, Siyuan Wang, Zihan Liang, Mingcan Peng, Ben Chen, Chenyi Lei + http://creativecommons.org/licenses/by/4.0/ + Gunbir Singh Baveja, Alex Lewandowski, Mark Schmidt - You May Speak Freely: Improving the Fine-Grained Visual Recognition Capabilities of Multimodal Large Language Models with Answer Extraction - https://arxiv.org/abs/2510.14885 - arXiv:2510.14885v2 Announce Type: replace -Abstract: Despite the renewed interest in zero-shot visual classification due to the rise of Multimodal Large Language Models (MLLMs), the problem of evaluating free-form responses of auto-regressive models remains a persistent challenge. Most existing works focus on language-only tasks or don't consider Multiple Choice Questions (MCQs) beyond 5-way options, both of which are critical capabilities to solve tasks in Fine-Grained Visual Classification (FGVC) where choice counts are in the hundreds to thousands and the choices are highly related. Furthermore, in this highly multi-way MCQ setting it is not clear how to extend LLM choice extraction to retrieval-based problems, where computing probabilities over the choice set is computationally costly. In this work we investigate nlg2choice, a simple two-stage method which first asks the MLLM an open-ended question for the task with minimal constraints, then uses text-only constrained decoding to predict the most likely choice. In retrieval settings, we compute the probability of the constrained response taking that choice with an early stopping method to significantly improve throughput. 
Our results show improvement over a suite of seven fine-grained visual datasets when evaluating in terms of classification and retrieval, and show that this performance holds over the various ways that users of LLMs can implement tasks in natural language. - oai:arXiv.org:2510.14885v2 - cs.CV - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Benchmarking Web API Integration Code Generation + https://arxiv.org/abs/2509.20172 + arXiv:2509.20172v5 Announce Type: replace +Abstract: API integration is a cornerstone of our digital infrastructure, enabling software systems to connect and interact. However, as shown by many studies, writing or generating correct code to invoke APIs, particularly web APIs, is challenging. Although large language models (LLMs) have become popular in software development, their effectiveness in automating the generation of web API integration code remains unexplored. In order to address this, we present WAPIIBench, a dataset and evaluation pipeline designed to assess the ability of LLMs to generate web API invocation code. Our experiments with several open-source LLMs reveal that generating API invocations poses a significant challenge, resulting in hallucinated endpoints, incorrect argument usage, and other errors. None of the evaluated open-source models was able to solve more than 40% of the tasks. 
+ oai:arXiv.org:2509.20172v5 + cs.SE + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Logan Lawrence, Oindrila Saha, Megan Wei, Chen Sun, Subhransu Maji, Grant Van Horn + Daniel Maninger, Leon Chemnitz, Amir Molzam Sharifloo, Jannis Brugger, Mira Mezini - When Many-Shot Prompting Fails: An Empirical Study of LLM Code Translation - https://arxiv.org/abs/2510.16809 - arXiv:2510.16809v2 Announce Type: replace -Abstract: Large Language Models (LLMs) with vast context windows offer new avenues for in-context learning (ICL), where providing many examples ("many-shot" prompting) is often assumed to enhance performance. We investigate this assumption for the complex task of code translation. Through a large-scale empirical study of over 90,000 translations, we systematically evaluate the impact of scaling in-context examples from zero-shot to many-shot configurations of up to 625 examples, with prompts spanning from approximately 100,000 to 800,000 tokens. Our findings reveal a "many-shot paradox": while static similarity metrics may modestly improve with more examples, functional correctness consistently peaks with few-shot prompting (5-25 examples). Providing substantially more examples often degrades this crucial functional performance. This study highlights that for code translation, the quality of a few well-chosen examples outweighs sheer quantity, challenging the universal efficacy of "more is better" for ICL and underscoring the task-dependent nature of optimal prompting strategies. Our results have significant implications for effectively leveraging LLMs in software engineering. - oai:arXiv.org:2510.16809v2 - cs.SE + A Unified Formal Theory on the Logical Limits of Symbol Grounding + https://arxiv.org/abs/2509.20409 + arXiv:2509.20409v4 Announce Type: replace +Abstract: This paper synthesizes a series of formal proofs to construct a unified theory on the logical limits of the Symbol Grounding Problem. 
We distinguish between internal meaning (sense), which formal systems can possess via axioms, and external grounding (reference), which is a necessary condition for connecting symbols to the world. We demonstrate through a four-stage argument that meaningful grounding within a formal system must arise from a process that is external, dynamic, and non-fixed algorithmic. First, we show that for a purely symbolic system, the impossibility of grounding is a direct consequence of its definition. Second, we extend this limitation to systems with any finite, static set of pre-established meanings (Semantic Axioms). By formally modeling the computationalist hypothesis-which equates grounding with internal derivation-we prove via G\"odelian arguments that such systems cannot consistently and completely define a "groundability predicate" for all truths. Third, we demonstrate that the "grounding act" for emergent meanings cannot be inferred from internal rules but requires an axiomatic, meta-level update. Drawing on Turing's concept of Oracle Machines and Piccinini's analysis of the mathematical objection, we identify this update as physical transduction. Finally, we prove that this process cannot be simulated by a fixed judgment algorithm, validating the logical necessity of embodied interaction. + oai:arXiv.org:2509.20409v4 + cs.LO cs.AI - cs.CL - cs.PL - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Amirkia Rafiei Oskooei, Kaan Baturalp Cosdan, Husamettin Isiktas, Mehmet S. 
Aktas + Zhangchi Liu - On the Credibility of Deniable Communication in Court - https://arxiv.org/abs/2510.16873 - arXiv:2510.16873v2 Announce Type: replace -Abstract: Over time, cryptographically deniable systems have come to be associated in computer-science literature with the idea of "denying" evidence in court - specifically, with the ability to convincingly forge evidence in courtroom scenarios and an inability to authenticate evidence in such contexts. Evidentiary processes in courts, however, have been developed over centuries to account for the reality that evidence has always been forgeable, and relies on factors outside of cryptographic models to seek the truth "as well as possible" while acknowledging that all evidence is imperfect. We argue that deniability does not and need not change this paradigm. - Our analysis highlights a gap between technical deniability notions and their application to the real world. There will always be factors outside a cryptographic model that influence perceptions of a message's authenticity, in realistic situations. We propose the broader concept of credibility to capture these factors. The credibility of a system is determined by (1) a threshold of quality that a forgery must pass to be "believable" as an original communication, which varies based on sociotechnical context and threat model, (2) the ease of creating a forgery that passes this threshold, which is also context- and threat-model-dependent, and (3) default system retention policy and retention settings. All three aspects are important for designing secure communication systems for real-world threat models, and some aspects of (2) and (3) may be incorporated directly into technical system design. 
We hope that our model of credibility will facilitate system design and deployment that addresses threats that are not and cannot be captured by purely technical definitions and existing cryptographic models, and support more nuanced discourse on the strengths and limitations of cryptographic guarantees within specific legal and sociotechnical contexts. - oai:arXiv.org:2510.16873v2 - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Seedream 4.0: Toward Next-generation Multimodal Image Generation + https://arxiv.org/abs/2509.20427 + arXiv:2509.20427v3 Announce Type: replace +Abstract: We introduce Seedream 4.0, an efficient and high-performance multimodal image generation system that unifies text-to-image (T2I) synthesis, image editing, and multi-image composition within a single framework. We develop a highly efficient diffusion transformer with a powerful VAE which also can reduce the number of image tokens considerably. This allows for efficient training of our model, and enables it to fast generate native high-resolution images (e.g., 1K-4K). Seedream 4.0 is pretrained on billions of text-image pairs spanning diverse taxonomies and knowledge-centric concepts. Comprehensive data collection across hundreds of vertical scenarios, coupled with optimized strategies, ensures stable and large-scale training, with strong generalization. By incorporating a carefully fine-tuned VLM model, we perform multi-modal post-training for training both T2I and image editing tasks jointly. For inference acceleration, we integrate adversarial distillation, distribution matching, and quantization, as well as speculative decoding. It achieves an inference time of up to 1.8 seconds for generating a 2K image (without a LLM/VLM as PE model). Comprehensive evaluations reveal that Seedream 4.0 can achieve state-of-the-art results on both T2I and multimodal image editing. 
In particular, it demonstrates exceptional multimodal capabilities in complex tasks, including precise image editing and in-context reasoning, and also allows for multi-image reference and can generate multiple output images. This extends traditional T2I systems into a more interactive and multidimensional creative tool, pushing the boundary of generative AI for both creativity and professional applications. We further scale our model and data as Seedream 4.5. Seedream 4.0 and Seedream 4.5 are accessible on Volcano Engine https://www.volcengine.com/experience/ark?launch=seedream. + oai:arXiv.org:2509.20427v3 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Jacob Leiken, Sunoo Park + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Team Seedream, :, Yunpeng Chen, Yu Gao, Lixue Gong, Meng Guo, Qiushan Guo, Zhiyao Guo, Xiaoxia Hou, Weilin Huang, Yixuan Huang, Xiaowen Jian, Huafeng Kuang, Zhichao Lai, Fanshi Li, Liang Li, Xiaochen Lian, Chao Liao, Liyang Liu, Wei Liu, Yanzuo Lu, Zhengxiong Luo, Tongtong Ou, Guang Shi, Yichun Shi, Shiqi Sun, Yu Tian, Zhi Tian, Peng Wang, Rui Wang, Xun Wang, Ye Wang, Guofeng Wu, Jie Wu, Wenxu Wu, Yonghui Wu, Xin Xia, Xuefeng Xiao, Shuang Xu, Xin Yan, Ceyuan Yang, Jianchao Yang, Zhonghua Zhai, Chenlin Zhang, Heng Zhang, Qi Zhang, Xinyu Zhang, Yuwei Zhang, Shijia Zhao, Wenliang Zhao, Wenjia Zhu - StreamingThinker: Large Language Models Can Think While Reading - https://arxiv.org/abs/2510.17238 - arXiv:2510.17238v2 Announce Type: replace -Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in chain of thought (CoT) reasoning. However, the current LLM reasoning paradigm initiates thinking only after the entire input is available, which introduces unnecessary latency and weakens attention to earlier information in dynamic scenarios.
Inspired by human cognition of thinking while reading, we first design a \textit{\textbf{streaming thinking}} paradigm for LLMs, where reasoning unfolds in the order of input and further adjusts its depth once reading is complete. We instantiate this paradigm with \textit{StreamingThinker}, a framework that enables LLMs to think while reading through the integration of streaming CoT generation, streaming-constraint training, and streaming parallel inference. Specifically, StreamingThinker employs streaming reasoning units with quality control for CoT generation, enforces order-preserving reasoning through streaming attention masks and position encoding, and leverages parallel KV caches that decouple input encoding from reasoning generation, thereby ensuring alignment and enabling true concurrency. We evaluate StreamingThinker on the Qwen3 model family across math reasoning, logical reasoning, and context-based QA reasoning tasks. Experimental results show that the StreamingThinker preserves performance comparable to batch thinking, while yielding an 80\% reduction in token waiting before the onset of reasoning and a more than 60\% reduction in time-level latency for producing the final answer, demonstrating the effectiveness of the streaming paradigm for LLM reasoning. Code will be released at https://github.com/EIT-NLP/StreamingLLM/tree/main/StreamingThinker. - oai:arXiv.org:2510.17238v2 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + New Synthetic Goldmine: Hand Joint Angle-Driven EMG Data Generation Framework for Micro-Gesture Recognition + https://arxiv.org/abs/2509.23359 + arXiv:2509.23359v4 Announce Type: replace +Abstract: Electromyography (EMG)-based gesture recognition has emerged as a promising approach for human-computer interaction. However, its performance is often limited by the scarcity of labeled EMG data, significant cross-user variability, and poor generalization to unseen gestures. 
To address these challenges, we propose SeqEMG-GAN, a conditional, sequence-driven generative framework that synthesizes high-fidelity EMG signals from hand joint angle sequences. Our method introduces a context-aware architecture composed of an angle encoder, a dual-layer context encoder featuring the novel Ang2Gist unit, a deep convolutional EMG generator, and a discriminator, all jointly optimized via adversarial learning. By conditioning on joint kinematic trajectories, SeqEMG-GAN is capable of generating semantically consistent EMG sequences, even for previously unseen gestures, thereby enhancing data diversity and physiological plausibility. Experimental results show that classifiers trained solely on synthetic data experience only a slight accuracy drop (from 57.77\% to 55.71\%). In contrast, training with a combination of real and synthetic data significantly improves accuracy to 60.53\%, outperforming real-only training by 2.76\%. These findings demonstrate the effectiveness of our framework, which also achieves state-of-the-art performance in augmenting EMG datasets and enhancing gesture recognition for applications such as neural robotic hand control, AI/AR glasses, and gesture-based virtual gaming systems. + oai:arXiv.org:2509.23359v4 + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junlong Tong, Yingqi Fan, Anhao Zhao, Yunpu Ma, Xiaoyu Shen + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nana Wang, Gen Li, Pengfei Ren, Hao Su, Suli Wang - PAGE-4D: Disentangled Pose and Geometry Estimation for VGGT-4D Perception - https://arxiv.org/abs/2510.17568 - arXiv:2510.17568v3 Announce Type: replace -Abstract: Recent 3D feed-forward models, such as the Visual Geometry Grounded Transformer (VGGT), have shown strong capability in inferring 3D attributes of static scenes.
However, since they are typically trained on static datasets, these models often struggle in real-world scenarios involving complex dynamic elements, such as moving humans or deformable objects like umbrellas. To address this limitation, we introduce PAGE-4D, a feedforward model that extends VGGT to dynamic scenes, enabling camera pose estimation, depth prediction, and point cloud reconstruction -- all without post-processing. A central challenge in multi-task 4D reconstruction is the inherent conflict between tasks: accurate camera pose estimation requires suppressing dynamic regions, while geometry reconstruction requires modeling them. To resolve this tension, we propose a dynamics-aware aggregator that disentangles static and dynamic information by predicting a dynamics-aware mask -- suppressing motion cues for pose estimation while amplifying them for geometry reconstruction. Extensive experiments show that PAGE-4D consistently outperforms the original VGGT in dynamic scenarios, achieving superior results in camera pose estimation, monocular and video depth estimation, and dense point map reconstruction. - oai:arXiv.org:2510.17568v3 + BridgeDrive: Diffusion Bridge Policy for Closed-Loop Trajectory Planning in Autonomous Driving + https://arxiv.org/abs/2509.23589 + arXiv:2509.23589v2 Announce Type: replace +Abstract: Diffusion-based planners have shown great promise for autonomous driving due to their ability to capture multi-modal driving behaviors. However, guiding these models effectively in reactive, closed-loop environments remains a significant challenge. Simple conditioning often fails to provide sufficient guidance in complex and dynamic driving scenarios. Recent work attempts to use typical expert driving behaviors (i.e., anchors) to guide diffusion models but relies on a truncated schedule, which introduces theoretical inconsistencies and can compromise performance. 
To address this, we introduce BridgeDrive, a novel anchor-guided diffusion bridge policy for closed-loop trajectory planning. Our approach provides a principled diffusion framework that effectively translates anchors into fine-grained trajectory plans, appropriately responding to varying traffic conditions. Our planner is compatible with efficient ODE solvers, a critical factor for real-time autonomous driving deployment. We achieve state-of-the-art performance on the Bench2Drive benchmark, improving the success rate by 7.72% over prior arts. + oai:arXiv.org:2509.23589v2 + cs.AI cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kaichen Zhou, Yuhan Wang, Grace Chen, Xinhai Chang, Gaspard Beaudouin, Fangneng Zhan, Paul Pu Liang, Mengyu Wang + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Shu Liu, Wenlin Chen, Weihao Li, Zheng Wang, Lijin Yang, Jianing Huang, Yipin Zhang, Zhongzhan Huang, Ze Cheng, Hao Yang - Rewarding the Journey, Not Just the Destination: A Composite Path and Answer Self-Scoring Reward Mechanism for Test-Time Reinforcement Learning - https://arxiv.org/abs/2510.17923 - arXiv:2510.17923v4 Announce Type: replace -Abstract: Reinforcement Learning (RL) has emerged as a powerful paradigm for advancing Large Language Models (LLMs), achieving remarkable performance in complex reasoning domains such as mathematics and code generation. However, current RL methods face a fundamental scalability bottleneck due to their heavy reliance on human-curated preference data or labeled datasets for reward modeling. To overcome this limitation, we explore RL on unlabeled data where models learn autonomously from continuous experience streams. The core challenge in this setting lies in reliable reward estimation without ground-truth supervision. 
Existing approaches like Test-Time RL address this through self-consistent consensus, but risk reinforcing incorrect pseudo-labels derived from majority voting. We introduce COMPASS (Composite Path and Answer Self-Scoring), a novel test-time reward mechanism that operates without external supervision. COMPASS integrates two complementary components: the Dual-Calibration Answer Reward (DCAR), which stabilizes training by establishing trustworthy pseudo-labels through confidence and credibility calibration, and the Decisive Path Reward (DPR), which directly optimizes the reasoning process quality beyond mere outcome supervision. By jointly reinforcing trustworthy consensus answers and highly decisive reasoning chains, the COMPASS systematically enhances the model's analytical capabilities. Extensive experiments show that COMPASS achieves significant and consistent performance gains across diverse reasoning tasks and model architectures, advancing a more scalable direction for LLMs to learn from continuous experience. - oai:arXiv.org:2510.17923v4 + The Impossibility of Inverse Permutation Learning in Transformer Models + https://arxiv.org/abs/2509.24125 + arXiv:2509.24125v3 Announce Type: replace +Abstract: In this technical note, we study the problem of inverse permutation learning in decoder-only transformers. Given a permutation and a string to which that permutation has been applied, the model is tasked with producing the original (``canonical'') string. We argue that this task models a natural robustness property across a variety of reasoning tasks, including long-context retrieval, multiple choice QA and in-context learning. Our primary contribution is an impossibility result: we show that an arbitrary depth, decoder-only transformer cannot learn this task. This result concerns the expressive capacity of decoder-only transformer models and is agnostic to training dynamics or sample complexity. 
We give a pair of alternative constructions under which inverse permutation learning is feasible. The first of these highlights the fundamental role of the causal attention mask, and reveals a gap between the expressivity of encoder-decoder transformers and the more popular decoder-only architecture. The latter result is more surprising: we show that simply padding the input with ``scratch tokens" yields a construction under which inverse permutation learning is possible. We conjecture that this may suggest an alternative mechanism by which chain-of-thought prompting or, more generally, intermediate ``thinking'' tokens can enable reasoning in large language models, even when these tokens encode no meaningful semantic information (e.g., the results of intermediate computations). + oai:arXiv.org:2509.24125v3 cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jingyu Xing, Chenwei Tang, Xinyu Liu, Deng Xiong, Shudong Huang, Wei Ju, Jiancheng Lv, Ziyue Qiao + Rohan Alur, Chris Hays, Manish Raghavan, Devavrat Shah - Small Drafts, Big Verdict: Information-Intensive Visual Reasoning via Speculation - https://arxiv.org/abs/2510.20812 - arXiv:2510.20812v3 Announce Type: replace -Abstract: Large Vision-Language Models (VLMs) have achieved remarkable progress in multimodal understanding, yet they struggle when reasoning over information-intensive images that densely interleave textual annotations with fine-grained graphical elements. The main challenges lie in precisely localizing critical cues in dense layouts and multi-hop reasoning to integrate dispersed evidence. We propose Speculative Verdict (SV), a training-free framework inspired by speculative decoding that combines multiple lightweight draft experts with a large verdict model. 
In the draft stage, small VLMs act as draft experts to generate reasoning paths that provide diverse localization candidates; in the verdict stage, a strong VLM synthesizes these paths to produce the final answer, minimizing computational cost while recovering correct answers. To further improve efficiency and accuracy, SV introduces a consensus expert selection mechanism that forwards only high-agreement reasoning paths to the verdict. Empirically, SV achieves consistent gains on challenging information-intensive and high-resolution visual question answering benchmarks, including InfographicVQA, ChartMuseum, ChartQAPro, and HR-Bench 4K. By synthesizing correct insights from multiple partially accurate reasoning paths, SV achieves both error correction and cost-efficiency compared to large proprietary models or training pipelines. Code is available at https://github.com/Tinaliu0123/speculative-verdict. - oai:arXiv.org:2510.20812v3 - cs.CV + InfMasking: Unleashing Synergistic Information by Contrastive Multimodal Interactions + https://arxiv.org/abs/2509.25270 + arXiv:2509.25270v3 Announce Type: replace +Abstract: In multimodal representation learning, synergistic interactions between modalities not only provide complementary information but also create unique outcomes through specific interaction patterns that no single modality could achieve alone. Existing methods may struggle to effectively capture the full spectrum of synergistic information, leading to suboptimal performance in tasks where such interactions are critical. This is particularly problematic because synergistic information constitutes the fundamental value proposition of multimodal representation. To address this challenge, we introduce InfMasking, a contrastive synergistic information extraction method designed to enhance synergistic information through an Infinite Masking strategy. 
InfMasking stochastically occludes most features from each modality during fusion, preserving only partial information to create representations with varied synergistic patterns. Unmasked fused representations are then aligned with masked ones through mutual information maximization to encode comprehensive synergistic information. This infinite masking strategy enables capturing richer interactions by exposing the model to diverse partial modality combinations during training. As computing mutual information estimates with infinite masking is computationally prohibitive, we derive an InfMasking loss to approximate this calculation. Through controlled experiments, we demonstrate that InfMasking effectively enhances synergistic information between modalities. In evaluations on large-scale real-world datasets, InfMasking achieves state-of-the-art performance across seven benchmarks. Code is released at https://github.com/brightest66/InfMasking. + oai:arXiv.org:2509.25270v3 + cs.LG cs.AI - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Yuhan Liu, Lianhui Qin, Shengjie Wang + Liangjian Wen, Qun Dai, Jianzhuang Liu, Jiangtao Zheng, Yong Dai, Dongkai Wang, Zhao Kang, Jun Wang, Zenglin Xu, Jiang Duan - Shift Bribery over Social Networks - https://arxiv.org/abs/2510.21200 - arXiv:2510.21200v2 Announce Type: replace -Abstract: In shift bribery, a briber seeks to promote his preferred candidate by paying voters to raise their ranking. Classical models of shift bribery assume voters act independently, overlooking the role of social influence. However, in reality, individuals are social beings and are often represented as part of a social network, where bribed voters may influence their neighbors, thereby amplifying the effect of persuasion. We study Shift bribery over Networks, where voters are modeled as nodes in a directed weighted graph, and arcs represent social influence between them. 
In this setting, bribery is not confined to directly targeted voters; its effects can propagate through the network, influencing neighbors and amplifying persuasion. Given a budget and individual cost functions for shifting each voter's preference toward a designated candidate, the goal is to determine whether a shift strategy exists within budget that ensures the preferred candidate wins after both direct and network-propagated influence takes effect. We show that the problem is NP-complete even with two candidates and unit costs, and W[2]-hard when parameterized by budget or maximum degree. On the positive side, we design polynomial-time algorithms for complete graphs under plurality and majority rules and path graphs for uniform edge weights, linear-time algorithms for transitive tournaments for two candidates, linear cost functions and uniform arc weights, and pseudo-polynomial algorithms for cluster graphs. We further prove the existence of fixed-parameter tractable algorithms with treewidth as parameter for two candidates, linear cost functions and uniform arc weights and pseudo-FPT with cluster vertex deletion number for two candidates and uniform arc weights. Together, these results give a detailed complexity landscape for shift bribery in social networks. &#13;
- oai:arXiv.org:2510.21200v2
      cs.GT
      cs.CY
      cs.MA
      cs.SI
      Wed, 10 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Ashlesha Hota, Susobhan Bandopadhyay, Palash Dey
    

    
      The Three Regimes of Offline-to-Online Reinforcement Learning
      https://arxiv.org/abs/2510.01460
      arXiv:2510.01460v2 Announce Type: replace
+Abstract: Offline-to-online reinforcement learning (RL) has emerged as a practical paradigm that leverages offline datasets for pretraining and online interactions for fine-tuning. However, its empirical behavior is highly inconsistent: design choices of online fine-tuning that work well in one setting can fail completely in another. &#13;
We propose a stability--plasticity principle that can explain this inconsistency: we should preserve the knowledge of the pretrained policy or the offline dataset during online fine-tuning, whichever is better, while maintaining sufficient plasticity. This perspective identifies three regimes of online fine-tuning, each requiring distinct stability properties. We validate this framework through a large-scale empirical study, finding that the results strongly align with its predictions in 45 of 63 cases. This work provides a principled framework for guiding design choices in offline-to-online RL based on the relative performance of the offline dataset and the pretrained policy. &#13;
+ oai:arXiv.org:2510.01460v2
      cs.LG
      cs.AI
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Lu Li, Tianwei Ni, Yihao Sun, Pierre-Luc Bacon
    

    
      From Benchmarks to Business Impact: Deploying IBM Generalist Agent in Enterprise Production
      https://arxiv.org/abs/2510.23856
      arXiv:2510.23856v2 Announce Type: replace
-Abstract: Agents are rapidly advancing in automating digital work, but enterprises face a harder challenge: moving beyond prototypes to deployed systems that deliver measurable business value. This path is complicated by fragmented frameworks, slow development, and the absence of standardized evaluation practices. Generalist agents have emerged as a promising direction, excelling on academic benchmarks and offering flexibility across task types, applications, and modalities. Yet, evidence of their use in production enterprise settings remains limited. This paper reports IBM's experience developing and piloting the Computer Using Generalist Agent (CUGA), which has been open-sourced for the community (https://github.com/cuga-project/cuga-agent). CUGA adopts a hierarchical planner--executor architecture with strong analytical foundations, achieving state-of-the-art performance on AppWorld and WebArena. &#13;
Beyond benchmarks, it was evaluated in a pilot within the Business-Process-Outsourcing talent acquisition domain, addressing enterprise requirements for scalability, auditability, safety, and governance. To support assessment, we introduce BPO-TA, a 26-task benchmark spanning 13 analytics endpoints. In preliminary evaluations, CUGA approached the accuracy of specialized agents while indicating potential for reducing development time and cost. Our contribution is twofold: presenting early evidence of generalist agents operating at enterprise scale, and distilling technical and organizational lessons from this initial pilot. We outline requirements and next steps for advancing research-grade architectures like CUGA into robust, enterprise-ready systems. - oai:arXiv.org:2510.23856v2 + OneFlow: Concurrent Mixed-Modal and Interleaved Generation with Edit Flows + https://arxiv.org/abs/2510.03506 + arXiv:2510.03506v3 Announce Type: replace +Abstract: We present OneFlow, the first non-autoregressive multimodal model that enables variable-length and concurrent mixed-modal generation. Unlike autoregressive models that enforce rigid causal ordering between text and image generation, OneFlow combines an insertion-based Edit Flow for discrete text tokens with Flow Matching for image latents. OneFlow enables concurrent text-image synthesis with hierarchical sampling that prioritizes content over grammar. Through controlled experiments across model sizes from 1B to 8B, we demonstrate that OneFlow outperforms autoregressive baselines on both generation and understanding tasks while using up to 50% fewer training FLOPs. OneFlow surpasses both autoregressive and diffusion-based approaches while unlocking new capabilities for concurrent generation, iterative refinement, and natural reasoning-like generation. 
+ oai:arXiv.org:2510.03506v3 cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Segev Shlomov, Alon Oved, Sami Marreed, Ido Levy, Offer Akrabi, Avi Yaeli, {\L}ukasz Str\k{a}k, Elizabeth Koumpan, Yinon Goldshtein, Eilam Shapira, Nir Mashkif, Asaf Adi + http://creativecommons.org/licenses/by/4.0/ + John Nguyen, Marton Havasi, Tariq Berrada, Luke Zettlemoyer, Ricky T. Q. Chen - OS-Sentinel: Towards Safety-Enhanced Mobile GUI Agents via Hybrid Validation in Realistic Workflows - https://arxiv.org/abs/2510.24411 - arXiv:2510.24411v2 Announce Type: replace -Abstract: Computer-using agents powered by Vision-Language Models (VLMs) have demonstrated human-like capabilities in operating digital environments like mobile platforms. While these agents hold great promise for advancing digital automation, their potential for unsafe operations, such as system compromise and privacy leakage, is raising significant concerns. Detecting these safety concerns across the vast and complex operational space of mobile environments presents a formidable challenge that remains critically underexplored. To establish a foundation for mobile agent safety research, we introduce MobileRisk-Live, a dynamic sandbox environment accompanied by a safety detection benchmark comprising realistic trajectories with fine-grained annotations. Built upon this, we propose OS-Sentinel, a novel hybrid safety detection framework that synergistically combines a Formal Verifier for detecting explicit system-level violations with a VLM-based Contextual Judge for assessing contextual risks and agent actions. Experiments show that OS-Sentinel achieves 10%-30% improvements over existing approaches across multiple metrics. Further analysis provides critical insights that foster the development of safer and more reliable autonomous mobile agents. Our code and data are available at https://github.com/OS-Copilot/OS-Sentinel. 
- oai:arXiv.org:2510.24411v2 - cs.AI + Demystifying deep search: a holistic evaluation with hint-free multi-hop questions and factorised metrics + https://arxiv.org/abs/2510.05137 + arXiv:2510.05137v3 Announce Type: replace +Abstract: RAG (Retrieval-Augmented Generation) systems and web agents are increasingly evaluated on multi-hop deep search tasks, yet current practice suffers from two major limitations. First, most benchmarks leak the reasoning path in the question text, allowing models to follow surface cues rather than discover reasoning chains autonomously. Second, evaluation is typically reduced to a single pass rate, which collapses diverse behaviours into one score and obscures whether failures stem from inadequate search, poor knowledge use, or inappropriate refusal. To address these issues, we present WebDetective, a benchmark of hint-free multi-hop questions paired with a controlled Wikipedia sandbox that ensures full traceability of model actions, and a holistic evaluation framework that separates search sufficiency, knowledge utilisation, and refusal behaviour. Our evaluation of 25 state-of-the-art models reveals systematic weaknesses across all architectures: models struggle with knowledge utilisation despite having sufficient evidence and demonstrate near-absent appropriate refusal when evidence is lacking. These patterns expose a fundamental gap: today's systems excel at executing given reasoning paths but fail when required to discover them. We develop an agentic workflow, EvidenceLoop, that explicitly targets the challenges our benchmark identifies, incorporating verification loops and systematic evidence tracking that improve both search and synthesis capabilities. This baseline demonstrates that WebDetective's diagnostic framework can guide concrete architectural improvements, establishing our benchmark as a critical tool for developing genuinely autonomous reasoning systems rather than pattern-following agents. 
+ oai:arXiv.org:2510.05137v3
      cs.CL
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Maojia Song, Renhang Liu, Xinyu Wang, Yong Jiang, Pengjun Xie, Fei Huang, Jingren Zhou, Dorien Herremans, Soujanya Poria
    

    
      Using Copilot Agent Mode to Automate Library Migration: A Quantitative Assessment
      https://arxiv.org/abs/2510.26699
      arXiv:2510.26699v2 Announce Type: replace
-Abstract: Keeping software systems up to date is essential to avoid technical debt, security vulnerabilities, and the rigidity typical of legacy systems. However, updating libraries and frameworks remains a time-consuming and error-prone process. Recent advances in Large Language Models (LLMs) and agentic coding systems offer new opportunities for automating such maintenance tasks. In this paper, we evaluate the update of a well-known Python library, SQLAlchemy, across a dataset of ten client applications. For this task, we use GitHub's Copilot Agent Mode, an autonomous AI system capable of planning and executing multi-step migration workflows. To assess the effectiveness of the automated migration, we also introduce Migration Coverage, a metric that quantifies the proportion of API usage points correctly migrated. The results of our study show that the LLM agent was capable of migrating functionalities and API usages between SQLAlchemy versions (migration coverage: 100%, median), but failed to maintain the application functionality, leading to a low test-pass rate (39.75%, median). &#13;
- oai:arXiv.org:2510.26699v2 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + TRepLiNa: Layer-wise CKA+REPINA Alignment Improves Low-Resource Machine Translation in Aya-23 8B + https://arxiv.org/abs/2510.06249 + arXiv:2510.06249v5 Announce Type: replace +Abstract: The 2025 Multimodal Models for Low-Resource Contexts and Social Impact (MMLoSo) Language Challenge addresses one of India's most pressing linguistic gaps: the lack of resources for its diverse low-resource languages (LRLs). In this study, we investigate whether enforcing cross-lingual similarity in specific internal layers of a decoder-only multilingual large language model (LLM) can improve translation quality from LRL to high-resource language (HRL). Specifically, we combine Centered Kernel Alignment (CKA), a similarity metric that encourages representations of different languages to align, with REPINA, a regularization method that constrains parameter updates to remain close to the pretrained model, into a joint method we call TRepLiNa. In this research project, we experiment with zero-shot, few-shot, and fine-tuning settings using Aya-23 8B with QLoRA across MMLoSo shared task language pairs (Mundari, Santali, Bhili) with Hindi/English pivots. Our results show that aligning mid-level layers using TRepLiNa (CKA+REPINA) is a low-cost, practical approach to improving LRL translation, especially in data-scarce settings. 
+ oai:arXiv.org:2510.06249v5 + cs.CL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Aylton Almeida, Laerte Xavier, Marco Tulio Valente + http://creativecommons.org/licenses/by/4.0/ + Toshiki Nakai, Ravi Kiran Chikkala, Lena Sophie Oberkircher, Nicholas Jennings, Natalia Skachkova, Tatiana Anikina, Jesujoba Oluwadara Alabi - Online Energy Storage Arbitrage under Imperfect Predictions: A Conformal Risk-Aware Approach - https://arxiv.org/abs/2511.01032 - arXiv:2511.01032v2 Announce Type: replace -Abstract: This work proposes a conformal approach for energy storage arbitrage to control the downside risk arising from imperfect price forecasts. Energy storage arbitrage relies solely on predictions of future market prices, while inaccurate price predictions may lead to significant profit losses. Based on conformal decision theory, we describe a controller that dynamically adjusts decision conservativeness through prediction sets without distributional assumptions. To enable online calibration when online profit loss feedback is unobservable, we establish that a temporal difference error serves as a measurable proxy. Building on this insight, we develop two online calibration strategies: prediction error-based adaptation targeting forecast accuracy, and value error-based calibration focusing on decision quality. Analysis of the conformal controller proves bounded long-term risk with convergence guarantees in temporal difference error, which further effectively manages risk exposure in potential profit losses. Case studies demonstrate superior performance in balancing risk and opportunity compared to benchmarks under varying forecast conditions. 
- oai:arXiv.org:2511.01032v2 - eess.SY - cs.SY - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + Open ASR Leaderboard: Towards Reproducible and Transparent Multilingual Speech Recognition Evaluation + https://arxiv.org/abs/2510.06961 + arXiv:2510.06961v3 Announce Type: replace +Abstract: Despite rapid progress, ASR evaluation remains saturated with short-form English, and efficiency is rarely reported. We present the Open ASR Leaderboard, a fully reproducible benchmark and interactive leaderboard comparing 60+ open-source and proprietary systems across 11 datasets, including a dedicated multilingual track. We standardize text normalization and report both word error rate (WER) and inverse real-time factor (RTFx), enabling fair accuracy-efficiency comparisons. For English transcription, Conformer encoders paired with LLM decoders achieve the best average WER but are slower, while CTC and TDT decoders deliver much better RTFx, making them attractive for long-form and offline use. Whisper-derived encoders fine-tuned for English improve accuracy but often trade off multilingual coverage. All code and dataset loaders are open-sourced to support transparent, extensible evaluation. + oai:arXiv.org:2510.06961v3 + cs.CL + cs.AI + cs.SD + eess.AS + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Yiqian Wu, Ming Yi, Bolun Xu, James Anderson + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Vaibhav Srivastav, Steven Zheng, Eric Bezzam, Eustache Le Bihan, Adel Moumen, Sanchit Gandhi - Test-driven Reinforcement Learning in Continuous Control - https://arxiv.org/abs/2511.07904 - arXiv:2511.07904v3 Announce Type: replace -Abstract: Reinforcement learning (RL) has been recognized as a powerful tool for robot control tasks. RL typically employs reward functions to define task objectives and guide agent learning. 
However, since the reward function serves the dual purpose of defining the optimal goal and guiding learning, it is challenging to design the reward function manually, which often results in a suboptimal task representation. To tackle the reward design challenge in RL, inspired by the satisficing theory, we propose a Test-driven Reinforcement Learning (TdRL) framework. In the TdRL framework, multiple test functions are used to represent the task objective rather than a single reward function. Test functions can be categorized as pass-fail tests and indicative tests, each dedicated to defining the optimal objective and guiding the learning process, respectively, thereby making defining tasks easier. Building upon such a task definition, we first prove that if a trajectory return function assigns higher returns to trajectories closer to the optimal trajectory set, maximum entropy policy optimization based on this return function will yield a policy that is closer to the optimal policy set. Then, we introduce a lexicographic heuristic approach to compare the relative distance relationship between trajectories and the optimal trajectory set for learning the trajectory return function. Furthermore, we develop an algorithm implementation of TdRL. Experimental results on the DeepMind Control Suite benchmark demonstrate that TdRL matches or outperforms handcrafted reward methods in policy training, with greater design simplicity and inherent support for multi-objective optimization. We argue that TdRL offers a novel perspective for representing task objectives, which could be helpful in addressing the reward design challenges in RL applications. - oai:arXiv.org:2511.07904v3 + PEAR: Planner-Executor Agent Robustness Benchmark + https://arxiv.org/abs/2510.07505 + arXiv:2510.07505v3 Announce Type: replace +Abstract: Large Language Model (LLM)-based Multi-Agent Systems (MAS) have emerged as a powerful paradigm for tackling complex, multi-step tasks across diverse domains. 
However, despite their impressive capabilities, MAS remain susceptible to adversarial manipulation. Existing studies typically examine isolated attack surfaces or specific scenarios, leaving a lack of holistic understanding of MAS vulnerabilities. To bridge this gap, we introduce PEAR, a benchmark for systematically evaluating both the utility and vulnerability of planner-executor MAS. While compatible with various MAS architectures, our benchmark focuses on the planner-executor structure, which is a practical and widely adopted design. Through extensive experiments, we find that (1) a weak planner degrades overall clean task performance more severely than a weak executor; (2) while a memory module is essential for the planner, having a memory module for the executor does not impact the clean task performance; (3) there exists a trade-off between task performance and robustness; and (4) attacks targeting the planner are particularly effective at misleading the system. These findings offer actionable insights for enhancing the robustness of MAS and lay the groundwork for principled defenses in multi-agent settings. + oai:arXiv.org:2510.07505v3 cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-sa/4.0/ - Zhao Yu, Xiuping Wu, Liangjun Ke + http://creativecommons.org/licenses/by/4.0/ + Shen Dong, Mingxuan Zhang, Pengfei He, Li Ma, Bhavani Thuraisingham, Hui Liu, Yue Xing - Decomposition of Small Transformer Models - https://arxiv.org/abs/2511.08854 - arXiv:2511.08854v2 Announce Type: replace -Abstract: Recent work in mechanistic interpretability has shown that decomposing models in parameter space may yield clean handles for analysis and intervention. Previous methods have demonstrated successful applications on a wide range of toy models, but the gap to "real models" has not yet been bridged. 
In this work, we extend Stochastic Parameter Decomposition (SPD) to Transformer models, proposing an updated causal importance function suited for sequential data and a new loss function. We demonstrate that SPD can successfully decompose a toy induction-head model and recover the expected 2-step circuit. We also show that applying SPD to GPT-2-small can successfully locate subcomponents corresponding to interpretable concepts like "golf" and "basketball". These results take the first step in the direction of extending SPD to modern models, and show that we can use the method to surface interpretable parameter-space mechanisms. - oai:arXiv.org:2511.08854v2 + Detecting and Mitigating Insertion Hallucination in Video-to-Audio Generation + https://arxiv.org/abs/2510.08078 + arXiv:2510.08078v4 Announce Type: replace +Abstract: Video-to-Audio generation has made remarkable strides in automatically synthesizing sound for video. However, existing evaluation metrics, which focus on semantic and temporal alignment, overlook a critical failure mode: models often generate acoustic events, particularly speech and music, that have no corresponding visual source. We term this phenomenon Insertion Hallucination and identify it as a systemic risk driven by dataset biases, such as the prevalence of off-screen sounds, that remains completely undetected by current metrics. To address this challenge, we first develop a systematic evaluation framework that employs a majority-voting ensemble of multiple audio event detectors. We also introduce two novel metrics to quantify the prevalence and severity of this issue: IH@vid (the fraction of videos with hallucinations) and IH@dur (the fraction of hallucinated duration). Building on this, we propose Posterior Feature Correction, a novel training-free inference-time method that mitigates IH. 
PFC operates in a two-pass process: it first generates an initial audio output to detect hallucinated segments, and then regenerates the audio after masking the corresponding video features at those timestamps. Experiments on several mainstream V2A benchmarks first reveal that state-of-the-art models suffer from severe IH. In contrast, our PFC method reduces both the prevalence and duration of hallucinations by over 50\% on average, without degrading, and in some cases even improving, conventional metrics for audio quality and temporal synchronization. Our work is the first to formally define, systematically measure, and effectively mitigate Insertion Hallucination, paving the way for more reliable and faithful V2A models. + oai:arXiv.org:2510.08078v4 + cs.SD cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Casper L. Christensen, Logan Riggs + Liyang Chen, Hongkai Chen, Yujun Cai, Sifan Li, Qingwen Ye, Yiwei Wang - Learning to Pose Problems: Reasoning-Driven and Solver-Adaptive Data Synthesis for Large Reasoning Models - https://arxiv.org/abs/2511.09907 - arXiv:2511.09907v2 Announce Type: replace -Abstract: Data synthesis for training large reasoning models offers a scalable alternative to limited, human-curated datasets, enabling the creation of high-quality data. However, existing approaches face several challenges: (i) indiscriminate generation that ignores the solver's ability and yields low-value problems, or reliance on complex data pipelines to balance problem difficulty; and (ii) a lack of reasoning in problem generation, leading to shallow problem variants. In this paper, we develop a problem generator that reasons explicitly to plan problem directions before synthesis and adapts difficulty to the solver's ability. Specifically, we construct related problem pairs and augment them with intermediate problem-design CoT produced by a reasoning model. 
These data bootstrap problem-design strategies from the generator. Then, we treat the solver's feedback on synthetic problems as a reward signal, enabling the generator to calibrate difficulty and produce complementary problems near the edge of the solver's competence. Extensive experiments on 10 mathematical and general reasoning benchmarks show that our method achieves an average improvement of 2.5% and generalizes to both language and vision-language models. Moreover, a solver trained on the synthesized data provides improved rewards for continued generator training, enabling co-evolution and yielding a further 0.7% performance gain. Our code will be made publicly available here. - oai:arXiv.org:2511.09907v2 - cs.AI + Adaptive Gradient Calibration for Single-Positive Multi-Label Learning in Remote Sensing Image Scene Classification + https://arxiv.org/abs/2510.08269 + arXiv:2510.08269v2 Announce Type: replace +Abstract: Multi-label classification (MLC) offers a more comprehensive semantic understanding of Remote Sensing (RS) imagery compared to traditional single-label classification (SLC). However, obtaining complete annotations for MLC is particularly challenging due to the complexity and high cost of the labeling process. As a practical alternative, single-positive multi-label learning (SPML) has emerged, where each image is annotated with only one relevant label, and the model is expected to recover the full set of labels. While scalable, SPML introduces significant supervision ambiguity, demanding specialized solutions for model training. Although various SPML methods have been proposed in the computer vision domain, research in the RS context remains limited. To bridge this gap, we propose Adaptive Gradient Calibration (AdaGC), a novel and generalizable SPML framework tailored to RS imagery. AdaGC adopts a gradient calibration (GC) mechanism with a dual exponential moving average (EMA) module for robust pseudo-label generation. 
We introduce a theoretically grounded, training-dynamics-based indicator to adaptively trigger GC, which ensures GC's effectiveness by preventing it from being affected by model underfitting or overfitting to label noise. Extensive experiments on two benchmark RS datasets under two distinct label noise types demonstrate that AdaGC achieves state-of-the-art (SOTA) performance while maintaining strong robustness across diverse settings. The codes and data will be released at https://github.com/rslab-unitrento/AdaGC. + oai:arXiv.org:2510.08269v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Yongxian Wei, Yilin Zhao, Li Shen, Xinrui Chen, Runxi Cheng, Sinan Du, Hao Yu, Gang Liu, Jiahong Yan, Chun Yuan, Dian Li + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chenying Liu, Gianmarco Perantoni, Lorenzo Bruzzone, Xiao Xiang Zhu - Do Language Models Associate Sound with Meaning? A Multimodal Study of Sound Symbolism - https://arxiv.org/abs/2511.10045 - arXiv:2511.10045v3 Announce Type: replace -Abstract: Sound symbolism is a linguistic concept that refers to non-arbitrary associations between phonetic forms and their meanings. We suggest that this can be a compelling probe into how Multimodal Large Language Models (MLLMs) interpret auditory information in human languages. We investigate MLLMs' performance on phonetic iconicity across textual (orthographic and IPA) and auditory forms of inputs with up to 25 semantic dimensions (e.g., sharp vs. round), observing models' layer-wise information processing by measuring phoneme-level attention fraction scores. To this end, we present LEX-ICON, an extensive mimetic word dataset consisting of 8,052 words from four natural languages (English, French, Japanese, and Korean) and 2,930 systematically constructed pseudo-words, annotated with semantic features applied across both text and audio modalities. 
Our key findings demonstrate (1) MLLMs' phonetic intuitions that align with existing linguistic research across multiple semantic dimensions and (2) phonosemantic attention patterns that highlight models' focus on iconic phonemes. These results bridge domains of artificial intelligence and cognitive linguistics, providing the first large-scale, quantitative analyses of phonetic iconicity in terms of MLLMs' interpretability. - oai:arXiv.org:2511.10045v3 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Evaluating Small Vision-Language Models on Distance-Dependent Traffic Perception + https://arxiv.org/abs/2510.08352 + arXiv:2510.08352v2 Announce Type: replace +Abstract: Vision-Language Models (VLMs) are becoming increasingly powerful, demonstrating strong performance on a variety of tasks that require both visual and textual understanding. Their strong generalisation abilities make them a promising component for automated driving systems, which must handle unexpected corner cases. However, to be trusted in such safety-critical applications, a model must first possess a reliable perception system. Moreover, since critical objects and agents in traffic scenes are often at a distance, we require systems that are not "shortsighted", i.e., systems with strong perception capabilities at both close (up to 20 meters) and long (30+ meters) range. With this in mind, we introduce Distance-Annotated Traffic Perception Question Answering (DTPQA), the first Visual Question Answering (VQA) benchmark focused solely on perception-based questions in traffic scenes, enriched with distance annotations. By excluding questions that require reasoning, we ensure that model performance reflects perception capabilities alone. Since automated driving hardware has limited processing power and cannot support large VLMs, our study centers on smaller VLMs. 
More specifically, we evaluate several state-of-the-art (SOTA) small VLMs on DTPQA and show that, despite the simplicity of the questions, these models significantly underperform compared to humans (~60% average accuracy for the best-performing small VLM versus ~85% human performance). However, it is important to note that the human sample size was relatively small, which imposes statistical limitations. We also identify specific perception tasks, such as distinguishing left from right, that remain particularly challenging for these models. + oai:arXiv.org:2510.08352v2 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jinhong Jeong, Sunghyun Lee, Jaeyoung Lee, Seonah Han, Youngjae Yu + http://creativecommons.org/licenses/by/4.0/ + 10.1109/OJVT.2025.3629318 + IEEE Open Journal of Vehicular Technology, vol. 7, pp. 54-72, 2026 + Nikos Theodoridis, Tim Brophy, Reenu Mohandas, Ganesh Sistu, Fiachra Collins, Anthony Scanlan, Ciaran Eising - ProgRAG: Hallucination-Resistant Progressive Retrieval and Reasoning over Knowledge Graphs - https://arxiv.org/abs/2511.10240 - arXiv:2511.10240v2 Announce Type: replace -Abstract: Large Language Models (LLMs) demonstrate strong reasoning capabilities but struggle with hallucinations and limited transparency. Recently, KG-enhanced LLMs that integrate knowledge graphs (KGs) have been shown to improve reasoning performance, particularly for complex, knowledge-intensive tasks. However, these methods still face significant challenges, including inaccurate retrieval and reasoning failures, often exacerbated by long input contexts that obscure relevant information or by context constructions that struggle to capture the richer logical directions required by different question types. 
Furthermore, many of these approaches rely on LLMs to directly retrieve evidence from KGs, and to self-assess the sufficiency of this evidence, which often results in premature or incorrect reasoning. To address the retrieval and reasoning failures, we propose ProgRAG, a multi-hop knowledge graph question answering (KGQA) framework that decomposes complex questions into sub-questions, and progressively extends partial reasoning paths by answering each sub-question. At each step, external retrievers gather candidate evidence, which is then refined through uncertainty-aware pruning by the LLM. Finally, the context for LLM reasoning is optimized by organizing and rearranging the partial reasoning paths obtained from the sub-question answers. Experiments on three well-known datasets demonstrate that ProgRAG outperforms existing baselines in multi-hop KGQA, offering improved reliability and reasoning quality. - oai:arXiv.org:2511.10240v2 + Learning What Matters: Steering Diffusion via Spectrally Anisotropic Forward Noise + https://arxiv.org/abs/2510.09660 + arXiv:2510.09660v4 Announce Type: replace +Abstract: Diffusion Probabilistic Models (DPMs) have achieved strong generative performance, yet their inductive biases remain largely implicit. In this work, we aim to build inductive biases into the training and sampling of diffusion models to better accommodate the target distribution of the data to model. We introduce an anisotropic noise operator that shapes these biases by replacing the isotropic forward covariance with a structured, frequency-diagonal covariance. This operator unifies band-pass masks and power-law weightings, allowing us to emphasize or suppress designated frequency bands, while keeping the forward process Gaussian. We refer to this as Spectrally Anisotropic Gaussian Diffusion (SAGD). 
In this work, we derive the score relation for anisotropic forward covariances and show that, under full support, the learned score converges to the true data score as $t\!\to\!0$, while anisotropy reshapes the probability-flow path from noise to data. Empirically, we show the induced anisotropy outperforms standard diffusion across several vision datasets, and enables selective omission: learning while ignoring known corruptions confined to specific bands. Together, these results demonstrate that carefully designed anisotropic forward noise provides a simple, yet principled, handle to tailor inductive bias in DPMs. + oai:arXiv.org:2510.09660v4 + cs.LG cs.AI - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Minbae Park, Hyemin Yang, Jeonghyun Kim, Kunsoo Park, Hyunjoon Kim + Luca Scimeca, Thomas Jiralerspong, Berton Earnshaw, Jason Hartford, Yoshua Bengio - Humanoid Whole-Body Badminton via Multi-Stage Reinforcement Learning - https://arxiv.org/abs/2511.11218 - arXiv:2511.11218v2 Announce Type: replace -Abstract: Humanoid robots have demonstrated strong capabilities for interacting with static scenes across locomotion, manipulation, and more challenging loco-manipulation tasks. Yet the real world is dynamic, and quasi-static interactions are insufficient to cope with diverse environmental conditions. As a step toward more dynamic interaction scenarios, we present a reinforcement-learning-based training pipeline that produces a unified whole-body controller for humanoid badminton, enabling coordinated lower-body footwork and upper-body striking without motion priors or expert demonstrations. Training follows a three-stage curriculum: first footwork acquisition, then precision-guided racket swing generation, and finally task-focused refinement, yielding motions in which both legs and arms serve the hitting objective. 
For deployment, we incorporate an Extended Kalman Filter (EKF) to estimate and predict shuttlecock trajectories for target striking. We also introduce a prediction-free variant that dispenses with EKF and explicit trajectory prediction. To validate the framework, we conduct five sets of experiments in both simulation and the real world. In simulation, two robots sustain a rally of 21 consecutive hits. Moreover, the prediction-free variant achieves successful hits with comparable performance relative to the target-known policy. In real-world tests, both prediction and controller modules exhibit high accuracy, and on-court hitting achieves an outgoing shuttle speed up to 19.1 m/s with a mean return landing distance of 4 m. These experimental results show that our proposed training scheme can deliver highly dynamic while precise goal striking in badminton, and can be adapted to more dynamics-critical domains. - oai:arXiv.org:2511.11218v2 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + A Generic Machine Learning Framework for Radio Frequency Fingerprinting + https://arxiv.org/abs/2510.09775 + arXiv:2510.09775v2 Announce Type: replace +Abstract: Fingerprinting radio frequency (RF) emitters typically involves finding unique characteristics that are featured in their received signal. These fingerprints are nuanced, but sufficiently detailed, motivating the pursuit of methods that can successfully extract them. The downstream task that requires the most meticulous RF fingerprinting (RFF) is known as specific emitter identification (SEI), which entails recognising each individual transmitter. RFF and SEI have a long history, with numerous defence and civilian applications such as signal intelligence, electronic surveillance, physical-layer authentication of wireless devices, to name a few. In recent years, data-driven RFF approaches have become popular due to their ability to automatically learn intricate fingerprints. 
They generally deliver superior performance when compared to traditional RFF techniques that are often labour-intensive, inflexible, and only applicable to a particular emitter type or transmission scheme. In this paper, we present a generic and versatile machine learning (ML) framework for data-driven RFF with several popular downstream tasks such as SEI, data association (EDA) and RF emitter clustering (RFEC). It is emitter-type agnostic. We then demonstrate the introduced framework for several tasks using real RF datasets for spaceborne surveillance, signal intelligence and counter-drone applications. + oai:arXiv.org:2510.09775v2 + cs.LG + cs.CR + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500
We further examine its utility by testing its performance as standalone training data and as an augmentation source for CB classification. - oai:arXiv.org:2511.11599v2 - cs.AI - cs.CL - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 + CoPRS: Learning Positional Prior from Chain-of-Thought for Reasoning Segmentation + https://arxiv.org/abs/2510.11173 + arXiv:2510.11173v2 Announce Type: replace +Abstract: Existing works on reasoning segmentation either connect hidden features from a language model directly to a mask decoder or represent positions in text, which limits interpretability and semantic detail. To solve this, we present CoPRS, a Multi-modal Chain-of-Thought (MCoT)-based positional perception model that bridges language reasoning to segmentation through a differentiable and interpretable positional prior instantiated as a heatmap. By making the reasoning process clear via MCoT and expressing it as a dense, differentiable heatmap, this interface enhances interpretability and diagnostic analysis and yields more concentrated evidence on the target. A learnable concentration token aggregates features of the image and reasoning text to generate this positional prior, which is decoded to precise masks through a lightweight decoder, providing a direct connection between reasoning and segmentation. Across the RefCOCO series and ReasonSeg, CoPRS matches or surpasses the best reported metrics on each standard split under comparable protocols, with performance at or above the prior state of the art across both validation and test partitions. Extensive experiments demonstrate a strong positive correlation among the CoT trajectory, the generated heatmap, and the decoded mask, supporting an interpretable alignment between the reasoning output and downstream mask generation. Collectively, these findings support the utility of this paradigm in bridging reasoning and segmentation and show advantages in concentration driven by reasoning and in more precise mask prediction. 
Code, checkpoints and logs are released at https://github.com/ZhenyuLU-Heliodore/CoPRS.git. + oai:arXiv.org:2510.11173v2 + cs.CV + cs.MM + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Arefeh Kazemi, Hamza Qadeer, Joachim Wagner, Hossein Hosseini, Sri Balaaji Natarajan Kalaivendan, Brian Davis + http://creativecommons.org/licenses/by/4.0/ + Zhenyu Lu, Liupeng Li, Jinpeng Wang, Yan Feng, Bin Chen, Ke Chen, Yaowei Wang - No-Regret Strategy Solving in Imperfect-Information Games via Pre-Trained Embedding - https://arxiv.org/abs/2511.12083 - arXiv:2511.12083v2 Announce Type: replace -Abstract: High-quality information set abstraction remains a core challenge in solving large-scale imperfect-information extensive-form games (IIEFGs)--such as no-limit Texas Hold'em--where the finite nature of spatial resources hinders solving strategies for the full game. State-of-the-art AI methods rely on pre-trained discrete clustering for abstraction, yet their hard classification irreversibly discards critical information: specifically, the quantifiable subtle differences between information sets--vital for strategy solving--thus compromising the quality of such solving. Inspired by the word embedding paradigm in natural language processing, this paper proposes the Embedding CFR algorithm, a novel approach for solving strategies in IIEFGs within an embedding space. The algorithm pre-trains and embeds the features of individual information sets into an interconnected low-dimensional continuous space, where the resulting vectors more precisely capture both the distinctions and connections between information sets. Embedding CFR introduces a strategy-solving process driven by regret accumulation and strategy updates in this embedding space, with supporting theoretical analysis verifying its ability to reduce cumulative regret. 
Experiments on poker show that with the same spatial overhead, Embedding CFR achieves significantly faster exploitability convergence compared to cluster-based abstraction algorithms, confirming its effectiveness. Furthermore, to our knowledge, it is the first algorithm in poker AI that pre-trains information set abstractions via low-dimensional embedding for strategy solving. - oai:arXiv.org:2511.12083v2 + The Adoption Paradox for Veterinary Professionals in China: High Use of Artificial Intelligence Despite Low Familiarity + https://arxiv.org/abs/2510.11758 + arXiv:2510.11758v2 Announce Type: replace +Abstract: While the global integration of artificial intelligence (AI) into veterinary medicine is accelerating, its adoption dynamics in major markets such as China remain uncharacterized. This paper presents the first exploratory analysis of AI perception and adoption among veterinary professionals in China, based on a cross-sectional survey of 455 practitioners conducted in mid-2025. We identify a distinct "adoption paradox": although 71.0% of respondents have incorporated AI into their workflows, 44.6% of these active users report low familiarity with the technology. In contrast to the administrative-focused patterns observed in North America, adoption in China is practitioner-driven and centers on core clinical tasks, such as disease diagnosis (50.1%) and prescription calculation (44.8%). However, concerns regarding reliability and accuracy remain the primary barrier (54.3%), coexisting with a strong consensus (93.8%) for regulatory oversight. These findings suggest a unique "inside-out" integration model in China, characterized by high clinical utility but restricted by an "interpretability gap," underscoring the need for specialized tools and robust regulatory frameworks to safely harness AI's potential in this expanding market. 
+ oai:arXiv.org:2510.11758v2 + cs.CY cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yanchang Fu, Shengda Liu, Pei Xu, Kaiqi Huang + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Shumin Li, Xiaoyun Lai - Collusion-proof Auction Design using Side Information - https://arxiv.org/abs/2511.12456 - arXiv:2511.12456v2 Announce Type: replace -Abstract: We study the problem of auction design in the presence of bidder collusion. Specifically, we consider a multi-unit auction of identical items with single-minded bidders, where a subset of bidders may collude by coordinating bids and transferring payments and items among themselves. The classical Vickrey-Clarke-Groves(VCG) mechanism is highly vulnerable to collusion and fully collusion-proof mechanisms are limited to posted-price formats, which fail to guarantee even approximate efficiency. This paper aims to bridge this gap by designing auctions that achieve good welfare and revenue guarantees even when some bidders collude. We first characterize the strategic behavior of colluding bidders under VCG and prove that such bidders optimally bid shade: they never overbid or take additional items, but instead reduce the auction price. This characterization enables a Bulow-Klemperer type result: adding colluding bidders can only improve welfare and revenue relative to running VCG on the non-colluding group alone. We next consider a setting where black-box collusion detection algorithm is available to label bidders as being colluding or non-colluding, and we propose a VCG-Posted Price(V-PoP) mechanism that combines VCG applied to non-colluding bidders with a posted-price mechanism for colluding bidders. We show that V-PoP is ex-post dominant-strategy incentive compatible(DSIC) and derive probabilistic guarantees on expected welfare and revenue under both known and unknown valuation distributions. 
Numerical experiments across several distributions demonstrate that V-PoP consistently outperforms VCG restricted to non-colluding bidders and approaches the performance of the ideal VCG mechanism assuming universal truthfulness. Our results provide a principled framework for incorporating collusion detection into mechanism design, offering a step toward collusion-resistant auctions. - oai:arXiv.org:2511.12456v2 - cs.GT - econ.TH - Wed, 10 Dec 2025 00:00:00 -0500 + GRAVITY: A Framework for Personalized Text Generation via Profile-Grounded Synthetic Preferences + https://arxiv.org/abs/2510.11952 + arXiv:2510.11952v2 Announce Type: replace +Abstract: Personalization in LLMs often relies on costly human feedback or interaction logs, limiting scalability and neglecting deeper user attributes. To reduce the reliance on human annotations, we introduce GRAVITY (Generative Response with Aligned Values, Interests, and Traits of You), a framework for generating synthetic, profile-grounded preference data that captures users' interests, values, beliefs, and personality traits. By integrating demographic, cultural, and psychological frameworks -- including Hofstede's cultural dimensions, Schwartz's basic values, the World Values Survey, and Big Five OCEAN traits -- GRAVITY synthesizes preference pairs to guide personalized content generation. We evaluate GRAVITY on book descriptions for 400 Amazon users, comparing it to prompt-based conditioning, standard fine-tuning, and naive synthetic pair generation. Profile-grounded synthetic data consistently improves generation, especially across multiple cultures (USA, Brazil, Japan, India), achieving over 4% higher preference gains across baselines, with user studies showing that GRAVITY outputs are preferred over 86% of the time. 
Our results show that scenario-grounded synthetic data can capture richer user variation, reduce reliance on costly annotation, and produce more engaging, user-centered content, offering a scalable path for LLM personalization. + oai:arXiv.org:2510.11952v2 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Sukanya Kudva, Anil Aswani + Priyanka Dey, Daniele Rosa, Wenqing Zheng, Daniel Barcklow, Jieyu Zhao, Emilio Ferrara - Botany Meets Robotics in Alpine Scree Monitoring - https://arxiv.org/abs/2511.12526 - arXiv:2511.12526v2 Announce Type: replace -Abstract: According to the European Union's Habitat Directive, habitat monitoring plays a critical role in response to the escalating problems posed by biodiversity loss and environmental degradation. Scree habitats, hosting unique and often endangered species, face severe threats from climate change due to their high-altitude nature. Traditionally, their monitoring has required highly skilled scientists to conduct extensive fieldwork in remote, potentially hazardous locations, making the process resource-intensive and time-consuming. This paper presents a novel approach for scree habitat monitoring using a legged robot to assist botanists in data collection and species identification. Specifically, we deployed the ANYmal C robot in the Italian Alpine bio-region in two field campaigns spanning two years and leveraged deep learning to detect and classify key plant species of interest. Our results demonstrate that agile legged robots can navigate challenging terrains and increase the frequency and efficiency of scree monitoring. When paired with traditional phytosociological surveys performed by botanists, this robotics-assisted protocol not only streamlines field operations but also enhances data acquisition, storage, and usage. 
The outcomes of this research contribute to the evolving landscape of robotics in environmental science, paving the way for a more comprehensive and sustainable approach to habitat monitoring and preservation. - oai:arXiv.org:2511.12526v2 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + COINS: SemantiC Ids Enhanced COLd Item RepresentatioN for Click-through Rate Prediction in E-commerce Search + https://arxiv.org/abs/2510.12604 + arXiv:2510.12604v3 Announce Type: replace +Abstract: With the rise of modern search and recommendation platforms, insufficient collaborative information of cold-start items exacerbates the Matthew effect of existing platform items, challenging platform diversity and becoming a longstanding issue. Existing methods align items' side content with collaborative information to transfer collaborative signals from high-popularity items to cold-start items. However, these methods fail to account for the asymmetry between collaboration and content, nor the fine-grained differences among items. To address these issues, we propose SMILE, an item representation enhancement approach based on fused alignment of semantic IDs. Specifically, we use RQ-OPQ encoding to quantize item content and collaborative information, followed by a two-step alignment: RQ encoding transfers shared collaborative signals across items, while OPQ encoding learns differentiated information of items. Comprehensive offline experiments on large-scale industrial datasets demonstrate superiority of SMILE, and rigorous online A/B tests confirm statistically significant improvements: item CTR +1.66%, buyers +1.57%, and order volume +2.17%. + oai:arXiv.org:2510.12604v3 + cs.IR + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/TFR.2025.3632773 - IEEE Transactions on Field Robotics, vol. 2, pp. 
920-936, 2025 - Davide De Benedittis, Giovanni Di Lorenzo, Franco Angelini, Barbara Valle, Marina Serena Borgatti, Paolo Remagnino, Marco Caccianiga, Manolo Garabini + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Qihang Zhao, Zhongbo Sun, Xiaoyang Zheng, Xian Guo, Siyuan Wang, Zihan Liang, Mingcan Peng, Ben Chen, Chenyi Lei - Scalable Multi-Objective and Meta Reinforcement Learning via Gradient Estimation - https://arxiv.org/abs/2511.12779 - arXiv:2511.12779v2 Announce Type: replace -Abstract: We study the problem of efficiently estimating policies that simultaneously optimize multiple objectives in reinforcement learning (RL). Given $n$ objectives (or tasks), we seek the optimal partition of these objectives into $k \ll n$ groups, where each group comprises related objectives that can be trained together. This problem arises in applications such as robotics, control, and preference optimization in language models, where learning a single policy for all $n$ objectives is suboptimal as $n$ grows. We introduce a two-stage procedure -- meta-training followed by fine-tuning -- to address this problem. We first learn a meta-policy for all objectives using multitask learning. Then, we adapt the meta-policy to multiple randomly sampled subsets of objectives. The adaptation step leverages a first-order approximation property of well-trained policy networks, which is empirically verified to be accurate within a 2% error margin across various RL environments. The resulting algorithm, PolicyGradEx, efficiently estimates an aggregate task-affinity score matrix given a policy evaluation algorithm. Based on the estimated affinity score matrix, we cluster the $n$ objectives into $k$ groups by maximizing the intra-cluster affinity scores. 
Experiments on three robotic control and the Meta-World benchmarks demonstrate that our approach outperforms state-of-the-art baselines by 16% on average, while delivering up to $26\times$ faster speedup relative to performing full training to obtain the clusters. Ablation studies validate each component of our approach. For instance, compared with random grouping and gradient-similarity-based grouping, our loss-based clustering yields an improvement of 19%. Finally, we analyze the generalization error of policy networks by measuring the Hessian trace of the loss surface, which gives non-vacuous measures relative to the observed generalization errors. - oai:arXiv.org:2511.12779v2 - cs.LG - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Foveation Improves Payload Capacity in Steganography + https://arxiv.org/abs/2510.13151 + arXiv:2510.13151v2 Announce Type: replace +Abstract: Steganography finds its use in visual medium such as providing metadata and watermarking. With support of efficient latent representations and foveated rendering, we trained models that improve existing capacity limits from 100 to 500 bits, while achieving better accuracy of up to 1 failure bit out of 2000, at 200K test bits. Finally, we achieve a comparable visual quality of 31.47 dB PSNR and 0.13 LPIPS, showing the effectiveness of novel perceptual design in creating multi-modal latent representations in steganography. + oai:arXiv.org:2510.13151v2 + cs.CV + cs.GR + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Zhenshuo Zhang, Minxuan Duan, Youran Ye, Hongyang R. 
Zhang + 10.1145/3757374.3771423 + Lifeng Qiu Lin, Henry Kam, Qi Sun, Kaan Ak\c{s}it - Batch Acquisition Function Evaluations and Decouple Optimizer Updates for Faster Bayesian Optimization - https://arxiv.org/abs/2511.13625 - arXiv:2511.13625v3 Announce Type: replace -Abstract: Bayesian optimization (BO) efficiently finds high-performing parameters by maximizing an acquisition function, which models the promise of parameters. A major computational bottleneck arises in acquisition function optimization, where multi-start optimization (MSO) with quasi-Newton (QN) methods is required due to the non-convexity of the acquisition function. BoTorch, a widely used BO library, currently optimizes the summed acquisition function over multiple points, leading to the speedup of MSO owing to PyTorch batching. Nevertheless, this paper empirically demonstrates the suboptimality of this approach in terms of off-diagonal approximation errors in the inverse Hessian of a QN method, slowing down its convergence. To address this problem, we propose to decouple QN updates using a coroutine while batching the acquisition function calls. Our approach not only yields the theoretically identical convergence to the sequential MSO but also drastically reduces the wall-clock time compared to the previous approaches. Our approach is available in GPSampler in Optuna, effectively reducing its computational overhead. - oai:arXiv.org:2511.13625v3 + Deep Edge Filter: Return of the Human-Crafted Layer in Deep Learning + https://arxiv.org/abs/2510.13865 + arXiv:2510.13865v5 Announce Type: replace +Abstract: We introduce the Deep Edge Filter, a novel approach that applies high-pass filtering to deep neural network features to improve model generalizability. Our method is motivated by our hypothesis that neural networks encode task-relevant semantic information in high-frequency components while storing domain-specific biases in low-frequency components of deep features. 
By subtracting low-pass filtered outputs from original features, our approach isolates generalizable representations while preserving architectural integrity. Experimental results across diverse domains such as Vision, Text, 3D, and Audio demonstrate consistent performance improvements regardless of model architecture and data modality. Analysis reveals that our method induces feature sparsification and effectively isolates high-frequency components, providing empirical validation of our core hypothesis. The code is available at https://github.com/dongkwani/DeepEdgeFilter. + oai:arXiv.org:2510.13865v5 cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kaichi Irie, Shuhei Watanabe, Masaki Onishi + http://creativecommons.org/licenses/by-sa/4.0/ + Dongkwan Lee, Junhoo Lee, Nojun Kwak - Tractable Probabilistic Models for Investment Planning - https://arxiv.org/abs/2511.13888 - arXiv:2511.13888v2 Announce Type: replace -Abstract: Investment planning in power utilities, such as generation and transmission expansion, requires decade-long forecasts under profound uncertainty. Forecasting of energy mix and energy use decades ahead is nontrivial. Classical approaches focus on generating a finite number of scenarios (modeled as a mixture of Diracs in statistical theory terms), which limits insight into scenario-specific volatility and hinders robust decision-making. We propose an alternative using tractable probabilistic models (TPMs), particularly sum-product networks (SPNs). These models enable exact, scalable inference of key quantities such as scenario likelihoods, marginals, and conditional probabilities, supporting robust scenario expansion and risk assessment. - This framework enables direct embedding of chance-constrained optimization into investment planning, enforcing safety or reliability with prescribed confidence levels. 
TPMs allow both scenario analysis and volatility quantification by compactly representing high-dimensional uncertainties. We demonstrate the effectiveness of the approach through a representative power system planning case study, illustrating its computational and reliability advantages over traditional scenario-based models. - oai:arXiv.org:2511.13888v2 + Tawa: Automatic Warp Specialization for Modern GPUs with Asynchronous References + https://arxiv.org/abs/2510.14719 + arXiv:2510.14719v2 Announce Type: replace +Abstract: Modern GPUs feature specialized hardware units that enable high-performance, asynchronous dataflow execution. However, the conventional SIMT programming model is fundamentally misaligned with this task-parallel hardware, creating a significant programmability gap. While hardware-level warp specialization is the key to unlocking peak performance, it forces developers to manually orchestrate complex, low-level communication and software pipelines--a process that is labor-intensive, error-prone, and unsustainable. To address this challenge, we present Tawa, an automated compiler that systematically generates high-performance, warp-specialized code from a high-level, tile-based program. Central to our approach is a novel IR abstraction, asynchronous references (aref), which expresses warp-level communication without exposing low-level hardware details. Using this abstraction, Tawa automatically partitions programs into producer-consumer roles and manages the intricate dataflow pipeline, relieving developers of invasive kernel rewriting. Evaluation on NVIDIA H100 GPUs across representative LLM kernels shows that Tawa delivers high hardware utilization, achieving up to 1.1$\times$ speedup over highly optimized cuBLAS GEMM kernels. For attention workloads, Tawa attains 1.2$\times$ speedup over Triton and matches the performance of the hand-optimized CUTLASS C++ FlashAttention-3 kernel with far less programming effort. 
+ oai:arXiv.org:2510.14719v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AR + cs.PL + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Nicolas M. Cuadrado A., Mohannad Takrouri, Ji\v{r}\'i N\v{e}me\v{c}ek, Martin Tak\'a\v{c}, Jakub Mare\v{c}ek + Hongzheng Chen, Bin Fan, Alexander Collins, Bastian Hagedorn, Evghenii Gaburov, Masahiro Masuda, Matthew Brookhart, Chris Sullivan, Jason Knight, Zhiru Zhang, Vinod Grover - AraLingBench A Human-Annotated Benchmark for Evaluating Arabic Linguistic Capabilities of Large Language Models - https://arxiv.org/abs/2511.14295 - arXiv:2511.14295v2 Announce Type: replace -Abstract: We present AraLingBench: a fully human annotated benchmark for evaluating the Arabic linguistic competence of large language models (LLMs). The benchmark spans five core categories: grammar, morphology, spelling, reading comprehension, and syntax, through 150 expert-designed multiple choice questions that directly assess structural language understanding. Evaluating 35 Arabic and bilingual LLMs reveals that current models demonstrate strong surface level proficiency but struggle with deeper grammatical and syntactic reasoning. AraLingBench highlights a persistent gap between high scores on knowledge-based benchmarks and true linguistic mastery, showing that many models succeed through memorization or pattern recognition rather than authentic comprehension. By isolating and measuring fundamental linguistic skills, AraLingBench provides a diagnostic framework for developing Arabic LLMs. The full evaluation code is publicly available on GitHub. - oai:arXiv.org:2511.14295v2 + Attention Sinks in Diffusion Language Models + https://arxiv.org/abs/2510.15731 + arXiv:2510.15731v2 Announce Type: replace +Abstract: Masked Diffusion Language Models (DLMs) have recently emerged as a promising alternative to traditional Autoregressive Models (ARMs). 
DLMs employ transformer encoders with bidirectional attention, enabling parallel token generation while maintaining competitive performance. Although their efficiency and effectiveness have been extensively studied, the internal mechanisms that govern DLMs remain largely unexplored. In this work, we conduct an empirical analysis of DLM attention patterns, focusing on the attention sinking phenomenon, an effect previously observed in various transformer-based architectures. Our findings reveal that DLMs also exhibit attention sinks, but with distinct characteristics. First, unlike in ARMs, the sink positions in DLMs tend to shift throughout the generation process, displaying a dynamic behaviour. Second, while ARMs are highly sensitive to the removal of attention sinks, DLMs remain robust: masking sinks leads to only a minor degradation in performance. These results provide new insights into the inner workings of diffusion-based language models and highlight fundamental differences in how they allocate and utilize attention compared to autoregressive models. + oai:arXiv.org:2510.15731v2 cs.CL cs.AI - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mohammad Zbeeb, Hasan Abed Al Kader Hammoud, Sina Mukalled, Nadine Rizk, Fatima Karnib, Issam Lakkis, Ammar Mohanna, Bernard Ghanem + http://creativecommons.org/licenses/by/4.0/ + Maximo Eduardo Rulli, Simone Petruzzi, Edoardo Michielon, Fabrizio Silvestri, Simone Scardapane, Alessio Devoto - Robust H-infinity control and worst-case search in constrained parametric space - https://arxiv.org/abs/2511.15480 - arXiv:2511.15480v3 Announce Type: replace -Abstract: Standard H-infinity/H2 robust control and analysis tools operate on uncertain parameters assumed to vary independently within prescribed bounds. This paper extends their capabilities in the presence of constraints coupling these parameters and restricting the parametric space. 
Focusing on the worst-case search, we demonstrate - based on the theory of upper-C1 functions - the validity of using standard, readily available smooth optimization algorithms to address this nonsmooth constrained optimization problem. In particular, we prove that the sequential quadratic programming algorithm converges to Karush-Kuhn-Tucker points, and that such conditions are satisfied by any subgradient at a local minimum. This worst-case search then enables robust controller synthesis: as in the state-of-the-art algorithm for standard robust control, identified worst-case configurations are iteratively added to an active set on which a non-smooth multi-model optimization of the controller is performed. The methodology is illustrated on a satellite benchmark with flexible appendages, of order 50 with 43 uncertain parameters. From a practical point of view, we combine the local exploitation proposed above with a global exploration using either Monte-Carlo sampling or particle swarm optimization. We show that the proposed constrained optimization effectively complements Monte-Carlo sampling by enabling fast detection of rare worst-case configurations, and that the robust controller optimization converges with fewer than 10 active configurations. - oai:arXiv.org:2511.15480v3 - eess.SY - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ervan Kassarian, Francesco Sanfedino, Daniel Alazard, Andrea Marrazza + Enhanced Sentiment Interpretation via a Lexicon-Fuzzy-Transformer Framework + https://arxiv.org/abs/2510.15843 + arXiv:2510.15843v2 Announce Type: replace +Abstract: Accurately detecting sentiment polarity and intensity in product reviews and social media posts remains challenging due to informal and domain-specific language. To address this, we propose a novel hybrid lexicon-fuzzy-transformer framework that combines rule-based heuristics, contextual deep learning, and fuzzy logic to generate continuous sentiment scores reflecting both polarity and strength. 
The pipeline begins with VADER-based initial sentiment estimations, which are refined through a two-stage adjustment process. This involves leveraging confidence scores from DistilBERT, a lightweight transformer, and applying fuzzy logic principles to mitigate excessive neutrality bias and enhance granularity. A custom fuzzy inference system then maps the refined scores onto a 0 to 1 continuum, producing expert-like judgments. The framework is rigorously evaluated on four domain-specific datasets: food delivery, e-commerce, tourism, and fashion. Results show improved alignment with user ratings, better identification of sentiment extremes, and reduced misclassifications. Both quantitative metrics (distributional alignment, confusion matrices) and qualitative insights (case studies, runtime analysis) affirm the model's robustness and efficiency. This work demonstrates the value of integrating symbolic reasoning with neural models for interpretable, fine-grained sentiment analysis in linguistically dynamic domains. + oai:arXiv.org:2510.15843v2 + cs.CL + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Shayan Rokhva, Mousa Alizadeh, Maryam Abdollahi Shamami - What Does It Take to Be a Good AI Research Agent? Studying the Role of Ideation Diversity - https://arxiv.org/abs/2511.15593 - arXiv:2511.15593v2 Announce Type: replace -Abstract: AI research agents offer the promise to accelerate scientific progress by automating the design, implementation, and training of machine learning models. However, the field is still in its infancy, and the key factors driving the success or failure of agent trajectories are not fully understood. We examine the role that ideation diversity plays in agent performance. 
First, we analyse agent trajectories on MLE-bench, a well-known benchmark to evaluate AI research agents, across different models and agent scaffolds. Our analysis reveals that different models and agent scaffolds yield varying degrees of ideation diversity, and that higher-performing agents tend to have increased ideation diversity. Further, we run a controlled experiment where we modify the degree of ideation diversity, demonstrating that higher ideation diversity results in stronger performance. Finally, we strengthen our results by examining additional evaluation metrics beyond the standard medal-based scoring of MLE-bench, showing that our findings still hold across other agent performance metrics. - oai:arXiv.org:2511.15593v2 + Executable Epistemology: The Structured Cognitive Loop as an Architecture of Intentional Understanding + https://arxiv.org/abs/2510.15952 + arXiv:2510.15952v3 Announce Type: replace +Abstract: Large language models exhibit intelligence without genuine epistemic understanding, exposing a key gap: the absence of epistemic architecture. This paper introduces the Structured Cognitive Loop (SCL) as an executable epistemological framework for emergent intelligence. Unlike traditional AI research asking "what is intelligence?" (ontological), SCL asks "under what conditions does cognition emerge?" (epistemological). Grounded in philosophy of mind and cognitive phenomenology, SCL bridges conceptual philosophy and implementable cognition. Drawing on process philosophy, enactive cognition, and extended mind theory, we define intelligence not as a property but as a performed process -- a continuous loop of judgment, memory, control, action, and regulation. SCL makes three contributions. First, it operationalizes philosophical insights into computationally interpretable structures, enabling "executable epistemology" -- philosophy as structural experiment. 
Second, it shows that functional separation within cognitive architecture yields more coherent and interpretable behavior than monolithic prompt-based systems, supported by agent evaluations. Third, it redefines intelligence: not representational accuracy but the capacity to reconstruct its own epistemic state through intentional understanding. This framework impacts philosophy of mind, epistemology, and AI. For philosophy, it allows theories of cognition to be enacted and tested. For AI, it grounds behavior in epistemic structure rather than statistical regularity. For epistemology, it frames knowledge not as truth possession but as continuous reconstruction within a phenomenologically coherent loop. We situate SCL within debates on cognitive phenomenology, emergence, normativity, and intentionality, arguing that real progress requires not larger models but architectures that realize cognitive principles structurally. + oai:arXiv.org:2510.15952v3 cs.AI Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-sa/4.0/ Myung Ho Kim - A Causal Perspective on Measuring, Explaining and Mitigating Smells in LLM-Generated Code - https://arxiv.org/abs/2511.15817 - arXiv:2511.15817v3 Announce Type: replace -Abstract: Recent advances in large language models (LLMs) have accelerated their adoption in software engineering contexts. However, concerns persist about the structural quality of the code they produce. In particular, LLMs often replicate poor coding practices, introducing code smells (i.e., patterns that hinder readability, maintainability, or design integrity). 
Although prior research has examined the detection or repair of smells, we still lack a clear understanding of how and when these issues emerge in generated code. - This paper addresses this gap by systematically measuring, explaining and mitigating smell propensity in LLM-generated code. We build on the Propensity Smelly Score (PSC), a probabilistic metric that estimates the likelihood of generating particular smell types, and establish its robustness as a signal of structural quality. Using PSC as an instrument for causal analysis, we identify how generation strategy, model size, model architecture and prompt formulation shape the structural properties of generated code. Our findings show that prompt design and architectural choices play a decisive role in smell propensity and motivate practical mitigation strategies that reduce its occurrence. A user study further demonstrates that PSC helps developers interpret model behavior and assess code quality, providing evidence that smell propensity signals can support human judgement. Taken together, our work lays the groundwork for integrating quality-aware assessments into the evaluation and deployment of LLMs for code. - oai:arXiv.org:2511.15817v3 - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + Colliding with Adversaries at ECML-PKDD 2025 Model Robustness Competition 1st Prize Solution + https://arxiv.org/abs/2510.16443 + arXiv:2510.16443v2 Announce Type: replace +Abstract: This report presents the winning solution for Task 2 of Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery at ECML-PKDD 2025. The goal of the challenge was to design and train a robust ANN-based model capable of achieving high accuracy in a binary classification task on both clean and adversarial data generated with the Random Distribution Shuffle Attack (RDSA). Our solution consists of two components: a data generation phase and a robust model training phase. 
In the first phase, we produced 15 million artificial training samples using a custom methodology derived from Random Distribution Shuffle Attack (RDSA). In the second phase, we introduced a robust architecture comprising (i) a Feature Embedding Block with shared weights among features of the same type and (ii) a Dense Fusion Tail responsible for the final prediction. Training this architecture on our adversarial dataset achieved a mixed accuracy score of 80\%, exceeding the second-place solution by two percentage points. + oai:arXiv.org:2510.16443v2 cs.LG Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ Dimitris Stefanopoulos, Andreas Voskou - TS-PEFT: Unveiling Token-Level Redundancy in Parameter-Efficient Fine-Tuning - https://arxiv.org/abs/2511.16147 - arXiv:2511.16147v2 Announce Type: replace -Abstract: Current Parameter-Efficient Fine-Tuning (PEFT) methods typically operate under an implicit assumption: once a target module is selected, every token passing through it contributes equally to the downstream task and requires a parameter update. In this paper, we challenge this convention and unveil a pervasive token-level redundancy in the fine-tuning of large models. We propose TS-PEFT, a theoretically grounded framework utilizing proximal optimization to dynamically identify and skip redundant token updates during training. Our extensive experiments across Natural Language Understanding, Commonsense Reasoning, and Visual Instruction Tuning demonstrate that indiscriminately updating all tokens is not only computationally superfluous but often introduces optimization noise. Strikingly, by discarding 40%-60% of token updates, TS-PEFT consistently matches or surpasses the performance of dense baselines (e.g., LoRA, DoRA). 
Furthermore, we provide an in-depth analysis revealing that the learned token-level sparsity serves as a superior indicator of module importance compared to traditional weight norms, offering a novel data-driven perspective on the intrinsic adaptation mechanism of large models. - oai:arXiv.org:2511.16147v2 - cs.CL + DSEBench: A Test Collection for Explainable Dataset Search with Examples + https://arxiv.org/abs/2510.17228 + arXiv:2510.17228v2 Announce Type: replace +Abstract: Dataset search is a well-established task in the Semantic Web and information retrieval research. Current approaches retrieve datasets either based on keyword queries or by identifying datasets similar to a given target dataset. These paradigms fail when the information need involves both keywords and target datasets. To address this gap, we investigate a generalized task, Dataset Search with Examples (DSE), and extend it to Explainable DSE (ExDSE), which further requires identifying relevant fields of the retrieved datasets. We construct DSEBench, the first test collection that provides high-quality dataset-level and field-level annotations to support the evaluation of DSE and ExDSE, respectively. In addition, we employ a large language model to generate extensive annotations for training purposes. We establish comprehensive baselines on DSEBench by adapting and evaluating a variety of lexical, dense, and LLM-based retrieval, reranking, and explanation methods. 
+ oai:arXiv.org:2510.17228v2 cs.IR Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ Qing Shi, Jing He, Qiaosheng Chen, Gong Cheng + MemoryBench: A Benchmark for Memory and Continual Learning in LLM Systems + https://arxiv.org/abs/2510.17281 + arXiv:2510.17281v3 Announce Type: replace +Abstract: Scaling up data, parameters, and test-time computation have been the mainstream methods to improve LLM systems (LLMsys), but their upper bounds are almost reached due to the gradual depletion of high-quality data and marginal gains obtained from larger computational resource consumption. Inspired by the abilities of humans and traditional AI systems in learning from practice, constructing memory and continual learning frameworks for LLMsys has become an important and popular research direction in recent literature. Yet, existing benchmarks for LLM memory often focus on evaluating such systems on homogeneous reading comprehension tasks with long-form inputs rather than testing their abilities to learn from accumulated user feedback in service time. Therefore, we propose a user feedback simulation framework and a comprehensive benchmark covering multiple domains, languages, and types of tasks to evaluate the continual learning abilities of LLMsys. Experiments show that the effectiveness and efficiency of state-of-the-art baselines are far from satisfactory, and we hope this benchmark could pave the way for future studies on LLM memory and optimization algorithms. 
+ oai:arXiv.org:2510.17281v3 + cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Dabiao Ma, Ziming Dai, Zhimin Xin, Shu Wang, Ye Wang, Haojun Fei + Qingyao Ai, Yichen Tang, Changyue Wang, Jianming Long, Weihang Su, Yiqun Liu - ChemLabs on ChemO: A Multi-Agent System for Multimodal Reasoning on IChO 2025 - https://arxiv.org/abs/2511.16205 - arXiv:2511.16205v2 Announce Type: replace -Abstract: Olympiad-level benchmarks in mathematics and physics are crucial testbeds for advanced AI reasoning, but chemistry, with its unique multimodal symbolic language, has remained an open challenge. We introduce ChemO, a new benchmark built from the International Chemistry Olympiad (IChO) 2025. ChemO features two key innovations for automated assessment: Assessment-Equivalent Reformulation (AER), which converts problems requiring visual outputs (e.g., drawing molecules) into computationally tractable formats, and Structured Visual Enhancement (SVE), a diagnostic mechanism to disentangle a model's visual perception capabilities from its core chemical reasoning. To tackle this benchmark, we propose ChemLabs, a hierarchical multi-agent framework that mimics human expert collaboration through specialized agents for problem decomposition, perception, reasoning, and auditing. Experiments on state-of-the-art multimodal models demonstrate that combining SVE with our multi-agent system yields dramatic performance gains. Our top configuration achieves a score of 93.6 out of 100, surpassing an estimated human gold medal threshold and establishing a new state-of-the-art in automated chemical problem-solving. 
ChemO Dataset: https://huggingface.co/datasets/IDEA-AI4SCI/ChemO - oai:arXiv.org:2511.16205v2 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + PrivaDE: Privacy-preserving Data Evaluation for Blockchain-based Data Marketplaces + https://arxiv.org/abs/2510.18109 + arXiv:2510.18109v2 Announce Type: replace +Abstract: Evaluating the usefulness of data before purchase is essential when obtaining data for high-quality machine learning models, yet both model builders and data providers are often unwilling to reveal their proprietary assets. + We present PrivaDE, a privacy-preserving protocol that allows a model owner and a data owner to jointly compute a utility score for a candidate dataset without fully exposing model parameters, raw features, or labels. PrivaDE provides strong security against malicious behavior and can be integrated into blockchain-based marketplaces, where smart contracts enforce fair execution and payment. To make the protocol practical, we propose optimizations to enable efficient secure model inference, and a model-agnostic scoring method that uses only a small, representative subset of the data while still reflecting its impact on downstream training. Evaluation shows that PrivaDE performs data evaluation effectively, achieving online runtimes within 15 minutes even for models with millions of parameters. + Our work lays the foundation for fair and automated data marketplaces in decentralized machine learning ecosystems. 
+ oai:arXiv.org:2510.18109v2 + cs.CR + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Qiang Xu, Shengyuan Bai, Leqing Chen, Zijing Liu, Yu Li + Wan Ki Wong, Sahel Torkamani, Michele Ciampi, Rik Sarkar - Glass Surface Detection: Leveraging Reflection Dynamics in Flash/No-flash Imagery - https://arxiv.org/abs/2511.16887 - arXiv:2511.16887v2 Announce Type: replace -Abstract: Glass surfaces are ubiquitous in daily life, typically appearing colorless, transparent, and lacking distinctive features. These characteristics make glass surface detection a challenging computer vision task. Existing glass surface detection methods always rely on boundary cues (e.g., window and door frames) or reflection cues to locate glass surfaces, but they fail to fully exploit the intrinsic properties of the glass itself for accurate localization. We observed that in most real-world scenes, the illumination intensity in front of the glass surface differs from that behind it, which results in variations in the reflections visible on the glass surface. Specifically, when standing on the brighter side of the glass and applying a flash towards the darker side, existing reflections on the glass surface tend to disappear. Conversely, while standing on the darker side and applying a flash towards the brighter side, distinct reflections will appear on the glass surface. Based on this phenomenon, we propose NFGlassNet, a novel method for glass surface detection that leverages the reflection dynamics present in flash/no-flash imagery. Specifically, we propose a Reflection Contrast Mining Module (RCMM) for extracting reflections, and a Reflection Guided Attention Module (RGAM) for fusing features from reflection and glass surface for accurate glass surface detection. For learning our network, we also construct a dataset consisting of 3.3K no-flash and flash image pairs captured from various scenes with corresponding ground truth annotations. 
Extensive experiments demonstrate that our method outperforms the state-of-the-art methods. Our code, model, and dataset will be available upon acceptance of the manuscript. - oai:arXiv.org:2511.16887v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Benchmarking World-Model Learning + https://arxiv.org/abs/2510.19788 + arXiv:2510.19788v3 Announce Type: replace +Abstract: Model-learning agents should gather information to learn world models that support many downstream tasks and inferences, such as predicting unobserved states, estimating near- and far-term consequences of actions, planning action sequences, and detecting changes in dynamics. Current methods for learning and evaluating world models diverge from this goal: training and evaluation are anchored to next-frame prediction, and success is scored by reward maximization in the same environment. We propose WorldTest, a protocol to evaluate model-learning agents that separates reward-free interaction from a scored test phase in a different but related environment. WorldTest is open-ended $\unicode{x2014}$ models should support many different tasks unknown ahead of time $\unicode{x2014}$ and agnostic to model representation, allowing comparison across approaches. We instantiated WorldTest with AutumnBench, a suite of 43 interactive grid-world environments and 129 tasks across three families: masked-frame prediction, planning, and predicting changes to the causal dynamics. We compared 517 human participants and three frontier models on AutumnBench. We found that humans outperform the models, and scaling compute improves performance only in some environments but not others. WorldTest provides a novel template $\unicode{x2014}$ reward-free exploration, derived tests, and behavior-based scoring $\unicode{x2014}$ to evaluate what agents learn about environment dynamics, and AutumnBench exposes significant headroom in world-model learning. 
+ oai:arXiv.org:2510.19788v3 + cs.AI + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tao Yan, Hao Huang, Yiwei Lu, Zeyu Wang, Ke Xu, Yinghui Wang, Xiaojun Chang, Rynson W. H. Lau + http://creativecommons.org/licenses/by-sa/4.0/ + Archana Warrier, Dat Nguyen, Michelangelo Naim, Moksh Jain, Yichao Liang, Karen Schroeder, Cambridge Yang, Joshua B. Tenenbaum, Sebastian Vollmer, Kevin Ellis, Zenna Tavares - Mixed Reality Scenic Live Streaming for Cultural Heritage: Visual Interactions in a Historic Landscape - https://arxiv.org/abs/2511.17246 - arXiv:2511.17246v2 Announce Type: replace -Abstract: Scenic Live Streams (SLS), capturing real-world scenic sites from fixed cameras without streamers, have gained increasing popularity recently. They afford unique real-time lenses into remote sites for viewers' synchronous and collective engagement. Foregrounding its lack of dynamism and interactivity, we aim to maximize the potential of SLS by making it interactive. Namely MRSLS, we overlaid plain SLS with interactive Mixed Reality content that matches the site's geographical structures and local cultural backgrounds. We further highlight the substantial benefit of MRSLS to cultural heritage site interactions, and we demonstrate this design proposal with an MRSLS prototype at a UNESCO-listed heritage site in China. The design process includes an interview (N=6) to pinpoint local scenery and culture, as well as two iterative design studies (N=15, 14). A mixed-methods, between-subjects study (N=43, 37) shows that MRSLS affords immersive scenery appreciation, effective cultural imprints, and vivid shared experience. With its balance between cultural, participatory, and authentic attributes, we appeal for more HCI attention to (MR)SLS as an under-explored design space. 
- oai:arXiv.org:2511.17246v2 - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Enhancing Reasoning Skills in Small Persian Medical Language Models Can Outperform Large-Scale Data Training + https://arxiv.org/abs/2510.20059 + arXiv:2510.20059v4 Announce Type: replace +Abstract: Enhancing reasoning capabilities in small language models is critical for specialized applications such as medical question answering, particularly in underrepresented languages like Persian. In this study, we employ Reinforcement Learning with AI Feedback (RLAIF) and Direct preference optimization (DPO) to improve the reasoning skills of a general-purpose Persian language model. To achieve this, we translated a multiple-choice medical question-answering dataset into Persian and used RLAIF to generate rejected-preferred answer pairs, which are essential for DPO training. By prompting both teacher and student models to produce Chain-of-Thought (CoT) reasoning responses, we compiled a dataset containing correct and incorrect reasoning trajectories. This dataset, comprising 2 million tokens in preferred answers and 2.5 million tokens in rejected ones, was used to train a baseline model, significantly enhancing its medical reasoning capabilities in Persian. Remarkably, the resulting model outperformed its predecessor, gaokerena-V, which was trained on approximately 57 million tokens, despite leveraging a much smaller dataset. These results highlight the efficiency and effectiveness of reasoning-focused training approaches in developing domain-specific language models with limited data availability. 
+ oai:arXiv.org:2510.20059v4 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-sa/4.0/ - Zeyu Huang, Zuyu Xu, Yuanhao Zhang, Chengzhong Liu, Yanwei Zhao, Chuhan Shi, Jason Chen Zhao, Xiaojuan Ma + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Mehrdad Ghassabi, Sadra Hakim, Hamidreza Baradaran Kashani, Pedram Rostami - GateRA: Token-Aware Modulation for Parameter-Efficient Fine-Tuning - https://arxiv.org/abs/2511.17582 - arXiv:2511.17582v2 Announce Type: replace -Abstract: Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, DoRA, and HiRA, enable lightweight adaptation of large pre-trained models via low-rank updates. However, existing PEFT approaches apply static, input-agnostic updates to all tokens, disregarding the varying importance and difficulty of different inputs. This uniform treatment can lead to overfitting on trivial content or under-adaptation on more informative regions, especially in autoregressive settings with distinct prefill and decoding dynamics. In this paper, we propose GateRA, a unified framework that introduces token-aware modulation to dynamically adjust the strength of PEFT updates. By incorporating adaptive gating into standard PEFT branches, GateRA enables selective, token-level adaptation, preserving pre-trained knowledge for well-modeled inputs while focusing capacity on challenging cases. Empirical visualizations reveal phase-sensitive behaviors, where GateRA automatically suppresses updates for redundant prefill tokens while emphasizing adaptation during decoding. To promote confident and efficient modulation, we further introduce an entropy-based regularization that encourages near-binary gating decisions. This regularization prevents diffuse update patterns and leads to interpretable, sparse adaptation without hard thresholding. 
Finally, we present a theoretical analysis showing that GateRA induces a soft gradient-masking effect over the PEFT path, enabling continuous and differentiable control over adaptation. Experiments on multiple commonsense reasoning benchmarks demonstrate that GateRA consistently outperforms or matches prior PEFT methods. - oai:arXiv.org:2511.17582v2 cs.LG cs.AI Wed, 10 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ AAAI 2026 Jie Ou, Shuaihong Jiang, Yingjun Du, Cees G. M. Snoek + Anderson-type acceleration method for Deep Neural Network optimization + https://arxiv.org/abs/2510.20254 + arXiv:2510.20254v2 Announce Type: replace +Abstract: In this paper we consider neural network optimization. We develop an Anderson-type acceleration method for the stochastic gradient descent method, and it significantly improves network performance. We demonstrate the applicability of the method for Deep Neural Network (DNN) and Convolutional Neural Network (CNN). + oai:arXiv.org:2510.20254v2 math.NA cs.NA math.OC Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Kazufumi Ito, Tiancheng Xue - DiscoVerse: Multi-Agent Pharmaceutical Co-Scientist for Traceable Drug Discovery and Reverse Translation - https://arxiv.org/abs/2511.18259 - arXiv:2511.18259v2 Announce Type: replace -Abstract: Pharmaceutical research and development has accumulated vast and heterogeneous archives of data. Much of this knowledge stems from discontinued programs, and reusing these archives is invaluable for reverse translation. However, in practice, such reuse is often infeasible. In this work, we introduce DiscoVerse, a multi-agent co-scientist designed to support pharmaceutical research and development at Roche. 
Designed as a human-in-the-loop assistant, DiscoVerse enables domain-specific queries by delivering evidence-based answers: it retrieves relevant data, links across documents, summarises key findings and preserves institutional memory. We assess DiscoVerse through expert evaluation of source-linked outputs. Our evaluation spans a selected subset of 180 molecules from Roche's research and development repositories, encompassing over 0.87 billion BPE tokens and more than four decades of research. To our knowledge, this represents the first agentic framework to be systematically assessed on real pharmaceutical data for reverse translation, enabled by authorized access to confidential archives covering the full lifecycle of drug development. Our contributions include: role-specialized agent designs aligned with scientist workflows; human-in-the-loop support for reverse translation; expert evaluation; and a large-scale demonstration showing promising decision-making insights. In brief, across seven benchmark queries, DiscoVerse achieved near-perfect recall ($\geq 0.99$) with moderate precision ($0.71-0.91$). Qualitative assessments and three real-world pharmaceutical use cases further showed faithful, source-linked synthesis across preclinical and clinical evidence. - oai:arXiv.org:2511.18259v2 + Neural Diversity Regularizes Hallucinations in Language Models + https://arxiv.org/abs/2510.20690 + arXiv:2510.20690v2 Announce Type: replace +Abstract: Language models continue to hallucinate despite increases in parameters, compute, and data. We propose neural diversity -- decorrelated parallel representations -- as a principled mechanism that reduces hallucination rates at fixed parameter and data budgets. 
While existing mitigation strategies largely target accuracy, we provide the first formal tail bounds for hallucination probability in ensembled language models, reframing it as a second-moment reliability problem and explaining 94.3% of empirical reliability variation seen across parallel configurations. We introduce ND-LoRA (Neural Diversity Low-Rank Adaptation), combining parallel LoRA adapters with Barlow Twins regularization, and reduce hallucinations by up to 25.6% (and 14.6% on average) while preserving general accuracy. Ablations show that LoRA adapters and regularization act synergistically; causal interventions identify neural diversity as the mediating factor; and correlational studies indicate the scale of the effect: a 0.1% increase in neural correlation is associated with a 3.8% increase in hallucination. Finally, task-dependent optimality emerges: different tasks require different optimal amounts of neurodiversity. Together, our results highlight neural diversity as a third axis of scaling -- orthogonal to parameters and data -- to improve the reliability of language models at fixed budgets.
 + oai:arXiv.org:2510.20690v2
 + cs.CL
 + cs.AI
 + cs.LG
 + Thu, 11 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Kushal Chakrabarti, Nirmal Balachundhar
 
 
 - Towards Characterizing Knowledge Distillation of PPG Heart Rate Estimation Models
 - https://arxiv.org/abs/2511.18829
 - arXiv:2511.18829v2 Announce Type: replace
-Abstract: Heart rate estimation from photoplethysmography (PPG) signals generated by wearable devices such as smartwatches and fitness trackers has significant implications for the health and well-being of individuals.
Although prior work has demonstrated deep learning models with strong performance in the heart rate estimation task, in order to deploy these models on wearable devices, they must also adhere to strict memory and latency constraints. In this work, we explore and characterize how large pre-trained PPG models may be distilled to smaller models appropriate for real-time inference on the edge. We evaluate four distillation strategies through comprehensive sweeps of teacher and student model capacities: (1) hard distillation, (2) soft distillation, (3) decoupled knowledge distillation (DKD), and (4) feature distillation. We present a characterization of the resulting scaling laws describing the relationship between model size and performance. This early investigation lays the groundwork for practical and predictable methods for building edge-deployable models for physiological sensing.
 - oai:arXiv.org:2511.18829v2
 - cs.LG
 - Wed, 10 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Kanav Arora, Girish Narayanswamy, Shwetak Patel, Richard Li
 
 
 + Smaller Models, Smarter Rewards: A Two-Sided Approach to Process and Outcome Rewards
 + https://arxiv.org/abs/2510.23083
 + arXiv:2510.23083v3 Announce Type: replace
+Abstract: Generating high-quality code remains a challenge for Large Language Models (LLMs). For the evolution of reasoning models on this task, reward models are a necessary intermediate step. These models judge outcomes or intermediate steps. Decoder-only transformer models can be turned into reward models by introducing a regression layer and supervised fine-tuning. While it is known that reflection capabilities generally increase with the size of a model, we want to investigate whether state-of-the-art small language models like the Phi-4 family can be turned into usable reward models that blend process rewards and outcome rewards.
 + Targeting this goal, we construct a dataset of code samples with correctness labels derived from the APPS coding challenge benchmark. We then train a value-head model to estimate the success probability of intermediate outputs.
Our evaluation shows that small LLMs are capable of serving as effective reward models or code evaluation critics, successfully identifying correct solutions among multiple candidates. Using this critic, we achieve an improvement of over 20% in finding the most accurate code among multiple generations.
 + oai:arXiv.org:2510.23083v3
 + cs.AI
 + cs.LG
 + cs.SE
 + Thu, 11 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Jan Niklas Groeneveld, Xi Qin, Alexander Schaefer, Yaad Oren
 
 
 - Representational Stability of Truth in Large Language Models
 - https://arxiv.org/abs/2511.19166
 - arXiv:2511.19166v2 Announce Type: replace
-Abstract: Large language models (LLMs) are widely used for factual tasks such as "What treats asthma?" or "What is the capital of Latvia?". However, it remains unclear how stably LLMs encode distinctions between true, false, and neither-true-nor-false content in their internal probabilistic representations. We introduce representational stability as the robustness of an LLM's veracity representations to perturbations in the operational definition of truth. We assess representational stability by (i) training a linear probe on an LLM's activations to separate true from not-true statements and (ii) measuring how its learned decision boundary shifts under controlled label changes. Using activations from sixteen open-source models and three factual domains, we compare two types of neither statements. The first are fact-like assertions about entities we believe to be absent from any training data. We call these unfamiliar neither statements. The second are nonfactual claims drawn from well-known fictional contexts. We call these familiar neither statements.
The unfamiliar statements induce the largest boundary shifts, producing up to $40\%$ flipped truth judgements in fragile domains (such as word definitions), while familiar fictional statements remain more coherently clustered and yield smaller changes ($\leq 8.2\%$). These results suggest that representational stability stems more from epistemic familiarity than from linguistic form. More broadly, our approach provides a diagnostic for auditing and training LLMs to preserve coherent truth assignments under semantic uncertainty, rather than optimizing for output accuracy alone. - oai:arXiv.org:2511.19166v2 - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + Breaking the Circle: An Autonomous Control-Switching Strategy for Stable Orographic Soaring in MAVs + https://arxiv.org/abs/2510.23084 + arXiv:2510.23084v2 Announce Type: replace +Abstract: Orographic soaring can significantly extend the endurance of micro aerial vehicles (MAVs), but circling behavior, arising from control conflicts between the longitudinal and vertical axes, increases energy consumption and the risk of divergence. We propose a control switching method, named SAOS: Switched Control for Autonomous Orographic Soaring, which mitigates circling behavior by selectively controlling either the horizontal or vertical axis, effectively transforming the system from underactuated to fully actuated during soaring. Additionally, the angle of attack is incorporated into the INDI controller to improve force estimation. Simulations with randomized initial positions and wind tunnel experiments on two MAVs demonstrate that the SAOS improves position convergence, reduces throttle usage, and mitigates roll oscillations caused by pitch-roll coupling. These improvements enhance energy efficiency and flight stability in constrained soaring environments. 
+ oai:arXiv.org:2510.23084v2 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Samantha Dies, Courtney Maynard, Germans Savcisens, Tina Eliassi-Rad + Sunyou Hwang, Christophe De Wagter, Bart Remes, Guido de Croon - Image2Gcode: Image-to-G-code Generation for Additive Manufacturing Using Diffusion-Transformer Model - https://arxiv.org/abs/2511.20636 - arXiv:2511.20636v2 Announce Type: replace -Abstract: Mechanical design and manufacturing workflows conventionally begin with conceptual design, followed by the creation of a computer-aided design (CAD) model and fabrication through material-extrusion (MEX) printing. This process requires converting CAD geometry into machine-readable G-code through slicing and path planning. While each step is well established, dependence on CAD modeling remains a major bottleneck: constructing object-specific 3D geometry is slow and poorly suited to rapid prototyping. Even minor design variations typically necessitate manual updates in CAD software, making iteration time-consuming and difficult to scale. To address this limitation, we introduce Image2Gcode, an end-to-end data-driven framework that bypasses the CAD stage and generates printer-ready G-code directly from images and part drawings. Instead of relying on an explicit 3D model, a hand-drawn or captured 2D image serves as the sole input. The framework first extracts slice-wise structural cues from the image and then employs a denoising diffusion probabilistic model (DDPM) over G-code sequences. Through iterative denoising, the model transforms Gaussian noise into executable print-move trajectories with corresponding extrusion parameters, establishing a direct mapping from visual input to native toolpaths. By producing structured G-code directly from 2D imagery, Image2Gcode eliminates the need for CAD or STL intermediates, lowering the entry barrier for additive manufacturing and accelerating the design-to-fabrication cycle. 
This approach supports on-demand prototyping from simple sketches or visual references and integrates with upstream 2D-to-3D reconstruction modules to enable an automated pipeline from concept to physical artifact. The result is a flexible, computationally efficient framework that advances accessibility in design iteration, repair workflows, and distributed manufacturing. - oai:arXiv.org:2511.20636v2 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Beyond the Failures: Rethinking Foundation Models in Pathology + https://arxiv.org/abs/2510.23807 + arXiv:2510.23807v4 Announce Type: replace +Abstract: Despite their successes in vision and language, foundation models have stumbled in pathology, revealing low accuracy, instability, and heavy computational demands. These shortcomings stem not from tuning problems but from deeper conceptual mismatches: dense embeddings cannot represent the combinatorial richness of tissue, and current architectures inherit flaws in self-supervision, patch design, and noise-fragile pretraining. Biological complexity and limited domain innovation further widen the gap. The evidence is clear-pathology requires models explicitly designed for biological images rather than adaptations of large-scale natural-image methods whose assumptions do not hold for tissue. + oai:arXiv.org:2510.23807v4 + cs.AI + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Ziyue Wang, Yayati Jadhav, Peter Pak, Amir Barati Farimani + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Hamid R. 
Tizhoosh - Mortgage Language Model: Domain-Adaptive Pretraining with Residual Instruction, Alignment Tuning, and Task-Specific Routing - https://arxiv.org/abs/2511.21101 - arXiv:2511.21101v2 Announce Type: replace -Abstract: Large Language Models (LLMs) demonstrate exceptional capabilities across general domains, yet their application to specialized sectors such as mortgage finance requires domain-specific knowledge augmentation while preserving instruction-following fidelity. We present MortgageLLM, a novel domain-specific large language model that addresses this dual challenge. It is developed using a dual-track specialization framework from a single base model (LLaMA-3.1-8B). We opted for this dual-expert approach as a single multi-task model suffers from performance trade-offs, where optimizing for structured tasks (via SFT) degrades conversational fidelity (via DPO). Our dual-track method solves this by creating two specialists, allowing each to be optimally trained for its distinct capability. Our approach applies the instruction residual technique to restore instruction-following capabilities post-domain adaptation without supervised fine-tuning. We contribute: (1) application of this residual technique to the highly specialized mortgage finance domain; (2) a dual-expert architecture combining a conversational Q&A model and a structured task model for classification and summarization; and (3) an intelligent task routing mechanism using few-shot classification performed by one of the expert models itself. We validate our approach on domain-specific benchmarks, where our final model (MLM v2) significantly outperforms the base LLaMA-3.1-8B-Instruct, achieving an LLM-as-a-Judge summarization score of 4.58 (vs. 3.99), a Q&A score of 4.09 (vs. 4.0), and a classification score of 2.6 (vs. 1.2). On semantic similarity, our model achieved a BERTScore of 0.77 for summarization (vs. 0.74), 0.68 for Q&A (vs. 0.58), and 0.75 for classification (vs. 
0.73), substantially outperforming baseline approaches. - oai:arXiv.org:2511.21101v2 - cs.CL - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + EthVault: A Secure and Resource-Conscious FPGA-Based Ethereum Cold Wallet + https://arxiv.org/abs/2510.23847 + arXiv:2510.23847v2 Announce Type: replace +Abstract: Cryptocurrency blockchain networks safeguard digital assets using cryptographic keys, with wallets playing a critical role in generating, storing, and managing these keys. Wallets, typically categorized as hot and cold, offer varying degrees of security and convenience. However, they are generally software-based applications running on microcontrollers. Consequently, they are vulnerable to malware and side-channel attacks, allowing perpetrators to extract private keys by targeting critical algorithms, such as ECC, which processes private keys to generate public keys and authorize transactions. To address these issues, this work presents EthVault, the first hardware architecture for an Ethereum hierarchically deterministic cold wallet, featuring hardware implementations of key algorithms for secure key generation. Also, an ECC architecture resilient to side-channel and timing attacks is proposed. Moreover, an architecture of the child key derivation function, a fundamental component of cryptocurrency wallets, is proposed. The design minimizes resource usage, meeting market demand for small, portable cryptocurrency wallets. FPGA implementation results validate the feasibility of the proposed approach. The ECC architecture exhibits uniform execution behavior across varying inputs, while the complete design utilizes only 27%, 7%, and 6% of LUTs, registers, and RAM blocks, respectively, on a Xilinx Zynq UltraScale+ FPGA + oai:arXiv.org:2510.23847v2 + cs.CR + eess.SP + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1049/blc2.70028 + IET Blockchain 5, no. 
1 (2025): e70028
 + Joel Poncha Lemayian, Ghyslain Gagnon, Kaiwen Zhang, Pascal Giard
 
 
 + TeleEgo: Benchmarking Egocentric AI Assistants in the Wild
 + https://arxiv.org/abs/2510.23981
 + arXiv:2510.23981v4 Announce Type: replace
+Abstract: Egocentric AI assistants in real-world settings must process multi-modal inputs (video, audio, text), respond in real time, and retain evolving long-term memory. However, existing benchmarks typically evaluate these abilities in isolation, lack realistic streaming scenarios, or support only short-term tasks. We introduce \textbf{TeleEgo}, a long-duration, streaming, omni-modal benchmark for evaluating egocentric AI assistants in realistic daily contexts. The dataset features over 14 hours per participant of synchronized egocentric video, audio, and text across four domains: work \& study, lifestyle \& routines, social activities, and outings \& culture. All data is aligned on a unified global timeline and includes high-quality visual narrations and speech transcripts, curated through human refinement. TeleEgo defines 12 diagnostic subtasks across three core capabilities: Memory (recalling past events), Understanding (interpreting the current moment), and Cross-Memory Reasoning (linking distant events). It contains 3,291 human-verified QA items spanning multiple question formats (single-choice, binary, multi-choice, and open-ended), evaluated strictly in a streaming setting. We propose Real-Time Accuracy (RTA) to jointly capture correctness and responsiveness under tight decision windows, and Memory Persistence Time (MPT) as a forward-looking metric for long-term retention in continuous streams. In this work, we report RTA results for current models and release TeleEgo, together with an MPT evaluation framework, as a realistic and extensible benchmark for future egocentric assistants with stronger streaming memory, enabling systematic study of both real-time behavior and long-horizon memory.
+ oai:arXiv.org:2510.23981v4
 + cs.CV
 + Thu, 11 Dec 2025 00:00:00 -0500
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Jiaqi Yan, Ruilong Ren, Jingren Liu, Shuning Xu, Ling Wang, Yiheng Wang, Xinlin Zhong, Yun Wang, Long Zhang, Xiangyu Chen, Changzhi Sun, Jixiang Luo, Dell Zhang, Hao Sun, Chi Zhang, Xuelong Li
 
 
 + Control Synthesis with Reinforcement Learning: A Modeling Perspective
 + https://arxiv.org/abs/2510.25063
 + arXiv:2510.25063v2 Announce Type: replace
+Abstract: Controllers designed with reinforcement learning can be sensitive to model mismatch. We demonstrate that designing such controllers in a virtual simulation environment with an inaccurate model is not suitable for deployment in a physical setup. Controllers designed using an accurate model are robust against disturbances and small mismatches between the physical setup and the mathematical model derived from first principles, while a poor model results in a controller that performs well in simulation but fails in physical experiments. Sensitivity analysis is used to justify these discrepancies, and an empirical region-of-attraction estimation helps us visualize their robustness.
 + oai:arXiv.org:2510.25063v2
 + eess.SY
 + cs.SY
 + Thu, 11 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Nikki Xu, Hien Tran
 
 
 - Escaping the Verifier: Learning to Reason via Demonstrations
 - https://arxiv.org/abs/2511.21667
 - arXiv:2511.21667v3 Announce Type: replace
-Abstract: Training Large Language Models (LLMs) to reason often relies on Reinforcement Learning (RL) with task-specific verifiers. However, many real-world reasoning-intensive tasks lack verifiers, despite offering abundant expert demonstrations that remain under-utilized for reasoning-focused training.
We introduce RARO (Relativistic Adversarial Reasoning Optimization) that learns strong reasoning capabilities from only expert demonstrations via Inverse Reinforcement Learning. Our method sets up an adversarial game between a policy and a relativistic critic: the policy learns to mimic expert answers, while the critic aims to identify the experts among (expert, policy) answer pairs. Both the policy and the critic are trained jointly and continuously via RL, and we identify the key stabilization techniques required for robust learning. Empirically, RARO significantly outperforms strong verifier-free baselines on all of our evaluation tasks -- Countdown, DeepMath, and Poetry Writing -- and enjoys the same robust scaling trends as RL with verifiers. These results demonstrate that our method effectively elicits strong reasoning performance from expert demonstrations alone, enabling robust reasoning learning even when task-specific verifiers are unavailable. - oai:arXiv.org:2511.21667v3 + A Practitioner's Guide to Kolmogorov-Arnold Networks + https://arxiv.org/abs/2510.25781 + arXiv:2510.25781v2 Announce Type: replace +Abstract: The so-called Kolmogorov-Arnold Networks (KANs), whose design is merely inspired, rather than dictated, by the Kolmogorov superposition theorem, have emerged as a promising alternative to traditional Multilayer Perceptrons (MLPs). This review provides a systematic and comprehensive overview of the rapidly expanding KAN landscape. By collecting and categorizing a large set of open-source implementations, we map the vibrant ecosystem supporting modern KAN development. We organize the review around four core themes: + (i) presenting a precise history of Kolmogorov's superposition theory toward neural-network formulations; (ii) establishing the formal equivalence between KANs and MLPs; (iii) analyzing the critical role of basis functions; and (iv) organizing recent advancements in accuracy, efficiency, regularization, and convergence. 
+ Finally, we provide a practical Choose-Your-KAN guide to assist practitioners in selecting appropriate architectures, and we close by identifying current research gaps and future directions. The associated GitHub repository (https://github.com/AmirNoori68/kan-review) complements this paper and serves as a structured reference for ongoing KAN research. + oai:arXiv.org:2510.25781v2 cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.NA + cs.NE + math.NA + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Locke Cai, Ivan Provilkov + Amir Noorizadegan, Sifan Wang, Leevan Ling - Edge Deployment of Small Language Models, a comprehensive comparison of CPU, GPU and NPU backends - https://arxiv.org/abs/2511.22334 - arXiv:2511.22334v2 Announce Type: replace -Abstract: Edge computing processes data where it is generated, enabling faster decisions, lower bandwidth usage, and improved privacy. However, edge devices typically operate under strict constraints on processing power, memory, and energy consumption, making them unsuitable for large language models (LLMs). Fortunately, Small Language Models (SLMs) offer lightweight alternatives that bring AI inference to resource-constrained environments by significantly reducing computational cost while remaining suitable for specialization and customization. In this scenario, selecting the hardware platform that best balances performance and efficiency for SLM inference is challenging due to strict resource limitations. To address this issue, this study evaluates the inference performance and energy efficiency of commercial CPUs (Intel and ARM), GPUs (NVIDIA), and NPUs (RaiderChip) for running SLMs. GPUs, the usual platform of choice, are compared against commercial NPUs and recent multi-core CPUs. While NPUs leverage custom hardware designs optimized for computation, modern CPUs increasingly incorporate dedicated features targeting language-model workloads. 
Using a common execution framework and a suite of state-of-the-art SLMs, we analyze both maximum achievable performance and processing and energy efficiency across commercial solutions available for each platform. The results indicate that specialized backends outperform general-purpose CPUs, with NPUs achieving the highest performance by a wide margin. Bandwidth normalization proves essential for fair cross-architecture comparisons. Although low-power ARM processors deliver competitive results when energy usage is considered, metrics that combine performance and power (such as EDP) again highlight NPUs as the dominant architecture. These findings show that designs optimized for both efficiency and performance offer a clear advantage for edge workloads. - oai:arXiv.org:2511.22334v2 - cs.PF + Structural Plasticity as Active Inference: A Biologically-Inspired Architecture for Homeostatic Control + https://arxiv.org/abs/2511.02241 + arXiv:2511.02241v3 Announce Type: replace +Abstract: Traditional neural networks, while powerful, rely on biologically implausible learning mechanisms such as global backpropagation. This paper introduces the Structurally Adaptive Predictive Inference Network (SAPIN), a novel computational model inspired by the principles of active inference and the morphological plasticity observed in biological neural cultures. SAPIN operates on a 2D grid where processing units, or cells, learn by minimizing local prediction errors. The model features two primary, concurrent learning mechanisms: a local, Hebbian-like synaptic plasticity rule based on the temporal difference between a cell's actual activation and its learned expectation, and a structural plasticity mechanism where cells physically migrate across the grid to optimize their information-receptive fields. This dual approach allows the network to learn both how to process information (synaptic weights) and also where to position its computational resources (network topology). 
We validated the SAPIN model on the classic Cart Pole reinforcement learning benchmark. Our results demonstrate that the architecture can successfully solve the CartPole task, achieving robust performance. The network's intrinsic drive to minimize prediction error and maintain homeostasis was sufficient to discover a stable balancing policy. We also found that while continual learning led to instability, locking the network's parameters after achieving success resulted in a stable policy. When evaluated for 100 episodes post-locking (repeated over 100 successful agents), the locked networks maintained an average 82% success rate. + oai:arXiv.org:2511.02241v3 + cs.NE cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + q-bio.NC + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Pablo Prieto, Pablo Abad + Brennen A. Hill - Wukong's 72 Transformations: High-fidelity Textured 3D Morphing via Flow Models - https://arxiv.org/abs/2511.22425 - arXiv:2511.22425v2 Announce Type: replace -Abstract: We present WUKONG, a novel training-free framework for high-fidelity textured 3D morphing that takes a pair of source and target prompts (image or text) as input. Unlike conventional methods -- which rely on manual correspondence matching and deformation trajectory estimation (limiting generalization and requiring costly preprocessing) -- WUKONG leverages the generative prior of flow-based transformers to produce high-fidelity 3D transitions with rich texture details. To ensure smooth shape transitions, we exploit the inherent continuity of flow-based generative processes and formulate morphing as an optimal transport barycenter problem. We further introduce a sequential initialization strategy to prevent abrupt geometric distortions and preserve identity coherence. 
For faithful texture preservation, we propose a similarity-guided semantic consistency mechanism that selectively retains high-frequency details and enables precise control over blending dynamics. This avoids common artifacts like oversmoothing while maintaining semantic fidelity. Extensive quantitative and qualitative evaluations demonstrate that WUKONG significantly outperforms state-of-the-art methods, achieving superior results across diverse geometry and texture variations. - oai:arXiv.org:2511.22425v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + DropleX: Liquid sensing on tablet touchscreens + https://arxiv.org/abs/2511.02694 + arXiv:2511.02694v3 Announce Type: replace +Abstract: We present DropleX, the first system that enables liquid sensing using the capacitive touchscreen of commodity tablets. DropleX detects microliter-scale liquid samples, and performs non-invasive, through-container measurements to detect whether a drink has been spiked or if a sealed liquid has been contaminated. These capabilities are made possible by a physics-informed mechanism that disables the touchscreen's built-in adaptive filters, originally designed to reject the effects of liquid drops such as rain, without any hardware modifications. We model the touchscreen's sensing capabilities, limits, and non-idealities to inform the design of a signal processing and learning-based pipeline for liquid sensing. Our system achieves 96-99% accuracy in detecting microliter-scale adulteration in soda, wine, and milk, 93-96% accuracy in threshold detection of trace chemical concentrations, and 86-96% accuracy in through-container adulterant detection. Given the predominance of touchscreens, these exploratory results can open new opportunities for liquid sensing on everyday devices. 
+ oai:arXiv.org:2511.02694v3
 + cs.HC
 + Thu, 11 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Siqi Zhang, Mayank Goel, Justin Chan
 
 
 - Orthogonal Disentanglement with Projected Feature Alignment for Multimodal Emotion Recognition in Conversation
 - https://arxiv.org/abs/2511.22463
 - arXiv:2511.22463v2 Announce Type: replace
-Abstract: Multimodal Emotion Recognition in Conversation (MERC) significantly enhances emotion recognition performance by integrating complementary emotional cues from text, audio, and visual modalities. While existing methods commonly utilize techniques such as contrastive learning and cross-attention mechanisms to align cross-modal emotional semantics, they typically overlook modality-specific emotional nuances like micro-expressions, tone variations, and sarcastic language. To overcome these limitations, we propose Orthogonal Disentanglement with Projected Feature Alignment (OD-PFA), a novel framework designed explicitly to capture both shared semantics and modality-specific emotional cues. Our approach first decouples unimodal features into shared and modality-specific components. An orthogonal disentanglement strategy (OD) enforces effective separation between these components, aided by a reconstruction loss to maintain critical emotional information from each modality. Additionally, a projected feature alignment strategy (PFA) maps shared features across modalities into a common latent space and applies a cross-modal consistency alignment loss to enhance semantic coherence. Extensive evaluations on widely-used benchmark datasets, IEMOCAP and MELD, demonstrate the effectiveness of our proposed OD-PFA on multimodal emotion recognition tasks, as compared with state-of-the-art approaches.
- oai:arXiv.org:2511.22463v2 - cs.MM - Wed, 10 Dec 2025 00:00:00 -0500 + SnapStream: Efficient Long Sequence Decoding on Dataflow Accelerators + https://arxiv.org/abs/2511.03092 + arXiv:2511.03092v5 Announce Type: replace +Abstract: The proliferation of 100B+ parameter Large Language Models (LLMs) with 100k+ context length support have resulted in increasing demands for on-chip memory to support large KV caches. Techniques such as StreamingLLM and SnapKV demonstrate how to control KV cache size while maintaining model accuracy. Yet, these techniques are not commonly used within industrial deployments using frameworks like vLLM or SGLang. The reason is twofold: on one hand, the static graphs and continuous batching methodology employed by these frameworks make it difficult to admit modifications to the standard multi-head attention algorithm, while on the other hand, the accuracy implications of such techniques on modern instruction-following and reasoning models are not well understood, obfuscating the need for implementing these techniques. In this paper, we explore these accuracy implications on Llama-3.1-8B-Instruct and DeepSeek-R1, and develop SnapStream, a KV cache compression method that can be deployed at scale. We demonstrate the efficacy of SnapStream in a 16-way tensor-parallel deployment of DeepSeek-671B on SambaNova SN40L accelerators running at 128k context length and up to 1832 tokens per second in a real production setting. SnapStream enables $4\times$ improved on-chip memory usage and introduces minimal accuracy degradation on LongBench-v2, AIME24 and LiveCodeBench. To the best of our knowledge, this is the first implementation of sparse KV attention techniques deployed in a production inference system with static graphs and continuous batching. 
+ oai:arXiv.org:2511.03092v5 + cs.AI + cs.AR + cs.DC + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xinyi Che, Wenbo Wang, Jian Guan, Qijun Zhao + http://creativecommons.org/licenses/by/4.0/ + Jonathan Li, Nasim Farahini, Evgenii Iuliugin, Magnus Vesterlund, Christian H\"aggstr\"om, Guangtao Wang, Shubhangi Upasani, Ayush Sachdeva, Rui Li, Faline Fu, Chen Wu, Ayesha Siddiqua, John Long, Tuowen Zhao, Matheen Musaddiq, H\r{a}kan Zeffer, Yun Du, Mingran Wang, Qinghua Li, Bo Li, Urmish Thakker, Raghu Prabhakar - Visual Orientalism in the AI Era: From West-East Binaries to English-Language Centrism - https://arxiv.org/abs/2511.22931 - arXiv:2511.22931v2 Announce Type: replace -Abstract: Text-to-image AI models systematically encode geopolitical bias through visual representation. Drawing on Said's Orientalism and framing theory, we introduce Visual Orientalism - the dual standard whereby AI depicts Western nations through political-modern symbols while portraying Eastern nations through cultural-traditional symbols. Analyzing 396 AI-generated images across 12 countries and 3 models, we reveal an evolution: Visual Orientalism has shifted from traditional West-versus-East binaries to English-language centrism, where only English-speaking core countries (USA and UK) receive political representation while all other nations - including European powers - face cultural exoticization. This algorithmic reconfiguration operates through automated framing mechanisms shaped by English-language training data dominance. Our findings demonstrate how AI systems function as agents of cultural representation that perpetuate and intensify historical power asymmetries. Addressing Visual Orientalism requires rethinking of algorithmic governance and the geopolitical structures embedded in AI training data. 
- oai:arXiv.org:2511.22931v2 - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 + PUL-SLAM: Path-Uncertainty Co-Optimization with Lightweight Stagnation Detection for Efficient Robotic Exploration + https://arxiv.org/abs/2511.04180 + arXiv:2511.04180v2 Announce Type: replace +Abstract: Existing Active SLAM methodologies face issues such as slow exploration speed and suboptimal paths. To address these limitations, we propose a hybrid framework combining a Path-Uncertainty Co-Optimization Deep Reinforcement Learning framework and a Lightweight Stagnation Detection mechanism. The Path-Uncertainty Co-Optimization framework jointly optimizes travel distance and map uncertainty through a dual-objective reward function, balancing exploration and exploitation. The Lightweight Stagnation Detection reduces redundant exploration through Lidar Static Anomaly Detection and Map Update Stagnation Detection, terminating episodes on low expansion rates. Experimental results show that compared with the frontier-based method and RRT method, our approach shortens exploration time by up to 65% and reduces path distance by up to 42%, significantly improving exploration efficiency in complex environments while maintaining reliable map completeness. Ablation studies confirm that the collaborative mechanism accelerates training convergence. Empirical validation on a physical robotic platform demonstrates the algorithm's practical applicability and its successful transferability from simulation to real-world environments. + oai:arXiv.org:2511.04180v2 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhilong Zhao, Yindi Liu + Yizhen Yin, Dapeng Feng, Hongbo Chen, Yuhua Qi - The Battle of the Water Futures - https://arxiv.org/abs/2511.22986 - arXiv:2511.22986v2 Announce Type: replace -Abstract: The highly anticipated 'Battle of the Water Networks' is back with a new challenge for the water community. 
This competition will be hosted at the 4th International Joint Conference on Water Distribution Systems Analysis and Computing and Control in the Water Industry (WDSA/CCWI 2026), taking place in Paphos, Cyprus, from May 18-21, 2026. This competition embodies the core mission of Water-Futures and the theme for WDSA/CCWI 2026: "Designing the next generation of urban water (and wastewater) systems." - The objective is to design and operate a water distribution system over a long-term horizon under deep uncertainty, with interventions applied in stages. For the first time, this challenge features a staged-design approach, unobservable and unknown uncertainties, and incorporates elements of policymaking and artificial intelligence. The solutions will be assessed using a transparent and inspectable open-source evaluation framework. - oai:arXiv.org:2511.22986v2 + Model-free Adaptive Output Feedback Vibration Suppression in a Cantilever Beam + https://arxiv.org/abs/2511.06084 + arXiv:2511.06084v2 Announce Type: replace +Abstract: This paper presents a model-free adaptive control approach to suppress vibrations in a cantilevered beam excited by an unknown disturbance. The cantilevered beam under harmonic excitation is modeled using a lumped parameter approach. Based on retrospective cost optimization, a sampled-data adaptive controller is developed to suppress vibrations caused by external disturbances. Both displacement and acceleration measurements are considered for feedback. Since acceleration measurements are more sensitive to spillover, which excites higher frequency modes, a filter is developed to extract key displacement information from the acceleration data and enhance suppression performance. The vibration suppression performance is compared using both displacement and acceleration measurements. 
+ oai:arXiv.org:2511.06084v2 eess.SY + cs.RO cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-sa/4.0/ - Dennis Zanutto, Christos Michalopoulos, Lydia Tsiami, Andr\'e Artelt, Jasmin Brandt, Demetrios Eliades, Stelios Vrachimis, Stefano Alvisi, Valentina Marsili, Filippo Mazzoni, Panagiotis Smartzis, Barbara Hammer, Phoebe Koundouri, Marios Polycarpou, Dragan Savi\'c - - - Reasoning Under Pressure: How do Training Incentives Influence Chain-of-Thought Monitorability? - https://arxiv.org/abs/2512.00218 - arXiv:2512.00218v2 Announce Type: replace -Abstract: AI systems that output their reasoning in natural language offer an opportunity for safety -- we can \emph{monitor} their chain of thought (CoT) for undesirable reasoning, such as the pursuit of harmful objectives. However, the extent to which CoT faithfully reflects the underlying reasoning process, and hence the extent to which it can be usefully monitored, may be influenced by certain aspects of training. We investigate how different \emph{training incentives}, applied to a reasoning model, affect its monitorability. We introduce a novel methodology for measuring monitorability according to whether a monitor can predict a key latent variable using the model's reasoning. When controlling for accuracy, we do not find evidence for consistent effects from commonly used incentives (length penalties and KL regularisation), but we find that adversarial optimisation (penalising monitor accuracy) degrades monitor performance, while direct optimisation for monitorability does not reliably lead to improvements. Our code is available at https://github.com/QiyaoWei/reasoning-under-pressure. 
- oai:arXiv.org:2512.00218v2 - cs.AI - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Matt MacDermott, Qiyao Wei, Rada Djoneva, Francis Rhys Ward + Juan Augusto Paredes Salazar, Ankit Goel - TenonOS: A Self-Generating LibOS-on-LibOS Framework for Time-Critical Embedded Operating Systems - https://arxiv.org/abs/2512.00400 - arXiv:2512.00400v2 Announce Type: replace -Abstract: The growing complexity of embedded systems creates tension between rich functionality and strict resource and real-time constraints. Traditional monolithic operating system and hypervisor designs suffer from resource bloat and unpredictable scheduling, making them unsuitable for time-critical workloads where low latency and low jitter are essential. We propose TenonOS, a demand-driven, self-generating, lightweight operating system framework for time-critical embedded systems that rethinks both hypervisor and operating system architectures. TenonOS introduces a LibOS-on-LibOS model that decomposes hypervisor and operating system functionality into fine-grained, reusable micro-libraries. A generative orchestration engine dynamically composes these libraries to synthesize a customized runtime tailored to each application's criticality, timing requirements, and resource profile. TenonOS consists of two core components: Mortise, a minimalist micro-hypervisor, and Tenon, a real-time library operating system. Mortise provides lightweight isolation and removes the usual double-scheduler overhead in virtualized setups, while Tenon provides precise and deterministic task management. By generating only the necessary software stack per workload, TenonOS removes redundant layers, minimizes the trusted computing base, and maximizes responsiveness. Experiments show a 40.28 percent reduction in scheduling latency, an ultra-compact 361 KiB memory footprint, and strong adaptability. 
- oai:arXiv.org:2512.00400v2 - cs.OS - cs.SY - eess.SY - Wed, 10 Dec 2025 00:00:00 -0500 + ALIGN: A Vision-Language Framework for High-Accuracy Accident Location Inference through Geo-Spatial Neural Reasoning + https://arxiv.org/abs/2511.06316 + arXiv:2511.06316v2 Announce Type: replace +Abstract: Reliable geospatial information on road accidents is vital for safety analysis and infrastructure planning, yet most low- and middle-income countries continue to face a critical shortage of accurate, location-specific crash data. Existing text-based geocoding tools perform poorly in multilingual and unstructured news environments, where incomplete place descriptions and mixed language (e.g. Bangla-English) scripts obscure spatial context. To address these limitations, this study introduces ALIGN (Accident Location Inference through Geo-Spatial Neural Reasoning), a vision-language framework that emulates human spatial reasoning to infer accident location coordinates directly from available textual and map-based cues. ALIGN integrates large language and vision-language model mechanisms within a multi-stage pipeline that performs optical character recognition, linguistic reasoning, and map-level verification through grid-based spatial scanning. The framework systematically evaluates each predicted location against contextual and visual evidence, ensuring interpretable, fine-grained geolocation outcomes without requiring model retraining. Applied to Bangla-language news data source, ALIGN demonstrates consistent improvements over traditional geoparsing methods, accurately identifying district- and sub-district-level crash sites. Beyond its technical contribution, the framework establishes a high accuracy foundation for automated crash mapping in data-scarce regions, supporting evidence-driven road-safety policymaking and the broader integration of multimodal artificial intelligence in transportation analytics. 
+ oai:arXiv.org:2511.06316v2 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Xinkui Zhao, Yifan Zhang, Haidan Zhao, Hao Zhang, Qingyu Ma, Lufei Zhang, Guanjie Cheng, Shuiguang Deng, Jianwei Yin, Zuoning Chen + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + MD Thamed Bin Zaman Chowdhury, Moazzem Hossain - IRPO: Boosting Image Restoration via Post-training GRPO - https://arxiv.org/abs/2512.00814 - arXiv:2512.00814v2 Announce Type: replace -Abstract: Recent advances in post-training paradigms have achieved remarkable success in high-level generation tasks, yet their potential for low-level vision remains rarely explored. Existing image restoration (IR) methods rely on pixel-level hard-fitting to ground-truth images, struggling with over-smoothing and poor generalization. To address these limitations, we propose IRPO, a low-level GRPO-based post-training paradigm that systematically explores both data formulation and reward modeling. We first explore a data formulation principle for low-level post-training paradigm, in which selecting underperforming samples from the pre-training stage yields optimal performance and improved efficiency. Furthermore, we model a reward-level criteria system that balances objective accuracy and human perceptual preference through three complementary components: a General Reward for structural fidelity, an Expert Reward leveraging Qwen-VL for perceptual alignment, and a Restoration Reward for task-specific low-level quality. Comprehensive experiments on six in-domain and five out-of-domain (OOD) low-level benchmarks demonstrate that IRPO achieves state-of-the-art results across diverse degradation types, surpassing the AdaIR baseline by 0.83 dB on in-domain tasks and 3.43 dB on OOD settings. Our code can be shown in https://github.com/HaoxuanXU1024/IRPO. 
- oai:arXiv.org:2512.00814v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + HEDN: A Hard-Easy Dual Network with Source Reliability Assessment for Cross-Subject EEG Emotion Recognition + https://arxiv.org/abs/2511.06782 + arXiv:2511.06782v2 Announce Type: replace +Abstract: Cross-subject electroencephalography (EEG) emotion recognition remains a major challenge in brain-computer interfaces (BCIs) due to substantial inter-subject variability. Multi-Source Domain Adaptation (MSDA) offers a potential solution, but existing MSDA frameworks typically assume equal source quality, leading to negative transfer from low-reliability domains and prohibitive computational overhead due to multi-branch model designs. To address these limitations, we propose the Hard-Easy Dual Network (HEDN), a lightweight reliability-aware MSDA framework. HEDN introduces a novel Source Reliability Assessment (SRA) mechanism that dynamically evaluates the structural integrity of each source domain during training. Based on this assessment, sources are routed to two specialized branches: an Easy Network that exploits high-quality sources to construct fine-grained, structure-aware prototypes for reliable pseudo-label generation, and a Hard Network that utilizes adversarial training to refine and align low-quality sources. Furthermore, a cross-network consistency loss aligns predictions between branches to preserve semantic coherence. Extensive experiments conducted on SEED, SEED-IV, and DEAP datasets demonstrate that HEDN achieves state-of-the-art performance across both cross-subject and cross-dataset evaluation protocols while reducing adaptation complexity. 
+ oai:arXiv.org:2511.06782v2 + cs.HC + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haoxuan Xu, Yi Liu, Boyuan Jiang, Jinlong Peng, Donghao Luo, Xiaobin Hu, Shuicheng Yan, Haoang Li + Qiang Wang, Liying Yang, Jiayun Song, Yifan Bai, Jingtao Du - ReJump: A Tree-Jump Representation for Analyzing and Improving LLM Reasoning - https://arxiv.org/abs/2512.00831 - arXiv:2512.00831v2 Announce Type: replace -Abstract: Large Reasoning Models (LRMs) are Large Language Models (LLMs) explicitly trained to generate long-form Chain-of-Thoughts (CoTs), achieving impressive success on challenging tasks like math and programming. However, their underlying reasoning "algorithms" remain poorly understood. To investigate this, we propose ReJump, which represents a reasoning trace as a visitation order over nodes in a tree of intermediate problem-solving steps. Transitions between nodes, which we term jumps, include adjacent moves that capture behaviors such as calculation, and non-adjacent moves that capture behaviors such as backtracking and verification. ReJump enables analyzing LLM reasoning with diverse metrics that quantify exploration, exploitation, overthinking, forgetting, and verification. Using our proposed LLM agent to extract reasoning traces into ReJump format, we evaluate state-of-the-art LRMs on two tasks and find that models with similar accuracy can exhibit distinct reasoning behaviors, while different tasks favor different reasoning styles (e.g., varying balance between exploration and exploitation). To further understand how learning strategies shape reasoning, we use ReJump to compare distilled LRMs with their teachers, CoT-prompted LLMs with LRMs, and to examine how the number of reasoning examples and reinforcement learning affect reasoning behavior. Finally, we show that ReJump can improve reasoning quality at test time through strategies such as ReJump-guided Best-of-N selection and prompt selection. 
Our code is publicly available at https://github.com/UW-Madison-Lee-Lab/ReJump. - oai:arXiv.org:2512.00831v2 + LLMscape + https://arxiv.org/abs/2511.07161 + arXiv:2511.07161v3 Announce Type: replace +Abstract: LLMscape is an interactive installation that investigates how humans and AI construct meaning under shared conditions of uncertainty. Within a mutable, projection-mapped landscape, human participants reshape the world and engage with multiple AI agents, each developing incomplete and provisional accounts of their environment. Exhibited in Shanghai and continually evolving, the work positions AI not as deterministic tools but as embodied co-witnesses to an unstable world, examining the parallels between human and artificial meaning-making and inviting reflection on our shared epistemic limits. + oai:arXiv.org:2511.07161v3 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yuchen Zeng, Shuibai Zhang, Wonjun Kang, Shutong Wu, Lynnix Zou, Ying Fan, Heeju Kim, Ziqian Lin, Jungtaek Kim, Hyung Il Koo, Dimitris Papailiopoulos, Kangwook Lee + Gottfried Haider, Jie Zhang - The Necessity of Imperfection:Reversing Model Collapse via Simulating Cognitive Boundedness - https://arxiv.org/abs/2512.01354 - arXiv:2512.01354v3 Announce Type: replace -Abstract: Although synthetic data is widely promoted as a remedy, its prevailing production paradigm -- one optimizing for statistical smoothness -- systematically removes the long-tail, cognitively grounded irregularities that characterize human text. Prolonged training on such statistically optimal but cognitively impoverished data accelerates model collapse. - This paper proposes a paradigm shift: instead of imitating the surface properties of data, we simulate the cognitive processes that generate human text. 
We introduce the Prompt-driven Cognitive Computing Framework (PMCSF), whose core consists of a Cognitive State Decoder (CSD) that reverse-engineers unstructured text into structured cognitive vectors, and a Cognitive Text Encoder (CTE) that re-materializes these states into text enriched with human-typical imperfections via mathematically defined Cognitive Perturbation Operators. - The framework is validated through a two-stage objective evaluation pipeline. First, in cognitive codec verification, CTE text yields a Jensen-Shannon divergence of 0.0614 from human text (vs. 0.4431 for standard LLM output), passes double-blind professional media review, and achieves an intraclass correlation coefficient ICC > 0.9 for cognitive profile alignment across heterogeneous models. Second, in functional gain evaluation, isomorphic stress tests in the A-share market show that strategies incorporating CTE-generated data reduce maximum drawdown by 47.4% during the 2015 crash and deliver 8.6% Defensive Alpha, exceeding transaction costs by a factor of 33. - Our findings demonstrate that modelling human cognitive limitations -- not copying surface data -- enables synthetic data with genuine functional gain, offering a viable technical pathway toward resolving the AI data-collapse crisis. - oai:arXiv.org:2512.01354v3 - cs.AI - cs.CL - cs.CY - cs.LG - q-fin.TR - Wed, 10 Dec 2025 00:00:00 -0500 + Finite Volume Analysis of the Poisson Problem via a Reduced Discontinuous Galerkin Space + https://arxiv.org/abs/2511.09099 + arXiv:2511.09099v2 Announce Type: replace +Abstract: In this paper, we propose and analyze a high-order finite volume method for the Poisson problem based on the reduced discontinuous Galerkin (RDG) space. The main idea is to employ the RDG space as the trial space and the piecewise constant space as the test space, thereby formulating the scheme in a Petrov-Galerkin framework. 
This approach inherits the local conservation property of finite volume methods while benefiting from the approximation capabilities of discontinuous Galerkin spaces with significantly fewer degrees of freedom. We establish a rigorous error analysis of the proposed scheme: in particular, we prove optimal-order convergence in the DG energy norm and suboptimal-order convergence in \(L^2\) norm. The theoretical analysis is supported by a set of one- and two-dimensional numerical experiments with Dirichlet and periodic boundary conditions, which confirm both the accuracy and efficiency of the method. The significance of this work lies in bridging finite volume and discontinuous Galerkin methodologies through the RDG space, thus enabling finite volume schemes with a mathematically rigorous convergence theory. + oai:arXiv.org:2511.09099v2 + math.NA + cs.NA + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Zhongjie Jiang + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Wenbo Hu, Yinhua Xia - CLAPS: Posterior-Aware Conformal Intervals via Last-Layer Laplace - https://arxiv.org/abs/2512.01384 - arXiv:2512.01384v2 Announce Type: replace -Abstract: We present CLAPS, a posterior-aware conformal regression method that pairs a Last-Layer Laplace Approximation with split-conformal calibration. From the resulting Gaussian posterior, CLAPS defines a simple two-sided posterior CDF score that aligns the conformity metric with the full predictive shape, not just a point estimate. This alignment yields narrower prediction intervals at the same target coverage, especially on small to medium tabular datasets where data are scarce and uncertainty modeling matters. We also provide a lightweight diagnostic suite that separates aleatoric and epistemic components and visualizes posterior behavior, helping practitioners understand why intervals shrink when they do. 
Across multiple benchmarks using the same MLP backbone, CLAPS consistently attains nominal coverage with improved efficiency and minimal overhead, offering a clear, practical upgrade to residual-based conformal baselines. - oai:arXiv.org:2512.01384v2 + Classifying Phonotrauma Severity from Vocal Fold Images with Soft Ordinal Regression + https://arxiv.org/abs/2511.09702 + arXiv:2511.09702v2 Announce Type: replace +Abstract: Phonotrauma refers to vocal fold tissue damage resulting from exposure to forces during voicing. It occurs on a continuum from mild to severe, and treatment options can vary based on severity. Assessment of severity involves a clinician's expert judgment, which is costly and can vary widely in reliability. In this work, we present the first method for automatically classifying phonotrauma severity from vocal fold images. To account for the ordinal nature of the labels, we adopt a widely used ordinal regression framework. To account for label uncertainty, we propose a novel modification to ordinal regression loss functions that enables them to operate on soft labels reflecting annotator rating distributions. Our proposed soft ordinal regression method achieves predictive performance approaching that of clinical experts, while producing well-calibrated uncertainty estimates. By providing an automated tool for phonotrauma severity assessment, our work can enable large-scale studies of phonotrauma, ultimately leading to improved clinical understanding and patient care. + oai:arXiv.org:2511.09702v2 + cs.CV cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh + http://creativecommons.org/licenses/by/4.0/ + Katie Matton, Purvaja Balaji, Hamzeh Ghasemzadeh, Jameson C. Cooper, Daryush D. Mehta, Jarrad H. Van Stan, Robert E. Hillman, Rosalind Picard, John Guttag, S. 
Mazdak Abulnaga - CourtMotion: Learning Event-Driven Motion Representations from Skeletal Data for Basketball - https://arxiv.org/abs/2512.01478 - arXiv:2512.01478v2 Announce Type: replace -Abstract: This paper presents CourtMotion, a spatiotemporal modeling framework for analyzing and predicting game events and plays as they develop in professional basketball. Anticipating basketball events requires understanding both physical motion patterns and their semantic significance in the context of the game. Traditional approaches that use only player positions fail to capture crucial indicators such as body orientation, defensive stance, or shooting preparation motions. Our two-stage approach first processes skeletal tracking data through Graph Neural Networks to capture nuanced motion patterns, then employs a Transformer architecture with specialized attention mechanisms to model player interactions. We introduce event projection heads that explicitly connect player movements to basketball events like passes, shots, and steals, training the model to associate physical motion patterns with their tactical purposes. Experiments on NBA tracking data demonstrate significant improvements over position-only baselines: 35% reduction in trajectory prediction error compared to state-of-the-art position-based models and consistent performance gains across key basketball analytics tasks. The resulting pretrained model serves as a powerful foundation for multiple downstream tasks, with pick detection, shot taker identification, assist prediction, shot location classification, and shot type recognition demonstrating substantial improvements over existing methods. 
- oai:arXiv.org:2512.01478v2 - cs.CV - cs.MA - Wed, 10 Dec 2025 00:00:00 -0500 + Boosting In-Silicon Directed Evolution with Fine-Tuned Protein Language Model and Tree Search + https://arxiv.org/abs/2511.09900 + arXiv:2511.09900v3 Announce Type: replace +Abstract: Protein evolution through amino acid mutations is a cornerstone of life sciences. Recent advances in protein language models have shown rich evolutionary patterns, offering unprecedented potential for in-silicon directed evolution. However, existing directed evolution methods largely rely on heuristic evolution strategies and have yet to efficiently integrate the transformative protein language models with advanced optimization techniques, such as reinforcement learning, to learn optimal evolution policies. To bridge this gap, we propose AlphaDE, a novel framework that evolves protein sequences by harnessing the innovative paradigms of large language models, such as fine-tuning and test-time inference. First, AlphaDE fine-tunes pretrained protein language models using masked language modeling on homologous protein sequences to activate the evolutionary plausibility of the interested protein family. Second, AlphaDE introduces test-time inference based on Monte Carlo tree search, which effectively evolves proteins with evolutionary guidance from the fine-tuned protein language model. Extensive benchmark experiments show that AlphaDE remarkably outperforms previous state-of-the-art methods even with few-shot fine-tuning. A case study further demonstrates that AlphaDE supports condensing the protein sequence space of avGFP through computational evolution. 
+ oai:arXiv.org:2511.09900v3 + cs.AI + cs.CE + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Omer Sela (Amazon, Tel Aviv University), Michael Chertok (Amazon), Lior Wolf (Tel Aviv University) + Yaodong Yang, Yang Wang, Jinpeng Li, Pei Guo, Da Han, Guangyong Chen, Pheng-Ann Heng - StarDist: A Code Generator for Distributed Graph Algorithms - https://arxiv.org/abs/2512.01646 - arXiv:2512.01646v2 Announce Type: replace -Abstract: Relational data, occurring in the real world, are often structured as graphs, which provide the logical abstraction required to make analytical derivations simpler. As graphs get larger, the irregular access patterns exhibited in most graph algorithms, hamper performance. This, along with NUMA and physical memory limits, results in scaling complexities with sequential/shared memory frameworks. StarPlat's MPI backend abstracts away the programmatic complexity involved in designing optimal distributed graph algorithms. It provides an instrument for coding graph algorithms that scale over distributed memory. In this work, we provide an analysis-transformation framework that leverages general semantics associated with iterations involving nodes and their neighbors, within StarPlat, to aggregate communication. The framework scans for patterns that warrant re-ordering in neighborhood access patterns, aggregate communication, and avoid communication altogether with opportunistic caching in reduction constructs. We also architect an optimized bulk-reduction substrate using Open MPI's passive Remote Memory Access (RMA) constructs. We applied our optimization logic to StarPlat's distributed backend and outperformed d-Galois by 2.05 and DRONE by 1.44 times in Single Source Shortest Paths across several big data graphs. 
- oai:arXiv.org:2512.01646v2 - cs.DC - Wed, 10 Dec 2025 00:00:00 -0500 + Online Price Competition under Generalized Linear Demands + https://arxiv.org/abs/2511.10718 + arXiv:2511.10718v3 Announce Type: replace +Abstract: We study sequential price competition among $N$ sellers, each influenced by the pricing decisions of their rivals. Specifically, the demand function for each seller $i$ follows the single index model $\lambda_i(\mathbf{p}) = \mu_i(\langle \boldsymbol{\theta}_{i,0}, \mathbf{p} \rangle)$, with known increasing link $\mu_i$ and unknown parameter $\boldsymbol{\theta}_{i,0}$, where the vector $\mathbf{p}$ denotes the vector of prices offered by all the sellers simultaneously at a given instant. Each seller observes only their own realized demand -- unobservable to competitors -- and the prices set by rivals. Our framework generalizes existing approaches that focus solely on linear demand models. We propose a novel decentralized policy, PML-GLUCB, that combines penalized MLE with an upper-confidence pricing rule, removing the need for coordinated exploration phases across sellers -- which is integral to previous linear models -- and accommodating both binary and real-valued demand observations. Relative to a dynamic benchmark policy, each seller achieves $O(N^{2}\sqrt{T}\log(T))$ regret, which essentially matches the optimal rate known in the linear setting. A significant technical contribution of our work is the development of a variant of the elliptical potential lemma -- typically applied in single-agent systems -- adapted to our competitive multi-agent environment. 
+ oai:arXiv.org:2511.10718v3 + cs.GT + math.ST + stat.ME + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Barenya Kumar Nandy, Rupesh Nasre + http://creativecommons.org/licenses/by/4.0/ + Daniele Bracale, Moulinath Banerjee, Cong Shi, Yuekai Sun - Inverse Optimality for Fair Digital Twins: A Preference-based approach - https://arxiv.org/abs/2512.01650 - arXiv:2512.01650v2 Announce Type: replace -Abstract: Digital Twins (DTs) are increasingly used as autonomous decision-makers in complex socio-technical systems. However, their mathematically optimal decisions often diverge from human expectations, revealing a persistent mismatch between algorithmic and bounded human rationality. This work addresses this challenge by proposing a framework that introduces fairness as a learnable objective within optimization-based Digital Twins. In this respect, a preference-driven learning workflow that infers latent fairness objectives directly from human pairwise preferences over feasible decisions is introduced. A dedicated Siamese neural network is developed to generate convex quadratic cost functions conditioned on contextual information. The resulting surrogate objectives drive the optimization procedure toward solutions that better reflect human-perceived fairness while maintaining computational efficiency. The effectiveness of the approach is demonstrated on a COVID-19 hospital resource allocation scenario. Overall, this work offers a practical solution to integrate human-centered fairness into the design of autonomous decision-making systems. 
- oai:arXiv.org:2512.01650v2 + Forgetting-MarI: LLM Unlearning via Marginal Information Regularization + https://arxiv.org/abs/2511.11914 + arXiv:2511.11914v2 Announce Type: replace +Abstract: As AI models are trained on ever-expanding datasets, the ability to remove the influence of specific data from trained models has become essential for privacy protection and regulatory compliance. Unlearning addresses this challenge by selectively removing parametric knowledge from the trained models without retraining from scratch, which is critical for resource-intensive models such as Large Language Models (LLMs). Existing unlearning methods often degrade model performance by removing more information than necessary when attempting to ''forget'' specific data. We introduce Forgetting-MarI, an LLM unlearning framework that provably removes only the additional (marginal) information contributed by the data to be unlearned, while preserving the information supported by the data to be retained. By penalizing marginal information, our method yields an explicit upper bound on the unlearn dataset's residual influence in the trained models, providing provable undetectability. Extensive experiments confirm that our approach outperforms current state-of-the-art unlearning methods, delivering reliable forgetting and better preserved general model performance across diverse benchmarks. This advancement represents an important step toward making AI systems more controllable and compliant with privacy and copyright regulations without compromising their effectiveness. 
+ oai:arXiv.org:2511.11914v2 + cs.AI + cs.CL + cs.CR + cs.IT cs.LG - cs.SE - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Daniele Masti, Francesco Basciani, Arianna Fedeli, Girgio Gnecco, Francesco Smarra + Shizhou Xu, Yuan Ni, Stefan Broecker, Thomas Strohmer - HERMES: Heterogeneous Application-Enabled Routing Middleware for Edge-IoT Systems - https://arxiv.org/abs/2512.01824 - arXiv:2512.01824v2 Announce Type: replace -Abstract: The growth of the Internet of Things has enabled a new generation of applications, pushing computation and intelligence toward the network edge. This trend, however, exposes challenges, as the heterogeneity of devices and the complex requirements of applications are often misaligned with the assumptions of traditional routing protocols, which lack the flexibility to accommodate application-layer metrics and policies. This work addresses this gap by proposing a software framework that enhances routing flexibility by dynamically incorporating application-aware decisions. The core of the work establishes a multi-hop Wi-Fi network of heterogeneous devices, specifically ESP8266, ESP32, and Raspberry Pi 3B. The routing layer follows a proactive approach, while the network is fault-tolerant, maintaining operation despite both node loss and message loss. On top of this, a middleware layer introduces three strategies for influencing routing behavior: two adapt the path a message traverses until arriving at the destination, while the third allows applications to shape the network topology. This layer offers a flexible interface for diverse applications. The framework was validated on a physical testbed through edge intelligence use cases, including distributing neural network inference computations across multiple devices and offloading the entire workload to the most capable node. 
Distributed inference is useful in scenarios requiring low latency, energy efficiency, privacy, and autonomy. Experimental results indicated that device heterogeneity significantly impacts network performance. Throughput and inference duration analysis showed the influence of the strategies on application behaviour, revealed that topology critically affects decentralized performance, and demonstrated the suitability of the framework for complex tasks. - oai:arXiv.org:2512.01824v2 - cs.NI - cs.NE - Wed, 10 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - J\'essica Consci\^encia, Ant\'onio Grilo + O-Mem: Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents + https://arxiv.org/abs/2511.13593 + arXiv:2511.13593v3 Announce Type: replace +Abstract: Recent advancements in LLM-powered agents have demonstrated significant potential in generating human-like responses; however, they continue to face challenges in maintaining long-term interactions within complex environments, primarily due to limitations in contextual consistency and dynamic personalization. Existing memory systems often depend on semantic grouping prior to retrieval, which can overlook semantically irrelevant yet critical user information and introduce retrieval noise. In this report, we propose the initial design of O-Mem, a novel memory framework based on active user profiling that dynamically extracts and updates user characteristics and event records from their proactive interactions with agents. O-Mem supports hierarchical retrieval of persona attributes and topic-related context, enabling more adaptive and coherent personalized responses. O-Mem achieves 51.67% on the public LoCoMo benchmark, a nearly 3% improvement upon LangMem, the previous state-of-the-art, and it achieves 62.99% on PERSONAMEM, a 3.5% improvement upon A-Mem, the previous state-of-the-art. O-Mem also boosts token and interaction response time efficiency compared to previous memory frameworks. 
Our work opens up promising directions for developing efficient and human-like personalized AI assistants in the future. + oai:arXiv.org:2511.13593v3 + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - J\'essica Consci\^encia, Ant\'onio Grilo + Piaohong Wang, Motong Tian, Jiaxian Li, Yuan Liang, Yuqing Wang, Qianben Chen, Tiannan Wang, Zhicong Lu, Jiawei Ma, Yuchen Eleanor Jiang, Wangchunshu Zhou - The Mean-Field Dynamics of Transformers - https://arxiv.org/abs/2512.01868 - arXiv:2512.01868v2 Announce Type: replace -Abstract: We develop a mathematical framework that interprets Transformer attention as an interacting particle system and studies its continuum (mean-field) limits. By idealizing attention on the sphere, we connect Transformer dynamics to Wasserstein gradient flows, synchronization models (Kuramoto), and mean-shift clustering. Central to our results is a global clustering phenomenon whereby tokens cluster asymptotically after long metastable states where they are arranged into multiple clusters. We further analyze a tractable equiangular reduction to obtain exact clustering rates, show how commonly used normalization schemes alter contraction speeds, and identify a phase transition for long-context attention. The results highlight both the mechanisms that drive representation collapse and the regimes that preserve expressive, multi-cluster structure in deep attention architectures. 
- oai:arXiv.org:2512.01868v2 - cs.LG - math-ph - math.DS - math.MP - math.PR - Wed, 10 Dec 2025 00:00:00 -0500 + GloTok: Global Perspective Tokenizer for Image Reconstruction and Generation + https://arxiv.org/abs/2511.14184 + arXiv:2511.14184v3 Announce Type: replace +Abstract: Existing state-of-the-art image tokenization methods leverage diverse semantic features from pre-trained vision models for additional supervision, to expand the distribution of latent representations and thereby improve the quality of image reconstruction and generation. These methods employ a locally supervised approach for semantic supervision, which limits the uniformity of semantic distribution. However, VA-VAE proves that a more uniform feature distribution yields better generation performance. In this work, we introduce a Global Perspective Tokenizer (GloTok), which utilizes global relational information to model a more uniform semantic distribution of tokenized features. Specifically, a codebook-wise histogram relation learning method is proposed to transfer the semantics, which are modeled by pre-trained models on the entire dataset, to the semantic codebook. Then, we design a residual learning module that recovers the fine-grained details to minimize the reconstruction error caused by quantization. Through the above design, GloTok delivers more uniformly distributed semantic latent representations, which facilitates the training of autoregressive (AR) models for generating high-quality images without requiring direct access to pre-trained models during the training process. Experiments on the standard ImageNet-1k benchmark clearly show that our proposed method achieves state-of-the-art reconstruction performance and generation quality. 
+ oai:arXiv.org:2511.14184v3 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Philippe Rigollet + http://creativecommons.org/licenses/by/4.0/ + Xuan Zhao, Zhongyu Zhang, Yuge Huang, Yuxi Mi, Guodong Mu, Shouhong Ding, Jun Wang, Rizen Guo, Shuigeng Zhou - An Improved Ensemble-Based Machine Learning Model with Feature Optimization for Early Diabetes Prediction - https://arxiv.org/abs/2512.02023 - arXiv:2512.02023v2 Announce Type: replace -Abstract: Diabetes is a serious worldwide health issue, and successful intervention depends on early detection. However, overlapping risk factors and data asymmetry make prediction difficult. To use extensive health survey data to create a machine learning framework for diabetes classification that is both accurate and comprehensible, to produce results that will aid in clinical decision-making. Using the BRFSS dataset, we assessed a number of supervised learning techniques. SMOTE and Tomek Links were used to correct class imbalance. To improve prediction performance, both individual models and ensemble techniques such as stacking were investigated. The 2015 BRFSS dataset, which includes roughly 253,680 records with 22 numerical features, is used in this study. Strong ROC-AUC performance of approximately 0.96 was attained by the individual models Random Forest, XGBoost, CatBoost, and LightGBM.The stacking ensemble with XGBoost and KNN yielded the best overall results with 94.82\% accuracy, ROC-AUC of 0.989, and PR-AUC of 0.991, indicating a favourable balance between recall and precision. In our study, we proposed and developed a React Native-based application with a Python Flask backend to support early diabetes prediction, providing users with an accessible and efficient health monitoring tool. 
- oai:arXiv.org:2512.02023v2 - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + PathMind: A Retrieve-Prioritize-Reason Framework for Knowledge Graph Reasoning with Large Language Models + https://arxiv.org/abs/2511.14256 + arXiv:2511.14256v2 Announce Type: replace +Abstract: Knowledge graph reasoning (KGR) is the task of inferring new knowledge by performing logical deductions on knowledge graphs. Recently, large language models (LLMs) have demonstrated remarkable performance in complex reasoning tasks. Despite promising success, current LLM-based KGR methods still face two critical limitations. First, existing methods often extract reasoning paths indiscriminately, without assessing their different importance, which may introduce irrelevant noise that misleads LLMs. Second, while many methods leverage LLMs to dynamically explore potential reasoning paths, they require high retrieval demands and frequent LLM calls. To address these limitations, we propose PathMind, a novel framework designed to enhance faithful and interpretable reasoning by selectively guiding LLMs with important reasoning paths. Specifically, PathMind follows a "Retrieve-Prioritize-Reason" paradigm. First, it retrieves a query subgraph from KG through the retrieval module. Next, it introduces a path prioritization mechanism that identifies important reasoning paths using a semantic-aware path priority function, which simultaneously considers the accumulative cost and the estimated future cost for reaching the target. Finally, PathMind generates accurate and logically consistent responses via a dual-phase training strategy, including task-specific instruction tuning and path-wise preference alignment. Extensive experiments on benchmark datasets demonstrate that PathMind consistently outperforms competitive baselines, particularly on complex reasoning tasks with fewer input tokens, by identifying essential reasoning paths. 
+ oai:arXiv.org:2511.14256v2 + cs.AI + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Md. Najmul Islam, Md. Miner Hossain Rimon, Shah Sadek-E-Akbor Shamim, Zarif Mohaimen Fahad, Md. Jehadul Islam Mony, Md. Jalal Uddin Chowdhury + Yu Liu, Xixun Lin, Yanmin Shang, Yangxi Li, Shi Wang, Yanan Cao - Aetheria: A multimodal interpretable content safety framework based on multi-agent debate and collaboration - https://arxiv.org/abs/2512.02530 - arXiv:2512.02530v2 Announce Type: replace -Abstract: The exponential growth of digital content presents significant challenges for content safety. Current moderation systems, often based on single models or fixed pipelines, exhibit limitations in identifying implicit risks and providing interpretable judgment processes. To address these issues, we propose Aetheria, a multimodal interpretable content safety framework based on multi-agent debate and collaboration.Employing a collaborative architecture of five core agents, Aetheria conducts in-depth analysis and adjudication of multimodal content through a dynamic, mutually persuasive debate mechanism, which is grounded by RAG-based knowledge retrieval.Comprehensive experiments on our proposed benchmark (AIR-Bench) validate that Aetheria not only generates detailed and traceable audit reports but also demonstrates significant advantages over baselines in overall content safety accuracy, especially in the identification of implicit risks. This framework establishes a transparent and interpretable paradigm, significantly advancing the field of trustworthy AI content moderation. 
- oai:arXiv.org:2512.02530v2 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yuxiang He, Jian Zhao, Yuchen Yuan, Tianle Zhang, Wei Cai, Haojie Cheng, Ziyan Shi, Ming Zhu, Haichuan Tang, Chi Zhang, Xuelong Li + Flow-Aided Flight Through Dynamic Clutters From Point To Motion + https://arxiv.org/abs/2511.16372 + arXiv:2511.16372v2 Announce Type: replace +Abstract: Challenges in traversing dynamic clutters lie mainly in the efficient perception of the environmental dynamics and the generation of evasive behaviors that account for obstacle movement. Previous solutions have made progress in explicitly modeling dynamic obstacle motion for avoidance, but this key dependency of decision-making is time-consuming and unreliable in highly dynamic scenarios with occlusions. In contrast, without introducing object detection, tracking, and prediction, we empower reinforcement learning (RL) with single-LiDAR sensing to realize an autonomous flight system directly from point to motion. For exteroception, a depth-sensing distance map that is fixed in shape, low in resolution, and detail-safe is encoded from raw point clouds, and an environment-change-sensing point flow is adopted as motion features extracted from multi-frame observations. These two are integrated into a lightweight and easy-to-learn representation of complex dynamic environments. For action generation, the behavior of avoiding dynamic threats in advance is implicitly driven by the proposed change-aware sensing representation, where the policy optimization is guided by the relative-motion-modulated distance field. With the deployment-friendly sensing simulation and dynamics-model-free acceleration control, the proposed system shows a superior success rate and adaptability compared to alternatives, and the policy derived from the simulator can drive a real-world quadrotor with safe maneuvers. 
+ oai:arXiv.org:2511.16372v2 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yuxiang He, Jian Zhao, Yuchen Yuan, Tianle Zhang, Wei Cai, Haojie Cheng, Ziyan Shi, Ming Zhu, Haichuan Tang, Chi Zhang, Xuelong Li + Bowen Xu, Zexuan Yan, Minghao Lu, Xiyu Fan, Yi Luo, Youshen Lin, Zhiqiang Chen, Yeke Chen, Qiyuan Qiao, Peng Lu - Strengthening Han's Fourier Entropy-Influence Inequality via an Information-Theoretic Proof - https://arxiv.org/abs/2512.03117 - arXiv:2512.03117v3 Announce Type: replace -Abstract: We strengthen Han's Fourier entropy-influence inequality $$ H[\widehat{f}] \leq C_{1}I(f) + C_{2}\sum_{i\in [n]}I_{i}(f)\ln\frac{1}{I_{i}(f)} $$ originally proved for $\{-1,1\}$-valued Boolean functions with $C_{1}=3+2\ln 2$ and $C_{2}=1$. We show, by a short information-theoretic proof, that it in fact holds with sharp constants $C_{1}=C_{2}=1$ for all real-valued Boolean functions of unit $L^{2}$-norm, thereby establishing the inequality as an elementary structural property of Shannon entropy and influence. - oai:arXiv.org:2512.03117v3 - cs.IT - math.IT - Wed, 10 Dec 2025 00:00:00 -0500 + DISPATCH -- Decentralized Informed Spatial Planning and Assignment of Tasks for Cooperative Heterogeneous Agents + https://arxiv.org/abs/2511.17915 + arXiv:2511.17915v3 Announce Type: replace +Abstract: Spatial task allocation in systems such as multi-robot delivery or ride-sharing requires balancing efficiency with fair service across tasks. Greedy assignment policies that match each agent to its highest-preference or lowest-cost task can maximize efficiency but often create inequities: some tasks receive disproportionately favorable service (e.g., shorter delays or better matches), while others face long waits or poor allocations. + We study fairness in heterogeneous multi-agent systems where tasks vary in preference alignment and urgency. 
Most existing approaches either assume centralized coordination or largely ignore fairness under partial observability. Distinct from this prior work, we establish a connection between the Eisenberg-Gale (EG) equilibrium convex program and decentralized, partially observable multi-agent learning. Building on this connection, we develop two equilibrium-informed algorithms that integrate fairness and efficiency: (i) a multi-agent reinforcement learning (MARL) framework, EG-MARL, whose training is guided by a centralized EG equilibrium assignment algorithm; and (ii) a stochastic online optimization mechanism that performs guided exploration and subset-based fair assignment as tasks are discovered. + We evaluate on Multi-Agent Particle Environment (MPE) simulations across varying team sizes against centralized EG, Hungarian, and Min-Max distance baselines, and also present a Webots-based warehouse proof-of-concept with heterogeneous robots. Both methods preserve the fairness-efficiency balance of the EG solution under partial observability, with EG-MARL achieving near-centralized coordination and reduced travel distances, and the online mechanism enabling real-time allocation with competitive fairness. + oai:arXiv.org:2511.17915v3 + cs.MA + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Peijie Li, Guangyue Han + Yao Liu, Sampad Mohanty, Elizabeth Ondula, Bhaskar Krishnamachari - Understanding LLM Reasoning for Abstractive Summarization - https://arxiv.org/abs/2512.03503 - arXiv:2512.03503v2 Announce Type: replace -Abstract: While the reasoning capabilities of Large Language Models (LLMs) excel in analytical tasks such as mathematics and code generation, their utility for abstractive summarization remains widely assumed but largely unverified. To bridge this gap, we first tailor general reasoning strategies to the summarization domain. 
We then conduct a systematic, large scale comparative study of 8 reasoning strategies and 3 Large Reasoning Models (LRMs) across 8 diverse datasets, assessing both summary quality and faithfulness. Our findings show that reasoning is not a universal solution and its effectiveness is highly dependent on the specific strategy and context. Specifically, we observe a trade-off between summary quality and factual faithfulness: explicit reasoning strategies tend to improve fluency at the expense of factual grounding, while implicit reasoning in LRMs exhibits the inverse pattern. Furthermore, increasing an LRM's internal reasoning budget does not improve, and can even hurt, factual consistency, suggesting that effective summarization demands faithful compression rather than creative over-thinking. - oai:arXiv.org:2512.03503v2 + Multi-Agent Collaborative Filtering: Orchestrating Users and Items for Agentic Recommendations + https://arxiv.org/abs/2511.18413 + arXiv:2511.18413v2 Announce Type: replace +Abstract: Agentic recommendations cast recommenders as large language model (LLM) agents that can plan, reason, use tools, and interact with users of varying preferences in web applications. However, most existing agentic recommender systems focus on generic single-agent plan-execute workflows or multi-agent task decomposition pipelines. Without recommendation-oriented design, they often underuse the collaborative signals in the user-item interaction history, leading to unsatisfying recommendation results. To address this, we propose the Multi-Agent Collaborative Filtering (MACF) framework for agentic recommendations, drawing an analogy between traditional collaborative filtering algorithms and LLM-based multi-agent collaboration. Specifically, given a target user and query, we instantiate similar users and relevant items as LLM agents with unique profiles. Each agent is able to call retrieval tools, suggest candidate items, and interact with other agents. 
Different from the static preference aggregation in traditional collaborative filtering, MACF employs a central orchestrator agent to adaptively manage the collaboration between user and item agents via dynamic agent recruitment and personalized collaboration instruction. Experimental results on datasets from three different domains show the advantages of our MACF framework compared to strong agentic recommendation baselines. + oai:arXiv.org:2511.18413v2 cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + cs.IR + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haohan Yuan, Haopeng Zhang + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Yu Xia, Sungchul Kim, Tong Yu, Ryan A. Rossi, Julian McAuley - Cross-Space Synergy: A Unified Framework for Multimodal Emotion Recognition in Conversation - https://arxiv.org/abs/2512.03521 - arXiv:2512.03521v2 Announce Type: replace -Abstract: Multimodal Emotion Recognition in Conversation (MERC) aims to predict speakers' emotions by integrating textual, acoustic, and visual cues. Existing approaches either struggle to capture complex cross-modal interactions or experience gradient conflicts and unstable training when using deeper architectures. To address these issues, we propose Cross-Space Synergy (CSS), which couples a representation component with an optimization component. Synergistic Polynomial Fusion (SPF) serves the representation role, leveraging low-rank tensor factorization to efficiently capture high-order cross-modal interactions. Pareto Gradient Modulator (PGM) serves the optimization role, steering updates along Pareto-optimal directions across competing objectives to alleviate gradient conflicts and improve stability. Experiments show that CSS outperforms existing representative methods on IEMOCAP and MELD in both accuracy and training stability, demonstrating its effectiveness in complex multimodal scenarios. 
- oai:arXiv.org:2512.03521v2 - cs.MM - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Connectivity-Preserving Multi-Agent Area Coverage via Optimal-Transport-Based Density-Driven Optimal Control (D2OC) + https://arxiv.org/abs/2511.18579 + arXiv:2511.18579v3 Announce Type: replace +Abstract: Multi-agent systems play a central role in area coverage tasks across search-and-rescue, environmental monitoring, and precision agriculture. Achieving non-uniform coverage, where spatial priorities vary across the domain, requires coordinating agents while respecting dynamic and communication constraints. Density-driven approaches can distribute agents according to a prescribed reference density, but existing methods do not ensure connectivity. This limitation often leads to communication loss, reduced coordination, and degraded coverage performance. + This letter introduces a connectivity-preserving extension of the Density-Driven Optimal Control (D2OC) framework. The coverage objective, defined using the Wasserstein distance between the agent distribution and the reference density, admits a convex quadratic program formulation. Communication constraints are incorporated through a smooth connectivity penalty, which maintains strict convexity, supports distributed implementation, and preserves inter-agent communication without imposing rigid formations. + Simulation studies show that the proposed method consistently maintains connectivity, improves convergence speed, and enhances non-uniform coverage quality compared with density-driven schemes that do not incorporate explicit connectivity considerations. 
+ oai:arXiv.org:2511.18579v3 + eess.SY + cs.RO + cs.SY + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaosen Lyu, Jiayu Xiong, Yuren Chen, Wanlong Wang, Xiaoqing Dai, Jing Wang + Kooktae Lee, Ethan Brook - On the Temporality for Sketch Representation Learning - https://arxiv.org/abs/2512.04007 - arXiv:2512.04007v2 Announce Type: replace -Abstract: Sketches are simple human hand-drawn abstractions of complex scenes and real-world objects. Although the field of sketch representation learning has advanced significantly, there is still a gap in understanding the true relevance of the temporal aspect to the quality of these representations. This work investigates whether it is indeed justifiable to treat sketches as sequences, as well as which internal orders play a more relevant role. The results indicate that, although the use of traditional positional encodings is valid for modeling sketches as sequences, absolute coordinates consistently outperform relative ones. Furthermore, non-autoregressive decoders outperform their autoregressive counterparts. Finally, the importance of temporality was shown to depend on both the order considered and the task evaluated. - oai:arXiv.org:2512.04007v2 + CoD: A Diffusion Foundation Model for Image Compression + https://arxiv.org/abs/2511.18706 + arXiv:2511.18706v2 Announce Type: replace +Abstract: Existing diffusion codecs typically build on text-to-image diffusion foundation models like Stable Diffusion. However, text conditioning is suboptimal from a compression perspective, hindering the potential of downstream diffusion codecs, particularly at ultra-low bitrates. To address it, we introduce \textbf{CoD}, the first \textbf{Co}mpression-oriented \textbf{D}iffusion foundation model, trained from scratch to enable end-to-end optimization of both compression and generation. CoD is not a fixed codec but a general foundation model designed for various diffusion-based codecs. 
It offers several advantages: \textbf{High compression efficiency}, replacing Stable Diffusion with CoD in downstream codecs like DiffC achieves SOTA results, especially at ultra-low bitrates (e.g., 0.0039 bpp); \textbf{Low-cost and reproducible training}, 300$\times$ faster training than Stable Diffusion ($\sim$ 20 vs. $\sim$ 6,250 A100 GPU days) on entirely open image-only datasets; \textbf{Providing new insights}, e.g., we find pixel-space diffusion can achieve VTM-level PSNR with high perceptual quality and can outperform GAN-based codecs using fewer parameters. We hope CoD lays the foundation for future diffusion codec research. Code will be released. + oai:arXiv.org:2511.18706v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Zhaoyang Jia, Zihan Zheng, Naifu Xue, Jiahao Li, Bin Li, Zongyu Guo, Xiaoyi Zhang, Houqiang Li, Yan Lu - Mitigating the Curse of Detail: Scaling Arguments for Feature Learning and Sample Complexity - https://arxiv.org/abs/2512.04165 - arXiv:2512.04165v3 Announce Type: replace -Abstract: Two pressing topics in the theory of deep learning are the interpretation of feature learning mechanisms and the determination of implicit bias of networks in the rich regime. Current theories of rich feature learning often appear in the form of high-dimensional non-linear equations, which require computationally intensive numerical solutions. Given the many details that go into defining a deep learning problem, this complexity is a significant and often unavoidable challenge. Here, we propose a powerful heuristic route for predicting the data and width scales at which various patterns of feature learning emerge. This form of scale analysis is considerably simpler than exact theories and reproduces the scaling exponents of various known results. 
In addition, we make novel predictions on complex toy architectures, such as three-layer non-linear networks and attention heads, thus extending the scope of first-principle theories of deep learning. - oai:arXiv.org:2512.04165v3 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + A Note on the Parameterised Complexity of Coverability in Vector Addition Systems + https://arxiv.org/abs/2511.19212 + arXiv:2511.19212v2 Announce Type: replace +Abstract: We investigate the parameterised complexity of the classic coverability problem for vector addition systems (VAS): given a finite set of vectors $V \subseteq\mathbb{Z}^d$, an initial configuration $s\in\mathbb{N}^d$, and a target configuration $t\in\mathbb{N}^d$, decide whether starting from $s$, one can iteratively add vectors from $V$ to ultimately arrive at a configuration that is larger than or equal to $t$ on every coordinate, while not observing any negative value on any coordinate along the way. We consider two natural parameters for the problem: the dimension $d$ and the size of $V$, defined as the total bitsize of its encoding. We present several results charting the complexity of those two parameterisations, among which the highlight is that coverability for VAS parameterised by the dimension and with all the numbers in the input encoded in unary is complete for the class XNL under PL-reductions. We also discuss open problems in the topic, most notably the question about fixed-parameter tractability for the parameterisation by the size of $V$. 
+ oai:arXiv.org:2511.19212v2 + cs.CC + cs.LO + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Noa Rubin, Orit Davidovich, Zohar Ringel - - - Orchestrator Multi-Agent Clinical Decision Support System for Secondary Headache Diagnosis in Primary Care - https://arxiv.org/abs/2512.04207 - arXiv:2512.04207v2 Announce Type: replace -Abstract: Unlike most primary headaches, secondary headaches need specialized care and can have devastating consequences if not treated promptly. Clinical guidelines highlight several 'red flag' features, such as thunderclap onset, meningismus, papilledema, focal neurologic deficits, signs of temporal arteritis, systemic illness, and the 'worst headache of their life' presentation. Despite these guidelines, determining which patients require urgent evaluation remains challenging in primary care settings. Clinicians often work with limited time, incomplete information, and diverse symptom presentations, which can lead to under-recognition and inappropriate care. We present a large language model (LLM)-based multi-agent clinical decision support system built on an orchestrator-specialist architecture, designed to perform explicit and interpretable secondary headache diagnosis from free-text clinical vignettes. The multi-agent system decomposes diagnosis into seven domain-specialized agents, each producing a structured and evidence-grounded rationale, while a central orchestrator performs task decomposition and coordinates agent routing. We evaluated the multi-agent system using 90 expert-validated secondary headache cases and compared its performance with a single-LLM baseline across two prompting strategies: question-based prompting (QPrompt) and clinical practice guideline-based prompting (GPrompt). 
We tested five open-source LLMs (Qwen-30B, GPT-OSS-20B, Qwen-14B, Qwen-8B, and Llama-3.1-8B), and found that the orchestrated multi-agent system with GPrompt consistently achieved the highest F1 scores, with larger gains in smaller models. These findings demonstrate that structured multi-agent reasoning improves accuracy beyond prompt engineering alone and offers a transparent, clinically aligned approach for explainable decision support in secondary headache diagnosis. - oai:arXiv.org:2512.04207v2 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xizhi Wu, Nelly Estefanie Garduno-Rapp, Justin F Rousseau, Mounika Thakkallapally, Hang Zhang, Yuelyu Ji, Shyam Visweswaran, Yifan Peng, Yanshan Wang + Micha{\l} Pilipczuk, Sylvain Schmitz, Henry Sinclair-Banks - Polynomiogram: An Integrated Framework for Root Visualization and Generative Art - https://arxiv.org/abs/2512.04263 - arXiv:2512.04263v2 Announce Type: replace -Abstract: This work presents the Polynomiogram framework, an integrated computational platform for exploring, visualizing, and generating art from polynomial root systems. The main innovation is a flexible sampling scheme in which two independent parameters are drawn from user defined domains and mapped to the polynomial coefficients through a generating function. This design allows the same mathematical foundation to support both scientific investigation and generative algorithmic art. The framework integrates two complementary numerical engines: NumPy companion matrix solver for fast, large scale computation and MPSolve for high precision, scientifically rigorous validation. This dual architecture enables efficient visualization for creative use and accurate computation for research and education. Numerical accuracy was verified using classical ensembles, including the Kac and Lucas polynomials. 
The method was applied to the cubic polynomial system to analyze its bifurcation structure, demonstrating its value as both a scientific tool for exploring root phenomena and an educational aid for visualizing fundamental concepts in algebra and dynamical systems. Beyond analysis, the Polynomiogram also demonstrated its potential as a tool for personalized generative art. Examples include the use of the platform to generate a natural form resembling a hibiscus flower and to create personalized artwork expressing gratitude toward advances in artificial intelligence and large language models through a tribute composition. - oai:arXiv.org:2512.04263v2 - cs.SE + MAESTRO: Multi-Agent Environment Shaping through Task and Reward Optimization + https://arxiv.org/abs/2511.19253 + arXiv:2511.19253v2 Announce Type: replace +Abstract: Cooperative Multi-Agent Reinforcement Learning (MARL) faces two major design bottlenecks: crafting dense reward functions and constructing curricula that avoid local optima in high-dimensional, non-stationary environments. Existing approaches rely on fixed heuristics or use Large Language Models (LLMs) directly in the control loop, which is costly and unsuitable for real-time systems. We propose MAESTRO (Multi-Agent Environment Shaping through Task and Reward Optimization), a framework that moves the LLM outside the execution loop and uses it as an offline training architect. MAESTRO introduces two generative components: (i) a semantic curriculum generator that creates diverse, performance-driven traffic scenarios, and (ii) an automated reward synthesizer that produces executable Python reward functions adapted to evolving curriculum difficulty. These components guide a standard MARL backbone (MADDPG) without increasing inference cost at deployment. We evaluate MAESTRO on large-scale traffic signal control (Hangzhou, 16 intersections) and conduct controlled ablations. 
Results show that combining LLM-generated curricula with LLM-generated reward shaping yields improved performance and stability. Across four seeds, the full system achieves +4.0% higher mean return (163.26 vs. 156.93) and 2.2% better risk-adjusted performance (Sharpe 1.53 vs. 0.70) over a strong curriculum baseline. These findings highlight LLMs as effective high-level designers for cooperative MARL training. + oai:arXiv.org:2511.19253v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hoang Duc Nguyen, Anh Van Pham, Hien D. Nguyen + http://creativecommons.org/licenses/by/4.0/ + Boyuan Wu - RoboBPP: Benchmarking Robotic Online Bin Packing with Physics-based Simulation - https://arxiv.org/abs/2512.04415 - arXiv:2512.04415v2 Announce Type: replace -Abstract: Physical feasibility in 3D bin packing is a key requirement in modern industrial logistics and robotic automation. With the growing adoption of industrial automation, online bin packing has gained increasing attention. However, inconsistencies in problem settings, test datasets, and evaluation metrics have hindered progress in the field, and there is a lack of a comprehensive benchmarking system. Direct testing on real hardware is costly, and building a realistic simulation environment is also challenging. To address these limitations, we introduce RoboBPP, a benchmarking system designed for robotic online bin packing. RoboBPP integrates a physics-based simulator to assess physical feasibility. In our simulation environment, we introduce a robotic arm and boxes at real-world scales to replicate real industrial packing workflows. By simulating conditions that arise in real industrial applications, we ensure that evaluated algorithms are practically deployable. In addition, prior studies often rely on synthetic datasets whose distributions differ from real-world industrial data. 
To address this issue, we collect three datasets from real industrial workflows, including assembly-line production, logistics packing, and furniture manufacturing. The benchmark comprises three carefully designed test settings and extends existing evaluation metrics with new metrics for structural stability and operational safety. We design a scoring system and derive a range of insights from the evaluation results. RoboBPP is fully open-source and is equipped with visualization tools and an online leaderboard, providing a reproducible and extensible foundation for future research and industrial applications (https://robot-bin-packing-benchmark.github.io). - oai:arXiv.org:2512.04415v2 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Active Inference in Discrete State Spaces from First Principles + https://arxiv.org/abs/2511.20321 + arXiv:2511.20321v2 Announce Type: replace +Abstract: We seek to clarify the concept of active inference by disentangling it from the Free Energy Principle. We show how the optimizations that need to be carried out in order to implement active inference in discrete state spaces can be formulated as constrained divergence minimization problems which can be solved by standard mean field methods that do not appeal to the idea of expected free energy. When it is used to model perception, the perception/action divergence criterion that we propose coincides with variational free energy. When it is used to model action, it differs from an expected free energy functional by an entropy regularizer. 
+ oai:arXiv.org:2511.20321v2 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhoufeng Wang, Hang Zhao, Juzhan Xu, Shishun Zhang, Zeyu Xiong, Ruizhen Hu, Chenyang Zhu, Kai Xu + Patrick Kenny - MARL Warehouse Robots - https://arxiv.org/abs/2512.04463 - arXiv:2512.04463v2 Announce Type: replace -Abstract: We present a comparative study of multi-agent reinforcement learning (MARL) algorithms for cooperative warehouse robotics. We evaluate QMIX and IPPO on the Robotic Warehouse (RWARE) environment and a custom Unity 3D simulation. Our experiments reveal that QMIX's value decomposition significantly outperforms independent learning approaches (achieving 3.25 mean return vs. 0.38 for advanced IPPO), but requires extensive hyperparameter tuning -- particularly extended epsilon annealing (5M+ steps) for sparse reward discovery. We demonstrate successful deployment in Unity ML-Agents, achieving consistent package delivery after 1M training steps. While MARL shows promise for small-scale deployments (2-4 robots), significant scaling challenges remain. Code and analyses: https://pallman14.github.io/MARL-QMIX-Warehouse-Robots/ - oai:arXiv.org:2512.04463v2 + ICM-SR: Image-Conditioned Manifold Regularization for Image Super-Resoultion + https://arxiv.org/abs/2511.22048 + arXiv:2511.22048v2 Announce Type: replace +Abstract: Real world image super-resolution (Real-ISR) often leverages the powerful generative priors of text-to-image diffusion models by regularizing the output to lie on their learned manifold. However, existing methods often overlook the importance of the regularizing manifold, typically defaulting to a text-conditioned manifold. This approach suffers from two key limitations. Conceptually, it is misaligned with the Real-ISR task, which is to generate high quality (HQ) images directly tied to the low quality (LQ) images. 
Practically, the teacher model often reconstructs images with color distortions and blurred edges, indicating a flawed generative prior for this task. To correct these flaws and ensure conceptual alignment, a more suitable manifold must incorporate information from the images. While the most straightforward approach is to condition directly on the raw input images, their high information densities make the regularization process numerically unstable. To resolve this, we propose image-conditioned manifold regularization (ICM), a method that regularizes the output towards a manifold conditioned on the sparse yet essential structural information: a combination of colormap and Canny edges. ICM provides a task-aligned and stable regularization signal, thereby avoiding the instability of dense-conditioning and enhancing the final super-resolution quality. Our experiments confirm that the proposed regularization significantly enhances super-resolution performance, particularly in perceptual quality, demonstrating its effectiveness for real-world applications. We will release the source code of our work for reproducibility. + oai:arXiv.org:2511.22048v2 + cs.CV cs.AI - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Price Allman, Lian Thang, Dre Simmons, Salmon Riaz + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Junoh Kang, Donghun Ryou, Bohyung Han - SlideGen: Collaborative Multimodal Agents for Scientific Slide Generation - https://arxiv.org/abs/2512.04529 - arXiv:2512.04529v2 Announce Type: replace -Abstract: Generating academic slides from scientific papers is a challenging multimodal reasoning task that requires both long context understanding and deliberate visual planning. Existing approaches largely reduce it to text only summarization, overlooking the visual component and design intensive nature of slide creation. 
In this paper we introduce SlideGen, an agentic, modular, and visual in the loop framework for scientific paper to slide generation. SlideGen orchestrates a group of vision language agents that reason collaboratively over the document structure and semantics, producing editable PPTX slides with logical flow and compelling visual presentation. By integrating coordinated outlining, mapping, arrangement, note synthesis, and iterative refinement, our system consistently delivers slides of expert level quality. Across diverse benchmarks and strong baselines, SlideGen outperforms existing methods in visual quality, content faithfulness, and readability, positioning it as the new state of the art in automated slide generation. Our work establishes a foundation for design aware multimodal slide generation, demonstrating how agentic collaboration can bridge understanding and presentation in complex multimodal reasoning tasks. - oai:arXiv.org:2512.04529v2 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + FADiff: Fusion-Aware Differentiable Optimization for DNN Scheduling on Tensor Accelerators + https://arxiv.org/abs/2511.22348 + arXiv:2511.22348v2 Announce Type: replace +Abstract: Efficient deployment of Deep Neural Networks (DNNs), such as Large Language Models (LLMs), on tensor accelerators is essential for maximizing computational efficiency in modern AI systems. However, achieving this is challenging due to the enormous and complex design space created by the interaction of intra-layer mapping and inter-layer fusion. In this work, we present FADiff, a gradient-based optimization framework capable of automatically identifying high-quality intra-layer mapping and inter-layer fusion strategies to accelerate inference for DNN workloads. We first construct a unified and differentiable analytical cost model, which accurately predicts the energy and latency of both single-layer mappings and various layer fusion strategies. 
Then, by encoding discrete constraints into the loss function, we employ a gradient-based approach to efficiently explore the vast design space, determining the optimal joint strategy for mapping and fusion. Experimental results demonstrate the superiority of FADiff, achieving better optimization in terms of energy and latency compared to existing methods. + oai:arXiv.org:2511.22348v2 + cs.AR + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Xin Liang, Xiang Zhang, Yiwei Xu, Siqi Sun, Chenyu You + Shuao Jia, Zichao Ling, Chen Bai, Kang Zhao, Jianwang Zhai - MT-Depth: Multi-task Instance feature analysis for the Depth Completion - https://arxiv.org/abs/2512.04734 - arXiv:2512.04734v2 Announce Type: replace -Abstract: Depth completion plays a vital role in 3D perception systems, especially in scenarios where sparse depth data must be densified for tasks such as autonomous driving, robotics, and augmented reality. While many existing approaches rely on semantic segmentation to guide depth completion, they often overlook the benefits of object-level understanding. In this work, we introduce an instance-aware depth completion framework that explicitly integrates binary instance masks as spatial priors to refine depth predictions. Our model combines four main components: a frozen YOLO V11 instance segmentation branch, a U-Net-based depth completion backbone, a cross-attention fusion module, and an attention-guided prediction head. The instance segmentation branch generates per-image foreground masks that guide the depth branch via cross-attention, allowing the network to focus on object-centric regions during refinement. We validate our method on the Virtual KITTI 2 dataset, showing that it achieves lower Root Mean Squared Error (RMSE) compared to both a U-Net-only baseline and previous semantic-guided methods, while maintaining competitive Mean Absolute Error (MAE). 
Qualitative and quantitative results demonstrate that the proposed model effectively enhances depth accuracy near object boundaries, occlusions, and thin structures. Our findings suggest that incorporating instance-aware cues offers a promising direction for improving depth completion without relying on dense semantic labels. - oai:arXiv.org:2512.04734v2 + World in a Frame: Understanding Culture Mixing as a New Challenge for Vision-Language Models + https://arxiv.org/abs/2511.22787 + arXiv:2511.22787v2 Announce Type: replace +Abstract: In a globalized world, cultural elements from diverse origins frequently appear together within a single visual scene. We refer to these as culture mixing scenarios, yet how Large Vision-Language Models (LVLMs) perceive them remains underexplored. We investigate culture mixing as a critical challenge for LVLMs and examine how current models behave when cultural items from multiple regions appear together. To systematically analyze these behaviors, we construct CultureMix, a food Visual Question Answering (VQA) benchmark with 23k diffusion-generated, human-verified culture mixing images across four subtasks: (1) food-only, (2) food+food, (3) food+background, and (4) food+food+background. Evaluating 10 LVLMs, we find consistent failures to preserve individual cultural identities in mixed settings. Models show strong background reliance, with accuracy dropping 14% when cultural backgrounds are added to food-only baselines, and they produce inconsistent predictions for identical foods across different contexts. To address these limitations, we explore three robustness strategies. We find supervised fine-tuning using a diverse culture mixing dataset substantially improve model consistency and reduce background sensitivity. We call for increased attention to culture mixing scenarios as a critical step toward developing LVLMs capable of operating reliably in culturally diverse real-world environments. 
+ oai:arXiv.org:2511.22787v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Abdul Haseeb Nizamani, Dandi Zhou, Xinhai Sun + Eunsu Kim, Junyeong Park, Na Min An, Junseong Kim, Hitesh Laxmichand Patel, Jiho Jin, Julia Kruk, Amit Agarwal, Srikant Panda, Fenal Ashokbhai Ilasariya, Hyunjung Shim, Alice Oh - EMMA: Efficient Multimodal Understanding, Generation, and Editing with a Unified Architecture - https://arxiv.org/abs/2512.04810 - arXiv:2512.04810v4 Announce Type: replace -Abstract: We propose EMMA, an efficient and unified architecture for multimodal understanding, generation and editing. Specifically, EMMA primarily consists of 1) An efficient autoencoder with a 32x compression ratio, which significantly reduces the number of tokens required for generation. This also ensures the training balance between understanding and generation tasks by applying the same compression ratio to images. 2) Channel-wise concatenation instead of token-wise concatenation among visual understanding and generation tokens, which further reduces the visual tokens in unified architectures. 3) A shared-and-decoupled network that enables mutual improvements across tasks while meeting the task-specific modeling requirements. 4) A mixture-of-experts mechanism adopted for visual understanding encoder, which substantially improves perceptual capabilities with a few parameters increase. Extensive experiments have shown that EMMA-4B can significantly outperform state-of-the-art unified multimodal approaches (e.g., BAGEL-7B) in both efficiency and performance, while also achieving competitive results compared to recent multimodal understanding and generation experts (e.g., Qwen3-VL and Qwen-Image). We believe that EMMA lays a solid foundation for the future development of unified multimodal architectures. 
- oai:arXiv.org:2512.04810v4 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory + https://arxiv.org/abs/2511.23083 + arXiv:2511.23083v2 Announce Type: replace +Abstract: High-capacity kernel Hopfield networks exhibit a \textit{Ridge of Optimization} characterized by extreme stability. While previously linked to \textit{Spectral Concentration}, its origin remains elusive. Here, we analyze the network dynamics on a statistical manifold, revealing that the Ridge corresponds to the Edge of Stability, a critical boundary where the Fisher Information Matrix becomes singular. We demonstrate that the apparent Euclidean force antagonism is a manifestation of \textit{Dual Equilibrium} in the Riemannian space. This unifies learning dynamics and capacity via the Minimum Description Length principle, offering a geometric theory of self-organized criticality. + oai:arXiv.org:2511.23083v2 + cs.LG + cs.NE + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xin He, Longhui Wei, Jianbo Ouyang, Minghui Liao, Lingxi Xie, Qi Tian + Akira Tamamori - Equivariant Symmetry-Aware Head Pose Estimation for Fetal MRI - https://arxiv.org/abs/2512.04890 - arXiv:2512.04890v2 Announce Type: replace -Abstract: We present E(3)-Pose, a novel fast pose estimation method that jointly and explicitly models rotation equivariance and object symmetry. Our work is motivated by the challenging problem of accounting for fetal head motion during a diagnostic MRI scan. We aim to enable automatic adaptive prescription of 2D diagnostic MRI slices with 6-DoF head pose estimation, supported by 3D MRI volumes rapidly acquired before each 2D slice. Existing methods struggle to generalize to clinical volumes, due to pose ambiguities induced by inherent anatomical symmetries, as well as low resolution, noise, and artifacts. 
In contrast, E(3)-Pose captures anatomical symmetries and rigid pose equivariance by construction, and yields robust estimates of the fetal head pose. Our experiments on publicly available and representative clinical fetal MRI datasets demonstrate the superior robustness and generalization of our method across domains. Crucially, E(3)-Pose achieves state-of-the-art accuracy on clinical MRI volumes, paving the way for clinical translation. Our implementation is available at github.com/ramyamut/E3-Pose. - oai:arXiv.org:2512.04890v2 + THCRL: Trusted Hierarchical Contrastive Representation Learning for Multi-View Clustering + https://arxiv.org/abs/2512.00368 + arXiv:2512.00368v2 Announce Type: replace +Abstract: Multi-View Clustering (MVC) has garnered increasing attention in recent years. It is capable of partitioning data samples into distinct groups by learning a consensus representation. However, a significant challenge remains: the problem of untrustworthy fusion. This problem primarily arises from two key factors: 1) Existing methods often ignore the presence of inherent noise within individual views; 2) In traditional MVC methods using Contrastive Learning (CL), similarity computations typically rely on different views of the same instance, while neglecting the structural information from nearest neighbors within the same cluster. Consequently, this leads to the wrong direction for multi-view fusion. To address this problem, we present a novel Trusted Hierarchical Contrastive Representation Learning (THCRL). It consists of two key modules. Specifically, we propose the Deep Symmetry Hierarchical Fusion (DSHF) module, which leverages the UNet architecture integrated with multiple denoising mechanisms to achieve trustworthy fusion of multi-view data. Furthermore, we present the Average K-Nearest Neighbors Contrastive Learning (AKCL) module to align the fused representation with the view-specific representation. 
Unlike conventional strategies, AKCL enhances representation similarity among samples belonging to the same cluster, rather than merely focusing on the same sample across views, thereby reinforcing the confidence of the fused representation. Extensive experiments demonstrate that THCRL achieves the state-of-the-art performance in deep MVC tasks. + oai:arXiv.org:2512.00368v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Ramya Muthukrishnan, Borjan Gagoski, Aryn Lee, P. Ellen Grant, Elfar Adalsteinsson, Polina Golland, Benjamin Billot + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jian Zhu - The AI Consumer Index (ACE) - https://arxiv.org/abs/2512.04921 - arXiv:2512.04921v3 Announce Type: replace -Abstract: We introduce the first version of the AI Consumer Index (ACE), a benchmark for assessing whether frontier AI models can perform everyday consumer tasks. ACE contains a hidden heldout set of 400 test cases, split across four consumer activities: shopping, food, gaming, and DIY. We are also open sourcing 80 cases as a devset with a CC-BY license. For the ACE leaderboard we evaluated 10 frontier models (with websearch turned on) using a novel grading methodology that dynamically checks whether relevant parts of the response are grounded in the retrieved web sources. GPT 5 (Thinking = High) is the top-performing model, scoring 56.1%, followed by o3 Pro (Thinking = On) at 55.2% and GPT 5.1 (Thinking = High) at 55.1%. Model scores differ across domains, and in Shopping the top model scores under 50\%. We find that models are prone to hallucinating key information, such as prices. ACE shows a substantial gap between the performance of even the best models and consumers' AI needs. 
- oai:arXiv.org:2512.04921v3 - cs.AI + CryptoBench: A Dynamic Benchmark for Expert-Level Evaluation of LLM Agents in Cryptocurrency + https://arxiv.org/abs/2512.00417 + arXiv:2512.00417v4 Announce Type: replace +Abstract: This paper introduces CryptoBench, the first expert-curated, dynamic benchmark designed to rigorously evaluate the real-world capabilities of Large Language Model (LLM) agents in the uniquely demanding and fast-paced cryptocurrency domain. Unlike general-purpose agent benchmarks for search and prediction, professional crypto analysis presents specific challenges: \emph{extreme time-sensitivity}, \emph{a highly adversarial information environment}, and the critical need to synthesize data from \emph{diverse, specialized sources}, such as on-chain intelligence platforms and real-time Decentralized Finance (DeFi) dashboards. CryptoBench thus serves as a much more challenging and valuable scenario for LLM agent assessment. To address these challenges, we constructed a live, dynamic benchmark featuring 50 questions per month, expertly designed by crypto-native professionals to mirror actual analyst workflows. These tasks are rigorously categorized within a four-quadrant system: Simple Retrieval, Complex Retrieval, Simple Prediction, and Complex Prediction. This granular categorization enables a precise assessment of an LLM agent's foundational data-gathering capabilities alongside its advanced analytical and forecasting skills. + Our evaluation of ten LLMs, both directly and within an agentic framework, reveals a performance hierarchy and uncovers a failure mode. We observe a \textit{retrieval-prediction imbalance}, where many leading models, despite being proficient at data retrieval, demonstrate a pronounced weakness in tasks requiring predictive analysis. This highlights a problematic tendency for agents to appear factually grounded while lacking the deeper analytical capabilities to synthesize information. 
+ oai:arXiv.org:2512.00417v4 cs.CL - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Julien Benchek, Rohit Shetty, Benjamin Hunsberger, Ajay Arun, Zach Richards, Brendan Foody, Osvald Nitski, Bertie Vidgen + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jiacheng Guo, Suozhi Huang, Zixin Yao, Yifan Zhang, Yifu Lu, Jiashuo Liu, Zihao Li, Nicholas Deng, Qixin Xiao, Jia Tian, Kanghong Zhan, Tianyi Li, Xiaochen Liu, Jason Ge, Chaoyang He, Kaixuan Huang, Lin Yang, Wenhao Huang, Mengdi Wang - Arbitrage: Efficient Reasoning via Advantage-Aware Speculation - https://arxiv.org/abs/2512.05033 - arXiv:2512.05033v2 Announce Type: replace -Abstract: Modern Large Language Models achieve impressive reasoning capabilities with long Chain of Thoughts, but they incur substantial computational cost during inference, and this motivates techniques to improve the performance-cost ratio. Among these techniques, Speculative Decoding accelerates inference by employing a fast but inaccurate draft model to autoregressively propose tokens, which are then verified in parallel by a more capable target model. However, due to unnecessary rejections caused by token mismatches in semantically equivalent steps, traditional token-level Speculative Decoding struggles in reasoning tasks. Although recent works have shifted to step-level semantic verification, which improve efficiency by accepting or rejecting entire reasoning steps, existing step-level methods still regenerate many rejected steps with little improvement, wasting valuable target compute. To address this challenge, we propose Arbitrage, a novel step-level speculative generation framework that routes generation dynamically based on the relative advantage between draft and target models. 
Instead of applying a fixed acceptance threshold, Arbitrage uses a lightweight router trained to predict when the target model is likely to produce a meaningfully better step. This routing approximates an ideal Arbitrage Oracle that always chooses the higher-quality step, achieving near-optimal efficiency-accuracy trade-offs. Across multiple mathematical reasoning benchmarks, Arbitrage consistently surpasses prior step-level Speculative Decoding baselines, reducing inference latency by up to $\sim2\times$ at matched accuracy. - oai:arXiv.org:2512.05033v2 - cs.CL - cs.AI - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + VFM-ISRefiner: Towards Better Adapting Vision Foundation Models for Interactive Segmentation of Remote Sensing Images + https://arxiv.org/abs/2512.00718 + arXiv:2512.00718v2 Announce Type: replace +Abstract: Interactive image segmentation(IIS) plays a critical role in generating precise annotations for remote sensing imagery, where objects often exhibit scale variations, irregular boundaries and complex backgrounds. However, existing IIS methods, primarily designed for natural images, struggle to generalize to remote sensing domains due to limited annotated data and computational overhead. To address these challenges, we proposed RS-ISRefiner, a novel click-based IIS framework tailored for remote sensing images. The framework employs an adapter-based tuning strategy that preserves the general representations of Vision Foundation Models while enabling efficient learning of remote sensing-specific spatial and boundary characteristics. A hybrid attention mechanism integrating convolutional local modeling with Transformer-based global reasoning enhances robustness against scale diversity and scene complexity. Furthermore, an improved probability map modulation scheme effectively incorporates historical user interactions, yielding more stable iterative refinement and higher boundary accuracy. 
Comprehensive experiments on six remote sensing datasets, including iSAID, ISPRS Potsdam, SandBar, NWPU, LoveDA Urban and WHUBuilding, demonstrate that RS-ISRefiner consistently outperforms state-of-the-art IIS methods in terms of segmentation accuracy, efficiency and interaction cost. These results confirm the effectiveness and generalizability of our framework, making it highly suitable for high-quality instance segmentation in practical remote sensing scenarios. The codes are available at https://github.com/wondelyan/VFM-ISRefiner . + oai:arXiv.org:2512.00718v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Monishwaran Maheswaran, Rishabh Tiwari, Yuezhou Hu, Kerem Dilmen, Coleman Hooper, Haocheng Xi, Nicholas Lee, Mehrdad Farajtabar, Michael W. Mahoney, Kurt Keutzer, Amir Gholami + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Deliang Wang, Peng Liu, Yan Ma, Rongkai Zhuang, Lajiao Chen, Bing Li, Yi Zeng - Uncertainty Quantification for Scientific Machine Learning using Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KAN) - https://arxiv.org/abs/2512.05306 - arXiv:2512.05306v2 Announce Type: replace -Abstract: Kolmogorov-Arnold Networks have emerged as interpretable alternatives to traditional multi-layer perceptrons. However, standard implementations lack principled uncertainty quantification capabilities essential for many scientific applications. We present a framework integrating sparse variational Gaussian process inference with the Kolmogorov-Arnold topology, enabling scalable Bayesian inference with computational complexity quasi-linear in sample size. Through analytic moment matching, we propagate uncertainty through deep additive structures while maintaining interpretability. 
We use three example studies to demonstrate the framework's ability to distinguish aleatoric from epistemic uncertainty: calibration of heteroscedastic measurement noise in fluid flow reconstruction, quantification of prediction confidence degradation in multi-step forecasting of advection-diffusion dynamics, and out-of-distribution detection in convolutional autoencoders. These results suggest Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KANs) is a promising architecture for uncertainty-aware learning in scientific machine learning. - oai:arXiv.org:2512.05306v2 + Addressing the Plasticity-Stability Dilemma in Reinforcement Learning + https://arxiv.org/abs/2512.01034 + arXiv:2512.01034v2 Announce Type: replace +Abstract: Neural networks have shown remarkable success in supervised learning when trained on a single task using a fixed dataset. However, when neural networks are trained on a reinforcement learning task, their ability to continue learning from new experiences declines over time. This decline in learning ability is known as plasticity loss. To restore plasticity, prior work has explored periodically resetting the parameters of the learning network, a strategy that often improves overall performance. However, such resets come at the cost of a temporary drop in performance, which can be dangerous in real-world settings. To overcome this instability, we introduce AltNet, a reset-based approach that restores plasticity without performance degradation by leveraging twin networks. The use of twin networks anchors performance during resets through a mechanism that allows networks to periodically alternate roles: one network learns as it acts in the environment, while the other learns off-policy from the active network's interactions and a replay buffer. At fixed intervals, the active network is reset and the passive network, having learned from prior experiences, becomes the new active network. 
AltNet restores plasticity, improving sample efficiency and achieving higher performance, while avoiding performance drops that pose risks in safety-critical settings. We demonstrate these advantages in several high-dimensional control tasks from the DeepMind Control Suite, where AltNet outperforms various relevant baseline methods, as well as state-of-the-art reset-based techniques.
      oai:arXiv.org:2512.01034v2
      cs.LG
      cs.AI
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Mansi Maheshwari, John C. Raisbeck, Bruno Castro da Silva
    
    
      China Regional 3km Downscaling Based on Residual Corrective Diffusion Model
      https://arxiv.org/abs/2512.05377
      arXiv:2512.05377v2 Announce Type: replace
Abstract: A fundamental challenge in numerical weather prediction is to efficiently produce high-resolution forecasts. A common solution is applying downscaling methods, which include dynamical downscaling and statistical downscaling, to the outputs of global models. This work focuses on statistical downscaling, which establishes statistical relationships between low-resolution and high-resolution historical data using statistical models. Deep learning has emerged as a powerful tool for this task, giving rise to various high-performance super-resolution models, which can be directly applied for downscaling, such as diffusion models and Generative Adversarial Networks. This work relies on a diffusion-based downscaling framework named CorrDiff. In contrast to the original work of CorrDiff, the region considered in this work is nearly 40 times larger, and we not only consider surface variables as in the original work, but also include high-level variables (six pressure levels) as target downscaling variables. In addition, a global residual connection is added to improve accuracy. 
In order to generate the 3km forecasts for the China region, we apply our trained models to the 25km global grid forecasts of CMA-GFS, an operational global model of the China Meteorological Administration (CMA), and SFF, a data-driven deep learning-based weather model developed from Spherical Fourier Neural Operators (SFNO). CMA-MESO, a high-resolution regional model, is chosen as the baseline model. The experimental results demonstrate that the forecasts downscaled by our method generally outperform the direct forecasts of CMA-MESO in terms of MAE for the target variables. Our forecasts of radar composite reflectivity show that CorrDiff, as a generative model, can generate fine-scale details that lead to more realistic predictions compared to the corresponding deterministic regression models. - oai:arXiv.org:2512.05377v2 - cs.LG - cs.AI - physics.ao-ph - Wed, 10 Dec 2025 00:00:00 -0500 + Generalised Medical Phrase Grounding + https://arxiv.org/abs/2512.01085 + arXiv:2512.01085v2 Announce Type: replace +Abstract: Medical phrase grounding (MPG) maps textual descriptions of radiological findings to corresponding image regions. These grounded reports are easier to interpret, especially for non-experts. Existing MPG systems mostly follow the referring expression comprehension (REC) paradigm and return exactly one bounding box per phrase. Real reports often violate this assumption. They contain multi-region findings, non-diagnostic text, and non-groundable phrases, such as negations or descriptions of normal anatomy. Motivated by this, we reformulate the task as generalised medical phrase grounding (GMPG), where each sentence is mapped to zero, one, or multiple scored regions. To realise this formulation, we introduce the first GMPG model: MedGrounder. We adopted a two-stage training regime: pre-training on report sentence--anatomy box alignment datasets and fine-tuning on report sentence--human annotated box datasets. 
Experiments on PadChest-GR and MS-CXR show that MedGrounder achieves strong zero-shot transfer and outperforms REC-style and grounded report generation baselines on multi-region and non-groundable phrases, while using far fewer human box annotations. Finally, we show that MedGrounder can be composed with existing report generators to produce grounded reports without retraining the generator. + oai:arXiv.org:2512.01085v2 + cs.CV + cs.CL + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Honglu Sun, Hao Jing, Zhixiang Dai, Sa Xiao, Wei Xue, Jian Sun, Qifeng Lu + http://creativecommons.org/licenses/by/4.0/ + Wenjun Zhang, Shekhar S. Chandra, Aaron Nicolson - Towards agent-based-model informed neural networks - https://arxiv.org/abs/2512.05764 - arXiv:2512.05764v2 Announce Type: replace -Abstract: In this article, we present a framework for designing neural networks that remain consistent with the underlying principles of agent-based models. We begin by highlighting the limitations of standard neural differential equations in modeling complex systems, where physical invariants (like energy) are often absent but other constraints (like mass conservation, information locality, bounded rationality) must be enforced. To address this, we introduce Agent-Based-Model informed Neural Networks (ABM-NNs), which leverage restricted graph neural networks and hierarchical decomposition to learn interpretable, structure-preserving dynamics. 
We validate the framework across three case studies of increasing complexity: (i) a generalized Generalized Lotka--Volterra system, where we recover ground-truth parameters from short trajectories in the presence of interventions; (ii) a graph-based SIR contagion model, where our method outperforms state-of-the-art graph learning baselines (GCN, GraphSAGE, Graph Transformer) in out-of-sample forecasting and noise robustness; and (iii) a real-world macroeconomic model of the ten largest economies, where we learn coupled GDP dynamics from empirical data and demonstrate counterfactual analysis for policy interventions.
      oai:arXiv.org:2512.05764v2
      cs.LG
      cs.SI
      nlin.AO
      physics.soc-ph
      Wed, 10 Dec 2025 00:00:00 -0500
      replace
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Nino Antulov-Fantulin
    
    
      JFR: An Efficient Jump Frontier Relaxation Strategy for Bellman-Ford
      https://arxiv.org/abs/2512.01802
      arXiv:2512.01802v3 Announce Type: replace
Abstract: We propose JFR, a Bellman-Ford-based optimization framework leveraging frontier contraction and abstract multi-hop jump propagation to accelerate shortest-path computation while strictly preserving correctness. JFR achieves substantial reductions in relaxation operations, ranging from 25 to 99 percent, across sparse, dense, and negative-edge graphs, ensuring robust performance even under adversarial or highly connected topologies. On ultra-large graphs with up to N=20,000 nodes and 295 million edges, JFR maintains strong operational reductions and comparable or improved runtime relative to SPFA-SLF, demonstrating consistent robustness across graph size and density. Lower relaxation counts imply reduced memory-access overheads and computational effort; this normalized work reduction highlights JFR's suitability for scenarios requiring high throughput or energy-conscious operation. 
Future work focuses on integrating high-performance queue structures, adaptive frontier strategies, and cache-aware techniques to further reduce constant-factor overheads and fully realize JFR's practical runtime potential. + oai:arXiv.org:2512.01802v3 + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Nino Antulov-Fantulin + Xin Wang, Xi Chen - TRACE: A Framework for Analyzing and Enhancing Stepwise Reasoning in Vision-Language Models - https://arxiv.org/abs/2512.05943 - arXiv:2512.05943v2 Announce Type: replace -Abstract: Reliable mathematical and scientific reasoning remains an open challenge for large vision-language models. Standard final-answer evaluation often masks reasoning errors, allowing silent failures to persist. To address this gap, we introduce TRACE, a framework for Transparent Reasoning And Consistency Evaluation that diagnoses reasoning trajectories rather than only end results. At its core, TRACE leverages Auxiliary Reasoning Sets, compact sub question answer pairs that decompose complex problems, evaluate intermediate steps through consistency-based metrics, and expose failures overlooked by standard evaluation. Our experiments show that consistency across ARS correlates with final-answer correctness and helps pinpoint the reasoning steps where failures arise, offering actionable signals for model improvement. Furthermore, TRACE defines confidence regions that distinguish reliable from unreliable reasoning paths, supporting effective filtering, debugging, and model refinement. - oai:arXiv.org:2512.05943v2 + Enhancing Floor Plan Recognition: A Hybrid Mix-Transformer and U-Net Approach for Precise Wall Segmentation + https://arxiv.org/abs/2512.02413 + arXiv:2512.02413v2 Announce Type: replace +Abstract: Automatic 3D reconstruction of indoor spaces from 2D floor plans necessitates high-precision semantic segmentation of structural elements, particularly walls. 
However, existing methods often struggle with detecting thin structures and maintaining geometric precision. This study introduces MitUNet, a hybrid neural network combining a Mix-Transformer encoder and a U-Net decoder enhanced with spatial and channel attention blocks. Our approach, optimized with the Tversky loss function, achieves a balance between precision and recall, ensuring accurate boundary recovery. Experiments on the CubiCasa5k dataset and a proprietary regional dataset demonstrate MitUNet's superiority in generating structurally correct masks with high boundary accuracy, outperforming standard models. This tool provides a robust foundation for automated 3D reconstruction pipelines. To ensure reproducibility and facilitate future research, the source code and the proprietary regional dataset are publicly available at https://github.com/aliasstudio/mitunet and https://doi.org/10.5281/zenodo.17871079 respectively. + oai:arXiv.org:2512.02413v2 + cs.CV cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Shima Imani, Seungwhan Moon, Lambert Mathias, Lu Zhang, Babak Damavandi + http://creativecommons.org/licenses/by/4.0/ + Dmitriy Parashchuk, Alexey Kapshitskiy, Yuriy Karyakin - Training-Time Action Conditioning for Efficient Real-Time Chunking - https://arxiv.org/abs/2512.05964 - arXiv:2512.05964v2 Announce Type: replace -Abstract: Real-time chunking (RTC) enables vision-language-action models (VLAs) to generate smooth, reactive robot trajectories by asynchronously predicting action chunks and conditioning on previously committed actions via inference-time inpainting. However, this inpainting method introduces computational overhead that increases inference latency. In this work, we propose a simple alternative: simulating inference delay at training time and conditioning on action prefixes directly, eliminating any inference-time overhead. 
Our method requires no modifications to the model architecture or robot runtime, and can be implemented with only a few additional lines of code. In simulated experiments, we find that training-time RTC outperforms inference-time RTC at higher inference delays. In real-world experiments on box building and espresso making tasks with the $\pi_{0.6}$ VLA, we demonstrate that training-time RTC maintains both task performance and speed parity with inference-time RTC while being computationally cheaper. Our results suggest that training-time action conditioning is a practical drop-in replacement for inference-time inpainting in real-time robot control. - oai:arXiv.org:2512.05964v2 - cs.RO - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Unsupervised Structural Scene Decomposition via Foreground-Aware Slot Attention with Pseudo-Mask Guidance + https://arxiv.org/abs/2512.02685 + arXiv:2512.02685v2 Announce Type: replace +Abstract: Recent advances in object-centric representation learning have shown that slot attention-based methods can effectively decompose visual scenes into object slot representations without supervision. However, existing approaches typically process foreground and background regions indiscriminately, often resulting in background interference and suboptimal instance discovery performance on real-world data. To address this limitation, we propose Foreground-Aware Slot Attention (FASA), a two-stage framework that explicitly separates foreground from background to enable precise object discovery. In the first stage, FASA performs a coarse scene decomposition to distinguish foreground from background regions through a dual-slot competition mechanism. These slots are initialized via a clustering-based strategy, yielding well-structured representations of salient regions. In the second stage, we introduce a masked slot attention mechanism where the first slot captures the background while the remaining slots compete to represent individual foreground objects. 
To further address over-segmentation of foreground objects, we incorporate pseudo-mask guidance derived from a patch affinity graph constructed with self-supervised image features to guide the learning of foreground slots. Extensive experiments on both synthetic and real-world datasets demonstrate that FASA consistently outperforms state-of-the-art methods, validating the effectiveness of explicit foreground modeling and pseudo-mask guidance for robust scene decomposition and object-coherent representation. Code will be made publicly available. + oai:arXiv.org:2512.02685v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Kevin Black, Allen Z. Ren, Michael Equi, Sergey Levine + Huankun Sheng, Ming Li, Yixiang Wei, Yeying Fan, Yu-Hui Wen, Tieliang Gong, Yong-Jin Liu - High-Throughput Unsupervised Profiling of the Morphology of 316L Powder Particles for Use in Additive Manufacturing - https://arxiv.org/abs/2512.06012 - arXiv:2512.06012v2 Announce Type: replace -Abstract: Selective Laser Melting (SLM) is a powder-bed additive manufacturing technique whose part quality depends critically on feedstock morphology. However, conventional powder characterization methods are low-throughput and qualitative, failing to capture the heterogeneity of industrial-scale batches. We present an automated, machine learning framework that couples high-throughput imaging with shape extraction and clustering to profile metallic powder morphology at scale. We develop and evaluate three clustering pipelines: an autoencoder pipeline, a shape-descriptor pipeline, and a functional-data pipeline. Across a dataset of approximately 126,000 powder images (0.5-102 micrometer diameter), internal validity metrics identify the Fourier-descriptor + k-means pipeline as the most effective, achieving the lowest Davies-Bouldin index and highest Calinski-Harabasz score while maintaining sub-millisecond runtime per particle on a standard desktop workstation. 
Although the present work focuses on establishing the morphological-clustering framework, the resulting shape groups form a basis for future studies examining their relationship to flowability, packing density, and SLM part quality. Overall, this unsupervised learning framework enables rapid, automated assessment of powder morphology and supports tracking of shape evolution across reuse cycles, offering a path toward real-time feedstock monitoring in SLM workflows. - oai:arXiv.org:2512.06012v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Numerical Analysis of the 2D Stochastic Navier-Stokes Equations: Convergence under Transport Noise and No-slip Boundary Conditions + https://arxiv.org/abs/2512.03483 + arXiv:2512.03483v2 Announce Type: replace +Abstract: This work is concerned with the numerical approximation of the two-dimensional stochastic Navier-Stokes equation with transport noise and no-slip boundary conditions on a convex polygonal domain. The analysis is challenged by the solution's low spatial regularity and the non-Lipschitz nonlinearity. We derive a convergence rate in the mean-square sense for a spatial semidiscretization. Furthermore, for the full discretization, we prove convergence in probability and establish an explicit rate with respect to the time step. + oai:arXiv.org:2512.03483v2 + math.NA + cs.NA + math.PR + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Emmanuel Akeweje, Conall Kirk, Chi-Wai Chan, Denis Dowling, Mimi Zhang + http://creativecommons.org/licenses/by/4.0/ + Binjie Li, Qin Zhou - A Broader View on Clustering under Cluster-Aware Norm Objectives - https://arxiv.org/abs/2512.06211 - arXiv:2512.06211v2 Announce Type: replace -Abstract: We revisit the $(f,g)$-clustering problem that we introduced in a recent work [SODA'25], and which subsumes fundamental clustering problems such as $k$-Center, $k$-Median, Min-Sum of Radii, and Min-Load $k$-Clustering. 
This problem assigns each of the $k$ clusters a cost determined by the monotone, symmetric norm $f$ applied to the vector distances in the cluster, and aims at minimizing the norm $g$ applied to the vector of cluster costs. Previously, we focused on certain special cases for which we designed constant-factor approximation algorithms. Our bounds for more general settings left, however, large gaps to the known bounds for the basic problems they capture. - In this work, we provide a clearer picture of the approximability of these more general settings. First, we design an $O(\log^2 n)$-approximation algorithm for $(f, L_{1})$-clustering for any $f$. This improves upon our previous $\widetilde{O}(\sqrt{n})$-approximation. Second, we provide an $O(k)$-approximation for the general $(f,g)$-clustering problem, which improves upon our previous $\widetilde{O}(\sqrt{kn})$-approximation algorithm and matches the best-known upper bound for Min-Load $k$-Clustering. - We then design an approximation algorithm for $(f,g)$-clustering that interpolates, up to polylog factors, between the best known bounds for $k$-Center, $k$-Median, Min-Sum of Radii, Min-Load $k$-Clustering, (Top, $L_{1}$)-clustering, and $(L_{\infty},g)$-clustering based on a newly defined parameter of $f$ and $g$. - oai:arXiv.org:2512.06211v2 - cs.DS + Cross-Space Synergy: A Unified Framework for Multimodal Emotion Recognition in Conversation + https://arxiv.org/abs/2512.03521 + arXiv:2512.03521v3 Announce Type: replace +Abstract: Multimodal Emotion Recognition in Conversation (MERC) aims to predict speakers' emotions by integrating textual, acoustic, and visual cues. Existing approaches either struggle to capture complex cross-modal interactions or experience gradient conflicts and unstable training when using deeper architectures. To address these issues, we propose Cross-Space Synergy (CSS), which couples a representation component with an optimization component. 
Synergistic Polynomial Fusion (SPF) serves the representation role, leveraging low-rank tensor factorization to efficiently capture high-order cross-modal interactions. Pareto Gradient Modulator (PGM) serves the optimization role, steering updates along Pareto-optimal directions across competing objectives to alleviate gradient conflicts and improve stability. Experiments show that CSS outperforms existing representative methods on IEMOCAP and MELD in both accuracy and training stability, demonstrating its effectiveness in complex multimodal scenarios. + oai:arXiv.org:2512.03521v3 + cs.MM cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Martin G. Herold, Evangelos Kipouridis, Joachim Spoerhase + Xiaosen Lyu, Jiayu Xiong, Yuren Chen, Wanlong Wang, Xiaoqing Dai, Jing Wang - HuPrior3R: Incorporating Human Priors for Better 3D Dynamic Reconstruction from Monocular Videos - https://arxiv.org/abs/2512.06368 - arXiv:2512.06368v2 Announce Type: replace -Abstract: Monocular dynamic video reconstruction faces significant challenges in dynamic human scenes due to geometric inconsistencies and resolution degradation issues. Existing methods lack 3D human structural understanding, producing geometrically inconsistent results with distorted limb proportions and unnatural human-object fusion, while memory-constrained downsampling causes human boundary drift toward background geometry. To address these limitations, we propose to incorporate hybrid geometric priors that combine SMPL human body models with monocular depth estimation. Our approach leverages structured human priors to maintain surface consistency while capturing fine-grained geometric details in human regions. 
We introduce HuPrior3R, featuring a hierarchical pipeline with refinement components that processes full-resolution images for overall scene geometry, then applies strategic cropping and cross-attention fusion for human-specific detail enhancement. The method integrates SMPL priors through a Feature Fusion Module to ensure geometrically plausible reconstruction while preserving fine-grained human boundaries. Extensive experiments on TUM Dynamics and GTA-IM datasets demonstrate superior performance in dynamic human reconstruction. - oai:arXiv.org:2512.06368v2 + MoReGen: Multi-Agent Motion-Reasoning Engine for Code-based Text-to-Video Synthesis + https://arxiv.org/abs/2512.04221 + arXiv:2512.04221v2 Announce Type: replace +Abstract: While text-to-video (T2V) generation has achieved remarkable progress in photorealism, generating intent-aligned videos that faithfully obey physics principles remains a core challenge. In this work, we systematically study Newtonian motion-controlled text-to-video generation and evaluation, emphasizing physical precision and motion coherence. We introduce MoReGen, a motion-aware, physics-grounded T2V framework that integrates multi-agent LLMs, physics simulators, and renderers to generate reproducible, physically accurate videos from text prompts in the code domain. To quantitatively assess physical validity, we propose object-trajectory correspondence as a direct evaluation metric and present MoReSet, a benchmark of 1,275 human-annotated videos spanning nine classes of Newtonian phenomena with scene descriptions, spatiotemporal relations, and ground-truth trajectories. Using MoReSet, we conduct experiments on existing T2V models, evaluating their physical validity through both our MoRe metrics and existing physics-based evaluators. Our results reveal that state-of-the-art models struggle to maintain physical validity, while MoReGen establishes a principled direction toward physically coherent video synthesis. 
+ oai:arXiv.org:2512.04221v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Weitao Xiong, Zhiyuan Yuan, Jiahao Lu, Chengfeng Zhao, Peng Li, Yuan Liu + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Xiangyu Bai, He Liang, Bishoy Galoaa, Utsav Nandi, Shayda Moezzi, Yuhang He, Sarah Ostadabbas - CommentScope: A Comment-Embedded Assisted Reading System for a Long Text - https://arxiv.org/abs/2512.06408 - arXiv:2512.06408v2 Announce Type: replace -Abstract: Long texts are ubiquitous on social platforms, yet readers often face information overload and struggle to locate key content. Comments provide valuable external perspectives for understanding, questioning, and complementing the text, but their potential is hindered by disorganized and unstructured presentation. Few studies have explored embedding comments directly into reading. As an exploratory step, we propose CommentScope, a system with two core modules: a pipeline that classifies comments into five types and aligns them with relevant sentences, and a presentation module that integrates comments inline or as side notes, supported by visual cues such as colors, charts, and highlights. Technical evaluation shows that the hybrid "Rule+LLM" pipeline achieved solid performance in semantic classification (accuracy=0.90) and position alignment (accuracy=0.88). A user study (N=12) further demonstrated that the sentence-end embedding significantly improved comment discovery accuracy and reading fluency while reducing mental demand and perceived effort. - oai:arXiv.org:2512.06408v2 - cs.HC - Wed, 10 Dec 2025 00:00:00 -0500 + Self-Paced and Self-Corrective Masked Prediction for Movie Trailer Generation + https://arxiv.org/abs/2512.04426 + arXiv:2512.04426v2 Announce Type: replace +Abstract: As a challenging video editing task, movie trailer generation involves selecting and reorganizing movie shots to create engaging trailers. 
Currently, most existing automatic trailer generation methods employ a "selection-then-ranking" paradigm (i.e., first selecting key shots and then ranking them), which suffers from inevitable error propagation and limits the quality of the generated trailers. Beyond this paradigm, we propose a new self-paced and self-corrective masked prediction method called SSMP, which achieves state-of-the-art results in automatic trailer generation via bi-directional contextual modeling and progressive self-correction. In particular, SSMP trains a Transformer encoder that takes the movie shot sequences as prompts and generates corresponding trailer shot sequences accordingly. The model is trained via masked prediction, reconstructing each trailer shot sequence from its randomly masked counterpart. The mask ratio is self-paced, allowing the task difficulty to adapt to the model and thereby improving model performance. When generating a movie trailer, the model fills the shot positions with high confidence at each step and re-masks the remaining positions for the next prediction, forming a progressive self-correction mechanism that is analogous to how human editors work. Both quantitative results and user studies demonstrate the superiority of SSMP in comparison to existing automatic movie trailer generation methods. Demo is available at: https://github.com/Dixin-Lab/SSMP. 
+ oai:arXiv.org:2512.04426v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shuai Chen (Jiangsu Ocean University, Lianyungang, China), Lei Han (Jiangsu Ocean University, Lianyungang, China), Haoyu Wang (Jiangsu Ocean University, Lianyungang, China), Zhaoman Zhong (Jiangsu Ocean University, Lianyungang, China) + Sidan Zhu, Hongteng Xu, Dixin Luo - LightSearcher: Efficient DeepSearch via Experiential Memory - https://arxiv.org/abs/2512.06653 - arXiv:2512.06653v2 Announce Type: replace -Abstract: DeepSearch paradigms have become a core enabler for deep reasoning models, allowing them to invoke external search tools to access up-to-date, domain-specific knowledge beyond parametric boundaries, thereby enhancing the depth and factual reliability of reasoning. Building upon this foundation, recent advances in reinforcement learning (RL) have further empowered models to autonomously and strategically control search tool usage, optimizing when and how to query external knowledge sources. Yet, these RL-driven DeepSearch systems often reveal a see-saw trade-off between accuracy and efficiency-frequent tool invocations can improve factual correctness but lead to unnecessary computational overhead and diminished efficiency. To address this challenge, we propose LightSearcher, an efficient RL framework that incorporates textual experiential memory by learning contrastive reasoning trajectories to generate interpretable summaries of successful reasoning patterns. In addition, it employs an adaptive reward shaping mechanism that penalizes redundant tool calls only in correct-answer scenarios. This design effectively balances the inherent accuracy-efficiency trade-off in DeepSearch paradigms. 
Experiments on four multi-hop QA benchmarks show that LightSearcher maintains accuracy comparable to SOTA baseline ReSearch, while reducing search tool invocations by 39.6%, inference time by 48.6%, and token consumption by 21.2%, demonstrating its superior efficiency. - oai:arXiv.org:2512.06653v2 + Persona-based Multi-Agent Collaboration for Brainstorming + https://arxiv.org/abs/2512.04488 + arXiv:2512.04488v2 Announce Type: replace +Abstract: We demonstrate the importance of persona-based multi-agents brainstorming for both diverse topics and subject matter ideation. Prior work has shown that generalized multi-agent collaboration often provides better reasoning than a single agent alone. In this paper, we propose and develop a framework for persona-based agent selection, showing how persona domain curation can improve brainstorming outcomes. Using multiple experimental setups, we evaluate brainstorming outputs across different persona pairings (e.g., Doctor vs VR Engineer) and A2A (agent-to-agent) dynamics (separate, together, separate-then-together). Our results show that (1) persona choice shapes idea domains, (2) collaboration mode shifts diversity of idea generation, and (3) multi-agent persona-driven brainstorming produces idea depth and cross-domain coverage. + oai:arXiv.org:2512.04488v2 cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hengzhi Lan, Yue Yu, Li Qian, Li Peng, Jie Wu, Wei Liu, Jian Luan, Ting Bai + http://creativecommons.org/licenses/by/4.0/ + Nate Straub, Saara Khan, Katharina Jay, Brian Cabral, Oskar Linde - Symmetry-Based Formation Control on Cycle Graphs Using Dihedral Point Groups - https://arxiv.org/abs/2512.06733 - arXiv:2512.06733v2 Announce Type: replace -Abstract: This work develops a symmetry-based framework for formation control on cycle graphs using Dihedral point-group constraints. 
We show that enforcing inter-agent reflection symmetries, together with anchoring a single designated agent to its prescribed mirror axis, is sufficient to realize every $\mathcal{C}_{nv}$-symmetric configuration using only $n-1$ communication links. The resulting control laws have a matrix-weighted Laplacian structure and guarantee exponential convergence to the desired symmetric configuration. Furthermore, we extend the method to enable coordinated maneuvers along a time-varying reference trajectory. Simulation results are provided to support the theoretical analysis. - oai:arXiv.org:2512.06733v2 - eess.SY - cs.SY - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + Distributed scalable coupled policy algorithm for networked multi-agent reinforcement learning + https://arxiv.org/abs/2512.05447 + arXiv:2512.05447v2 Announce Type: replace +Abstract: This paper studies networked multi-agent reinforcement learning (NMARL) with interdependent rewards and coupled policies. In this setting, each agent's reward depends on its own state-action pair as well as those of its direct neighbors, and each agent's policy is parameterized by its local parameters together with those of its $\kappa_{p}$-hop neighbors, with $\kappa_{p}\geq 1$ denoting the coupled radius. The objective of the agents is to collaboratively optimize their policies to maximize the discounted average cumulative reward. To address the challenge of interdependent policies in collaborative optimization, we introduce a novel concept termed the neighbors' averaged $Q$-function and derive a new expression for the coupled policy gradient. Based on these theoretical foundations, we develop a distributed scalable coupled policy (DSCP) algorithm, where each agent relies only on the state-action pairs of its $\kappa_{p}$-hop neighbors and the rewards of its $(\kappa_{p}+1)$-hop neighbors. 
Specifically, in the DSCP algorithm, we employ a geometric 2-horizon sampling method that does not require storing a full $Q$-table to obtain an unbiased estimate of the coupled policy gradient. Moreover, each agent interacts exclusively with its direct neighbors to obtain accurate policy parameters, while maintaining local estimates of other agents' parameters to execute its local policy and collect samples for optimization. These estimates and policy parameters are updated via a push-sum protocol, enabling distributed coordination of policy updates across the network. We prove that the joint policy produced by the proposed algorithm converges to a first-order stationary point of the objective function. Finally, the effectiveness of the DSCP algorithm is demonstrated through simulations in a robot path planning environment, showing clear improvement over state-of-the-art methods.
      oai:arXiv.org:2512.05447v2
      cs.MA
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Pengcheng Dai, Dongming Wang, Wenwu Yu, Wei Ren
    
    
      DoVer: Intervention-Driven Auto Debugging for LLM Multi-Agent Systems
      https://arxiv.org/abs/2512.06749
      arXiv:2512.06749v2 Announce Type: replace
Abstract: Large language model (LLM)-based multi-agent systems are challenging to debug because failures often arise from long, branching interaction traces. The prevailing practice is to leverage LLMs for log-based failure localization, attributing errors to a specific agent and step. However, this paradigm has two key limitations: (i) log-only debugging lacks validation, producing untested hypotheses, and (ii) single-step or single-agent attribution is often ill-posed, as we find that multiple distinct interventions can independently repair the failed task. 
To address the first limitation, we introduce DoVer, an intervention-driven debugging framework, which augments hypothesis generation with active verification through targeted interventions (e.g., editing messages, altering plans). For the second limitation, rather than evaluating on attribution accuracy, we focus on measuring whether the system resolves the failure or makes quantifiable progress toward task success, reflecting a more outcome-oriented view of debugging. Within the Magnetic-One agent framework, on the datasets derived from GAIA and AssistantBench, DoVer flips 18-28% of failed trials into successes, achieves up to 16% milestone progress, and validates or refutes 30-60% of failure hypotheses. DoVer also performs effectively on a different dataset (GSMPlus) and agent framework (AG2), where it recovers 49% of failed trials. These results highlight intervention as a practical mechanism for improving reliability in agentic systems and open opportunities for more robust, scalable debugging methods for LLM-based multi-agent systems. Project website and code will be available at https://aka.ms/DoVer. - oai:arXiv.org:2512.06749v2 - cs.AI - cs.SE - Wed, 10 Dec 2025 00:00:00 -0500 + ProbeWalk: Fast Estimation of Biharmonic Distance on Graphs via Probe-Driven Random Walks + https://arxiv.org/abs/2512.05460 + arXiv:2512.05460v2 Announce Type: replace +Abstract: The biharmonic distance is a fundamental metric on graphs that measures the dissimilarity between two nodes, capturing both local and global structures. It has found applications across various fields, including network centrality, graph clustering, and machine learning. These applications typically require efficient evaluation of pairwise biharmonic distances. However, existing algorithms remain computationally expensive. The state-of-the-art method attains an absolute-error guarantee epsilon_abs with time complexity O(L^5 / epsilon_abs^2), where L denotes the truncation length. 
In this work, we improve the complexity to O(L^3 / epsilon^2) under a relative-error guarantee epsilon via probe-driven random walks. We provide a relative-error guarantee rather than an absolute-error guarantee because biharmonic distances vary by orders of magnitude across node pairs. Since L is often very large in real-world networks (for example, L >= 10^3), reducing the L-dependence from the fifth to the third power yields substantial gains. Extensive experiments on real-world networks show that our method delivers 10x-1000x per-query speedups at matched relative error over strong baselines and scales to graphs with tens of millions of nodes. + oai:arXiv.org:2512.05460v2 + cs.SI + cs.DS + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Ming Ma, Jue Zhang, Fangkai Yang, Yu Kang, Qingwei Lin, Tianming Yang, Saravan Rajmohan, Dongmei Zhang + Dehong Zheng, Zhongzhi Zhang - db-LaCAM: Fast and Scalable Multi-Robot Kinodynamic Motion Planning with Discontinuity-Bounded Search and Lightweight MAPF - https://arxiv.org/abs/2512.06796 - arXiv:2512.06796v2 Announce Type: replace -Abstract: State-of-the-art multi-robot kinodynamic motion planners struggle to handle more than a few robots due to high computational burden, which limits their scalability and results in slow planning time. In this work, we combine the scalability and speed of modern multi-agent path finding (MAPF) algorithms with the dynamic-awareness of kinodynamic planners to address these limitations. To this end, we propose discontinuity-Bounded LaCAM (db-LaCAM), a planner that utilizes a precomputed set of motion primitives that respect robot dynamics to generate horizon-length motion sequences, while allowing a user-defined discontinuity between successive motions. The planner db-LaCAM is resolution-complete with respect to motion primitives and supports arbitrary robot dynamics. 
Extensive experiments demonstrate that db-LaCAM scales efficiently to scenarios with up to 50 robots, achieving up to ten times faster runtime compared to state-of-the-art planners, while maintaining comparable solution quality. The approach is validated in both 2D and 3D environments with dynamics such as the unicycle and 3D double integrator. We demonstrate the safe execution of trajectories planned with db-LaCAM in two distinct physical experiments involving teams of flying robots and car-with-trailer robots. - oai:arXiv.org:2512.06796v2 - cs.RO - Wed, 10 Dec 2025 00:00:00 -0500 + Bring Your Dreams to Life: Continual Text-to-Video Customization + https://arxiv.org/abs/2512.05802 + arXiv:2512.05802v2 Announce Type: replace +Abstract: Customized text-to-video generation (CTVG) has recently witnessed great progress in generating tailored videos from user-specific text. However, most CTVG methods assume that personalized concepts remain static and do not expand incrementally over time. Additionally, they struggle with forgetting and concept neglect when continuously learning new concepts, including subjects and motions. To resolve the above challenges, we develop a novel Continual Customized Video Diffusion (CCVD) model, which can continuously learn new concepts to generate videos across various text-to-video generation tasks by tackling forgetting and concept neglect. To address catastrophic forgetting, we introduce a concept-specific attribute retention module and a task-aware concept aggregation strategy. They can capture the unique characteristics and identities of old concepts during training, while combining all subject and motion adapters of old concepts based on their relevance during testing. Besides, to tackle concept neglect, we develop a controllable conditional synthesis to enhance regional features and align video contexts with user conditions, by incorporating layer-specific region attention-guided noise estimation. 
Extensive experimental comparisons demonstrate that our CCVD outperforms existing CTVG baselines on both the DreamVideo and Wan 2.1 backbones. The code is available at https://github.com/JiahuaDong/CCVD. 
      oai:arXiv.org:2512.05802v2
      cs.CV
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Jiahua Dong, Xudong Wang, Wenqi Liang, Zongyan Han, Meng Cao, Duzhen Zhang, Hanbin Zhao, Zhi Han, Salman Khan, Fahad Shahbaz Khan
    
    
      Khalasi: Energy-Efficient Navigation for Surface Vehicles in Vortical Flow Fields
      https://arxiv.org/abs/2512.06912
      arXiv:2512.06912v2 Announce Type: replace 
Abstract: For centuries, khalasi (Gujarati for sailor) have skillfully harnessed ocean currents to navigate vast waters with minimal effort. Emulating this intuition in autonomous systems remains a significant challenge, particularly for Autonomous Surface Vehicles tasked with long duration missions under strict energy budgets. In this work, we present a learning-based approach for energy-efficient surface vehicle navigation in vortical flow fields, where partial observability often undermines traditional path-planning methods. We present an end-to-end reinforcement learning framework based on Soft Actor Critic that learns flow-aware navigation policies using only local velocity measurements. Through extensive evaluation across diverse and dynamically rich scenarios, our method demonstrates substantial energy savings and robust generalization to previously unseen flow conditions, offering a promising path toward long term autonomy in ocean environments. The navigation paths generated by our proposed approach show an improvement in energy conservation of 30 to 50 percent compared to existing state-of-the-art techniques. 
- oai:arXiv.org:2512.06912v2 + Toward Efficient and Robust Behavior Models for Multi-Agent Driving Simulation + https://arxiv.org/abs/2512.05812 + arXiv:2512.05812v2 Announce Type: replace +Abstract: Scalable multi-agent driving simulation requires behavior models that are both realistic and computationally efficient. We address this by optimizing the behavior model that controls individual traffic participants. To improve efficiency, we adopt an instance-centric scene representation, where each traffic participant and map element is modeled in its own local coordinate frame. This design enables efficient, viewpoint-invariant scene encoding and allows static map tokens to be reused across simulation steps. To model interactions, we employ a query-centric symmetric context encoder with relative positional encodings between local frames. We use Adversarial Inverse Reinforcement Learning to learn the behavior model and propose an adaptive reward transformation that automatically balances robustness and realism during training. Experiments demonstrate that our approach scales efficiently with the number of tokens, significantly reducing training and inference times, while outperforming several agent-centric baselines in terms of positional accuracy and robustness. + oai:arXiv.org:2512.05812v2 cs.RO - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Rushiraj Gadhvi, Sandeep Manjanna - - - Multi-Docker-Eval: A `Shovel of the Gold Rush' Benchmark on Automatic Environment Building for Software Engineering - https://arxiv.org/abs/2512.06915 - arXiv:2512.06915v2 Announce Type: replace -Abstract: Automated environment configuration is a critical bottleneck in scaling software engineering (SWE) automation. To provide a reliable evaluation standard for this task, we present Multi-Docker-Eval benchmark. 
It includes 40 real-world repositories spanning 9 programming languages and measures both success in achieving executable states and efficiency under realistic constraints. Our extensive evaluation of state-of-the-art LLMs and agent frameworks reveals key insights: (1) the overall success rate of current models is low (F2P at most 37.7%), with environment construction being the primary bottleneck; (2) model size and reasoning length are not decisive factors, and open-source models like DeepSeek-V3.1 and Kimi-K2 are competitive in both efficiency and effectiveness; (3) the agent framework and programming language also have a significant influence on the success rate. These findings provide actionable guidelines for building scalable, fully automated SWE pipelines. 
      oai:arXiv.org:2512.06915v2
      cs.SE
      Wed, 10 Dec 2025 00:00:00 -0500
      replace
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Kelin Fu, Tianyu Liu, Zeyu Shang, Yingwei Ma, Jian Yang, Jiaheng Liu, Kaigui Bian
    
    
      Evaluating and Preserving High-level Fidelity in Super-Resolution
      https://arxiv.org/abs/2512.07037
      arXiv:2512.07037v2 Announce Type: replace 
Abstract: Recent image Super-Resolution (SR) models are achieving impressive effects in reconstructing details and delivering visually pleasant outputs. However, the overpowering generative ability can sometimes hallucinate and thus change the image content despite gaining high visual quality. This type of high-level change can be easily identified by humans yet is not well-studied in existing low-level image quality metrics. In this paper, we establish the importance of measuring high-level fidelity for SR models as a complementary criterion to reveal the reliability of generative SR models. 
We construct the first annotated dataset with fidelity scores from different SR models, and evaluate how state-of-the-art (SOTA) SR models actually perform in preserving high-level fidelity. Based on the dataset, we then analyze how existing image quality metrics correlate with fidelity measurement, and further show that this high-level task can be better addressed by foundation models. Finally, by fine-tuning SR models based on our fidelity feedback, we show that both semantic fidelity and perceptual quality can be improved, demonstrating the potential value of our proposed criteria, both in model evaluation and optimization. We will release the dataset, code, and models upon acceptance. - oai:arXiv.org:2512.07037v2 - cs.CV + Utility Boundary of Dataset Distillation: Scaling and Configuration-Coverage Laws + https://arxiv.org/abs/2512.05817 + arXiv:2512.05817v3 Announce Type: replace +Abstract: Dataset distillation (DD) aims to construct compact synthetic datasets that allow models to achieve comparable performance to full-data training while substantially reducing storage and computation. Despite rapid empirical progress, its theoretical foundations remain limited: existing methods (gradient, distribution, trajectory matching) are built on heterogeneous surrogate objectives and optimization assumptions, which makes it difficult to analyze their common principles or provide general guarantees. Moreover, it is still unclear under what conditions distilled data can retain the effectiveness of full datasets when the training configuration, such as optimizer, architecture, or augmentation, changes. 
To answer these questions, we propose a unified theoretical framework, termed configuration--dynamics--error analysis, which reformulates major DD approaches under a common generalization-error perspective and provides two main results: (i) a scaling law that provides a single-configuration upper bound, characterizing how the error decreases as the distilled sample size increases and explaining the commonly observed performance saturation effect; and (ii) a coverage law showing that the required distilled sample size scales linearly with configuration diversity, with provably matching upper and lower bounds. In addition, our unified analysis reveals that various matching methods are interchangeable surrogates, reducing the same generalization error, clarifying why they can all achieve dataset distillation and providing guidance on how surrogate choices affect sample efficiency and robustness. Experiments across diverse methods and configurations empirically confirm the derived laws, advancing a theoretical foundation for DD and enabling theory-driven design of compact, configuration-robust dataset distillation. + oai:arXiv.org:2512.05817v3 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Josep M. Rocafort, Shaolin Su, Alexandra Gomez-Villa, Javier Vazquez-Corral + Zhengquan Luo, Zhiqiang Xu - VIGIL: A Reflective Runtime for Self-Healing Agents - https://arxiv.org/abs/2512.07094 - arXiv:2512.07094v2 Announce Type: replace -Abstract: Agentic LLM frameworks promise autonomous behavior via task decomposition, tool use, and iterative planning, but most deployed systems remain brittle. They lack runtime introspection, cannot diagnose their own failure modes, and do not improve over time without human intervention. In practice, many agent stacks degrade into decorated chains of LLM calls with no structural mechanisms for reliability. 
We present VIGIL (Verifiable Inspection and Guarded Iterative Learning), a reflective runtime that supervises a sibling agent and performs autonomous maintenance rather than task execution. VIGIL ingests behavioral logs, appraises each event into a structured emotional representation, maintains a persistent EmoBank with decay and contextual policies, and derives an RBT diagnosis that sorts recent behavior into strengths, opportunities, and failures. From this analysis, VIGIL generates both guarded prompt updates that preserve core identity semantics and read only code proposals produced by a strategy engine that operates on log evidence and code hotspots. VIGIL functions as a state gated pipeline. Illegal transitions produce explicit errors rather than allowing the LLM to improvise. In a reminder latency case study, VIGIL identified elevated lag, proposed prompt and code repairs, and when its own diagnostic tool failed due to a schema conflict, it surfaced the internal error, produced a fallback diagnosis, and emitted a repair plan. This demonstrates meta level self repair in a deployed agent runtime. - oai:arXiv.org:2512.07094v2 - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + InstructMPC: A Human-LLM-in-the-Loop Framework for Context-Aware Power Grid Control + https://arxiv.org/abs/2512.05876 + arXiv:2512.05876v3 Announce Type: replace +Abstract: The transition toward power grids with high renewable penetration demands context-aware decision making frameworks. Traditional operational paradigms, which rely on static optimization of history-based load forecasting, often fail to capture the complex nature of real-time operational conditions, such as operator-issued maintenance mandates, emergency topology changes, or event-driven load surges. To address this challenge, we introduce InstructMPC, a closed-loop framework that integrates Large Language Models (LLMs) to generate context-aware predictions, enabling the controller to optimize power system operation. 
Our method employs a Contextual Disturbances Predictor (CDP) module to translate contextual information into predictive disturbance trajectories, which are then incorporated into the Model Predictive Control (MPC) optimization. Unlike conventional open-loop forecasting frameworks, InstructMPC features an online tuning mechanism where the predictor's parameters are continuously updated based on the realized control cost with a theoretical guarantee, achieving a regret bound of $O(\sqrt{T \log T})$ for linear dynamics when optimized via a tailored loss function, ensuring task-aware learning and adaptation to non-stationary grid conditions. 
      oai:arXiv.org:2512.05876v3
      eess.SY
      cs.SY
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by-nc-sa/4.0/
      Ruixiang Wu, Jiahao Ai, Tinko Sebastian Bartels, Tongxin Li
    
    
      COREA: Coarse-to-Fine 3D Representation Alignment Between Relightable 3D Gaussians and SDF via Bidirectional 3D-to-3D Supervision
      https://arxiv.org/abs/2512.07107
      arXiv:2512.07107v2 Announce Type: replace 
Abstract: We present COREA, the first unified framework that jointly learns relightable 3D Gaussians and a Signed Distance Field (SDF) for accurate geometry reconstruction and faithful relighting. While recent 3D Gaussian Splatting (3DGS) methods have extended toward mesh reconstruction and physically-based rendering (PBR), their geometry is still learned from 2D renderings, leading to coarse surfaces and unreliable BRDF-lighting decomposition. To address these limitations, COREA introduces a coarse-to-fine bidirectional 3D-to-3D alignment strategy that allows geometric signals to be learned directly in 3D space. Within this strategy, depth provides coarse alignment between the two representations, while depth gradients and normals refine fine-scale structure, and the resulting geometry supports stable BRDF-lighting decomposition. 
A density-control mechanism further stabilizes Gaussian growth, balancing geometric fidelity with memory efficiency. Experiments on standard benchmarks demonstrate that COREA achieves superior performance in novel-view synthesis, mesh reconstruction, and PBR within a unified framework. - oai:arXiv.org:2512.07107v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + The Tragedy of Productivity: A Unified Framework for Diagnosing Coordination Failures in Labor Markets and AI Governance + https://arxiv.org/abs/2512.05995 + arXiv:2512.05995v2 Announce Type: replace +Abstract: Despite productivity increasing eightfold since Keynes's 1930 prediction of 15-hour workweeks, workers globally still work roughly double these hours. Separately, AI development accelerates despite existential risk warnings from leading researchers. We demonstrate these failures share identical game-theoretic structure: coordination failures where individually rational choices produce collectively suboptimal outcomes. + We synthesize five necessary and sufficient conditions characterizing such coordination failures as structural tragedies: N-player structure, binary choices with negative externalities, dominance where defection yields higher payoffs, Pareto-inefficiency where cooperation dominates mutual defection, and enforcement difficulty from structural barriers. We validate this framework across canonical cases and extend it through condition intensities, introducing a Tragedy Index revealing governance of transformative AI breakthroughs faces orders-of-magnitude greater coordination difficulty than climate change or nuclear weapons. + Applied to productivity competition, we prove firms face coordination failure preventing productivity gains from translating to worker welfare. European evidence shows that even under favorable conditions, productivity-welfare decoupling persists. 
Applied to AI governance, we demonstrate development faces the same structure but with amplified intensity across eight dimensions compared to successful arms control, making coordination structurally more difficult than for nuclear weapons. The Russia-Ukraine drone war validates this: both sides escalated from dozens to thousands of drones monthly within two years despite prior governance dialogue. + The analysis is diagnostic rather than prescriptive, identifying structural barriers to coordination rather than proposing solutions. + oai:arXiv.org:2512.05995v2 + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Jaeyoon Lee, Hojoon Jung, Sungtae Hwang, Jihyong Oh, Jongwon Choi + http://creativecommons.org/licenses/by/4.0/ + Ali Dasdan - TrajMoE: Scene-Adaptive Trajectory Planning with Mixture of Experts and Reinforcement Learning - https://arxiv.org/abs/2512.07135 - arXiv:2512.07135v2 Announce Type: replace -Abstract: Current autonomous driving systems often favor end-to-end frameworks, which take sensor inputs like images and learn to map them into trajectory space via neural networks. Previous work has demonstrated that models can achieve better planning performance when provided with a prior distribution of possible trajectories. However, these approaches often overlook two critical aspects: 1) The appropriate trajectory prior can vary significantly across different driving scenarios. 2) Their trajectory evaluation mechanism lacks policy-driven refinement, remaining constrained by the limitations of one-stage supervised training. To address these issues, we explore improvements in two key areas. For problem 1, we employ MoE to apply different trajectory priors tailored to different scenarios. For problem 2, we utilize Reinforcement Learning to fine-tune the trajectory scoring mechanism. Additionally, we integrate models with different perception backbones to enhance perceptual features. 
Our integrated model achieved a score of 51.08 on the navsim ICCV benchmark, securing third place. - oai:arXiv.org:2512.07135v2 - cs.CV - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Privacy Loss of Noise Perturbation via Concentration Analysis of A Product Measure + https://arxiv.org/abs/2512.06253 + arXiv:2512.06253v2 Announce Type: replace +Abstract: Noise perturbation is one of the most fundamental approaches for achieving $(\epsilon,\delta)$-differential privacy (DP) guarantees when releasing the result of a query or function $f(\cdot)\in\mathbb{R}^M$ evaluated on a sensitive dataset $\mathbf{x}$. In this approach, calibrated noise $\mathbf{n}\in\mathbb{R}^M$ is used to obscure the difference vector $f(\mathbf{x})-f(\mathbf{x}')$, where $\mathbf{x}'$ is known as a neighboring dataset. A DP guarantee is obtained by studying the tail probability bound of a privacy loss random variable (PLRV), defined as the Radon-Nikodym derivative between two distributions. When $\mathbf{n}$ follows a multivariate Gaussian distribution, the PLRV is characterized as a specific univariate Gaussian. In this paper, we propose a novel scheme to generate $\mathbf{n}$ by leveraging the fact that the perturbation noise is typically spherically symmetric (i.e., the distribution is rotationally invariant around the origin). The new noise generation scheme allows us to investigate the privacy loss from a geometric perspective and express the resulting PLRV using a product measure, $W\times U$; measure $W$ is related to a radius random variable controlling the magnitude of $\mathbf{n}$, while measure $U$ involves a directional random variable governing the angle between $\mathbf{n}$ and the difference $f(\mathbf{x})-f(\mathbf{x}')$. We derive a closed-form moment bound on the product measure to prove $(\epsilon,\delta)$-DP. 
Under the same $(\epsilon,\delta)$-DP guarantee, our mechanism yields a smaller expected noise magnitude than the classic Gaussian noise in high dimensions, thereby significantly improving the utility of the noisy result $f(\mathbf{x})+\mathbf{n}$. To validate this, we consider convex and non-convex empirical risk minimization (ERM) problems in high dimensional space and apply the proposed product noise to achieve privacy. 
      oai:arXiv.org:2512.06253v2
      cs.CR
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by-nc-nd/4.0/
      Shuainan Liu, Tianxi Ji, Zhongshuo Fang, Lu Wei, Pan Li
    
    
      CHIMERA: Adaptive Cache Injection and Semantic Anchor Prompting for Zero-shot Image Morphing with Morphing-oriented Metrics
      https://arxiv.org/abs/2512.07155
      arXiv:2512.07155v2 Announce Type: replace 
Abstract: Diffusion models exhibit remarkable generative ability, yet achieving smooth and semantically consistent image morphing remains a challenge. Existing approaches often yield abrupt transitions or over-saturated appearances due to the lack of adaptive structural and semantic alignments. We propose CHIMERA, a zero-shot diffusion-based framework that formulates morphing as a cached inversion-guided denoising process. To handle large semantic and appearance disparities, we propose Adaptive Cache Injection and Semantic Anchor Prompting. Adaptive Cache Injection (ACI) caches down-, mid-, and up-block features from both inputs during DDIM inversion and re-injects them adaptively during denoising, enabling spatial and semantic alignment in a depth- and time-adaptive manner, and allowing natural feature fusion and smooth transitions. 
Semantic Anchor Prompting (SAP) leverages a vision-language model to generate a shared anchor prompt that serves as a semantic anchor, bridging dissimilar inputs and guiding the denoising process toward coherent results. Finally, we introduce the Global-Local Consistency Score (GLCS), a morphing-oriented metric that simultaneously evaluates the global harmonization of the two inputs and the smoothness of the local morphing transition. Extensive experiments and user studies show that CHIMERA achieves smoother and more semantically aligned transitions than existing methods, establishing a new state of the art in image morphing. The code and project page will be publicly released. - oai:arXiv.org:2512.07155v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Distributionally Robust Kalman Filter + https://arxiv.org/abs/2512.06286 + arXiv:2512.06286v2 Announce Type: replace +Abstract: In this work, we propose a noise-centric formulation of the distributionally robust Kalman filter (DRKF) for discrete-time linear stochastic systems with uncertain noise statistics. By placing Wasserstein ambiguity sets directly on the process and measurement noise distributions, the proposed DRKF preserves the analytical structure of the classical Kalman filter while providing a priori spectral bounds on all feasible covariances. In the time-invariant setting, we derive a steady-state DRKF from a single stationary semidefinite program, yielding a constant-gain estimator with the same per-step computational complexity as the standard Kalman filter. We establish conditions guaranteeing the existence, uniqueness, and convergence of this steady-state solution, and we prove its asymptotic minimax optimality with respect to the worst-case mean-square error. 
Numerical experiments validate the theory and demonstrate that the proposed DRKF improves estimation accuracy under unknown or uncertain noise models while offering computational advantages over existing robust and distributionally robust filters. 
      oai:arXiv.org:2512.06286v2
      eess.SY
      cs.SY
      math.OC
      Thu, 11 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by-nc-nd/4.0/
      Minhyuk Jang, Astghik Hakobyan, Insoon Yang
    
    
      VFM-VLM: Vision Foundation Model and Vision Language Model based Visual Comparison for 3D Pose Estimation
      https://arxiv.org/abs/2512.07215
      arXiv:2512.07215v2 Announce Type: replace 
Abstract: Vision Foundation Models (VFMs) and Vision Language Models (VLMs) have revolutionized computer vision by providing rich semantic and geometric representations. This paper presents a comprehensive visual comparison between CLIP-based and DINOv2-based approaches for 3D pose estimation in hand-object grasping scenarios. We evaluate both models on the task of 6D object pose estimation and demonstrate their complementary strengths: CLIP excels in semantic understanding through language grounding, while DINOv2 provides superior dense geometric features. Through extensive experiments on benchmark datasets, we show that CLIP-based methods achieve better semantic consistency, while DINOv2-based approaches demonstrate competitive performance with enhanced geometric precision. Our analysis provides insights for selecting appropriate vision models for robotic manipulation, grasping, and picking applications. 
- oai:arXiv.org:2512.07215v2 - cs.CV + Proportional integral derivative booster for neural networks-based time-series prediction: Case of water demand prediction + https://arxiv.org/abs/2512.06357 + arXiv:2512.06357v2 Announce Type: replace +Abstract: Multi-step time-series prediction is an essential supportive step for decision-makers in several industrial areas. Artificial intelligence techniques, which use a neural network component in various forms, have recently frequently been used to accomplish this step. However, the complexity of the neural network structure still stands up as a critical problem against prediction accuracy. In this paper, a method inspired by the proportional-integral-derivative (PID) control approach is investigated to enhance the performance of neural network models used for multi-step ahead prediction of periodic time-series information while maintaining a negligible impact on the complexity of the system. The PID-based method is applied to the predicted value at each time step to bring that value closer to the real value. The water demand forecasting problem is considered as a case study, where two deep neural network models from the literature are used to prove the effectiveness of the proposed boosting method. Furthermore, to prove the applicability of this PID-based booster to other types of periodic time-series prediction problems, it is applied to enhance the accuracy of a neural network model used for multi-step forecasting of hourly energy consumption. The comparison between the results of the original prediction models and the results after using the proposed technique demonstrates the superiority of the proposed method in terms of prediction accuracy and system complexity. 
+ oai:arXiv.org:2512.06357v2 + cs.LG cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Md Selim Sarowar, Sungho Kim + 10.1016/j.engappai.2021.104570 + Tony Salloom, Okyay Kaynak, Xinbo Yub, Wei He - Pay Less Attention to Function Words for Free Robustness of Vision-Language Models - https://arxiv.org/abs/2512.07222 - arXiv:2512.07222v2 Announce Type: replace -Abstract: To address the trade-off between robustness and performance for robust VLM, we observe that function words could incur vulnerability of VLMs against cross-modal adversarial attacks, and propose Function-word De-Attention (FDA) accordingly to mitigate the impact of function words. Similar to differential amplifiers, our FDA calculates the original and the function-word cross-attention within attention heads, and differentially subtracts the latter from the former for more aligned and robust VLMs. Comprehensive experiments include 2 SOTA baselines under 6 different attacks on 2 downstream tasks, 3 datasets, and 3 models. Overall, our FDA yields an average 18/13/53% ASR drop with only 0.2/0.3/0.6% performance drops on the 3 tested models on retrieval, and a 90% ASR drop with a 0.3% performance gain on visual grounding. We demonstrate the scalability, generalization, and zero-shot performance of FDA experimentally, as well as in-depth ablation studies and analysis. Code will be made publicly at https://github.com/michaeltian108/FDA. - oai:arXiv.org:2512.07222v2 - cs.LG - cs.CL - Wed, 10 Dec 2025 00:00:00 -0500 + When Does Regulation by Insurance Work? The Case of Frontier AI + https://arxiv.org/abs/2512.06597 + arXiv:2512.06597v2 Announce Type: replace +Abstract: No one doubts the utility of insurance for its ability to spread risk or streamline claims management; much debated is when and how insurance uptake can improve welfare by reducing harm, despite moral hazard. 
Proponents and dissenters of "regulation by insurance" have now documented a number of cases of insurers succeeding or failing to have such a net regulatory effect (in contrast with a net hazard effect). Collecting these examples together and drawing on an extensive economics literature, this Article develops a principled framework for evaluating insurance uptake's effect in a given context. The presence of certain distortions - including judgment-proofness, competitive dynamics, and behavioral biases - creates potential for a net regulatory effect. How much of that potential gets realized then depends on the type of policyholder, type of risk, type of insurer, and the structure of the insurance market. The analysis suggests regulation by insurance can be particularly effective for catastrophic non-product accidents where market mechanisms provide insufficient discipline and psychological biases are strongest. As a demonstration, the framework is applied to the frontier AI industry, revealing significant potential for a net regulatory effect but also the need for policy intervention to realize that potential. One option is a carefully designed mandate that encourages forming a specialized insurer or mutual, focuses on catastrophic rather than routine risks, and bars pure captives. + oai:arXiv.org:2512.06597v2 + cs.CY + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Qiwei Tian, Chenhao Lin, Zhengyu Zhao, Chao Shen + Cristian Trout - How Do LLMs Fail In Agentic Scenarios? A Qualitative Analysis of Success and Failure Scenarios of Various LLMs in Agentic Simulations - https://arxiv.org/abs/2512.07497 - arXiv:2512.07497v2 Announce Type: replace -Abstract: We investigate how large language models (LLMs) fail when operating as autonomous agents with tool-use capabilities. 
Using the Kamiwaza Agentic Merit Index (KAMI) v0.1 benchmark, we analyze 900 execution traces from three representative models - Granite 4 Small, Llama 4 Maverick, and DeepSeek V3.1 - across filesystem, text extraction, CSV analysis, and SQL scenarios. Rather than focusing on aggregate scores, we perform fine-grained, per-trial behavioral analysis to surface the strategies that enable successful multi-step tool execution and the recurrent failure modes that undermine reliability. Our findings show that model scale alone does not predict agentic robustness: Llama 4 Maverick (400B) performs only marginally better than Granite 4 Small (32B) in some uncertainty-driven tasks, while DeepSeek V3.1's superior reliability derives primarily from post-training reinforcement learning rather than architecture or size. Across models, we identify four recurring failure archetypes: premature action without grounding, over-helpfulness that substitutes missing entities, vulnerability to distractor-induced context pollution, and fragile execution under load. These patterns highlight the need for agentic evaluation methods that emphasize interactive grounding, recovery behavior, and environment-aware adaptation, suggesting that reliable enterprise deployment requires not just stronger models but deliberate training and design choices that reinforce verification, constraint discovery, and adherence to source-of-truth data. - oai:arXiv.org:2512.07497v2 + Financial Fraud Identification and Interpretability Study for Listed Companies Based on Convolutional Neural Network + https://arxiv.org/abs/2512.06648 + arXiv:2512.06648v2 Announce Type: replace +Abstract: Since the emergence of joint-stock companies, financial fraud by listed firms has repeatedly undermined capital markets. Fraud is difficult to detect because of covert tactics and the high labor and time costs of audits. 
Traditional statistical models are interpretable but struggle with nonlinear feature interactions, while machine learning models are powerful but often opaque. In addition, most existing methods judge fraud only for the current year based on current-year data, limiting timeliness. + This paper proposes a financial fraud detection framework for Chinese A-share listed companies based on convolutional neural networks (CNNs). We design a feature engineering scheme that transforms firm-year panel data into image-like representations, enabling the CNN to capture cross-sectional and temporal patterns and to predict fraud in advance. Experiments show that the CNN outperforms logistic regression and LightGBM in accuracy, robustness, and early-warning performance, and that proper tuning of the classification threshold is crucial in high-risk settings. + To address interpretability, we analyze the model along the dimensions of entity, feature, and time using local explanation techniques. We find that solvency, ratio structure, governance structure, and internal control are general predictors of fraud, while environmental indicators matter mainly in high-pollution industries. Non-fraud firms share stable feature patterns, whereas fraud firms exhibit heterogeneous patterns concentrated in short time windows. A case study of Guanong Shares in 2022 shows that cash flow analysis, social responsibility, governance structure, and per-share indicators are the main drivers of the model's fraud prediction, consistent with the company's documented misconduct. + oai:arXiv.org:2512.06648v2 + cs.LG cs.AI cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ + Xiao Li - MeshRipple: Structured Autoregressive Generation of Artist-Meshes - https://arxiv.org/abs/2512.07514 - arXiv:2512.07514v2 Announce Type: replace -Abstract: Meshes serve as a primary representation for 3D assets. 
Autoregressive mesh generators serialize faces into sequences and train on truncated segments with sliding-window inference to cope with memory limits. However, this mismatch breaks long-range geometric dependencies, producing holes and fragmented components. To address this critical limitation, we introduce MeshRipple, which expands a mesh outward from an active generation frontier, akin to a ripple on a surface. MeshRipple rests on three key innovations: a frontier-aware BFS tokenization that aligns the generation order with surface topology; an expansive prediction strategy that maintains coherent, connected surface growth; and a sparse-attention global memory that provides an effectively unbounded receptive field to resolve long-range topological dependencies. This integrated design enables MeshRipple to generate meshes with high surface fidelity and topological completeness, outperforming strong recent baselines. - oai:arXiv.org:2512.07514v2 - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junkai Lin, Hang Long, Huipeng Guo, Jielei Zhang, JiaYi Yang, Tianle Guo, Yang Yang, Jianwen Li, Wenxiao Zhang, Matthias Nie{\ss}ner, Wei Yang + LightSearcher: Efficient DeepSearch via Experiential Memory + https://arxiv.org/abs/2512.06653 + arXiv:2512.06653v3 Announce Type: replace +Abstract: DeepSearch paradigms have become a core enabler for deep reasoning models, allowing them to invoke external search tools to access up-to-date, domain-specific knowledge beyond parametric boundaries, thereby enhancing the depth and factual reliability of reasoning. Building upon this foundation, recent advances in reinforcement learning (RL) have further empowered models to autonomously and strategically control search tool usage, optimizing when and how to query external knowledge sources. Yet, these RL-driven DeepSearch systems often reveal a see-saw trade-off between accuracy and efficiency: frequent tool invocations can improve factual correctness but lead to unnecessary computational overhead and diminished efficiency. 
To address this challenge, we propose LightSearcher, an efficient RL framework that incorporates textual experiential memory by learning contrastive reasoning trajectories to generate interpretable summaries of successful reasoning patterns. In addition, it employs an adaptive reward shaping mechanism that penalizes redundant tool calls only in correct-answer scenarios. This design effectively balances the inherent accuracy-efficiency trade-off in DeepSearch paradigms. Experiments on four multi-hop QA benchmarks show that LightSearcher maintains accuracy comparable to SOTA baseline ReSearch, while reducing search tool invocations by 39.6%, inference time by 48.6%, and token consumption by 21.2%, demonstrating its superior efficiency. + oai:arXiv.org:2512.06653v3 + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junkai Lin, Hang Long, Huipeng Guo, Jielei Zhang, JiaYi Yang, Tianle Guo, Yang Yang, Jianwen Li, Wenxiao Zhang, Matthias Nie{\ss}ner, Wei Yang + Hengzhi Lan, Yue Yu, Li Qian, Li Peng, Jie Wu, Wei Liu, Jian Luan, Ting Bai - From Orbit to Ground: Generative City Photogrammetry from Extreme Off-Nadir Satellite Images - https://arxiv.org/abs/2512.07527 - arXiv:2512.07527v2 Announce Type: replace -Abstract: City-scale 3D reconstruction from satellite imagery presents the challenge of extreme viewpoint extrapolation, where our goal is to synthesize ground-level novel views from sparse orbital images with minimal parallax. This requires inferring nearly $90^\circ$ viewpoint gaps from image sources with severely foreshortened facades and flawed textures, causing state-of-the-art reconstruction engines such as NeRF and 3DGS to fail. To address this problem, we propose two design choices tailored for city structures and satellite inputs. First, we model city geometry as a 2.5D height map, implemented as a Z-monotonic signed distance field (SDF) that matches urban building layouts from top-down viewpoints. 
This stabilizes geometry optimization under sparse, off-nadir satellite views and yields a watertight mesh with crisp roofs and clean, vertically extruded facades. Second, we paint the mesh appearance from satellite images via differentiable rendering techniques. While the satellite inputs may contain long-range, blurry captures, we further train a generative texture restoration network to enhance the appearance, recovering high-frequency, plausible texture details from degraded inputs. Our method's scalability and robustness are demonstrated through extensive experiments on large-scale urban reconstruction. For example, in our teaser figure, we reconstruct a $4\,\mathrm{km}^2$ real-world region from only a few satellite images, achieving state-of-the-art performance in synthesizing photorealistic ground views. The resulting models are not only visually compelling but also serve as high-fidelity, application-ready assets for downstream tasks like urban planning and simulation. Project page can be found at https://pku-vcl-geometry.github.io/Orbit2Ground/. - oai:arXiv.org:2512.07527v2 - cs.CV - cs.GR - Wed, 10 Dec 2025 00:00:00 -0500 + Partial Inverse Design of High-Performance Concrete Using Cooperative Neural Networks for Constraint-Aware Mix Generation + https://arxiv.org/abs/2512.06813 + arXiv:2512.06813v2 Announce Type: replace +Abstract: High-performance concrete requires complex mix design decisions involving interdependent variables and practical constraints. While data-driven methods have improved predictive modeling for forward design in concrete engineering, inverse design remains limited, especially when some variables are fixed and only the remaining ones must be inferred. This study proposes a cooperative neural network framework for the partial inverse design of high-performance concrete. The framework integrates an imputation model with a surrogate strength predictor and learns through cooperative training. 
Once trained, it generates valid and performance-consistent mix designs in a single forward pass without retraining for different constraint scenarios. Compared with baseline models, including autoencoder models and Bayesian inference with Gaussian process surrogates, the proposed method achieves R-squared values of 0.87 to 0.92 and substantially reduces mean squared error by approximately 50% and 70%, respectively. The results show that the framework provides an accurate and computationally efficient foundation for constraint-aware, data-driven mix proportioning. + oai:arXiv.org:2512.06813v2 + cs.LG + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fei Yu, Yu Liu, Luyang Tang, Mingchao Sun, Zengye Ge, Rui Bu, Yuchao Jin, Haisen Zhao, He Sun, Yangyan Li, Mu Xu, Wenzheng Chen, Baoquan Chen + Agung Nugraha, Heungjun Im, Jihwan Lee - FRWKV:Frequency-Domain Linear Attention for Long-Term Time Series Forecasting - https://arxiv.org/abs/2512.07539 - arXiv:2512.07539v2 Announce Type: replace -Abstract: Traditional Transformers face a major bottleneck in long-sequence time series forecasting due to their quadratic complexity $(\mathcal{O}(T^2))$ and their limited ability to effectively exploit frequency-domain information. Inspired by RWKV's $\mathcal{O}(T)$ linear attention and frequency-domain modeling, we propose FRWKV, a frequency-domain linear-attention framework that overcomes these limitations. Our model integrates linear attention mechanisms with frequency-domain analysis, achieving $\mathcal{O}(T)$ computational complexity in the attention path while exploiting spectral information to enhance temporal feature representations for scalable long-sequence modeling. Across eight real-world datasets, FRWKV achieves a first-place average rank. Our ablation studies confirm the critical roles of both the linear attention and frequency-encoder components. 
This work demonstrates the powerful synergy between linear attention and frequency analysis, establishing a new paradigm for scalable time series modeling. Code is available at this repository: https://github.com/yangqingyuan-byte/FRWKV. - oai:arXiv.org:2512.07539v2 + Dual Refinement Cycle Learning: Unsupervised Text Classification of Mamba and Community Detection on Text Attributed Graph + https://arxiv.org/abs/2512.07100 + arXiv:2512.07100v2 Announce Type: replace +Abstract: Pretrained language models offer strong text understanding capabilities but remain difficult to deploy in real-world text-attributed networks due to their heavy dependence on labeled data. Meanwhile, community detection methods typically ignore textual semantics, limiting their usefulness in downstream applications such as content organization, recommendation, and risk monitoring. To overcome these limitations, we present Dual Refinement Cycle Learning (DRCL), a fully unsupervised framework designed for practical scenarios where no labels or category definitions are available. DRCL integrates structural and semantic information through a warm-start initialization and a bidirectional refinement cycle between a GCN-based Community Detection Module (GCN-CDM) and a Text Semantic Modeling Module (TSMM). The two modules iteratively exchange pseudo-labels, allowing semantic cues to enhance structural clustering and structural patterns to guide text representation learning without manual supervision. Across several text-attributed graph datasets, DRCL consistently improves the structural and semantic quality of discovered communities. Moreover, a Mamba-based classifier trained solely from DRCL's community signals achieves accuracy comparable to supervised models, demonstrating its potential for deployment in large-scale systems where labeled data are scarce or costly. The code is available at https://github.com/wuanghoong/DRCL.git. 
+ oai:arXiv.org:2512.07100v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Qingyuan Yang, Shizhuo Deng, Dongyue Chen, Da Teng, Zehua Gan + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Hong Wang, Yinglong Zhang, Hanhan Guo, Xuewen Xia, Xing Xu - Obstacle Avoidance of UAV in Dynamic Environments Using Direction and Velocity-Adaptive Artificial Potential Field - https://arxiv.org/abs/2512.07609 - arXiv:2512.07609v2 Announce Type: replace -Abstract: The conventional Artificial Potential Field (APF) is fundamentally limited by the local minima issue and its inability to account for the kinematics of moving obstacles. This paper addresses the critical challenge of autonomous collision avoidance for Unmanned Aerial Vehicles (UAVs) operating in dynamic and cluttered airspace by proposing a novel Direction and Relative Velocity Weighted Artificial Potential Field (APF). In this approach, a bounded weighting function, $\omega(\theta,v_{e})$, is introduced to dynamically scale the repulsive potential based on the direction and velocity of the obstacle relative to the UAV. This robust APF formulation is integrated within a Model Predictive Control (MPC) framework to generate collision-free trajectories while adhering to kinematic constraints. Simulation results demonstrate that the proposed method effectively resolves local minima and significantly enhances safety by enabling smooth, predictive avoidance maneuvers. The system ensures superior path integrity and reliable performance, confirming its viability for autonomous navigation in complex environments. 
- oai:arXiv.org:2512.07609v2 - eess.SY - cs.RO - cs.SY - Wed, 10 Dec 2025 00:00:00 -0500 + Local-Curvature-Aware Knowledge Graph Embedding: An Extended Ricci Flow Approach + https://arxiv.org/abs/2512.07332 + arXiv:2512.07332v2 Announce Type: replace +Abstract: Knowledge graph embedding (KGE) relies on the geometry of the embedding space to encode semantic and structural relations. Existing methods place all entities on one homogeneous manifold, Euclidean, spherical, hyperbolic, or their product/multi-curvature variants, to model linear, symmetric, or hierarchical patterns. Yet a predefined, homogeneous manifold cannot accommodate the sharply varying curvature that real-world graphs exhibit across local regions. Since this geometry is imposed a priori, any mismatch with the knowledge graph's local curvatures will distort distances between entities and hurt the expressiveness of the resulting KGE. To rectify this, we propose RicciKGE to have the KGE loss gradient coupled with local curvatures in an extended Ricci flow such that entity embeddings co-evolve dynamically with the underlying manifold geometry towards mutual adaptation. Theoretically, when the coupling coefficient is bounded and properly selected, we rigorously prove that i) all the edge-wise curvatures decay exponentially, meaning that the manifold is driven toward the Euclidean flatness; and ii) the KGE distances strictly converge to a global optimum, which indicates that geometric flattening and embedding optimization are promoting each other. Experimental improvements on link prediction and node classification benchmarks demonstrate RicciKGE's effectiveness in adapting to heterogeneous knowledge graph structures. 
+ oai:arXiv.org:2512.07332v2 + cs.LG + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Nikita Vaibhav Pavle, Shrreya Rajneesh, Rakesh Kumar Sahoo, Manoranjan Sinha + Zhengquan Luo, Guy Tadmor, Or Amar, David Zeevi, Zhiqiang Xu - ViSA: 3D-Aware Video Shading for Real-Time Upper-Body Avatar Creation - https://arxiv.org/abs/2512.07720 - arXiv:2512.07720v2 Announce Type: replace -Abstract: Generating high-fidelity upper-body 3D avatars from one-shot input image remains a significant challenge. Current 3D avatar generation methods, which rely on large reconstruction models, are fast and capable of producing stable body structures, but they often suffer from artifacts such as blurry textures and stiff, unnatural motion. In contrast, generative video models show promising performance by synthesizing photorealistic and dynamic results, but they frequently struggle with unstable behavior, including body structural errors and identity drift. To address these limitations, we propose a novel approach that combines the strengths of both paradigms. Our framework employs a 3D reconstruction model to provide robust structural and appearance priors, which in turn guides a real-time autoregressive video diffusion model for rendering. This process enables the model to synthesize high-frequency, photorealistic details and fluid dynamics in real time, effectively reducing texture blur and motion stiffness while preventing the structural inconsistencies common in video generation methods. By uniting the geometric stability of 3D reconstruction with the generative capabilities of video models, our method produces high-fidelity digital avatars with realistic appearance and dynamic, temporally coherent motion. 
Experiments demonstrate that our approach significantly reduces artifacts and achieves substantial improvements in visual quality over leading methods, providing a robust and efficient solution for real-time applications such as gaming and virtual reality. Project page: https://lhyfst.github.io/visa - oai:arXiv.org:2512.07720v2 cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Fan Yang, Heyuan Li, Peihao Li, Weihao Yuan, Lingteng Qiu, Chaoyue Song, Cheng Chen, Yisheng He, Shifeng Zhang, Xiaoguang Han, Steven Hoi, Guosheng Lin + Exploring possible vector systems for faster training of neural networks with preconfigured latent spaces + https://arxiv.org/abs/2512.07509 + arXiv:2512.07509v2 Announce Type: replace +Abstract: The overall neural network (NN) performance is closely related to the properties of its embedding distribution in latent space (LS). It has recently been shown that predefined vector systems, specifically An root system vectors, can be used as targets for latent space configurations (LSC) to ensure the desired LS structure. One of the main LSC advantages is the possibility of training classifier NNs without classification layers, which facilitates training NNs on datasets with extremely large numbers of classes. This paper provides a more general overview of possible vector systems for NN training along with their properties and methods for vector system construction. These systems are used to configure LS of encoders and visual transformers to significantly speed up ImageNet-1K and 50k-600k classes LSC training. It is also shown that using the minimum number of LS dimensions for a specific number of classes results in faster convergence. The latter has potential advantages for reducing the size of vector databases used to store NN embeddings. 
+ oai:arXiv.org:2512.07509v2 + cs.LG + cs.AI cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Fan Yang, Heyuan Li, Peihao Li, Weihao Yuan, Lingteng Qiu, Chaoyue Song, Cheng Chen, Yisheng He, Shifeng Zhang, Xiaoguang Han, Steven Hoi, Guosheng Lin + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nikita Gabdullin - On computing the (exact) Fr\'echet distance with a frog - https://arxiv.org/abs/2512.07728 - arXiv:2512.07728v2 Announce Type: replace -Abstract: The continuous Frechet distance between two polygonal curves is classically computed by exploring their free space diagram. Recently, Har-Peled, Raichel, and Robson [SoCG'25] proposed a radically different approach: instead of directly traversing the continuous free space, they approximate the distance by computing paths in a discrete graph derived from the discrete free space, recursively bisecting edges until the discrete distance converges to the continuous Frechet distance. They implement this so-called frog-based technique and report substantial practical speedups over the state of the art. - We revisit the frog-based approach and address three of its limitations. First, the method does not compute the Frechet distance exactly. Second, the recursive bisection procedure only introduces the monotonicity events required to realise the Frechet distance asymptotically, that is, only in the limit. Third, the applied simplification technique is heuristic. Motivated by theoretical considerations, we develop new techniques that guarantee exactness, polynomial-time convergence, and near-optimal lossless simplifications. We provide an open-source C++ implementation of our variant. - Our primary contribution is an extensive empirical evaluation. As expected, exact computation introduces overhead and increases the median running time. 
Yet, our method is often faster in the worst case, the slowest ten percent of instances, or even on average due to its convergence guarantees. More surprisingly, in our experiments, the implementation of Bringmann, Kuennemann, and Nusser [SoCG'19] consistently outperforms all frog-based approaches in practice. This appears to contrast published claims of the efficiency of the frog-based techniques. These results thereby provide nuanced perspective on frogs: highlighting both the theoretical appeal, but also the practical limitations. - oai:arXiv.org:2512.07728v2 - cs.CG - Wed, 10 Dec 2025 00:00:00 -0500 + More than Segmentation: Benchmarking SAM 3 for Segmentation, 3D Perception, and Reconstruction in Robotic Surgery + https://arxiv.org/abs/2512.07596 + arXiv:2512.07596v2 Announce Type: replace +Abstract: The recent SAM 3 and SAM 3D have introduced significant advancements over the predecessor, SAM 2, particularly with the integration of language-based segmentation and enhanced 3D perception capabilities. SAM 3 supports zero-shot segmentation across a wide range of prompts, including point, bounding box, and language-based prompts, allowing for more flexible and intuitive interactions with the model. In this empirical evaluation, we assess the performance of SAM 3 in robot-assisted surgery, benchmarking its zero-shot segmentation with point and bounding box prompts and exploring its effectiveness in dynamic video tracking, alongside its newly introduced language prompt segmentation. While language prompts show potential, their performance in the surgical domain is currently suboptimal, highlighting the need for further domain-specific training. Additionally, we investigate SAM 3D's depth reconstruction abilities, demonstrating its capacity to process surgical scene data and reconstruct 3D anatomical structures from 2D images. 
Through comprehensive testing on the MICCAI EndoVis 2017 and EndoVis 2018 benchmarks, SAM 3 shows clear improvements over SAM and SAM 2 in both image and video segmentation under spatial prompts, while the zero-shot evaluations of SAM 3D on SCARED, StereoMIS, and EndoNeRF indicate strong monocular depth estimation and realistic 3D instrument reconstruction, yet also reveal remaining limitations in complex, highly dynamic surgical scenes. + oai:arXiv.org:2512.07596v2 + cs.CV + cs.RO + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Wenzhen Dong, Jieming Yu, Yiming Huang, Hongqiu Wang, Lei Zhu, Albert C. S. Chung, Hongliang Ren, Long Bai + + + Enabling Delayed-Full Charging Through Transformer-Based Real-Time-to-Departure Modeling for EV Battery Longevity + https://arxiv.org/abs/2512.07723 + arXiv:2512.07723v2 Announce Type: replace +Abstract: Electric vehicles (EVs) are key to sustainable mobility, yet their lithium-ion batteries (LIBs) degrade more rapidly under prolonged high states of charge (SOC). This can be mitigated by delaying full charging until just before departure, which requires accurate prediction of user departure times. In this work, we propose a Transformer-based real-time-to-event (TTE) model for accurate EV departure prediction. Our approach represents each day as a TTE sequence by discretizing time into grid-based tokens. Unlike previous methods primarily dependent on temporal dependency from historical patterns, our method leverages streaming contextual information to predict departures. Evaluation on a real-world study involving 93 users and passive smartphone data demonstrates that our method effectively captures irregular departure patterns within individual routines, outperforming baseline models. These results highlight the potential for practical deployment of the proposed algorithm and its contribution to sustainable transportation systems. 
+ oai:arXiv.org:2512.07723v2 + cs.LG + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Jacobus Conradi, Ivor van der Hoog, Eva Rotenberg + Yonggeon Lee, Jibin Hwang, Alfred Malengo Kondoro, Juhyun Song, Youngtae Noh Collaborative Causal Sensemaking: Closing the Complementarity Gap in Human-AI Decision Support https://arxiv.org/abs/2512.07801 - arXiv:2512.07801v2 Announce Type: replace -Abstract: LLM-based agents are increasingly deployed for expert decision support, yet human-AI teams in high-stakes settings do not yet reliably outperform the best individual. We argue this complementarity gap reflects a fundamental mismatch: current agents are trained as answer engines, not as partners in the collaborative sensemaking through which experts actually make decisions. Sensemaking (the ability to co-construct causal explanations, surface uncertainties, and adapt goals) is the key capability that current training pipelines do not explicitly develop or evaluate. We propose Collaborative Causal Sensemaking (CCS) as a research agenda to develop this capability from the ground up, spanning new training environments that reward collaborative thinking, representations for shared human-AI mental models, and evaluation centred on trust and complementarity. These directions can advance MAS research toward agents that think with their human partners rather than for them. - oai:arXiv.org:2512.07801v2 + arXiv:2512.07801v3 Announce Type: replace +Abstract: LLM-based agents are increasingly deployed for expert decision support, yet human-AI teams in high-stakes settings do not yet reliably outperform the best individual. We argue this complementarity gap reflects a fundamental mismatch: current agents are trained as answer engines, not as partners in the collaborative sensemaking through which experts actually make decisions. 
Sensemaking (the ability to co-construct causal explanations, surface uncertainties, and adapt goals) is the key capability that current training pipelines do not explicitly develop or evaluate. We propose Collaborative Causal Sensemaking (CCS) as a research agenda to develop this capability from the ground up, spanning new training environments that reward collaborative thinking, representations for shared human-AI mental models, and evaluation centred on trust and complementarity. Taken together, these directions shift MAS research from building oracle-like answer engines to cultivating AI teammates that co-reason with their human partners over the causal structure of shared decisions, advancing the design of effective human-AI teams. + oai:arXiv.org:2512.07801v3 cs.CL cs.AI cs.HC cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ Raunak Jain, Mudita Khurana - Understanding Privacy Risks in Code Models Through Training Dynamics: A Causal Approach - https://arxiv.org/abs/2512.07814 - arXiv:2512.07814v2 Announce Type: replace -Abstract: Large language models for code (LLM4Code) have greatly improved developer productivity but also raise privacy concerns due to their reliance on open-source repositories containing abundant personally identifiable information (PII). Prior work shows that commercial models can reproduce sensitive PII, yet existing studies largely treat PII as a single category and overlook the heterogeneous risks among different types. We investigate whether distinct PII types vary in their likelihood of being learned and leaked by LLM4Code, and whether this relationship is causal. Our methodology includes building a dataset with diverse PII types, fine-tuning representative models of different scales, computing training dynamics on real PII data, and formulating a structural causal model to estimate the causal effect of learnability on leakage. 
Results show that leakage risks differ substantially across PII types and correlate with their training dynamics: easy-to-learn instances such as IP addresses exhibit higher leakage, while harder types such as keys and passwords leak less frequently. Ambiguous types show mixed behaviors. This work provides the first causal evidence that leakage risks are type-dependent and offers guidance for developing type-aware and learnability-aware defenses for LLM4Code. - oai:arXiv.org:2512.07814v2 - cs.SE - cs.AI - cs.CR - Wed, 10 Dec 2025 00:00:00 -0500 + The Adoption and Usage of AI Agents: Early Evidence from Perplexity + https://arxiv.org/abs/2512.07828 + arXiv:2512.07828v2 Announce Type: replace +Abstract: This paper presents the first large-scale field study of the adoption, usage intensity, and use cases of general-purpose AI agents operating in open-world web environments. Our analysis centers on Comet, an AI-powered browser developed by Perplexity, and its integrated agent, Comet Assistant. Drawing on hundreds of millions of anonymized user interactions, we address three fundamental questions: Who is using AI agents? How intensively are they using them? And what are they using them for? Our findings reveal substantial heterogeneity in adoption and usage across user segments. Earlier adopters, users in countries with higher GDP per capita and educational attainment, and individuals working in digital or knowledge-intensive sectors -- such as digital technology, academia, finance, marketing, and entrepreneurship -- are more likely to adopt or actively use the agent. To systematically characterize the substance of agent usage, we introduce a hierarchical agentic taxonomy that organizes use cases across three levels: topic, subtopic, and task. The two largest topics, Productivity & Workflow and Learning & Research, account for 57% of all agentic queries, while the two largest subtopics, Courses and Shopping for Goods, make up 22%. 
The top 10 out of 90 tasks represent 55% of queries. Personal use constitutes 55% of queries, while professional and educational contexts comprise 30% and 16%, respectively. In the short term, use cases exhibit strong stickiness, but over time users tend to shift toward more cognitively oriented topics. The diffusion of increasingly capable AI agents carries important implications for researchers, businesses, policymakers, and educators, inviting new lines of inquiry into this rapidly emerging class of AI capabilities. + oai:arXiv.org:2512.07828v2 + cs.LG + econ.GN + q-fin.EC + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-nd/4.0/ - Hua Yang, Alejandro Velasco, Sen Fang, Bowen Xu, Denys Poshyvanyk + Jeremy Yang, Noah Yonack, Kate Zyskowski, Denis Yarats, Johnny Ho, Jerry Ma - Astral Space: Convex Analysis at Infinity - https://arxiv.org/abs/2205.03260 - arXiv:2205.03260v4 Announce Type: replace-cross -Abstract: Not all convex functions on $\mathbb{R}^n$ have finite minimizers; some can only be minimized by a sequence as it heads to infinity. In this work, we aim to develop a theory for understanding such minimizers at infinity. We study astral space, a compact extension of $\mathbb{R}^n$ to which such points at infinity have been added. Astral space is constructed to be as small as possible while still ensuring that all linear functions can be continuously extended to the new space. Although astral space includes all of $\mathbb{R}^n$, it is not a vector space, nor even a metric space. However, it is sufficiently well-structured to allow useful and meaningful extensions of concepts of convexity, conjugacy, and subdifferentials. We develop these concepts and analyze various properties of convex functions on astral space, including the detailed structure of their minimizers, exact characterizations of continuity, and convergence of descent algorithms. 
- oai:arXiv.org:2205.03260v4 - math.OC + Advancing physiological time series reconstruction and imputation via mixture of receptive fields and experts fusion + https://arxiv.org/abs/2512.07873 + arXiv:2512.07873v2 Announce Type: replace +Abstract: Recent studies show that using diffusion models for time series signal reconstruction holds great promise. However, such approaches remain largely unexplored in the domain of medical time series. The unique characteristics of the physiological time series signals, such as multivariate, high temporal variability, highly noisy, and artifact-prone, make deep learning-based approaches still challenging for tasks such as imputation. Hence, we propose a novel Mixture of Experts (MoE)-based noise estimator within a score-based diffusion framework. Specifically, the Receptive Field Adaptive MoE (RFAMoE) module is designed to enable each channel to adaptively select desired receptive fields throughout the diffusion process. Moreover, recent literature has found that when generating a physiological signal, performing multiple inferences and averaging the reconstructed signals can effectively reduce reconstruction errors, but at the cost of significant computational and latency overhead. We design a Fusion MoE module and innovatively leverage the nature of MoE module to generate K noise signals in parallel, fuse them using a routing mechanism, and complete signal reconstruction in a single inference step. This design not only improves performance over previous methods but also eliminates the substantial computational cost and latency associated with multiple inference processes. Extensive results demonstrate that our proposed framework consistently outperforms diffusion-based SOTA works on different tasks and datasets. 
+ oai:arXiv.org:2512.07873v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Miroslav Dud\'ik, Robert E. Schapire, Matus Telgarsky + Ci Zhang, Huayu Li, Changdi Yang, Jiangnan Xia, Yanzhi Wang, Xiaolong Ma, Jin Lu, Geng Yuan - BeeTLe: An Imbalance-Aware Deep Sequence Model for Linear B-Cell Epitope Prediction and Classification with Logit-Adjusted Losses - https://arxiv.org/abs/2309.02071 - arXiv:2309.02071v2 Announce Type: replace-cross -Abstract: The process of identifying and characterizing B-cell epitopes, which are the portions of antigens recognized by antibodies, is important for our understanding of the immune system, and for many applications including vaccine development, therapeutics, and diagnostics. Computational epitope prediction is challenging yet rewarding as it significantly reduces the time and cost of laboratory work. Most of the existing tools do not have satisfactory performance and only discriminate epitopes from non-epitopes. This paper presents a new deep learning-based multi-task framework for linear B-cell epitope prediction as well as antibody type-specific epitope classification. Specifically, a sequenced-based neural network model using recurrent layers and Transformer blocks is developed. We propose an amino acid encoding method based on eigen decomposition to help the model learn the representations of epitopes. We introduce modifications to standard cross-entropy loss functions by extending a logit adjustment technique to cope with the class imbalance. Experimental results on data curated from the largest public epitope database demonstrate the validity of the proposed methods and the superior performance compared to competing ones. 
- oai:arXiv.org:2309.02071v2 - q-bio.QM + Artificial Intelligence-Driven Network-on-Chip Design Space Exploration: Neural Network Architectures for Design + https://arxiv.org/abs/2512.07877 + arXiv:2512.07877v2 Announce Type: replace +Abstract: Network-on-Chip (NoC) design requires exploring a high-dimensional configuration space to satisfy stringent throughput requirements and latency constraints. Traditional design space exploration techniques are often slow and struggle to handle complex, non-linear parameter interactions. This work presents a machine learning-driven framework that automates NoC design space exploration using BookSim simulations and reverse neural network models. Specifically, we compare three architectures - a Multi-Layer Perceptron (MLP),a Conditional Diffusion Model, and a Conditional Variational Autoencoder (CVAE) to predict optimal NoC parameters given target performance metrics. Our pipeline generates over 150,000 simulation data points across varied mesh topologies. The Conditional Diffusion Model achieved the highest predictive accuracy, attaining a mean squared error (MSE) of 0.463 on unseen data. Furthermore, the proposed framework reduces design exploration time by several orders of magnitude, making it a practical solution for rapid and scalable NoC co-design. + oai:arXiv.org:2512.07877v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1007/978-3-031-43427-3_29 - Xiao Yuan + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Amogh Anshu N, Harish BP - A kernel-based method for Schr\"odinger bridges - https://arxiv.org/abs/2310.14522 - arXiv:2310.14522v5 Announce Type: replace-cross -Abstract: We characterize the Schr\"odinger bridge problems by a family of Mckean-Vlasov stochastic control problems with no terminal time distribution constraint. 
In doing so, we use the theory of Hilbert space embeddings of probability measures and then describe the constraint as penalty terms defined by the maximum mean discrepancy in the control problems. A sequence of the probability laws of the state processes resulting from $\epsilon$-optimal controls converges to a unique solution of the Schr\"odinger's problem under mild conditions on given initial and terminal time distributions and an underlying diffusion process. We propose a neural SDE based deep learning algorithm for the Mckean-Vlasov stochastic control problems. Several numerical experiments validate our methods. - oai:arXiv.org:2310.14522v5 - math.OC - cs.NA - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross + Investigating the originality of scientific papers across time and domain: A quantitative analysis + https://arxiv.org/abs/2512.07892 + arXiv:2512.07892v2 Announce Type: replace +Abstract: The study of creativity in science has long sought quantitative metrics capable of capturing the originality of the scientific insights contained within articles and other scientific works. In recent years, the field has witnessed a substantial expansion of research activity, enabled by advances in natural language processing and network analysis, and has utilised both macro- and micro-scale approaches with success. However, they often do not examine the text itself for evidence of originality. In this paper, we apply a computational measure correlating with originality from creativity science, Divergent Semantic Integration (DSI), to a set of 51,200 scientific abstracts and titles sourced from the Web of Science. To adapt DSI for application to scientific texts, we advance the original BERT method by incorporating SciBERT (a model trained on scientific corpora) into the computation of DSI. 
In our study, we observe that DSI plays a more pronounced role in the accrual of early citations for papers with fewer authors, varies substantially across subjects and research fields, and exhibits a declining correlation with citation counts over time. Furthermore, by modelling SciBERT- and BERT-DSI as predictors of the logarithm of 5-year citation counts alongside field, publication year, and the logarithm of author count, we find statistically significant relationships, with adjusted R-squared of 0.103 and 0.101 for BERT-DSI and SciBERT-DSI. Because existing scientometric measures rarely assess the originality expressed in textual content, DSI provides a valuable means of directly quantifying the conceptual originality embedded in scientific writing. + oai:arXiv.org:2512.07892v2 + cs.DL + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yumiharu Nakano + Jack H. Culbert, Yoed N. Kenett, Philipp Mayr - Deep generative modelling of canonical ensemble with differentiable thermal properties - https://arxiv.org/abs/2404.18404 - arXiv:2404.18404v2 Announce Type: replace-cross -Abstract: It is a long-standing challenge to accurately and efficiently compute thermodynamic quantities of many-body systems at thermal equilibrium. The conventional methods, e.g., Markov chain Monte Carlo, require many steps to equilibrate. The recently developed deep learning methods can perform direct sampling, but only work at a single trained temperature point and risk biased sampling. Here, we propose a variational method for canonical ensembles with differentiable temperature, which gives thermodynamic quantities as continuous functions of temperature akin to an analytical solution. The proposed method is a general framework that works with any tractable density generative model. At optimal, the model is theoretically guaranteed to be the unbiased Boltzmann distribution. 
We validated our method by calculating phase transitions in the Ising and XY models, demonstrating that our direct-sampling simulations are as accurate as Markov chain Monte Carlo, but more efficient. Moreover, our differentiable free energy aligns closely with the exact one to the second-order derivative, indicating that the variational model captures the subtle thermal transitions at the phase transitions. This functional dependence on external parameters is a fundamental advancement in combining the exceptional fitting ability of deep learning with rigorous physical analysis. - oai:arXiv.org:2404.18404v2 - cond-mat.stat-mech + HOLE: Homological Observation of Latent Embeddings for Neural Network Interpretability + https://arxiv.org/abs/2512.07988 + arXiv:2512.07988v2 Announce Type: replace +Abstract: Deep learning models have achieved remarkable success across various domains, yet their learned representations and decision-making processes remain largely opaque and hard to interpret. This work introduces HOLE (Homological Observation of Latent Embeddings), a method for analyzing and interpreting deep neural networks through persistent homology. HOLE extracts topological features from neural activations and presents them using a suite of visualization techniques, including Sankey diagrams, heatmaps, dendrograms, and blob graphs. These tools facilitate the examination of representation structure and quality across layers. We evaluate HOLE on standard datasets using a range of discriminative models, focusing on representation quality, interpretability across layers, and robustness to input perturbations and model compression. The results indicate that topological analysis reveals patterns associated with class separation, feature disentanglement, and model robustness, providing a complementary perspective for understanding and improving deep learning systems. 
+ oai:arXiv.org:2512.07988v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross + cs.GR + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1103/8wx7-kyx8 - Phys. Rev. Lett. 135, 027301 (2025) - Shuo-Hui Li, Yao-Wen Zhang, Ding Pan - - - Explosive neural networks via higher-order interactions in curved statistical manifolds - https://arxiv.org/abs/2408.02326 - arXiv:2408.02326v3 Announce Type: replace-cross -Abstract: Higher-order interactions underlie complex phenomena in systems such as biological and artificial neural networks, but their study is challenging due to the scarcity of tractable models. By leveraging a generalisation of the maximum entropy principle, we introduce curved neural networks as a class of models with a limited number of parameters that are particularly well-suited for studying higher-order phenomena. Through exact mean-field descriptions, we show that these curved neural networks implement a self-regulating annealing process that can accelerate memory retrieval, leading to explosive order-disorder phase transitions with multi-stability and hysteresis effects. Moreover, by analytically exploring their memory-retrieval capacity using the replica trick, we demonstrate that these networks can enhance memory capacity and robustness of retrieval over classical associative-memory networks. Overall, the proposed framework provides parsimonious models amenable to analytical study, revealing higher-order phenomena in complex networks. - oai:arXiv.org:2408.02326v3 - cond-mat.dis-nn - cond-mat.stat-mech - cs.IT - math.IT - nlin.AO - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - 10.1038/s41467-025-61475-w - Aguilera, M., Morales, P.A., Rosas, F.E. et al. Explosive neural networks via higher-order interactions in curved statistical manifolds. Nature Communications 16, 6511 (2025) - Miguel Aguilera, Pablo A. Morales, Fernando E. 
Rosas, Hideaki Shimazaki + Sudhanva Manjunath Athreya, Paul Rosen - Survey of Data-driven Newsvendor: Unified Analysis and Spectrum of Achievable Regrets - https://arxiv.org/abs/2409.03505 - arXiv:2409.03505v4 Announce Type: replace-cross -Abstract: In the Newsvendor problem, the goal is to guess the number that will be drawn from some distribution, with asymmetric consequences for guessing too high vs. too low. In the data-driven version, the distribution is unknown, and one must work with samples from the distribution. Data-driven Newsvendor has been studied under many variants: additive vs. multiplicative regret, high probability vs. expectation bounds, and different distribution classes. This paper studies all combinations of these variants, filling in many gaps in the literature and simplifying many proofs. In particular, we provide a unified analysis based on the notion of clustered distributions, which in conjunction with our new lower bounds, shows that the entire spectrum of regrets between $1/\sqrt{n}$ and $1/n$ can be possible. Simulations on commonly-used distributions demonstrate that our notion is the "correct" predictor of empirical regret across varying data sizes. - oai:arXiv.org:2409.03505v4 - stat.ML + PolyLingua: Margin-based Inter-class Transformer for Robust Cross-domain Language Detection + https://arxiv.org/abs/2512.08143 + arXiv:2512.08143v2 Announce Type: replace +Abstract: Language identification is a crucial first step in multilingual systems such as chatbots and virtual assistants, enabling linguistically and culturally accurate user experiences. Errors at this stage can cascade into downstream failures, setting a high bar for accuracy. Yet, existing language identification tools struggle with key cases -- such as music requests where the song title and user language differ. 
Open-source tools like LangDetect, FastText are fast but less accurate, while large language models, though effective, are often too costly for low-latency or low-resource settings. We introduce PolyLingua, a lightweight Transformer-based model for in-domain language detection and fine-grained language classification. It employs a two-level contrastive learning framework combining instance-level separation and class-level alignment with adaptive margins, yielding compact and well-separated embeddings even for closely related languages. Evaluated on two challenging datasets -- Amazon Massive (multilingual digital assistant utterances) and a Song dataset (music requests with frequent code-switching) -- PolyLingua achieves 99.25% F1 and 98.15% F1, respectively, surpassing Sonnet 3.5 while using 10x fewer parameters, making it ideal for compute- and latency-constrained environments. + oai:arXiv.org:2512.08143v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://creativecommons.org/licenses/by/4.0/ - Zhuoxin Chen, Will Ma + Ali Lotfi Rezaabad, Bikram Khanal, Shashwat Chaurasia, Lu Zeng, Dezhi Hong, Hossein Bashashati, Thomas Butler, Megan Ganji - Asymptotic stability equals exponential stability -- while you twist your eyes - https://arxiv.org/abs/2411.03277 - arXiv:2411.03277v3 Announce Type: replace-cross -Abstract: Suppose that two vector fields on a smooth manifold render some equilibrium point globally asymptotically stable (GAS). We show that there exists a homotopy between the corresponding semiflows such that this point remains GAS along this homotopy. 
- oai:arXiv.org:2411.03277v3 - math.DS - cs.SY - eess.SY - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross + OpenSubject: Leveraging Video-Derived Identity and Diversity Priors for Subject-driven Image Generation and Manipulation + https://arxiv.org/abs/2512.08294 + arXiv:2512.08294v2 Announce Type: replace +Abstract: Despite the promising progress in subject-driven image generation, current models often deviate from the reference identities and struggle in complex scenes with multiple subjects. To address this challenge, we introduce OpenSubject, a video-derived large-scale corpus with 2.5M samples and 4.35M images for subject-driven generation and manipulation. The dataset is built with a four-stage pipeline that exploits cross-frame identity priors. (i) Video Curation. We apply resolution and aesthetic filtering to obtain high-quality clips. (ii) Cross-Frame Subject Mining and Pairing. We utilize vision-language model (VLM)-based category consensus, local grounding, and diversity-aware pairing to select image pairs. (iii) Identity-Preserving Reference Image Synthesis. We introduce segmentation map-guided outpainting to synthesize the input images for subject-driven generation and box-guided inpainting to generate input images for subject-driven manipulation, together with geometry-aware augmentations and irregular boundary erosion. (iv) Verification and Captioning. We utilize a VLM to validate synthesized samples, re-synthesize failed samples based on stage (iii), and then construct short and long captions. In addition, we introduce a benchmark covering subject-driven generation and manipulation, and then evaluate identity fidelity, prompt adherence, manipulation consistency, and background consistency with a VLM judge. Extensive experiments show that training with OpenSubject improves generation and manipulation performance, particularly in complex scenes. 
+ oai:arXiv.org:2512.08294v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://creativecommons.org/licenses/by/4.0/ - Wouter Jongeneel + Yexin Liu, Manyuan Zhang, Yueze Wang, Hongyu Li, Dian Zheng, Weiming Zhang, Changsheng Lu, Xunliang Cai, Yan Feng, Peng Pei, Harry Yang - Enhancing Brain Age Estimation with a Multimodal 3D CNN Approach Combining Structural MRI and AI-Synthesized Cerebral Blood Volume Measures - https://arxiv.org/abs/2412.01865 - arXiv:2412.01865v4 Announce Type: replace-cross -Abstract: Brain age gap estimation (BrainAGE) is a promising imaging-derived biomarker of neurobiological aging and disease risk, yet current approaches rely predominantly on T1-weighted structural MRI (T1w), overlooking functional vascular changes that may precede tissue damage and cognitive decline. Artificial intelligence-generated cerebral blood volume (AICBV) maps, synthesized from non-contrast MRI, offer an alternative to contrast-enhanced perfusion imaging by capturing vascular information relevant to early neurodegeneration. We developed a multimodal BrainAGE framework that integrates brain age predictions using linear regression from two separate 3D VGG-based networks, one model trained on only structural T1w scans and one trained on only AICBV maps generated from a pre-trained 3D patch-based deep learning model. Each model was trained and validated on 2,851 scans from 13 open-source datasets and was evaluated for concordance with mild cognitive impairment (MCI) and Alzheimer's disease (AD) using ADNI subjects (n=1,233). The combined model achieved the most accurate brain age gap for cognitively normal (CN) controls, with a mean absolute error (MAE) of 3.95 years ($R^2$=0.943), outperforming models trained on T1w (MAE=4.10) or AICBV alone (MAE=4.49). 
Saliency maps revealed complementary modality contributions: T1w emphasized white matter and cortical atrophy, while AICBV highlighted vascular-rich and periventricular regions implicated in hypoperfusion and early cerebrovascular dysfunction, consistent with normal aging. Next, we observed that BrainAGE increased stepwise across diagnostic strata (CN < MCI < AD) and correlated with cognitive impairment (CDRSB r=0.403; MMSE r=-0.310). AICBV-based BrainAGE showed particularly strong separation between stable vs. progressive MCI (p=$1.47 \times 10^{-8}$), suggesting sensitivity to prodromal vascular changes that precede overt atrophy. - oai:arXiv.org:2412.01865v4 - eess.IV - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Jordan Jomsky (for the Frontotemporal Lobar Degeneration Neuroimaging Initiative,for the Alzheimer's Disease Neuroimaging Initiative), Kay C. Igwe (for the Frontotemporal Lobar Degeneration Neuroimaging Initiative,for the Alzheimer's Disease Neuroimaging Initiative), Zongyu Li (for the Frontotemporal Lobar Degeneration Neuroimaging Initiative,for the Alzheimer's Disease Neuroimaging Initiative), Yiren Zhang (for the Frontotemporal Lobar Degeneration Neuroimaging Initiative,for the Alzheimer's Disease Neuroimaging Initiative), Max Lashley (for the Frontotemporal Lobar Degeneration Neuroimaging Initiative,for the Alzheimer's Disease Neuroimaging Initiative), Tal Nuriel (for the Frontotemporal Lobar Degeneration Neuroimaging Initiative,for the Alzheimer's Disease Neuroimaging Initiative), Andrew Laine (for the Frontotemporal Lobar Degeneration Neuroimaging Initiative,for the Alzheimer's Disease Neuroimaging Initiative), Jia Guo (for the Frontotemporal Lobar Degeneration Neuroimaging Initiative,for the Alzheimer's Disease Neuroimaging Initiative) + GeoDM: Geometry-aware Distribution Matching for Dataset Distillation + https://arxiv.org/abs/2512.08317 + arXiv:2512.08317v2 Announce Type: 
replace +Abstract: Dataset distillation aims to synthesize a compact subset of the original data, enabling models trained on it to achieve performance comparable to those trained on the original large dataset. Existing distribution-matching methods are confined to Euclidean spaces, making them only capture linear structures and overlook the intrinsic geometry of real data, e.g., curvature. However, high-dimensional data often lie on low-dimensional manifolds, suggesting that dataset distillation should have the distilled data manifold aligned with the original data manifold. In this work, we propose a geometry-aware distribution-matching framework, called \textbf{GeoDM}, which operates in the Cartesian product of Euclidean, hyperbolic, and spherical manifolds, with flat, hierarchical, and cyclical structures all captured by a unified representation. To adapt to the underlying data geometry, we introduce learnable curvature and weight parameters for three kinds of geometries. At the same time, we design an optimal transport loss to enhance the distribution fidelity. Our theoretical analysis shows that the geometry-aware distribution matching in a product space yields a smaller generalization error bound than the Euclidean counterparts. Extensive experiments conducted on standard benchmarks demonstrate that our algorithm outperforms state-of-the-art data distillation methods and remains effective across various distribution-matching strategies for the single geometries. 
+ oai:arXiv.org:2512.08317v2 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xuhui Li, Zhengquan Luo, Zihui Cui, Zhiqiang Xu - RAG-IT: Retrieval-Augmented Instruction Tuning for Automated Financial Analysis - A Case Study for the Semiconductor Sector - https://arxiv.org/abs/2412.08179 - arXiv:2412.08179v3 Announce Type: replace-cross -Abstract: Financial analysis relies heavily on the interpretation of earnings reports to assess company performance and guide decision-making. Traditional methods for generating such analyzes require significant financial expertise and are often time-consuming. With the rapid advancement of Large Language Models (LLMs), domain-specific adaptations have emerged for financial tasks such as sentiment analysis and entity recognition. This paper introduces RAG-IT (Retrieval-Augmented Instruction Tuning), a novel framework designed to automate the generation of earnings report analysis through an LLM fine-tuned specifically for the financial domain. Our approach integrates retrieval augmentation with instruction-based fine-tuning to enhance factual accuracy, contextual relevance, and domain adaptability. We construct a sector-specific financial instruction dataset derived from semiconductor industry documents to guide the LLM adaptation to specialized financial reasoning. Using NVIDIA, AMD, and Broadcom as representative companies, our case study demonstrates that RAG-IT substantially improves a general-purpose open-source LLM and achieves performance comparable to commercial systems like GPT-3.5 on financial report generation tasks. This research highlights the potential of retrieval-augmented instruction tuning to streamline and elevate financial analysis automation, advancing the broader field of intelligent financial reporting. 
- oai:arXiv.org:2412.08179v3 - q-fin.ST - cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross + A Multivariate Bernoulli-Based Sampling Method for Multi-Label Data with Application to Meta-Research + https://arxiv.org/abs/2512.08371 + arXiv:2512.08371v2 Announce Type: replace +Abstract: Datasets may contain observations with multiple labels. If the labels are not mutually exclusive, and if the labels vary greatly in frequency, obtaining a sample that includes sufficient observations with scarcer labels to make inferences about those labels, and which deviates from the population frequencies in a known manner, creates challenges. In this paper, we consider a multivariate Bernoulli distribution as our underlying distribution of a multi-label problem. We present a novel sampling algorithm that takes label dependencies into account. It uses observed label frequencies to estimate multivariate Bernoulli distribution parameters and calculate weights for each label combination. This approach ensures the weighted sampling acquires target distribution characteristics while accounting for label dependencies. We applied this approach to a sample of research articles from Web of Science labeled with 64 biomedical topic categories. We aimed to preserve category frequency order, reduce frequency differences between most and least common categories, and account for category dependencies. This approach produced a more balanced sub-sample, enhancing the representation of minority categories. + oai:arXiv.org:2512.08371v2 + cs.LG + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://creativecommons.org/licenses/by-nc-nd/4.0/ - Hai-Thien To, Tien-Cuong Bui, Van-Duc Le + Simon Chung, Colby J. Vorland, Donna L. Maney, Andrew W. 
Brown - Rainbow Arborescence Conjecture - https://arxiv.org/abs/2412.15457 - arXiv:2412.15457v2 Announce Type: replace-cross -Abstract: The famous Ryser--Brualdi--Stein conjecture asserts that every $k \times k$ Latin square contains a partial transversal of size $k-1$. Since its appearance, the conjecture has attracted significant interest, leading to several proposed generalizations. One of the most notable of these, by Aharoni, Kotlar, and Ziv, conjectures that $k$ disjoint common bases of two matroids of rank $k$ have a common independent partial transversal of size $k-1$. Although simple counterexamples show that the size $k-1$ above cannot be improved to $k$ (i.e., a transversal instead of a partial transversal), it is remarkable that no such counterexample is known for the special case of spanning arborescences. This motivated the formulation of the Rainbow Arborescence Conjecture: any graph on $n$ vertices formed by the union of $n-1$ spanning arborescences contains an arborescence using exactly one arc from each. - We prove several partial results on this conjecture. We show that the computational problem of testing the existence of such an arborescence with a fixed root is NP-complete, verify the conjecture in several special cases, and study relaxations of the problem. In particular, we establish the validity of the conjecture when the underlying undirected graph is a cycle; this also yields a new result on systems of distinct representatives for intervals on a cycle. - oai:arXiv.org:2412.15457v2 - math.CO - cs.DM + Finding All Bounded-Length Simple Cycles in a Directed Graph -- Revisited + https://arxiv.org/abs/2512.08392 + arXiv:2512.08392v2 Announce Type: replace +Abstract: In 2021, Gupta and Suzumura proposed a novel algorithm for enumerating all bounded-length simple cycles in directed graphs. In this work, we present concrete examples demonstrating that the proposed algorithm fails to enumerate certain valid cycles. 
Via these examples, we perform a detailed analysis pinpointing the specific points at which the proofs exhibit logical gaps. Furthermore, we propose a corrected formulation that resolves these issues while preserving the desirable property that the algorithm's computational complexity remains $O((c + 1) \cdot k \cdot (n + e))$ where $c$ is the number of simple cycles of a specified maximum length $k$, and $n$ and $e$ the number of graph nodes and edges respectively. + oai:arXiv.org:2512.08392v2 cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Krist\'of B\'erczi, Tam\'as Kir\'aly, Yutaro Yamaguchi, Yu Yokoi + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Frank Bauern\"oppel, J\"org-R\"udiger Sack - AI-powered virtual tissues from spatial proteomics for clinical diagnostics and biomedical discovery - https://arxiv.org/abs/2501.06039 - arXiv:2501.06039v2 Announce Type: replace-cross -Abstract: Spatial proteomics technologies have transformed our understanding of complex tissue architecture in cancer but present unique challenges for computational analysis. Each study uses a different marker panel and protocol, and most methods are tailored to single cohorts, which limits knowledge transfer and robust biomarker discovery. Here we present Virtual Tissues (VirTues), a general-purpose foundation model for spatial proteomics that learns marker-aware, multi-scale representations of proteins, cells, niches and tissues directly from multiplex imaging data. From a single pretrained backbone, VirTues supports marker reconstruction, cell typing and niche annotation, spatial biomarker discovery, and patient stratification, including zero-shot annotation across heterogeneous panels and datasets. 
In triple-negative breast cancer, VirTues-derived biomarkers predict anti-PD-L1 chemo-immunotherapy response and stratify disease-free survival in an independent cohort, outperforming state-of-the-art biomarkers derived from the same datasets and current clinical stratification schemes. - oai:arXiv.org:2501.06039v2 - q-bio.QM - cs.AI - cs.CV + Solving Oversmoothing in GNNs via Nonlocal Message Passing: Algebraic Smoothing and Depth Scalability + https://arxiv.org/abs/2512.08475 + arXiv:2512.08475v2 Announce Type: replace +Abstract: The relationship between Layer Normalization (LN) placement and the oversmoothing phenomenon remains underexplored. We identify a critical dilemma: Pre-LN architectures avoid oversmoothing but suffer from the curse of depth, while Post-LN architectures bypass the curse of depth but experience oversmoothing. + To resolve this, we propose a new method based on Post-LN that induces algebraic smoothing, preventing oversmoothing without the curse of depth. Empirical results across five benchmarks demonstrate that our approach supports deeper networks (up to 256 layers) and improves performance, requiring no additional parameters. + Key contributions: + Theoretical Characterization: Analysis of LN dynamics and their impact on oversmoothing and the curse of depth. + A Principled Solution: A parameter-efficient method that induces algebraic smoothing and avoids oversmoothing and the curse of depth. + Empirical Validation: Extensive experiments showing the effectiveness of the method in deeper GNNs. + oai:arXiv.org:2512.08475v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Johann Wenckstern, Eeshaan Jain, Yexiang Cheng, Benedikt von Querfurth, Kiril Vasilev, Matteo Pariset, Phil F. 
Cheng, Petros Liakopoulos, Olivier Michielin, Andreas Wicki, Gabriele Gut, Charlotte Bunne + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Weiqi Guan, Junlin He - Reconstruction of frequency-localized functions from pointwise samples via least squares and deep learning - https://arxiv.org/abs/2502.09794 - arXiv:2502.09794v2 Announce Type: replace-cross -Abstract: Recovering frequency-localized functions from pointwise data is a fundamental task in signal processing. We examine this problem from an approximation-theoretic perspective, focusing on least squares and deep learning-based methods. First, we establish a novel recovery theorem for least squares approximations using the Slepian basis from uniform random samples in low dimensions, explicitly tracking the dependence of the bandwidth on the sampling complexity. Building on these results, we then present a recovery guarantee for approximating bandlimited functions via deep learning from pointwise data. This result, framed as a practical existence theorem, provides conditions on the network architecture, training procedure, and data acquisition sufficient for accurate approximation. To complement our theoretical findings, we perform numerical comparisons between least squares and deep learning for approximating one- and two-dimensional functions. We conclude with a discussion of the theoretical limitations and the practical gaps between theory and implementation. - oai:arXiv.org:2502.09794v2 - math.CA + Optimal Perturbation Budget Allocation for Data Poisoning in Offline Reinforcement Learning + https://arxiv.org/abs/2512.08485 + arXiv:2512.08485v2 Announce Type: replace +Abstract: Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to data poisoning attacks. Existing attack strategies typically rely on locally uniform perturbations, which treat all samples indiscriminately. 
This approach is inefficient, as it wastes the perturbation budget on low-impact samples, and lacks stealthiness due to significant statistical deviations. In this paper, we propose a novel Global Budget Allocation attack strategy. Leveraging the theoretical insight that a sample's influence on value function convergence is proportional to its Temporal Difference (TD) error, we formulate the attack as a global resource allocation problem. We derive a closed-form solution where perturbation magnitudes are assigned proportional to the TD-error sensitivity under a global L2 constraint. Empirical results on D4RL benchmarks demonstrate that our method significantly outperforms baseline strategies, achieving up to 80% performance degradation with minimal perturbations that evade detection by state-of-the-art statistical and spectral defenses. + oai:arXiv.org:2512.08485v2 cs.LG - cs.NA - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - A. Martina Neuman, Andres Felipe Lerma Pineda, Jason J. Bramburger, Simone Brugiapaglia + Junnan Qiu, Yuanjie Zhao, Jie Li - BGG Sequences -- A Riemannian perspective - https://arxiv.org/abs/2502.17016 - arXiv:2502.17016v2 Announce Type: replace-cross -Abstract: BGG resolutions and generalized BGG resolutions from representation theory of semisimple Lie algebras have been generalized to sequences of invariant differential operators on manifolds endowed with a geometric structure belonging to the family of parabolic geometries. Two of these structures, conformal structures and projective structures, occur as weakenings of a Riemannian metric respectively of a specified torsion-free connection on the tangent bundle. In particular, one obtains BGG sequences on open subsets of $\mathbb R^n$ as very special cases of the construction. 
It turned out that several examples of the latter sequences are of interest in applied mathematics, since they can be used to construct numerical methods to study operators relevant for elasticity theory, numerical relativity and related fields. - This article is intended to provide an intermediate level between BGG sequences for parabolic geometries and the case of domains in $\mathbb R^n$. We provide a construction of conformal BGG sequences on Riemannian manifolds and of projective BGG sequences on manifolds endowed with a volume preserving linear connection on their tangent bundle. These constructions do not need any input from parabolic geometries. Apart from standard differential geometry methods, the only deeper input comes from representation theory. So one can either view the results as a simplified version of the constructions for parabolic geometries in an explicit form. Alternatively, one can view them as providing an extension of the simplified constructions for domains in $\Bbb R^n$ to general Riemannian manifolds or to manifolds endowed with an appropriate connection on the tangent bundle. - oai:arXiv.org:2502.17016v2 - math.DG - cs.NA - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Andreas Cap + SensHRPS: Sensing Comfortable Human-Robot Proxemics and Personal Space With Eye-Tracking + https://arxiv.org/abs/2512.08518 + arXiv:2512.08518v2 Announce Type: replace +Abstract: Social robots must adjust to human proxemic norms to ensure user comfort and engagement. While prior research demonstrates that eye-tracking features reliably estimate comfort in human-human interactions, their applicability to interactions with humanoid robots remains unexplored. In this study, we investigate user comfort with the robot "Ameca" across four experimentally controlled distances (0.5 m to 2.0 m) using mobile eye-tracking and subjective reporting (N=19). We evaluate multiple machine learning and deep learning models to estimate comfort based on gaze features.
Contrary to previous human-human studies where Transformer models excelled, a Decision Tree classifier achieved the highest performance (F1-score = 0.73), with minimum pupil diameter identified as the most critical predictor. These findings suggest that physiological comfort thresholds in human-robot interaction differ from human-human dynamics and can be effectively modeled using interpretable logic. + oai:arXiv.org:2512.08518v2 + cs.RO + cs.AI + cs.HC + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://creativecommons.org/licenses/by/4.0/ - Andreas Cap + Nadezhda Kushina, Ko Watanabe, Aarthi Kannan, Ashita Ashok, Andreas Dengel, Karsten Berns - Optimal Trickle-Down Theorems for Path Complexes via C-Lorentzian Polynomials with Applications to Sampling and Log-Concave Sequences - https://arxiv.org/abs/2503.01005 - arXiv:2503.01005v3 Announce Type: replace-cross -Abstract: Let $X$ be a $d$-partite $d$-dimensional simplicial complex with parts $T_1,\dots,T_d$ and let $\mu$ be a distribution on the facets of $X$. Informally, we say $(X,\mu)$ is a path complex if for any $i<j<k$ and $F \in T_i,G \in T_j, K\in T_k$, we have $\mathbb{P}_\mu[F,K | G]=\mathbb{P}_\mu[F|G]\cdot\mathbb{P}_\mu[K|G].$ We develop a new machinery with $\mathcal{C}$-Lorentzian polynomials to show that if all links of $X$ of co-dimension 2 have spectral expansion at most $1/2$, then $X$ is a $1/2$-local spectral expander. We then prove that one can derive fast-mixing results and log-concavity statements for top-link spectral expanders. - We use our machinery to prove fast mixing results for sampling maximal flags of flats of distributive lattices (a.k.a. linear extensions of posets) subject to external fields, and to sample maximal flags of flats of "typical" modular lattices. We also use it to re-prove the Heron-Rota-Welsh conjecture and to prove a conjecture of Chan and Pak which gives a generalization of Stanley's log-concavity theorem. 
Lastly, we use it to prove near optimal trickle-down theorems for "sparse complexes" such as constructions by Lubotzky-Samuels-Vishne, Kaufman-Oppenheim, and O'Donnell-Pratt. - oai:arXiv.org:2503.01005v3 - math.CO - cs.CC - cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross + Mind to Hand: Purposeful Robotic Control via Embodied Reasoning + https://arxiv.org/abs/2512.08580 + arXiv:2512.08580v2 Announce Type: replace +Abstract: Humans act with context and intention, with reasoning playing a central role. While internet-scale data has enabled broad reasoning capabilities in AI systems, grounding these abilities in physical action remains a major challenge. We introduce Lumo-1, a generalist vision-language-action (VLA) model that unifies robot reasoning ("mind") with robot action ("hand"). Our approach builds upon the general multi-modal reasoning capabilities of pre-trained vision-language models (VLMs), progressively extending them to embodied reasoning and action prediction, and ultimately towards structured reasoning and reasoning-action alignment. This results in a three-stage pre-training pipeline: (1) Continued VLM pre-training on curated vision-language data to enhance embodied reasoning skills such as planning, spatial understanding, and trajectory prediction; (2) Co-training on cross-embodiment robot data alongside vision-language data; and (3) Action training with reasoning process on trajectories collected on Astribot S1, a bimanual mobile manipulator with human-like dexterity and agility. Finally, we integrate reinforcement learning to further refine reasoning-action consistency and close the loop between semantic inference and motor control. Extensive experiments demonstrate that Lumo-1 achieves significant performance improvements in embodied vision-language reasoning, a critical component for generalist robotic control. 
Real-world evaluations further show that Lumo-1 surpasses strong baselines across a wide range of challenging robotic tasks, with strong generalization to novel objects and environments, excelling particularly in long-horizon tasks and responding to human-natural instructions that require reasoning over strategy, concepts and space. + oai:arXiv.org:2512.08580v2 + cs.RO + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Peijun Tang, Shangjin Xie, Binyan Sun, Baifu Huang, Kuncheng Luo, Haotian Yang, Weiqi Jin, Jianan Wang + + + Decoupling Template Bias in CLIP: Harnessing Empty Prompts for Enhanced Few-Shot Learning + https://arxiv.org/abs/2512.08606 + arXiv:2512.08606v2 Announce Type: replace +Abstract: The Contrastive Language-Image Pre-Training (CLIP) model excels in few-shot learning by aligning visual and textual representations. Our study shows that template-sample similarity (TSS), defined as the resemblance between a text template and an image sample, introduces bias. This bias leads the model to rely on template proximity rather than true sample-to-category alignment, reducing both accuracy and robustness in classification. We present a framework that uses empty prompts, textual inputs that convey the idea of "emptiness" without category information. These prompts capture unbiased template features and offset TSS bias. The framework employs two stages. During pre-training, empty prompts reveal and reduce template-induced bias within the CLIP encoder. During few-shot fine-tuning, a bias calibration loss enforces correct alignment between images and their categories, ensuring the model focuses on relevant visual cues. Experiments across multiple benchmarks demonstrate that our template correction method significantly reduces performance fluctuations caused by TSS, yielding higher classification accuracy and stronger robustness. 
The repository of this project is available at https://github.com/zhenyuZ-HUST/Decoupling-Template-Bias-in-CLIP. + oai:arXiv.org:2512.08606v2 + cs.CV + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 + replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jonathan Leake, Kasper Lindberg, Shayan Oveis Gharan + Zhenyu Zhang, Guangyao Chen, Yixiong Zou, Zhimeng Huang, Yuhua Li - Median Consensus Embedding for Dimensionality Reduction - https://arxiv.org/abs/2503.08103 - arXiv:2503.08103v2 Announce Type: replace-cross -Abstract: This study proposes median consensus embedding (MCE) to address variability in low-dimensional embeddings caused by random initialization in nonlinear dimensionality reduction techniques such as $t$-distributed stochastic neighbor embedding. MCE is defined as the geometric median of multiple embeddings. By assuming multiple embeddings as independent and identically distributed random samples and applying large deviation theory, we prove that MCE achieves consistency at an exponential rate. Furthermore, we develop a practical algorithm to implement MCE by constructing a distance function between embeddings based on the Frobenius norm of the pairwise distance matrix of data points. Application to actual data demonstrates that MCE converges rapidly and effectively reduces instability. We further combine MCE with multiple imputation to address missing values and consider multiscale hyperparameters. Results confirm that MCE effectively mitigates instability issues in embedding methods arising from random initialization and other sources. - oai:arXiv.org:2503.08103v2 - stat.ML + C-DIRA: Computationally Efficient Dynamic ROI Routing and Domain-Invariant Adversarial Learning for Lightweight Driver Behavior Recognition + https://arxiv.org/abs/2512.08647 + arXiv:2512.08647v2 Announce Type: replace +Abstract: Driver distraction behavior recognition using in-vehicle cameras demands real-time inference on edge devices. 
However, lightweight models often fail to capture fine-grained behavioral cues, resulting in reduced performance on unseen drivers or under varying conditions. ROI-based methods also increase computational cost, making it difficult to balance efficiency and accuracy. This work addresses the need for a lightweight architecture that overcomes these constraints. We propose Computationally efficient Dynamic region of Interest Routing and domain-invariant Adversarial learning for lightweight driver behavior recognition (C-DIRA). The framework combines saliency-driven Top-K ROI pooling and fused classification for local feature extraction and integration. Dynamic ROI routing enables selective computation by applying ROI inference only to high difficulty data samples. Moreover, pseudo-domain labeling and adversarial learning are used to learn domain-invariant features robust to driver and background variation. Experiments on the State Farm Distracted Driver Detection Dataset show that C-DIRA maintains high accuracy with significantly fewer FLOPs and lower latency than prior lightweight models. It also demonstrates robustness under visual degradation such as blur and low-light, and stable performance across unseen domains. These results confirm C-DIRA's effectiveness in achieving compactness, efficiency, and generalization. + oai:arXiv.org:2512.08647v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Keito Inoshita + + + DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning + https://arxiv.org/abs/2512.08671 + arXiv:2512.08671v2 Announce Type: replace +Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. 
However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor. + oai:arXiv.org:2512.08671v2 cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Yui Tomo, Daisuke Yoneoka + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Huzaifa Arif - A Graph-based Approach to Variant Extraction from Sequences - https://arxiv.org/abs/2503.18472 - arXiv:2503.18472v2 Announce Type: replace-cross -Abstract: Accurate variant descriptions are of paramount importance in the field of genomics. The domain is confronted with increasingly complex variants, e.g., combinations of multiple indels, making it challenging to generate proper variant descriptions directly from chromosomal sequences. We present a graph based on all minimal alignments that is a complete representation of a variant, which gives insight into the nature of a variant compared to a single variant description. We provide three complementary extraction methods to derive variant descriptions from this graph, including one that yields domain-specific constructs from the HGVS nomenclature. 
Our experiments show that our methods in comparison with dbSNP, the authoritative variant database from the NCBI, result in identical HGVS descriptions for simple variants and more meaningful descriptions for complex variants, in particular for repeat expansions and contractions. - oai:arXiv.org:2503.18472v2 - q-bio.GN - cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - 10.1093/nargab/lqaf173 - NAR Genomics and Bioinformatics, Volume 7, Issue 4, December 2025 - Mark A. Santcroos, Walter A. Kosters, Mihai Lefter, Jeroen F. J. Laros, Jonathan K. Vis + Efficiently Reconstructing Dynamic Scenes One D4RT at a Time + https://arxiv.org/abs/2512.08924 + arXiv:2512.08924v2 Announce Type: replace +Abstract: Understanding and reconstructing the complex geometry and motion of dynamic scenes from video remains a formidable challenge in computer vision. This paper introduces D4RT, a simple yet powerful feedforward model designed to efficiently solve this task. D4RT utilizes a unified transformer architecture to jointly infer depth, spatio-temporal correspondence, and full camera parameters from a single video. Its core innovation is a novel querying mechanism that sidesteps the heavy computation of dense, per-frame decoding and the complexity of managing multiple, task-specific decoders. Our decoding interface allows the model to independently and flexibly probe the 3D position of any point in space and time. The result is a lightweight and highly scalable method that enables remarkably efficient training and inference. We demonstrate that our approach sets a new state of the art, outperforming previous methods across a wide spectrum of 4D reconstruction tasks. We refer to the project webpage for animated results: https://d4rt-paper.github.io/. 
+ oai:arXiv.org:2512.08924v2 + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chuhan Zhang, Guillaume Le Moing, Skanda Koppula, Ignacio Rocco, Liliane Momeni, Junyu Xie, Shuyang Sun, Rahul Sukthankar, Jo\"elle K. Barral, Raia Hadsell, Zoubin Ghahramani, Andrew Zisserman, Junlin Zhang, Mehdi S. M. Sajjadi - Residual-based Chebyshev filtered subspace iteration for sparse Hermitian eigenvalue problems tolerant to inexact matrix-vector products - https://arxiv.org/abs/2503.22652 - arXiv:2503.22652v4 Announce Type: replace-cross -Abstract: Chebyshev Filtered Subspace Iteration (ChFSI) has emerged as a robust alternative to Krylov eigensolvers for extracting a small subset of extremal eigenpairs from large sparse matrices, particularly in situations where these eigenpairs must be computed repeatedly as the system matrix evolves within an outer iteration. In this work, we propose R-ChFSI, a residual based reformulation of ChFSI designed to exhibit strong convergence properties even when the matrix-vector products are computed inexactly. We derive convergence guarantees under matrix-vector product approximations, providing a rigorous foundation for the method in large-scale eigenvalue computations. The tolerance of R-ChFSI to inexact matrix-vector products enables an efficient treatment of generalized Hermitian definite eigenproblems of the form $\textbf{A} \textbf{x} = \lambda \textbf{B} \textbf{x}$ where exact factorizations or high-accuracy iterative solves for evaluating $\textbf{B}^{-1}$ are often prohibitively expensive. Moreover, R-ChFSI naturally accommodates low-precision arithmetic for both standard and generalized eigenproblems, making it well-suited for modern hardware accelerators optimised for mixed-precision computation. 
To demonstrate the effectiveness of the approach, extensive numerical experiments are conducted on finite-element discretized eigenproblems with millions of degrees of freedom, solving for thousands of eigenpairs arising in \emph{ab initio} material modelling using Kohn-Sham density functional theory. For generalized eigenproblems employing approximate $\textbf{B}^{-1}$, R-ChFSI achieves desired residual norms orders of magnitude smaller than those obtained with standard ChFSI. In addition, R-ChFSI reliably reaches target residual tolerances (e.g., 10$^{-8}$) even with FP32 and TF32 arithmetic, significantly outperforming standard ChFSI in similar settings. - oai:arXiv.org:2503.22652v4 - physics.comp-ph - cs.NA - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 + Equiangular lines via matrix projection + https://arxiv.org/abs/2110.15842 + arXiv:2110.15842v5 Announce Type: replace-cross +Abstract: In 1973, Lemmens and Seidel posed the problem of determining the maximum number of equiangular lines in $\mathbb{R}^r$ with angle $\arccos(\alpha)$ and gave a partial answer in the regime $r \leq 1/\alpha^2 - 2$. At the other extreme where $r$ is at least exponential in $1/\alpha$, recent breakthroughs have led to an almost complete resolution of this problem. In this paper, we introduce a new method for obtaining upper bounds which unifies and improves upon previous approaches, thereby yielding bounds which bridge the gap between the aforementioned regimes and are best possible either exactly or up to a small multiplicative constant. Our approach relies on orthogonal projection of matrices with respect to the Frobenius inner product and as a byproduct, it yields the first extension of the Alon-Boppana theorem to dense graphs, with equality for strongly regular graphs corresponding to $\binom{r+1}{2}$ equiangular lines in $\mathbb{R}^r$. Applications of our method in the complex setting will be discussed as well. 
+ oai:arXiv.org:2110.15842v5 + math.CO + cs.IT + math.IT + math.MG + quant-ph + Thu, 11 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Igor Balla - Heart Failure Prediction using Modal Decomposition and Masked Autoencoders for Scarce Echocardiography Databases - https://arxiv.org/abs/2504.07606 - arXiv:2504.07606v3 Announce Type: replace-cross -Abstract: Heart diseases remain the leading cause of mortality worldwide, accounting for approximately 18 million deaths according to the WHO. In particular, heart failures (HF) press the healthcare industry to develop systems for their early, rapid, and effective prediction. This work presents an automatic system based on a novel framework which combines Modal Decomposition and Masked Autoencoders (MAE) to extend the application from heart disease classification to the more challenging and specific task of heart failure time prediction, not previously addressed to the best of authors' knowledge. This system comprises two stages. The first one transforms the data from a database of echocardiography video sequences into a large collection of annotated images compatible with the training phase of machine learning-based frameworks and deep learning-based ones. This stage includes the use of the Higher Order Dynamic Mode Decomposition (HODMD) algorithm for both data augmentation and feature extraction. The second stage builds and trains a Vision Transformer (ViT). MAEs based on a combined scheme of self-supervised (SSL) and supervised learning, so far barely explored in the literature about heart failure prediction, are adopted to effectively train the ViT from scratch, even with scarce databases. The designed neural network analyses images from echocardiography sequences in real time to estimate the time at which a heart failure will occur.
This approach is shown to improve prediction accuracy on scarce databases and to outperform several established ViT and Convolutional Neural Network (CNN) architectures. The source code will be incorporated into the next version release of the ModelFLOWs-app software (https://github.com/modelflows/ModelFLOWs-app). - oai:arXiv.org:2504.07606v3 - eess.IV - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Andr\'es Bell-Navas, Mar\'ia Villalba-Orero, Enrique Lara-Pezzi, Jes\'us Garicano-Mena, Soledad Le Clainche + Performance Analysis of Quantum CSS Error-Correcting Codes via MacWilliams Identities + https://arxiv.org/abs/2305.01301 + arXiv:2305.01301v3 Announce Type: replace-cross +Abstract: We analyze the performance of quantum stabilizer codes, one of the most important classes for practical implementations, on both symmetric and asymmetric quantum channels. To this aim, we first derive the weight enumerator (WE) for the undetectable errors based on the quantum MacWilliams identities. The WE is then used to evaluate tight upper bounds on the error rate of CSS quantum codes with \acl{MW} decoding. For surface codes we also derive a simple closed form expression of the bounds over the depolarizing channel. We introduce a novel approach that combines the knowledge of WE with a logical operator analysis, allowing the derivation of the exact asymptotic error rate for short codes. For example, on a depolarizing channel with physical error rate $\rho \to 0$, the logical error rate $\rho_\mathrm{L}$ is asymptotically $\rho_\mathrm{L} \approx 16 \rho^2$ for the $[[9,1,3]]$ Shor code, $\rho_\mathrm{L} \approx 16.3 \rho^2$ for the $[[7,1,3]]$ Steane code, $\rho_\mathrm{L} \approx 18.7 \rho^2$ for the $[[13,1,3]]$ surface code, and $\rho_\mathrm{L} \approx 149.3 \rho^3$ for the $[[41,1,5]]$ surface code. For larger codes our bound provides $\rho_\mathrm{L} \approx 1215 \rho^4$ and $\rho_\mathrm{L} \approx 663 \rho^5$ for the $[[85,1,7]]$ and the $[[181,1,10]]$ surface codes, respectively.
Finally, we extend our analysis to include realistic, noisy syndrome extraction circuits by modeling error propagation throughout gadgets. This enables estimation of logical error rates under faulty measurements. The performance analysis serves as a design tool for developing fault-tolerant quantum systems by guiding the selection of quantum codes based on their error correction capability. Additionally, it offers a novel perspective on quantum degeneracy, showing it represents the fraction of non-correctable error patterns shared by multiple logical operators. + oai:arXiv.org:2305.01301v3 + quant-ph + cs.IT + math.IT + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Andr\'es Bell-Navas, Mar\'ia Villalba-Orero, Enrique Lara-Pezzi, Jes\'us Garicano-Mena, Soledad Le Clainche + Diego Forlivesi, Lorenzo Valentini, Marco Chiani - Computing the Tropical Abel--Jacobi Transform and Tropical Distances for Metric Graphs - https://arxiv.org/abs/2504.11619 - arXiv:2504.11619v3 Announce Type: replace-cross -Abstract: Metric graphs are important models for capturing the structure of complex data across various domains. While much effort has been devoted to extracting geometric and topological features from graph data, computational aspects of metric graphs as abstract tropical curves remains unexplored. In this paper, we present the first computational and machine learning-driven study of metric graphs from the perspective of tropical algebraic geometry. Specifically, we study the tropical Abel--Jacobi transform, a vectorization of points on a metric graph via the tropical Abel--Jacobi map into its associated flat torus, the tropical Jacobian. We develop algorithms to compute this transform and investigate how the resulting embeddings depend on different combinatorial models of the same metric graph. 
- Once embedded, we compute pairwise distances between points in the tropical Jacobian under two natural metrics: the tropical polarization distance and the Foster--Zhang distance. Computing these distances is generally NP-hard, as they turn out to be linked to classical lattice problems in computational complexity; however, we identify a class of metric graphs where fast and explicit computations are feasible. For the general case, we propose practical algorithms for both exact and approximate distance matrix computations using lattice basis reduction and mixed-integer programming solvers. Our work lays the groundwork for future applications of tropical geometry and the tropical Abel--Jacobi transform in machine learning and data analysis. - oai:arXiv.org:2504.11619v3 - math.AG - cs.NA - math.MG - math.NA - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yueqi Cao, Anthea Monod + Towards a theory of natural directed paths + https://arxiv.org/abs/2306.02792 + arXiv:2306.02792v4 Announce Type: replace-cross +Abstract: We introduce the abstract setting of presheaf category on a thick category of cubes. Precubical sets, symmetric transverse sets, symmetric precubical sets and the new category of (non-symmetric) transverse sets are examples of this structure. All these presheaf categories share the same metric and homotopical properties from a directed homotopy point of view. This enables us to extend Raussen's notion of natural $d$-path for each of them. Finally, we adapt Ziemia\'{n}ski's notion of cube chain to this abstract setting and we prove that it has the expected behavior on precubical sets. As an application, we verify that the formalization of the parallel composition with synchronization of process algebra using the coskeleton functor of the category of symmetric transverse sets has a category of cube chains with the correct homotopy type.
+ oai:arXiv.org:2306.02792v4 + math.CT + cs.LO + math.AT + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yueqi Cao, Anthea Monod + Philippe Gaucher - Minimal dispersion on the sphere - https://arxiv.org/abs/2505.10929 - arXiv:2505.10929v2 Announce Type: replace-cross -Abstract: The minimal spherical cap dispersion ${\rm disp}_{\mathcal{C}}(n,d)$ is the largest number $\varepsilon\in (0,1]$ such that, for every $n$ points on the $d$-dimensional Euclidean unit sphere $\mathbb{S}^d$, there exists a spherical cap with normalized area $\varepsilon$ not containing any of these points. We study the behavior of ${\rm disp}_{\mathcal{C}}(n,d)$ as $n$ and $d$ grow to infinity. We develop connections to the problems of sphere covering and approximation of the Euclidean unit ball by inscribed polytopes. Existing and new results are presented in a unified way. Upper bounds on ${\rm disp}_{\mathcal{C}}(n,d)$ result from choosing the points independently and uniformly at random and possibly adding some well-separated points to close large gaps. Moreover, we study dispersion with respect to intersections of caps. - oai:arXiv.org:2505.10929v2 - math.MG + An explicit Euler method for Sobolev vector fields with applications to the continuity equation on non cartesian grids + https://arxiv.org/abs/2402.04118 + arXiv:2402.04118v4 Announce Type: replace-cross +Abstract: We prove a novel stability estimate in $L^\infty _t (L^p _x)$ between the regular Lagrangian flow of a Sobolev vector field and a piecewise affine approximation of such flow. This approximation of the flow is obtained by a (sort of) explicit Euler method, and it is the crucial tool to prove approximation results for the solution of the continuity equation by using the representation of the solution as the push-forward via the regular Lagrangian flow of the initial datum. 
We approximate the solution in two ways, one probabilistic and one deterministic, using different approximations for both the flow and the initial datum. Such estimates for the solution of the continuity equation are derived on non Cartesian grids and without the need to assume a CFL condition. + oai:arXiv.org:2402.04118v4 + math.AP cs.NA math.NA - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Alexander E. Litvak, Mathias Sonnleitner, Tomasz Szczepanski - - - From Fibers to Cells: Fourier-Based Registration Enables Virtual Cresyl Violet Staining From 3D Polarized Light Imaging - https://arxiv.org/abs/2505.11394 - arXiv:2505.11394v2 Announce Type: replace-cross -Abstract: Comprehensive assessment of the various aspects of the brain's microstructure requires the use of complementary imaging techniques. This includes measuring the spatial distribution of cell bodies (cytoarchitecture) and nerve fibers (myeloarchitecture). The gold standard for cytoarchitectonic analysis is light microscopic imaging of cell-body stained tissue sections. To reveal the 3D orientations of nerve fibers, 3D Polarized Light Imaging (3D-PLI) has been introduced, a method that is label-free and allows subsequent staining of sections after 3D-PLI measurement. By post-staining for cell bodies, a direct link between fiber- and cytoarchitecture can potentially be established in the same section. However, inevitable distortions introduced during the staining process make a costly nonlinear and cross-modal registration necessary in order to study the detailed relationships between cells and fibers in the images. In addition, the complexity of processing histological sections for post-staining only allows for a limited number of such samples. In this work, we take advantage of deep learning methods for image-to-image translation to generate a virtual staining of 3D-PLI that is spatially aligned at the cellular level. 
We use a supervised setting, building on a unique dataset of brain sections, to which Cresyl violet staining has been applied after 3D-PLI measurement. To ensure high correspondence between both modalities, we address the misalignment of training data using Fourier-based registration. In this way, registration can be efficiently calculated during training for local image patches of target and predicted staining. We demonstrate that the proposed method can predict a Cresyl violet staining from 3D-PLI, resulting in a virtual staining that exhibits plausible patterns of cell organization in gray matter, with larger cell bodies being localized at their expected positions. - oai:arXiv.org:2505.11394v2 - eess.IV - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1162/IMAG.a.1079 - Alexander Oberstrass, Esteban Vaca, Eric Upschulte, Meiqi Niu, Nicola Palomero-Gallagher, David Graessel, Christian Schiffer, Markus Axer, Katrin Amunts, Timo Dickscheid + Tommaso Cortopassi - Feature Integration Spaces: Joint Training Reveals Dual Encoding in Neural Network Representations - https://arxiv.org/abs/2507.00269 - arXiv:2507.00269v2 Announce Type: replace-cross -Abstract: Current sparse autoencoder (SAE) approaches to neural network interpretability assume that activations can be decomposed through linear superposition into sparse, interpretable features. Despite high reconstruction fidelity, SAEs consistently fail to eliminate polysemanticity and exhibit pathological behavioral errors. We propose that neural networks encode information in two complementary spaces compressed into the same substrate: feature identity and feature integration. To test this dual encoding hypothesis, we develop sequential and joint-training architectures to capture identity and integration patterns simultaneously. 
Joint training achieves 41.3% reconstruction improvement and 51.6% reduction in KL divergence errors. This architecture spontaneously develops bimodal feature organization: low squared norm features contributing to integration pathways and the rest contributing directly to the residual. Small nonlinear components (3% of parameters) achieve 16.5% standalone improvements, demonstrating parameter-efficient capture of computational relationships crucial for behavior. Additionally, intervention experiments using 2x2 factorial stimulus designs demonstrated that integration features exhibit selective sensitivity to experimental manipulations and produce systematic behavioral effects on model outputs, including significant statistical interaction effects across semantic dimensions. This work provides systematic evidence for (1) dual encoding in neural representations, (2) meaningful nonlinearly encoded feature interactions, and (3) introduces an architectural paradigm shift from post-hoc feature analysis to integrated computational design, establishing foundations for next-generation SAEs. - oai:arXiv.org:2507.00269v2 - q-bio.NC + Optimal Transportation by Orthogonal Coupling Dynamics + https://arxiv.org/abs/2410.08060 + arXiv:2410.08060v2 Announce Type: replace-cross +Abstract: Many numerical and learning algorithms rely on the solution of the Monge-Kantorovich problem and Wasserstein distances, which provide appropriate distributional metrics. While the natural approach is to treat the problem as an infinite-dimensional linear programming, such a methodology limits the computational performance due to the polynomial scaling with respect to the sample size along with intensive memory requirements. We propose a novel alternative framework to address the Monge-Kantorovich problem based on a projection type gradient descent scheme. 
The dynamics builds on the notion of the conditional expectation, where the connection with the opinion dynamics is leveraged to devise efficient numerical schemes. We demonstrate that the resulting dynamics recovers random maps with favourable computational performance. Along with the theoretical insight, the proposed dynamics paves the way for innovative approaches to construct numerical schemes for computing optimal transport maps as well as Wasserstein distances. + oai:arXiv.org:2410.08060v2 + math.OC cs.AI - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Omar Claflin + Mohsen Sadr, Peyman Mohajerin Esfahani, Hossein Gorji - Hebbian Physics Networks: A Self-Organizing Computational Architecture Based on Local Physical Laws - https://arxiv.org/abs/2507.00641 - arXiv:2507.00641v2 Announce Type: replace-cross -Abstract: Physical transport processes organize through local interactions that redistribute imbalance while preserving conservation. Classical solvers enforce this organization by applying fixed discrete operators on rigid grids. We introduce the Hebbian Physics Network (HPN), a computational framework that replaces this rigid scaffolding with a plastic transport geometry. An HPN is a coupled dynamical system of physical states on nodes and constitutive weights on edges in a graph. Residuals--local violations of continuity, momentum balance, or energy conservation--act as thermodynamic forces that drive the joint evolution of both the state and the operator (i.e. the adaptive weights). The weights adapt through a three-factor Hebbian rule, which we prove constitutes a strictly local gradient descent on the residual energy. This mechanism ensures thermodynamic stability: near equilibrium, the learned operator naturally converges to a symmetric, positive-definite form, rigorously reproducing Onsager\'s reciprocal relations without explicit enforcement. 
Far from equilibrium, the system undergoes a self-organizing search for a transport topology that restores global coercivity. Unlike optimization-based approaches that impose physics through global loss functions, HPNs embed conservation intrinsically: transport is restored locally by the evolving operator itself, without a global Poisson solve or backpropagated objective. We demonstrate the framework on scalar diffusion and incompressible lid-driven cavity flow, showing that physically consistent transport geometries and flow structures emerge from random initial conditions solely through residual-driven local adaptation. HPNs thus reframe computation not as the solution of a fixed equation, but as a thermodynamic relaxation process where the constitutive geometry and physical state co-evolve. - oai:arXiv.org:2507.00641v2 - nlin.AO - cs.LG - stat.CO - stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Machine Learning for Arbitrary Single-Qubit Rotations on an Embedded Device + https://arxiv.org/abs/2411.13037 + arXiv:2411.13037v2 Announce Type: replace-cross +Abstract: Here we present a technique for using machine learning (ML) for single-qubit gate synthesis on field programmable logic for a superconducting transmon-based quantum computer based on simulated studies. Our approach is multi-stage. We first bootstrap a model based on simulation with access to the full statevector for measuring gate fidelity. We next present an algorithm, named adapted randomized benchmarking (ARB), for fine-tuning the gate on hardware based on measurements of the devices. We also present techniques for deploying the model on programmable devices with care to reduce the required resources. While the techniques here are applied to a transmon-based computer, many of them are portable to other architectures. 
+ oai:arXiv.org:2411.13037v2
      + quant-ph
      + cs.ET
      + Thu, 11 Dec 2025 00:00:00 -0500
      replace-cross
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      10.1007/s42484-024-00214-8
      Madhav Narayan Bhat, Marco Russo, Luca P. Carloni, Giuseppe Di Guglielmo, Farah Fahim, Andy C. Y. Li, Gabriel N. Perdue


      - Partial decidability protocol for the Wang tiling problem from statistical mechanics and chaotic mapping
      - https://arxiv.org/abs/2507.13268
      - arXiv:2507.13268v2 Announce Type: replace-cross
-Abstract: We introduce a partial decidability protocol for the Wang tiling problem (which is the prototype of undecidable problems in combinatorics and statistical physics) by constructing a suitable mapping from tilings of finite squares of different sizes. Such a mapping depends on the initial family of Wang tiles (the alphabet) with which one would like to tile the plane. This allows us to define an effective entropy and temperature associated to the alphabet (together with the corresponding partition function). We identify a subclass of good alphabets by observing that when the entropy and temperature of a given alphabet are well-behaved in the thermodynamical sense, then such an alphabet is a good candidate to tile the infinite two-dimensional plane. Our proposal is tested successfully with the known available good alphabets (which produce periodic tilings, aperiodic but self-similar tilings as well as tilings which are neither periodic nor self-similar). Our analysis shows that the Kendall Tau coefficient is able to distinguish alphabets with a good thermodynamical behavior from alphabets with bad thermodynamical behavior. The transition from good to bad behavior is related to a transition from a non-chaotic to a chaotic regime in discrete dynamical systems of logistic type.
- oai:arXiv.org:2507.13268v2 - cond-mat.stat-mech + Refining Concentration for Gaussian Quadratic Chaos + https://arxiv.org/abs/2412.03774 + arXiv:2412.03774v3 Announce Type: replace-cross +Abstract: We slightly modify the proof of Hanson-Wright inequality (HWI) for concentration of Gaussian quadratic chaos where we tighten the bound by increasing the absolute constant in its formulation from the largest known value of 0.125 to at least 0.145 in the symmetric case. We also present a sharper version of an inequality due to Laurent and Massart (LMI) through which we increase the absolute constant in HWI from the largest available value of approximately $0.134$ due to LMI itself to at least $0.152$ in the positive-semidefinite case. A new sequence of concentration bounds indexed by $m=1,2,3,\cdots, \infty$ is developed that involves Schatten norms of the underlying matrix. The case $m=1$ recovers HWI. These bounds undergo a phase transition in the sense that if the tail parameter is smaller than a critical threshold $\tau_c$, then $m=1$ is the tightest and if it is larger than $\tau_c$, then $m=\infty$ is the tightest. This leads to a novel bound called the~$m_\infty$-bound. A separate concentration bound named twin to HWI is also developed that is tighter than HWI for both sufficiently small and large tail parameter. Finally, we explore concentration bounds when the underlying matrix is positive-semidefinite and only the dimension~$n$ and its largest eigenvalue are known. Five candidates are examined, namely, the $m_\infty$-bound, relaxed versions of HWI and LMI, the $\chi^2$-bound and the large deviations bound. The sharpest among these is always either the $m_\infty$-bound or the $\chi^2$-bound. The case of even dimension is given special attention. If $n=2,4,6$, the $\chi^2$-bound is tighter than the $m_\infty$-bound. 
If $n$ is an even integer greater than or equal to 8, the $m_\infty$-bound is sharper than the $\chi^2$-bound if and only if the ratio of the tail parameter over the largest eigenvalue lies inside a finite open interval which expands indefinitely as $n$ grows. + oai:arXiv.org:2412.03774v3 + math.PR cs.IT - hep-th math.IT - math.LO - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fabrizio Canfora, Marco Cedeno + Kamyar Moshksar - PET Image Reconstruction Using Deep Diffusion Image Prior - https://arxiv.org/abs/2507.15078 - arXiv:2507.15078v2 Announce Type: replace-cross -Abstract: Diffusion models have shown great promise in medical image denoising and reconstruction, but their application to Positron Emission Tomography (PET) imaging remains limited by tracer-specific contrast variability and high computational demands. In this work, we proposed an anatomical prior-guided PET image reconstruction method based on diffusion models, inspired by the deep diffusion image prior (DDIP) framework. The proposed method alternated between diffusion sampling and model fine-tuning guided by the PET sinogram, enabling the reconstruction of high-quality images from various PET tracers using a score function pretrained on a dataset of another tracer. To improve computational efficiency, the half-quadratic splitting (HQS) algorithm was adopted to decouple network optimization from iterative PET reconstruction. The proposed method was evaluated using one simulation and two clinical datasets. For the simulation study, a model pretrained on [$^{18}$F]FDG data was tested on [$^{18}$F]FDG data and amyloid-negative PET data to assess out-of-distribution (OOD) performance. For the clinical-data validation, ten low-dose [$^{18}$F]FDG datasets and one [$^{18}$F]Florbetapir dataset were tested on a model pretrained on data from another tracer. 
Experimental results show that the proposed PET reconstruction method can generalize robustly across tracer distributions and scanner types, providing an efficient and versatile reconstruction framework for low-dose PET imaging.
      - oai:arXiv.org:2507.15078v2
      eess.IV
      cs.CV
      physics.med-ph
      Wed, 10 Dec 2025 00:00:00 -0500
      replace-cross
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Fumio Hashimoto, Kuang Gong


      + INRetouch: Context Aware Implicit Neural Representation for Photography Retouching
      + https://arxiv.org/abs/2412.03848
      + arXiv:2412.03848v4 Announce Type: replace-cross
+Abstract: Professional photo editing remains challenging, requiring extensive knowledge of imaging pipelines and significant expertise. While recent deep learning approaches, particularly style transfer methods, have attempted to automate this process, they often struggle with output fidelity, editing control, and complex retouching capabilities. We propose a novel retouch transfer approach that learns from professional edits through before-after image pairs, enabling precise replication of complex editing operations. We develop a context-aware Implicit Neural Representation that learns to apply edits adaptively based on image content and context, and is capable of learning from a single example. Our method extracts implicit transformations from reference edits and adaptively applies them to new images. To facilitate this research direction, we introduce a comprehensive Photo Retouching Dataset comprising 100,000 high-quality images edited using over 170 professional Adobe Lightroom presets. Through extensive evaluation, we demonstrate that our approach not only surpasses existing methods in photo retouching but also enhances performance in related image reconstruction tasks like Gamut Mapping and Raw Reconstruction. By bridging the gap between professional editing capabilities and automated solutions, our work presents a significant step toward making sophisticated photo editing more accessible while maintaining high-fidelity results. The source code and the dataset are publicly available at https://omaralezaby.github.io/inretouch .
+ oai:arXiv.org:2412.03848v4 eess.IV cs.CV - physics.med-ph - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fumio Hashimoto, Kuang Gong + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Omar Elezabi, Marcos V. Conde, Zongwei Wu, Radu Timofte - Amorphous Solid Model of Vectorial Hopfield Neural Networks - https://arxiv.org/abs/2507.22787 - arXiv:2507.22787v5 Announce Type: replace-cross -Abstract: We introduce a three-dimensional vectorial extension of the Hopfield associative-memory model in which each neuron is a unit vector on $S^2$ and synaptic couplings are $3\times 3$ blocks generated through a vectorial Hebbian rule. The resulting block-structured operator is mathematically analogous to the Hessian of amorphous solids and induces a rigid energy landscape with deep minima for stored patterns. Simulations and spectral analysis show that the vectorial network substantially outperforms the classical binary Hopfield model. For moderate connectivity, the critical storage ratio $\gamma_c$ grows approximately linearly with the coordination number $Z$, while for $Z\gtrsim 40$ a high-connectivity regime emerges in which $\gamma_c$ systematically exceeds the extrapolated low-$Z$ linear fit. At the same time, a persistent spectral gap separates pattern modes from the bulk and basins of attraction enlarge, yielding enhanced robustness to initialization noise. Thus geometric constraints combined with amorphous-solid-inspired structure produce associative memories with superior storage and retrieval performance, especially in the high-connectivity ($Z \gtrsim 20$-$30$) regime. 
- oai:arXiv.org:2507.22787v5
      - cond-mat.dis-nn
      - cond-mat.soft
      - cond-mat.stat-mech
      - cs.LG
      - cs.NE
      - Wed, 10 Dec 2025 00:00:00 -0500


      + SEAL: Speech Embedding Alignment Learning for Speech Large Language Model with Retrieval-Augmented Generation
      + https://arxiv.org/abs/2502.02603
      + arXiv:2502.02603v2 Announce Type: replace-cross
+Abstract: Embedding-based retrieval models have made significant strides in retrieval-augmented generation (RAG) techniques for text and multimodal large language models (LLMs) applications. However, when it comes to speech large language models (SLLMs), these methods are limited to a two-stage process, where automatic speech recognition (ASR) is combined with text-based retrieval. This sequential architecture suffers from high latency and error propagation. To address these limitations, we propose a unified embedding framework that eliminates the need for intermediate text representations. Specifically, the framework includes separate speech and text encoders, followed by a shared scaling layer that maps both modalities into a common embedding space. Our model reduces pipeline latency by 50\% while achieving higher retrieval accuracy compared to traditional two-stage methods. We also provide a theoretical analysis of the challenges inherent in end-to-end speech retrieval and introduce architectural principles for effective speech-to-document matching. Extensive experiments demonstrate the robustness of our approach across diverse acoustic conditions and speaker variations, paving the way for a new paradigm in multimodal SLLMs retrieval systems.
      + oai:arXiv.org:2502.02603v2
      + eess.AS
      + cs.CL
      + cs.SD
      + Thu, 11 Dec 2025 00:00:00 -0500
      replace-cross
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      F. Gallavotti, A.
Zaccone + Chunyu Sun, Bingyu Liu, Zhichao Cui, Junhan Shi, Anbin Qi, Tian-hao Zhang, Dinghao Zhou, Lewei Lu - A General Approach to Visualizing Uncertainty in Statistical Graphics - https://arxiv.org/abs/2508.00937 - arXiv:2508.00937v3 Announce Type: replace-cross -Abstract: We present a general approach to visualizing uncertainty in static 2-D statistical graphics. If we treat a visualization as a function of its underlying quantities, uncertainty in those quantities induces a distribution over images. We show how to aggregate these images into a single visualization that represents the uncertainty. The approach can be viewed as a generalization of sample-based approaches that use overlay. Notably, standard representations, such as confidence intervals and bands, emerge with their usual coverage guarantees without being explicitly quantified or visualized. As a proof of concept, we implement our approach in the IID setting using resampling, provided as an open-source Python library. Because the approach operates directly on images, the user needs only to supply the data and the code for visualizing the quantities of interest without uncertainty. Through several examples, we show how both familiar and novel forms of uncertainty visualization can be created. The implementation is not only a practical validation of the underlying theory but also an immediately usable tool that can complement existing uncertainty-visualization libraries. - oai:arXiv.org:2508.00937v3 - stat.ME - cs.GR + Dynamic Pricing in the Linear Valuation Model using Shape Constraints + https://arxiv.org/abs/2502.05776 + arXiv:2502.05776v4 Announce Type: replace-cross +Abstract: We propose a shape-constrained approach to dynamic pricing for censored data in the linear valuation model eliminating the need for tuning parameters commonly required by existing methods. 
Previous works have addressed the challenge of unknown market noise distribution $F_0$ using strategies ranging from kernel methods to reinforcement learning algorithms, such as bandit techniques and upper confidence bounds (UCB), under the assumption that $F_0$ satisfies Lipschitz (or stronger) conditions. In contrast, our method relies on isotonic regression under the weaker assumption that $F_0$ is $\alpha$-H\"older continuous for some $\alpha \in (0,1]$, for which we derive a regret upper bound. Simulations and experiments with real-world data obtained by Welltower Inc (a major healthcare Real Estate Investment Trust) consistently demonstrate that our method attains lower empirical regret in comparison to several existing methods in the literature while offering the advantage of being tuning-parameter free. + oai:arXiv.org:2502.05776v4 + stat.ML cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Bernarda Petek, David Nabergoj, Erik \v{S}trumbelj + http://creativecommons.org/licenses/by/4.0/ + Daniele Bracale, Moulinath Banerjee, Yuekai Sun, Kevin Stoll, Salam Turki - Automated Construction of Artificial Lattice Structures with Designer Electronic States - https://arxiv.org/abs/2508.02581 - arXiv:2508.02581v2 Announce Type: replace-cross -Abstract: Manipulating matter with a scanning tunneling microscope (STM) enables creation of atomically defined artificial structures that host designer quantum states. However, the time-consuming nature of the manipulation process, coupled with the sensitivity of the STM tip, constrains the exploration of diverse configurations and limits the size of designed features. In this study, we present a reinforcement learning (RL)-based framework for creating artificial structures by spatially manipulating carbon monoxide (CO) molecules on a copper substrate using the STM tip. 
The automated workflow combines molecule detection and manipulation, employing deep learning-based object detection to locate CO molecules and linear assignment algorithms to allocate these molecules to designated target sites. We initially perform molecule maneuvering based on randomized parameter sampling for sample bias, tunneling current setpoint and manipulation speed. This dataset is then structured into an action trajectory used to train an RL agent. The model is subsequently deployed on the STM for real-time fine-tuning of manipulation parameters during structure construction. Our approach incorporates path planning protocols coupled with active drift compensation to enable atomically precise fabrication of structures with significantly reduced human input while realizing larger-scale artificial lattices with desired electronic properties. Using our approach, we demonstrate the automated construction of an extended artificial graphene lattice and confirm the existence of the characteristic Dirac point in its electronic structure. Further challenges to RL-based structural assembly scalability are discussed.
      - oai:arXiv.org:2508.02581v2
      - cond-mat.mtrl-sci
      cs.LG
      Wed, 10 Dec 2025 00:00:00 -0500
      replace-cross
      http://creativecommons.org/licenses/by/4.0/
      Ganesh Narasimha, Mykola Telychko, Wooin Yang, Arthur P. Baddorf, P. Ganesh, An-Ping Li, Rama Vasudevan


      + Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning
      + https://arxiv.org/abs/2502.16816
      + arXiv:2502.16816v4 Announce Type: replace-cross
+Abstract: We present the first finite-sample analysis of policy evaluation in robust average-reward Markov Decision Processes (MDPs). Prior work in this setting has established only asymptotic convergence guarantees, leaving open the question of sample complexity. In this work, we address this gap by showing that the robust Bellman operator is a contraction under a carefully constructed semi-norm, and developing a stochastic approximation framework with controlled bias. Our approach builds upon Multi-Level Monte Carlo (MLMC) techniques to estimate the robust Bellman operator efficiently.
To overcome the infinite expected sample complexity inherent in standard MLMC, we introduce a truncation mechanism based on a geometric distribution, ensuring a finite expected sample complexity while maintaining a small bias that decays exponentially with the truncation level. Our method achieves the order-optimal sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ for robust policy evaluation and robust average reward estimation, marking a significant advancement in robust reinforcement learning theory.
      + oai:arXiv.org:2502.16816v4
      + stat.ML
      + cs.LG
      + Thu, 11 Dec 2025 00:00:00 -0500
      replace-cross
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Yang Xu, Washim Uddin Mondal, Vaneet Aggarwal


      - Gaussian Approximation for Two-Timescale Linear Stochastic Approximation
      - https://arxiv.org/abs/2508.07928
      - arXiv:2508.07928v2 Announce Type: replace-cross
-Abstract: In this paper, we establish non-asymptotic bounds for the accuracy of normal approximation for linear two-timescale stochastic approximation (TTSA) algorithms driven by martingale difference or Markov noise. Focusing on both the last iterate and Polyak-Ruppert averaging regimes, we derive bounds for normal approximation in terms of the convex distance between probability distributions. Our analysis reveals a non-trivial interaction between the fast and slow timescales: the normal approximation rate for the last iterate improves as the timescale separation increases, while it decreases in the Polyak-Ruppert averaged setting. We also provide the high-order moment bounds for the error of the linear TTSA algorithm, which may be of independent interest.
- oai:arXiv.org:2508.07928v2 + Revenue Maximization Under Sequential Price Competition Via The Estimation Of s-Concave Demand Functions + https://arxiv.org/abs/2503.16737 + arXiv:2503.16737v5 Announce Type: replace-cross +Abstract: We consider price competition among multiple sellers over a selling horizon of $T$ periods. In each period, sellers simultaneously offer their prices (which are made public) and subsequently observe their respective demand (not made public). The demand function of each seller depends on all sellers' prices through a private, unknown, and nonlinear relationship. We propose a dynamic pricing policy that uses semi-parametric least-squares estimation and show that when the sellers employ our policy, their prices converge at a rate of $O(T^{-1/7})$ to the Nash equilibrium prices that sellers would reach if they were fully informed. Each seller incurs a regret of $O(T^{5/7})$ relative to a dynamic benchmark policy. A theoretical contribution of our work is proving the existence of equilibrium under shape-constrained demand functions via the concept of $s$-concavity and establishing regret bounds of our proposed policy. Technically, we also establish new concentration results for the least squares estimator under shape constraints. Our findings offer significant insights into dynamic competition-aware pricing and contribute to the broader study of non-parametric learning in strategic decision-making. 
+ oai:arXiv.org:2503.16737v5 stat.ML cs.LG - math.OC math.PR math.ST stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Daniele Bracale, Moulinath Banerjee, Cong Shi, Yuekai Sun + + + Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems + https://arxiv.org/abs/2503.18309 + arXiv:2503.18309v4 Announce Type: replace-cross +Abstract: Gaussian process state-space models (GPSSMs) offer a principled framework for learning and inference in nonlinear dynamical systems with uncertainty quantification. However, existing GPSSMs are limited by the use of multiple independent stationary Gaussian processes (GPs), leading to prohibitive computational and parametric complexity in high-dimensional settings and restricted modeling capacity for non-stationary dynamics. To address these challenges, we propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive non-stationary implicit process prior that can capture complex transition dynamics while significantly reducing model complexity. For the inference of the implicit process, we develop a variational inference algorithm that jointly approximates the posterior over the underlying GP and the neural network parameters defining the normalizing flows. To avoid explicit variational parameterization of the latent states, we further incorporate the ensemble Kalman filter (EnKF) into the variational framework, enabling accurate and efficient state estimation. 
Extensive empirical evaluations on synthetic and real-world datasets demonstrate the superior performance of our ETGPSSM in system dynamics learning, high-dimensional state estimation, and time-series forecasting, outperforming existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy. + oai:arXiv.org:2503.18309v4 + stat.ML + cs.LG + eess.SP + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Bogdan Butyrin, Artemy Rubtsov, Alexey Naumov, Vladimir Ulyanov, Sergey Samsonov + Zhidi Lin, Ying Li, Feng Yin, Juan Maro\~nas, Alexandre H. Thi\'ery - Guiding WaveMamba with Frequency Maps for Image Debanding - https://arxiv.org/abs/2508.11331 - arXiv:2508.11331v2 Announce Type: replace-cross -Abstract: Compression at low bitrates in modern codecs often introduces banding artifacts, especially in smooth regions such as skies. These artifacts degrade visual quality and are common in user-generated content due to repeated transcoding. We propose a banding restoration method that employs the Wavelet State Space Model and a frequency masking map to preserve high-frequency details. Furthermore, we provide a benchmark of open-source banding restoration methods and evaluate their performance on two public banding image datasets. Experimentation on the available datasets suggests that the proposed post-processing approach effectively suppresses banding compared to the state-of-the-art method (a DBI value of 0.082 on BAND-2k) while preserving image textures. Visual inspections of the results confirm this. Code and supplementary material are available at: https://github.com/xinyiW915/Debanding-PCS2025. 
- oai:arXiv.org:2508.11331v2 - eess.IV - cs.CV - Wed, 10 Dec 2025 00:00:00 -0500 + Knowledge Independence Breeds Disruption but Limits Recognition + https://arxiv.org/abs/2504.09589 + arXiv:2504.09589v2 Announce Type: replace-cross +Abstract: Despite extensive research on scientific disruption, two questions remain: why disruption has declined amid growing knowledge, and why disruptive work receives fewer and delayed citations. One way to address these questions is to identify an intrinsic, paper-level property that reliably predicts disruption and explains both patterns. Here, we propose a novel measure, knowledge independence, capturing the extent to which a paper draws on references that do not cite one another. Analyzing 114 million publications, we find that knowledge independence strongly predicts disruption and mediates the disruptive advantage of small, onsite, and fresh teams. Its long-term decline, nonreproducible by null models, provides a mechanistic explanation for the parallel decline in disruption. Causal and simulation evidence further indicates that knowledge independence drives the persistent trade-off between disruption and impact. Taken together, these findings fill a critical gap in understanding scientific innovation, revealing a universal law: Knowledge independence breeds disruption but limits recognition. 
+ oai:arXiv.org:2504.09589v2 + physics.soc-ph + cs.DL + cs.SI + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Xinyi Wang, Smaranda Tasmoc, Nantheera Anantrasirichai, Angeliki Katsenou + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiaoyao Yu, Talal Rahwan, Tao Jia - Random forest-based out-of-distribution detection for robust lung cancer segmentation - https://arxiv.org/abs/2508.19112 - arXiv:2508.19112v3 Announce Type: replace-cross -Abstract: Accurate detection and segmentation of cancerous lesions from computed tomography (CT) scans is essential for automated treatment planning and cancer treatment response assessment. Transformer-based models with self-supervised pretraining can produce reliably accurate segmentation from in-distribution (ID) data but degrade when applied to out-of-distribution (OOD) datasets. We address this challenge with RF-Deep, a random forest classifier that utilizes deep features from a pretrained transformer encoder of the segmentation model to detect OOD scans and enhance segmentation reliability. The segmentation model comprises a Swin Transformer encoder, pretrained with masked image modeling (SimMIM) on 10,432 unlabeled 3D CT scans covering cancerous and non-cancerous conditions, with a convolution decoder, trained to segment lung cancers in 317 3D scans. Independent testing was performed on 603 3D CT public datasets that included one ID dataset and four OOD datasets comprising chest CTs with pulmonary embolism (PE) and COVID-19, and abdominal CTs with kidney cancers and healthy volunteers. RF-Deep detected OOD cases with a FPR95 of 18.26%, 27.66%, and less than 0.1% on PE, COVID-19, and abdominal CTs, consistently outperforming established OOD approaches. The RF-Deep classifier provides a simple and effective approach to enhance reliability of cancer segmentation in ID and OOD scenarios. 
- oai:arXiv.org:2508.19112v3 - eess.IV - cs.CV - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Benchmarking data encoding methods in Quantum Machine Learning + https://arxiv.org/abs/2505.14295 + arXiv:2505.14295v2 Announce Type: replace-cross +Abstract: Data encoding plays a fundamental and distinctive role in Quantum Machine Learning (QML). While classical approaches process data directly as vectors, QML may require transforming classical data into quantum states through encoding circuits, known as quantum feature maps or quantum embeddings. This step leverages the inherently high-dimensional and non-linear nature of Hilbert space, enabling more efficient data separation in complex feature spaces that may be inaccessible to classical methods. This encoding part significantly affects the performance of the QML model, so it is important to choose the right encoding method for the dataset to be encoded. However, this choice is generally arbitrary, since there is no "universal" rule for knowing which encoding to choose based on a specific set of data. There are currently a variety of encoding methods using different quantum logic gates. We studied the most commonly used types of encoding methods and benchmarked them using different datasets. + oai:arXiv.org:2505.14295v2 + quant-ph + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Aneesh Rangnekar, Harini Veeraraghavan + Orlane Zang, Gr\'egoire Barru\'e, Tony Quertier - Universal Representation of Generalized Convex Functions and their Gradients - https://arxiv.org/abs/2509.04477 - arXiv:2509.04477v2 Announce Type: replace-cross -Abstract: A wide range of optimization problems can often be written in terms of generalized convex functions (GCFs). When this structure is present, it can convert certain nested bilevel objectives into single-level problems amenable to standard first-order optimization methods. 
We provide a new differentiable layer with a convex parameter space and show (Theorems 5.1 and 5.2) that it and its gradient are universal approximators for GCFs and their gradients. We demonstrate how this parameterization can be leveraged in practice by (i) learning optimal transport maps with general cost functions and (ii) learning optimal auctions of multiple goods. In both these cases, we show how our layer can be used to convert the existing bilevel or min-max formulations into single-level problems that can be solved efficiently with first-order methods. - oai:arXiv.org:2509.04477v2 - math.OC + Structured quantum learning via em algorithm for Boltzmann machines + https://arxiv.org/abs/2507.21569 + arXiv:2507.21569v2 Announce Type: replace-cross +Abstract: Quantum Boltzmann machines (QBMs) are generative models with potential advantages in quantum machine learning, yet their training is fundamentally limited by the barren plateau problem, where gradients vanish exponentially with system size. We introduce a quantum version of the em algorithm, an information-geometric generalization of the classical Expectation-Maximization method, which circumvents gradient-based optimization on non-convex functions. Implemented on a semi-quantum restricted Boltzmann machine (sqRBM) -- a hybrid architecture with quantum effects confined to the hidden layer -- our method achieves stable learning and outperforms gradient descent on multiple benchmark datasets. These results establish a structured and scalable alternative to gradient-based training in QML, offering a pathway to mitigate barren plateaus and enhance quantum generative modeling. 
+ oai:arXiv.org:2507.21569v2 + quant-ph cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Moeen Nehzati + http://creativecommons.org/licenses/by/4.0/ + Takeshi Kimura, Kohtaro Kato, Masahito Hayashi - Contractive kinetic Langevin samplers beyond global Lipschitz continuity - https://arxiv.org/abs/2509.12031 - arXiv:2509.12031v2 Announce Type: replace-cross -Abstract: In this paper, we examine the problem of sampling from log-concave distributions with (possibly) superlinear gradient growth under kinetic (underdamped) Langevin algorithms. Using a carefully tailored taming scheme, we propose two novel discretizations of the kinetic Langevin SDE, and we show that they are both contractive and satisfy a log-Sobolev inequality. Building on this, we establish a series of non-asymptotic bounds in $2$-Wasserstein distance between the law reached by each algorithm and the underlying target measure. - oai:arXiv.org:2509.12031v2 - math.PR - cs.NA - math.NA + Diffusion Secant Alignment for Score-Based Density Ratio Estimation + https://arxiv.org/abs/2509.04852 + arXiv:2509.04852v3 Announce Type: replace-cross +Abstract: Estimating density ratios has become increasingly important with the recent rise of score-based and diffusion-inspired methods. However, current tangent-based approaches rely on a high-variance learning objective, which leads to unstable training and costly numerical integration during inference. We propose \textit{Interval-annealed Secant Alignment Density Ratio Estimation (ISA-DRE)}, a score-based framework along diffusion interpolants that replaces the instantaneous tangent with its interval integral, the secant, as the learning target. We show theoretically that the secant is a provably lower variance and smoother target for neural approximation, and also a strictly more general representation that contains the tangent as the infinitesimal limit. 
To make secant learning feasible, we introduce the \textit{Secant Alignment Identity (SAI)} to enforce self consistency between secant and tangent representations, and \textit{Contraction Interval Annealing (CIA)} to ensure stable convergence. Empirically, this stability-first formulation produces high efficiency and accuracy. ISA-DRE achieves comparable or superior results with fewer function evaluations, demonstrating robustness under large distribution discrepancies and effectively mitigating the density-chasm problem. + oai:arXiv.org:2509.04852v3 stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Iosif Lytras, Panayotis Mertikopoulos + Wei Chen, Shigui Li, Jiacheng Li, Jian Xu, Zhiqi Lin, Junmei Yang, Delu Zeng, John Paisley, Qibin Zhao - Fourier Spectrum of Noisy Quantum Algorithms - https://arxiv.org/abs/2510.06385 - arXiv:2510.06385v2 Announce Type: replace-cross -Abstract: Quantum computing promises exponential speedups for certain problems, yet fully universal quantum computers remain out of reach and near-term devices are inherently noisy. Motivated by this, we study noisy quantum algorithms and the landscape between $\mathsf{BQP}$ and $\mathsf{BPP}$. We build on a powerful technique to differentiate quantum and classical algorithms called the level-$\ell$ Fourier growth (the sum of absolute values of Fourier coefficients of sets of size $\ell$) and show that it can also be used to differentiate quantum algorithms based on the types of resources used. We show that noise acting on a quantum algorithm dampens its Fourier growth in ways intricately linked to the type of noise. 
- Concretely, we study noisy models of quantum computation where highly mixed states are prevalent, namely: $\mathsf{DQC}_k$ algorithms, where $k$ qubits are clean and the rest are maximally mixed, and $\frac{1}{2}\mathsf {BQP}$ algorithms, where the initial state is maximally mixed, but the algorithm is given knowledge of the initial state at the end of the computation. We establish upper bounds on the Fourier growth of $\mathsf{DQC}_k$, $\frac{1}{2}\mathsf{BQP}$ and $\mathsf{BQP}$ algorithms and leverage the differences between these bounds to derive oracle separations between these models. In particular, we show that 2-Forrelation and 3-Forrelation require $N^{\Omega(1)}$ queries in the $\mathsf{DQC}_1$ and $\frac{1}{2}\mathsf{BQP}$ models respectively. Our results are proved using a new matrix decomposition lemma that might be of independent interest. - oai:arXiv.org:2510.06385v2 - quant-ph - cs.CC - Wed, 10 Dec 2025 00:00:00 -0500 + Next-Generation Reservoir Computing for Dynamical Inference + https://arxiv.org/abs/2509.11338 + arXiv:2509.11338v2 Announce Type: replace-cross +Abstract: We present a simple and scalable implementation of next-generation reservoir computing (NGRC) for modeling dynamical systems from time-series data. The method uses a pseudorandom nonlinear projection of time-delay embedded inputs, allowing the feature-space dimension to be chosen independently of the observation size and offering a flexible alternative to polynomial-based NGRC projections. We demonstrate the approach on benchmark tasks, including attractor reconstruction and bifurcation diagram estimation, using partial and noisy measurements. We further show that small amounts of measurement noise during training act as an effective regularizer, improving long-term autonomous stability compared to standard regression alone. Across all tests, the models remain stable over long rollouts and generalize beyond the training data. 
The framework offers explicit control of system state during prediction, and these properties make NGRC a natural candidate for applications such as surrogate modeling and digital-twin applications. + oai:arXiv.org:2509.11338v2 + stat.ML + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Uma Girish + http://creativecommons.org/licenses/by/4.0/ + Rok Cestnik, Erik A. Martens - Trickle-down Theorems via C-Lorentzian Polynomials II: Pairwise Spectral Influence and Improved Dobrushin's Condition - https://arxiv.org/abs/2510.06549 - arXiv:2510.06549v2 Announce Type: replace-cross -Abstract: Let $\mu$ be a probability distribution on a multi-state spin system on a set $V$ of sites. Equivalently, we can think of this as a $d$-partite simplical complex with distribution $\mu$ on maximal faces. For any pair of vertices $u,v\in V$, define the pairwise spectral influence $\mathcal{I}_{u,v}$ as follows. Let $\sigma$ be a choice of spins $s_w\in S_w$ for every $w\in V \setminus \{u,v\}$, and construct a matrix in $\mathbb{R}^{(S_u\cup S_v)\times (S_u\cup S_v)}$ where for any $s_u\in S_u, s_v\in S_v$, the $(us_u,vs_v)$-entry is the probability that $s_v$ is the spin of $v$ conditioned on $s_u$ being the spin of $u$ and on $\sigma$. Then $\mathcal{I}_{u,v}$ is the maximal second eigenvalue of this matrix, over all choices of spins for all $w \in V \setminus \{u,v\}$. Equivalently, $\mathcal{I}_{u,v}$ is the maximum local spectral expansion of links of codimension $2$ that include a spin for every $w \in V \setminus \{u,v\}$. - We show that if the largest eigenvalue of the pairwise spectral influence matrix with entries $\mathcal{I}_{u,v}$ is bounded away from 1, i.e. $\lambda_{\max}(\mathcal{I})\leq 1-\epsilon$ (and $X$ is connected), then the Glauber dynamics mixes rapidly and generate samples from $\mu$. 
This improves/generalizes the classical Dobrushin's influence matrix as the $\mathcal{I}_{u,v}$ lower-bounds the classical influence of $u\to v$. As a by-product, we also prove improved/almost optimal trickle-down theorems for partite simplicial complexes. The proof builds on the trickle-down theorems via $\mathcal{C}$-Lorentzian polynomials machinery recently developed by the authors and Lindberg. - oai:arXiv.org:2510.06549v2 - math.CO - cs.CC - cs.DS - Wed, 10 Dec 2025 00:00:00 -0500 + DeepMech: A Machine Learning Framework for Chemical Reaction Mechanism Prediction + https://arxiv.org/abs/2509.15872 + arXiv:2509.15872v2 Announce Type: replace-cross +Abstract: Prediction of complete step-by-step chemical reaction mechanisms (CRMs) remains a major challenge. Whereas the traditional approaches in CRM tasks rely on expert-driven experiments or costly quantum chemical computations, contemporary deep learning (DL) alternatives ignore key intermediates and mechanistic steps and often suffer from hallucinations. We present DeepMech, an interpretable graph-based DL framework employing atom- and bond-level attention, guided by generalized templates of mechanistic operations (TMOps), to generate CRMs. Trained on our curated ReactMech dataset (~30K CRMs with 100K atom-mapped and mass-balanced elementary steps), DeepMech achieves 98.98+/-0.12% accuracy in predicting elementary steps and 95.94+/-0.21% in complete CRM tasks, besides maintaining high fidelity even in out-of-distribution scenarios as well as in predicting side and/or byproducts. Extension to multistep CRMs relevant to prebiotic chemistry, demonstrates the ability of DeepMech in effectively reconstructing 2 pathways from simple primordial substrates to complex biomolecules such as serine and aldopentose. Attention analysis identifies reactive atoms/bonds in line with chemical intuition, rendering our model interpretable and suitable for reaction design. 
+ oai:arXiv.org:2509.15872v2 + physics.chem-ph + cs.AI + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Manajit Das, Ajnabiul Hoque, Mayank Baranwal, Raghavan B. Sunoj + + + Forecasting the Future with Yesterday's Climate: Temperature Bias in AI Weather and Climate Models + https://arxiv.org/abs/2509.22359 + arXiv:2509.22359v2 Announce Type: replace-cross +Abstract: AI-based climate and weather models have rapidly gained popularity, providing faster forecasts with skill that can match or even surpass that of traditional dynamical models. Despite this success, these models face a key challenge: predicting future climates while being trained only with historical data. In this study, we investigate this issue by analyzing boreal winter land temperature biases in AI weather and climate models. We examine two weather models, FourCastNet V2 Small (FourCastNet) and Pangu Weather (Pangu), evaluating their predictions for 2020-2025 and Ai2 Climate Emulator version 2 (ACE2) for 1996-2010. These time periods lie outside of the respective models' training sets and are significantly more recent than the bulk of their training data, allowing us to assess how well the models generalize to new, i.e. more modern, conditions. We find that all three models produce cold-biased mean temperatures, resembling climates from 15-20 years earlier than the period they are predicting. In some regions, like the Eastern U.S., the predictions resemble climates from as much as 20-30 years earlier. Further analysis shows that FourCastNet's and Pangu's cold bias is strongest in the hottest predicted temperatures, indicating limited training exposure to modern extreme heat events. In contrast, ACE2's bias is more evenly distributed but largest in regions, seasons, and parts of the temperature distribution where climate change has been most pronounced. 
These findings underscore the challenge of training AI models exclusively on historical data and highlight the need to account for such biases when applying them to future climate prediction. + oai:arXiv.org:2509.22359v2 + physics.ao-ph + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jonathan Leake, Shayan Oveis Gharan + Jacob B. Landsberg, Elizabeth A. Barnes - Beyond Hoeffding and Chernoff: Trading conclusiveness for advantages in quantum hypothesis testing - https://arxiv.org/abs/2510.07601 - arXiv:2510.07601v2 Announce Type: replace-cross -Abstract: The ultimate limits of quantum state discrimination are often thought to be captured by asymptotic bounds that restrict the achievable error probabilities, notably the quantum Chernoff and Hoeffding bounds. Here we study hypothesis testing protocols that are permitted a probability of producing an inconclusive discrimination outcome, and investigate their performance when this probability is suitably constrained. We show that even by allowing an arbitrarily small probability of inconclusiveness, the limits imposed by the quantum Hoeffding and Chernoff bounds can be significantly exceeded. This completely circumvents the conventional trade-offs between error exponents in hypothesis testing while incurring only a vanishingly small overhead over conventional approaches. Such improvements over standard state discrimination are robust and can be obtained even when an exponentially vanishing probability of inconclusive outcomes is demanded. Relaxing the constraints on the inconclusive probability can enable even larger advantages, but this comes at a price. We show a 'strong converse' property of this setting: targeting error exponents beyond those achievable with vanishing inconclusiveness necessarily forces the probability of inconclusive outcomes to converge to one. 
By exactly quantifying the rate of this convergence, we give a complete characterisation of the trade-offs between error exponents and rates of conclusive outcome probabilities. Overall, our results provide a comprehensive asymptotic picture of how the allowance for inconclusive measurement outcomes reshapes optimal quantum hypothesis testing. - oai:arXiv.org:2510.07601v2 + Good quantum codes with addressable and parallelizable transversal non-Clifford gates + https://arxiv.org/abs/2510.19809 + arXiv:2510.19809v2 Announce Type: replace-cross +Abstract: In this work, we prove that for any $m>1$, there exists a family of good qudit quantum codes supporting transversal logical $\mathsf{C}^{m-1}\mathsf{Z}$ gates that can address specified logical qudits and be largely executed in parallel. Building on the family of good quantum error-correcting codes presented in He et al. (2025), which support addressable and transversal logical $\mathsf{CCZ}$ gates, we extend their framework and show how to perform large sets of gates in parallel. The construction relies on the classical algebraic geometry codes of Stichtenoth (IEEE Trans. Inf. Theory, 2006). Our results lead to a substantial reduction in the depth overhead of multi-control-$Z$ circuits. In particular, we show that the minimal depth of any logical $\mathsf{C}^{m-1}\mathsf{Z}$ circuit involving qudits from $m$ distinct code blocks is upper bounded by $O(k^{m-1})$, where $k$ is the code dimension. While this overhead is optimal for dense $\mathsf{C}^{m-1}\mathsf{Z}$ circuits, for sparse circuits we discuss how the depth overhead can be significantly reduced by exploiting the structure of the quantum code. 
+ oai:arXiv.org:2510.19809v2 quant-ph cs.IT - math-ph math.IT - math.MP - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kaiyuan Ji, Bartosz Regula + http://creativecommons.org/licenses/by/4.0/ + Virgile Gu\'emard - Formalization of Auslander--Buchsbaum--Serre criterion in Lean4 - https://arxiv.org/abs/2510.24818 - arXiv:2510.24818v3 Announce Type: replace-cross -Abstract: We present a comprehensive formalization in the Lean4 theorem prover of the Auslander--Buchsbaum--Serre criterion, which characterizes regular local rings as those Noetherian local rings with finite global dimension. Rather than following the well-known proof that computes the projective dimension of the residue field via quotient by regular sequences and uses the Koszul complex to bound the cotangent space dimension by the global dimension, our approach is built systematically on the formalization of depth defined via the vanishing of Ext functors. We establish key homological results including Rees' theorem, the Auslander--Buchsbaum formula, and Ischebeck's theorem, and further develop the theories of Cohen--Macaulay modules and rings, including a complete formalization of the unmixedness theorem for Cohen--Macaulay rings. To prove the Auslander--Buchsbaum--Serre criterion, we show that maximal Cohen--Macaulay modules over regular local rings are free and establish a weakened form of the Ferrand--Vasconcelos theorem specific for the unique maximal ideal. As corollaries, we deduce that regularity can be checked at maximal ideals and formalize Hilbert's Syzygy Theorem. This work demonstrates how homological algebra can be effectively employed in the formalization of commutative algebra, providing extensive infrastructure for future developments in the field. 
- oai:arXiv.org:2510.24818v3 - math.AC - cs.FL - cs.LO - Wed, 10 Dec 2025 00:00:00 -0500 + Generalized Guarantees for Variational Inference in the Presence of Even and Elliptical Symmetry + https://arxiv.org/abs/2511.01064 + arXiv:2511.01064v2 Announce Type: replace-cross +Abstract: We extend several recent results providing symmetry-based guarantees for variational inference (VI) with location-scale families. VI approximates a target density $p$ by the best match $q^*$ in a family $Q$ of tractable distributions that in general does not contain $p$. It is known that VI can recover key properties of $p$, such as its mean and correlation matrix, when $p$ and $Q$ exhibit certain symmetries and $q^*$ is found by minimizing the reverse Kullback-Leibler divergence. We extend these guarantees in two important directions. First, we provide symmetry-based guarantees for $f$-divergences, a broad class that includes the reverse and forward Kullback-Leibler divergences and the $\alpha$-divergences. We highlight properties specific to the reverse Kullback-Leibler divergence under which we obtain our strongest guarantees. Second, we obtain further guarantees for VI when the target density $p$ exhibits even and elliptical symmetries in some but not all of its coordinates. These partial symmetries arise naturally in Bayesian hierarchical models, where the prior induces a challenging geometry but still possesses axes of symmetry. We illustrate these theoretical results in a number of experimental settings. + oai:arXiv.org:2511.01064v2 + stat.ML + cs.LG + stat.CO + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Naillin Guan, Yongle Hu + http://creativecommons.org/licenses/by/4.0/ + Charles C. Margossian, Lawrence K. 
Saul - A Non-Variational Quantum Approach to the Job Shop Scheduling Problem - https://arxiv.org/abs/2510.26859 - arXiv:2510.26859v2 Announce Type: replace-cross -Abstract: Quantum heuristics offer a potential advantage for combinatorial optimization but are constrained by near-term hardware limitations. We introduce Iterative-QAOA, a variant of QAOA designed to mitigate these constraints. The algorithm combines a non-variational, shallow-depth circuit approach using fixed-parameter schedules with an iterative warm-starting process. We benchmark the algorithm on Just-in-Time Job Shop Scheduling Problem (JIT-JSSP) instances on IonQ Forte Generation QPUs, representing some of the largest such problems ever executed on quantum hardware. We compare the performance of the algorithm against both the Variational Quantum Imaginary Time Evolution (VarQITE) algorithm and the non-variational Linear Ramp (LR) QAOA algorithm. We find that Iterative-QAOA robustly converges to find optimal solutions as well as high-quality, near-optimal solutions for all problem instances evaluated. We evaluate the algorithm on larger problem instances up to 97 qubits using tensor network simulations. The scaling behavior of the algorithm indicates potential for solving industrial-scale problems on fault-tolerant quantum computers. - oai:arXiv.org:2510.26859v2 - quant-ph - cs.ET - Wed, 10 Dec 2025 00:00:00 -0500 + Statistical Properties of Rectified Flow + https://arxiv.org/abs/2511.03193 + arXiv:2511.03193v3 Announce Type: replace-cross +Abstract: Rectified flow (Liu et al., 2022; Liu, 2022; Wu et al., 2023) is a method for defining a transport map between two distributions, and enjoys popularity in machine learning, although theoretical results supporting the validity of these methods are scant. 
The rectified flow can be regarded as an approximation to optimal transport, but in contrast to other transport methods that require optimization over a function space, computing the rectified flow only requires standard statistical tools such as regression or density estimation, which we leverage to develop empirical versions of transport maps. We study some structural properties of the rectified flow, including existence, uniqueness, and regularity, as well as the related statistical properties, such as rates of convergence and central limit theorems, for some selected estimators. To do so, we analyze the bounded and unbounded cases separately as each presents unique challenges. In both cases, we are able to establish convergence at faster rates than those for the usual nonparametric regression and density estimation. + oai:arXiv.org:2511.03193v3 + math.ST + cs.LG + stat.ME + stat.ML + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Miguel Angel Lopez-Ruiz, Emily L. Tucker, Emma M. Arnold, Evgeny Epifanovsky, Ananth Kaushik, Martin Roetteler + http://creativecommons.org/licenses/by/4.0/ + Gonzalo Mena, Arun Kumar Kuchibhotla, Larry Wasserman - Tensor rank and dimension expanders - https://arxiv.org/abs/2511.02670 - arXiv:2511.02670v2 Announce Type: replace-cross -Abstract: We prove a lower bound on the rank of tensors constructed from families of linear maps that `expand' the dimension of every subspace. Such families, called {\em dimension expanders} have been studied for many years with several known explicit constructions. Using these constructions we show that one can construct an explicit $[D]\times [n] \times [n]$-tensor with rank at least $(2 - \epsilon)n$, with $D$ a constant depending on $\epsilon$. Our results extend to border rank over the real or complex numbers. 
- oai:arXiv.org:2511.02670v2 - math.CO - cs.CC - Wed, 10 Dec 2025 00:00:00 -0500 + Function-on-Function Bayesian Optimization + https://arxiv.org/abs/2511.12783 + arXiv:2511.12783v2 Announce Type: replace-cross +Abstract: Bayesian optimization (BO) has been widely used to optimize expensive and gradient-free objective functions across various domains. However, existing BO methods have not addressed the objective where both inputs and outputs are functions, which increasingly arise in complex systems as advanced sensing technologies. To fill this gap, we propose a novel function-on-function Bayesian optimization (FFBO) framework. Specifically, we first introduce a function-on-function Gaussian process (FFGP) model with a separable operator-valued kernel to capture the correlations between function-valued inputs and outputs. Compared to existing Gaussian process models, FFGP is modeled directly in the function space. Based on FFGP, we define a scalar upper confidence bound (UCB) acquisition function using a weighted operator-based scalarization strategy. Then, a scalable functional gradient ascent algorithm (FGA) is developed to efficiently identify the optimal function-valued input. We further analyze the theoretical properties of the proposed method. Extensive experiments on synthetic and real-world data demonstrate the superior performance of FFBO over existing approaches. 
+ oai:arXiv.org:2511.12783v2 + stat.ML + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zeev Dvir + Jingru Huang, Haijie Xu, Manrui Jiang, Chen Zhang - The Rainbow Arborescence Problem on Cycles - https://arxiv.org/abs/2511.04953 - arXiv:2511.04953v2 Announce Type: replace-cross -Abstract: The rainbow arborescence conjecture posits that if the arcs of a directed graph with $n$ vertices are colored by $n-1$ colors such that each color class forms a spanning arborescence, then there is a spanning arborescence that contains exactly one arc of every color. We prove that the conjecture is true if the underlying undirected graph is a cycle. - oai:arXiv.org:2511.04953v2 - math.CO - cs.DM - Wed, 10 Dec 2025 00:00:00 -0500 + Solving a Research Problem in Mathematical Statistics with AI Assistance + https://arxiv.org/abs/2511.18828 + arXiv:2511.18828v2 Announce Type: replace-cross +Abstract: Over the last few months, AI models including large language models have improved greatly. There are now several documented examples where they have helped professional mathematical scientists prove new results, sometimes even helping resolve known open problems. In this short note, we add another example to the list, by documenting how we were able to solve a previously unsolved research problem in robust mathematical statistics with crucial help from GPT-5. Our problem concerns robust density estimation, where the observations are perturbed by Wasserstein-bounded contaminations. In a previous preprint (Chao and Dobriban, 2023, arxiv:2308.01853v2), we have obtained upper and lower bounds on the minimax optimal estimation error; which were, however, not sharp. + Starting in October 2025, making significant use of GPT-5 Pro, we were able to derive the minimax optimal error rate (reported in version 3 of the above arxiv preprint). 
GPT-5 provided crucial help along the way, including by suggesting calculations that we did not think of, and techniques that were not familiar to us, such as the dynamic Benamou-Brenier formulation, for key steps in the analysis. Working with GPT-5 took a few weeks of effort, and we estimate that it could have taken several months to get the same results otherwise. At the same time, there are still areas where working with GPT-5 was challenging: it sometimes provided incorrect references, and glossed over details that sometimes took days of work to fill in. We outline our workflow and steps taken to mitigate issues. Overall, our work can serve as additional documentation for a new age of human-AI collaborative work in mathematical science. + oai:arXiv.org:2511.18828v2 + math.ST + cs.AI + cs.LG + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Krist\'of B\'erczi, Tam\'as Kir\'aly, Yutaro Yamaguchi, Yu Yokoi + Edgar Dobriban - Report on the Scoping Workshop on AI in Science Education Research 2025 - https://arxiv.org/abs/2511.14318 - arXiv:2511.14318v3 Announce Type: replace-cross -Abstract: This report summarizes the outcomes of a two-day international scoping workshop on the role of artificial intelligence (AI) in science education research. As AI rapidly reshapes scientific practice, classroom learning, and research methods, the field faces both new opportunities and significant challenges. The report clarifies key AI concepts to reduce ambiguity and reviews evidence of how AI influences scientific work, teaching practices, and disciplinary learning. It identifies how AI intersects with major areas of science education research, including curriculum development, assessment, epistemic cognition, inclusion, and teacher professional development, highlighting cases where AI can support human reasoning and cases where it may introduce risks to equity or validity. 
The report also examines how AI is transforming methodological approaches across quantitative, qualitative, ethnographic, and design-based traditions, giving rise to hybrid forms of analysis that combine human and computational strengths. To guide responsible integration, a systems-thinking heuristic is introduced that helps researchers consider stakeholder needs, potential risks, and ethical constraints. The report concludes with actionable recommendations for training, infrastructure, and standards, along with guidance for funders, policymakers, professional organizations, and academic departments. The goal is to support principled and methodologically sound use of AI in science education research. - oai:arXiv.org:2511.14318v3 - physics.ed-ph - cs.CY - Wed, 10 Dec 2025 00:00:00 -0500 + Data-Driven Learnability Transition of Measurement-Induced Entanglement + https://arxiv.org/abs/2512.01317 + arXiv:2512.01317v2 Announce Type: replace-cross +Abstract: Measurement-induced entanglement (MIE) captures how local measurements generate long-range quantum correlations and drive dynamical phase transitions in many-body systems. Yet estimating MIE experimentally remains challenging: direct evaluation requires extensive post-selection over measurement outcomes, raising the question of whether MIE is accessible with only polynomial resources. We address this challenge by reframing MIE detection as a data-driven learning problem that assumes no prior knowledge of state preparation. Using measurement records alone, we train a neural network in a self-supervised manner to predict the uncertainty metric for MIE--the gap between upper and lower bounds of the average post-measurement bipartite entanglement. 
Applied to random circuits with one-dimensional all-to-all connectivity and two-dimensional nearest-neighbor coupling, our method reveals a learnability transition with increasing circuit depth: below a threshold, the uncertainty is small and decreases with polynomial measurement data and model parameters, while above it the uncertainty remains large despite increasing resources. We further verify this transition experimentally on current noisy quantum devices, demonstrating its robustness to realistic noise. These results highlight the power of data-driven approaches for learning MIE and delineate the practical limits of its classical learnability. + oai:arXiv.org:2512.01317v2 + quant-ph + cond-mat.dis-nn + cs.AI + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Marcus Kubsch, Marit Kastaun, Peter Wulff, Nicole Graulich, Moriah Ariely, Alexander Bergmann-Gering, Sebastian Gombert, Bor Gregorcic, Hendrik H\"artig, Benedikt Heuckmann, Andrea Horbach, Christina Krist, Gerd Kortemeyer, Ben M\"unch, Samuel Pazicni, Joshua M. Rosenberg, Sascha Schanze, Gena Sbeglia, Vidar Skogvoll, Christophe Speroni, Christoph Thyssen, Lars-Jochen Thoms, Brandon J. Yik, Xiaoming Zhai + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Dongheng Qian, Jing Wang - Unifying Entropy Regularization in Optimal Control: From and Back to Classical Objectives via Iterated Soft Policies and Path Integral Solutions - https://arxiv.org/abs/2512.06109 - arXiv:2512.06109v2 Announce Type: replace-cross -Abstract: This paper develops a unified perspective on several stochastic optimal control formulations through the lens of Kullback-Leibler regularization. We propose a central problem that separates the KL penalties on policies and transitions, assigning them independent weights, thereby generalizing the standard trajectory-level KL-regularization commonly used in probabilistic and KL-regularized control. 
This generalized formulation acts as a generative structure allowing to recover various control problems. These include the classical Stochastic Optimal Control (SOC), Risk-Sensitive Optimal Control (RSOC), and their policy-based KL-regularized counterparts. The latter we refer to as soft-policy SOC and RSOC, facilitating alternative problems with tractable solutions. Beyond serving as regularized variants, we show that these soft-policy formulations majorize the original SOC and RSOC problem. This means that the regularized solution can be iterated to retrieve the original solution. Furthermore, we identify a structurally synchronized case of the risk-seeking soft-policy RSOC formulation, wherein the policy and transition KL-regularization weights coincide. Remarkably, this specific setting gives rise to several powerful properties such as a linear Bellman equation, path integral solution, and, compositionality, thereby extending these computationally favourable properties to a broad class of control problems.
- oai:arXiv.org:2512.06109v2
- math.OC
- cs.LG
- cs.RO
- cs.SY
- eess.SY
- Wed, 10 Dec 2025 00:00:00 -0500
+ Stronger is not better: Better Augmentations in Contrastive Learning for Medical Image Segmentation
+ https://arxiv.org/abs/2512.05992
+ arXiv:2512.05992v2 Announce Type: replace-cross
+Abstract: Self-supervised contrastive learning is among the recent representation learning methods that have shown performance gains in several downstream tasks including semantic segmentation. This paper evaluates strong data augmentation, one of the most important components for self-supervised contrastive learning's improved performance. Strong data augmentation involves applying the composition of multiple augmentation techniques on images. Surprisingly, we find that the existing data augmentations do not always improve performance for semantic segmentation of medical images. We experiment with other augmentations that provide improved performance. 
+ oai:arXiv.org:2512.05992v2 + eess.IV + cs.CV + Thu, 11 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Azeez Idris, Abdurahman Ali Mohammed, Samuel Fanijo + + + Adversarial Barrier in Uniform Class Separation + https://arxiv.org/abs/2512.08149 + arXiv:2512.08149v2 Announce Type: replace-cross +Abstract: We identify a strong structural obstruction to Uniform Separation in constructive arithmetic. The mechanism is independent of semantic content; it emerges whenever two distinct evaluator predicates are sustained in parallel and inference remains uniformly representable in an extension of HA. Under these conditions, any putative Uniform Class Separation principle becomes a distinguished instance of a fixed point construction. The resulting limitation is stricter in scope than classical separation barriers (Baker; Rudich; Aaronson et al.) insofar as it constrains the logical form of uniform separation within HA, rather than limiting particular relativizing, naturalizing, or algebrizing techniques. + oai:arXiv.org:2512.08149v2 + math.LO + cs.CC + cs.LO + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Ajinkya Bhole, Mohammad Mahmoudi Filabadi, Guillaume Crevecoeur, Tom Lefebvre + Milan Rosko