diff --git "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml" "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
--- "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
+++ "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
@@ -7,9922 +7,12 @@
http://www.rssboard.org/rss-specification
en-us
- Thu, 06 Nov 2025 05:00:14 +0000
+ Sat, 08 Nov 2025 05:00:00 +0000rss-help@arxiv.org
- Thu, 06 Nov 2025 00:00:00 -0500
+ Sat, 08 Nov 2025 00:00:00 -0500
- Saturday
- Sunday
+ Saturday
-
- Quantum-Classical Hybrid Encryption Framework Based on Simulated BB84 and AES-256: Design and Experimental Evaluation
- https://arxiv.org/abs/2511.02836
- arXiv:2511.02836v1 Announce Type: new
-Abstract: This paper presents the design, implementation, and evaluation of a hybrid encryption framework that combines quantum key distribution, specifically a simulated BB84 protocol, with AES-256 encryption. The system enables secure file encryption by leveraging quantum principles for key generation and classical cryptography for data protection. It introduces integrity validation mechanisms, including HMAC verification and optional post-quantum digital signatures, ensuring robustness even in the presence of quantum-capable adversaries. The entire architecture is implemented in Python, with modular components simulating quantum key exchange, encryption, and secure packaging. Experimental results include visual testing of various attack scenarios, such as key tampering, HMAC failure, and file corruption, demonstrating the effectiveness and resilience of the approach. The proposed solution serves as a practical foundation for quantum-aware cybersecurity systems.
- oai:arXiv.org:2511.02836v1
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Hector E Mozo
-
-
- An extended reality-based framework for user risk training in urban built environment
- https://arxiv.org/abs/2511.02837
- arXiv:2511.02837v1 Announce Type: new
-Abstract: In the context of increasing urban risks, particularly from climate change-induced flooding, this paper presents an extended Reality (XR)-based framework to improve user risk training within urban built environments. The framework is designed to improve risk awareness and preparedness among various stakeholders, including citizens, local authorities, and emergency responders. Using immersive XR technologies, the training experience simulates real-world emergency scenarios, contributing to active participation and a deeper understanding of potential hazards and especially for floods. The framework highlights the importance of stakeholder participation in its development, ensuring that training modules are customized to address the specific needs of different user groups. The iterative approach of the framework supports ongoing refinement through user feedback and performance data, thus improving the overall effectiveness of risk training initiatives. This work outlines the methodological phases involved in the framework's implementation, including i) user flow mapping, ii) scenario selection, and iii) performance evaluation, with a focus on the pilot application in Senigallia, Italy. The findings underscore the potential of XR technologies to transform urban risk training, promoting a culture of preparedness and resilience against urban hazards.
- oai:arXiv.org:2511.02837v1
- cs.HC
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sotirios Konstantakos, Sotirios Asparagkathos, Moatasim Mahmoud, Stamatia Rizou, Enrico Quagliarini, Gabriele Bernardini
-
-
- How ChatGPT and Gemini View the Elements of Communication Competence of Large Language Models: A Pilot Study
- https://arxiv.org/abs/2511.02838
- arXiv:2511.02838v1 Announce Type: new
-Abstract: A concise overview is provided of selected theoretical models of communication competence in the fields of linguistics, interpersonal communication, second language use, and human-robot interaction. The following practical research consisted of two case studies with the goals of investigating how advanced AI tools like ChatGPT and Gemini interpret elements of two communication competence theories in the context of Large Language Model (LLM) interactions with users. The focus was on these theoretical approaches: (1) an integrated linguistic-interpersonal model and (2) an interpersonal "human-humanoid" interaction model. The conclusion is that both approaches are suitable for a better understanding of LLM-user interaction.
- oai:arXiv.org:2511.02838v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Goran Bubas
-
-
- Evaluating Generative AI as an Educational Tool for Radiology Resident Report Drafting
- https://arxiv.org/abs/2511.02839
- arXiv:2511.02839v1 Announce Type: new
-Abstract: Objective: Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills. Increasing clinical workload often limits attendings' ability to provide guidance. This study evaluates a HIPAA-compliant GPT-4o system that delivers automated feedback on breast imaging reports drafted by residents in real clinical settings.
- Methods: We analyzed 5,000 resident-attending report pairs from routine practice at a multi-site U.S. health system. GPT-4o was prompted with clinical instructions to identify common errors and provide feedback. A reader study using 100 report pairs was conducted. Four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT and readers was assessed using percent match. Inter-reader reliability was measured with Krippendorff's alpha. Educational value was measured as the proportion of cases rated helpful.
- Results: Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) final assessment inconsistent with findings. GPT-4o showed strong agreement with attending consensus: 90.5%, 78.3%, and 90.4% across error types. Inter-reader reliability showed moderate variability (α = 0.767, 0.595, 0.567), and replacing a human reader with GPT-4o did not significantly affect agreement (Δ = -0.004 to 0.002). GPT's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0%.
- Discussion: ChatGPT-4o can reliably identify key educational errors. It may serve as a scalable tool to support radiology education.
- oai:arXiv.org:2511.02839v1
- cs.HC
- cs.AI
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Antonio Verdone, Aidan Cardall, Fardeen Siddiqui, Motaz Nashawaty, Danielle Rigau, Youngjoon Kwon, Mira Yousef, Shalin Patel, Alex Kieturakis, Eric Kim, Laura Heacock, Beatriu Reig, Yiqiu Shen
-
-
- Interview Survey on Attractivenesses of Place Re-creation Toward Developing a Virtual Twin Design Theory
- https://arxiv.org/abs/2511.02840
- arXiv:2511.02840v1 Announce Type: new
-Abstract: It is often seen that real-world locations are re-created using models, metaverse technology, or computer graphics. Although the surface-level purposes of these re-creations vary, the author hypothesizes that there exists an underlying common attractiveness that remains unclear. This research aims to clarify the attractiveness and its structures of place re-creations through an interview study with qualitative analysis. The interviews used examples of physical re-creations, such as the model in Komazawa University's Zen Culture History Museum and some dioramas of Tokyo, as well as computer-generated re-creations of Shibuya using platforms like Minecraft and Project Plateau's 3D city model. Using insights gained from this investigation, this study seeks to establish a theoretical framework for designing virtual twins.
- oai:arXiv.org:2511.02840v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Saizo Aoyagi
-
-
- AI Agents with Decentralized Identifiers and Verifiable Credentials
- https://arxiv.org/abs/2511.02841
- arXiv:2511.02841v1 Announce Type: new
-Abstract: LLM-based AI agents still lack the technical means to automatically build nuanced and differentiated trust in other agents at the beginning of an agent-to-agent dialogue. But autonomous and interoperable trust establishment becomes a fundamental prerequisite once agents start to operate beyond isolated environments and engage in dialogues across individual or organizational boundaries. A promising way to fill this gap in Agentic AI is to equip agents with long-lived digital identities and introduce tamper-proof and flexible identity-bound attestations of agents, provisioned by commonly trusted third parties and designed for cross-domain verifiability. This article presents a conceptual framework and a prototypical multi-agent system, where each agent is endowed with a self-sovereign digital identity. It combines a unique and ledger-anchored Decentralized Identifier (DID) of an agent with a set of third-party issued Verifiable Credentials (VCs). This enables agents at the start of a dialogue to prove ownership of their self-controlled DIDs for authentication purposes and to establish various cross-domain trust relationships through the spontaneous exchange of their self-hosted DID-bound VCs. A comprehensive evaluation of the prototypical implementation demonstrates technical feasibility but also reveals limitations once an agent's LLM is in sole charge of controlling the respective security procedures.
- oai:arXiv.org:2511.02841v1
- cs.CR
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sandro Rodriguez Garzon, Awid Vaziry, Enis Mert Kuzu, Dennis Enrique Gehrmann, Buse Varkan, Alexander Gaballa, Axel Küpper
-
-
- Digital Transformation Chatbot (DTchatbot): Integrating Large Language Model-based Chatbot in Acquiring Digital Transformation Needs
- https://arxiv.org/abs/2511.02842
- arXiv:2511.02842v1 Announce Type: new
-Abstract: Many organisations pursue digital transformation to enhance operational efficiency, reduce manual efforts, and optimise processes by automation and digital tools. To achieve this, a comprehensive understanding of their unique needs is required. However, traditional methods, such as expert interviews, while effective, face several challenges, including scheduling conflicts, resource constraints, inconsistency, etc. To tackle these issues, we investigate the use of a Large Language Model (LLM)-powered chatbot to acquire organisations' digital transformation needs. Specifically, the chatbot integrates workflow-based instruction with LLM's planning and reasoning capabilities, enabling it to function as a virtual expert and conduct interviews. We detail the chatbot's features and its implementation. Our preliminary evaluation indicates that the chatbot performs as designed, effectively following predefined workflows and supporting user interactions with areas for improvement. We conclude by discussing the implications of employing chatbots to elicit user information, emphasizing their potential and limitations.
- oai:arXiv.org:2511.02842v1
- cs.HC
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiawei Zheng, Gokcen Yilmaz, Ji Han, Saeema Ahmed-Kristensen
-
-
- Teaching Quantum Computing through Lab-Integrated Learning: Bridging Conceptual and Computational Understanding
- https://arxiv.org/abs/2511.02844
- arXiv:2511.02844v1 Announce Type: new
-Abstract: Quantum computing education requires students to move beyond classical programming intuitions related to state, determinism, and debugging, and to develop reasoning skills grounded in probability, measurement, and interference. This paper reports on the design and delivery of a combined undergraduate and graduate course at Louisiana State University that employed a lab-integrated learning model to support conceptual change and progressive understanding. The course paired lectures with weekly programming labs that served as environments for experimentation and reflection. These labs enabled students to confront misconceptions and refine their mental models through direct observation and evidence-based reasoning. Instruction began with Quantum Without Linear Algebra (QWLA), which introduced core concepts such as superposition and entanglement through intuitive, dictionary representations. The course then transitioned to IBM Qiskit, which provided a professional framework for circuit design, noise simulation, and algorithm implementation. Analysis of student work and feedback indicated that hands-on experimentation improved confidence, conceptual clarity, and fluency across representations. At the same time, it revealed persistent challenges in debugging, reasoning about measurement, and understanding probabilistic outcomes. This paper presents the course structure, instructional strategies, and lessons learned, and argues that lab-integrated learning offers an effective and accessible approach to teaching quantum computing in computer science education.
- oai:arXiv.org:2511.02844v1
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Umar Farooq, Krishna Upadhyay
-
-
- SELF-REDRAFT: Eliciting Intrinsic Exploration-Exploitation Balance in Test-Time Scaling for Code Generation
- https://arxiv.org/abs/2511.02854
- arXiv:2511.02854v1 Announce Type: new
-Abstract: Test-time scaling without interpreter feedback is essential for real-world code generation scenarios where test cases are not readily available. While existing paradigms often rely on either greedy exploitation (i.e., iterative refinement) or stochastic exploration (i.e., relying on sample-based voting or reranking mechanisms), the balance between these two dimensions remains underexplored. To investigate the LLM's intrinsic ability to balance exploitation and exploration, we introduce SELF-REDRAFT, a framework built upon Self-Refine that encourages the model to propose new drafts for solutions that are fundamentally flawed. Our results show that SELF-REDRAFT consistently achieves better performance than Self-Refine when converged under the same maximum number of iterations. Still, we observe that significant room for improvement remains, largely due to two core aspects of current self-redraft capabilities: constrained capacity for generating instructive feedback and fragile discriminative judgment. We also find that balancing strategies vary notably across different LLMs, reflecting distinct, model-specific behaviors. Overall, our study establishes a baseline for intrinsic exploration-exploitation balancing in test-time scaling and identifies feedback and discrimination as key areas with potential for future advances.
- oai:arXiv.org:2511.02854v1
- cs.SE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Yixiang Chen, Tianshi Zheng, Shijue Huang, Zhitao He, Yi R. Fung
-
-
- Workday's Approach to Secure and Compliant Cloud ERP Systems
- https://arxiv.org/abs/2511.02856
- arXiv:2511.02856v1 Announce Type: new
-Abstract: Workday's compliance with global standards -- such as GDPR, SOC 2, HIPAA, ISO 27001, and FedRAMP -- demonstrates its ability to protect critical financial, healthcare, and government data. Automated compliance attributes like audit trails, behavioral analytics, and continuous reporting improve automation of the process and cut down on the manual effort to audit. A comparative review demonstrates enhanced risk management, operational flexibility, and breach mitigation. The paper also explores emerging trends, including the integration of AI, machine learning, and blockchain technologies to enhance next-generation threat detection and data integrity. The findings position Workday as a reliable, compliant, and future-ready ERP solution, setting a new benchmark for secure enterprise cloud management.
- oai:arXiv.org:2511.02856v1
- cs.CE
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Monu Sharma
-
-
- The Evolution of Agile and Hybrid Project Management Methodologies: A Systematic Literature Review
- https://arxiv.org/abs/2511.02859
- arXiv:2511.02859v1 Announce Type: new
-Abstract: The rapid evolution of IT projects has driven the transformation of project management methodologies, from traditional waterfall approaches to agile frameworks and, more recently, hybrid models. This systematic literature review investigates the evolution of agile methodologies into hybrid frameworks, analysing their implementation challenges and success factors. We identify key trends through PRISMA-guided analysis of peer-reviewed studies from the last 8 years. Hybrid methodologies emerge from agile limitations in large-scale and regulated environments, combining iterative flexibility with structured governance. Agile faces several implementation challenges that motivate hybrid methods, and success hinges on leadership support, tailored process integration, and continuous improvement mechanisms. The study explores the need for contextual adaptation over rigid frameworks, offering practical insights for organisations navigating hybrid transitions.
- oai:arXiv.org:2511.02859v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Bianca Leech, Ridewaan Hanslo
-
-
- Mathematical exploration and discovery at scale
- https://arxiv.org/abs/2511.02864
- arXiv:2511.02864v1 Announce Type: new
-Abstract: AlphaEvolve is a generic evolutionary coding agent that combines the generative capabilities of LLMs with automated evaluation in an iterative evolutionary framework that proposes, tests, and refines algorithmic solutions to challenging scientific and practical problems. In this paper we showcase AlphaEvolve as a tool for autonomously discovering novel mathematical constructions and advancing our understanding of long-standing open problems.
- To demonstrate its breadth, we considered a list of 67 problems spanning mathematical analysis, combinatorics, geometry, and number theory. The system rediscovered the best known solutions in most of the cases and discovered improved solutions in several. In some instances, AlphaEvolve is also able to generalize results for a finite number of input values into a formula valid for all input values. Furthermore, we are able to combine this methodology with Deep Think and AlphaProof in a broader framework where the additional proof-assistants and reasoning systems provide automated proof generation and further mathematical insights.
- These results demonstrate that large language model-guided evolutionary search can autonomously discover mathematical constructions that complement human intuition, at times matching or even improving the best known results, highlighting the potential for significant new ways of interaction between mathematicians and AI systems. We present AlphaEvolve as a powerful new tool for mathematical discovery, capable of exploring vast search spaces to solve complex optimization problems at scale, often with significantly reduced requirements on preparation and computation time.
- oai:arXiv.org:2511.02864v1
- cs.NE
- cs.AI
- math.CA
- math.CO
- math.MG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Bogdan Georgiev, Javier Gómez-Serrano, Terence Tao, Adam Zsolt Wagner
-
-
- LM-Fix: Lightweight Bit-Flip Detection and Rapid Recovery Framework for Language Models
- https://arxiv.org/abs/2511.02866
- arXiv:2511.02866v1 Announce Type: new
-Abstract: This paper presents LM-Fix, a lightweight detection and rapid recovery framework for faults in large language models (LLMs). Existing integrity approaches are often heavy or slow for modern LLMs. LM-Fix runs a short test-vector pass and uses hash-guided checks to detect bit-flip faults, then repairs them locally without a full reload. Across multiple models, it detects over 94% of single-bit flips at TVL=200 and nearly 100% of multi-bit flips with approximately 1% to 7.7% runtime overhead; recovery is more than 100x faster than reloading. These results show a practical, low-overhead solution to keep LLMs reliable in production.
- oai:arXiv.org:2511.02866v1
- cs.SE
- cs.AI
- cs.AR
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ahmad Tahmasivand, Noureldin Zahran, Saba Al-Sayouri, Mohammed Fouda, Khaled N. Khasawneh
-
-
- Proof-of-Spiking-Neurons (PoSN): Neuromorphic Consensus for Next-Generation Blockchains
- https://arxiv.org/abs/2511.02868
- arXiv:2511.02868v1 Announce Type: new
-Abstract: Blockchain systems face persistent challenges of scalability, latency, and energy inefficiency. Existing consensus protocols such as Proof-of-Work (PoW) and Proof-of-Stake (PoS) either consume excessive resources or risk centralization. This paper proposes Proof-of-Spiking-Neurons (PoSN), a neuromorphic consensus protocol inspired by spiking neural networks. PoSN encodes transactions as spike trains, elects leaders through competitive firing dynamics, and finalizes blocks via neural synchronization, enabling parallel and event-driven consensus with minimal energy overhead. A hybrid system architecture is implemented on neuromorphic platforms, supported by simulation frameworks such as Nengo and PyNN. Experimental results show significant gains in energy efficiency, throughput, and convergence compared to PoB and PoR. PoSN establishes a foundation for sustainable, adaptive blockchains suitable for IoT, edge, and large-scale distributed systems.
- oai:arXiv.org:2511.02868v1
- cs.CR
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- IEEE conference COMCOMAP 2025
- M. Z. Haider, M. U Ghouri, Tayyaba Noreen, M. Salman
-
-
- Analysis of AdvFusion: Adapter-based Multilingual Learning for Code Large Language Models
- https://arxiv.org/abs/2511.02869
- arXiv:2511.02869v1 Announce Type: new
-Abstract: Programming languages can benefit from one another by utilizing a language model for software engineering tasks. Full fine-tuning and Parameter Efficient Fine-Tuning (PEFT) of Code Language Models (Code-LMs) has been explored for multilingual knowledge transfer. AdapterFusion is a PEFT architecture that aims to enhance task performance by leveraging information from multiple programming languages, but primarily focuses on the target programming language.
- In our previous work, we proposed AdvFusion, a novel PEFT-based approach that effectively learns from other programming languages before adapting to the target task. Though previous experiments showed that AdvFusion outperformed AdapterFusion and LoRA, it was applied on pre-trained Code-LMs and was limited to only two tasks, code summarization and method name prediction. In this study, we expanded our work and investigated AdvFusion on Code Large Language Models (Code-LLMs), considering three new tasks: code generation, code translation, and commit message generation. We observed that different Code-LLMs/tasks exhibit different characteristics. In code generation, AdvFusion outperformed AdapterFusion but not other PEFT methods (LoRA, Compacter, and TaskAdapter). In commit message generation, AdapterFusion performed better than AdvFusion, and contrary to code generation, we found that the other PEFT methods do not have better performance. In code translation, AdvFusion performed worse than AdapterFusion overall, with the performance gap marginally widening as the model size increases. However, consistent with code generation, other PEFT methods showed better performance.
- oai:arXiv.org:2511.02869v1
- cs.SE
- cs.AI
- cs.PL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Amirreza Esmaeili, Fahd Seddik, Yongyi Ji, Fatemeh Fard, Fuxiang Chen
-
-
- FATE: A Formal Benchmark Series for Frontier Algebra of Multiple Difficulty Levels
- https://arxiv.org/abs/2511.02872
- arXiv:2511.02872v1 Announce Type: new
-Abstract: Recent advances in large language models (LLMs) have demonstrated impressive capabilities in formal theorem proving, particularly on contest-based mathematical benchmarks like the IMO. However, these contests do not reflect the depth, breadth, and abstraction of modern mathematical research. To bridge this gap, we introduce FATE (Formal Algebra Theorem Evaluation), a new benchmark series in formal algebra designed to chart a course toward advanced mathematical reasoning. We present two new components, FATE-H and FATE-X, each with 100 problems in abstract and commutative algebra. The FATE series spans a difficulty spectrum from undergraduate exercises to problems exceeding PhD qualifying exams. Notably, FATE-X is the first formal benchmark to surpass both PhD-level exam difficulty and the coverage of the Mathlib library. Our evaluations of state-of-the-art LLM provers on this new benchmark reveal a stark performance gap compared to contest math: the best model achieves only 3% (pass@64) accuracy on FATE-H and 0% on FATE-X. Our two-stage evaluation reveals that models' natural-language reasoning is notably more accurate than their ability to formalize this reasoning. We systematically classify the common errors that arise during this formalization process. Furthermore, a comparative study shows that a specialized prover can exhibit less effective reflection than general-purpose models, reducing its accuracy at the natural-language stage. We believe FATE provides a robust and challenging benchmark that establishes essential checkpoints on the path toward research-level formal mathematical reasoning.
- oai:arXiv.org:2511.02872v1
- cs.LG
- cs.AI
- cs.FL
- cs.LO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jiedong Jiang, Wanyi He, Yuefeng Wang, Guoxiong Gao, Yongle Hu, Jingting Wang, Nailing Guan, Peihao Wu, Chunbo Dai, Liang Xiao, Bin Dong
-
-
- An Analysis of Early-Stage Functional Safety Analysis Methods and Their Integration into Model-Based Systems Engineering
- https://arxiv.org/abs/2511.02874
- arXiv:2511.02874v1 Announce Type: new
-Abstract: As systems become increasingly complex, conducting effective safety analysis in the earlier phases of a system's lifecycle is essential to identify and mitigate risks before they escalate. To that end, this paper investigates the capabilities of key safety analysis techniques, namely: Failure Mode and Effects Analysis (FMEA), Functional Hazard Analysis (FHA), and Functional Failure Identification and Propagation (FFIP), along with the current state of the literature in terms of their integration into Model-Based Systems Engineering (MBSE). A two-phase approach is adopted. The first phase is focused on contrasting FMEA, FHA, and FFIP techniques, examining their procedures, along with a documentation of their relative strengths and limitations. Our analysis highlights FFIP's capability in identifying emergent system behaviors, second-order effects, and fault propagation; thus, suggesting it is better suited for the safety needs of modern interconnected systems. Second, we review the existing research on the efforts to integrate each of these methods into MBSE. We find that MBSE integration efforts primarily focus on FMEA, and integration of FHA and FFIP is nascent. Additionally, FMEA-MBSE integration efforts could be organized into four categories: model-to-model transformation, use of external customized algorithms, built-in MBSE packages, and manual use of standard MBSE diagrams. While our findings indicate a variety of MBSE integration approaches, there is no universally established framework or standard. This leaves room for an integration approach that could support the ongoing Digital Engineering transformation efforts by enabling more synergistic lifecycle safety management methods and tools.
- oai:arXiv.org:2511.02874v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jannatul Shefa, Taylan G. Topcu
-
-
- Academics and Generative AI: Empirical and Epistemic Indicators of Policy-Practice Voids
- https://arxiv.org/abs/2511.02875
- arXiv:2511.02875v1 Announce Type: new
-Abstract: As generative AI diffuses through academia, policy-practice divergence becomes consequential, creating demand for auditable indicators of alignment. This study prototypes a ten-item, indirect-elicitation instrument embedded in a structured interpretive framework to surface voids between institutional rules and practitioner AI use. The framework extracts empirical and epistemic signals from academics, yielding three filtered indicators of such voids: (1) AI-integrated assessment capacity (proxy) - within a three-signal screen (AI skill, perceived teaching benefit, detection confidence), the share who would fully allow AI in exams; (2) sector-level necessity (proxy) - among high output control users who still credit AI with high contribution, the proportion who judge AI capable of challenging established disciplines; and (3) ontological stance - among respondents who judge AI different in kind from prior tools, report practice change, and pass a metacognition gate, the split between material and immaterial views as an ontological map aligning procurement claims with evidence classes.
- oai:arXiv.org:2511.02875v1
- cs.CY
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- R. Yamamoto Ravenor
-
-
- CS Educator challenges and their solutions: A systematic mapping study
- https://arxiv.org/abs/2511.02876
- arXiv:2511.02876v1 Announce Type: new
-Abstract: Computer Science (CS) education is expanding rapidly, but educators continue to face persistent challenges in teaching and learning environments. Despite growing interest, limited systematic work exists to categorize and synthesize the specific challenges faced by CS educators and the remedies adopted in response. This is problematic because it remains unclear which areas have been thoroughly addressed and which still lack sufficient scholarly attention. In this study, we conducted a structured literature review of peer-reviewed research papers published over the last five years, focusing on challenges and remedies across ten categorized themes, including pedagogical, emotional, technological, and institutional dimensions. Our analysis revealed recurring issues in areas such as assessment practices, teacher training, classroom management, and emotional well-being, along with various strategies such as professional development programs and policy interventions adopted to mitigate them, while also revealing several areas that have received insufficient attention. This review offers a consolidated understanding of the CS education landscape, providing valuable insights for researchers, curriculum designers, and policymakers aiming to improve teaching effectiveness and educator support.
- oai:arXiv.org:2511.02876v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Anjali Chouhan, Sruti Srinivasa Ragavan, Amey Karkare
-
-
- A Novel Reservoir Computing Framework for Chaotic Time Series Prediction Using Time Delay Embedding and Random Fourier Features
- https://arxiv.org/abs/2511.02877
- arXiv:2511.02877v1 Announce Type: new
-Abstract: Forecasting chaotic time series requires models that can capture the intrinsic geometry of the underlying attractor while remaining computationally efficient. We introduce a novel reservoir computing (RC) framework that integrates time-delay embedding with Random Fourier Feature (RFF) mappings to construct a dynamical reservoir without the need for traditional recurrent architectures. Unlike standard RC, which relies on high-dimensional recurrent connectivity, the proposed RFF-RC explicitly approximates nonlinear kernel transformations that uncover latent dynamical relations in the reconstructed phase space. This hybrid formulation offers two key advantages: (i) it provides a principled way to approximate complex nonlinear interactions among delayed coordinates, thereby enriching the effective dynamical representation of the reservoir, and (ii) it reduces reliance on manual reservoir hyperparameters such as spectral radius and leaking rate. We evaluate the framework on canonical chaotic systems-the Mackey-Glass equation, the Lorenz system, and the Kuramoto-Sivashinsky equation. This novel formulation demonstrates that RFF-RC not only achieves superior prediction accuracy but also yields robust attractor reconstructions and long-horizon forecasts. These results show that the combination of delay embedding and RFF-based reservoirs reveals new dynamical structure by embedding the system in an enriched feature space, providing a computationally efficient and interpretable approach to modeling chaotic dynamics.
- oai:arXiv.org:2511.02877v1
- cs.NE
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- S. K. Laha
-
-
- Stochastic Deep Graph Clustering for Practical Group Formation
- https://arxiv.org/abs/2511.02879
- arXiv:2511.02879v1 Announce Type: new
-Abstract: While prior work on group recommender systems (GRSs) has primarily focused on improving recommendation accuracy, most approaches assume static or predefined groups, making them unsuitable for dynamic, real-world scenarios. We reframe group formation as a core challenge in GRSs and propose DeepForm (Stochastic Deep Graph Clustering for Practical Group Formation), a framework designed to meet three key operational requirements: (1) the incorporation of high-order user information, (2) real-time group formation, and (3) dynamic adjustment of the number of groups. DeepForm employs a lightweight GCN architecture that effectively captures high-order structural signals. Stochastic cluster learning enables adaptive group reconfiguration without retraining, while contrastive learning refines groups under dynamic conditions. Experiments on multiple datasets demonstrate that DeepForm achieves superior group formation quality, efficiency, and recommendation accuracy compared with various baselines.
- oai:arXiv.org:2511.02879v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Junhyung Park, Hyungjin Kim, Seokho Ahn, Young-Duk Seo
-
-
- AgentSLA : Towards a Service Level Agreement for AI Agents
- https://arxiv.org/abs/2511.02885
- arXiv:2511.02885v1 Announce Type: new
-Abstract: AI components are increasingly becoming a key element of all types of software systems to enhance their functionality. These AI components are often implemented as AI Agents, offering more autonomy than a plain integration of Large Language Models (LLMs), moving from a Model-as-a-Service paradigm to an Agent-as-a-Service one and bringing new challenges to the development of smart software systems. Indeed, while support for the design, implementation, and deployment of those agents exists, the specification of Quality of Service (QoS) aspects and the definition of Service Level Agreements (SLAs) for those agents, both important to ensure the quality of the resulting systems, remain an open challenge. This is partly due to the difficulty of clearly defining quality in the context of AI components, resulting in a lack of consensus on how best to approach Quality Assurance (QA) for these types of systems. To address this challenge, this paper proposes both a quality model for AI agents based on the ISO/IEC 25010 standard and a domain-specific language to support the definition of SLAs for the services provided by these AI agents.
- oai:arXiv.org:2511.02885v1
- cs.SE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Gwendal Jouneaux, Jordi Cabot
-
-
- Test-time Adaptation of Tiny Recursive Models
- https://arxiv.org/abs/2511.02886
- arXiv:2511.02886v1 Announce Type: new
-Abstract: Prior to the close of the 2025 ARC Prize competition, the leading open source approach - known as TRM, or Tiny Recursive Models - involved training a 7M parameter recursive neural network on augmented variants of ARC tasks. That approach scored approximately 7.8% on the public ARC AGI II evaluation set, but required a level of compute far in excess of what is allowed during the competition. This paper shows that, by starting from a tiny recursive model that has been pre-trained on public ARC tasks, one can efficiently fine-tune on competition tasks within the allowed compute limits. Specifically, a model was pre-trained on 1,280 public tasks for 700k+ optimizer steps over 48 hours on 4xH100 SXM GPUs to obtain a ~10% score on the public evaluation set. That model was then post-trained in just 12,500 gradient steps during the competition to reach a score of 6.67% on semi-private evaluation tasks. Notably, such post-training performance is achieved by full fine-tuning of the tiny model, not LoRA fine-tuning or fine-tuning of task embeddings alone.
- oai:arXiv.org:2511.02886v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ronan Killian McGovern
-
-
- Predicting Weekly Fishing Concentration Zones through Deep Learning Integration of Heterogeneous Environmental Spatial Datasets
- https://arxiv.org/abs/2511.02887
- arXiv:2511.02887v1 Announce Type: new
-Abstract: The North Indian Ocean, including the Arabian Sea and the Bay of Bengal, represents a vital source of livelihood for coastal communities, yet fishermen often face uncertainty in locating productive fishing grounds. To address this challenge, we present an AI-assisted framework for predicting Potential Fishing Zones (PFZs) using oceanographic parameters such as sea surface temperature and chlorophyll concentration. The approach is designed to enhance the accuracy of PFZ identification and provide region-specific insights for sustainable fishing practices. Preliminary results indicate that the framework can support fishermen by reducing search time, lowering fuel consumption, and promoting efficient resource utilization.
- oai:arXiv.org:2511.02887v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Chaitanya Rele, Aditya Rathod, Kaustubh Natu, Saurabh Kulkarni, Ajay Koli, Swapnali Makdey
-
-
- A Survey of Driver Distraction and Inattention in Popular Commercial Software-Defined Vehicles
- https://arxiv.org/abs/2511.02891
- arXiv:2511.02891v1 Announce Type: new
-Abstract: As the automotive industry embraces software-defined vehicles (SDVs), the role of user interface (UI) design in ensuring driver safety has become increasingly significant. In crashes related to distracted driving, over 90% did not involve cellphone use but were related to UI controls. However, many existing SDV UI implementations do not consider Driver Distraction and Inattention (DDI), which is reflected in many popular commercial vehicles. This paper investigates the impact of UI designs on driver distraction and inattention within the context of SDVs. Through a survey of popular commercial vehicles, we identify UI features that potentially increase cognitive load and evaluate design strategies to mitigate these risks. This survey highlights the need for UI designs that balance advanced software functionalities with driver-cognitive ergonomics. Our findings aim to provide valuable guidance to researchers and OEMs in the field of automotive UI, contributing to the broader discussion on enhancing vehicular safety in the software-centric automotive era.
- oai:arXiv.org:2511.02891v1
- cs.HC
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Lingyu Zhao, Yuankai He
-
-
- Adaptive and Robust Data Poisoning Detection and Sanitization in Wearable IoT Systems using Large Language Models
- https://arxiv.org/abs/2511.02894
- arXiv:2511.02894v1 Announce Type: new
-Abstract: The widespread integration of wearable sensing devices in Internet of Things (IoT) ecosystems, particularly in healthcare, smart homes, and industrial applications, has required robust human activity recognition (HAR) techniques to improve functionality and user experience. Although machine learning models have advanced HAR, they are increasingly susceptible to data poisoning attacks that compromise the data integrity and reliability of these systems. Conventional approaches to defending against such attacks often require extensive task-specific training with large, labeled datasets, which limits adaptability in dynamic IoT environments. This work proposes a novel framework that uses large language models (LLMs) to perform poisoning detection and sanitization in HAR systems, utilizing zero-shot, one-shot, and few-shot learning paradigms. Our approach incorporates role-play prompting, whereby the LLM assumes the role of an expert to contextualize and evaluate sensor anomalies, and think-step-by-step reasoning, guiding the LLM to infer poisoning indicators in the raw sensor data and plausible clean alternatives. These strategies minimize reliance on the curation of extensive datasets and enable robust, adaptable defense mechanisms in real-time. We perform an extensive evaluation of the framework, quantifying detection accuracy, sanitization quality, latency, and communication cost, thus demonstrating the practicality and effectiveness of LLMs in improving the security and reliability of wearable IoT systems.
- oai:arXiv.org:2511.02894v1
- cs.LG
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- W. K. M Mithsara, Ning Yang, Ahmed Imteaj, Hussein Zangoti, Abdur R. Shahid
-
-
- A Criminology of Machines
- https://arxiv.org/abs/2511.02895
- arXiv:2511.02895v1 Announce Type: new
-Abstract: While the possibility of reaching human-like Artificial Intelligence (AI) remains controversial, the likelihood that the future will be characterized by a society with a growing presence of autonomous machines is high. Autonomous AI agents are already deployed and active across several industries and digital environments and alongside human-human and human-machine interactions, machine-machine interactions are poised to become increasingly prevalent. Given these developments, I argue that criminology must begin to address the implications of this transition for crime and social control. Drawing on Actor-Network Theory and Woolgar's decades-old call for a sociology of machines -- frameworks that acquire renewed relevance with the rise of generative AI agents -- I contend that criminologists should move beyond conceiving AI solely as a tool. Instead, AI agents should be recognized as entities with agency encompassing computational, social, and legal dimensions. Building on the literature on AI safety, I thus examine the risks associated with the rise of multi-agent AI systems, proposing a dual taxonomy to characterize the channels through which interactions among AI agents may generate deviant, unlawful, or criminal outcomes. I then advance and discuss four key questions that warrant theoretical and empirical attention: (1) Can we assume that machines will simply mimic humans? (2) Will crime theories developed for humans suffice to explain deviant or criminal behaviors emerging from interactions between autonomous AI agents? (3) What types of criminal behaviors will be affected first? (4) How might this unprecedented societal shift impact policing? These questions underscore the urgent need for criminologists to theoretically and empirically engage with the implications of multi-agent AI systems for the study of crime and play a more active role in debates on AI safety and governance.
- oai:arXiv.org:2511.02895v1
- cs.CY
- cs.AI
- cs.HC
- physics.soc-ph
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Gian Maria Campedelli
-
-
- Performance Evaluation of Bitstring Representations in a Linear Genetic Programming Framework
- https://arxiv.org/abs/2511.02897
- arXiv:2511.02897v1 Announce Type: new
-Abstract: Different bitstring representations can yield varying computational performance. This work compares three bitstring implementations in C++: std::bitset, boost::dynamic_bitset, and a custom direct implementation. Their performance is benchmarked in the context of concatenation within a Linear Genetic Programming system. Benchmarks were conducted on three platforms (macOS, Linux, and Windows MSYS2) to assess platform specific performance variations. The results show that the custom direct implementation delivers the fastest performance on Linux and Windows, while std::bitset performs best on macOS. Although consistently slower, boost::dynamic_bitset remains a viable and flexible option. These findings highlight the influence of compiler optimisations and system architecture on performance, providing practical guidance for selecting the optimal method based on platform and application requirements.
- oai:arXiv.org:2511.02897v1
- cs.NE
- cs.AI
- cs.PF
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Clyde Meli, Vitezslav Nezval, Zuzana Kominkova Oplatkova, Victor Buttigieg, Anthony Spiteri Staines
-
-
- Designing Proportionate Cybersecurity Frameworks for European Micro-Enterprises: Lessons from the Squad 2025 Case
- https://arxiv.org/abs/2511.02898
- arXiv:2511.02898v1 Announce Type: new
-Abstract: Micro and small enterprises (SMEs) account for most European businesses yet remain highly vulnerable to cyber threats. This paper analyses the design logic of a recent European policy initiative -- the Squad 2025 Playbook on Cybersecurity Awareness for Micro-SMEs -- to extract general principles for proportionate, resource-aware cybersecurity governance. The author participated in the Squad 2025 team and originally proposed the seven-step preventive structure that later shaped the Playbook's design, subsequently refined collaboratively within the project. The framework was guided by the author's design premise that raising cybersecurity awareness among micro- and small-enterprise actors represents the most efficient short-term lever for increasing sensitivity to cybercrime and promoting protective behaviours. Without reproducing any proprietary material, the paper reconstructs the conceptual architecture of that approach within the broader context of ENISA guidance, ISO 27005, and the NIS2 Directive. It proposes a generic seven-dimension preventive model suitable for micro-enterprise adoption and discusses implications for policy transfer, awareness training, and maturity assessment.
- oai:arXiv.org:2511.02898v1
- cs.CR
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Roberto Garrone
-
-
- Cache Mechanism for Agent RAG Systems
- https://arxiv.org/abs/2511.02919
- arXiv:2511.02919v1 Announce Type: new
-Abstract: Recent advances in Large Language Model (LLM)-based agents have been propelled by Retrieval-Augmented Generation (RAG), which grants the models access to vast external knowledge bases. Despite RAG's success in improving agent performance, agent-level cache management, particularly constructing, maintaining, and updating a compact, relevant corpus dynamically tailored to each agent's needs, remains underexplored. Therefore, we introduce ARC (Agent RAG Cache Mechanism), a novel, annotation-free caching framework that dynamically manages small, high-value corpora for each agent. By synthesizing historical query distribution patterns with the intrinsic geometry of cached items in the embedding space, ARC automatically maintains a high-relevance cache. In comprehensive experiments on three retrieval datasets, ARC reduces storage requirements to 0.015% of the original corpus while offering a has-answer rate of up to 79.8% and reducing average retrieval latency by 80%. Together, these results demonstrate that ARC can drastically enhance efficiency and effectiveness in RAG-powered LLM agents.
- oai:arXiv.org:2511.02919v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Shuhang Lin, Zhencan Peng, Lingyao Li, Xiao Lin, Xi Zhu, Yongfeng Zhang
-
-
- Long-term behaviour of symmetric partitioned linear multistep methods II. Invariants error analysis for some nonlinear dispersive wave models
- https://arxiv.org/abs/2511.02921
- arXiv:2511.02921v1 Announce Type: new
-Abstract: In this paper, the use of partitioned linear multistep methods (PLMM) as time integrators for the numerical approximation of some partial differential equations (PDEs) is studied. We consider the periodic initial-value problem of two nonlinear dispersive wave models as case studies. From the spatial discretization with pseudospectral methods, the theory developed for PLMMs by the authors in a previous companion paper is applied to analyze the time integration with PLMMs of the semidiscrete equations when approximating solitary wave solutions. The results are illustrated with some numerical experiments. In addition, a computational study is performed in an exploratory fashion to analyze the extension of the results to the approximation of more general localized solutions.
- oai:arXiv.org:2511.02921v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Begoña Cano, Angel Durán, Melquíades Rodríguez
-
-
- Comprehension-Performance Gap in GenAI-Assisted Brownfield Programming: A Replication and Extension
- https://arxiv.org/abs/2511.02922
- arXiv:2511.02922v1 Announce Type: new
-Abstract: Code comprehension is essential for brownfield programming tasks, in which developers maintain and enhance legacy code bases. Generative AI (GenAI) coding assistants such as GitHub Copilot have been shown to improve developer productivity, but their impact on code understanding is less clear. We replicate and extend a previous study by exploring both performance and comprehension in GenAI-assisted brownfield programming tasks. In a within-subjects experimental study, 18 computer science graduate students completed feature implementation tasks with and without Copilot. Results show that Copilot significantly reduced task time and increased the number of test cases passed. However, comprehension scores did not differ across conditions, revealing a comprehension-performance gap: participants passed more test cases with Copilot, but did not demonstrate greater understanding of the legacy codebase. Moreover, we failed to find a correlation between comprehension and task performance. These findings suggest that while GenAI tools can accelerate programming progress in a legacy codebase, such progress may come without an improved understanding of that codebase. We consider the implications of these findings for programming education and GenAI tool design.
- oai:arXiv.org:2511.02922v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yunhan Qiao, Christopher Hundhausen, Summit Haque, Md Istiak Hossain Shihab
-
-
- Cropland Mapping using Geospatial Embeddings
- https://arxiv.org/abs/2511.02923
- arXiv:2511.02923v1 Announce Type: new
-Abstract: Accurate and up-to-date land cover maps are essential for understanding land use change, a key driver of climate change. Geospatial embeddings offer a more efficient and accessible way to map landscape features, yet their use in real-world mapping applications remains underexplored. In this work, we evaluated the utility of geospatial embeddings for cropland mapping in Togo. We produced cropland maps using embeddings from Presto and AlphaEarth. Our findings show that geospatial embeddings can simplify workflows, achieve high-accuracy cropland classification and ultimately support better assessments of land use change and its climate impacts.
- oai:arXiv.org:2511.02923v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ivan Zvonkov, Gabriel Tseng, Inbal Becker-Reshef, Hannah Kerner
-
-
- Lightweight Session-Key Rekeying Framework for Secure IoT-Edge Communication
- https://arxiv.org/abs/2511.02924
- arXiv:2511.02924v1 Announce Type: new
-Abstract: The proliferation of Internet of Things (IoT) networks demands security mechanisms that protect constrained devices without the computational cost of public-key cryptography. Conventional Pre-Shared Key (PSK) encryption, while efficient, remains vulnerable due to static key reuse, replay attacks, and the lack of forward secrecy. This paper presents the Dynamic Session Enhanced Key Protocol (DSEKP), a lightweight session-key rekeying framework and fully symmetric extension to PSK that derives per-session AES-GCM keys using the HMAC-based Key Derivation Function (HKDF-SHA256) and authenticates session establishment through an HMAC proof in a single init-ack exchange. DSEKP was implemented on an ESP32 IoT sensor node and a Raspberry Pi 5 edge server communicating through a Mosquitto MQTT broker, and benchmarked against a static PSK baseline over more than 6,500 encrypted packets per configuration. The results demonstrate nearly identical throughput and reliability, with moderate overhead (mean latency increased by 27% and payload size by 10%), while delivering per-session forward secrecy and built-in replay protection. These findings confirm that dynamic symmetric rekeying can substantially strengthen IoT-Edge links with minimal computational and bandwidth cost, offering a practical migration path from static PSK to session-aware, scalable, and reproducible IoT security.
- oai:arXiv.org:2511.02924v1
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Haranath Rakshit, Rajkumar Bhandari, Subhasis Banerjee
-
-
- Risk Estimation in Differential Fuzzing via Extreme Value Theory
- https://arxiv.org/abs/2511.02927
- arXiv:2511.02927v1 Announce Type: new
-Abstract: Differential testing is a highly effective technique for automatically detecting software bugs and vulnerabilities when the specifications involve an analysis over multiple executions simultaneously. Differential fuzzing, in particular, operates as a guided randomized search, aiming to find (similar) inputs that lead to a maximum difference in software outputs or their behaviors. However, fuzzing, as a dynamic analysis, lacks any guarantees on the absence of bugs: from a differential fuzzing campaign that has observed no bugs (or a minimal difference), what is the risk of observing a bug (or a larger difference) if we run the fuzzer for one or more steps?
- This paper investigates the application of Extreme Value Theory (EVT) to address the risk of missing or underestimating bugs in differential fuzzing. The key observation is that differential fuzzing as a random process resembles the maximum distribution of observed differences. Hence, EVT, a branch of statistics dealing with extreme values, is an ideal framework for analyzing the tail of a differential fuzzing campaign to contain the risk. We perform experiments on a set of real-world Java libraries and use differential fuzzing to find information leaks via side channels in these libraries. We first explore the feasibility of EVT for this task and the optimal hyperparameters for EVT distributions. We then compare EVT-based extrapolation against baseline statistical methods such as Markov's and Chebyshev's inequalities, and the Bayes factor. EVT-based extrapolations outperform the baseline techniques in 14.3% of cases and tie with the baseline in 64.2% of cases. Finally, we evaluate the accuracy and performance gains of EVT-enabled differential fuzzing in real-world Java libraries, observing an average saving of tens of millions of bytecode executions from early stopping.
- oai:arXiv.org:2511.02927v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Rafael Baez (University of Texas at El Paso), Alejandro Olivas (University of Texas at El Paso), Nathan K. Diamond (University of Texas at El Paso), Marcelo Frias (University of Texas at El Paso), Yannic Noller (Ruhr University Bochum), Saeid Tizpaz-Niari (University of Illinois Chicago)
-
-
- A Conditional Diffusion Model for Building Energy Modeling Workflows
- https://arxiv.org/abs/2511.02930
- arXiv:2511.02930v1 Announce Type: new
-Abstract: Understanding current energy consumption behavior in communities is critical for informing future energy use decisions and enabling efficient energy management. Urban energy models, which are used to simulate these energy use patterns, require large datasets with detailed building characteristics for accurate outcomes. However, such detailed characteristics at the individual building level are often unknown and costly to acquire, or unavailable. Through this work, we propose using a generative modeling approach to generate realistic building attributes to fill in the data gaps and finally provide complete characteristics as inputs to energy models. Our model learns complex, building-level patterns from training on a large-scale residential building stock model containing 2.2 million buildings. We employ a tabular diffusion-based framework that is designed to handle heterogeneous (discrete and continuous) features in tabular building data, such as occupancy, floor area, heating, cooling, and other equipment details. We develop a capability for conditional diffusion, enabling the imputation of missing building characteristics conditioned on known attributes. We conduct a comprehensive validation of our conditional diffusion model, firstly by comparing the generated conditional distributions against the underlying data distribution, and secondly, by performing a case study for a Baltimore residential region, showing the practical utility of our approach. Our work is one of the first to demonstrate the potential of generative modeling to accelerate building energy modeling workflows.
- oai:arXiv.org:2511.02930v1
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Saumya Sinha, Alexandre Cortiella, Rawad El Kontar, Andrew Glaws, Ryan King, Patrick Emami
-
-
- Google's Hidden Empire
- https://arxiv.org/abs/2511.02931
- arXiv:2511.02931v1 Announce Type: new
-Abstract: This paper presents striking new data about the scale of Google's involvement in the global digital and corporate landscape, head and shoulders above the other big tech firms. While public attention and some antitrust scrutiny has focused on these firms' mergers and acquisitions (M&A) activities, Google has also been amassing an empire of more than 6,000 companies which it has acquired, supported or invested in, across the digital economy and beyond. The power of Google over the digital markets infrastructure and dynamics is likely greater than previously documented. We also trace the antitrust failures that have led to this state of affairs. In particular, we explore the role of neoclassical economics practiced both inside the regulatory authorities and by consultants on the outside. Their unduly narrow approach has obscured harms from vertical and conglomerate concentrations of market power and erected ever higher hurdles for enforcement action, as we demonstrate using examples of the failure to intervene in the Google/DoubleClick and Google/Fitbit mergers. Our lessons from the past failures can inform the current approach towards one of the biggest ever big tech M&A deals: Google's $32 billion acquisition of the Israeli cloud cybersecurity firm Wiz.
- oai:arXiv.org:2511.02931v1
- cs.CY
- econ.GN
- q-fin.EC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aline Blankertz, Brianna Rock, Nicholas Shaxson
-
-
- Generative Hints
- https://arxiv.org/abs/2511.02933
- arXiv:2511.02933v1 Announce Type: new
-Abstract: Data augmentation is widely used in vision to introduce variation and mitigate overfitting by enabling models to learn invariant properties, such as spatial invariance. However, these properties are not fully captured by data augmentation alone, since augmentation exposes the property only through transformations of the training data. We propose generative hints, a training methodology that directly enforces known invariances over the entire input space. Our approach leverages a generative model trained on the training set to approximate the input distribution and generate unlabeled images, which we refer to as virtual examples. These virtual examples are used to enforce functional properties known as hints. In generative hints, although the training dataset is fully labeled, the model is trained in a semi-supervised manner on both the classification and hint objectives, using the unlabeled virtual examples to guide the model in learning the desired hint. Across datasets, architectures, and loss functions, generative hints consistently outperform standard data augmentation when learning the same property. On popular fine-grained visual classification benchmarks, we achieved up to a 1.78% top-1 accuracy improvement (0.63% on average) over fine-tuned models with data augmentation and an average performance boost of 1.286% on the CheXpert X-ray dataset.
- oai:arXiv.org:2511.02933v1
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Andy Dimnaku, Abdullah Yusuf Kavranoğlu, Yaser Abu-Mostafa
-
-
- Zero-shot data citation function classification using transformer-based large language models (LLMs)
- https://arxiv.org/abs/2511.02936
- arXiv:2511.02936v1 Announce Type: new
-Abstract: Efforts have increased in recent years to identify associations between specific datasets and the scientific literature that incorporates them. Knowing that a given publication cites a given dataset, the next logical step is to explore how or why that data was used. Advances in recent years with pretrained, transformer-based large language models (LLMs) offer potential means for scaling the description of data use cases in the published literature. This avoids expensive manual labeling and the development of training datasets for classical machine-learning (ML) systems. In this work we apply an open-source LLM, Llama 3.1-405B, to generate structured data use case labels for publications known to incorporate specific genomic datasets. We also introduce a novel evaluation framework for determining the efficacy of our methods. Our results demonstrate that the stock model can achieve an F1 score of 0.674 on a zero-shot data citation classification task with no previously defined categories. While promising, our results are qualified by barriers related to data availability, prompt overfitting, computational infrastructure, and the expense required to conduct responsible performance evaluation.
- oai:arXiv.org:2511.02936v1
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Neil Byers, Ali Zaidi, Valerie Skye, Chris Beecroft, Kjiersten Fagnan
-
-
- Toward an Agricultural Operational Design Domain: A Framework
- https://arxiv.org/abs/2511.02937
- arXiv:2511.02937v1 Announce Type: new
-Abstract: The agricultural sector increasingly relies on autonomous systems that operate in complex and variable environments. Unlike on-road applications, agricultural automation integrates driving and working processes, each of which imposes distinct operational constraints. Handling this complexity and ensuring consistency throughout the development and validation processes requires a structured, transparent, and verified description of the environment. However, existing Operational Design Domain (ODD) concepts do not yet address the unique challenges of agricultural applications.
- Therefore, this work introduces the Agricultural ODD (Ag-ODD) Framework, which can be used to describe and verify the operational boundaries of autonomous agricultural systems. The Ag-ODD Framework consists of three core elements. First, the Ag-ODD description concept provides a structured method for unambiguously defining environmental and operational parameters using concepts from ASAM Open ODD and CityGML. Second, the 7-Layer Model, derived from the PEGASUS 6-Layer Model, has been extended to include a process layer to capture dynamic agricultural operations. Third, the iterative verification process verifies the Ag-ODD against its corresponding logical scenarios, derived from the 7-Layer Model, to ensure the Ag-ODD's completeness and consistency.
- Together, these elements provide a consistent approach for creating unambiguous and verifiable Ag-ODD. Demonstrative use cases show how the Ag-ODD Framework can support the standardization and scalability of environmental descriptions for autonomous agricultural systems.
- oai:arXiv.org:2511.02937v1
- cs.RO
- cs.SE
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mirco Felske, Jannik Redenius, Georg Happich, Julius Schöning
-
-
- Faster Weak Expander Decompositions and Approximate Max Flow
- https://arxiv.org/abs/2511.02943
- arXiv:2511.02943v1 Announce Type: new
-Abstract: We give faster algorithms for weak expander decompositions and approximate max flow on undirected graphs. First, we show that it is possible to "warm start" the cut-matching game when computing weak expander decompositions, avoiding the cost of the recursion depth. Our algorithm is also flexible enough to support weaker flow subroutines than previous algorithms.
- Our second contribution is to streamline the recent non-recursive approximate max flow algorithm of Li, Rao, and Wang (SODA, 2025) and adapt their framework to use our new weak expander decomposition primitive. Consequently, we give an approximate max flow algorithm within a few logarithmic factors of the limit of expander decomposition-based approaches.
- oai:arXiv.org:2511.02943v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Henry Fleischmann, George Z. Li, Jason Li
-
-
- Power Constrained Nonstationary Bandits with Habituation and Recovery Dynamics
- https://arxiv.org/abs/2511.02944
- arXiv:2511.02944v1 Announce Type: new
-Abstract: A common challenge for decision makers is selecting actions whose rewards are unknown and evolve over time based on prior policies. For instance, repeated use may reduce an action's effectiveness (habituation), while inactivity may restore it (recovery). These nonstationarities are captured by the Reducing or Gaining Unknown Efficacy (ROGUE) bandit framework, which models real-world settings such as behavioral health interventions. While existing algorithms can compute sublinear regret policies to optimize these settings, they may not provide sufficient exploration due to overemphasis on exploitation, limiting the ability to estimate population-level effects. This is a challenge of particular interest in micro-randomized trials (MRTs) that aid researchers in developing just-in-time adaptive interventions that have population-level effects while still providing personalized recommendations to individuals. In this paper, we first develop ROGUE-TS, a Thompson Sampling algorithm tailored to the ROGUE framework, and provide theoretical guarantees of sublinear regret. We then introduce a probability clipping procedure to balance personalization and population-level learning, with a quantified trade-off between regret and minimum exploration probability. Validation on two MRT datasets concerning physical activity promotion and bipolar disorder treatment shows that our methods both achieve lower regret than existing approaches and maintain high statistical power through the clipping procedure without significantly increasing regret. This enables reliable detection of treatment effects while accounting for individual behavioral dynamics. For researchers designing MRTs, our framework offers practical guidance on balancing personalization with statistical validity.
- oai:arXiv.org:2511.02944v1
- cs.LG
- cs.AI
- math.OC
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fengxu Li, Stephanie M. Carpenter, Matthew P. Buman, Yonatan Mintz
-
-
- ProM3E: Probabilistic Masked MultiModal Embedding Model for Ecology
- https://arxiv.org/abs/2511.02946
- arXiv:2511.02946v1 Announce Type: new
-Abstract: We introduce ProM3E, a probabilistic masked multimodal embedding model for any-to-any generation of multimodal representations for ecology. ProM3E is based on masked modality reconstruction in the embedding space, learning to infer missing modalities given a few context modalities. By design, our model supports modality inversion in the embedding space. The probabilistic nature of our model allows us to analyse the feasibility of fusing various modalities for given downstream tasks, essentially learning what to fuse. Using these features of our model, we propose a novel cross-modal retrieval approach that mixes inter-modal and intra-modal similarities to achieve superior performance across all retrieval tasks. We further leverage the hidden representation from our model to perform linear probing tasks and demonstrate the superior representation learning capability of our model. All our code, datasets and model will be released at https://vishu26.github.io/prom3e.
- oai:arXiv.org:2511.02946v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Srikumar Sastry, Subash Khanal, Aayush Dhakal, Jiayu Lin, Dan Cher, Phoenix Jarosz, Nathan Jacobs
-
-
- NF-SecRIS: RIS-Assisted Near-Field Physical Layer Security via Secure Location Modulation
- https://arxiv.org/abs/2511.02949
- arXiv:2511.02949v1 Announce Type: new
-Abstract: 6G wireless networks impose extremely high requirements on physical layer secure communication. However, existing solutions typically achieve only one-dimensional physical layer security (PLS) in the angle dimension, and cannot achieve PLS in the range dimension. In this paper, we propose the NF-SecRIS system, the first range-angle-dependent (2D) PLS near-field communication system based on an ultra-large-scale reconfigurable intelligent surface (RIS). We propose a secure location modulation scheme to synthesize the near-field spatial-temporal coding pattern of the RIS with extremely low complexity. It ensures that only the legitimate user can receive the raw constellations, while potential eavesdroppers at other ranges or angles can only receive the obfuscated constellations. NF-SecRIS operates without requiring synchronization with either the transmitter or the receiver. We implement a prototype of NF-SecRIS and conduct comprehensive experiments with multiple modulation schemes. The results show that the bit error rate (BER) of the legitimate user is below 10^{-4}, while eavesdroppers at other ranges or angles suffer a BER exceeding 40%. This validates the implementation of 2D PLS in near-field communications.
- oai:arXiv.org:2511.02949v1
- cs.ET
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhendong Wang, Chenyang Meng, Jun Yang, Jiayuan Wang, Yin Li, Linshan Jiang, Jin Zhang
-
-
- Ownership and Flow Primitives for Scalable Consent Management in Digital Public Infrastructures
- https://arxiv.org/abs/2511.02950
- arXiv:2511.02950v1 Announce Type: new
-Abstract: Digital public infrastructures (DPIs) represent networks of open technology standards, applications, services, and digital assets made available for the public good. One of the key challenges in DPI design is to resolve complex issues of consent, scaled over large populations. While the primary objective of consent management is to empower the data owner, ownership itself can come with variegated morphological forms with different implications over consent. Questions of ownership in a public space also have several nuances where individual autonomy needs to be balanced with public well-being and national sovereignty. This requires consent management to be compliant with applicable regulations for data sharing. This paper addresses the question of representing modes of ownership of digital assets and their corresponding implications for consensual data flows in a DPI. It proposes a set of foundational abstractions to represent them. Our proposed architecture responds to the growing need for transparent, secure, and user-centric consent management within DPIs. Incorporating a formalised data ownership model enables end-to-end traceability of consent, fine-grained control over data sharing, and alignment with evolving legal and regulatory frameworks.
- oai:arXiv.org:2511.02950v1
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Rohith Vaidyanathan, Srinath Srinivasa, Praseeda, Dev Shinde
-
-
- List Decoding and New Bicycle Code Constructions for Quantum LDPC Codes
- https://arxiv.org/abs/2511.02951
- arXiv:2511.02951v1 Announce Type: new
-Abstract: In this paper, we propose a new decoder, called the Multiple-Bases Belief-Propagation List Decoder (MBBP-LD), for Quantum Low-Density Parity-Check (QLDPC) codes. It extends the Multiple-Bases Belief-Propagation (MBBP) framework, originally developed for classical cyclic LDPC codes. The proposed method preserves the linear-time complexity of the standard BP decoder while improving the logical error rate. To further reduce the logical error rate, a new decision rule is introduced for the post-processing list decoder, outperforming the conventional least-metric selector (LMS) criterion. For the recently developed and implemented bivariate bicycle (BB) code with parameters \([[144,12,12]]\), our proposed MBBP-LD decoder achieves up to 40% lower logical error rate compared to the state-of-the-art decoder for short QLDPC codes, i.e., BP with ordered-statistics decoding (BP-OSD), while retaining the linear-time complexity of the plain BP decoder. In addition, we explore a new subclass of BB codes, which we refer to as univariate bicycle (UB) codes, specifically with lower-weight parity checks (\(w=6,8\)). This reduces the polynomial search space compared to general BB codes, i.e., from two polynomial components in BB codes to just a single polynomial component in UB codes. Simulations demonstrate the promising performance of these codes under various types of BP decoders.
- oai:arXiv.org:2511.02951v1
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Sheida Rabeti, Hessam Mahdavifar
-
-
- DecodeX: Exploring and Benchmarking of LDPC Decoding across CPU, GPU, and ASIC Platforms
- https://arxiv.org/abs/2511.02952
- arXiv:2511.02952v1 Announce Type: new
-Abstract: Emerging virtualized radio access networks (vRANs) demand flexible and efficient baseband processing across heterogeneous compute substrates. In this paper, we present DecodeX, a unified benchmarking framework for evaluating low-density parity-check (LDPC) decoding acceleration across different hardware platforms. DecodeX integrates a comprehensive suite of LDPC decoder implementations, including kernels, APIs, and test vectors for CPUs (FlexRAN), GPUs (Aerial and Sionna-RK), and ASICs (ACC100), and can be readily extended to additional architectures and configurations. Using DecodeX, we systematically characterize how different platforms orchestrate computation, from threading and memory management to data movement and accelerator offload, and quantify the resulting decoding latency under varying physical-layer parameters. Our observations reveal distinct trade-offs in parallel efficiency and offload overhead, showing that accelerator gains strongly depend on data movement and workload granularity. Building on these insights, we discuss how cross-platform benchmarking can inform adaptive scheduling and co-design for future heterogeneous vRANs, enabling scalable and energy-efficient baseband processing for NextG wireless systems.
- oai:arXiv.org:2511.02952v1
- cs.NI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Zhenzhou Qi, Yuncheng Yao, Yiming Li, Chung-Hsuan Tung, Junyao Zheng, Danyang Zhuo, Tingjun Chen
-
-
- EvtSlowTV - A Large and Diverse Dataset for Event-Based Depth Estimation
- https://arxiv.org/abs/2511.02953
- arXiv:2511.02953v1 Announce Type: new
-Abstract: Event cameras, with their high dynamic range (HDR) and low latency, offer a promising alternative for robust depth estimation in challenging environments. However, many event-based depth estimation approaches are constrained by small-scale annotated datasets, limiting their generalizability to real-world scenarios. To bridge this gap, we introduce EvtSlowTV, a large-scale event camera dataset curated from publicly available YouTube footage, which contains more than 13B events across various environmental conditions and motions, including seasonal hiking, flying, scenic driving, and underwater exploration. EvtSlowTV is an order of magnitude larger than existing event datasets, providing an unconstrained, naturalistic setting for event-based depth learning. This work shows the suitability of EvtSlowTV for a self-supervised learning framework to capitalise on the HDR potential of raw event streams. We further demonstrate that training with EvtSlowTV enhances the model's ability to generalise to complex scenes and motions. Our approach removes the need for frame-based annotations and preserves the asynchronous nature of event data.
- oai:arXiv.org:2511.02953v1
- cs.CV
- cs.AI
- cs.LG
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sadiq Layi Macaulay, Nimet Kaygusuz, Simon Hadfield
-
-
- Tight Better-Than-Worst-Case Bounds for Element Distinctness and Set Intersection
- https://arxiv.org/abs/2511.02954
- arXiv:2511.02954v1 Announce Type: new
-Abstract: The element distinctness problem takes as input a list $I$ of $n$ values from a totally ordered universe and the goal is to decide whether $I$ contains any duplicates. It is a well-studied problem with a classical worst-case $\Omega(n \log n)$ comparison-based lower bound by Fredman. At first glance, this lower bound appears to rule out any algorithm more efficient than the naive approach of sorting $I$ and comparing adjacent elements. However, upon closer inspection, the $\Omega(n \log n)$ bound does not apply if the input has many duplicates. We therefore ask: Are there comparison-based lower bounds for element distinctness that are sensitive to the amount of duplicates in the input?
- To address this question, we derive instance-specific lower bounds. For any input instance $I$, we represent the combinatorial structure of the duplicates in $I$ by an undirected graph $G(I)$ that connects identical elements. Each such graph $G$ is a union of cliques, and we study algorithms by their worst-case running time over all inputs $I'$ with $G(I') \cong G$. We establish an adversarial lower bound showing that, for any deterministic algorithm $\mathcal{A}$, there exists a graph $G$ and an algorithm $\mathcal{A}'$ that, for all inputs $I$ with $G(I) \cong G$, is a factor $O(\log \log n)$ faster than $\mathcal{A}$. Consequently, no deterministic algorithm can be $o(\log \log n)$-competitive for all graphs $G$. We complement this with an $O(\log \log n)$-competitive deterministic algorithm, thereby obtaining tight bounds for element distinctness that go beyond classical worst-case analysis.
- We subsequently study the related problem of set intersection. We show that no deterministic set intersection algorithm can be $o(\log n)$-competitive, and provide an $O(\log n)$-competitive deterministic algorithm. This shows a separation between element distinctness and the set intersection problem.
- oai:arXiv.org:2511.02954v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ivor van der Hoog, Eva Rotenberg, Daniel Rutschmann
-
-
- Digital Twin-Driven Pavement Health Monitoring and Maintenance Optimization Using Graph Neural Networks
- https://arxiv.org/abs/2511.02957
- arXiv:2511.02957v1 Announce Type: new
-Abstract: Pavement infrastructure monitoring is challenged by complex spatial dependencies, changing environmental conditions, and non-linear deterioration across road networks. Traditional Pavement Management Systems (PMS) remain largely reactive, lacking real-time intelligence for failure prevention and optimal maintenance planning. To address this, we propose a unified Digital Twin (DT) and Graph Neural Network (GNN) framework for scalable, data-driven pavement health monitoring and predictive maintenance. Pavement segments and spatial relations are modeled as graph nodes and edges, while real-time UAV, sensor, and LiDAR data stream into the DT. The inductive GNN learns deterioration patterns from graph-structured inputs to forecast distress and enable proactive interventions. Trained on a real-world-inspired dataset with segment attributes and dynamic connectivity, our model achieves an R² of 0.3798, outperforming baseline regressors and effectively capturing non-linear degradation. We also develop an interactive dashboard and reinforcement learning module for simulation, visualization, and adaptive maintenance planning. This DT-GNN integration enhances forecasting precision and establishes a closed feedback loop for continuous improvement, positioning the approach as a foundation for proactive, intelligent, and sustainable pavement management, with future extensions toward real-world deployment, multi-agent coordination, and smart-city integration.
- oai:arXiv.org:2511.02957v1
- cs.LG
- cs.CE
- cs.ET
- cs.NE
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Mohsin Mahmud Topu, Mahfuz Ahmed Anik, Azmine Toushik Wasi, Md Manjurul Ahsan
-
-
- Automatic Machine Translation Detection Using a Surrogate Multilingual Translation Model
- https://arxiv.org/abs/2511.02958
- arXiv:2511.02958v1 Announce Type: new
-Abstract: Modern machine translation (MT) systems depend on large parallel corpora, often collected from the Internet. However, recent evidence indicates that (i) a substantial portion of these texts are machine-generated translations, and (ii) an overreliance on such synthetic content in training data can significantly degrade translation quality. As a result, filtering out non-human translations is becoming an essential pre-processing step in building high-quality MT systems. In this work, we propose a novel approach that directly exploits the internal representations of a surrogate multilingual MT model to distinguish between human and machine-translated sentences. Experimental results show that our method outperforms current state-of-the-art techniques, particularly for non-English language pairs, achieving gains of at least 5 percentage points of accuracy.
- oai:arXiv.org:2511.02958v1
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Cristian García-Romero, Miquel Esplà-Gomis, Felipe Sánchez-Martínez
-
-
- A physics-augmented neural network framework for finite strain incompressible viscoelasticity
- https://arxiv.org/abs/2511.02959
- arXiv:2511.02959v1 Announce Type: new
-Abstract: We propose a physics-augmented neural network (PANN) framework for finite strain incompressible viscoelasticity within the generalized standard materials theory. The formulation is based on the multiplicative decomposition of the deformation gradient and enforces unimodularity of the inelastic deformation part throughout the evolution. Invariant-based representations of the free energy and the dual dissipation potential by monotonic and fully input-convex neural networks ensure thermodynamic consistency, objectivity, and material symmetry by construction. The evolution of the internal variables during training is handled by solving the evolution equations using an implicit exponential time integrator. In addition, a trainable gate layer combined with ℓp regularization automatically identifies the required number of internal variables during training. The PANN is calibrated with synthetic and experimental data, showing excellent agreement for a wide range of deformation rates and different load paths. We also show that the proposed model achieves excellent interpolation as well as plausible and accurate extrapolation behaviors. In addition, we demonstrate consistency of the PANN with linear viscoelasticity by linearization of the full model.
- oai:arXiv.org:2511.02959v1
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Karl A. Kalina, Jörg Brummund, Markus Kästner
-
-
- The Contiguous Art Gallery Problem is in Θ(n log n)
- https://arxiv.org/abs/2511.02960
- arXiv:2511.02960v1 Announce Type: new
-Abstract: Recently, a natural variant of the Art Gallery problem, known as the \emph{Contiguous Art Gallery problem} was proposed. Given a simple polygon $P$, the goal is to partition its boundary $\partial P$ into the smallest number of contiguous segments such that each segment is completely visible from some point in $P$. Unlike the classical Art Gallery problem, which is NP-hard, this variant is polynomial-time solvable. At SoCG 2025, three independent works presented algorithms for this problem, each achieving a running time of $O(k n^5 \log n)$ (or $O(n^6\log n)$), where $k$ is the size of an optimal solution. Interestingly, these results were obtained using entirely different approaches, yet all led to roughly the same asymptotic complexity, suggesting that such a running time might be inherent to the problem.
- We show that this is not the case. In the real-RAM model, the prevalent model in computational geometry, we present an $O(n \log n)$-time algorithm, achieving an $O(k n^4)$ factor speed-up over the previous state-of-the-art. We also give a straightforward sorting-based lower bound by reducing from the set intersection problem. We thus show that the Contiguous Art Gallery problem is in $\Theta(n \log n)$.
- oai:arXiv.org:2511.02960v1
- cs.CG
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Sarita de Berg, Jacobus Conradi, Ivor van der Hoog, Eva Rotenberg
-
-
- Hybrid DeepONet Surrogates for Multiphase Flow in Porous Media
- https://arxiv.org/abs/2511.02962
- arXiv:2511.02962v1 Announce Type: new
-Abstract: The solution of partial differential equations (PDEs) plays a central role in numerous applications in science and engineering, particularly those involving multiphase flow in porous media. Complex, nonlinear systems govern these problems and are notoriously computationally intensive, especially in real-world applications and reservoirs. Recent advances in deep learning have spurred the development of data-driven surrogate models that approximate PDE solutions with reduced computational cost. Among these, Neural Operators such as Fourier Neural Operator (FNO) and Deep Operator Networks (DeepONet) have shown strong potential for learning parameter-to-solution mappings, enabling the generalization across families of PDEs. However, both methods face challenges when applied independently to complex porous media flows, including high memory requirements and difficulty handling the time dimension. To address these limitations, this work introduces hybrid neural operator surrogates based on DeepONet models that integrate Fourier Neural Operators, Multi-Layer Perceptrons (MLPs), and Kolmogorov-Arnold Networks (KANs) within their branch and trunk networks. The proposed framework decouples spatial and temporal learning tasks by splitting these structures into the branch and trunk networks, respectively. We evaluate these hybrid models on multiphase flow in porous media problems ranging in complexity from the steady 2D Darcy flow to the 2D and 3D problems belonging to the $10$th Comparative Solution Project from the Society of Petroleum Engineers. Results demonstrate that hybrid schemes achieve accurate surrogate modeling with significantly fewer parameters while maintaining strong predictive performance on large-scale reservoir simulations.
- oai:arXiv.org:2511.02962v1
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ezequiel S. Santos, Gabriel F. Barros, Amanda C. N. Oliveira, R\^omulo M. Silva, Rodolfo S. M. Freitas, Dakshina M. Valiveti, Xiao-Hui Wu, Fernando A. Rochinha, Alvaro L. G. A. Coutinho
-
-
- Inference-Time Personalized Alignment with a Few User Preference Queries
- https://arxiv.org/abs/2511.02966
- arXiv:2511.02966v1 Announce Type: new
-Abstract: We study the problem of aligning a generative model's response with a user's preferences. Recent works have proposed several different formulations for personalized alignment; however, they either require a large amount of user preference queries or require that the preference be explicitly specified as a text input. In this paper, we propose a novel inference-time personalized alignment method, UserAlign, that elicits the user's preferences with a few queries as pairwise response comparisons. In particular, UserAlign builds on the theoretical framework of best-arm identification in logistic bandits and selects a personalized response from a fixed pool of the model's generated responses. The key idea is to consider the user's feedback consistent and noise-free, and incorporate it into the theoretical framework to identify the best response quickly. Experimental results across several tasks, involving personalized text and image generation, showcase the effectiveness of UserAlign in achieving personalized alignment.
- oai:arXiv.org:2511.02966v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Victor-Alexandru Pădurean, Parameswaran Kamalaruban, Nachiket Kotalwar, Alkis Gotovos, Adish Singla
-
-
- Value of Information-Enhanced Exploration in Bootstrapped DQN
- https://arxiv.org/abs/2511.02969
- arXiv:2511.02969v1 Announce Type: new
-Abstract: Efficient exploration in deep reinforcement learning remains a fundamental challenge, especially in environments characterized by high-dimensional states and sparse rewards. Traditional exploration strategies that rely on random local policy noise, such as $\epsilon$-greedy and Boltzmann exploration methods, often struggle to efficiently balance exploration and exploitation. In this paper, we integrate the notion of (expected) value of information (EVOI) within the well-known Bootstrapped DQN algorithmic framework, to enhance the algorithm's deep exploration ability. Specifically, we develop two novel algorithms that incorporate the expected gain from learning the value of information into Bootstrapped DQN. Our methods use value of information estimates to measure the discrepancies of opinions among distinct network heads, and drive exploration towards areas with the most potential. We evaluate our algorithms with respect to performance and their ability to exploit inherent uncertainty arising from random network initialization. Our experiments in complex, sparse-reward Atari games demonstrate increased performance, all the while making better use of uncertainty, and, importantly, without introducing extra hyperparameters.
- oai:arXiv.org:2511.02969v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Stergios Plataniotis, Charilaos Akasiadis, Georgios Chalkiadakis
-
-
- Systematizing LLM Persona Design: A Four-Quadrant Technical Taxonomy for AI Companion Applications
- https://arxiv.org/abs/2511.02979
- arXiv:2511.02979v1 Announce Type: new
-Abstract: The design and application of LLM-based personas in AI companionship is a rapidly expanding but fragmented field, spanning from virtual emotional companions and game NPCs to embodied functional robots. This diversity in objectives, modality, and technical stacks creates an urgent need for a unified framework. To address this gap, this paper systematizes the field by proposing a Four-Quadrant Technical Taxonomy for AI companion applications. The framework is structured along two critical axes: Virtual vs. Embodied and Emotional Companionship vs. Functional Augmentation. Quadrant I (Virtual Companionship) explores virtual idols, romantic companions, and story characters, introducing a four-layer technical framework to analyze their challenges in maintaining long-term emotional consistency. Quadrant II (Functional Virtual Assistants) analyzes AI applications in work, gaming, and mental health, highlighting the shift from "feeling" to "thinking and acting" and pinpointing key technologies like enterprise RAG and on-device inference. Quadrants III & IV (Embodied Intelligence) shift from the virtual to the physical world, analyzing home robots and vertical-domain assistants, revealing core challenges in symbol grounding, data privacy, and ethical liability. This taxonomy provides not only a systematic map for researchers and developers to navigate the complex persona design space but also a basis for policymakers to identify and address the unique risks inherent in different application scenarios.
- oai:arXiv.org:2511.02979v1
- cs.HC
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Esther Sun, Zichu Wu
-
-
- Hybrid Convolution and Vision Transformer NAS Search Space for TinyML Image Classification
- https://arxiv.org/abs/2511.02992
- arXiv:2511.02992v1 Announce Type: new
-Abstract: Hybrids of Convolutional Neural Network (CNN) and Vision Transformer (ViT) have outperformed pure CNN or ViT architecture. However, since these architectures require large parameters and incur large computational costs, they are unsuitable for tinyML deployment. This paper introduces a new hybrid CNN-ViT search space for Neural Architecture Search (NAS) to find efficient hybrid architectures for image classification. The search space covers hybrid CNN and ViT blocks to learn local and global information, as well as the novel Pooling block of searchable pooling layers for efficient feature map reduction. Experimental results on the CIFAR10 dataset show that our proposed search space can produce hybrid CNN-ViT architectures with superior accuracy and inference speed to ResNet-based tinyML models under tight model size constraints.
- oai:arXiv.org:2511.02992v1
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Mikhael Djajapermana, Moritz Reiber, Daniel Mueller-Gritschneder, Ulf Schlichtmann
-
-
- PrivyWave: Privacy-Aware Wireless Sensing of Heartbeat
- https://arxiv.org/abs/2511.02993
- arXiv:2511.02993v1 Announce Type: new
-Abstract: Wireless sensing technologies can now detect heartbeats using radio frequency and acoustic signals, raising significant privacy concerns. Existing privacy solutions either protect from all sensing systems indiscriminately preventing any utility or operate post-data collection, failing to enable selective access where authorized devices can monitor while unauthorized ones cannot. We present a key-based physical obfuscation system, PrivyWave, that addresses this challenge by generating controlled decoy heartbeat signals at cryptographically-determined frequencies. Unauthorized sensors receive a mixture of real and decoy signals that are indistinguishable without the secret key, while authorized sensors use the key to filter out decoys and recover accurate measurements. Our evaluation with 13 participants demonstrates effective protection across both sensing modalities: for mmWave radar, unauthorized sensors show 21.3 BPM mean absolute error while authorized sensors maintain a much smaller 5.8 BPM; for acoustic sensing, unauthorized error increases to 42.0 BPM while authorized sensors achieve 9.7 BPM. The system operates across multiple sensing modalities without per-modality customization and provides cryptographic obfuscation guarantees. Performance benchmarks show robust protection across different distances (30-150 cm), orientations (120° field of view), and diverse indoor environments, establishing physical-layer obfuscation as a viable approach for selective privacy in pervasive health monitoring.
- oai:arXiv.org:2511.02993v1
- cs.CR
- cs.HC
- eess.SP
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yixuan Gao, Tanvir Ahmed, Zekun Chang, Thijs Roumen, Rajalakshmi Nandakumar
-
-
- Comprehensive Assessment of LiDAR Evaluation Metrics: A Comparative Study Using Simulated and Real Data
- https://arxiv.org/abs/2511.02994
- arXiv:2511.02994v1 Announce Type: new
-Abstract: For developing safe Autonomous Driving Systems (ADS), rigorous testing is required before they are deemed safe for road deployments. Since comprehensive conventional physical testing is impractical due to cost and safety concerns, Virtual Testing Environments (VTE) can be adopted as an alternative. Comparing VTE-generated sensor outputs against their real-world analogues can be a strong indication that the VTE accurately represents reality. Correspondingly, this work explores a comprehensive experimental approach to finding evaluation metrics suitable for comparing real-world and simulated LiDAR scans. The metrics were tested in terms of sensitivity and accuracy with different noise, density, distortion, sensor orientation, and channel settings. From comparing the metrics, we found that Density Aware Chamfer Distance (DCD) works best across all cases. In the second step of the research, a Virtual Testing Environment was generated using real LiDAR scan data. The data was collected in a controlled environment with only static objects using an instrumented vehicle equipped with LiDAR, IMU and cameras. Simulated LiDAR scans were generated from the VTEs using the same pose as real LiDAR scans. The simulated and real LiDAR scans were compared in terms of model perception and geometric similarity. Actual and simulated LiDAR scans have a similar semantic segmentation output with an mIoU of 21\% with corrected intensity and an average DCD of 0.63. This indicates a slight difference in the geometric properties of simulated and real LiDAR scans and a significant difference between model outputs. During the comparison, DCD was found to be the metric most correlated with perception methods.
- oai:arXiv.org:2511.02994v1
- cs.RO
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Syed Mostaquim Ali, Taufiq Rahman, Ghazal Farhani, Mohamed H. Zaki, Benoit Anctil, Dominique Charlebois
-
-
- SCALE-VLP: Soft-Weighted Contrastive Volumetric Vision-Language Pre-training with Spatial-Knowledge Semantics
- https://arxiv.org/abs/2511.02996
- arXiv:2511.02996v1 Announce Type: new
-Abstract: Vision-language models (VLMs) have demonstrated strong cross-modal capabilities, yet most work remains limited to 2D data and assumes binary supervision (i.e., positive vs. negative pairs), overlooking the continuous and structured dependencies present in volumetric data such as CT. Existing approaches often treat volumetric scans as independent 2D slices, compromising spatial coherence and underutilizing rich clinical semantics. We propose SCALE-VLP, a soft-weighted contrastive vision-language pre-training framework that integrates (i) volumetric spatial semantics to preserve anatomical structure and (ii) domain-aware, knowledge-infused semantics (e.g., radiological ontologies) to guide alignment. This yields structurally consistent and semantically grounded representations under limited supervision, demonstrating strong cross-task transferability (retrieval, report generation, and classification), and cross-domain generalizability with consistent gains without further fine-tuning. In particular, compared to the previous state of the art, SCALE-VLP achieves up to 4.3x higher top-1 CT-report retrieval, improves abnormality classification by 10 points, and reaches ROUGE-L 0.44 and BERT-F1 0.89 for report generation. Further, in zero-shot evaluation on an out-of-domain external dataset, we observe consistent gains, indicating the cross-task and cross-domain generalization ability of SCALE-VLP.
- oai:arXiv.org:2511.02996v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ailar Mahdizadeh, Puria Azadi Moghadam, Xiangteng He, Shahriar Mirabbasi, Panos Nasiopoulos, Leonid Sigal
-
-
- Evaluating Control Protocols for Untrusted AI Agents
- https://arxiv.org/abs/2511.02997
- arXiv:2511.02997v1 Announce Type: new
-Abstract: As AI systems become more capable and widely deployed as agents, ensuring their safe operation becomes critical. AI control offers one approach to mitigating the risk from untrusted AI agents by monitoring their actions and intervening or auditing when necessary. Evaluating the safety of these protocols requires understanding both their effectiveness against current attacks and their robustness to adaptive adversaries. In this work, we systematically evaluate a range of control protocols in SHADE-Arena, a dataset of diverse agentic environments. First, we evaluate blue team protocols, including deferral to trusted models, resampling, and deferring on critical actions, against a default attack policy. We find that resampling for incrimination and deferring on critical actions perform best, increasing safety from 50% to 96%. We then iterate on red team strategies against these protocols and find that attack policies with additional affordances, such as knowledge of when resampling occurs or the ability to simulate monitors, can substantially improve attack success rates against our resampling strategy, decreasing safety to 17%. However, deferring on critical actions is highly robust to even our strongest red team strategies, demonstrating the importance of denying attack policies access to protocol internals.
- oai:arXiv.org:2511.02997v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- Jon Kutasov, Chloe Loughridge, Yuqi Sun, Henry Sleight, Buck Shlegeris, Tyler Tracy, Joe Benton
-
-
- LEGO-Eval: Towards Fine-Grained Evaluation on Synthesizing 3D Embodied Environments with Tool Augmentation
- https://arxiv.org/abs/2511.03001
- arXiv:2511.03001v1 Announce Type: new
-Abstract: Despite recent progress in using Large Language Models (LLMs) for automatically generating 3D scenes, generated scenes often lack realistic spatial layouts and object attributes found in real-world environments. As this problem stems from insufficiently detailed, coarse-grained instructions, advancing 3D scene synthesis guided by more detailed, fine-grained instructions that reflect real-world environments becomes crucial. Without such realistic scenes, training embodied agents in unrealistic environments can lead them to learn priors that diverge significantly from real-world physics and semantics, degrading their performance when deployed. Thus, verifying the alignment between the fine-grained instruction and the generated scene is essential for effective learning. However, current evaluation methods, such as CLIPScore and vision-language models (VLMs), often fail to reliably assess such alignment. This shortcoming arises primarily from their shallow understanding of 3D scenes, which often leads to improperly grounded scene components. To address this, we introduce LEGO-Eval, an evaluation framework equipped with diverse tools designed to explicitly ground scene components, enabling more accurate alignment assessments. We also present LEGO-Bench, a benchmark of detailed instructions that specify complex layouts and attributes of real-world environments. Experiments demonstrate that LEGO-Eval outperforms VLM-as-a-judge by 0.41 F1 score in assessing scene-instruction alignment. Benchmarking with LEGO-Bench reveals significant limitations in current generation methods. Across all evaluated approaches, success rates reached at most 10% in generating scenes that fully align with fine-grained instructions.
- oai:arXiv.org:2511.03001v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Gyeom Hwangbo, Hyungjoo Chae, Minseok Kang, Hyeonjong Ju, Soohyun Oh, Jinyoung Yeo
-
-
- Robust reduced-order model predictive control using peak-to-peak analysis of filtered signals
- https://arxiv.org/abs/2511.03002
- arXiv:2511.03002v1 Announce Type: new
-Abstract: We address the design of a model predictive control (MPC) scheme for large-scale linear systems using reduced-order models (ROMs). Our approach uses a ROM, leverages tools from robust control, and integrates them into an MPC framework to achieve computational tractability with robust constraint satisfaction. Our key contribution is a method to obtain guaranteed bounds on the predicted outputs of the full-order system by predicting a (scalar) error-bounding system alongside the ROM. This bound is then used to formulate a robust ROM-based MPC that guarantees constraint satisfaction and robust performance. Our method is developed step-by-step by (i) analysing the error, (ii) bounding the peak-to-peak gain, and (iii) using filtered signals. We demonstrate our method on a 100-dimensional mass-spring-damper system, achieving over four orders of magnitude reduction in conservatism relative to existing approaches.
- oai:arXiv.org:2511.03002v1
- eess.SY
- cs.SY
- math.OC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Johannes K\"ohler, Carlo Scholz, Melanie Zeilinger
-
-
- Learning with less: label-efficient land cover classification at very high spatial resolution using self-supervised deep learning
- https://arxiv.org/abs/2511.03004
- arXiv:2511.03004v1 Announce Type: new
-Abstract: Deep learning semantic segmentation methods have shown promising performance for very high 1-m resolution land cover classification, but the challenge of collecting large volumes of representative training data creates a significant barrier to widespread adoption of such models for meter-scale land cover mapping over large areas. In this study, we present a novel label-efficient approach for statewide 1-m land cover classification using only 1,000 annotated reference image patches with self-supervised deep learning. We use the "Bootstrap Your Own Latent" pre-training strategy with a large amount of unlabeled color-infrared aerial images (377,921 256x256 1-m pixel patches) to pre-train a ResNet-101 convolutional encoder. The learned encoder weights were subsequently transferred into multiple deep semantic segmentation architectures (FCN, U-Net, Attention U-Net, DeepLabV3+, UPerNet, PAN), which were then fine-tuned using very small training dataset sizes with cross-validation (250, 500, 750 patches). Among the fine-tuned models, we obtained 87.14% overall accuracy and a 75.58% macro F1 score using an ensemble of the best performing U-Net models for comprehensive 1-m, 8-class land cover mapping, covering more than 123 billion pixels over the state of Mississippi, USA. Detailed qualitative and quantitative analysis revealed accurate mapping of open water and forested areas, while highlighting challenges in accurate delineation between cropland, herbaceous, and barren land cover types. These results show that self-supervised learning is an effective strategy for reducing the need for large volumes of manually annotated data, directly addressing a major limitation to high spatial resolution land cover mapping at scale.
- oai:arXiv.org:2511.03004v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Dakota Hester, Vitor S. Martins, Lucas B. Ferreira, Thainara M. A. Lima
-
-
- Targeted Error Correction in Knowledge Distillation: Small Language Models Surpass GPT
- https://arxiv.org/abs/2511.03005
- arXiv:2511.03005v1 Announce Type: new
-Abstract: We introduce an Analyze-Revise-Finetune (ARF) pipeline that enables smaller open-source large language models (LLMs) to surpass substantially larger proprietary models in customer service summarization tasks. The pipeline first analyzes and categorizes common errors in summaries produced by a teacher model (GPT-3.5), then performs a targeted revision using a compact editor model (Llama 3.1 70B) to generate high-quality, refined training data. Fine-tuning a smaller student model (Llama 3.1 8B) on this refined data resulted in superior summarization performance compared to GPT-3.5. The ARF pipeline improves cost efficiency and data privacy while maintaining competitive accuracy, illustrating a generalizable framework for enhancing open-source LLMs across diverse downstream applications.
- oai:arXiv.org:2511.03005v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Hee-Jin Lee, Zhen Guo, Luchao Jin, Morteza Moazami Goudarzi
-
-
- Implementation and Brief Experimental Analysis of the Duan et al. (2025) Algorithm for Single-Source Shortest Paths
- https://arxiv.org/abs/2511.03007
- arXiv:2511.03007v1 Announce Type: new
-Abstract: We present an implementation and a brief experimental analysis of the deterministic algorithm proposed by Duan et al. (2025) for the Single-Source Shortest Path (SSSP) problem, which achieves the best known asymptotic upper bound in the comparison-addition model, with running time $O(m \log^{2/3} n)$. We provide a faithful C++ implementation of this algorithm, following all structural details described in the original paper, and compare its empirical performance with the classical Dijkstra's algorithm using binary heaps. The experiments were conducted on both synthetic sparse random graphs and real-world road network instances from the DIMACS benchmark. Our results show that, despite its superior asymptotic complexity, the new algorithm presents significantly larger constant factors, making Dijkstra's algorithm faster for all tested sparse graph sizes, including instances with tens of millions of vertices. Our implementation achieves $O(m \log^{2/3} n)$ expected time, due to the use of hash tables, and some possibilities for making it worst-case are being considered. (This is ongoing work.)
- oai:arXiv.org:2511.03007v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lucas Castro, Thailsson Clementino, Rosiane de Freitas
-
-
- Heterogeneous Metamaterials Design via Multiscale Neural Implicit Representation
- https://arxiv.org/abs/2511.03012
- arXiv:2511.03012v1 Announce Type: new
-Abstract: Metamaterials are engineered materials composed of specially designed unit cells that exhibit extraordinary properties beyond those of natural materials. Complex engineering tasks often require heterogeneous unit cells to accommodate spatially varying property requirements. However, designing heterogeneous metamaterials poses significant challenges due to the enormous design space and strict compatibility requirements between neighboring cells. Traditional concurrent multiscale design methods require solving an expensive optimization problem for each unit cell and often suffer from discontinuities at cell boundaries. On the other hand, data-driven approaches that assemble structures from a fixed library of microstructures are limited by the dataset and require additional post-processing to ensure seamless connections. In this work, we propose a neural network-based metamaterial design framework that learns a continuous two-scale representation of the structure, thereby jointly addressing these challenges. Central to our framework is a multiscale neural representation in which the neural network takes both global (macroscale) and local (microscale) coordinates as inputs, outputting an implicit field that represents multiscale structures with compatible unit cell geometries across the domain, without the need for a predefined dataset. We use a compatibility loss term during training to enforce connectivity between adjacent unit cells. Once trained, the network can produce metamaterial designs at arbitrarily high resolution, hence enabling infinite upsampling for fabrication or simulation. We demonstrate the effectiveness of the proposed approach on mechanical metamaterial design, negative Poisson's ratio, and mechanical cloaking problems with potential applications in robotics, bioengineering, and aerospace.
- oai:arXiv.org:2511.03012v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Hongrui Chen, Liwei Wang, Levent Burak Kara
-
-
- A Foundation Model for Brain MRI with Dynamic Modality Integration
- https://arxiv.org/abs/2511.03014
- arXiv:2511.03014v1 Announce Type: new
-Abstract: We present a foundation model for brain MRI that can work with different combinations of imaging sequences. The model uses one encoder with learnable modality embeddings, conditional layer normalization, and a masked autoencoding objective that accounts for missing modalities. A variance-covariance regularizer is applied to stabilize feature learning and improve representation diversity. This design removes the need for separate models for each modality and allows the network to adapt when some sequences are missing or unseen. It is trained on about 60,000 multi-center MRIs using self-supervised reconstruction and modality imputation to learn flexible representations. A learnable modality embedding guides feature extraction so the encoder can adjust to different inputs. We describe our planned evaluation on brain tumor and multiple sclerosis segmentation, as well as lesion classification, under various modality settings. Preliminary results show that the approach is feasible, and further experiments are planned to study its performance in more detail. All code and pretrained models are available at https://github.com/BrainFM/brainfm
- oai:arXiv.org:2511.03014v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Minh Sao Khue Luu, Bair N. Tuchinov
-
-
- Discrete Bayesian Sample Inference for Graph Generation
- https://arxiv.org/abs/2511.03015
- arXiv:2511.03015v1 Announce Type: new
-Abstract: Generating graph-structured data is crucial in applications such as molecular generation, knowledge graphs, and network analysis. However, their discrete, unordered nature makes them difficult for traditional generative models, leading to the rise of discrete diffusion and flow matching models. In this work, we introduce GraphBSI, a novel one-shot graph generative model based on Bayesian Sample Inference (BSI). Instead of evolving samples directly, GraphBSI iteratively refines a belief over graphs in the continuous space of distribution parameters, naturally handling discrete structures. Further, we state BSI as a stochastic differential equation (SDE) and derive a noise-controlled family of SDEs that preserves the marginal distributions via an approximation of the score function. Our theoretical analysis further reveals the connection to Bayesian Flow Networks and Diffusion models. Finally, in our empirical evaluation, we demonstrate state-of-the-art performance on molecular and synthetic graph generation, outperforming existing one-shot graph generative models on the standard benchmarks Moses and GuacaMol.
- oai:arXiv.org:2511.03015v1
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ole Petersen, Marcel Kollovieh, Marten Lienen, Stephan G\"unnemann
-
-
- Establishing Trust in Crowdsourced Data
- https://arxiv.org/abs/2511.03016
- arXiv:2511.03016v1 Announce Type: new
-Abstract: Crowdsourced data supports real-time decision-making but faces challenges like misinformation, errors, and contributor power concentration. This study systematically examines trust management practices across platforms categorised as Volunteered Geographic Information, Wiki Ecosystems, Social Media, Mobile Crowdsensing, and Specialised Review and Environmental Crowdsourcing. Identified strengths include automated moderation and community validation, while limitations involve rapid data influx, niche oversight gaps, opaque trust metrics, and elite dominance. Proposed solutions incorporate advanced AI tools, transparent reputation metrics, decentralised moderation, structured community engagement, and a ``soft power'' strategy, aiming to equitably distribute decision-making authority and enhance overall data reliability.
- oai:arXiv.org:2511.03016v1
- cs.SI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Iffat Gheyas, Muhammad Rizwan Asghar, Steve Schneider, Alan Woodward
-
-
- Oscillation Analysis and Damping Control for a Proposed North American AC-DC Macrogrid
- https://arxiv.org/abs/2511.03017
- arXiv:2511.03017v1 Announce Type: new
-Abstract: In recent years, several studies conducted by both industry and U.S. Department of Energy (DOE)-funded initiatives have proposed linking North America's Eastern and Western Interconnections (EI and WI) through a multiterminal DC (MTDC) macrogrid. These studies have explored the advantages and opportunities of the proposed configuration from the perspectives of capacity sharing and frequency support. However, the potential challenges of small-signal stability arising from this interconnection have not been thoroughly examined. To address this gap, detailed model-based simulation studies are performed in this paper to assess the risks of poorly damped inter-area oscillations in the proposed macrogrid. A custom-built dynamic model of the MTDC system is developed and integrated with industry-grade models of the EI and WI, incorporating high levels of inverter-based energy resources. Through model-based oscillation analysis, potential shifts in inter-area modes for both EI and WI, resulting from the MTDC integration are characterized, and modes with inadequate damping are identified. Furthermore, to mitigate the risks of unstable oscillations, supplementary damping controllers are designed for the MTDC system, leveraging wide-area feedback to modulate active power set points at selected converter stations. A frequency scanning approach is employed for data-driven model linearization and controller synthesis. The damping performance is evaluated under the designed operating conditions and selected contingency scenarios.
- oai:arXiv.org:2511.03017v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Kaustav Chatterjee, Sameer Nekkalapu, Antos Varghese, Marcelo Elizondo, Quan Nguyen, Xiaoyuan Fan
-
-
- SLIP: Structural-aware Language-Image Pretraining for Vision-Language Alignment
- https://arxiv.org/abs/2511.03019
- arXiv:2511.03019v1 Announce Type: new
-Abstract: Vision-Language Pretraining (VLP) has achieved remarkable success across various downstream tasks, but such gains are largely driven by scaling up on training data. Yet, literature methods treat image-text pairs as isolated training examples; this neglects the rich relational structure naturally present in many domains, such as e-commerce product co-purchase graphs and social recommendation networks. Inspired by neuroscientific evidence that human encodes knowledge as relationship cognitive maps, we introduce Structure-aware Language-Image Pretraining (SLIP). SLIP integrates a structural contrastive loss to align modalities while also modeling relationships between neighboring entities in a structured graph. To support this paradigm, we construct a large-scale Amazon Product Co-purchase Multimodal Graph Dataset, enabling structured cross-modality supervision at scale. Experiment results show that SLIP consistently outperforms CLIP on cross-modal retrieval and classification tasks in both zero-shot and few-shot settings, showing the value of relational supervision for cross-modal alignment.
- oai:arXiv.org:2511.03019v1
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Wenbo Lu
-
-
- Exploratory Analysis of Cyberattack Patterns on E-Commerce Platforms Using Statistical Methods
- https://arxiv.org/abs/2511.03020
- arXiv:2511.03020v1 Announce Type: new
-Abstract: Cyberattacks on e-commerce platforms have grown in sophistication, threatening consumer trust and operational continuity. This research presents a hybrid analytical framework that integrates statistical modelling and machine learning for detecting and forecasting cyberattack patterns in the e-commerce domain. Using the Verizon Community Data Breach (VCDB) dataset, the study applies Auto ARIMA for temporal forecasting and significance testing, including a Mann-Whitney U test (U = 2579981.5, p = 0.0121), which confirmed that holiday shopping events experienced significantly more severe cyberattacks than non-holiday periods. ANOVA was also used to examine seasonal variation in threat severity, while ensemble machine learning models (XGBoost, LightGBM, and CatBoost) were employed for predictive classification. Results reveal recurrent attack spikes during high-risk periods such as Black Friday and holiday seasons, with breaches involving Personally Identifiable Information (PII) exhibiting elevated threat indicators. Among the models, CatBoost achieved the highest performance (accuracy = 85.29%, F1 score = 0.2254, ROC AUC = 0.8247). The framework uniquely combines seasonal forecasting with interpretable ensemble learning, enabling temporal risk anticipation and breach-type classification. Ethical considerations, including responsible use of sensitive data and bias assessment, were incorporated. Despite class imbalance and reliance on historical data, the study provides insights for proactive cybersecurity resource allocation and outlines directions for future real-time threat detection research.
- oai:arXiv.org:2511.03020v1
- cs.CR
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Fatimo Adenike Adeniya (York St John University, London Campus, London, United Kingdom)
-
-
- Adaptive-Sensorless Monitoring of Shipping Containers
- https://arxiv.org/abs/2511.03022
- arXiv:2511.03022v1 Announce Type: new
-Abstract: Monitoring the internal temperature and humidity of shipping containers is essential to preventing quality degradation during cargo transportation. Sensorless monitoring -- machine learning models that predict the internal conditions of the containers using exogenous factors -- shows promise as an alternative to monitoring using sensors. However, it does not incorporate telemetry information and correct for systematic errors, causing the predictions to differ significantly from the live data and confusing the users. In this paper, we introduce the residual correction method, a general framework for correcting for systematic biases in sensorless models after observing live telemetry data. We call this class of models ``adaptive-sensorless'' monitoring. We train and evaluate adaptive-sensorless models on the 3.48 million data points -- the largest dataset of container sensor readings ever used in academic research -- and show that they produce consistent improvements over the baseline sensorless models. When evaluated on the holdout set of the simulated data, they achieve average mean absolute errors (MAEs) of 2.24 $\sim$ 2.31$^\circ$C (vs 2.43$^\circ$C by sensorless) for temperature and 5.72 $\sim$ 7.09% for relative humidity (vs 7.99% by sensorless) and average root mean-squared errors (RMSEs) of 3.19 $\sim$ 3.26$^\circ$C for temperature (vs 3.38$^\circ$C by sensorless) and 7.70 $\sim$ 9.12% for relative humidity (vs 10.0% by sensorless). Adaptive-sensorless models enable more accurate cargo monitoring, early risk detection, and less dependence on full connectivity in global shipping.
- oai:arXiv.org:2511.03022v1
- cs.LG
- cs.AI
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lingqing Shen, Chi Heem Wong, Misaki Mito, Arnab Chakrabarti
-
-
- PublicAgent: Multi-Agent Design Principles From an LLM-Based Open Data Analysis Framework
- https://arxiv.org/abs/2511.03023
- arXiv:2511.03023v1 Announce Type: new
-Abstract: Open data repositories hold potential for evidence-based decision-making, yet are inaccessible to non-experts lacking expertise in dataset discovery, schema mapping, and statistical analysis. Large language models show promise for individual tasks, but end-to-end analytical workflows expose fundamental limitations: attention dilutes across growing contexts, specialized reasoning patterns interfere, and errors propagate undetected. We present PublicAgent, a multi-agent framework that addresses these limitations through decomposition into specialized agents for intent clarification, dataset discovery, analysis, and reporting. This architecture maintains focused attention within agent contexts and enables validation at each stage. Evaluation across five models and 50 queries derives five design principles for multi-agent LLM systems. First, specialization provides value independent of model strength--even the strongest model shows 97.5% agent win rates, with benefits orthogonal to model scale. Second, agents divide into universal (discovery, analysis) and conditional (report, intent) categories. Universal agents show consistent effectiveness (std dev 12.4%) while conditional agents vary by model (std dev 20.5%). Third, agents mitigate distinct failure modes--removing discovery or analysis causes catastrophic failures (243-280 instances), while removing report or intent causes quality degradation. Fourth, architectural benefits persist across task complexity with stable win rates (86-92% analysis, 84-94% discovery), indicating workflow management value rather than reasoning enhancement. Fifth, wide variance in agent effectiveness across models (42-96% for analysis) requires model-aware architecture design. These principles guide when and why specialization is necessary for complex analytical workflows while enabling broader access to public data through natural language interfaces.
- oai:arXiv.org:2511.03023v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Sina Montazeri, Yunhe Feng, Kewei Sha
-
-
- Assurance Case Development for Evolving Software Product Lines: A Formal Approach
- https://arxiv.org/abs/2511.03026
- arXiv:2511.03026v1 Announce Type: new
-Abstract: In critical software engineering, structured assurance cases (ACs) are used to demonstrate how key system properties are supported by evidence (e.g., test results, proofs). Creating rigorous ACs is particularly challenging in the context of software product lines (SPLs), i.e., sets of software products with overlapping but distinct features and behaviours. Since SPLs can encompass very large numbers of products, developing a rigorous AC for each product individually is infeasible. Moreover, if the SPL evolves, e.g., by the modification or introduction of features, it can be infeasible to assess the impact of this change. Instead, the development and maintenance of ACs ought to be lifted such that a single AC can be developed for the entire SPL simultaneously, and be analyzed for regression in a variability-aware fashion. In this article, we describe a formal approach to lifted AC development and regression analysis. We formalize a language of variability-aware ACs for SPLs and study the lifting of template-based AC development. We also define a regression analysis to determine the effects of SPL evolutions on variability-aware ACs. We describe a model-based assurance management tool which implements these techniques, and illustrate our contributions by developing an AC for a product line of medical devices.
- oai:arXiv.org:2511.03026v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Logan Murphy, Torin Viger, Alessio Di Sandro, Aren A. Babikian, Marsha Chechik
-
-
- Harvesting energy consumption on European HPC systems: Sharing Experience from the CEEC project
- https://arxiv.org/abs/2511.03029
- arXiv:2511.03029v1 Announce Type: new
-Abstract: Energy efficiency has emerged as a central challenge for modern high-performance computing (HPC) systems, where escalating computational demands and architectural complexity have led to significant energy footprints. This paper presents the collective experience of the EuroHPC JU Center of Excellence in Exascale CFD (CEEC) in measuring, analyzing, and optimizing energy consumption across major European HPC systems. We briefly review key methodologies and tools for energy measurement as well as define metrics for reporting results. Through case studies using representative CFD applications (waLBerla, FLEXI/GAL{\AE}XI, Neko, and NekRS), we evaluate energy-to-solution and time-to-solution metrics on diverse architectures, including CPU- and GPU-based partitions of LUMI, MareNostrum5, MeluXina, and JUWELS Booster. Our results highlight the advantages of accelerators and mixed-precision techniques for reducing energy consumption while maintaining computational accuracy. Finally, we advocate the need to facilitate energy measurements on HPC systems in order to raise awareness, teach the community, and take actions toward more sustainable exascale computing.
- oai:arXiv.org:2511.03029v1
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Kajol Kulkarni, Samuel Kemmler, Anna Schwarz, Gulcin Gedik, Yanxiang Chen, Dimitrios Papageorgiou, Ioannis Kavroulakis, Roman Iakymchuk
-
-
- Leveraging Discrete Function Decomposability for Scientific Design
- https://arxiv.org/abs/2511.03032
- arXiv:2511.03032v1 Announce Type: new
-Abstract: In the era of AI-driven science and engineering, we often want to design discrete objects in silico according to user-specified properties. For example, we may wish to design a protein to bind its target, arrange components within a circuit to minimize latency, or find materials with certain properties. Given a property predictive model, in silico design typically involves training a generative model over the design space (e.g., protein sequence space) to concentrate on designs with the desired properties. Distributional optimization -- which can be formalized as an estimation of distribution algorithm or as reinforcement learning policy optimization -- finds the generative model that maximizes an objective function in expectation. Optimizing a distribution over discrete-valued designs is in general challenging because of the combinatorial nature of the design space. However, many property predictors in scientific applications are decomposable in the sense that they can be factorized over design variables in a way that could in principle enable more effective optimization. For example, amino acids at a catalytic site of a protein may only loosely interact with amino acids of the rest of the protein to achieve maximal catalytic activity. Current distributional optimization algorithms are unable to make use of such decomposability structure. Herein, we propose and demonstrate the use of a new distributional optimization algorithm, Decomposition-Aware Distributional Optimization (DADO), that can leverage any decomposability defined by a junction tree on the design variables, to make optimization more efficient. At its core, DADO employs a soft-factorized "search distribution" -- a learned generative model -- for efficient navigation of the search space, invoking graph message-passing to coordinate optimization across linked factors.
- oai:arXiv.org:2511.03032v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- James C. Bowden, Sergey Levine, Jennifer Listgarten
-
-
- Data-Efficient Adaptation and a Novel Evaluation Method for Aspect-based Sentiment Analysis
- https://arxiv.org/abs/2511.03034
- arXiv:2511.03034v1 Announce Type: new
-Abstract: Aspect-based Sentiment Analysis (ABSA) is a fine-grained opinion mining approach that identifies and classifies opinions associated with specific entities (aspects) or their categories within a sentence. Despite its rapid growth and broad potential, ABSA research and resources remain concentrated in commercial domains, leaving analytical needs unmet in high-demand yet low-resource areas such as education and healthcare. Domain adaptation challenges and most existing methods' reliance on resource-intensive in-training knowledge injection further hinder progress in these areas. Moreover, traditional evaluation methods based on exact matches are overly rigid for ABSA tasks, penalising any boundary variations which may misrepresent the performance of generative models. This work addresses these gaps through three contributions: 1) We propose a novel evaluation method, Flexible Text Similarity Matching and Optimal Bipartite Pairing (FTS-OBP), which accommodates realistic extraction boundary variations while maintaining strong correlation with traditional metrics and offering fine-grained diagnostics. 2) We present the first ABSA study of small decoder-only generative language models (SLMs; <7B parameters), examining resource lower bounds via a case study in education review ABSA. We systematically explore data-free (in-context learning and weight merging) and data-light fine-tuning methods, and propose a multitask fine-tuning strategy that significantly enhances SLM performance, enabling 1.5-3.8 B models to surpass proprietary large models and approach benchmark results with only 200-1,000 examples on a single GPU. 3) We release the first public set of education review ABSA resources to support future research in low-resource domains.
- oai:arXiv.org:2511.03034v1
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Yan Cathy Hua, Paul Denny, Jörg Wicker, Katerina Taškova
-
-
- Distributed Incast Detection in Data Center Networks
- https://arxiv.org/abs/2511.03039
- arXiv:2511.03039v1 Announce Type: new
-Abstract: Incast traffic in data centers can lead to severe performance degradation, such as packet loss and increased latency. Effectively addressing incast requires prompt and accurate detection. Existing solutions, including MA-ECN, BurstRadar and Pulser, typically rely on fixed thresholds of switch port egress queue lengths or their gradients to identify microbursts caused by incast flows. However, these queue-length-based methods often suffer from delayed detection and high error rates. In this study, we propose a distributed incast detection method for data center networks at the switch level, leveraging a probabilistic hypothesis test with an optimal detection threshold. By analyzing the arrival intervals of new flows, our algorithm can immediately determine whether a flow is part of incast traffic from its initial packet. The experimental results demonstrate that our method offers significant improvements over existing approaches in both detection speed and inference accuracy.
- oai:arXiv.org:2511.03039v1
- cs.NI
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yiming Zheng, Haoran Qi, Lirui Yu, Zhan Shu, Qing Zhao
-
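The detection idea described in the abstract above, classifying each new flow from the inter-arrival gap between new-flow arrivals via a hypothesis test with an optimal threshold, can be sketched under an assumed model. This sketch models both traffic regimes with exponential inter-arrival times and derives the threshold from a likelihood-ratio test; the paper's actual traffic model, prior, and threshold derivation are not given in the abstract:

```python
import math

def incast_threshold(lam_normal, lam_incast, prior_incast=0.5):
    """Decision threshold on the inter-arrival gap of new flows.

    Assumes (illustratively) exponential inter-arrival times with rate
    lam_normal under normal traffic and lam_incast >> lam_normal during
    an incast burst; solves
        prior_incast * f_incast(x) = (1 - prior_incast) * f_normal(x)
    for exponential densities f(x) = lam * exp(-lam * x).
    """
    ratio = (prior_incast * lam_incast) / ((1 - prior_incast) * lam_normal)
    return math.log(ratio) / (lam_incast - lam_normal)

def is_incast(gap, threshold):
    """Flag a new flow as part of incast traffic from its first packet,
    using only the gap to the previous new-flow arrival."""
    return gap < threshold
```

With, say, 10 new flows/s normally and 1000/s during a burst, gaps of about a millisecond are flagged immediately, consistent with the abstract's claim of detection from the initial packet.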
-
- Quantifying Power Systems Resilience Using Statistical Analysis and Bayesian Learning
- https://arxiv.org/abs/2511.03043
- arXiv:2511.03043v1 Announce Type: new
-Abstract: The increasing frequency and intensity of extreme weather events are significantly affecting the power grid, causing large-scale outages and impacting power system resilience. Yet limited work has been done on systematically modeling the impacts of weather parameters to quantify resilience. This study presents a framework using statistical and Bayesian learning approaches to quantitatively model the relationship between weather parameters and power system resilience metrics. By leveraging real-world publicly available outage and weather data, we identify key weather variables of wind speed, temperature, and precipitation influencing a particular region's resilience metrics. A case study of Cook County, Illinois, and Miami-Dade County, Florida, reveals that these weather parameters are critical factors in resiliency analysis and risk assessment. Additionally, we find that these weather variables have combined effects when studied jointly compared to their effects in isolation. This framework provides valuable insights for understanding how weather events affect power distribution system performance, supporting decision-makers in developing more effective strategies for risk mitigation, resource allocation, and adaptation to changing climatic conditions.
- oai:arXiv.org:2511.03043v1
- eess.SY
- cs.SY
- stat.AP
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Apsara Adhikari, Charlotte Wertz, Anamika Dubey, Arslan Ahmad, Ian Dobson
-
-
- Data-Efficient Realized Volatility Forecasting with Vision Transformers
- https://arxiv.org/abs/2511.03046
- arXiv:2511.03046v1 Announce Type: new
-Abstract: Recent work in financial machine learning has shown the virtue of complexity: the phenomenon by which deep learning methods capable of learning highly nonlinear relationships outperform simpler approaches in financial forecasting. While transformer architectures like Informer have shown promise for financial time series forecasting, the application of transformer models for options data remains largely unexplored. We conduct preliminary studies towards the development of a transformer model for options data by training the Vision Transformer (ViT) architecture, typically used in modern image recognition and classification systems, to predict the realized volatility of an asset over the next 30 days from its implied volatility surface (augmented with date information) for a single day. We show that the ViT can learn seasonal patterns and nonlinear features from the IV surface, suggesting a promising direction for model development.
- oai:arXiv.org:2511.03046v1
- cs.LG
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Emi Soroka, Artem Arzyn
-
-
- Unsupervised Evaluation of Multi-Turn Objective-Driven Interactions
- https://arxiv.org/abs/2511.03047
- arXiv:2511.03047v1 Announce Type: new
-Abstract: Large language models (LLMs) have seen increasing popularity in enterprise applications where AI agents and humans engage in objective-driven interactions. However, these systems are difficult to evaluate: data may be complex and unlabeled; human annotation is often impractical at scale; custom metrics can monitor for specific errors, but not previously-undetected ones; and LLM judges can produce unreliable results. We introduce the first set of unsupervised metrics for objective-driven interactions, leveraging statistical properties of unlabeled interaction data and using fine-tuned LLMs to adapt to distributional shifts. We develop metrics for labeling user goals, measuring goal completion, and quantifying LLM uncertainty without grounding evaluations in human-generated ideal responses. Our approach is validated on open-domain and task-specific interaction data.
- oai:arXiv.org:2511.03047v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Emi Soroka, Tanmay Chopra, Krish Desai, Sanjay Lall
-
-
- ROBoto2: An Interactive System and Dataset for LLM-assisted Clinical Trial Risk of Bias Assessment
- https://arxiv.org/abs/2511.03048
- arXiv:2511.03048v1 Announce Type: new
-Abstract: We present ROBOTO2, an open-source, web-based platform for large language model (LLM)-assisted risk of bias (ROB) assessment of clinical trials. ROBOTO2 streamlines the traditionally labor-intensive ROB v2 (ROB2) annotation process via an interactive interface that combines PDF parsing, retrieval-augmented LLM prompting, and human-in-the-loop review. Users can upload clinical trial reports, receive preliminary answers and supporting evidence for ROB2 signaling questions, and provide real-time feedback or corrections to system suggestions. ROBOTO2 is publicly available at https://roboto2.vercel.app/, with code and data released to foster reproducibility and adoption. We construct and release a dataset of 521 pediatric clinical trial reports (8954 signaling questions with 1202 evidence passages), annotated using both manual and LLM-assisted methods, serving as a benchmark and enabling future research. Using this dataset, we benchmark ROB2 performance for 4 LLMs and provide an analysis of current model capabilities and ongoing challenges in automating this critical aspect of systematic review.
- oai:arXiv.org:2511.03048v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Anthony Hevia, Sanjana Chintalapati, Veronica Ka Wai Lai, Thanh Tam Nguyen, Wai-Tat Wong, Terry Klassen, Lucy Lu Wang
-
-
- No-Human in the Loop: Agentic Evaluation at Scale for Recommendation
- https://arxiv.org/abs/2511.03051
- arXiv:2511.03051v1 Announce Type: new
-Abstract: Evaluating large language models (LLMs) as judges is increasingly critical for building scalable and trustworthy evaluation pipelines. We present ScalingEval, a large-scale benchmarking study that systematically compares 36 LLMs, including GPT, Gemini, Claude, and Llama, across multiple product categories using a consensus-driven evaluation protocol. Our multi-agent framework aggregates pattern audits and issue codes into ground-truth labels via scalable majority voting, enabling reproducible comparison of LLM evaluators without human annotation. Applied to large-scale complementary-item recommendation, the benchmark reports four key findings: (i) Anthropic Claude 3.5 Sonnet achieves the highest decision confidence; (ii) Gemini 1.5 Pro offers the best overall performance across categories; (iii) GPT-4o provides the most favorable latency-accuracy-cost tradeoff; and (iv) GPT-OSS 20B leads among open-source models. Category-level analysis shows strong consensus in structured domains (Electronics, Sports) but persistent disagreement in lifestyle categories (Clothing, Food). These results establish ScalingEval as a reproducible benchmark and evaluation protocol for LLMs as judges, with actionable guidance on scaling, reliability, and model family tradeoffs.
- oai:arXiv.org:2511.03051v1
- cs.AI
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Tao Zhang, Kehui Yao, Luyi Ma, Jiao Chen, Reza Yousefi Maragheh, Kai Zhao, Jianpeng Xu, Evren Korpeoglu, Sushant Kumar, Kannan Achan
-
-
- From Propagation to Prediction: Point-level Uncertainty Evaluation of MLS Point Clouds under Limited Ground Truth
- https://arxiv.org/abs/2511.03053
- arXiv:2511.03053v1 Announce Type: new
-Abstract: Evaluating uncertainty is critical for reliable use of Mobile Laser Scanning (MLS) point clouds in many high-precision applications such as Scan-to-BIM, deformation analysis, and 3D modeling. However, obtaining the ground truth (GT) for evaluation is often costly and infeasible in many real-world applications. To reduce this long-standing reliance on GT in uncertainty evaluation research, this study presents a learning-based framework for MLS point clouds that integrates optimal neighborhood estimation with geometric feature extraction. Experiments on a real-world dataset show that the proposed framework is feasible, and the XGBoost model delivers fully comparable accuracy to Random Forest while achieving substantially higher efficiency (about 3 times faster), providing initial evidence that geometric features can be used to predict point-level uncertainty quantified by the C2C distance. In summary, this study shows that MLS point clouds' uncertainty is learnable, offering a novel learning-based perspective on uncertainty evaluation research.
- oai:arXiv.org:2511.03053v1
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ziyang Xu, Olaf Wysocki, Christoph Holst
-
-
- Read Between the Hyperplanes: On Spectral Projection and Sampling Approaches to Randomized Kaczmarz
- https://arxiv.org/abs/2511.03055
- arXiv:2511.03055v1 Announce Type: new
-Abstract: Among recent developments centered around Randomized Kaczmarz (RK), a row-sampling iterative projection method for large-scale linear systems, several adaptations of the method have yielded faster convergence. Focusing solely on ill-conditioned and overdetermined linear systems, we highlight inter-row relationships that can be leveraged to guide directionally aware projections. In particular, we find that improved convergence rates can be achieved by (i) projecting onto pairwise row differences, (ii) sampling from partitioned clusters of nearly orthogonal rows, or (iii) more frequently sampling spectrally-diverse rows.
- oai:arXiv.org:2511.03055v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- James Nguyen, Oleg Presnyakov, Aditya Radhakhrishnan
-
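Classical RK repeatedly projects the iterate onto the hyperplane of a randomly sampled row of the system; variant (i) in the abstract above instead projects onto pairwise row differences. A minimal sketch of both, with uniform sampling for illustration (the authors' directional sampling schemes are not specified in the abstract):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, pairwise=False, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.

    Each step projects x onto the hyperplane {x : a.x = beta}, where
    (a, beta) is either a uniformly sampled row (classical RK) or the
    difference of two sampled rows (an illustrative take on variant (i);
    not the authors' exact scheme).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        if pairwise:
            i, j = rng.choice(m, size=2, replace=False)
            a, beta = A[i] - A[j], b[i] - b[j]
        else:
            i = rng.integers(m)
            a, beta = A[i], b[i]
        denom = a @ a
        if denom > 0:
            # Orthogonal projection onto the sampled hyperplane.
            x += (beta - a @ x) / denom * a
    return x
```

On a consistent overdetermined system, both variants converge to the solution; the abstract's point is that directionally aware choices of (a, beta) can converge faster when A is ill-conditioned.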
-
- Reading Between the Lines: The One-Sided Conversation Problem
- https://arxiv.org/abs/2511.03056
- arXiv:2511.03056v1 Announce Type: new
-Abstract: Conversational AI is constrained in many real-world settings where only one side of a dialogue can be recorded, such as telemedicine, call centers, and smart glasses. We formalize this as the one-sided conversation problem (1SC): inferring and learning from one side of a conversation. We study two tasks: (1) reconstructing the missing speaker's turns for real-time use cases, and (2) generating summaries from one-sided transcripts. Evaluating prompting and finetuned models on MultiWOZ, DailyDialog, and Candor with both human A/B testing and LLM-as-a-judge metrics, we find that access to one future turn and information about utterance length improves reconstruction, placeholder prompting helps to mitigate hallucination, and while large models generate promising reconstructions with prompting, smaller models require finetuning. Further, high-quality summaries can be generated without reconstructing missing turns. We present 1SC as a novel challenge and report promising results that mark a step toward privacy-aware conversational AI.
- oai:arXiv.org:2511.03056v1
- cs.CL
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Victoria Ebert, Rishabh Singh, Tuochao Chen, Noah A. Smith, Shyamnath Gollakota
-
-
- Microgrids optimal radial reconfiguration via FORWARD algorithm
- https://arxiv.org/abs/2511.03059
- arXiv:2511.03059v1 Announce Type: new
-Abstract: Microgrids offer a promising paradigm for integrating distributed energy resources, bolstering energy resilience, and reducing the impact of blackouts. However, their inherent decentralization and dynamic operation present substantial energy management complexities. These complexities, including balancing supply and demand, ensuring system stability, and minimizing operational costs, often necessitate solving computationally intractable NP-hard Mixed-Integer Non-Linear Programming (MINLP) problems. Traditional MINLP solvers struggle with the scalability and feasibility guarantees required for these challenges. To address this, this paper tackles the problem of resource allocation and radial configuration design for microgrid power distribution and proposes an abstracted problem, which is solved by introducing a permutation-based iterative search method on top of the recently introduced FORWARD method to efficiently identify feasible, near-optimal radial network structures while inherently respecting physical constraints. Furthermore, this paper investigates the integration of the proposed method as a warm-start strategy for benchmark MINLP solvers, offering a scalable solution for comprehensive microgrid design.
- oai:arXiv.org:2511.03059v1
- eess.SY
- cs.NA
- cs.SY
- math.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Joan Vendrell Gallart, Russell Bent, Solmaz Kia
-
-
- The Curved Spacetime of Transformer Architectures
- https://arxiv.org/abs/2511.03060
- arXiv:2511.03060v1 Announce Type: new
-Abstract: We present a geometric framework for understanding Transformer-based language models, drawing an explicit analogy to General Relativity. Queries and keys induce an effective metric on representation space, and attention acts as a discrete connection that implements parallel transport of value vectors across tokens. Stacked layers provide discrete time-slices through which token representations evolve on this curved manifold, while backpropagation plays the role of a least-action principle that shapes loss-minimizing trajectories in parameter space. If this analogy is correct, token embeddings should not traverse straight paths in feature space; instead, their layer-wise steps should bend and reorient as interactions mediated by embedding space curvature. To test this prediction, we design experiments that expose both the presence and the consequences of curvature: (i) we visualize a curvature landscape for a full paragraph, revealing how local turning angles vary across tokens and layers; (ii) we show through simulations that excess counts of sharp/flat angles and longer length-to-chord ratios are not explainable by dimensionality or chance; and (iii) inspired by Einstein's eclipse experiment, we probe deflection under controlled context edits, demonstrating measurable, meaning-consistent bends in embedding trajectories that confirm attention-induced curvature.
- oai:arXiv.org:2511.03060v1
- cs.LG
- cs.CL
- math.DG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Riccardo Di Sipio, Jairo Diaz-Rodriguez, Luis Serrano
-
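The turning angles and length-to-chord ratios measured in experiments (i) and (ii) of the abstract above can be computed directly from a token's layer-wise trajectory of embeddings. A small illustrative sketch (not the authors' code):

```python
import numpy as np

def turning_angles(traj):
    """Turning angle at each interior point of a layer-wise embedding
    trajectory: L+1 layer embeddings give L steps and L-1 angles.
    A straight path yields angles of zero; bends indicate curvature."""
    steps = np.diff(np.asarray(traj, dtype=float), axis=0)
    angles = []
    for u, v in zip(steps[:-1], steps[1:]):
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(angles)

def length_to_chord(traj):
    """Path length divided by chord length; equals 1 for a straight
    path and grows as the trajectory bends."""
    traj = np.asarray(traj, dtype=float)
    steps = np.diff(traj, axis=0)
    return np.linalg.norm(steps, axis=1).sum() / np.linalg.norm(traj[-1] - traj[0])
```

Feeding in the stacked hidden states of one token across layers gives the per-layer turning-angle profile the abstract visualizes.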
-
- A Tsallis-Entropy Lens on Genetic Variation
- https://arxiv.org/abs/2511.03063
- arXiv:2511.03063v1 Announce Type: new
-Abstract: We introduce an information-theoretic generalization of the fixation statistic, the Tsallis-order $q$ F-statistic, $F_q$, which measures the fraction of Tsallis $q$-entropy lost within subpopulations relative to the pooled population. The family nests the classical variance-based fixation index $F_{\textbf{ST}}$ at $q{=}2$ and a Shannon-entropy analogue at $q{=}1$, whose absolute form equals the mutual information between alleles and population labels. By varying $q$, $F_q$ acts as a spectral differentiator that up-weights rare variants at low $q$, while $q{>}1$ increasingly emphasizes common variants, providing a more fine-grained view of differentiation than $F_{\textbf{ST}}$ when allele-frequency spectra are skewed. On real data (865 Oceanian genomes with 1,823,000 sites) and controlled genealogical simulations (seeded from 1,432 founders from HGDP and 1000 Genomes panels, with 322,216 sites), we show that $F_q$ in One-vs-Rest (OVR) and Leave-One-Out (LOO) modes provides clear attribution of which subpopulations drive regional structure, and sensitively timestamps isolation-migration events and founder effects. $F_q$ serves as finer-resolution complement for simulation audits and population-structure summaries.
- oai:arXiv.org:2511.03063v1
- cs.IT
- cs.CE
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Margarita Geleta, Daniel Mas Montserrat, Alexander G. Ioannidis
-
-
- Homomorphism distortion: A metric to distinguish them all and in the latent space bind them
- https://arxiv.org/abs/2511.03068
- arXiv:2511.03068v1 Announce Type: new
-Abstract: For far too long, expressivity of graph neural networks has been measured \emph{only} in terms of combinatorial properties. In this work we stray away from this tradition and provide a principled way to measure similarity between vertex attributed graphs. We denote this measure as the \emph{graph homomorphism distortion}. We show it can \emph{completely characterize} graphs and thus is also a \emph{complete graph embedding}. However, somewhere along the road, we run into the graph canonization problem. To circumvent this obstacle, we devise a way to efficiently compute this measure via sampling, which in expectation ensures \emph{completeness}. Additionally, we also discovered that we can obtain a metric from this measure. We validate our claims empirically and find that the \emph{graph homomorphism distortion}: (1.) fully distinguishes the \texttt{BREC} dataset with up to $4$-WL non-distinguishable graphs, and (2.) \emph{outperforms} previous methods inspired by homomorphisms on the \texttt{ZINC-12k} dataset.
- These theoretical results, (and their empirical validation), pave the way for future characterization of graphs, extending the graph theoretic tradition to new frontiers.
- oai:arXiv.org:2511.03068v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Martin Carrasco, Olga Zaghen, Erik Bekkers, Bastian Rieck
-
-
- Epidemiology of Large Language Models: A Benchmark for Observational Distribution Knowledge
- https://arxiv.org/abs/2511.03070
- arXiv:2511.03070v1 Announce Type: new
-Abstract: Artificial intelligence (AI) systems hold great promise for advancing various scientific disciplines, and are increasingly used in real-world applications. Despite their remarkable progress, further capabilities are expected in order to achieve more general types of intelligence. A critical distinction in this context is between factual knowledge, which can be evaluated against true or false answers (e.g., "what is the capital of England?"), and probabilistic knowledge, reflecting probabilistic properties of the real world (e.g., "what is the sex of a computer science graduate in the US?"). In this paper, our goal is to build a benchmark for understanding the capabilities of LLMs in terms of knowledge of probability distributions describing the real world. Given that LLMs are trained on vast amounts of text, it may be plausible that they internalize aspects of these distributions. Indeed, LLMs are touted as powerful universal approximators of real-world distributions. At the same time, classical results in statistics, known as the curse of dimensionality, highlight fundamental challenges in learning distributions in high dimensions, challenging the notion of universal distributional learning. In this work, we develop the first benchmark to directly test this hypothesis, evaluating whether LLMs have access to empirical distributions describing real-world populations across domains such as economics, health, education, and social behavior. Our results demonstrate that LLMs perform poorly overall, and do not seem to internalize real-world statistics naturally. When interpreted in the context of Pearl's Causal Hierarchy (PCH), our benchmark demonstrates that language models do not contain knowledge on observational distributions (Layer 1 of PCH), and thus the Causal Hierarchy Theorem implies that interventional (Layer 2) and counterfactual (Layer 3) knowledge of these models is also limited.
- oai:arXiv.org:2511.03070v1
- cs.AI
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Drago Plecko, Patrik Okanovic, Torsten Hoefler, Elias Bareinboim
-
-
- Online Learning to Rank under Corruption: A Robust Cascading Bandits Approach
- https://arxiv.org/abs/2511.03074
- arXiv:2511.03074v1 Announce Type: new
-Abstract: Online learning to rank (OLTR) studies how to recommend a short ranked list of items from a large pool and improves future rankings based on user clicks. This setting is commonly modeled as cascading bandits, where the objective is to maximize the likelihood that the user clicks on at least one of the presented items across as many timesteps as possible. However, such systems are vulnerable to click fraud and other manipulations (i.e., corruption), where bots or paid click farms inject corrupted feedback that misleads the learning process and degrades user experience. In this paper, we propose MSUCB, a robust algorithm that incorporates a novel mean-of-medians estimator, which, to our knowledge, is applied to the bandits-with-corruption setting for the first time. This estimator behaves like a standard mean in the absence of corruption, so no cost is paid for robustness. Under corruption, the median step filters out outliers and corrupted samples, keeping the estimate close to its true value. Updating this estimate at every round further accelerates empirical convergence in experiments. Hence, MSUCB achieves optimal logarithmic regret in the absence of corruption and degrades gracefully under corruptions, with regret increasing only by an additive term tied to the total corruption. Comprehensive and extensive experiments on real-world datasets further demonstrate that our approach consistently outperforms prior methods while maintaining strong robustness. In particular, it achieves a \(97.35\%\) and a \(91.60\%\) regret improvement over two state-of-the-art methods.
- oai:arXiv.org:2511.03074v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Fatemeh Ghaffari, Siddarth Sitaraman, Xutong Liu, Xuchuang Wang, Mohammad Hajiesmaili
-
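One natural reading of the "mean-of-medians" estimator named in the abstract above: split the samples into groups, take each group's median, and average the medians. The grouping and group sizes below are assumptions for illustration, not the paper's construction:

```python
import numpy as np

def mean_of_medians(samples, num_groups):
    """Average of per-group medians (illustrative sketch).

    On clean, roughly symmetric data the group medians track the mean,
    so the estimator behaves like a standard mean; a few corrupted
    samples land in the tails of their groups and are filtered out by
    the median step.
    """
    samples = np.asarray(samples, dtype=float)
    groups = np.array_split(samples, num_groups)
    return float(np.mean([np.median(g) for g in groups]))
```

For example, injecting one huge corrupted click-feedback value into a batch barely moves this estimate, whereas the plain sample mean is pulled arbitrarily far.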
-
- A Collaborative Reasoning Framework for Anomaly Diagnostics in Underwater Robotics
- https://arxiv.org/abs/2511.03075
- arXiv:2511.03075v1 Announce Type: new
-Abstract: The safe deployment of autonomous systems in safety-critical settings requires a paradigm that combines human expertise with AI-driven analysis, especially when anomalies are unforeseen. We introduce AURA (Autonomous Resilience Agent), a collaborative framework for anomaly and fault diagnostics in robotics. AURA integrates large language models (LLMs), a high-fidelity digital twin (DT), and human-in-the-loop interaction to detect and respond to anomalous behavior in real time. The architecture uses two agents with clear roles: (i) a low-level State Anomaly Characterization Agent that monitors telemetry and converts signals into a structured natural-language problem description, and (ii) a high-level Diagnostic Reasoning Agent that conducts a knowledge-grounded dialogue with an operator to identify root causes, drawing on external sources. Human-validated diagnoses are then converted into new training examples that refine the low-level perceptual model. This feedback loop progressively distills expert knowledge into the AI, transforming it from a static tool into an adaptive partner. We describe the framework's operating principles and provide a concrete implementation, establishing a pattern for trustworthy, continually improving human-robot teams.
- oai:arXiv.org:2511.03075v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Markus Buchholz, Ignacio Carlucho, Yvan R. Petillot
-
-
- WorldPlanner: Monte Carlo Tree Search and MPC with Action-Conditioned Visual World Models
- https://arxiv.org/abs/2511.03077
- arXiv:2511.03077v1 Announce Type: new
-Abstract: Robots must understand their environment from raw sensory inputs and reason about the consequences of their actions in it to solve complex tasks. Behavior Cloning (BC) leverages task-specific human demonstrations to learn this knowledge as end-to-end policies. However, these policies are difficult to transfer to new tasks, and generating training data is challenging because it requires careful demonstrations and frequent environment resets. In contrast to such a policy-based view, in this paper we take a model-based approach where we collect a few hours of unstructured easy-to-collect play data to learn an action-conditioned visual world model, a diffusion-based action sampler, and optionally a reward model. The world model -- in combination with the action sampler and a reward model -- is then used to optimize long sequences of actions with a Monte Carlo Tree Search (MCTS) planner. The resulting plans are executed on the robot via a zeroth-order Model Predictive Controller (MPC). We show that the action sampler mitigates hallucinations of the world model during planning and validate our approach on 3 real-world robotic tasks with varying levels of planning and modeling complexity. Our experiments support the hypothesis that planning leads to a significant improvement over BC baselines on a standard manipulation test environment.
- oai:arXiv.org:2511.03077v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- R. Khorrambakht, Joaquim Ortiz-Haro, Joseph Amigo, Omar Mostafa, Daniel Dugas, Franziska Meier, Ludovic Righetti
-
-
- 3D Cal: An Open-Source Software Library for Calibrating Tactile Sensors
- https://arxiv.org/abs/2511.03078
- arXiv:2511.03078v1 Announce Type: new
-Abstract: Tactile sensing plays a key role in enabling dexterous and reliable robotic manipulation, but realizing this capability requires substantial calibration to convert raw sensor readings into physically meaningful quantities. Despite its near-universal necessity, the calibration process remains ad hoc and labor-intensive. Here, we introduce 3D Cal, an open-source library that transforms a low-cost 3D printer into an automated probing device capable of generating large volumes of labeled training data for tactile sensor calibration. We demonstrate the utility of 3D Cal by calibrating two commercially available vision-based tactile sensors, DIGIT and GelSight Mini, to reconstruct high-quality depth maps using the collected data and a custom convolutional neural network. In addition, we perform a data ablation study to determine how much data is needed for accurate calibration, providing practical guidelines for researchers working with these specific sensors, and we benchmark the trained models on previously unseen objects to evaluate calibration accuracy and generalization performance. By automating tactile sensor calibration, 3D Cal can accelerate tactile sensing research, simplify sensor deployment, and promote the practical integration of tactile sensing in robotic platforms.
- oai:arXiv.org:2511.03078v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Rohan Kota, Kaival Shah, J. Edward Colgate, Gregory Reardon
-
-
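The core idea of repurposing a 3D printer as a probing device amounts to driving the toolhead over a grid of tap points while the sensor records. A minimal sketch that emits such a probing routine as G-code follows; all coordinates, depths, and feed rates are illustrative placeholders, not values or APIs from 3D Cal.

```python
def probe_grid_gcode(x_range, y_range, step, z_touch, z_safe=5.0, feed=1200):
    """Emit G-code that taps an indenter onto a sensor pad over a regular grid.

    Every numeric value here is a hypothetical placeholder; a real routine
    must be tuned to the printer and sensor geometry."""
    lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
    y_points = []
    y = y_range[0]
    while y <= y_range[1] + 1e-9:
        y_points.append(y)
        y += step
    x = x_range[0]
    while x <= x_range[1] + 1e-9:
        for y in y_points:
            lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z_safe} F{feed}")
            lines.append(f"G1 Z{z_touch} F{feed // 4} ; press indenter")
            lines.append(f"G1 Z{z_safe} F{feed} ; retract")
        x += step
    return lines

gcode = probe_grid_gcode((0, 10), (0, 10), step=5, z_touch=0.2)
```

Each tap position can then be paired with the sensor frame captured at that moment to form a labeled training example.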
- LogicSparse: Enabling Engine-Free Unstructured Sparsity for Quantised Deep-learning Accelerators
- https://arxiv.org/abs/2511.03079
- arXiv:2511.03079v1 Announce Type: new
-Abstract: FPGAs have been shown to be a promising platform for deploying Quantised Neural Networks (QNNs) with high-speed, low-latency, and energy-efficient inference. However, the complexity of modern deep-learning models limits the performance on resource-constrained edge devices. While quantisation and pruning alleviate these challenges, unstructured sparsity remains underexploited due to irregular memory access. This work introduces a framework that embeds unstructured sparsity into dataflow accelerators, eliminating the need for dedicated sparse engines and preserving parallelism. A hardware-aware pruning strategy is introduced to further improve efficiency and streamline the design flow. On LeNet-5, the framework attains 51.6x compression and 1.23x throughput improvement using only 5.12% of LUTs, effectively exploiting unstructured sparsity for QNN acceleration.
- oai:arXiv.org:2511.03079v1
- cs.AR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Changhong Li, Biswajit Basu, Shreejith Shanker
-
-
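Unstructured sparsity of the kind exploited above is typically produced by magnitude pruning: zero out the smallest-magnitude weights regardless of position, leaving an irregular mask. The sketch below shows only this generic criterion; the paper's hardware-aware pruning strategy is not spelled out in the abstract, so nothing here should be read as its actual algorithm.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)        # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold   # keep strictly-larger magnitudes
    return weights * mask

w = np.array([[0.9, -0.05], [0.01, -0.7]])
pruned = magnitude_prune(w, sparsity=0.5)   # -> [[0.9, 0.0], [0.0, -0.7]]
```

The resulting zeros fall anywhere in the matrix, which is exactly the irregular-memory-access problem the engine-free dataflow mapping targets.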
- PolyNorm: Few-Shot LLM-Based Text Normalization for Text-to-Speech
- https://arxiv.org/abs/2511.03080
- arXiv:2511.03080v1 Announce Type: new
-Abstract: Text Normalization (TN) is a key preprocessing step in Text-to-Speech (TTS) systems, converting written forms into their canonical spoken equivalents. Traditional TN systems can exhibit high accuracy, but involve substantial engineering effort, are difficult to scale, and pose challenges to language coverage, particularly in low-resource settings. We propose PolyNorm, a prompt-based approach to TN using Large Language Models (LLMs), aiming to reduce the reliance on manually crafted rules and enable broader linguistic applicability with minimal human intervention. Additionally, we present a language-agnostic pipeline for automatic data curation and evaluation, designed to facilitate scalable experimentation across diverse languages. Experiments across eight languages show consistent reductions in the word error rate (WER) compared to a production-grade system. To support further research, we release PolyNorm-Benchmark, a multilingual data set covering a diverse range of text normalization phenomena.
- oai:arXiv.org:2511.03080v1
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Michel Wong, Ali Alshehri, Sophia Kao, Haotian He
-
-
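The evaluation metric used above, word error rate, is simply a word-level Levenshtein (edit) distance normalised by the reference length. A self-contained sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for the Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

# "3:45pm" normalised two ways; one word differs -> WER = 1/4.
score = wer("three forty five pm", "three forty five am")
```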
- CRSF: Enabling QoS-Aware Beyond-Connectivity Service Sharing in 6G Local Networks
- https://arxiv.org/abs/2511.03081
- arXiv:2511.03081v1 Announce Type: new
-Abstract: Sixth-generation (6G) networks are envisioned to support interconnected local subnetworks that can share specialized, beyond-connectivity services. However, a standardized architecture for discovering and selecting these services across network boundaries does not yet exist. To address this gap, this paper introduces the Central Repository and Selection Function (CRSF), a novel network function for the 6G core that facilitates efficient inter-subnetwork service discovery and selection. We formulate the selection process as a QoS-aware optimization problem designed to balance service quality metrics with user-defined priorities. We evaluate our system model through simulations for a sensing service scenario and observe a consistently higher aggregate Quality of Service (QoS) compared to the baseline selection strategy. The proposed CRSF provides a foundational and extensible mechanism for building standardized, collaborative, and service-centric interconnected networks essential for the 6G era.
- oai:arXiv.org:2511.03081v1
- cs.NI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pragya Sharma, Amanda Xiang, Abbas Kiani, John Kaippallimalil, Tony Saboorian, Haining Wang
-
-
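At its simplest, a QoS-aware selection of the kind formulated above reduces to ranking candidate providers by a weighted score over their metrics, with the weights expressing user-defined priorities. The metric names, weights, and provider labels below are all hypothetical; the actual CRSF optimization is not reproduced here.

```python
def select_service(candidates, weights):
    """Pick the provider with the best weighted QoS score.

    `candidates` maps provider name -> metric dict; `weights` expresses
    user priorities over those metrics (illustrative names only)."""
    def score(metrics):
        return sum(weights[m] * metrics[m] for m in weights)
    return max(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "subnet_a": {"accuracy": 0.90, "availability": 0.99},
    "subnet_b": {"accuracy": 0.95, "availability": 0.90},
}
# A user that prioritises sensing accuracy over availability:
best = select_service(candidates, {"accuracy": 0.8, "availability": 0.2})
```

Swapping the weights flips the preference, which is the "user-defined priorities" knob in the abstract.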
- An Analytical Approach to Parallel Repetition via CSP Inverse Theorems
- https://arxiv.org/abs/2511.03083
- arXiv:2511.03083v1 Announce Type: new
-Abstract: Let $\mathcal{G}$ be a $k$-player game with value $<1$, whose query distribution is such that no marginal on $k-1$ players admits a non-trivial Abelian embedding. We show that for every $n\geq N$, the value of the $n$-fold parallel repetition of $\mathcal{G}$ is $$ \text{val}(\mathcal{G}^{\otimes n}) \leq \frac{1}{\underbrace{\log\log\cdots\log}_{C\text{ times}} n}, $$ where $N=N(\mathcal{G})$ and $1\leq C\leq k^{O(k)}$ are constants. As a consequence, we obtain a parallel repetition theorem for all $3$-player games whose query distribution is pairwise-connected. Prior to our work, only inverse Ackermann decay bounds were known for such games [Ver96].
- As additional special cases, we obtain a unified proof for all known parallel repetition theorems, albeit with weaker bounds: (1) A new analytic proof of parallel repetition for all 2-player games [Raz98, Hol09, DS14]. (2) A new proof of parallel repetition for all $k$-player playerwise connected games [DHVY17, GHMRZ22]. (3) Parallel repetition for all $3$-player games (in particular $3$-XOR games) whose query distribution has no non-trivial Abelian embedding into $(\mathbb{Z}, +)$ [BKM23c, BBKLM25]. (4) Parallel repetition for all 3-player games with binary inputs [HR20, GHMRZ21, GHMRZ22, GMRZ22].
- oai:arXiv.org:2511.03083v1
- cs.CC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Amey Bhangale, Mark Braverman, Subhash Khot, Yang P. Liu, Dor Minzer, Kunal Mittal
-
-
- A Computational Approach to Analyzing Disrupted Language in Schizophrenia: Integrating Surprisal and Coherence Measures
- https://arxiv.org/abs/2511.03089
- arXiv:2511.03089v1 Announce Type: new
-Abstract: Language disruptions are one of the well-known effects of schizophrenia symptoms. They are often manifested as disorganized speech and impaired discourse coherence. These abnormalities in spontaneous language production reflect underlying cognitive disturbances and have the potential to serve as objective markers for symptom severity and diagnosis of schizophrenia. This study focuses on how these language disruptions can be characterized in terms of two computational linguistic measures: surprisal and semantic coherence. By computing surprisal and semantic coherence of language using computational models, this study investigates how they differ between subjects with schizophrenia and healthy controls. Furthermore, this study provides further insight into how language disruptions in terms of these linguistic measures change with varying degrees of schizophrenia symptom severity.
- oai:arXiv.org:2511.03089v1
- cs.CL
- eess.AS
- eess.SP
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Gowtham Premananth, Carol Espy-Wilson
-
-
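The two computational measures named above have standard operationalisations: surprisal is the negative log-probability a language model assigns each token given its context, and semantic coherence is commonly the average cosine similarity between embeddings of consecutive utterances. A self-contained sketch with made-up probabilities and embeddings; a real pipeline would obtain both from pretrained models.

```python
import math

def surprisal(token_probs):
    """Per-token surprisal in bits: -log2 p(token | context)."""
    return [-math.log2(p) for p in token_probs]

def coherence(embeddings):
    """Mean cosine similarity between consecutive utterance embeddings."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    sims = [cos(embeddings[i], embeddings[i + 1])
            for i in range(len(embeddings) - 1)]
    return sum(sims) / len(sims)

# Made-up LM probabilities and 2-d "embeddings", for illustration only.
s = surprisal([0.5, 0.25])                         # -> [1.0, 2.0] bits
c = coherence([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

Higher surprisal and lower coherence are the directions associated with disorganized speech in this line of work.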
- SnapStream: Efficient Long Sequence Decoding on Dataflow Accelerators
- https://arxiv.org/abs/2511.03092
- arXiv:2511.03092v1 Announce Type: new
-Abstract: The proliferation of 100B+ parameter Large Language Models (LLMs) with 100k+ context length support has resulted in increasing demands for on-chip memory to support large KV caches. Techniques such as StreamingLLM and SnapKV demonstrate how to control KV cache size while maintaining model accuracy. Yet, these techniques are not commonly used within industrial deployments using frameworks like vLLM or SGLang. The reason is twofold: on one hand, the static graphs and continuous batching methodology employed by these frameworks make it difficult to admit modifications to the standard multi-head attention algorithm, while on the other hand, the accuracy implications of such techniques on modern instruction-following and reasoning models are not well understood, obscuring the case for implementing these techniques. In this paper, we explore these accuracy implications on Llama-3.1-8B-Instruct and DeepSeek-R1, and develop SnapStream, a KV cache compression method that can be deployed at scale. We demonstrate the efficacy of SnapStream in a 16-way tensor-parallel deployment of DeepSeek-671B on SambaNova SN40L accelerators running at 128k context length and up to 1832 tokens per second in a real production setting. SnapStream enables $4\times$ improved on-chip memory usage and introduces minimal accuracy degradation on LongBench-v2, AIME24 and LiveCodeBench. To the best of our knowledge, this is the first implementation of sparse KV attention techniques deployed in a production inference system with static graphs and continuous batching.
- oai:arXiv.org:2511.03092v1
- cs.AI
- cs.AR
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jonathan Li, Nasim Farahini, Evgenii Iuliugin, Magnus Vesterlund, Christian Haggstrom, Guangtao Wang, Shubhangi Upasani, Ayush Sachdeva, Rui Li, Faline Fu, Chen Wu, Ayesha Siddiqua, John Long, Tuowen Zhao, Matheen Musaddiq, Hakan Zeffer, Yun Du, Mingran Wang, Qinghua Li, Bo Li, Urmish Thakker, Raghu Prabhakar
-
-
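The StreamingLLM-style retention rule this work builds on keeps a few initial "attention sink" positions plus a sliding window of the most recent positions, dropping everything in between. A list-based sketch of that rule (SnapStream's actual compression combines this with SnapKV-style selection, which is not shown):

```python
def retain_kv(cache, n_sink, window):
    """StreamingLLM-style KV retention: keep the first `n_sink` 'attention
    sink' entries plus a sliding window of the last `window` entries."""
    if len(cache) <= n_sink + window:
        return list(cache)               # nothing to evict yet
    return list(cache[:n_sink]) + list(cache[-window:])

positions = list(range(10))              # stand-in for per-token KV entries
kept = retain_kv(positions, n_sink=2, window=4)   # -> [0, 1, 6, 7, 8, 9]
```

Because the retained set has a fixed maximum size, on-chip KV memory stays bounded regardless of sequence length.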
- A Plug-and-Play Framework for Volumetric Light-Sheet Image Reconstruction
- https://arxiv.org/abs/2511.03093
- arXiv:2511.03093v1 Announce Type: new
-Abstract: Cardiac contraction is a rapid, coordinated process that unfolds across three-dimensional tissue on millisecond timescales. Traditional optical imaging is often inadequate for capturing dynamic cellular structure in the beating heart because of a fundamental trade-off between spatial and temporal resolution. To overcome these limitations, we propose a high-performance computational imaging framework that integrates Compressive Sensing (CS) with Light-Sheet Microscopy (LSM) for efficient, low-phototoxic cardiac imaging. The system performs compressed acquisition of fluorescence signals via random binary mask coding using a Digital Micromirror Device (DMD). We propose a Plug-and-Play (PnP) framework, solved using the alternating direction method of multipliers (ADMM), which flexibly incorporates advanced denoisers, including Tikhonov, Total Variation (TV), and BM3D. To preserve structural continuity in dynamic imaging, we further introduce temporal regularization enforcing smoothness between adjacent z-slices. Experimental results on zebrafish heart imaging under high compression ratios demonstrate that the proposed method successfully reconstructs cellular structures with excellent denoising performance and image clarity, validating the effectiveness and robustness of our algorithm in real-world high-speed, low-light biological imaging scenarios.
- oai:arXiv.org:2511.03093v1
- cs.CV
- cs.NA
- math.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yi Gong, Xinyuan Zhang, Jichen Chai, Yichen Ding, Yifei Lou
-
-
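The PnP-ADMM iteration described above alternates a data-consistency step, a denoising step (the "plugged-in" prior), and a dual update. The sketch below uses a diagonal masking forward operator so the data step has a closed form, and a toy moving-average denoiser in place of Tikhonov/TV/BM3D; it illustrates the algorithmic skeleton only, not the authors' DMD-coded acquisition model.

```python
import numpy as np

def pnp_admm(y, mask, denoise, rho=1.0, iters=30):
    """Plug-and-Play ADMM for a masked acquisition y = mask * x."""
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        # x-update: closed form because the forward operator is diagonal
        x = (mask * y + rho * (z - u)) / (mask + rho)
        z = denoise(x + u)          # z-update: denoiser acts as the prior
        u = u + x - z               # dual update
    return z

def moving_average(v, k=3):
    # Toy denoiser standing in for Tikhonov / TV / BM3D.
    return np.convolve(v, np.ones(k) / k, mode="same")

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, np.pi, 64))
mask = (rng.random(64) < 0.5).astype(float)   # keep roughly half the samples
y = mask * truth
recon = pnp_admm(y, mask, moving_average)
err = np.linalg.norm(recon - truth) / np.linalg.norm(truth)
```

Swapping `moving_average` for a stronger denoiser is the whole point of the PnP formulation: the ADMM skeleton stays unchanged.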
- ALAS: Transactional and Dynamic Multi-Agent LLM Planning
- https://arxiv.org/abs/2511.03094
- arXiv:2511.03094v1 Announce Type: new
-Abstract: Large language models enable flexible multi-agent planning but remain fragile in practice: verification is often circular, state changes are not tracked for repair, and small faults trigger costly global recomputation. We present ALAS, a stateful, disruption-aware framework that separates planning from non-circular validation, records a versioned execution log for grounded checks and restore points, and performs localized repair that preserves work in progress. The validator operates independently of the planning LLM with fresh, bounded context, avoiding self-check loops and mid-context attrition. The repair protocol edits only the minimal affected region under explicit policies (retry, catch, timeout, backoff, idempotency keys, compensation, loop guards) defined in a canonical workflow IR that maps to Amazon States Language and Argo Workflows. On job-shop scheduling suites (DMU, TA) across five classical benchmarks, ALAS matches or exceeds strong single-LLM and multi-agent baselines, achieving 83.7% success, reducing token usage by 60%, and running 1.82x faster under comparable settings. A minimal reliability study shows that the validator detects injected structural faults with low overhead, and that localized repair contains runtime perturbations with a bounded edit radius and less makespan degradation than global recompute. Results indicate that the combination of validator isolation, versioned execution logs, and localized repair provides measurable efficiency, feasibility, and scalability for multi-agent LLM planning. Code and seeds will be released.
- oai:arXiv.org:2511.03094v1
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Longling Geng, Edward Y. Chang
-
-
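The repair-policy vocabulary listed above (retry, backoff, idempotency keys, and so on) maps onto familiar execution-wrapper patterns. A minimal sketch of three of those policies; the function names and step structure are illustrative, not ALAS's workflow IR:

```python
import time

_COMPLETED = {}  # idempotency ledger: step key -> cached result

def run_with_policy(step, max_retries=3, base_delay=0.01):
    """Run a workflow step under retry, exponential-backoff, and
    idempotency-key policies (a small subset of those the abstract lists)."""
    key = step["idempotency_key"]
    if key in _COMPLETED:                  # idempotency: never redo finished work
        return _COMPLETED[key]
    delay = base_delay
    for attempt in range(max_retries + 1):
        try:
            result = step["action"]()
            _COMPLETED[key] = result
            return result
        except Exception:
            if attempt == max_retries:     # retries exhausted: escalate to caller
                raise
            time.sleep(delay)
            delay *= 2                     # exponential backoff

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient fault")
    return "ok"

result = run_with_policy({"idempotency_key": "step-1", "action": flaky})
```

Re-invoking the same step key returns the cached result without re-executing the action, which is what makes localized repair safe to retry.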
- Sparse, self-organizing ensembles of local kernels detect rare statistical anomalies
- https://arxiv.org/abs/2511.03095
- arXiv:2511.03095v1 Announce Type: new
-Abstract: Modern artificial intelligence has revolutionized our ability to extract rich and versatile data representations across scientific disciplines. Yet, the statistical properties of these representations remain poorly controlled, causing misspecified anomaly detection (AD) methods to falter. Weak or rare signals can remain hidden within the apparent regularity of normal data, creating a gap in our ability to detect and interpret anomalies. We examine this gap and identify a set of structural desiderata for detection methods operating under minimal prior information: sparsity, to enforce parsimony; locality, to preserve geometric sensitivity; and competition, to promote efficient allocation of model capacity. These principles define a class of self-organizing local kernels that adaptively partition the representation space around regions of statistical imbalance. As an instantiation of these principles, we introduce SparKer, a sparse ensemble of Gaussian kernels trained within a semi-supervised Neyman--Pearson framework to locally model the likelihood ratio between a sample that may contain anomalies and a nominal, anomaly-free reference. We provide theoretical insights into the mechanisms that drive detection and self-organization in the proposed model, and demonstrate the effectiveness of this approach on realistic high-dimensional problems of scientific discovery, open-world novelty detection, intrusion detection, and generative-model validation. Our applications span both the natural- and computer-science domains. We demonstrate that ensembles containing only a handful of kernels can identify statistically significant anomalous locations within representation spaces of thousands of dimensions, underscoring the interpretability, efficiency, and scalability of the proposed approach.
- oai:arXiv.org:2511.03095v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gaia Grosso, Sai Sumedh R. Hindupur, Thomas Fel, Samuel Bright-Thonney, Philip Harris, Demba Ba
-
-
- ISC-Perception: A Hybrid Computer Vision Dataset for Object Detection in Novel Steel Assembly
- https://arxiv.org/abs/2511.03098
- arXiv:2511.03098v1 Announce Type: new
-Abstract: The Intermeshed Steel Connection (ISC) system, when paired with robotic manipulators, can accelerate steel-frame assembly and improve worker safety by eliminating manual assembly. Dependable perception is one of the initial stages for ISC-aware robots. However, this is hampered by the absence of a dedicated image corpus, as collecting photographs on active construction sites is logistically difficult and raises safety and privacy concerns. In response, we introduce ISC-Perception, the first hybrid dataset expressly designed for ISC component detection. It blends procedurally rendered CAD images, game-engine photorealistic scenes, and a limited, curated set of real photographs, enabling fully automatic labelling of the synthetic portion. We explicitly account for all human effort to produce the dataset, including simulation engine and scene setup, asset preparation, post-processing scripts and quality checks; our total human time to generate a 10,000-image dataset was 30.5 h versus 166.7 h for manual labelling at 60 s per image (-81.7%). A manual pilot on a representative image with five instances of ISC members took 60 s (maximum 80 s), anchoring the manual baseline. Detectors trained on ISC-Perception achieved a mean Average Precision at IoU 0.50 of 0.756, substantially surpassing models trained on synthetic-only or photorealistic-only data. On a 1,200-frame bench test, we report mAP@0.50/mAP@[0.50:0.95] of 0.943/0.823. By bridging the data gap for construction-robotics perception, ISC-Perception facilitates rapid development of custom object detectors and is freely available for research and industrial use upon request.
- oai:arXiv.org:2511.03098v1
- cs.CV
- eess.IV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Miftahur Rahman, Samuel Adebayo, Dorian A. Acevedo-Mejia, David Hester, Daniel McPolin, Karen Rafferty, Debra F. Laefer
-
-
- DentalSplat: Dental Occlusion Novel View Synthesis from Sparse Intra-Oral Photographs
- https://arxiv.org/abs/2511.03099
- arXiv:2511.03099v1 Announce Type: new
-Abstract: In orthodontic treatment, particularly within telemedicine contexts, observing patients' dental occlusion from multiple viewpoints facilitates timely clinical decision-making. Recent advances in 3D Gaussian Splatting (3DGS) have shown strong potential in 3D reconstruction and novel view synthesis. However, conventional 3DGS pipelines typically rely on densely captured multi-view inputs and precisely initialized camera poses, limiting their practicality. Orthodontic cases, in contrast, often comprise only three sparse images, specifically, the anterior view and bilateral buccal views, rendering the reconstruction task especially challenging. The extreme sparsity of input views severely degrades reconstruction quality, while the absence of camera pose information further complicates the process. To overcome these limitations, we propose DentalSplat, an effective framework for 3D reconstruction from sparse orthodontic imagery. Our method leverages a prior-guided dense stereo reconstruction model to initialize the point cloud, followed by a scale-adaptive pruning strategy to improve the training efficiency and reconstruction quality of 3DGS. In scenarios with extremely sparse viewpoints, we further incorporate optical flow as a geometric constraint, coupled with gradient regularization, to enhance rendering fidelity. We validate our approach on a large-scale dataset comprising 950 clinical cases and an additional video-based test set of 195 cases designed to simulate real-world remote orthodontic imaging conditions. Experimental results demonstrate that our method effectively handles sparse input scenarios and achieves superior novel view synthesis quality for dental occlusion visualization, outperforming state-of-the-art techniques.
- oai:arXiv.org:2511.03099v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yiyi Miao, Taoyu Wu, Tong Chen, Sihao Li, Ji Jiang, Youpeng Yang, Angelos Stefanidis, Limin Yu, Jionglong Su
-
-
- Scaling Multi-Agent Environment Co-Design with Diffusion Models
- https://arxiv.org/abs/2511.03100
- arXiv:2511.03100v1 Announce Type: new
-Abstract: The agent-environment co-design paradigm jointly optimises agent policies and environment configurations in search of improved system performance. With application domains ranging from warehouse logistics to windfarm management, co-design promises to fundamentally change how we deploy multi-agent systems. However, current co-design methods struggle to scale. They collapse under high-dimensional environment design spaces and suffer from sample inefficiency when addressing moving targets inherent to joint optimisation. We address these challenges by developing Diffusion Co-Design (DiCoDe), a scalable and sample-efficient co-design framework pushing co-design towards practically relevant settings. DiCoDe incorporates two core innovations. First, we introduce Projected Universal Guidance (PUG), a sampling technique that enables DiCoDe to explore a distribution of reward-maximising environments while satisfying hard constraints such as spatial separation between obstacles. Second, we devise a critic distillation mechanism to share knowledge from the reinforcement learning critic, ensuring that the guided diffusion model adapts to evolving agent policies using a dense and up-to-date learning signal. Together, these improvements lead to superior environment-policy pairs when validated on challenging multi-agent environment co-design benchmarks including warehouse automation, multi-agent pathfinding and wind farm optimisation. Our method consistently exceeds the state-of-the-art, achieving, for example, 39% higher rewards in the warehouse setting with 66% fewer simulation samples. This sets a new standard in agent-environment co-design, and is a stepping stone towards reaping the rewards of co-design in real world domains.
- oai:arXiv.org:2511.03100v1
- cs.LG
- cs.AI
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Xiang Li, Michael Amir, Amanda Prorok
-
-
- CARMA: Comprehensive Automatically-annotated Reddit Mental Health Dataset for Arabic
- https://arxiv.org/abs/2511.03102
- arXiv:2511.03102v1 Announce Type: new
-Abstract: Mental health disorders affect millions worldwide, yet early detection remains a major challenge, particularly for Arabic-speaking populations where resources are limited and mental health discourse is often discouraged due to cultural stigma. While substantial research has focused on English-language mental health detection, Arabic remains significantly underexplored, partly due to the scarcity of annotated datasets. We present CARMA, the first automatically annotated large-scale dataset of Arabic Reddit posts. The dataset encompasses six mental health conditions, such as Anxiety, Autism, and Depression, and a control group. CARMA surpasses existing resources in both scale and diversity. We conduct qualitative and quantitative analyses of lexical and semantic differences between users, providing insights into the linguistic markers of specific mental health conditions. To demonstrate the dataset's potential for further mental health analysis, we perform classification experiments using a range of models, from shallow classifiers to large language models. Our results highlight the promise of advancing mental health detection in underrepresented languages such as Arabic.
- oai:arXiv.org:2511.03102v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Saad Mankarious, Ayah Zirikly
-
-
- Adaptive Detection of Software Aging under Workload Shift
- https://arxiv.org/abs/2511.03103
- arXiv:2511.03103v1 Announce Type: new
-Abstract: Software aging is a phenomenon that affects long-running systems, leading to progressive performance degradation and increasing the risk of failures. To mitigate this problem, this work proposes an adaptive approach based on machine learning for software aging detection in environments subject to dynamic workload conditions. We evaluate and compare a static model with adaptive models that incorporate adaptive detectors, specifically the Drift Detection Method (DDM) and Adaptive Windowing (ADWIN), originally developed for concept drift scenarios and applied in this work to handle workload shifts. Experiments with simulated sudden, gradual, and recurring workload transitions show that static models suffer a notable performance drop when applied to unseen workload profiles, whereas the adaptive model with ADWIN maintains high accuracy, achieving an F1-Score above 0.93 in all analyzed scenarios.
- oai:arXiv.org:2511.03103v1
- cs.SE
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- 10.5753/sscad.2025.16694
- Moura, R., Nascimento, M., Machida, F., & Andrade, E. (2025). Adaptive Detection of Software Aging under Workload Shift. In Anais do XXVI Simpósio em Sistemas Computacionais de Alto Desempenho, (pp. 242-253). Porto Alegre: SBC
- Rafael José Moura, Maria Gizele Nascimento, Fumio Machida, Ermeson Andrade
-
-
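Of the two adaptive detectors compared above, DDM has the simpler rule: track the running error rate p and its standard deviation s, remember the lowest p + s seen so far, and signal a warning or drift when the current p + s rises two or three standard deviations above that minimum. A self-contained sketch using the usual DDM thresholds (ADWIN is more involved and is not shown):

```python
import math

class DDM:
    """Drift Detection Method: flags drift when the running error rate
    rises significantly above its historical minimum."""
    def __init__(self):
        self.n = 0
        self.errors = 0
        self.p_s_min = float("inf")
        self.p_min = self.s_min = 0.0

    def update(self, error):
        self.n += 1
        self.errors += int(error)
        p = self.errors / self.n
        s = math.sqrt(p * (1 - p) / self.n)
        if p + s < self.p_s_min:            # remember the best point seen
            self.p_s_min, self.p_min, self.s_min = p + s, p, s
        if self.n < 30:
            return "stable"                 # warm-up period
        if p + s > self.p_min + 3 * self.s_min:
            return "drift"
        if p + s > self.p_min + 2 * self.s_min:
            return "warning"
        return "stable"

detector = DDM()
# 100 correct predictions, then a burst of errors after a workload shift.
states = [detector.update(err) for err in [0] * 100 + [1] * 30]
```

In the aging-detection setting, `error` would be the model's misprediction indicator on each new observation, and a "drift" signal triggers retraining.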
- Large language models require a new form of oversight: capability-based monitoring
- https://arxiv.org/abs/2511.03106
- arXiv:2511.03106v1 Announce Type: new
-Abstract: The rapid adoption of large language models (LLMs) in healthcare has been accompanied by scrutiny of their oversight. Existing monitoring approaches, inherited from traditional machine learning (ML), are task-based and founded on assumed performance degradation arising from dataset drift. In contrast, with LLMs, inevitable model degradation due to changes in populations compared to the training dataset cannot be assumed, because LLMs were not trained for any specific task in any given population. We therefore propose a new organizing principle guiding generalist LLM monitoring that is scalable and grounded in how these models are developed and used in practice: capability-based monitoring. Capability-based monitoring is motivated by the fact that LLMs are generalist systems whose overlapping internal capabilities are reused across numerous downstream tasks. Instead of evaluating each downstream task independently, this approach organizes monitoring around shared model capabilities, such as summarization, reasoning, translation, or safety guardrails, in order to enable cross-task detection of systemic weaknesses, long-tail errors, and emergent behaviors that task-based monitoring may miss. We describe considerations for developers, organizational leaders, and professional societies for implementing a capability-based monitoring approach. Ultimately, capability-based monitoring will provide a scalable foundation for safe, adaptive, and collaborative monitoring of LLMs and future generalist artificial intelligence models in healthcare.
- oai:arXiv.org:2511.03106v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Katherine C. Kellogg, Bingyang Ye, Yifan Hu, Guergana K. Savova, Byron Wallace, Danielle S. Bitterman
-
-
- An Efficient Classification Model for Cyber Text
- https://arxiv.org/abs/2511.03107
- arXiv:2511.03107v1 Announce Type: new
-Abstract: The rise of deep learning methodology and practice in recent years has brought about a severe consequence: an increasing carbon footprint, driven by the insatiable demand for computational resources and power. The field of text analytics has likewise been transformed by this dominant methodology. In this paper, the original TF-IDF algorithm has been modified, and Clement Term Frequency-Inverse Document Frequency (CTF-IDF) has been proposed for data preprocessing. This paper primarily discusses the effectiveness of classical machine learning techniques in text analytics with CTF-IDF and a faster IRLBA algorithm for dimensionality reduction. The introduction of both of these techniques in the conventional text analytics pipeline ensures a more efficient, faster, and less computationally intensive application when compared with deep learning methodology regarding carbon footprint, with minor compromise in accuracy. The experimental results also exhibit a manifold of reduction in time complexity and improvement of model accuracy for the classical machine learning methods discussed further in this paper.
- oai:arXiv.org:2511.03107v1
- cs.LG
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Md Sakhawat Hossen, Md. Zashid Iqbal Borshon, A. S. M. Badrudduza
-
-
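For reference, the classic TF-IDF weighting that CTF-IDF modifies multiplies a term's in-document frequency by the log of its inverse document frequency. A minimal sketch of that baseline; the abstract does not specify the CTF-IDF modification itself, so it is not reproduced here.

```python
import math
from collections import Counter

def tfidf(docs):
    """Classic TF-IDF: weight(t, d) = tf(t, d) * log(N / df(t))."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in tokenized for term in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

vecs = tfidf(["cyber attack detected",
              "attack traffic blocked",
              "normal traffic"])
```

Terms appearing in every document get weight zero, while terms unique to one document get the largest boost; any CTF-IDF variant alters this weighting scheme.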
- miniF2F-Lean Revisited: Reviewing Limitations and Charting a Path Forward
- https://arxiv.org/abs/2511.03108
- arXiv:2511.03108v1 Announce Type: new
-Abstract: We perform a thorough analysis of the formal and informal statements in the miniF2F benchmark from the perspective of an AI system that is tasked to participate in a math Olympiad consisting of the problems in miniF2F. In such a setting, the model has to read and comprehend the problems in natural language, formalize them in Lean language, then proceed with proving the problems, and it will get credit for each problem if the formal proof corresponds to the original informal statement presented to the model. Our evaluation results reveal that the best accuracy of such a pipeline can be about 36% using the SoTA models in the literature, considerably lower than the individual SoTA accuracies, 97% and 69% reported in the autoformalization and theorem proving literature. Analyzing the failure modes, we trace back a considerable portion of this drop to discrepancies between the formal and informal statements for more than half of the problems in miniF2F. We proceed with correcting all the errors, discrepancies and simplifications in formal and informal statements, and present the miniF2F-v2 with fully verified formal and informal statements and proofs. Evaluating the full theorem proving pipeline on miniF2F-v2 leads to the best accuracy of 70%, a significant improvement from the 40% on the original miniF2F, yet indicating considerable misalignment between the autoformalization models and theorem provers. Our deep analysis suggests that a higher quality benchmark can help the community better evaluate progress in the field of formal reasoning and also better diagnose the failure and success modes of autoformalization and theorem proving models. Our dataset is available at https://github.com/roozbeh-yz/miniF2F_v2.
- oai:arXiv.org:2511.03108v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Azim Ospanov, Farzan Farnia, Roozbeh Yousefzadeh
-
-
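The formal/informal pairing the benchmark audits is easiest to see on a toy example: an informal statement and one possible Lean 4 formalization (Mathlib's `ring` tactic is assumed; this example is ours, not from miniF2F). Whether a model's formalization faithfully matches the informal statement is exactly the correspondence the authors verify.

```lean
-- Informal statement: "If n is even, then n * n is even."
theorem even_sq_of_even (n : Nat) (h : ∃ k, n = 2 * k) :
    ∃ m, n * n = 2 * m := by
  obtain ⟨k, hk⟩ := h
  exact ⟨k * n, by rw [hk]; ring⟩
```

A subtly different formalization (say, quantifying over integers, or dropping the hypothesis) would still be provable yet no longer correspond to the informal problem, which is the discrepancy class the abstract reports for over half of miniF2F.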
- Parametric Hierarchical Matrix Approximations to Kernel Matrices
- https://arxiv.org/abs/2511.03109
- arXiv:2511.03109v1 Announce Type: new
-Abstract: Kernel matrices are ubiquitous in computational mathematics, often arising from applications in machine learning and scientific computing. In two or three spatial or feature dimensions, such problems can be approximated efficiently by a class of matrices known as hierarchical matrices. A hierarchical matrix consists of a hierarchy of small near-field blocks (or sub-matrices) stored in a dense format and large far-field blocks approximated by low-rank matrices. Standard methods for forming hierarchical matrices do not account for the fact that kernel matrices depend on specific hyperparameters; for example, in the context of Gaussian processes, hyperparameters must be optimized over a fixed parameter space. We introduce a new class of hierarchical matrices, namely, parametric (parameter-dependent) hierarchical matrices. Members of this new class are parametric $\mathcal{H}$-matrices and parametric $\mathcal{H}^{2}$-matrices. The construction of a parametric hierarchical matrix follows an offline-online paradigm. In the offline stage, the near-field and far-field blocks are approximated by using polynomial approximation and tensor compression. In the online stage, for a particular hyperparameter, the parametric hierarchical matrix is instantiated efficiently as a standard hierarchical matrix. The asymptotic costs for storage and computation in the offline stage are comparable to the corresponding standard approaches of forming a hierarchical matrix. However, the online stage of our approach requires no new kernel evaluations, and the far-field blocks can be computed more efficiently than standard approaches. Numerical experiments show over $100\times$ speedups compared with existing techniques.
- oai:arXiv.org:2511.03109v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Abraham Khan, Chao Chen, Vishwas Rao, Arvind K. Saibaba
-
-
- Towards Scalable Backpropagation-Free Gradient Estimation
- https://arxiv.org/abs/2511.03110
- arXiv:2511.03110v1 Announce Type: new
-Abstract: While backpropagation--reverse-mode automatic differentiation--has been extraordinarily successful in deep learning, it requires two passes (forward and backward) through the neural network and the storage of intermediate activations. Existing gradient estimation methods that instead use forward-mode automatic differentiation struggle to scale beyond small networks due to the high variance of the estimates. Efforts to mitigate this have so far introduced significant bias to the estimates, reducing their utility. We introduce a gradient estimation approach that reduces both bias and variance by manipulating upstream Jacobian matrices when computing guess directions. It shows promising results and has the potential to scale to larger networks, indeed performing better as the network width is increased. Our understanding of this method is facilitated by analyses of bias and variance, and their connection to the low-dimensional structure of neural network gradients.
- oai:arXiv.org:2511.03110v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Daniel Wang, Evan Markou, Dylan Campbell
-
-
- Efficient linear schemes for a penalized ternary Cahn-Hilliard system
- https://arxiv.org/abs/2511.03111
- arXiv:2511.03111v1 Announce Type: new
-Abstract: In this work we introduce novel numerical schemes for a penalized version of the ternary Cahn-Hilliard system, with the goal of simulating interfacial dynamics with three components accurately and efficiently, along with results extending these ideas to systems with four or more components. The first scheme is linear, decoupled, first order accurate, and unconditionally energy stable. Next, we present a second scheme which is a conditionally energy stable modification of the first scheme, but has greatly reduced computational cost. Finally, we present a third scheme which is linear and second order accurate, but couples the unknowns. Moreover, we present several numerical simulations in two and three dimensions to give a comprehensive overview of each scheme and the cost-benefit trade-offs involved in designing a method for energy stability, efficiency, and accuracy.
- oai:arXiv.org:2511.03111v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Justin Swain, Giordano Tierra
-
-
- FP-AbDiff: Improving Score-based Antibody Design by Capturing Nonequilibrium Dynamics through the Underlying Fokker-Planck Equation
- https://arxiv.org/abs/2511.03113
- arXiv:2511.03113v1 Announce Type: new
-Abstract: Computational antibody design holds immense promise for therapeutic discovery, yet existing generative models are fundamentally limited by two core challenges: (i) a lack of dynamical consistency, which yields physically implausible structures, and (ii) poor generalization due to data scarcity and structural bias. We introduce FP-AbDiff, the first antibody generator to enforce Fokker-Planck Equation (FPE) physics along the entire generative trajectory. Our method minimizes a novel FPE residual loss over the mixed manifold of CDR geometries (R^3 x SO(3)), compelling locally-learned denoising scores to assemble into a globally coherent probability flow. This physics-informed regularizer is synergistically integrated with deep biological priors within a state-of-the-art SE(3)-equivariant diffusion framework. Rigorous evaluation on the RAbD benchmark confirms that FP-AbDiff establishes a new state-of-the-art. In de novo CDR-H3 design, it achieves a mean Root Mean Square Deviation of 0.99 {\AA} when superposing on the variable region, a 25% improvement over the previous state-of-the-art model, AbX, and the highest reported Contact Amino Acid Recovery of 39.91%. This superiority is underscored in the more challenging six-CDR co-design task, where our model delivers consistently superior geometric precision, cutting the average full-chain Root Mean Square Deviation by ~15%, and crucially, achieves the highest full-chain Amino Acid Recovery on the functionally dominant CDR-H3 loop (45.67%). By aligning generative dynamics with physical laws, FP-AbDiff enhances robustness and generalizability, establishing a principled approach for physically faithful and functionally viable antibody design.
- oai:arXiv.org:2511.03113v1
- cs.LG
- cs.AI
- q-bio.QM
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jiameng Chen, Yida Xiong, Kun Li, Hongzhi Zhang, Xiantao Cai, Wenbin Hu, Jia Wu
-
-
- An Augmentation Overlap Theory of Contrastive Learning
- https://arxiv.org/abs/2511.03114
- arXiv:2511.03114v1 Announce Type: new
-Abstract: Recently, self-supervised contrastive learning has achieved great success on various tasks. However, its underlying working mechanism remains unclear. In this paper, we first provide the tightest bounds based on the widely adopted assumption of conditional independence. Further, we relax the conditional independence assumption to a more practical assumption of augmentation overlap and derive the asymptotically closed bounds for the downstream performance. Our proposed augmentation overlap theory hinges on the insight that the support of different intra-class samples will become more overlapped under aggressive data augmentations, thus simply aligning the positive samples (augmented views of the same sample) could make contrastive learning cluster intra-class samples together. Moreover, from the newly derived augmentation overlap perspective, we develop an unsupervised metric for the representation evaluation of contrastive learning, which aligns well with the downstream performance almost without relying on additional modules. Code is available at https://github.com/PKU-ML/GARC.
- oai:arXiv.org:2511.03114v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Qi Zhang, Yifei Wang, Yisen Wang
-
-
- Handover Configurations in Operational 5G Networks: Diversity, Evolution, and Impact on Performance
- https://arxiv.org/abs/2511.03116
- arXiv:2511.03116v1 Announce Type: new
-Abstract: Mobility management in cellular networks, especially the handover (HO) process, plays a key role in providing seamless and ubiquitous Internet access. The wide-scale deployment of 5G and the resulting co-existence of 4G/5G in the past six years have significantly changed the landscape of all mobile network operators and made the HO process much more complex than before. While several recent works have studied the impact of HOs on user experience, why and how HOs occur and how HO configurations affect performance in 5G operational networks remains largely unknown. Through four cross-country driving trips across the US spread out over a 27-month period, we conduct an in-depth measurement study of HO configurations across all three major US operators. Our study reveals (a) new types of HOs and new HO events used by operators to handle these new types of HOs, (b) overly aggressive HO configurations that result in unnecessarily high signaling overhead, (c) large diversity in HO configuration parameter values, which also differ across operators, but significantly lower diversity in 5G compared to LTE, and (d) sub-optimal HO configurations/decisions leading to poor pre- or post-HO performance. Our findings have many implications for mobile operators, as they keep fine-tuning their 5G HO configurations.
- oai:arXiv.org:2511.03116v1
- cs.NI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Moinak Ghoshal, Imran Khan, Phuc Dinh, Z. Jonny Kong, Omar Basit, Sizhe Wang, Yufei Feng, Y. Charlie Hu, Dimitrios Koutsonikolas
-
-
- Tracing Generative AI in Digital Art: A Longitudinal Study of Chinese Painters' Attitudes, Practices, and Identity Negotiation
- https://arxiv.org/abs/2511.03117
- arXiv:2511.03117v1 Announce Type: new
-Abstract: This study presents a five-year longitudinal mixed-methods study of 17 Chinese digital painters, examining how their attitudes and practices evolved in response to generative AI. Our findings reveal a trajectory from resistance and defensiveness, to pragmatic adoption, and ultimately to reflective reconstruction, shaped by strong peer pressures and shifting emotional experiences. Persistent concerns around copyright and creative labor highlight the ongoing negotiation of identity and values. This work contributes by offering rare longitudinal empirical data, advancing a theoretical lens of "identity and value negotiation," and providing design implications for future human-AI collaborative systems.
- oai:arXiv.org:2511.03117v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yibo Meng, Ruiqi Chen, Xin Chen, Zhiming Liu, Yan Guan
-
-
- QAGT-MLP: An Attention-Based Graph Transformer for Small and Large-Scale Quantum Error Mitigation
- https://arxiv.org/abs/2511.03119
- arXiv:2511.03119v1 Announce Type: new
-Abstract: Noisy quantum devices demand error-mitigation techniques that are accurate yet simple and efficient in terms of the number of shots and processing time. Many established approaches (e.g., extrapolation and quasi-probability cancellation) impose substantial execution or calibration overheads, while existing learning-based methods have difficulty scaling to large and deep circuits. In this research, we introduce QAGT-MLP: an attention-based graph transformer tailored for small- and large-scale quantum error mitigation (QEM). QAGT-MLP encodes each quantum circuit as a graph whose nodes represent gate instances and whose edges capture qubit connectivity and causal adjacency. A dual-path attention module extracts features around measured qubits at two scales or contexts: 1) graph-wide global structural context; and 2) fine-grained local lightcone context. These learned representations are concatenated with circuit-level descriptor features and the circuit's noisy expected values, then they are passed to a lightweight MLP to predict the noise-mitigated values. On large-scale 100-qubit Trotterized 1D Transverse-Field Ising Models -- TFIM circuits -- the proposed QAGT-MLP outperformed state-of-the-art learning baselines in terms of mean error and error variability, demonstrating strong validity and applicability in real-world QEM scenarios under matched shot budgets. By using attention to fuse global structures with local lightcone neighborhoods, QAGT-MLP achieves high mitigation quality without the increased noise scaling or resource demand required by classical QEM pipelines, while still offering a scalable and practical path to QEM in modern and future quantum workloads.
- oai:arXiv.org:2511.03119v1
- cs.ET
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Seyed Mohamad Ali Tousi, G. N. DeSouza
-
-
- Image-Intrinsic Priors for Integrated Circuit Defect Detection and Novel Class Discovery via Self-Supervised Learning
- https://arxiv.org/abs/2511.03120
- arXiv:2511.03120v1 Announce Type: new
-Abstract: Integrated circuit manufacturing is highly complex, comprising hundreds of process steps. Defects can arise at any stage, causing yield loss and ultimately degrading product reliability. Supervised methods require extensive human annotation and struggle with emergent categories and rare, data-scarce defects. Clustering-based unsupervised methods often exhibit unstable performance due to missing priors. We propose IC DefectNCD, a support-set-free framework that leverages Image Intrinsic Priors in IC SEM images for defect detection and novel class discovery. We first develop Self Normal Information Guided IC Defect Detection, aggregating representative normal features via a learnable normal information extractor and using reconstruction residuals to coarsely localize defect regions. To handle saliency variations across defects, we introduce an adaptive binarization strategy that produces stable subimages focused on core defective areas. Finally, we design Self Defect Information Guided IC Defect Classification, which incorporates a soft-mask-guided attention mechanism to inject spatial defect priors into the teacher-student model. This enhances sensitivity to defective regions, suppresses background interference, and enables recognition and classification of unseen defects. We validate the approach on a real-world dataset spanning three key fabrication stages and covering 15 defect types. Experiments demonstrate robust performance on both defect detection and unseen defect classification.
- oai:arXiv.org:2511.03120v1
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Botong Zhao, Xubin Wang, Shujing Lyu, Yue Lu
-
-
- Control Barrier Function for Aligning Large Language Models
- https://arxiv.org/abs/2511.03121
- arXiv:2511.03121v1 Announce Type: new
-Abstract: This paper proposes a control-based framework for aligning large language models (LLMs) by leveraging a control barrier function (CBF) to ensure user-desirable text generation. The presented framework applies the CBF safety filter to the predicted token generated from the baseline LLM, to intervene in the generated text. The safety filter offers two significant advantages: it is an add-on, allowing it to be used for alignment purposes without fine-tuning the baseline LLM, and if an evaluation model for the desired alignment is available, it can be directly applied to the filter design. The overall text-generation system is implemented with open-source language models, aiming to generate positive text.
- oai:arXiv.org:2511.03121v1
- cs.CL
- cs.AI
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Yuya Miyaoka, Masaki Inoue
-
-
- Accelerating Physical Property Reasoning for Augmented Visual Cognition
- https://arxiv.org/abs/2511.03126
- arXiv:2511.03126v1 Announce Type: new
-Abstract: This paper introduces \sysname, a system that accelerates vision-guided physical property reasoning to enable augmented visual cognition. \sysname minimizes the run-time latency of this reasoning pipeline through a combination of both algorithmic and systematic optimizations, including rapid geometric 3D reconstruction, efficient semantic feature fusion, and parallel view encoding. Through these simple yet effective optimizations, \sysname reduces the end-to-end latency of this reasoning pipeline from 10--20 minutes to less than 6 seconds. A head-to-head comparison on the ABO dataset shows that \sysname achieves this 62.9$\times$--287.2$\times$ speedup while not only reaching on-par (and sometimes slightly better) object-level physical property estimation accuracy (e.g., mass), but also demonstrating superior performance in material segmentation and voxel-level inference compared to two SOTA baselines. We further combine gaze-tracking with \sysname to localize the object of interest in cluttered, real-world environments, streamlining the physical property reasoning on smart glasses. The case study with Meta Aria Glasses conducted at an IKEA furniture store demonstrates that \sysname achieves consistently high performance compared to controlled captures, providing robust property estimations even with fewer views in real-world scenarios.
- oai:arXiv.org:2511.03126v1
- cs.CV
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hongbo Lan, Zhenlin An, Haoyu Li, Vaibhav Singh, Longfei Shangguan
-
-
- From Insight to Exploit: Leveraging LLM Collaboration for Adaptive Adversarial Text Generation
- https://arxiv.org/abs/2511.03128
- arXiv:2511.03128v1 Announce Type: new
-Abstract: LLMs can provide substantial zero-shot performance on diverse tasks using a simple task prompt, eliminating the need for training or fine-tuning. However, when applying these models to sensitive tasks, it is crucial to thoroughly assess their robustness against adversarial inputs. In this work, we introduce Static Deceptor (StaDec) and Dynamic Deceptor (DyDec), two innovative attack frameworks designed to systematically generate dynamic and adaptive adversarial examples by leveraging the understanding of the LLMs. We produce subtle and natural-looking adversarial inputs that preserve semantic similarity to the original text while effectively deceiving the target LLM. By utilizing an automated, LLM-driven pipeline, we eliminate the dependence on external heuristics. Our attacks evolve with the advancements in LLMs and demonstrate strong transferability across models unknown to the attacker. Overall, this work provides a systematic approach for the self-assessment of an LLM's robustness. We release our code and data at https://github.com/Shukti042/AdversarialExample.
- oai:arXiv.org:2511.03128v1
- cs.LG
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Najrin Sultana, Md Rafi Ur Rashid, Kang Gu, Shagufta Mehnaz
-
-
- Ceci N'est Pas un Drone: Investigating the Impact of Design Representation on Design Decision Making When Using GenAI
- https://arxiv.org/abs/2511.03131
- arXiv:2511.03131v1 Announce Type: new
-Abstract: With generative AI-powered design tools, designers and engineers can efficiently generate large numbers of design ideas. However, efficient exploration of these ideas requires designers to select a smaller group of potential solutions for further development. Therefore, the ability to judge and evaluate designs is critical for the successful use of generative design tools. Different design representation modalities can potentially affect designers' judgments. This work investigates how different design modalities, including visual rendering, numerical performance data, and a combination of both, affect designers' design selections from AI-generated design concepts for Uncrewed Aerial Vehicles. We found that different design modalities do affect designers' choices. Unexpectedly, we found that providing only numerical design performance data can lead to the best ability to select optimal designs. We also found that participants prefer visually conventional designs with axis-symmetry. The findings of this work provide insights into the interaction between human users and generative design systems.
- oai:arXiv.org:2511.03131v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Zeda Xu, Nikolas Martelaro, Christopher McComb
-
-
- Deploying Rapid Damage Assessments from sUAS Imagery for Disaster Response
- https://arxiv.org/abs/2511.03132
- arXiv:2511.03132v1 Announce Type: new
-Abstract: This paper presents the first AI/ML system for automating building damage assessment in uncrewed aerial systems (sUAS) imagery to be deployed operationally during federally declared disasters (Hurricanes Debby and Helene). In response to major disasters, sUAS teams are dispatched to collect imagery of the affected areas to assess damage; however, at recent disasters, teams collectively delivered between 47GB and 369GB of imagery per day, representing more imagery than can reasonably be transmitted or interpreted by subject matter experts in the disaster scene, thus delaying response efforts. To alleviate this data avalanche encountered in practice, computer vision and machine learning techniques are necessary. While prior work has been deployed to automatically assess damage in satellite imagery, there is no current state of practice for sUAS-based damage assessment systems, as all known work has been confined to academic settings. This work establishes the state of practice via the development and deployment of models for building damage assessment with sUAS imagery. The model development involved training on the largest known dataset of post-disaster sUAS aerial imagery, containing 21,716 building damage labels, and the operational training of 91 disaster practitioners. The best performing model was deployed during the responses to Hurricanes Debby and Helene, where it assessed a combined 415 buildings in approximately 18 minutes. This work contributes documentation of the actual use of AI/ML for damage assessment during a disaster and lessons learned to the benefit of the AI/ML research and user communities.
- oai:arXiv.org:2511.03132v1
- cs.CV
- cs.AI
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Thomas Manzini, Priyankari Perali, Robin R. Murphy
-
-
- Automated Prompt Generation for Code Intelligence: An Empirical study and Experience in WeChat
- https://arxiv.org/abs/2511.03136
- arXiv:2511.03136v1 Announce Type: new
-Abstract: Large Code Models (LCMs) show potential in code intelligence, but their effectiveness is greatly influenced by prompt quality. Current prompt design is mostly manual, which is time-consuming and highly dependent on specific LCMs and tasks. While automated prompt generation (APG) exists in NLP, it is underexplored for code intelligence. This creates a gap, as automating the prompt process is essential for developers facing diverse tasks and black-box LCMs.
- To mitigate this, we empirically investigate two important parts of APG: Instruction Generation (IG) and Multi-Step Reasoning (MSR). IG provides a task-related description to instruct LCMs, while MSR guides them to produce logical steps before the final answer. We evaluate widely-used APG methods for each part on four open-source LCMs and three code intelligence tasks: code translation (PL-PL), code summarization (PL-NL), and API recommendation (NL-PL). Experimental results indicate that both IG and MSR dramatically enhance performance compared to basic prompts. Based on these results, we propose a novel APG approach combining the best methods of the two parts. Experiments show our approach achieves average improvements of 28.38% in CodeBLEU (code translation), 58.11% in ROUGE-L (code summarization), and 84.53% in SuccessRate@1 (API recommendation) over basic prompts. To validate its effectiveness in an industrial scenario, we evaluate our approach on WeChat-Bench, a proprietary dataset, achieving an average MRR improvement of 148.89% for API recommendation.
- oai:arXiv.org:2511.03136v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Kexing Ji, Shiyun Fu, Cuiyun Gao, Yujia Chen, Zezhou Yang, Chaozheng Wang, Yuetang Deng
-
-
- Using Multi-modal Large Language Model to Boost Fireworks Algorithm's Ability in Settling Challenging Optimization Tasks
- https://arxiv.org/abs/2511.03137
- arXiv:2511.03137v1 Announce Type: new
-Abstract: As optimization problems grow increasingly complex and diverse, advancements in optimization techniques and paradigm innovations hold significant importance. The challenges posed by optimization problems are primarily manifested in their non-convexity, high-dimensionality, black-box nature, and other unfavorable characteristics. Traditional zero-order or first-order methods, which are often characterized by low efficiency, inaccurate gradient information, and insufficient utilization of optimization information, are ill-equipped to address these challenges effectively. In recent years, the rapid development of large language models (LLMs) has led to substantial improvements in their language understanding and code generation capabilities. Consequently, the design of optimization algorithms leveraging large language models has garnered increasing attention from researchers. In this study, we choose the fireworks algorithm (FWA) as the basic optimizer and propose a novel approach to assist the design of the FWA by incorporating a multi-modal large language model (MLLM). To put it simply, we propose the concept of Critical Part (CP), which extends FWA to complex high-dimensional tasks, and further utilizes the information in the optimization process with the help of the multi-modal characteristics of large language models. We focus on two specific tasks: the \textit{traveling salesman problem} (TSP) and the \textit{electronic design automation problem} (EDA). The experimental results show that FWAs generated under our new framework have achieved or surpassed SOTA results on many problem instances.
- oai:arXiv.org:2511.03137v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Shipeng Cen, Ying Tan
-
-
- A Proprietary Model-Based Safety Response Framework for AI Agents
- https://arxiv.org/abs/2511.03138
- arXiv:2511.03138v1 Announce Type: new
-Abstract: With the widespread application of Large Language Models (LLMs), their associated security issues have become increasingly prominent, severely constraining their trustworthy deployment in critical domains. This paper proposes a novel safety response framework designed to systematically safeguard LLMs at both the input and output levels. At the input level, the framework employs a supervised fine-tuning-based safety classification model. Through a fine-grained four-tier taxonomy (Safe, Unsafe, Conditionally Safe, Focused Attention), it performs precise risk identification and differentiated handling of user queries, significantly enhancing risk coverage and business scenario adaptability, and achieving a risk recall rate of 99.3%. At the output level, the framework integrates Retrieval-Augmented Generation (RAG) with a specifically fine-tuned interpretation model, ensuring all responses are grounded in a real-time, trustworthy knowledge base. This approach eliminates information fabrication and enables result traceability. Experimental results demonstrate that our proposed safety control model achieves a significantly higher safety score on public safety evaluation benchmarks compared to the baseline model, TinyR1-Safety-8B. Furthermore, on our proprietary high-risk test set, the framework's components attained a perfect 100% safety score, validating their exceptional protective capabilities in complex risk scenarios. This research provides an effective engineering pathway for building high-security, high-trust LLM applications.
- oai:arXiv.org:2511.03138v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Qi Li, Jianjun Xu, Pingtao Wei, Jiu Li, Peiqiang Zhao, Jiwei Shi, Xuan Zhang, Yanhui Yang, Xiaodong Hui, Peng Xu, Wenqin Shao
-
-
- The isogeometric boundary element algorithm for solving the plane strain problem of an elastic matrix containing an open material surface of arbitrary shape
- https://arxiv.org/abs/2511.03141
- arXiv:2511.03141v1 Announce Type: new
-Abstract: The paper presents the Isogeometric Boundary Element Method (IGABEM) algorithm for solving the plane strain problem of an isotropic linearly elastic matrix containing an open material surface of arbitrary shape. Theoretical developments are based on the use of the Gurtin-Murdoch model of material surfaces. The governing equations and the boundary conditions for the problem are reviewed, and analytical integral representations for the elastic fields everywhere in the material system are presented in terms of unknown traction jumps across the surface. To find the jumps, the problem is reduced to a system of singular boundary integral equations in terms of two unknown scalar components of the surface stress tensor. The system is solved numerically using the developed IGABEM algorithm in which NURBS are used to approximate the unknowns. The main steps of the algorithm are discussed and convergence studies are performed. The algorithm is validated using two benchmark problems involving the matrix subjected to a uniform far-field load and containing a surface along (i) a straight segment and (ii) a circular arc. Numerical examples are presented to illustrate the influence of governing parameters with a focus on the influence of curvature variation.
- oai:arXiv.org:2511.03141v1
- math.NA
- cs.NA
- physics.comp-ph
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- Rohit Satish Patil, Zhilin Han, Sofia G. Mogilevskaya
-
-
- From Measurement to Expertise: Empathetic Expert Adapters for Context-Based Empathy in Conversational AI Agents
- https://arxiv.org/abs/2511.03143
- arXiv:2511.03143v1 Announce Type: new
-Abstract: Empathy is a critical factor in fostering positive user experiences in conversational AI. While models can display empathy, it is often generic rather than tailored to specific tasks and contexts. In this work, we introduce a novel framework for developing and evaluating context-specific empathetic large language models (LLMs). We first analyze a real-world conversational dataset consisting of 672 multi-turn conversations across 8 tasks, revealing significant differences in terms of expected and experienced empathy before and after the conversations, respectively. To help minimize this gap, we develop a synthetic multi-turn conversational generation pipeline and steer responses toward our defined empathy patterns based on the context that more closely matches users' expectations. We then train empathetic expert adapters for context-specific empathy that specialize in varying empathy levels based on the recognized task. Our empirical results demonstrate a significant gap reduction of 72.66% between perceived and desired empathy with scores increasing by an average factor of 2.43 as measured by our metrics and reward models. Additionally, our trained empathetic expert adapters demonstrate superior effectiveness in preserving empathy patterns throughout conversation turns, outperforming system prompts, which tend to dramatically diminish in impact as conversations lengthen.
- oai:arXiv.org:2511.03143v1
- cs.HC
- cs.AI
- cs.CL
- cs.CY
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Erfan Shayegani, Jina Suh, Andy Wilson, Nagu Rangan, Javier Hernandez
-
-
- MME-CC: A Challenging Multi-Modal Evaluation Benchmark of Cognitive Capacity
- https://arxiv.org/abs/2511.03146
- arXiv:2511.03146v1 Announce Type: new
-Abstract: As reasoning models scale rapidly, the essential role of multimodality in human cognition has come into sharp relief, driving a growing need to probe vision-centric cognitive behaviors. Yet, existing multimodal benchmarks either overemphasize textual reasoning or fall short of systematically capturing vision-centric cognitive behaviors, leaving the cognitive capacity of MLLMs insufficiently assessed. To address this limitation, we introduce MME-CC (Multi-Modal Evaluation benchmark of Cognitive Capacity), a vision-grounded benchmark that organizes 11 representative reasoning tasks into three fundamental categories of visual information: spatial, geometric, and knowledge-based reasoning, and provides fine-grained analyses of MLLMs' cognitive capacity across these dimensions. Based on MME-CC, we conduct extensive experiments over 16 representative MLLMs. Our study reveals that closed-source models currently lead overall (e.g., 42.66 for Gemini-2.5-Pro vs. 30.45 for GLM-4.5V), while spatial and geometric reasoning remain broadly weak (less than or equal to 30%). We further identify common error patterns, including orientation mistakes, fragile cross-view identity persistence, and poor adherence to counterfactual instructions, and observe that Chain-of-Thought typically follows a three-stage process (extract -> reason -> verify) with heavy reliance on visual extraction. We hope this work catalyzes a shift toward treating the cognitive capacity of MLLMs as central to both evaluation and model design.
- oai:arXiv.org:2511.03146v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Kaiyuan Zhang, Chenghao Yang, Zhoufutu Wen, Sihang Yuan, Qiuyue Wang, Chaoyi Huang, Guosheng Zhu, He Wang, Huawenyu Lu, Jianing Wen, Jianpeng Jiao, Lishu Luo, Longxiang Liu, Sijin Wu, Xiaolei Zhu, Xuanliang Zhang, Ge Zhang, Yi Lin, Guang Shi, Chaoyou Fu, Wenhao Huang
-
-
- Scheduling the Off-Diagonal Weingarten Loss of Neural SDFs for CAD Models
- https://arxiv.org/abs/2511.03147
- arXiv:2511.03147v1 Announce Type: new
-Abstract: Neural signed distance functions (SDFs) have become a powerful representation for geometric reconstruction from point clouds, yet they often require both gradient- and curvature-based regularization to suppress spurious warp and preserve structural fidelity. FlatCAD introduced the Off-Diagonal Weingarten (ODW) loss as an efficient second-order prior for CAD surfaces, approximating full-Hessian regularization at roughly half the computational cost. However, FlatCAD applies a fixed ODW weight throughout training, which is suboptimal: strong regularization stabilizes early optimization but suppresses detail recovery in later stages. We present scheduling strategies for the ODW loss that assign a high initial weight to stabilize optimization and progressively decay it to permit fine-scale refinement. We investigate constant, linear, quintic, and step interpolation schedules, as well as an increasing warm-up variant. Experiments on the ABC CAD dataset demonstrate that time-varying schedules consistently outperform fixed weights. Our method achieves up to a 35% improvement in Chamfer Distance over the FlatCAD baseline, establishing scheduling as a simple yet effective extension of curvature regularization for robust CAD reconstruction.
- oai:arXiv.org:2511.03147v1
- cs.GR
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Haotian Yin, Przemyslaw Musialski
-
-
- Test Time Adaptation Using Adaptive Quantile Recalibration
- https://arxiv.org/abs/2511.03148
- arXiv:2511.03148v1 Announce Type: new
-Abstract: Domain adaptation is a key strategy for enhancing the generalizability of deep learning models in real-world scenarios, where test distributions often diverge significantly from the training domain. However, conventional approaches typically rely on prior knowledge of the target domain or require model retraining, limiting their practicality in dynamic or resource-constrained environments. Recent test-time adaptation methods based on batch normalization statistic updates allow for unsupervised adaptation, but they often fail to capture complex activation distributions and are constrained to specific normalization layers. We propose Adaptive Quantile Recalibration (AQR), a test-time adaptation technique that modifies pre-activation distributions by aligning quantiles on a channel-wise basis. AQR captures the full shape of activation distributions and generalizes across architectures employing BatchNorm, GroupNorm, or LayerNorm. To address the challenge of estimating distribution tails under varying batch sizes, AQR incorporates a robust tail calibration strategy that improves stability and precision. Our method leverages source-domain statistics computed at training time, enabling unsupervised adaptation without retraining models. Experiments on CIFAR-10-C, CIFAR-100-C, and ImageNet-C across multiple architectures demonstrate that AQR achieves robust adaptation across diverse settings, outperforming existing test-time adaptation baselines. These results highlight AQR's potential for deployment in real-world scenarios with dynamic and unpredictable data distributions.
- oai:arXiv.org:2511.03148v1
- cs.LG
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Paria Mehrbod, Pedro Vianna, Geraldin Nanfack, Guy Wolf, Eugene Belilovsky
-
-
- Forecast2Anomaly (F2A): Adapting Multivariate Time Series Foundation Models for Anomaly Prediction
- https://arxiv.org/abs/2511.03149
- arXiv:2511.03149v1 Announce Type: new
-Abstract: Forecasting anomalies (anomaly prediction) in multivariate time series from different real-world, dynamic, and complex systems is vital for preempting critical failures, leading to a substantial minimization in operational costs and human labor. Yet, existing methods are limited to specific systems while failing to generalize to evolving anomaly patterns over time. In contrast, pretrained Time Series Foundation Models (TSFMs) have recently demonstrated strong generalization and zero-shot forecasting capabilities. However, their potential remains untapped for anomaly prediction, a task fundamentally different from forecasting normal behavior. Thus, we present Forecast2Anomaly (F2A), a novel framework that empowers TSFMs with anomaly prediction abilities through two key innovations. First, we propose a joint forecast-anomaly loss that fine-tunes TSFMs to accurately forecast future signals even at anomalous time points. Second, we introduce a Retrieval-Augmented Generation (RAG) module that retrieves historically relevant horizons and conditions predictions on them. This component dynamically adapts to distributional shifts at inference time, enabling F2A to track evolving anomalies without requiring model updates. By combining targeted fine-tuning with dynamic retrieval, F2A bridges the gap between robust TSFM zero-shot forecasting and zero-shot anomaly prediction. Extensive experiments across 16 diverse datasets and multiple TSFM backbones show that F2A consistently outperforms state-of-the-art methods, offering a scalable, zero-shot anomaly prediction solution for real-world applications.
- oai:arXiv.org:2511.03149v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Atif Hassan, Tarun Kumar, Ashish Mishra, Sergey Serebryakov, Satish Kumar Mopur, Phanidhar Koganti, Murthy Chelankuri, Ramanagopal Vogety, Suparna Bhattacharya, Martin Foltin
-
-
- Who Sees the Risk? Stakeholder Conflicts and Explanatory Policies in LLM-based Risk Assessment
- https://arxiv.org/abs/2511.03152
- arXiv:2511.03152v1 Announce Type: new
-Abstract: Understanding how different stakeholders perceive risks in AI systems is essential for their responsible deployment. This paper presents a framework for stakeholder-grounded risk assessment that uses LLMs, acting as judges, to predict and explain risks. Using the Risk Atlas Nexus and the GloVE explanation method, our framework generates stakeholder-specific, interpretable policies that show how different stakeholders agree or disagree about the same risks. We demonstrate our method on three real-world AI use cases in the medical AI, autonomous vehicle, and fraud detection domains. We further propose an interactive visualization that reveals how and why conflicts emerge across stakeholder perspectives, enhancing transparency in conflict reasoning. Our results show that stakeholder perspectives significantly influence risk perception and conflict patterns. Our work emphasizes the importance of stakeholder-aware explanations in making LLM-based evaluations more transparent, interpretable, and aligned with human-centered AI governance goals.
- oai:arXiv.org:2511.03152v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Srishti Yadav, Jasmina Gajcin, Erik Miehling, Elizabeth Daly
-
-
- RefAgent: A Multi-agent LLM-based Framework for Automatic Software Refactoring
- https://arxiv.org/abs/2511.03153
- arXiv:2511.03153v1 Announce Type: new
-Abstract: Large Language Models (LLMs) have substantially influenced various software engineering tasks. Indeed, in the case of software refactoring, traditional LLMs have shown the ability to reduce development time and enhance code quality. However, these LLMs often rely on static, detailed instructions for specific tasks. In contrast, LLM-based agents can dynamically adapt to evolving contexts and autonomously make decisions by interacting with software tools and executing workflows. In this paper, we explore the potential of LLM-based agents in supporting refactoring activities. Specifically, we introduce RefAgent, a multi-agent LLM-based framework for end-to-end software refactoring. RefAgent consists of specialized agents responsible for planning, executing, testing, and iteratively refining refactorings using self-reflection and tool-calling capabilities. We evaluate RefAgent on eight open-source Java projects, comparing its effectiveness against a single-agent approach, a search-based refactoring tool, and historical developer refactorings. Our assessment focuses on: (1) the impact of generated refactorings on software quality, (2) the ability to identify refactoring opportunities, and (3) the contribution of each LLM agent through an ablation study. Our results show that RefAgent achieves a median unit test pass rate of 90%, reduces code smells by a median of 52.5%, and improves key quality attributes (e.g., reusability) by a median of 8.6%. Additionally, it closely aligns with developer refactorings and the search-based tool in identifying refactoring opportunities, attaining a median F1-score of 79.15% and 72.7%, respectively. Compared to single-agent approaches, RefAgent improves the median unit test pass rate by 64.7% and the median compilation success rate by 40.1%. These findings highlight the promise of multi-agent architectures in advancing automated software refactoring.
- oai:arXiv.org:2511.03153v1
- cs.SE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Khouloud Oueslati, Maxime Lamothe, Foutse Khomh
-
-
- Generative Sequential Recommendation via Hierarchical Behavior Modeling
- https://arxiv.org/abs/2511.03155
- arXiv:2511.03155v1 Announce Type: new
-Abstract: Recommender systems in multi-behavior domains, such as advertising and e-commerce, aim to guide users toward high-value but inherently sparse conversions. Leveraging auxiliary behaviors (e.g., clicks, likes, shares) is therefore essential. Recent progress on generative recommendations has brought new possibilities for multi-behavior sequential recommendation. However, existing generative approaches face two significant challenges: 1) Inadequate Sequence Modeling: they fail to capture the complex, cross-level dependencies within user behavior sequences, and 2) Lack of Suitable Datasets: publicly available multi-behavior recommendation datasets are almost exclusively derived from e-commerce platforms, limiting validation of feasibility in other domains, while also lacking sufficient side information for semantic ID generation. To address these issues, we propose a novel generative framework, GAMER (Generative Augmentation and Multi-lEvel behavior modeling for Recommendation), built upon a decoder-only backbone. GAMER introduces a cross-level interaction layer to capture hierarchical dependencies among behaviors and a sequential augmentation strategy that enhances robustness in training. To further advance this direction, we collect and release ShortVideoAD, a large-scale multi-behavior dataset from a mainstream short-video platform, which differs fundamentally from existing e-commerce datasets and provides pretrained semantic IDs for research on generative methods. Extensive experiments show that GAMER consistently outperforms both discriminative and generative baselines across multiple metrics.
- oai:arXiv.org:2511.03155v1
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhefan Wang, Guokai Yan, Jinbei Yu, Siyu Gu, Jingyan Chen, Peng Jiang, Zhiqiang Guo, Min Zhang
-
-
- Finetuning-Free Personalization of Text to Image Generation via Hypernetworks
- https://arxiv.org/abs/2511.03156
- arXiv:2511.03156v1 Announce Type: new
-Abstract: Personalizing text-to-image diffusion models has traditionally relied on subject-specific fine-tuning approaches such as DreamBooth, which are computationally expensive and slow at inference. Recent adapter- and encoder-based methods attempt to reduce this overhead but still depend on additional fine-tuning or large backbone models for satisfactory results. In this work, we revisit an orthogonal direction: fine-tuning-free personalization via Hypernetworks that predict LoRA-adapted weights directly from subject images. Prior hypernetwork-based approaches, however, suffer from costly data generation or unstable attempts to mimic base model optimization trajectories. We address these limitations with an end-to-end training objective, stabilized by a simple output regularization, yielding reliable and effective hypernetworks. Our method removes the need for per-subject optimization at test time while preserving both subject fidelity and prompt alignment. To further enhance compositional generalization at inference time, we introduce Hybrid-Model Classifier-Free Guidance (HM-CFG), which combines the compositional strengths of the base diffusion model with the subject fidelity of personalized models during sampling. Extensive experiments on CelebA-HQ, AFHQ-v2, and DreamBench demonstrate that our approach achieves strong personalization performance and highlights the promise of hypernetworks as a scalable and effective direction for open-category personalization.
- oai:arXiv.org:2511.03156v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sagar Shrestha, Gopal Sharma, Luowei Zhou, Suren Kumar
-
-
- A Branch-and-Bound Approach for Maximum Low-Diameter Dense Subgraph Problems
- https://arxiv.org/abs/2511.03157
- arXiv:2511.03157v1 Announce Type: new
-Abstract: A graph with $n$ vertices is an $f(\cdot)$-dense graph if it has at least $f(n)$ edges, where $f(\cdot)$ is a well-defined function. The notion of an $f(\cdot)$-dense graph encompasses various clique models, such as $\gamma$-quasi cliques, $k$-defective cliques, and dense cliques, arising in cohesive subgraph extraction applications. However, an $f(\cdot)$-dense graph may be disconnected or weakly connected. To address this, we study the problem of finding the largest $f(\cdot)$-dense subgraph with a diameter of at most two. Specifically, we present a decomposition-based branch-and-bound algorithm to optimally solve this problem. The key feature of the algorithm is a decomposition framework that breaks the graph into $n$ smaller subgraphs, allowing independent searches in each subgraph. We also introduce decomposition strategies, including degeneracy and two-hop degeneracy orderings, alongside a branch-and-bound algorithm with a novel sorting-based upper bound to solve each subproblem. Worst-case complexity for each component is provided. Empirical results on 139 real-world graphs under two $f(\cdot)$ functions show our algorithm outperforms the MIP solver and pure branch-and-bound, solving nearly twice as many instances optimally within one hour.
- oai:arXiv.org:2511.03157v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yi Zhou, Chunyu Luo, Zhengren Wang, Zhang-Hua Fu
-
-
- Joint Optimization of DNN Model Caching and Request Routing in Mobile Edge Computing
- https://arxiv.org/abs/2511.03159
- arXiv:2511.03159v1 Announce Type: new
-Abstract: Mobile edge computing (MEC) can pre-cache deep neural networks (DNNs) near end-users, providing low-latency services and improving users' quality of experience (QoE). However, caching all DNN models at edge servers with limited capacity is difficult, and the impact of model loading time on QoE remains underexplored. Hence, we introduce dynamic DNNs in edge scenarios, disassembling a complete DNN model into interrelated submodels for more fine-grained and flexible model caching and request routing solutions. This raises the pressing issue of jointly deciding request routing and submodel caching for dynamic DNNs to balance model inference precision and loading latency for QoE optimization. In this paper, we study the joint dynamic model caching and request routing problem in MEC networks, aiming to maximize user request inference precision under constraints of server resources, latency, and model loading time. To tackle this problem, we propose CoCaR, an offline algorithm based on linear programming and random rounding that leverages dynamic DNNs to optimize caching and routing schemes, achieving near-optimal performance. Furthermore, we develop an online variant of CoCaR, named CoCaR-OL, enabling effective adaptation to dynamic and unpredictable online request patterns. The simulation results demonstrate that the proposed CoCaR improves the average inference precision of user requests by 46% compared to state-of-the-art baselines. In addition, in online scenarios, CoCaR-OL achieves an improvement of no less than 32.3% in user QoE over competitive baselines.
- oai:arXiv.org:2511.03159v1
- cs.NI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Shuting Qiu, Fang Dong, Siyu Tan, Ruiting Zhou, Dian Shen, Patrick P. C. Lee, Qilin Fan
-
-
- Active Noise Control Method Using Time Domain Neural Networks for Path Decoupling
- https://arxiv.org/abs/2511.03162
- arXiv:2511.03162v1 Announce Type: new
-Abstract: In decentralized active noise control (ANC) systems, crosstalk between multichannel secondary sources and error microphones significantly degrades control accuracy. Moreover, prefiltering reference signals in filtered-x (Fx) type algorithms may further introduce modeling errors. A theoretical analysis of the Fx-based decentralized control algorithm was performed, which reveals how prefiltering and crosstalk affect the control performance. Then, a hybrid method combining fixed-value neural networks and adaptive strategies was proposed for efficient decentralized ANC. The adaptive filter models the primary path of its own channel online using the least mean square (LMS) algorithm while the neural network (named DecNet) is used for secondary paths inverting and decoupling. The hybrid DecNet-LMS algorithm was implemented in the time domain to guarantee causality and avoid latency. Simulation results with measured acoustic paths show that the proposed method outperforms the existing ANC algorithms using either traditional adaptive filters or neural network-based fixed-coefficient methods under different acoustic conditions.
- oai:arXiv.org:2511.03162v1
- eess.SY
- cs.SY
- eess.SP
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yijing Chu, Qinxuan Xiang, Sipei Zhao, Ming Wu, Y. Zhao, Guangzheng Yu
-
-
- Subsampled Randomized Fourier GaLore for Adapting Foundation Models in Depth-Driven Liver Landmark Segmentation
- https://arxiv.org/abs/2511.03163
- arXiv:2511.03163v1 Announce Type: new
-Abstract: Accurate detection and delineation of anatomical structures in medical imaging are critical for computer-assisted interventions, particularly in laparoscopic liver surgery where 2D video streams limit depth perception and complicate landmark localization. While recent works have leveraged monocular depth cues for enhanced landmark detection, challenges remain in fusing RGB and depth features and in efficiently adapting large-scale vision models to surgical domains. We propose a depth-guided liver landmark segmentation framework integrating semantic and geometric cues via vision foundation encoders. We employ the Segment Anything Model V2 (SAM2) encoder to extract RGB features and the Depth Anything V2 (DA2) encoder to extract depth-aware features. To efficiently adapt SAM2, we introduce SRFT-GaLore, a novel low-rank gradient projection method that replaces the computationally expensive SVD with a Subsampled Randomized Fourier Transform (SRFT). This enables efficient fine-tuning of high-dimensional attention layers without sacrificing representational power. A cross-attention fusion module further integrates RGB and depth cues. To assess cross-dataset generalization, we also construct a new Laparoscopic Liver Surgical Dataset (LLSD) as an external validation benchmark. On the public L3D dataset, our method achieves a 4.85% improvement in Dice Similarity Coefficient and an 11.78-point reduction in Average Symmetric Surface Distance compared to D2GPLand. To further assess generalization capability, we evaluate our model on the LLSD dataset. Our model maintains competitive performance and significantly outperforms SAM-based baselines, demonstrating strong cross-dataset robustness and adaptability to unseen surgical environments. These results demonstrate that our SRFT-GaLore-enhanced dual-encoder framework enables scalable and precise segmentation under real-time, depth-constrained surgical settings.
- oai:arXiv.org:2511.03163v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yun-Chen Lin, Jiayuan Huang, Hanyuan Zhang, Sergi Kavtaradze, Matthew J. Clarkson, Mobarak I. Hoque
-
-
- SENT Map - Semantically Enhanced Topological Maps with Foundation Models
- https://arxiv.org/abs/2511.03165
- arXiv:2511.03165v1 Announce Type: new
-Abstract: We introduce SENT-Map, a semantically enhanced topological map for representing indoor environments, designed to support autonomous navigation and manipulation by leveraging advances in foundation models (FMs). By representing the environment in a JSON text format, we enable semantic information to be added and edited in a format that both humans and FMs understand, while grounding the robot to existing nodes during planning to avoid infeasible states during deployment. Our proposed framework employs a two-stage approach: first mapping the environment alongside an operator with a Vision-FM, then using the SENT-Map representation alongside a natural-language query within an FM for planning. Our experimental results show that semantic enhancement enables even small locally-deployable FMs to successfully plan over indoor environments.
- oai:arXiv.org:2511.03165v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Raj Surya Rajendran Kathirvel, Zach A Chavis, Stephen J. Guy, Karthik Desingh
-
-
- Measuring Aleatoric and Epistemic Uncertainty in LLMs: Empirical Evaluation on ID and OOD QA Tasks
- https://arxiv.org/abs/2511.03166
- arXiv:2511.03166v1 Announce Type: new
-Abstract: Large Language Models (LLMs) have become increasingly pervasive, finding applications across many industries and disciplines. Ensuring the trustworthiness of LLM outputs is paramount, where Uncertainty Estimation (UE) plays a key role. In this work, a comprehensive empirical study is conducted to examine the robustness and effectiveness of diverse UE measures regarding aleatoric and epistemic uncertainty in LLMs. It involves twelve different UE methods and four generation quality metrics including LLMScore from LLM criticizers to evaluate the uncertainty of LLM-generated answers in Question-Answering (QA) tasks on both in-distribution (ID) and out-of-distribution (OOD) datasets. Our analysis reveals that information-based methods, which leverage token and sequence probabilities, perform exceptionally well in ID settings due to their alignment with the model's understanding of the data. Conversely, density-based methods and the P(True) metric exhibit superior performance in OOD contexts, highlighting their effectiveness in capturing the model's epistemic uncertainty. Semantic consistency methods, which assess variability in generated answers, show reliable performance across different datasets and generation metrics. These methods generally perform well but may not be optimal for every situation.
- oai:arXiv.org:2511.03166v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Kevin Wang, Subre Abdoul Moktar, Jia Li, Kangshuo Li, Feng Chen
-
-
- Learning Natural and Robust Hexapod Locomotion over Complex Terrains via Motion Priors based on Deep Reinforcement Learning
- https://arxiv.org/abs/2511.03167
- arXiv:2511.03167v1 Announce Type: new
-Abstract: Multi-legged robots offer enhanced stability to navigate complex terrains with their multiple legs interacting with the environment. However, how to effectively coordinate the multiple legs in a larger action exploration space to generate natural and robust movements is a key issue. In this paper, we introduce a motion prior-based approach, successfully applying deep reinforcement learning algorithms to a real hexapod robot. We generate a dataset of optimized motion priors, and train an adversarial discriminator based on the priors to guide the hexapod robot to learn natural gaits. The learned policy is then successfully transferred to a real hexapod robot, demonstrating natural gait patterns and remarkable robustness without visual information in complex terrains. This is the first time that a reinforcement learning controller has been used to achieve complex terrain walking on a real hexapod robot.
- oai:arXiv.org:2511.03167v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xin Liu, Jinze Wu, Yinghui Li, Chenkun Qi, Yufei Xue, Feng Gao
-
-
- UnCLe: Towards Scalable Dynamic Causal Discovery in Non-linear Temporal Systems
- https://arxiv.org/abs/2511.03168
- arXiv:2511.03168v1 Announce Type: new
-Abstract: Uncovering cause-effect relationships from observational time series is fundamental to understanding complex systems. While many methods infer static causal graphs, real-world systems often exhibit dynamic causality-where relationships evolve over time. Accurately capturing these temporal dynamics requires time-resolved causal graphs. We propose UnCLe, a novel deep learning method for scalable dynamic causal discovery. UnCLe employs a pair of Uncoupler and Recoupler networks to disentangle input time series into semantic representations and learns inter-variable dependencies via auto-regressive Dependency Matrices. It estimates dynamic causal influences by analyzing datapoint-wise prediction errors induced by temporal perturbations. Extensive experiments demonstrate that UnCLe not only outperforms state-of-the-art baselines on static causal discovery benchmarks but, more importantly, exhibits a unique capability to accurately capture and represent evolving temporal causality in both synthetic and real-world dynamic systems (e.g., human motion). UnCLe offers a promising approach for revealing the underlying, time-varying mechanisms of complex phenomena.
- oai:arXiv.org:2511.03168v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tingzhu Bi, Yicheng Pan, Xinrui Jiang, Huize Sun, Meng Ma, Ping Wang
-
-
- Uncovering Bugs in Formal Explainers: A Case Study with PyXAI
- https://arxiv.org/abs/2511.03169
- arXiv:2511.03169v1 Announce Type: new
-Abstract: Formal explainable artificial intelligence (XAI) offers unique theoretical guarantees of rigor when compared to other non-formal methods of explainability. However, little attention has been given to the validation of practical implementations of formal explainers. This paper develops a novel methodology for validating formal explainers and reports on the assessment of the publicly available formal explainer PyXAI. The paper documents the existence of incorrect explanations computed by PyXAI on most of the datasets analyzed in the experiments, thereby confirming the importance of the proposed novel methodology for the validation of formal explainers.
- oai:arXiv.org:2511.03169v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Joao Marques-Silva
-
-
- GraphCliff: Short-Long Range Gating for Subtle Differences but Critical Changes
- https://arxiv.org/abs/2511.03170
- arXiv:2511.03170v1 Announce Type: new
-Abstract: Quantitative structure-activity relationship assumes a smooth relationship between molecular structure and biological activity. However, activity cliffs defined as pairs of structurally similar compounds with large potency differences break this continuity. Recent benchmarks targeting activity cliffs have revealed that classical machine learning models with extended connectivity fingerprints outperform graph neural networks. Our analysis shows that graph embeddings fail to adequately separate structurally similar molecules in the embedding space, making it difficult to distinguish between structurally similar but functionally different molecules. Despite this limitation, molecular graph structures are inherently expressive and attractive, as they preserve molecular topology. To preserve the structural representation of molecules as graphs, we propose a new model, GraphCliff, which integrates short- and long-range information through a gating mechanism. Experimental results demonstrate that GraphCliff consistently improves performance on both non-cliff and cliff compounds. Furthermore, layer-wise node embedding analyses reveal reduced over-smoothing and enhanced discriminative power relative to strong baseline graph models.
- oai:arXiv.org:2511.03170v1
- cs.CE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hajung Kim, Jueon Park, Junseok Choe, Sheunheun Baek, Hyeon Hwang, Jaewoo Kang
-
-
- AI as We Describe It: How Large Language Models and Their Applications in Health are Represented Across Channels of Public Discourse
- https://arxiv.org/abs/2511.03174
- arXiv:2511.03174v1 Announce Type: new
-Abstract: Representation shapes public attitudes and behaviors. With the arrival and rapid adoption of LLMs, the way these systems are introduced will negotiate societal expectations for their role in high-stakes domains like health. Yet it remains unclear whether current narratives present a balanced view. We analyzed five prominent discourse channels (news, research press, YouTube, TikTok, and Reddit) over a two-year period on lexical style, informational content, and symbolic representation. Discussions were generally positive and episodic, with positivity increasing over time. Risk communication was unthorough and often reduced to information quality incidents, while explanations of LLMs' generative nature were rare. Compared with professional outlets, TikTok and Reddit highlighted wellbeing applications and showed greater variations in tone and anthropomorphism but little attention to risks. We discuss implications for public discourse as a diagnostic tool in identifying literacy and governance gaps, and for communication and design strategies to support more informed LLM engagement.
- oai:arXiv.org:2511.03174v1
- cs.HC
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jiawei Zhou, Lei Zhang, Mei Li, Benjamin D Horne, Munmun De Choudhury
-
-
- SurgAnt-ViVQA: Learning to Anticipate Surgical Events through GRU-Driven Temporal Cross-Attention
- https://arxiv.org/abs/2511.03178
- arXiv:2511.03178v1 Announce Type: new
-Abstract: Anticipating forthcoming surgical events is vital for real-time assistance in endonasal transsphenoidal pituitary surgery, where visibility is limited and workflow changes rapidly. Most visual question answering (VQA) systems reason on isolated frames with static vision language alignment, providing little support for forecasting next steps or instrument needs. Existing surgical VQA datasets likewise center on the current scene rather than the near future. We introduce PitVQA-Anticipation, the first VQA dataset designed for forward looking surgical reasoning. It comprises 33.5 hours of operative video and 734,769 question answer pairs built from temporally grouped clips and expert annotations across four tasks: predicting the future phase, next step, upcoming instrument, and remaining duration. We further propose SurgAnt-ViVQA, a video language model that adapts a large language model using a GRU Gated Temporal Cross-Attention module. A bidirectional GRU encodes frame to frame dynamics, while an adaptive gate injects visual context into the language stream at the token level. Parameter-efficient fine-tuning customizes the language backbone to the surgical domain. SurgAnt-ViVQA, evaluated on the PitVQA-Anticipation and EndoVis datasets, surpasses strong image- and video-based baselines. Ablations show that temporal recurrence and gated fusion drive most of the gains. A frame budget study indicates a trade-off: 8 frames maximize fluency, whereas 32 frames slightly reduce BLEU but improve numeric time estimation. By pairing a temporally aware encoder with fine grained gated cross-attention, SurgAnt-ViVQA advances surgical VQA from retrospective description to proactive anticipation. PitVQA-Anticipation offers a comprehensive benchmark for this setting and highlights the importance of targeted temporal modeling for reliable, future aware surgical assistance.
- oai:arXiv.org:2511.03178v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shreyas C. Dhake, Jiayuan Huang, Runlong He, Danyal Z. Khan, Evangelos B. Mazomenos, Sophia Bano, Hani J. Marcus, Danail Stoyanov, Matthew J. Clarkson, Mobarak I. Hoque
-
-
- Toward Autonomous Engineering Design: A Knowledge-Guided Multi-Agent Framework
- https://arxiv.org/abs/2511.03179
- arXiv:2511.03179v1 Announce Type: new
-Abstract: The engineering design process often demands expertise from multiple domains, leading to complex collaborations and iterative refinements. Traditional methods can be resource-intensive and prone to inefficiencies. To address this, we formalize the engineering design process through a multi-agent AI framework that integrates structured design and review loops. The framework introduces specialized knowledge-driven agents that collaborate to generate and refine design candidates. As an exemplar, we demonstrate its application to the aerodynamic optimization of 4-digit NACA airfoils. The framework consists of three key AI agents: a Graph Ontologist, a Design Engineer, and a Systems Engineer. The Graph Ontologist employs a Large Language Model (LLM) to construct two domain-specific knowledge graphs from airfoil design literature. The Systems Engineer, informed by a human manager, formulates technical requirements that guide design generation and evaluation. The Design Engineer leverages the design knowledge graph and computational tools to propose candidate airfoils meeting these requirements. The Systems Engineer reviews the candidates and provides both qualitative and quantitative feedback using its own knowledge graph, forming an iterative feedback loop until a design is validated by the manager. The final design is then optimized to maximize performance metrics such as the lift-to-drag ratio. Overall, this work demonstrates how collaborative AI agents equipped with structured knowledge representations can enhance efficiency, consistency, and quality in the engineering design process.
- oai:arXiv.org:2511.03179v1
- cs.AI
- cs.LG
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Varun Kumar, George Em Karniadakis
-
-
- BengaliMoralBench: A Benchmark for Auditing Moral Reasoning in Large Language Models within Bengali Language and Culture
- https://arxiv.org/abs/2511.03180
- arXiv:2511.03180v1 Announce Type: new
-Abstract: As multilingual Large Language Models (LLMs) gain traction across South Asia, their alignment with local ethical norms, particularly for Bengali, which is spoken by over 285 million people and ranked 6th globally, remains underexplored. Existing ethics benchmarks are largely English-centric and shaped by Western frameworks, overlooking cultural nuances critical for real-world deployment. To address this, we introduce BengaliMoralBench, the first large-scale ethics benchmark for the Bengali language and socio-cultural contexts. It covers five moral domains, Daily Activities, Habits, Parenting, Family Relationships, and Religious Activities, subdivided into 50 culturally relevant subtopics. Each scenario is annotated via native-speaker consensus using three ethical lenses: Virtue, Commonsense, and Justice ethics. We conduct systematic zero-shot evaluation of prominent multilingual LLMs, including Llama, Gemma, Qwen, and DeepSeek, using a unified prompting protocol and standard metrics. Performance varies widely (50-91% accuracy), with qualitative analysis revealing consistent weaknesses in cultural grounding, commonsense reasoning, and moral fairness. BengaliMoralBench provides a foundation for responsible localization, enabling culturally aligned evaluation and supporting the deployment of ethically robust AI in diverse, low-resource multilingual settings such as Bangladesh.
- oai:arXiv.org:2511.03180v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Shahriyar Zaman Ridoy, Azmine Toushik Wasi, Koushik Ahamed Tonmoy
-
-
- Learning-based Cooperative Robotic Paper Wrapping: A Unified Control Policy with Residual Force Control
- https://arxiv.org/abs/2511.03181
- arXiv:2511.03181v1 Announce Type: new
-Abstract: Human-robot cooperation is essential in environments such as warehouses and retail stores, where workers frequently handle deformable objects like paper, bags, and fabrics. Coordinating robotic actions with human assistance remains difficult due to the unpredictable dynamics of deformable materials and the need for adaptive force control. To explore this challenge, we focus on the task of gift wrapping, which exemplifies a long-horizon manipulation problem involving precise folding, controlled creasing, and secure fixation of paper. Success is achieved when the robot completes the sequence to produce a neatly wrapped package with clean folds and no tears.
- We propose a learning-based framework that integrates a high-level task planner powered by a large language model (LLM) with a low-level hybrid imitation learning (IL) and reinforcement learning (RL) policy. At its core is a Sub-task Aware Robotic Transformer (START) that learns a unified policy from human demonstrations. The key novelty lies in capturing long-range temporal dependencies across the full wrapping sequence within a single model. Unlike vanilla Action Chunking with Transformer (ACT), typically applied to short tasks, our method introduces sub-task IDs that provide explicit temporal grounding. This enables robust performance across the entire wrapping process and supports flexible execution, as the policy learns sub-goals rather than merely replicating motion sequences.
- Our framework achieves a 97% success rate on real-world wrapping tasks. We show that the unified transformer-based policy reduces the need for specialized models, allows controlled human supervision, and effectively bridges high-level intent with the fine-grained force control required for deformable object manipulation.
- oai:arXiv.org:2511.03181v1
- cs.RO
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Rewida Ali, Cristian C. Beltran-Hernandez, Weiwei Wan, Kensuke Harada
-
-
- Understanding Robustness of Model Editing in Code LLMs: An Empirical Study
- https://arxiv.org/abs/2511.03182
- arXiv:2511.03182v1 Announce Type: new
-Abstract: Large language models (LLMs) are increasingly used in software development. However, while LLMs remain static after pretraining, programming languages and APIs continue to evolve, leading to the generation of deprecated or incompatible code that undermines reliability. Retraining LLMs from scratch to reflect such changes is computationally expensive, making model editing a promising lightweight alternative that updates only a small subset of parameters. Despite its potential, it remains unclear whether model editing yields genuine syntactic and semantic adaptations or merely superficial fixes. In this work, we present a systematic study of five state-of-the-art model editing methods: Constrained Fine-Tuning (FT), GRACE, MEMIT, PMET, and ROME. We apply these methods to three leading open-source code LLMs, CodeLlama, CodeQwen1.5, and DeepSeek-Coder, under controlled API deprecation scenarios. Our evaluation covers both instant and sequential editing settings, using three disjoint evaluation sets designed to assess reliability, generalization, and specificity. We measure model correctness at three levels: successful compilation, partial test case pass, and full test pass. Our findings show that instant edits consistently degrade model performance, with syntactic validity dropping by up to 86 percentage points and functional correctness declining by 45 points even in the best-performing setting. Sequential edits further amplify this degradation, and in some cases, model performance collapses entirely. Across all models, most passing generations relied on workarounds rather than correctly adopting the intended changes, while faulty adoptions that result in test failures or compilation errors were significantly more frequent. Correct adoptions, where the model correctly integrates the intended change, occurred in only about 6% of cases.
- oai:arXiv.org:2511.03182v1
- cs.SE
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Vinaik Chhetri, A. B Siddique, Umar Farooq
-
-
- Adobe Summit Concierge Evaluation with Human in the Loop
- https://arxiv.org/abs/2511.03186
- arXiv:2511.03186v1 Announce Type: new
-Abstract: Generative AI assistants offer significant potential to enhance productivity, streamline information access, and improve user experience in enterprise contexts. In this work, we present Summit Concierge, a domain-specific AI assistant developed for Adobe Summit. The assistant handles a wide range of event-related queries and operates under real-world constraints such as data sparsity, quality assurance, and rapid deployment. To address these challenges, we adopt a human-in-the-loop development workflow that combines prompt engineering, retrieval grounding, and lightweight human validation. We describe the system architecture, development process, and real-world deployment outcomes. Our experience shows that agile, feedback-driven development enables scalable and reliable AI assistants, even in cold-start scenarios.
- oai:arXiv.org:2511.03186v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yiru Chen, Sally Fang, Sai Sree Harsha, Dan Luo, Vaishnavi Muppala, Fei Wu, Shun Jiang, Kun Qian, Yunyao Li
-
-
- Periodic Skill Discovery
- https://arxiv.org/abs/2511.03187
- arXiv:2511.03187v1 Announce Type: new
-Abstract: Unsupervised skill discovery in reinforcement learning (RL) aims to learn diverse behaviors without relying on external rewards. However, current methods often overlook the periodic nature of learned skills, focusing instead on increasing the mutual dependence between states and skills or maximizing the distance traveled in latent space. Considering that many robotic tasks -- particularly those involving locomotion -- require periodic behaviors across varying timescales, the ability to discover diverse periodic skills is essential. Motivated by this, we propose Periodic Skill Discovery (PSD), a framework that discovers periodic behaviors in an unsupervised manner. The key idea of PSD is to train an encoder that maps states to a circular latent space, thereby naturally encoding periodicity in the latent representation. By capturing temporal distance, PSD can effectively learn skills with diverse periods in complex robotic tasks, even with pixel-based observations. We further show that these learned skills achieve high performance on downstream tasks such as hurdling. Moreover, integrating PSD with an existing skill discovery method offers more diverse behaviors, thus broadening the agent's repertoire. Our code and demos are available at https://jonghaepark.github.io/psd/
- oai:arXiv.org:2511.03187v1
- cs.LG
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jonghae Park, Daesol Cho, Jusuk Lee, Dongseok Shim, Inkyu Jang, H. Jin Kim
-
-
- Collaborative Assembly Policy Learning of a Sightless Robot
- https://arxiv.org/abs/2511.03189
- arXiv:2511.03189v1 Announce Type: new
-Abstract: This paper explores a physical human-robot collaboration (pHRC) task involving the joint insertion of a board into a frame by a sightless robot and a human operator. While admittance control is commonly used in pHRC tasks, it can be challenging to measure the force/torque applied by the human for accurate human intent estimation, limiting the robot's ability to assist in the collaborative task. Other methods that attempt to solve pHRC tasks using reinforcement learning (RL) are also unsuitable for the board-insertion task due to its safety constraints and sparse rewards. Therefore, we propose a novel RL approach that utilizes a human-designed admittance controller to facilitate more active robot behavior and reduce human effort. Through simulation and real-world experiments, we demonstrate that our approach outperforms admittance control in terms of success rate and task completion time. Additionally, we observed a significant reduction in measured force/torque when using our proposed approach compared to admittance control. The video of the experiments is available at https://youtu.be/va07Gw6YIog.
- oai:arXiv.org:2511.03189v1
- cs.RO
- cs.HC
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Zeqing Zhang, Weifeng Lu, Lei Yang, Wei Jing, Bowei Tang, Jia Pan
-
-
- Efficient Linear Attention for Multivariate Time Series Modeling via Entropy Equality
- https://arxiv.org/abs/2511.03190
- arXiv:2511.03190v1 Announce Type: new
-Abstract: Attention mechanisms have been extensively employed in various applications, including time series modeling, owing to their capacity to capture intricate dependencies; however, their utility is often constrained by quadratic computational complexity, which impedes scalability for long sequences. In this work, we propose a novel linear attention mechanism designed to overcome these limitations. Our approach is grounded in a theoretical demonstration that entropy, as a strictly concave function on the probability simplex, implies that distributions with aligned probability rankings and similar entropy values exhibit structural resemblance. Building on this insight, we develop an efficient approximation algorithm that computes the entropy of dot-product-derived distributions with only linear complexity, enabling the implementation of a linear attention mechanism based on entropy equality. Through rigorous analysis, we reveal that the effectiveness of attention in spatio-temporal time series modeling may not primarily stem from the non-linearity of softmax but rather from the attainment of a moderate and well-balanced weight distribution. Extensive experiments on four spatio-temporal datasets validate our method, demonstrating competitive or superior forecasting performance while achieving substantial reductions in both memory usage and computational time.
- oai:arXiv.org:2511.03190v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Mingtao Zhang, Guoli Yang, Zhanxing Zhu, Mengzhu Wang, Xiaoying Bai
-
-
- PETWB-REP: A Multi-Cancer Whole-Body FDG PET/CT and Radiology Report Dataset for Medical Imaging Research
- https://arxiv.org/abs/2511.03194
- arXiv:2511.03194v1 Announce Type: new
-Abstract: Publicly available, large-scale medical imaging datasets are crucial for developing and validating artificial intelligence models and conducting retrospective clinical research. However, datasets that combine functional and anatomical imaging with detailed clinical reports across multiple cancer types remain scarce. Here, we present PETWB-REP, a curated dataset comprising whole-body 18F-Fluorodeoxyglucose (FDG) Positron Emission Tomography/Computed Tomography (PET/CT) scans and corresponding radiology reports from 490 patients diagnosed with various malignancies. The dataset primarily includes common cancers such as lung cancer, liver cancer, breast cancer, prostate cancer, and ovarian cancer. This dataset includes paired PET and CT images, de-identified textual reports, and structured clinical metadata. It is designed to support research in medical imaging, radiomics, artificial intelligence, and multi-modal learning.
- oai:arXiv.org:2511.03194v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Le Xue, Gang Feng, Wenbo Zhang, Yichi Zhang, Lanlan Li, Shuqi Wang, Liling Peng, Sisi Peng, Xin Gao
-
-
- Cross-Modal Alignment via Variational Copula Modelling
- https://arxiv.org/abs/2511.03196
- arXiv:2511.03196v1 Announce Type: new
-Abstract: Various data modalities are common in real-world applications (e.g., electronic health records, medical images and clinical notes in healthcare). It is essential to develop multimodal learning methods to aggregate various information from multiple modalities. The main challenge is how to appropriately align and fuse the representations of different modalities into a joint distribution. Existing methods mainly rely on concatenation or the Kronecker product, oversimplifying the interaction structure between modalities and indicating a need to model more complex interactions. Additionally, the joint distribution of latent representations with higher-order interactions is underexplored. Copula is a powerful statistical structure for modelling the interactions among variables, as it naturally bridges the joint distribution and marginal distributions of multiple variables. We propose a novel copula-driven multimodal learning framework, which focuses on learning the joint distribution of various modalities to capture the complex interactions among them. The key idea is to interpret the copula model as a tool to align the marginal distributions of the modalities efficiently. By assuming a Gaussian mixture distribution for each modality and a copula model on the joint distribution, our model can generate accurate representations for missing modalities. Extensive experiments on public MIMIC datasets demonstrate the superior performance of our model over other competitors. The code is available at https://github.com/HKU-MedAI/CMCM.
- oai:arXiv.org:2511.03196v1
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
 - Published at ICML 2025
- Feng Wu, Tsai Hor Chan, Fuying Wang, Guosheng Yin, Lequan Yu
-
-
- A Probabilistic U-Net Approach to Downscaling Climate Simulations
- https://arxiv.org/abs/2511.03197
- arXiv:2511.03197v1 Announce Type: new
-Abstract: Climate models are limited by heavy computational costs, often producing outputs at coarse spatial resolutions, while many climate change impact studies require finer scales. Statistical downscaling bridges this gap, and we adapt the probabilistic U-Net for this task, combining a deterministic U-Net backbone with a variational latent space to capture aleatoric uncertainty. We evaluate four training objectives (afCRPS, and WMSE-MS-SSIM under three settings) for downscaling precipitation and temperature from $16\times$ coarser resolution. Our main finding is that WMSE-MS-SSIM performs well for extremes under certain settings, whereas afCRPS better captures spatial variability across scales.
- oai:arXiv.org:2511.03197v1
- cs.LG
- cs.CV
- physics.ao-ph
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Maryam Alipourhajiagha, Pierre-Louis Lemaire, Youssef Diouane, Julie Carreau
-
-
- Large Language Models as Information Sources: Distinctive Characteristics and Types of Low-Quality Information
- https://arxiv.org/abs/2511.03198
- arXiv:2511.03198v1 Announce Type: new
-Abstract: Recent advances in large language models (LLMs) have brought public and scholarly attention to their potential in generating low-quality information. While widely acknowledged as a risk, low-quality information remains a vaguely defined concept, and little is known about how it manifests in LLM outputs or how these outputs differ from those of traditional information sources. In this study, we focus on two key questions: What types of low-quality information are produced by LLMs, and what makes them distinct from their human-generated counterparts? We conducted focus groups with public health professionals and individuals with lived experience in three critical health contexts (vaccines, opioid use disorder, and intimate partner violence) where high-quality information is essential and misinformation, bias, and insensitivity are prevalent concerns. We identified a typology of LLM-generated low-quality information and a set of distinctive LLM characteristics compared to traditional information sources. Our findings show that low-quality information extends beyond factual inaccuracies into types such as misprioritization and exaggeration, and that LLM affordances fundamentally differ from those of previous technologies. This work offers typologies of LLMs' distinctive characteristics and low-quality information types as a starting point for future efforts to understand LLM-generated low-quality information and mitigate related informational harms. We call for conceptual and methodological discussions of information quality to move beyond truthfulness, in order to address the affordances of emerging technologies and the evolving dynamics of information behaviors.
- oai:arXiv.org:2511.03198v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jiawei Zhou, Amy Z. Chen, Darshi Shah, Laura M. Schwab-Reese, Munmun De Choudhury
-
-
- A Quantized VAE-MLP Botnet Detection Model: A Systematic Evaluation of Quantization-Aware Training and Post-Training Quantization Strategies
- https://arxiv.org/abs/2511.03201
- arXiv:2511.03201v1 Announce Type: new
-Abstract: In an effort to counter the increasing IoT botnet-based attacks, state-of-the-art deep learning methods have been proposed and have achieved impressive detection accuracy. However, their computational intensity restricts deployment on resource-constrained IoT devices, creating a critical need for lightweight detection models. A common solution to this challenge is model compression via quantization. This study proposes a VAE-MLP model framework where an MLP-based classifier is trained on 8-dimensional latent vectors derived from the high-dimensional train data using the encoder component of a pretrained variational autoencoder (VAE). Two widely used quantization strategies--Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ)--are then systematically evaluated in terms of their impact on detection performance, storage efficiency, and inference latency using two benchmark IoT botnet datasets--N-BaIoT and CICIoT2022. The results revealed that, with respect to detection accuracy, the QAT strategy experienced a more noticeable decline, whereas PTQ incurred only a marginal reduction compared to the original unquantized model. Furthermore, PTQ yielded a 6x speedup and 21x reduction in size, while QAT achieved a 3x speedup and 24x compression, demonstrating the practicality of quantization for device-level IoT botnet detection.
- oai:arXiv.org:2511.03201v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Hassan Wasswa, Hussein Abbass, Timothy Lynar
-
-
- An Event-Driven Spiking Compute-In-Memory Macro based on SOT-MRAM
- https://arxiv.org/abs/2511.03203
- arXiv:2511.03203v1 Announce Type: new
-Abstract: The application of Magnetic Random-Access Memory (MRAM) in computing-in-memory (CIM) has gained significant attention. However, existing designs often suffer from high energy consumption due to their reliance on complex analog circuits for computation. In this work, we present a Spin-Orbit-Torque MRAM (SOT-MRAM)-based CIM macro that employs event-driven spiking processing for high energy efficiency. The SOT-MRAM crossbar adopts a hybrid series-parallel cell structure to efficiently support matrix-vector multiplication (MVM). Signal information is encoded and decoded as spikes using lightweight circuits, eliminating the need for conventional area- and power-intensive analog circuits. The SOT-MRAM macro is designed and evaluated in 28nm technology, and experimental results show that it achieves a peak energy efficiency of 243.6 TOPS/W, significantly outperforming existing designs.
- oai:arXiv.org:2511.03203v1
- cs.AR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Deyang Yu, Chenchen Liu, Chuanjie Zhang, Xiao Fang, Weisheng Zhao
-
-
- QG-CoC: Question-Guided Chain-of-Captions for Large Multimodal Models
- https://arxiv.org/abs/2511.03206
- arXiv:2511.03206v1 Announce Type: new
-Abstract: Recently, Multimodal Large Language Models (MLLMs) encounter two key issues in multi-image contexts: (1) a lack of fine-grained perception across disparate images, and (2) a diminished capability to effectively reason over and synthesize information from multiple visual inputs. However, while various prompting methods aim to describe visual content, many existing studies focus primarily on single-image settings or specific, constrained scenarios. This leaves a critical gap in understanding and addressing how MLLMs tackle more general and complex multi-image reasoning tasks. Thus, we first extensively investigate how current prompting methods perceive fine-grained visual details and process visual information when dealing with multiple images. Our findings reveal that existing prompting methods fall short in attending to needed clues and seamlessly integrating perception and reasoning. Inspired by the findings, we propose a new zero-shot prompting method, Question-Guided Chain-of-Captions (QG-CoC), a generalized prompting approach that effectively handles problems with an arbitrary number of images. We evaluate our method on various open-source and closed-source MLLMs for multi-image and single-image benchmarks. Experimental results indicate that QG-CoC demonstrates competitive performance across tasks and exhibits robust improvements in the challenging scenarios where existing prompting methods fail.
- oai:arXiv.org:2511.03206v1
- cs.CV
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- EMNLP 2025
- Kuei-Chun Kao, Hsu Tzu-Yin, Yunqi Hong, Ruochen Wang, Cho-Jui Hsieh
-
-
- A Study on Library Resources with Services Satisfaction based on Library Users Affiliated Colleges to Solapur University
- https://arxiv.org/abs/2511.03209
- arXiv:2511.03209v1 Announce Type: new
-Abstract: The main aim of this study was to assess and evaluate user satisfaction with library resources and services among library users affiliated with Solapur University. The study examines the level of user satisfaction with the various library resources and services offered by college libraries across 26 colleges affiliated with Solapur University, Maharashtra. Data were collected from these colleges and analyzed to determine the level of user satisfaction. The research found that a large majority of respondents were satisfied with the library facilities and services.
- oai:arXiv.org:2511.03209v1
- cs.DL
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- International Journal of Academic Research & Development (IJAR&D); Volume 6 Issue 2; Pages 73-78; 2020
- Patel Adam Burhansab, M Sadik Batcha, Muneer Ahmad
-
-
- Retrofitters, pragmatists and activists: Public interest litigation for accountable automated decision-making
- https://arxiv.org/abs/2511.03211
- arXiv:2511.03211v1 Announce Type: new
-Abstract: This paper examines the role of public interest litigation in promoting accountability for AI and automated decision-making (ADM) in Australia. Since ADM regulation faces geopolitical headwinds, effective governance will have to rely at least in part on the enforcement of existing laws. Drawing on interviews with Australian public interest litigators, technology policy activists, and technology law scholars, the paper positions public interest litigation as part of a larger ecosystem for transparency, accountability and justice with respect to ADM. It builds on one participant's characterisation of litigation about ADM as an exercise in legal retrofitting: adapting old laws to new circumstances. The paper's primary contribution is to aggregate, organise and present original insights on pragmatic strategies and tactics for effective public interest litigation about ADM. Naturally, it also contends with the limits of these strategies, and of the legal system. Where limits are, however, capable of being overcome, the paper presents findings on urgent needs: the enabling institutional arrangements without which effective litigation and accountability will falter. The paper is relevant to law and technology scholars; individuals and groups harmed by ADM; public interest litigators and technology lawyers; civil society and advocacy organisations; and policymakers.
- oai:arXiv.org:2511.03211v1
- cs.CY
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Henry Fraser, Zahra Stardust
-
-
- MvBody: Multi-View-Based Hybrid Transformer Using Optical 3D Body Scan for Explainable Cesarean Section Prediction
- https://arxiv.org/abs/2511.03212
- arXiv:2511.03212v1 Announce Type: new
-Abstract: Accurately assessing the risk of cesarean section (CS) delivery is critical, especially in settings with limited medical resources, where access to healthcare is often restricted. Early and reliable risk prediction allows better-informed prenatal care decisions and can improve maternal and neonatal outcomes. However, most existing predictive models are tailored for in-hospital use during labor and rely on parameters that are often unavailable in resource-limited or home-based settings. In this study, we conduct a pilot investigation to examine the feasibility of using 3D body shape for CS risk assessment for future applications with more affordable general devices. We propose a novel multi-view-based Transformer network, MvBody, which predicts CS risk using only self-reported medical data and 3D optical body scans obtained between the 31st and 38th weeks of gestation. To enhance training efficiency and model generalizability in data-scarce environments, we incorporate a metric learning loss into the network. Compared to widely used machine learning models and the latest advanced 3D analysis methods, our method demonstrates superior performance, achieving an accuracy of 84.62% and an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) of 0.724 on the independent test set. To improve transparency and trust in the model's predictions, we apply the Integrated Gradients algorithm to provide theoretically grounded explanations of the model's decision-making process. Our results indicate that pre-pregnancy weight, maternal age, obstetric history, previous CS history, and body shape, particularly around the head and shoulders, are key contributors to CS risk prediction.
- oai:arXiv.org:2511.03212v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ruting Cheng, Boyuan Feng, Yijiang Zheng, Chuhui Qiu, Aizierjiang Aiersilan, Joaquin A. Calderon, Wentao Zhao, Qing Pan, James K. Hahn
-
-
- Bayesian Advantage of Re-Identification Attack in the Shuffle Model
- https://arxiv.org/abs/2511.03213
- arXiv:2511.03213v1 Announce Type: new
-Abstract: The shuffle model, which anonymizes data by randomly permuting user messages, has been widely adopted in both cryptography and differential privacy. In this work, we present the first systematic study of the Bayesian advantage in re-identifying a user's message under the shuffle model. We begin with a basic setting: one sample is drawn from a distribution $P$, and $n - 1$ samples are drawn from a distribution $Q$, after which all $n$ samples are randomly shuffled. We define $\beta_n(P, Q)$ as the success probability of a Bayes-optimal adversary in identifying the sample from $P$, and define the additive and multiplicative Bayesian advantages as $\mathsf{Adv}_n^{+}(P, Q) = \beta_n(P,Q) - \frac{1}{n}$ and $\mathsf{Adv}_n^{\times}(P, Q) = n \cdot \beta_n(P,Q)$, respectively.
- We derive exact analytical expressions and asymptotic characterizations of $\beta_n(P, Q)$, along with evaluations in several representative scenarios. Furthermore, we establish (nearly) tight mutual bounds between the additive Bayesian advantage and the total variation distance.
- Finally, we extend our analysis beyond the basic setting and present, for the first time, an upper bound on the success probability of Bayesian attacks in shuffle differential privacy. Specifically, when the outputs of $n$ users--each processed through an $\varepsilon$-differentially private local randomizer--are shuffled, the probability that an attacker successfully re-identifies any target user's message is at most $e^{\varepsilon}/n$.
- oai:arXiv.org:2511.03213v1
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pengcheng Su, Haibo Cheng, Ping Wang
-
-
- LGM: Enhancing Large Language Models with Conceptual Meta-Relations and Iterative Retrieval
- https://arxiv.org/abs/2511.03214
- arXiv:2511.03214v1 Announce Type: new
-Abstract: Large language models (LLMs) exhibit strong semantic understanding, yet struggle when user instructions involve ambiguous or conceptually misaligned terms. We propose the Language Graph Model (LGM) to enhance conceptual clarity by extracting meta-relations (inheritance, alias, and composition) from natural language. The model further employs a reflection mechanism to validate these meta-relations. Leveraging a Concept Iterative Retrieval Algorithm, these relations and related descriptions are dynamically supplied to the LLM, improving its ability to interpret concepts and generate accurate responses. Unlike conventional Retrieval-Augmented Generation (RAG) approaches that rely on extended context windows, our method enables large language models to process texts of any length without the need for truncation. Experiments on standard benchmarks demonstrate that the LGM consistently outperforms existing RAG baselines.
- oai:arXiv.org:2511.03214v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wenchang Lei, Ping Zou, Yue Wang, Feng Sun, Lei Zhao
-
-
- Russian Contribution to Coronary Artery Disease Research: A Scientometric Mapping of Publications
- https://arxiv.org/abs/2511.03215
- arXiv:2511.03215v1 Announce Type: new
-Abstract: The present study attempts to highlight the research output generated in Russia in coronary artery disease (CAD) research during the period 1990-2019, to understand the distribution of research output, the top journals for publication, the most prolific authors, authorship patterns, and citation patterns. This study is based on secondary data extracted from the Science Citation Index (SCI), an integral component of the Web of Science. Descriptive and inferential statistical techniques were applied in the study. Russian scholars produced 5058 articles on coronary artery disease during 1990-2019, and they preferred to publish in Russian journals. The research contributions took the form of research articles, meeting abstracts, and reviews, with the numbers of editorial material and article; proceedings paper documents dropping consistently over time. Co-authorship was the norm in coronary artery disease research, with a steady increase in the number of multi-author documents in recent years.
- oai:arXiv.org:2511.03215v1
- cs.DL
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Library Philosophy and Practice (e-journal), 4683, 2021
- Muneer Ahmad, M Sadik Batcha
-
-
- Hybrid Fact-Checking that Integrates Knowledge Graphs, Large Language Models, and Search-Based Retrieval Agents Improves Interpretable Claim Verification
- https://arxiv.org/abs/2511.03217
- arXiv:2511.03217v1 Announce Type: new
-Abstract: Large language models (LLMs) excel in generating fluent utterances but can lack reliable grounding in verified information. At the same time, knowledge-graph-based fact-checkers deliver precise and interpretable evidence, yet suffer from limited coverage or latency. By integrating LLMs with knowledge graphs and real-time search agents, we introduce a hybrid fact-checking approach that leverages the individual strengths of each component. Our system comprises three autonomous steps: 1) a Knowledge Graph (KG) Retrieval step for rapid one-hop lookups in DBpedia, 2) an LM-based classification guided by a task-specific labeling prompt, producing outputs with internal rule-based logic, and 3) a Web Search Agent invoked only when KG coverage is insufficient. Our pipeline achieves an F1 score of 0.93 on the FEVER benchmark on the Supported/Refuted split without task-specific fine-tuning. To address Not Enough Information (NEI) cases, we conduct a targeted reannotation study showing that our approach frequently uncovers valid evidence for claims originally labeled as NEI, as confirmed by both expert annotators and LLM reviewers. With this paper, we present a modular, open-source fact-checking pipeline with fallback strategies and generalization across datasets.
- oai:arXiv.org:2511.03217v1
- cs.CL
- cs.AI
- cs.CY
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Shaghayegh Kolli, Richard Rosenbaum, Timo Cavelius, Lasse Strothe, Andrii Lata, Jana Diesner
-
-
- Diffusion-Guided Mask-Consistent Paired Mixing for Endoscopic Image Segmentation
- https://arxiv.org/abs/2511.03219
- arXiv:2511.03219v1 Announce Type: new
-Abstract: Augmentation for dense prediction typically relies on either sample mixing or generative synthesis. Mixing improves robustness but misaligned masks yield soft label ambiguity. Diffusion synthesis increases apparent diversity but, when trained as common samples, overlooks the structural benefit of mask conditioning and introduces synthetic-real domain shift. We propose a paired, diffusion-guided paradigm that fuses the strengths of both. For each real image, a synthetic counterpart is generated under the same mask and the pair is used as a controllable input for Mask-Consistent Paired Mixing (MCPMix), which mixes only image appearance while supervision always uses the original hard mask. This produces a continuous family of intermediate samples that smoothly bridges synthetic and real appearances under shared geometry, enlarging diversity without compromising pixel-level semantics. To keep learning aligned with real data, Real-Anchored Learnable Annealing (RLA) adaptively adjusts the mixing strength and the loss weight of mixed samples over training, gradually re-anchoring optimization to real data and mitigating distributional bias. Across Kvasir-SEG, PICCOLO, CVC-ClinicDB, a private NPC-LES cohort, and ISIC 2017, the approach achieves state-of-the-art segmentation performance and consistent gains over baselines. The results show that combining label-preserving mixing with diffusion-driven diversity, together with adaptive re-anchoring, yields robust and generalizable endoscopic segmentation.
- oai:arXiv.org:2511.03219v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pengyu Jie, Wanquan Liu, Rui He, Yihui Wen, Deyu Meng, Chenqiang Gao
-
-
- MHE in Output Feedback Control of Uncertain Nonlinear Systems via IQCs
- https://arxiv.org/abs/2511.03221
- arXiv:2511.03221v1 Announce Type: new
-Abstract: We propose a moving horizon estimation (MHE) scheme for general nonlinear constrained systems with parametric or static nonlinear uncertainties and a predetermined state feedback controller that is assumed to robustly stabilize the system in the absence of estimation errors. Leveraging integral quadratic constraints (IQCs), we introduce a new notion of detectability that is robust to possibly non-parametric uncertainties and verifiable in practice. Assuming that the uncertain system driven by the controller satisfies this notion of detectability, we provide an MHE formulation such that the closed-loop system formed of the uncertain system, the controller and MHE is input-to-state stable w.r.t. exogenous disturbances.
- oai:arXiv.org:2511.03221v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1109/LCSYS.2025.3629926
- Yang Guo, Stefan Streif
-
-
- Node-Based Editing for Multimodal Generation of Text, Audio, Image, and Video
- https://arxiv.org/abs/2511.03227
- arXiv:2511.03227v1 Announce Type: new
-Abstract: We present a node-based storytelling system for multimodal content generation. The system represents stories as graphs of nodes that can be expanded, edited, and iteratively refined through direct user edits and natural-language prompts. Each node can integrate text, images, audio, and video, allowing creators to compose multimodal narratives. A task selection agent routes between specialized generative tasks that handle story generation, node structure reasoning, node diagram formatting, and context generation. The interface supports targeted editing of individual nodes, automatic branching for parallel storylines, and node-based iterative refinement. Our results demonstrate that node-based editing supports control over narrative structure and iterative generation of text, images, audio, and video. We report quantitative outcomes on automatic story outline generation and qualitative observations of editing workflows. Finally, we discuss current limitations such as scalability to longer narratives and consistency across multiple nodes, and outline future work toward human-in-the-loop and user-centered creative AI tools.
- oai:arXiv.org:2511.03227v1
- cs.HC
- cs.AI
- cs.MM
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Alexander Htet Kyaw, Lenin Ravindranath Sivalingam
-
-
- Beyond Ranked Lists: The SARAL Framework for Cross-Lingual Document Set Retrieval
- https://arxiv.org/abs/2511.03228
- arXiv:2511.03228v1 Announce Type: new
-Abstract: Machine Translation for English Retrieval of Information in Any Language (MATERIAL) is an IARPA initiative targeted to advance the state of cross-lingual information retrieval (CLIR). This report provides a detailed description of Information Sciences Institute's (ISI's) Summarization and domain-Adaptive Retrieval Across Language's (SARAL's) effort for MATERIAL. Specifically, we outline our team's novel approach to handling CLIR, with an emphasis on developing an approach amenable to retrieving a query-relevant document \textit{set}, and not just a ranked document list. In MATERIAL's Phase-3 evaluations, SARAL exceeded the performance of other teams in five out of six evaluation conditions spanning three different languages (Farsi, Kazakh, and Georgian).
- oai:arXiv.org:2511.03228v1
- cs.CL
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Shantanu Agarwal, Joel Barry, Elizabeth Boschee, Scott Miller
-
-
- Smartphone User Fingerprinting on Wireless Traffic
- https://arxiv.org/abs/2511.03229
- arXiv:2511.03229v1 Announce Type: new
-Abstract: Due to the openness of the wireless medium, smartphone users are susceptible to user privacy attacks, where user privacy information is inferred from encrypted Wi-Fi wireless traffic. Existing attacks are limited to recognizing mobile apps and their actions and cannot infer the smartphone user identity, a fundamental part of user privacy. To overcome this limitation, we propose U-Print, a novel attack system that can passively recognize smartphone apps, actions, and users from over-the-air MAC-layer frames. We observe that smartphone users usually prefer different add-on apps and in-app actions, yielding different changing patterns in Wi-Fi traffic. U-Print first extracts multi-level traffic features and exploits customized temporal convolutional networks to recognize smartphone apps and actions, thus producing users' behavior sequences. Then, it leverages the silhouette coefficient method to determine the number of users and applies the k-means clustering to profile and identify smartphone users. We implement U-Print using a laptop with a Kali dual-band wireless network card and evaluate it in three real-world environments. U-Print achieves an overall accuracy of 98.4% and an F1 score of 0.983 for user inference. Moreover, it can correctly recognize up to 96% of apps and actions in the closed world and more than 86% in the open world.
- oai:arXiv.org:2511.03229v1
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yong Huang, Zhibo Dong, Xiaoguang Yang, Dalong Zhang, Qingxian Wang, Zhihua Wang
-
-
- Transformer-Progressive Mamba Network for Lightweight Image Super-Resolution
- https://arxiv.org/abs/2511.03232
- arXiv:2511.03232v1 Announce Type: new
-Abstract: Recently, Mamba-based super-resolution (SR) methods have demonstrated the ability to capture global receptive fields with linear complexity, addressing the quadratic computational cost of Transformer-based SR approaches. However, existing Mamba-based methods lack fine-grained transitions across different modeling scales, which limits the efficiency of feature representation. In this paper, we propose T-PMambaSR, a lightweight SR framework that integrates window-based self-attention with Progressive Mamba. By enabling interactions among receptive fields of different scales, our method establishes a fine-grained modeling paradigm that progressively enhances feature representation with linear complexity. Furthermore, we introduce an Adaptive High-Frequency Refinement Module (AHFRM) to recover high-frequency details lost during Transformer and Mamba processing. Extensive experiments demonstrate that T-PMambaSR progressively enhances the model's receptive field and expressiveness, yielding better performance than recent Transformer- or Mamba-based methods while incurring lower computational cost. Our codes will be released after acceptance.
- oai:arXiv.org:2511.03232v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sichen Guo, Wenjie Li, Yuanyang Liu, Guangwei Gao, Jian Yang, Chia-Wen Lin
-
-
- From Five Dimensions to Many: Large Language Models as Precise and Interpretable Psychological Profilers
- https://arxiv.org/abs/2511.03235
- arXiv:2511.03235v1 Announce Type: new
-Abstract: Psychological constructs within individuals are widely believed to be interconnected. We investigated whether and how Large Language Models (LLMs) can model the correlational structure of human psychological traits from minimal quantitative inputs. We prompted various LLMs with Big Five Personality Scale responses from 816 human individuals to role-play their responses on nine other psychological scales. LLMs demonstrated remarkable accuracy in capturing human psychological structure, with the inter-scale correlation patterns from LLM-generated responses strongly aligning with those from human data $(R^2 > 0.89)$. This zero-shot performance substantially exceeded predictions based on semantic similarity and approached the accuracy of machine learning algorithms trained directly on the dataset. Analysis of reasoning traces revealed that LLMs use a systematic two-stage process: First, they transform raw Big Five responses into natural language personality summaries through information selection and compression, analogous to generating sufficient statistics. Second, they generate target scale responses based on reasoning from these summaries. For information selection, LLMs identify the same key personality factors as trained algorithms, though they fail to differentiate item importance within factors. The resulting compressed summaries are not merely redundant representations but capture synergistic information--adding them to original scores enhances prediction alignment, suggesting they encode emergent, second-order patterns of trait interplay. Our findings demonstrate that LLMs can precisely predict individual participants' psychological traits from minimal data through a process of abstraction and reasoning, offering both a powerful tool for psychological simulation and valuable insights into their emergent reasoning capabilities.
- oai:arXiv.org:2511.03235v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yi-Fei Liu, Yi-Long Lu, Di He, Hang Zhang
-
-
- IndicSuperTokenizer: An Optimized Tokenizer for Indic Multilingual LLMs
- https://arxiv.org/abs/2511.03237
- arXiv:2511.03237v1 Announce Type: new
-Abstract: Tokenizers play a crucial role in determining the performance, training efficiency, and inference cost of Large Language Models (LLMs). Designing effective tokenizers for multilingual LLMs is particularly challenging due to diverse scripts and rich morphological variation. While subword methods such as Byte Pair Encoding (BPE) are widely adopted, their effectiveness in multilingual settings remains underexplored. We present IndicSuperTokenizer, a tokenizer for Indic multilingual LLMs that combines subword and multi-word tokenization, along with language-specific pre-tokenization, leading to more linguistically aligned tokens and achieving a new state-of-the-art in fertility score. Evaluated across English, 22 Indian languages, and code data, our tokenizer improves the average fertility score by 39.5% over LLaMA4 and by 18% over Sutra (the current best). This translates to a 44% improvement in inference throughput over LLaMA4 while maintaining comparable performance on English and Indic benchmarks. We also present detailed ablations across tokenizer training data size, vocabulary size, merging techniques, and pre-tokenization strategies, demonstrating the robustness of our design choices.
- oai:arXiv.org:2511.03237v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Souvik Rana, Arul Menezes, Ashish Kulkarni, Chandra Khatri, Shubham Agarwal
-
-
- Incorporating Quality of Life in Climate Adaptation Planning via Reinforcement Learning
- https://arxiv.org/abs/2511.03238
- arXiv:2511.03238v1 Announce Type: new
-Abstract: Urban flooding is expected to increase in frequency and severity as a consequence of climate change, causing wide-ranging impacts that include a decrease in urban Quality of Life (QoL). Meanwhile, policymakers must devise adaptation strategies that can cope with the uncertain nature of climate change and the complex and dynamic nature of urban flooding. Reinforcement Learning (RL) holds significant promise in tackling such complex, dynamic, and uncertain problems. Because of this, we use RL to identify which climate adaptation pathways lead to a higher QoL in the long term. We do this using an Integrated Assessment Model (IAM) which combines a rainfall projection model, a flood model, a transport accessibility model, and a quality of life index. Our preliminary results suggest that this approach can be used to learn optimal adaptation measures and it outperforms other realistic and real-world planning strategies. Our framework is publicly available: https://github.com/MLSM-at-DTU/maat_qol_framework.
- oai:arXiv.org:2511.03238v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Miguel Costa, Arthur Vandervoort, Martin Drews, Karyn Morrissey, Francisco C. Pereira
-
-
- A Feedback-Control Framework for Efficient Dataset Collection from In-Vehicle Data Streams
- https://arxiv.org/abs/2511.03239
- arXiv:2511.03239v1 Announce Type: new
-Abstract: Modern AI systems are increasingly constrained not by model capacity but by the quality and diversity of their data. Despite growing emphasis on data-centric AI, most datasets are still gathered in an open-loop manner that accumulates redundant samples without feedback from the current coverage. This results in inefficient storage, costly labeling, and limited generalization. To address this, this paper introduces FCDC, a paradigm that formulates data collection as a closed-loop control problem. FCDC continuously approximates the state of the collected data distribution using an online probabilistic model and adaptively regulates sample retention based on feedback signals such as likelihood and Mahalanobis distance. Through this feedback mechanism, the system dynamically balances exploration and exploitation, maintains dataset diversity, and prevents redundancy from accumulating over time. Besides showcasing the controllability of FCDC on a synthetic dataset, experiments on a real data stream show that FCDC produces datasets that are 25.9% more balanced while reducing data storage by 39.8%. These results demonstrate that data collection itself can be actively controlled, transforming collection from a passive pipeline stage into a self-regulating, feedback-driven process at the core of data-centric AI.
- oai:arXiv.org:2511.03239v1
- cs.LG
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Philipp Reis, Philipp Rigoll, Christian Steinhauser, Jacob Langner, Eric Sax
-
-
- A unified physics-informed generative operator framework for general inverse problems
- https://arxiv.org/abs/2511.03241
- arXiv:2511.03241v1 Announce Type: new
-Abstract: Solving inverse problems governed by partial differential equations (PDEs) is central to science and engineering, yet remains challenging when measurements are sparse, noisy, or when the underlying coefficients are high-dimensional or discontinuous. Existing deep learning approaches either require extensive labeled datasets or are limited to specific measurement types, often leading to failure in such regimes and restricting their practical applicability. Here, a novel generative neural operator framework, IGNO, is introduced to overcome these limitations. IGNO unifies the solution of inverse problems from both point measurements and operator-valued data without labeled training pairs. This framework encodes high-dimensional, potentially discontinuous coefficient fields into a low-dimensional latent space, which drives neural operator decoders to reconstruct both coefficients and PDE solutions. Training relies purely on physics constraints through PDE residuals, while inversion proceeds via efficient gradient-based optimization in latent space, accelerated by an a priori normalizing flow model. Across a diverse set of challenging inverse problems, including recovery of discontinuous coefficients from solution-based measurements and the EIT problem with operator-based measurements, IGNO consistently achieves accurate, stable, and scalable inversion even under severe noise. It consistently outperforms the state-of-the-art method under varying noise levels and demonstrates strong generalization to out-of-distribution targets. These results establish IGNO as a unified and powerful framework for tackling challenging inverse problems across computational science domains.
- oai:arXiv.org:2511.03241v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Gang Bao, Yaohua Zang
-
-
- Climate Adaptation with Reinforcement Learning: Economic vs. Quality of Life Adaptation Pathways
- https://arxiv.org/abs/2511.03243
- arXiv:2511.03243v1 Announce Type: new
-Abstract: Climate change will cause an increase in the frequency and severity of flood events, prompting the need for cohesive adaptation policymaking. Designing effective adaptation policies, however, depends on managing the uncertainty of long-term climate impacts. Meanwhile, such policies can feature important normative choices that are not always made explicit. We propose that Reinforcement Learning (RL) can be a useful tool both to identify adaptation pathways under uncertain conditions and to allow for the explicit modelling (and consequent comparison) of different adaptation priorities (e.g. economic vs. wellbeing). We use an Integrated Assessment Model (IAM) to link together a rainfall and flood model, and compute the impacts of flooding in terms of quality of life (QoL), transportation, and infrastructure damage. Our results show that models prioritising QoL over economic impacts result in more adaptation spending as well as a more even distribution of spending over the study area, highlighting the extent to which such normative assumptions can alter adaptation policy. Our framework is publicly available: https://github.com/MLSM-at-DTU/maat_qol_framework.
- oai:arXiv.org:2511.03243v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Miguel Costa, Arthur Vandervoort, Martin Drews, Karyn Morrissey, Francisco C. Pereira
-
-
- Why Not Put a Microphone Near the Loudspeaker? A New Paradigm for Acoustic Echo Cancellation
- https://arxiv.org/abs/2511.03244
- arXiv:2511.03244v1 Announce Type: new
-Abstract: Acoustic echo cancellation (AEC) remains challenging in real-world environments due to nonlinear distortions caused by low-cost loudspeakers and complex room acoustics. To mitigate these issues, we introduce a dual-microphone configuration, where an auxiliary reference microphone is placed near the loudspeaker to capture the nonlinearly distorted far-end signal. Although this reference signal is contaminated by near-end speech, we propose a preprocessing module based on Wiener filtering to estimate a compressed time-frequency mask to suppress near-end components. This purified reference signal enables a more effective linear AEC stage, whose residual error signal is then fed to a deep neural network for joint residual echo and noise suppression. Evaluation results show that our method outperforms baseline approaches on matched test sets. To evaluate its robustness under strong nonlinearities, we further test it on a mismatched dataset and observe that it achieves substantial performance gains. These results demonstrate its effectiveness in practical scenarios where the nonlinear distortions are typically unknown.
- oai:arXiv.org:2511.03244v1
- cs.SD
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fei Zhao, Zhong-Qiu Wang
-
-
- Decoupled Multi-Predictor Optimization for Inference-Efficient Model Tuning
- https://arxiv.org/abs/2511.03245
- arXiv:2511.03245v1 Announce Type: new
-Abstract: Recently, remarkable progress has been made in large-scale pre-trained model tuning, and inference efficiency is becoming more crucial for practical deployment. Early exiting in conjunction with multi-stage predictors, when combined with a parameter-efficient fine-tuning strategy, offers a straightforward way to achieve an inference-efficient model. However, a key challenge remains unresolved: How can early stages provide low-level fundamental features to deep stages while simultaneously supplying high-level discriminative features to early-stage predictors? To address this problem, we propose a Decoupled Multi-Predictor Optimization (DMPO) method to effectively decouple the low-level representative ability and high-level discriminative ability in early stages. First, in terms of architecture, we introduce a lightweight bypass module into multi-stage predictors for functional decomposition of shallow features from early stages, while a high-order statistics-based predictor is developed for early stages to effectively enhance their discriminative ability. To reasonably train our multi-predictor architecture, a decoupled optimization is proposed to allocate two-phase loss weights for multi-stage predictors during model tuning, where the initial training phase enables the model to prioritize the acquisition of discriminative ability of deep stages by emphasizing the representative ability of early stages, and the latter training phase drives discriminative ability towards earlier stages as much as possible. As such, our DMPO can effectively decouple representative and discriminative abilities in early stages in terms of architecture design and model optimization. Experiments across various datasets and pre-trained backbones demonstrate that DMPO clearly outperforms its counterparts when reducing computational cost.
- oai:arXiv.org:2511.03245v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Liwei Luo, Shuaitengyuan Li, Dongwei Ren, Qilong Wang, Pengfei Zhu, Qinghua Hu
-
-
- Death by a Thousand Prompts: Open Model Vulnerability Analysis
- https://arxiv.org/abs/2511.03247
- arXiv:2511.03247v1 Announce Type: new
-Abstract: Open-weight models provide researchers and developers with accessible foundations for diverse downstream applications. We tested the safety and security postures of eight open-weight large language models (LLMs) to identify vulnerabilities that may impact subsequent fine-tuning and deployment. Using automated adversarial testing, we measured each model's resilience against single-turn and multi-turn prompt injection and jailbreak attacks. Our findings reveal pervasive vulnerabilities across all tested models, with multi-turn attacks achieving success rates between 25.86\% and 92.78\% -- representing a $2\times$ to $10\times$ increase over single-turn baselines. These results underscore a systemic inability of current open-weight models to maintain safety guardrails across extended interactions. We assess that alignment strategies and lab priorities significantly influence resilience: capability-focused models such as Llama 3.3 and Qwen 3 demonstrate higher multi-turn susceptibility, whereas safety-oriented designs such as Google Gemma 3 exhibit more balanced performance.
- The analysis concludes that open-weight models, while crucial for innovation, pose tangible operational and ethical risks when deployed without layered security controls. These findings are intended to inform practitioners and developers of the potential risks and the value of professional AI security solutions to mitigate exposure. Addressing multi-turn vulnerabilities is essential to ensure the safe, reliable, and responsible deployment of open-weight LLMs in enterprise and public domains. We recommend adopting a security-first design philosophy and layered protections to ensure resilient deployments of open-weight models.
- oai:arXiv.org:2511.03247v1
- cs.CR
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan, Adam Swanda
-
-
- Auditing M-LLMs for Privacy Risks: A Synthetic Benchmark and Evaluation Framework
- https://arxiv.org/abs/2511.03248
- arXiv:2511.03248v1 Announce Type: new
-Abstract: Recent advances in multi-modal Large Language Models (M-LLMs) have demonstrated a powerful ability to synthesize implicit information from disparate sources, including images and text. These rich data from social media also introduce a significant and underexplored privacy risk: the inference of sensitive personal attributes from seemingly everyday media content. However, the lack of benchmarks and comprehensive evaluations of state-of-the-art M-LLM capabilities hinders research on private attribute profiling on social media. Accordingly, we propose (1) PRISM, the first multi-modal, multi-dimensional and fine-grained synthesized dataset incorporating a comprehensive privacy landscape and dynamic user history; and (2) an efficient evaluation framework that measures the cross-modal privacy inference capabilities of advanced M-LLMs. Specifically, PRISM is a large-scale synthetic benchmark designed to evaluate cross-modal privacy risks. Its key feature is 12 sensitive attribute labels across a diverse set of multi-modal profiles, which enables targeted privacy analysis. These profiles are generated via a sophisticated LLM agentic workflow, governed by a prior distribution to ensure they realistically mimic social media users. Additionally, we propose a Multi-Agent Inference Framework that leverages a pipeline of specialized LLMs to enhance evaluation capabilities. We evaluate the inference capabilities of six leading M-LLMs (Qwen, Gemini, GPT-4o, GLM, Doubao, and Grok) on PRISM. The comparison with human performance reveals that these M-LLMs significantly outperform humans in accuracy and efficiency, highlighting the threat of potential privacy risks and the urgent need for robust defenses.
- oai:arXiv.org:2511.03248v1
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Junhao Li, Jiahao Chen, Zhou Feng, Chunyi Zhou
-
-
- Theoretical and Experimental Limitations of RoCoF Estimation
- https://arxiv.org/abs/2511.03249
- arXiv:2511.03249v1 Announce Type: new
-Abstract: A precise estimation of the Rate of Change of Frequency (RoCoF) is crucial for secure power system operation. In fact, RoCoF is strictly related to the amount of the available physical and/or virtual inertia of the system and the severity of the active power unbalance following a disturbance. For this reason, it is widely exploited in different protection systems, e.g., Anti-Islanding, Under Frequency Load Shedding (UFLS) and wide-area protection systems. The new paradigm of modern power systems, with low-inertia, converter-based generation assets, is increasing the transient severity, making frequency and RoCoF estimation more complex and less precise for current devices. This work addresses this issue by proposing a numerically robust approach based on concepts inherited from differential geometry and fluid mechanics. The proposed approach is then tested with high-sampling real experimental measurements and used to develop a faster control logic for a RoCoF-based UFLS control scheme. The proposed approach provides protections with information regarding the nature of the contingency, which can be used to improve their response.
- oai:arXiv.org:2511.03249v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Gutierrez-Florensa, F. Sanniti, D. Tedeschi, L. Sigrist, A. Ortega, F. Milano
-
-
- GMoPE: A Prompt-Expert Mixture Framework for Graph Foundation Models
- https://arxiv.org/abs/2511.03251
- arXiv:2511.03251v1 Announce Type: new
-Abstract: Graph Neural Networks (GNNs) have demonstrated impressive performance on task-specific benchmarks, yet their ability to generalize across diverse domains and tasks remains limited. Existing approaches often struggle with negative transfer, scalability issues, and high adaptation costs. To address these challenges, we propose GMoPE (Graph Mixture of Prompt-Experts), a novel framework that seamlessly integrates the Mixture-of-Experts (MoE) architecture with prompt-based learning for graphs. GMoPE leverages expert-specific prompt vectors and structure-aware MoE routing to enable each expert to specialize in distinct subdomains and dynamically contribute to predictions. To promote diversity and prevent expert collapse, we introduce a soft orthogonality constraint across prompt vectors, encouraging expert specialization and facilitating a more balanced expert utilization. Additionally, we adopt a prompt-only fine-tuning strategy that significantly reduces spatiotemporal complexity during transfer. We validate GMoPE through extensive experiments under various pretraining strategies and multiple downstream tasks. Results show that GMoPE consistently outperforms state-of-the-art baselines and achieves performance comparable to full parameter fine-tuning-while requiring only a fraction of the adaptation overhead. Our work provides a principled and scalable framework for advancing generalizable and efficient graph foundation models.
- oai:arXiv.org:2511.03251v1
- cs.LG
- cs.AI
- cs.SI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Zhibin Wang, Zhixing Zhang, Shuqi Wang, Xuanting Xie, Zhao Kang
-
-
- Generative deep learning for foundational video translation in ultrasound
- https://arxiv.org/abs/2511.03255
- arXiv:2511.03255v1 Announce Type: new
-Abstract: Deep learning (DL) has the potential to revolutionize image acquisition and interpretation across medicine; however, attention to data imbalance and missingness is required. Ultrasound data presents a particular challenge because, in addition to different views and structures, it includes several sub-modalities-such as greyscale and color flow Doppler (CFD)-that are often imbalanced in clinical studies. Image translation can help balance datasets but has to date been challenging for ultrasound sub-modalities. Here, we present a generative method for ultrasound CFD-greyscale video translation, trained on 54,975 videos and tested on 8,368. The method leveraged pixel-wise, adversarial, and perceptual losses and utilized two networks: one for reconstructing anatomic structures and one for denoising to achieve realistic ultrasound imaging. Average pairwise SSIM between synthetic videos and ground truth was 0.91+/-0.04. Synthetic videos performed indistinguishably from real ones in DL classification and segmentation tasks and when evaluated by blinded clinical experts: F1 score was 0.9 for real and 0.89 for synthetic videos; Dice score between real and synthetic segmentation was 0.97. Overall clinician accuracy in distinguishing real vs. synthetic videos was 54+/-6% (42-61%), indicating realistic synthetic videos. Although trained only on heart videos, the model worked well on ultrasound spanning several clinical domains (average SSIM 0.91+/-0.05), demonstrating foundational abilities. Together, these data expand the utility of retrospectively collected imaging and augment the dataset design toolbox for medical imaging.
- oai:arXiv.org:2511.03255v1
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Nikolina Tomic, Roshni Bhatnagar, Sarthak Jain, Connor Lau, Tien-Yu Liu, Laura Gambini, Rima Arnaout
-
-
- Decoupled Entropy Minimization
- https://arxiv.org/abs/2511.03256
- arXiv:2511.03256v1 Announce Type: new
-Abstract: Entropy Minimization (EM) is beneficial to reducing class overlap, bridging domain gap, and restricting uncertainty for various tasks in machine learning, yet its potential is limited. To study the internal mechanism of EM, we reformulate and decouple the classical EM into two parts with opposite effects: cluster aggregation driving factor (CADF) rewards dominant classes and prompts a peaked output distribution, while gradient mitigation calibrator (GMC) penalizes high-confidence classes based on predicted probabilities. Furthermore, we reveal the limitations of classical EM caused by its coupled formulation: 1) reward collapse impedes the contribution of high-certainty samples in the learning process, and 2) easy-class bias induces misalignment between output distribution and label distribution. To address these issues, we propose Adaptive Decoupled Entropy Minimization (AdaDEM), which normalizes the reward brought from CADF and employs a marginal entropy calibrator (MEC) to replace GMC. AdaDEM outperforms DEM*, an upper-bound variant of classical EM, and achieves superior performance across various imperfectly supervised learning tasks in noisy and dynamic environments.
- oai:arXiv.org:2511.03256v1
- cs.LG
- cs.CV
- cs.IT
- math.IT
- math.ST
- stat.ML
- stat.TH
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jing Ma, Hanlin Li, Xiang Xiang
-
-
- Enhancing Medical Image Segmentation via Heat Conduction Equation
- https://arxiv.org/abs/2511.03260
- arXiv:2511.03260v1 Announce Type: new
-Abstract: Medical image segmentation has been significantly advanced by deep learning architectures, notably U-Net variants. However, existing models struggle to simultaneously achieve efficient global context modeling and long-range dependency reasoning under practical computational budgets. In this work, we propose a novel hybrid architecture utilizing U-Mamba with the Heat Conduction Equation. Our model combines Mamba-based state-space modules for efficient long-range reasoning with Heat Conduction Operators (HCOs) in the bottleneck layers, simulating frequency-domain thermal diffusion for enhanced semantic abstraction. Experimental results on multimodal abdominal CT and MRI datasets demonstrate that the proposed model consistently outperforms strong baselines, validating its effectiveness and generalizability. This suggests that blending state-space dynamics with heat-based global diffusion offers a scalable and interpretable solution for medical segmentation tasks.
- oai:arXiv.org:2511.03260v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Rong Wu, Yim-Sang Yu
-
-
- Comparing the Performance of LLMs in RAG-based Question-Answering: A Case Study in Computer Science Literature
- https://arxiv.org/abs/2511.03261
- arXiv:2511.03261v1 Announce Type: new
-Abstract: Retrieval Augmented Generation (RAG) is emerging as a powerful technique to enhance the capabilities of Generative AI models by reducing hallucination. Thus, the increasing prominence of RAG alongside Large Language Models (LLMs) has sparked interest in comparing the performance of different LLMs in question-answering (QA) in diverse domains. This study compares the performance of four open-source LLMs, Mistral-7b-instruct, LLaMa2-7b-chat, Falcon-7b-instruct and Orca-mini-v3-7b, and OpenAI's trending GPT-3.5 over QA tasks within the computer science literature leveraging RAG support. Evaluation metrics employed in the study include accuracy and precision for binary questions and ranking by a human expert, ranking by Google's AI model Gemini, alongside cosine similarity for long-answer questions. GPT-3.5, when paired with RAG, effectively answers binary and long-answer questions, reaffirming its status as an advanced LLM. Regarding open-source LLMs, Mistral AI's Mistral-7b-instruct paired with RAG surpasses the rest in answering both binary and long-answer questions. However, among the open-source LLMs, Orca-mini-v3-7b reports the shortest average latency in generating responses, whereas LLaMa2-7b-chat by Meta reports the highest average latency. This research underscores the fact that open-source LLMs, too, can go hand in hand with proprietary models like GPT-3.5 with better infrastructure.
- oai:arXiv.org:2511.03261v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1007/978-981-97-9255-9_26
- Lecture Notes on Data Engineering and Communications Technologies, vol. 228, Springer, 2025, pp. 387--403
- Ranul Dayarathne, Uvini Ranaweera, Upeksha Ganegoda
-
-
- Computing the nearest $\Omega$-admissible descriptor dissipative Hamiltonian system
- https://arxiv.org/abs/2511.03265
- arXiv:2511.03265v1 Announce Type: new
-Abstract: For a given set $\Omega \subseteq \mathbb{C}$, a matrix pair $(E,A)$ is called $\Omega$-admissible if it is regular, impulse-free and its eigenvalues lie inside the region $\Omega$. In this paper, we provide a dissipative Hamiltonian characterization for the matrix pairs that are $\Omega$-admissible where $\Omega$ is an LMI region. We then use these results for solving the nearest $\Omega$-admissible matrix pair problem: Given a matrix pair $(E,A)$, find the nearest $\Omega$-admissible pair $(\tilde E, \tilde A)$ to the given pair $(E,A)$. We illustrate our results on several data sets and compare with the state of the art.
- oai:arXiv.org:2511.03265v1
- math.NA
- cs.NA
- cs.SY
- eess.SY
- math.OC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Vaishali Aggarwal, Nicolas Gillis, Punit Sharma
-
-
- IEC3D-AD: A 3D Dataset of Industrial Equipment Components for Unsupervised Point Cloud Anomaly Detection
- https://arxiv.org/abs/2511.03267
- arXiv:2511.03267v1 Announce Type: new
-Abstract: 3D anomaly detection (3D-AD) plays a critical role in industrial manufacturing, particularly in ensuring the reliability and safety of core equipment components. Although existing 3D datasets like Real3D-AD and MVTec 3D-AD offer broad application support, they fall short in capturing the complexities and subtle defects found in real industrial environments. This limitation hampers precise anomaly detection research, especially for industrial equipment components (IEC) such as bearings, rings, and bolts. To address this challenge, we have developed a point cloud anomaly detection dataset (IEC3D-AD) specific to real industrial scenarios. This dataset is directly collected from actual production lines, ensuring high fidelity and relevance. Compared to existing datasets, IEC3D-AD features significantly improved point cloud resolution and defect annotation granularity, facilitating more demanding anomaly detection tasks. Furthermore, inspired by generative 2D-AD methods, we introduce a novel 3D-AD paradigm (GMANet) on IEC3D-AD. This paradigm generates synthetic point cloud samples based on geometric morphological analysis, then reduces the margin and increases the overlap between normal and abnormal point-level features through spatial discrepancy optimization. Extensive experiments demonstrate the effectiveness of our method on both IEC3D-AD and other datasets.
- oai:arXiv.org:2511.03267v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Bingyang Guo, Hongjie Li, Ruiyun Yu, Hanzhe Liang, Jinbao Wang
-
-
- SCALE: Upscaled Continual Learning of Large Language Models
- https://arxiv.org/abs/2511.03270
- arXiv:2511.03270v1 Announce Type: new
-Abstract: We revisit continual pre-training for large language models and argue that progress now depends more on scaling the right structure than on scaling parameters alone. We introduce SCALE, a width upscaling architecture that inserts lightweight expansion into linear modules while freezing all pre-trained parameters. This preserves the residual and attention topologies and increases capacity without perturbing the base model's original functionality. SCALE is guided by two principles: Persistent Preservation, which maintains the base model's behavior via preservation-oriented initialization and freezing of the pre-trained weights, and Collaborative Adaptation, which selectively trains a subset of expansion components to acquire new knowledge with minimal interference. We instantiate these ideas as SCALE-Preserve (preservation-first), SCALE-Adapt (adaptation-first), and SCALE-Route, an optional routing extension that performs token-level routing between preservation and adaptation heads. On a controlled synthetic biography benchmark, SCALE mitigates the severe forgetting observed with depth expansion while still acquiring new knowledge. In continual pre-training on a Korean corpus, SCALE variants achieve less forgetting on English evaluations and competitive gains on Korean benchmarks, with these variants offering the best overall stability-plasticity trade-off. Accompanying analysis clarifies when preservation provably holds and why the interplay between preservation and adaptation stabilizes optimization compared to standard continual learning setups.
- oai:arXiv.org:2511.03270v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jin-woo Lee, Junhwa Choi, Bongkyu Hwang, Jinho Choo, Bogun Kim, JeongSeon Yi, Joonseok Lee, DongYoung Jung, Jaeseon Park, Kyoungwon Park, Suk-hoon Jung
-
-
- Let the Bees Find the Weak Spots: A Path Planning Perspective on Multi-Turn Jailbreak Attacks against LLMs
- https://arxiv.org/abs/2511.03271
- arXiv:2511.03271v1 Announce Type: new
-Abstract: Large Language Models (LLMs) have been widely deployed across various applications, yet their potential security and ethical risks have raised increasing concerns. Existing research employs red teaming evaluations, utilizing multi-turn jailbreaks to identify potential vulnerabilities in LLMs. However, these approaches often lack exploration of successful dialogue trajectories within the attack space, and they tend to overlook the considerable overhead associated with the attack process. To address these limitations, this paper first introduces a theoretical model based on dynamically weighted graph topology, abstracting the multi-turn attack process as a path planning problem. Based on this framework, we propose ABC, an enhanced Artificial Bee Colony algorithm for multi-turn jailbreaks, featuring a collaborative search mechanism with employed, onlooker, and scout bees. This algorithm significantly improves the efficiency of optimal attack path search while substantially reducing the average number of queries required. Empirical evaluations on three open-source and two proprietary language models demonstrate the effectiveness of our approach, achieving attack success rates above 90\% across the board, with a peak of 98\% on GPT-3.5-Turbo, and outperforming existing baselines. Furthermore, it achieves comparable success with only 26 queries on average, significantly reducing red teaming overhead and highlighting its superior efficiency.
- oai:arXiv.org:2511.03271v1
- cs.CR
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yize Liu, Yunyun Hou, Aina Sui
-
-
- Unified Long Video Inpainting and Outpainting via Overlapping High-Order Co-Denoising
- https://arxiv.org/abs/2511.03272
- arXiv:2511.03272v1 Announce Type: new
-Abstract: Generating long videos remains a fundamental challenge, and achieving high controllability in video inpainting and outpainting is particularly demanding. To address both of these challenges simultaneously and achieve controllable video inpainting and outpainting for long video clips, we introduce a novel and unified approach for long video inpainting and outpainting that extends text-to-video diffusion models to generate arbitrarily long, spatially edited videos with high fidelity. Our method leverages LoRA to efficiently fine-tune a large pre-trained video diffusion model, such as Alibaba's Wan 2.1, for masked-region video synthesis, and employs an overlap-and-blend temporal co-denoising strategy with high-order solvers to maintain consistency across long sequences. In contrast to prior work that struggles with fixed-length clips or exhibits stitching artifacts, our system enables arbitrarily long video generation and editing without noticeable seams or drift. We validate our approach on challenging inpainting/outpainting tasks, including editing or adding objects over hundreds of frames, and demonstrate performance superior to baseline methods such as the Wan 2.1 model and VACE in terms of quality (PSNR/SSIM) and perceptual realism (LPIPS). Our method enables practical long-range video editing with minimal overhead, achieving a balance between parameter efficiency and performance.
- oai:arXiv.org:2511.03272v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shuangquan Lyu, Steven Mao, Yue Ma
-
-
- Diffusion Language Models are Super Data Learners
- https://arxiv.org/abs/2511.03276
- arXiv:2511.03276v1 Announce Type: new
-Abstract: Under strictly controlled pre-training settings, we observe a Crossover: when unique data is limited, diffusion language models (DLMs) consistently surpass autoregressive (AR) models by training for more epochs. The crossover shifts later with more or higher-quality data, earlier with larger models, and persists across dense and sparse architectures. We attribute the gains to three compounding factors: (1) any-order modeling, (2) super-dense compute from iterative bidirectional denoising, and (3) built-in Monte Carlo augmentation; input or parameter noise improves AR under data constraint but cannot close the gap. At scale, a 1.7B DLM trained with a ~1.5T-token compute budget on 10B unique Python tokens overtakes an AR coder trained with strictly matched settings. In addition, a 1B-parameter DLM achieves > 56% accuracy on HellaSwag and > 33% on MMLU using only 1B tokens, without any special tricks, just by repeating standard pre-training data. We also show that rising validation cross-entropy does not imply degraded downstream performance in this regime.
- oai:arXiv.org:2511.03276v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jinjie Ni, Qian Liu, Longxu Dou, Chao Du, Zili Wang, Hang Yan, Tianyu Pang, Michael Qizhe Shieh
-
-
- Multi-Objective Adaptive Rate Limiting in Microservices Using Deep Reinforcement Learning
- https://arxiv.org/abs/2511.03279
- arXiv:2511.03279v1 Announce Type: new
-Abstract: As cloud computing and microservice architectures become increasingly prevalent, API rate limiting has emerged as a critical mechanism for ensuring system stability and service quality. Traditional rate limiting algorithms, such as token bucket and sliding window, while widely adopted, struggle to adapt to dynamic traffic patterns and varying system loads. This paper proposes an adaptive rate limiting strategy based on deep reinforcement learning that dynamically balances system throughput and service latency. We design a hybrid architecture combining Deep Q-Network (DQN) and Asynchronous Advantage Actor-Critic (A3C) algorithms, modeling the rate limiting decision process as a Markov Decision Process. The system continuously monitors microservice states and learns optimal rate limiting policies through environmental interaction. Extensive experiments conducted in a Kubernetes cluster environment demonstrate that our approach achieves 23.7% throughput improvement and 31.4% P99 latency reduction compared to traditional fixed-threshold strategies under high-load scenarios. Results from a 90-day production deployment handling 500 million daily requests validate the practical effectiveness of the proposed method, with 82% reduction in service degradation incidents and 68% decrease in manual interventions.
- oai:arXiv.org:2511.03279v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ning Lyu, Yuxi Wang, Ziyu Cheng, Qingyuan Zhang, Feng Chen
-
-
- A Probabilistic Approach to Pose Synchronization for Multi-Reference Alignment with Applications to MIMO Wireless Communication Systems
- https://arxiv.org/abs/2511.03280
- arXiv:2511.03280v1 Announce Type: new
-Abstract: From molecular imaging to wireless communications, the ability to align and reconstruct signals from multiple misaligned observations is crucial for system performance. We study the problem of multi-reference alignment (MRA), which arises in many real-world problems, such as cryo-EM, computer vision, and, in particular, wireless communication systems. Using a probabilistic approach to model MRA, we find a new algorithm that uses relative poses as nuisance variables to marginalize out -- thereby removing the global symmetries of the problem and allowing for more direct solutions and improved convergence. The decentralization of this approach enables significant computational savings by avoiding the cubic scaling of centralized methods through cycle consistency. Both proposed algorithms achieve lower reconstruction error across experimental settings.
- oai:arXiv.org:2511.03280v1
- cs.LG
- stat.AP
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Rob Romijnders, Gabriele Cesa, Christos Louizos, Kumar Pratik, Arash Behboodi
-
-
- When Generative Artificial Intelligence meets Extended Reality: A Systematic Review
- https://arxiv.org/abs/2511.03282
- arXiv:2511.03282v1 Announce Type: new
-Abstract: With the continuous advancement of technology, the application of generative artificial intelligence (AI) in various fields is gradually demonstrating great potential, particularly when combined with Extended Reality (XR), creating unprecedented possibilities. This survey article systematically reviews the applications of generative AI in XR, covering as much relevant literature as possible from 2023 to 2025. The application areas of generative AI in XR and its key technology implementations are summarised through PRISMA screening and analysis of the final 26 articles. The survey highlights existing articles from the last three years related to how XR utilises generative AI, providing insights into current trends and research gaps. We also explore potential opportunities for future research to further empower XR through generative AI, providing guidance and information for future generative XR research.
- oai:arXiv.org:2511.03282v1
- cs.HC
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- 10.1080/10447318.2025.2565392
- Xinyu Ning, Yan Zhuo, Xian Wang, Chan-In Devin Sio, Lik-Hang Lee
-
-
- Graph Neural AI with Temporal Dynamics for Comprehensive Anomaly Detection in Microservices
- https://arxiv.org/abs/2511.03285
- arXiv:2511.03285v1 Announce Type: new
-Abstract: This study addresses the problem of anomaly detection and root cause tracing in microservice architectures and proposes a unified framework that combines graph neural networks with temporal modeling. The microservice call chain is abstracted as a directed graph, where multidimensional features of nodes and edges are used to construct a service topology representation, and graph convolution is applied to aggregate features across nodes and model dependencies, capturing complex structural relationships among services. On this basis, gated recurrent units are introduced to model the temporal evolution of call chains, and multi-layer stacking and concatenation operations are used to jointly obtain structural and temporal representations, improving the ability to identify anomaly patterns. Furthermore, anomaly scoring functions at both the node and path levels are defined to achieve unified modeling from local anomaly detection to global call chain tracing, which enables the identification of abnormal service nodes and the reconstruction of potential anomaly propagation paths. Sensitivity experiments are then designed from multiple dimensions, including hyperparameters, environmental disturbances, and data distribution, to evaluate the framework, and results show that it outperforms baseline methods in key metrics such as AUC, ACC, Recall, and F1-Score, maintaining high accuracy and stability under dynamic topologies and complex environments. This research not only provides a new technical path for anomaly detection in microservices but also lays a methodological foundation for intelligent operations in distributed systems.
- oai:arXiv.org:2511.03285v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qingyuan Zhang, Ning Lyu, Le Liu, Yuxi Wang, Ziyu Cheng, Cancan Hua
-
-
- Characterising Global Platforms: Centralised, Decentralised, Federated, and Grassroots
- https://arxiv.org/abs/2511.03286
- arXiv:2511.03286v1 Announce Type: new
-Abstract: Global digital platforms are software systems designed to serve entire populations, with some already serving billions of people. We propose atomic transactions-based multiagent transition systems and protocols as a formal framework to study them; introduce essential agents -- minimal sets of agents the removal of which makes communication impossible; and show that the cardinality of essential agents partitions all global platforms into four classes:
- 1. Centralised -- one (the server)
- 2. Decentralised -- finite $>1$ (bootstrap nodes)
- 3. Federated -- infinite but not universal (all servers)
- 4. Grassroots -- universal (all agents)
- Our illustrative formal example is a global social network, for which we provide centralised, decentralised, federated, and grassroots specifications via multiagent atomic transactions, and prove they satisfy basic correctness properties. We discuss informally additional global platforms -- currencies, ``sharing economy'' apps, AI, and more. While this may be the first characterisation of centralised, decentralised, and federated global platforms, grassroots platforms have been formally defined previously, but using different notions. Here, we prove that their original definition implies that all agents are essential, placing grassroots platforms in a distinct class within the broader formal context that includes all global platforms. This work provides the first mathematical framework for classifying any global platform -- existing or imagined -- by providing a multiagent atomic-transactions specification of it and determining the cardinality of the minimal set of essential agents in the ensuing multiagent protocol. It thus
- oai:arXiv.org:2511.03286v1
- cs.DC
- cs.MA
- cs.SE
- cs.SI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Ehud Shapiro
-
-
- Optimal Stopping with a Predicted Prior
- https://arxiv.org/abs/2511.03289
- arXiv:2511.03289v1 Announce Type: new
-Abstract: There are two major models of value uncertainty in the optimal stopping literature: the secretary model, which assumes no prior knowledge, and the prophet inequality model, which assumes full information about value distributions. In practice, decision makers often rely on machine-learned priors that may be erroneous. Motivated by this gap, we formulate the model of optimal stopping with a predicted prior to design algorithms that are both consistent, exploiting the prediction when accurate, and robust, retaining worst-case guarantees when it is not.
- Existing secretary and prophet inequality algorithms are either pessimistic in consistency or not robust to misprediction. A randomized combination only interpolates their guarantees linearly. We show that a family of bi-criteria algorithms achieves improved consistency-robustness trade-offs, both for maximizing the expected accepted value and for maximizing the probability of accepting the maximum value. We further prove that for the latter objective, no algorithm can simultaneously match the best prophet inequality algorithm in consistency, and the best secretary algorithm in robustness.
- oai:arXiv.org:2511.03289v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tian Bai, Zhiyi Huang, Chui Shan Lee, Dongchen Li
-
-
- UMDAM: A Unified Data Layout and DRAM Address Mapping for Heterogenous NPU-PIM
- https://arxiv.org/abs/2511.03293
- arXiv:2511.03293v1 Announce Type: new
-Abstract: Large Language Models (LLMs) are increasingly deployed on edge devices with Neural Processing Units (NPUs), yet the decode phase remains memory-intensive, limiting performance. Processing-in-Memory (PIM) offers a promising solution, but co-executing NPU-PIM systems face challenges such as data layout mismatches, bandwidth loss, and redundant storage. To address these issues, we propose UMDAM, a unified memory-affinity data layout and DRAM address mapping scheme tailored for NPU-PIM co-execution. UMDAM employs a column-major, tile-based layout and a configurable DRAM mapping strategy to ensure compatibility with NPU computation while maximizing PIM efficiency -- without introducing extra memory overhead or bandwidth loss. Comprehensive evaluations on OPT models demonstrate that UMDAM reduces time-to-first-token (TTFT) by up to 3.0x and time-to-last-token (TTLT) by 2.18x, significantly improving end-to-end LLM inference efficiency on edge devices.
- oai:arXiv.org:2511.03293v1
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hai Huang, Xuhong Qiang, Weisheng Zhao, Chenchen Liu
-
-
- How to Evaluate Speech Translation with Source-Aware Neural MT Metrics
- https://arxiv.org/abs/2511.03295
- arXiv:2511.03295v1 Announce Type: new
-Abstract: Automatic evaluation of speech-to-text translation (ST) systems is typically performed by comparing translation hypotheses with one or more reference translations. While effective to some extent, this approach inherits the limitation of reference-based evaluation that ignores valuable information from the source input. In machine translation (MT), recent progress has shown that neural metrics incorporating the source text achieve stronger correlation with human judgments. Extending this idea to ST, however, is not trivial because the source is audio rather than text, and reliable transcripts or alignments between source and references are often unavailable. In this work, we conduct the first systematic study of source-aware metrics for ST, with a particular focus on real-world operating conditions where source transcripts are not available. We explore two complementary strategies for generating textual proxies of the input audio: automatic speech recognition (ASR) transcripts and back-translations of the reference translation. We also introduce a novel two-step cross-lingual re-segmentation algorithm to address the alignment mismatch between synthetic sources and reference translations. Our experiments, carried out on two ST benchmarks covering 79 language pairs and six ST systems with diverse architectures and performance levels, show that ASR transcripts constitute a more reliable synthetic source than back-translations when the word error rate is below 20%, while back-translations always represent a computationally cheaper but still effective alternative. Furthermore, our cross-lingual re-segmentation algorithm enables robust use of source-aware MT metrics in ST evaluation, paving the way toward more accurate and principled evaluation methodologies for speech translation.
- oai:arXiv.org:2511.03295v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Mauro Cettolo, Marco Gaido, Matteo Negri, Sara Papi, Luisa Bentivogli
-
-
- Evolutionary Dynamics in Continuous-time Finite-state Mean Field Games - Part II: Stability
- https://arxiv.org/abs/2511.03297
- arXiv:2511.03297v1 Announce Type: new
-Abstract: We study a dynamic game with a large population of players who choose actions from a finite set in continuous time. Each player has a state in a finite state space that evolves stochastically with their actions. A player's reward depends not only on their own state and action but also on the distribution of states and actions across the population, capturing effects such as congestion in traffic networks. In Part I, we introduced an evolutionary model and a new solution concept - the mixed stationary Nash Equilibrium (MSNE) - which coincides with the rest points of the mean field evolutionary model under meaningful families of revision protocols. In this second part, we investigate the evolutionary stability of MSNE. We derive conditions on both the structure of the MSNE and the game's payoff map that ensure local and global stability under evolutionary dynamics. These results characterize when MSNE can robustly emerge and persist against strategic deviations, thereby providing insight into its long-term viability in large population dynamic games.
- oai:arXiv.org:2511.03297v1
- eess.SY
- cs.GT
- cs.SY
- math.OC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Leonardo Pedroso, Andrea Agazzi, W. P. M. H. Heemels, Mauro Salazar
-
-
- KScaNN: Scalable Approximate Nearest Neighbor Search on Kunpeng
- https://arxiv.org/abs/2511.03298
- arXiv:2511.03298v1 Announce Type: new
-Abstract: Approximate Nearest Neighbor Search (ANNS) is a cornerstone algorithm for information retrieval, recommendation systems, and machine learning applications. While x86-based architectures have historically dominated this domain, the increasing adoption of ARM-based servers in industry presents a critical need for ANNS solutions optimized for ARM architectures. A naive port of existing x86 ANNS algorithms to ARM platforms results in a substantial performance deficit, failing to leverage the unique capabilities of the underlying hardware. To address this challenge, we introduce KScaNN, a novel ANNS algorithm co-designed for the Kunpeng 920 ARM architecture. KScaNN embodies a holistic approach that synergizes sophisticated, data-aware algorithmic refinements with carefully designed hardware-specific optimizations. Its core contributions include: 1) novel algorithmic techniques, including a hybrid intra-cluster search strategy and an improved PQ residual calculation method, which optimize the search process at a higher level; 2) an ML-driven adaptive search module that provides adaptive, per-query tuning of search parameters, eliminating the inefficiencies of static configurations; and 3) highly-optimized SIMD kernels for ARM that maximize hardware utilization for the critical distance computation workloads. The experimental results demonstrate that KScaNN not only closes the performance gap but establishes a new standard, achieving up to a 1.63x speedup over the fastest x86-based solution. This work provides a definitive blueprint for achieving leadership-class performance for vector search on modern ARM architectures and underscores
- oai:arXiv.org:2511.03298v1
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Oleg Senkevich, Siyang Xu, Tianyi Jiang, Alexander Radionov, Jan Tabaszewski, Dmitriy Malyshev, Zijian Li, Daihao Xue, Licheng Yu, Weidi Zeng, Meiling Wang, Xin Yao, Siyu Huang, Gleb Neshchetkin, Qiuling Pan, Yaoyao Fu
-
-
- Extending Fair Null-Space Projections for Continuous Attributes to Kernel Methods
- https://arxiv.org/abs/2511.03304
- arXiv:2511.03304v1 Announce Type: new
-Abstract: With the ongoing integration of machine learning systems into the everyday social life of millions, the notion of fairness becomes an ever-increasing priority in their development. Fairness notions commonly rely on protected attributes to assess potential biases. Here, the majority of the literature focuses on discrete setups regarding both target and protected attributes. The literature on continuous attributes, especially in conjunction with regression -- we refer to this as \emph{continuous fairness} -- is scarce. A common strategy is iterative null-space projection, which as of now has only been explored for linear models or embeddings such as those obtained by a non-linear encoder. We improve on this by generalizing to kernel methods, significantly extending the scope. This yields a model- and fairness-score-agnostic method for kernel embeddings applicable to continuous protected attributes. We demonstrate that our novel approach in conjunction with Support Vector Regression (SVR) provides competitive or improved performance across multiple datasets in comparison to other contemporary methods.
- oai:arXiv.org:2511.03304v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Felix St\"orck, Fabian Hinder, Barbara Hammer
-
-
- DRL-Based Robust Multi-Timescale Anti-Jamming Approaches under State Uncertainty
- https://arxiv.org/abs/2511.03305
- arXiv:2511.03305v1 Announce Type: new
-Abstract: Owing to the openness of wireless channels, wireless communication systems are highly susceptible to malicious jamming. Most existing anti-jamming methods rely on the assumption of accurate sensing and optimize parameters on a single timescale. However, such methods overlook two practical issues: mismatched execution latencies across heterogeneous actions and measurement errors caused by sensor imperfections. Especially for deep reinforcement learning (DRL)-based methods, the inherent sensitivity of neural networks implies that even minor perturbations in the input can mislead the agent into choosing suboptimal actions, with potentially severe consequences. To ensure reliable wireless transmission, we establish a multi-timescale decision model that incorporates state uncertainty. Subsequently, we propose two robust schemes that sustain performance under bounded sensing errors. First, a Projected Gradient Descent-assisted Double Deep Q-Network (PGD-DDQN) algorithm is designed, which derives worst-case perturbations under a norm-bounded error model and applies PGD during training for robust optimization. Second, a Nonlinear Q-Compression DDQN (NQC-DDQN) algorithm introduces a nonlinear compression mechanism that adaptively contracts Q-value ranges to eliminate action aliasing. Simulation results indicate that, compared with the perfect-sensing baseline, the proposed algorithms show only minor degradation in anti-jamming performance while maintaining robustness under various perturbations, thereby validating their practicality in imperfect sensing conditions.
- oai:arXiv.org:2511.03305v1
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Haoqin Zhao, Zan Li, Jiangbo Si, Rui Huang, Hang Hu, Tony Q. S. Quek, Naofal Al-Dhahir
-
-
- Integrity Under Siege: A Rogue gNodeB's Manipulation of 5G Network Slice Allocation
- https://arxiv.org/abs/2511.03312
- arXiv:2511.03312v1 Announce Type: new
-Abstract: The advent of 5G networks, with network slicing as a cornerstone technology, promises customized, high-performance services, but also introduces novel attack surfaces beyond traditional threats. This article investigates a critical and underexplored integrity vulnerability: the manipulation of network slice allocation to compromise Quality of Service (QoS) and resource integrity. We introduce a threat model, grounded in a risk analysis of permissible yet insecure configurations like null-ciphering (5G-EA0), demonstrating how a rogue gNodeB acting as a Man-in-the-Middle can exploit protocol weaknesses to forge slice requests and hijack a User Equipment's (UE) connection. Through a comprehensive experimental evaluation on a 5G testbed, we demonstrate the attack's versatile and severe impacts. Our findings show this integrity breach can manifest as obvious QoS degradation, such as a 95% bandwidth reduction and 150% latency increase when forcing UE to a suboptimal slice, or as stealthy slice manipulation that is indistinguishable from benign network operation and generates no core network errors. Furthermore, we validate a systemic resource contamination attack where redirecting a crowd of UE orchestrates a Denial-of-Service, causing packet loss to exceed 60% and inducing measurable CPU saturation (~80%) on core network User Plane Functions (UPFs). Based on these results, we discuss the profound implications for Service Level Agreements (SLAs) and critical infrastructure. We propose concrete, cross-layer mitigation strategies for network operators as future work, underscoring the urgent need to secure the integrity of dynamic resource management in 5G networks.
- oai:arXiv.org:2511.03312v1
- cs.NI
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jiali Xu, Valeria Loscri, Romain Rouvoy
-
-
- Diffusion-SDPO: Safeguarded Direct Preference Optimization for Diffusion Models
- https://arxiv.org/abs/2511.03317
- arXiv:2511.03317v1 Announce Type: new
-Abstract: Text-to-image diffusion models deliver high-quality images, yet aligning them with human preferences remains challenging. We revisit diffusion-based Direct Preference Optimization (DPO) for these models and identify a critical pathology: enlarging the preference margin does not necessarily improve generation quality. In particular, the standard Diffusion-DPO objective can increase the reconstruction error of both winner and loser branches. Consequently, degradation of the less-preferred outputs can become sufficiently severe that the preferred branch is also adversely affected even as the margin grows. To address this, we introduce Diffusion-SDPO, a safeguarded update rule that preserves the winner by adaptively scaling the loser gradient according to its alignment with the winner gradient. A first-order analysis yields a closed-form scaling coefficient that guarantees the error of the preferred output is non-increasing at each optimization step. Our method is simple, model-agnostic, broadly compatible with existing DPO-style alignment frameworks and adds only marginal computational overhead. Across standard text-to-image benchmarks, Diffusion-SDPO delivers consistent gains over preference-learning baselines on automated preference, aesthetic, and prompt alignment metrics. Code is publicly available at https://github.com/AIDC-AI/Diffusion-SDPO.
- oai:arXiv.org:2511.03317v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Minghao Fu, Guo-Hua Wang, Tianyu Cui, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang
-
-
- Two thousand years of the oracle problem. Insights from Ancient Delphi on the future of blockchain oracles
- https://arxiv.org/abs/2511.03319
- arXiv:2511.03319v1 Announce Type: new
-Abstract: The oracle problem refers to the inability of an agent to know if the information coming from an oracle is authentic and unbiased. In ancient times, philosophers and historians debated on how to evaluate, increase, and secure the reliability of oracle predictions, particularly those from Delphi, which pertained to matters of state. Today, we refer to data carriers for automatic machines as oracles, but establishing a secure channel between these oracles and the real world still represents a challenge. Despite numerous efforts, this problem remains mostly unsolved, and the recent advent of blockchain oracles has added a layer of complexity because of the decentralization of blockchains. This paper conceptually connects Delphic and modern blockchain oracles, developing a comparative framework. Leveraging blockchain oracle taxonomy, lexical analysis is also performed on 167 Delphic queries to shed light on the relationship between oracle answer quality and question type. The presented framework aims first at revealing commonalities between classical and computational oracles and then at enriching the oracle analysis within each field. This study contributes to the computer science literature by proposing strategies to improve the reliability of blockchain oracles based on insights from Delphi and to classical literature by introducing a framework that can also be applied to interpret and classify other ancient oracular mechanisms.
- oai:arXiv.org:2511.03319v1
- cs.CR
- cs.CY
- cs.IR
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Giulio Caldarelli, Massimiliano Ornaghi
-
-
- Constacyclic codes with best-known parameters
- https://arxiv.org/abs/2511.03323
- arXiv:2511.03323v1 Announce Type: new
-Abstract: In this paper, we construct several infinite families of $q$-ary constacyclic codes over a finite field $\mathbb{F}_q$ with length $n$, dimension around $n/2$, and minimum distance at least $cn/\log_q n$ for some positive constant $c$. They contain many constacyclic codes with optimal, or almost-optimal, or best-known parameters. We also consider various forms of the length $n$.
- oai:arXiv.org:2511.03323v1
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zekai Chen, Min Sha
-
-
- SurgViVQA: Temporally-Grounded Video Question Answering for Surgical Scene Understanding
- https://arxiv.org/abs/2511.03325
- arXiv:2511.03325v1 Announce Type: new
-Abstract: Video Question Answering (VideoQA) in the surgical domain aims to enhance intraoperative understanding by enabling AI models to reason over temporally coherent events rather than isolated frames. Current approaches are limited to static image features, and available datasets often lack temporal annotations, ignoring the dynamics critical for accurate procedural interpretation. We propose SurgViVQA, a surgical VideoQA model that extends visual reasoning from static images to dynamic surgical scenes. It uses a Masked Video--Text Encoder to fuse video and question features, capturing temporal cues such as motion and tool--tissue interactions, which a fine-tuned large language model (LLM) then decodes into coherent answers. To evaluate its performance, we curated REAL-Colon-VQA, a colonoscopic video dataset that includes motion-related questions and diagnostic attributes, as well as out-of-template questions with rephrased or semantically altered formulations to assess model robustness. Experimental validation on REAL-Colon-VQA and the public EndoVis18-VQA dataset shows that SurgViVQA outperforms existing image-based VQA benchmark models, particularly in keyword accuracy, improving over PitVQA by +11\% on REAL-Colon-VQA and +9\% on EndoVis18-VQA. A perturbation study on the questions further confirms improved generalizability and robustness to variations in question phrasing. SurgViVQA and the REAL-Colon-VQA dataset provide a framework for temporally-aware understanding in surgical VideoQA, enabling AI models to interpret dynamic procedural contexts more effectively. Code and dataset available at https://github.com/madratak/SurgViVQA.
- oai:arXiv.org:2511.03325v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mauro Orazio Drago, Luca Carlini, Pelinsu Celebi Balyemez, Dennis Pierantozzi, Chiara Lena, Cesare Hassan, Danail Stoyanov, Elena De Momi, Sophia Bano, Mobarak I. Hoque
-
-
- Benchmarking the Thinking Mode of Multimodal Large Language Models in Clinical Tasks
- https://arxiv.org/abs/2511.03328
- arXiv:2511.03328v1 Announce Type: new
-Abstract: A recent advancement in Multimodal Large Language Models (MLLMs) research is the emergence of "reasoning MLLMs" that offer explicit control over their internal thinking processes (normally referred to as the "thinking mode") alongside the standard "non-thinking mode". This capability allows these models to engage in a step-by-step process of internal deliberation before generating a final response. With the rapid transition to and adoption of these "dual-state" MLLMs, this work rigorously evaluates how the enhanced reasoning processes of these MLLMs impact model performance and reliability in clinical tasks. This paper evaluates the active "thinking mode" capabilities of two leading MLLMs, Seed1.5-VL and Gemini-2.5-Flash, for medical applications. We assessed their performance on four visual medical tasks using the VQA-RAD and ROCOv2 datasets. Our findings reveal that the improvement from activating the thinking mode remains marginal compared to the standard non-thinking mode for the majority of the tasks. Their performance on complex medical tasks such as open-ended VQA and medical image interpretation remains suboptimal, highlighting the need for domain-specific medical data and more advanced methods for medical knowledge integration.
- oai:arXiv.org:2511.03328v1
- cs.CL
- cs.AI
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jindong Hong, Tianjie Chen, Lingjie Luo, Chuanyang Zheng, Ting Xu, Haibao Yu, Jianing Qiu, Qianzhong Chen, Suning Huang, Yan Xu, Yong Gui, Yijun He, Jiankai Sun
-
-
- Discourse-Aware Scientific Paper Recommendation via QA-Style Summarization and Multi-Level Contrastive Learning
- https://arxiv.org/abs/2511.03330
- arXiv:2511.03330v1 Announce Type: new
-Abstract: The rapid growth of open-access (OA) publications has intensified the challenge of identifying relevant scientific papers. Due to privacy constraints and limited access to user interaction data, recent efforts have shifted toward content-based recommendation, which relies solely on textual information. However, existing models typically treat papers as unstructured text, neglecting their discourse organization and thereby limiting semantic completeness and interpretability. To address these limitations, we propose OMRC-MR, a hierarchical framework that integrates QA-style OMRC (Objective, Method, Result, Conclusion) summarization, multi-level contrastive learning, and structure-aware re-ranking for scholarly recommendation. The QA-style summarization module converts raw papers into structured and discourse-consistent representations, while multi-level contrastive objectives align semantic representations across metadata, section, and document levels. The final re-ranking stage further refines retrieval precision through contextual similarity calibration. Experiments on DBLP, S2ORC, and the newly constructed Sci-OMRC dataset demonstrate that OMRC-MR consistently surpasses state-of-the-art baselines, achieving up to 7.2% and 3.8% improvements in Precision@10 and Recall@10, respectively. Additional evaluations confirm that QA-style summarization produces more coherent and factually complete representations. Overall, OMRC-MR provides a unified and interpretable content-based paradigm for scientific paper recommendation, advancing trustworthy and privacy-aware scholarly information retrieval.
- oai:arXiv.org:2511.03330v1
- cs.IR
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Shenghua Wang, Zhen Yin
-
-
- Multi-Object Tracking Retrieval with LLaVA-Video: A Training-Free Solution to MOT25-StAG Challenge
- https://arxiv.org/abs/2511.03332
- arXiv:2511.03332v1 Announce Type: new
-Abstract: In this report, we present our solution to the MOT25-Spatiotemporal Action Grounding (MOT25-StAG) Challenge. The aim of this challenge is to accurately localize and track multiple objects that match specific and free-form language queries, using video data of complex real-world scenes as input. We model the underlying task as a video retrieval problem and present a two-stage, zero-shot approach, combining the advantages of the SOTA tracking model FastTracker and Multi-modal Large Language Model LLaVA-Video. On the MOT25-StAG test set, our method achieves m-HIoU and HOTA scores of 20.68 and 10.73 respectively, which won second place in the challenge.
- oai:arXiv.org:2511.03332v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yi Yang, Yiming Xu, Timo Kaiser, Hao Cheng, Bodo Rosenhahn, Michael Ying Yang
-
-
- UniAVGen: Unified Audio and Video Generation with Asymmetric Cross-Modal Interactions
- https://arxiv.org/abs/2511.03334
- arXiv:2511.03334v1 Announce Type: new
-Abstract: Due to the lack of effective cross-modal modeling, existing open-source audio-video generation methods often exhibit compromised lip synchronization and insufficient semantic consistency. To mitigate these drawbacks, we propose UniAVGen, a unified framework for joint audio and video generation. UniAVGen is anchored in a dual-branch joint synthesis architecture, incorporating two parallel Diffusion Transformers (DiTs) to build a cohesive cross-modal latent space. At its heart lies an Asymmetric Cross-Modal Interaction mechanism, which enables bidirectional, temporally aligned cross-attention, thus ensuring precise spatiotemporal synchronization and semantic consistency. Furthermore, this cross-modal interaction is augmented by a Face-Aware Modulation module, which dynamically prioritizes salient regions in the interaction process. To enhance generative fidelity during inference, we additionally introduce Modality-Aware Classifier-Free Guidance, a novel strategy that explicitly amplifies cross-modal correlation signals. Notably, UniAVGen's robust joint synthesis design enables seamless unification of pivotal audio-video tasks within a single model, such as joint audio-video generation and continuation, video-to-audio dubbing, and audio-driven video synthesis. Comprehensive experiments validate that, with far fewer training samples (1.3M vs. 30.1M), UniAVGen delivers overall advantages in audio-video synchronization, timbre consistency, and emotion consistency.
- oai:arXiv.org:2511.03334v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Guozhen Zhang, Zixiang Zhou, Teng Hu, Ziqiao Peng, Youliang Zhang, Yi Chen, Yuan Zhou, Qinglin Lu, Limin Wang
-
-
- Branch-and-Cut for Computing Approximate Equilibria of Mixed-Integer Generalized Nash Games
- https://arxiv.org/abs/2511.03340
- arXiv:2511.03340v1 Announce Type: new
-Abstract: Generalized Nash equilibrium problems with mixed-integer variables constitute an important class of games in which each player solves a mixed-integer optimization problem, where both the objective and the feasible set are parameterized by the rivals' strategies. However, such games are known to fail to admit exact equilibria, and the assumption that all players are able to solve nonconvex problems to global optimality is questionable. This motivates the study of approximate equilibria. In this work, we consider an approximation concept that incorporates both multiplicative and additive relaxations of optimality. We propose a branch-and-cut (B&C) method that computes such approximate equilibria or proves their non-existence. For this, we adopt the idea of intersection cuts and show the existence of such cuts under the condition that the constraints are linear and each player's cost function is either convex in the entire strategy profile, or concave in the entire strategy profile and linear in the rivals' strategies. For the special case of standard Nash equilibrium problems, we introduce an alternative type of cut and show that the method terminates finitely, provided that each player has only finitely many distinct best-response sets. Finally, on the basis of the B&C method, we introduce a single-tree binary-search method to compute best-approximate equilibria under some simplifying assumptions. We implemented these methods and present numerical results for a class of mixed-integer flow games.
- oai:arXiv.org:2511.03340v1
- cs.GT
- math.OC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Alo\"is Duguet, Tobias Harks, Martin Schmidt, Julian Schwarz
-
-
- LaMoS: Enabling Efficient Large Number Modular Multiplication through SRAM-based CiM Acceleration
- https://arxiv.org/abs/2511.03341
- arXiv:2511.03341v1 Announce Type: new
-Abstract: Barrett's algorithm is one of the most widely used methods for performing modular multiplication, a critical nonlinear operation in modern privacy computing techniques such as homomorphic encryption (HE) and zero-knowledge proofs (ZKP). Since modular multiplication dominates the processing time in these applications, computational complexity and memory limitations significantly impact performance. Computing-in-Memory (CiM) is a promising approach to tackle this problem. However, existing schemes currently suffer from two main problems: 1) Most works focus on low bit-width modular multiplication, which is inadequate for mainstream cryptographic algorithms such as elliptic curve cryptography (ECC) and the RSA algorithm, both of which require high bit-width operations; 2) Recent efforts targeting large-number modular multiplication rely on inefficient in-memory logic operations, resulting in high scaling costs for larger bit-widths and increased latency. To address these issues, we propose LaMoS, an efficient SRAM-based CiM design for large-number modular multiplication, offering high scalability and area efficiency. First, we analyze Barrett's modular multiplication method and map the workload onto SRAM CiM macros for high bit-width cases. Additionally, we develop an efficient CiM architecture and dataflow to optimize large-number modular multiplication. Finally, we refine the mapping scheme for better scalability in high bit-width scenarios using workload grouping. Experimental results show that LaMoS achieves a $7.02\times$ speedup and reduces high bit-width scaling costs compared to existing SRAM-based CiM designs.
- oai:arXiv.org:2511.03341v1
- cs.CR
- cs.AR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Haomin Li, Fangxin Liu, Chenyang Guan, Zongwu Wang, Li Jiang, Haibing Guan
-
-
- A Spectral Split-Step Pad\'e Method for Guided Wave Propagation
- https://arxiv.org/abs/2511.03343
- arXiv:2511.03343v1 Announce Type: new
-Abstract: In this study, a Fourier-based, split-step Pad\'e (SSP) method for solving the parabolic wave equation with applications in guided wave propagation in ocean acoustics is presented. Traditional SSP implementations rely on finite-difference discretizations of the depth-dependent differential operator. This approach limits accuracy in coarse discretizations as well as computational efficiency in dense discretizations, since it does not significantly benefit from parallelization. In contrast, our proposed method replaces finite differences with a spectral representation using the discrete sine transform (DST). This enables an exact treatment of the vertical operator under homogeneous boundary conditions. For non-constant sound speed, we use a Neumann series expansion to treat inhomogeneities as perturbations. Numerical experiments demonstrate the method's accuracy in range-independent media and range-dependent scenarios, including propagation in the deep ocean with a Munk profile and in the presence of a parametrized synoptic eddy. Compared to finite-difference SSP methods, the Fourier-based approach achieves higher accuracy with fewer depth discretization points and avoids the resolution bottleneck associated with sharp field features, making it well-suited for large-scale, high-frequency wave propagation problems in ocean environments.
- oai:arXiv.org:2511.03343v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Daniel Walsken, Pavel Petrov, Matthias Ehrhardt
-
-
- SORTeD Rashomon Sets of Sparse Decision Trees: Anytime Enumeration
- https://arxiv.org/abs/2511.03344
- arXiv:2511.03344v1 Announce Type: new
-Abstract: Sparse decision tree learning provides accurate and interpretable predictive models that are ideal for high-stakes applications by finding the single most accurate tree within a (soft) size limit. Rather than relying on a single "best" tree, Rashomon sets (trees with similar performance but varying structures) can be used to enhance variable importance analysis, enrich explanations, and enable users to choose simpler trees or those that satisfy stakeholder preferences (e.g., fairness) without hard-coding such criteria into the objective function. However, because finding the optimal tree is NP-hard, enumerating the Rashomon set is inherently challenging. Therefore, we introduce SORTD, a novel framework that improves scalability and enumerates trees in the Rashomon set in order of the objective value, thus offering anytime behavior. Our experiments show that SORTD reduces runtime by up to two orders of magnitude compared with the state of the art. Moreover, SORTD can compute Rashomon sets for any separable and totally ordered objective and supports post-evaluating the set using other separable (and partially ordered) objectives. Together, these advances make exploring Rashomon sets more practical in real-world applications.
- oai:arXiv.org:2511.03344v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Elif Arslan, Jacobus G. M. van der Linden, Serge Hoogendoorn, Marco Rinaldi, Emir Demirovi\'c
-
-
- Improved Online Load Balancing in the Two-Norm
- https://arxiv.org/abs/2511.03345
- arXiv:2511.03345v1 Announce Type: new
-Abstract: We study the online load balancing problem on unrelated machines, with the objective of minimizing the square of the $\ell_2$ norm of the loads on the machines. The greedy algorithm of Awerbuch et al. (STOC'95) is optimal for deterministic algorithms and achieves a competitive ratio of $3 + 2 \sqrt{2} \approx 5.828$, and an improved $5$-competitive randomized algorithm based on independent rounding has been shown by Caragiannis (SODA'08). In this work, we present the first algorithm breaking the barrier of $5$ on the competitive ratio, achieving a bound of $4.9843$. To obtain this result, we use a new primal-dual framework to analyze this problem based on a natural semidefinite programming relaxation, together with an online implementation of a correlated randomized rounding procedure of Im and Shadloo (SODA'20). This novel primal-dual framework also yields new, simple and unified proofs of the competitive ratio of the $(3 + 2 \sqrt{2})$-competitive greedy algorithm, the $5$-competitive randomized independent rounding algorithm, and that of a new $4$-competitive optimal fractional algorithm. We also provide lower bounds showing that the previous best randomized algorithm is optimal among independent rounding algorithms, that our new fractional algorithm is optimal, and that a simple greedy algorithm is optimal for the closely related online scheduling problem $R || \sum w_j C_j$.
- oai:arXiv.org:2511.03345v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sander Borst, Danish Kashaev
-
-
- Learning Communication Skills in Multi-task Multi-agent Deep Reinforcement Learning
- https://arxiv.org/abs/2511.03348
- arXiv:2511.03348v1 Announce Type: new
-Abstract: In multi-agent deep reinforcement learning (MADRL), agents can communicate with one another to perform a task in a coordinated manner. When multiple tasks are involved, agents can also leverage knowledge from one task to improve learning in other tasks. In this paper, we propose Multi-task Communication Skills (MCS), a MADRL with communication method that learns and performs multiple tasks simultaneously, with agents interacting through learnable communication protocols. MCS employs a Transformer encoder to encode task-specific observations into a shared message space, capturing shared communication skills among agents. To enhance coordination among agents, we introduce a prediction network that correlates messages with the actions of sender agents in each task. We adapt three multi-agent benchmark environments to multi-task settings, where the number of agents as well as the observation and action spaces vary across tasks. Experimental results demonstrate that MCS achieves better performance than multi-task MADRL baselines without communication, as well as single-task MADRL baselines with and without communication.
- oai:arXiv.org:2511.03348v1
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Changxi Zhu, Mehdi Dastani, Shihan Wang
-
-
- A Semantic Encoding of Object Centric Event Data
- https://arxiv.org/abs/2511.03351
- arXiv:2511.03351v1 Announce Type: new
-Abstract: The Object-Centric Event Data (OCED) is a novel meta-model aimed at providing a common ground for process data records centered around events and objects. One of its objectives is to foster interoperability and process information exchange. In this context, the integration of data from different providers, the combination of multiple processes, and the enhancement of knowledge inference are novel challenges. Semantic Web technologies can enable the creation of a machine-readable OCED description enriched through ontology-based relationships and entity categorization. In this paper, we introduce an approach built upon Semantic Web technologies for the realization of semantic-enhanced OCED, with the aim to strengthen process data reasoning, interconnect information sources, and boost expressiveness.
- oai:arXiv.org:2511.03351v1
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Saba Latif, Fajar J. Ekaputra, Maxim Vidgof, Sabrina Kirrane, Claudio Di Ciccio
-
-
- Generative Artificial Intelligence in Bioinformatics: A Systematic Review of Models, Applications, and Methodological Advances
- https://arxiv.org/abs/2511.03354
- arXiv:2511.03354v1 Announce Type: new
-Abstract: Generative artificial intelligence (GenAI) has become a transformative approach in bioinformatics that often enables advancements in genomics, proteomics, transcriptomics, structural biology, and drug discovery. To systematically identify and evaluate these growing developments, this review proposes six research questions (RQs), following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. The objective is to evaluate impactful GenAI strategies in methodological advancement, predictive performance, and specialization, and to identify promising approaches for advanced modeling, data-intensive discovery, and integrative biological analysis. RQ1 highlights diverse applications across multiple bioinformatics subfields (sequence analysis, molecular design, and integrative data modeling), which demonstrate superior performance over traditional methods through pattern recognition and output generation. RQ2 reveals that adapted specialized model architectures outperform general-purpose models, an advantage attributed to targeted pretraining and context-aware strategies. RQ3 identifies significant benefits in the bioinformatics domains, focusing on molecular analysis and data integration, which improves accuracy and reduces errors in complex analysis. RQ4 indicates improvements in structural modeling, functional prediction, and synthetic data generation, validated by established benchmarks. RQ5 identifies the main constraints, such as the lack of scalability and biases in data that impact generalizability, and proposes future directions focused on robust evaluation and biologically grounded modeling. RQ6 shows that molecular datasets (such as UniProtKB and ProteinNet12), cellular datasets (such as CELLxGENE and GTEx) and textual resources (such as PubMedQA and OMIM) broadly support the training and generalization of GenAI models.
- oai:arXiv.org:2511.03354v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Riasad Alvi, Sayeem Been Zaman, Wasimul Karim, Arefin Ittesafun Abian, Mohaimenul Azam Khan Raiaan, Saddam Mukta, Md Rafi Ur Rashid, Md Rafiqul Islam, Yakub Sebastian, Sami Azam
-
-
- A Modular, Data-Free Pipeline for Multi-Label Intention Recognition in Transportation Agentic AI Applications
- https://arxiv.org/abs/2511.03363
- arXiv:2511.03363v1 Announce Type: new
-Abstract: In this study, a modular, data-free pipeline for multi-label intention recognition is proposed for agentic AI applications in transportation. Unlike traditional intent recognition systems that depend on large, annotated corpora and often struggle with fine-grained, multi-label discrimination, our approach eliminates the need for costly data collection while enhancing the accuracy of multi-label intention understanding. Specifically, the overall pipeline, named DMTC, consists of three steps: 1) using prompt engineering to guide large language models (LLMs) to generate diverse synthetic queries in different transport scenarios; 2) encoding each textual query with a Sentence-T5 model to obtain compact semantic embeddings; 3) training a lightweight classifier using a novel online focal-contrastive (OFC) loss that emphasizes hard samples and maximizes inter-class separability. The applicability of the proposed pipeline is demonstrated in an agentic AI application in the maritime transportation context. Extensive experiments show that DMTC achieves a Hamming loss of 5.35% and an AUC of 95.92%, outperforming state-of-the-art multi-label classifiers and recent end-to-end LLM-based baselines. Further analysis reveals that Sentence-T5 embeddings improve subset accuracy by at least 3.29% over alternative encoders, and integrating the OFC loss yields an additional 0.98% gain compared to standard contrastive objectives. In conclusion, our system seamlessly routes user queries to task-specific modules (e.g., ETA information, traffic risk evaluation, and other typical scenarios in the transportation domain), laying the groundwork for fully autonomous, intention-aware agents without costly manual labelling.
- oai:arXiv.org:2511.03363v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Xiaocai Zhang, Hur Lim, Ke Wang, Zhe Xiao, Jing Wang, Kelvin Lee, Xiuju Fu, Zheng Qin
-
-
- Lightwave Power Transfer-Enabled Underwater Optical ISAC Systems under Ship Attitude Variation
- https://arxiv.org/abs/2511.03366
- arXiv:2511.03366v1 Announce Type: new
-Abstract: In this paper, we propose a lightwave power transfer-enabled underwater optical integrated sensing and communication (O-ISAC) system, where an access point (AP) mounted on a sea-surface ship transmits lightwave signals to two nodes, namely ($i$) a seabed sensor that harvests energy and transmits uplink information to the AP, and ($ii$) a sensing target whose position is estimated by the AP using an array of pinhole cameras. To capture practical deployment conditions, the ship attitude variation is modeled through its roll, pitch, and yaw angles, each following a Gaussian distribution under low-to-moderate sea states. Closed-form approximations are derived for the mean squared error (MSE) of target localization and the achievable uplink data rate. Analytical and simulation results demonstrate excellent agreement, validating the proposed models and derived expressions, while revealing the fundamental communication-sensing tradeoff in the O-ISAC system. The results further provide valuable design insights, including the optimal camera placement on the ship to minimize localization error, achieving a minimum MSE of $10^{-2}$ $\text{m}^2$ with multiple cameras under roll, pitch, and yaw angle variation of $10^{\circ}$, and the optimal harvest-use ratio of $0.55$ for the considered setup.
- oai:arXiv.org:2511.03366v1
- eess.SY
- cs.SY
- eess.SP
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Kapila W. S. Palitharathna, Constantinos Psomas, Ioannis Krikidis
-
-
- Decoupling Augmentation Bias in Prompt Learning for Vision-Language Models
- https://arxiv.org/abs/2511.03367
- arXiv:2511.03367v1 Announce Type: new
-Abstract: Recent advances in large-scale vision and language models have led to significant progress in zero-shot learning tasks. Methods such as CoOp and CoCoOp have shown that replacing handcrafted prompts with learnable vectors, known as prompt learning, can result in improved performance. However, these models often struggle to generalize to entirely unseen categories. While traditional zero-shot learning techniques benefit from various data augmentation strategies, prompt learning has primarily focused on text-based modifications, leaving the potential of image-based augmentation largely unexplored. In this work, we explore how image-level augmentations, particularly those that introduce attribute-specific variations, can support and enhance prompt learning. Our analysis examines the interaction between these augmentations and soft prompt frameworks, revealing their potential to improve generalization. We also identify a limitation in existing methods, such as CoCoOp, which do not provide explicit guidance for learning prompts that focus on semantically meaningful visual features. To address this, we propose Adding Attributes to Prompt Learning, AAPL, a novel method that introduces adversarial token embeddings to decouple superficial visual variations introduced by augmentation from class-relevant semantic representations. This decoupling enables the learned prompts to concentrate on visually discriminative features that align with the target categories. We conduct comprehensive experiments on eleven benchmark datasets, and AAPL consistently outperforms existing methods across few-shot, zero-shot, cross-dataset, and domain generalization settings. Our source code is publicly available at: https://github.com/Gahyeonkim09/AAPL
- oai:arXiv.org:2511.03367v1
- cs.CV
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Gahyeon Kim, Sohee Kim, Seokju Lee
-
-
- TripleWin: Fixed-Point Equilibrium Pricing for Data-Model Coupled Markets
- https://arxiv.org/abs/2511.03368
- arXiv:2511.03368v1 Announce Type: new
-Abstract: The rise of the machine learning (ML) model economy has intertwined markets for training datasets and pre-trained models. However, most pricing approaches still separate data and model transactions or rely on broker-centric pipelines that favor one side. Recent studies of data markets with externalities capture buyer interactions but do not yield a simultaneous and symmetric mechanism across data sellers, model producers, and model buyers. We propose a unified data-model coupled market that treats dataset and model trading as a single system. A supply-side mapping transforms dataset payments into buyer-visible model quotations, while a demand-side mapping propagates buyer prices back to datasets through Shapley-based allocation. Together, they form a closed loop that links four interactions: supply-demand propagation in both directions and mutual coupling among buyers and among sellers. We prove that the joint operator is a standard interference function (SIF), guaranteeing existence, uniqueness, and global convergence of equilibrium prices. Experiments demonstrate efficient convergence and improved fairness compared with broker-centric and one-sided baselines. The code is available at https://github.com/HongrunRen1109/Triple-Win-Pricing.
- oai:arXiv.org:2511.03368v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Hongrun Ren, Yun Xiong, Lei You, Yingying Wang, Haixu Xiong, Yangyong Zhu
-
-
- Silenced Biases: The Dark Side LLMs Learned to Refuse
- https://arxiv.org/abs/2511.03369
- arXiv:2511.03369v1 Announce Type: new
-Abstract: Safety-aligned large language models (LLMs) are becoming increasingly widespread, especially in sensitive applications where fairness is essential and biased outputs can cause significant harm. However, evaluating the fairness of models is a complex challenge, and approaches that do so typically utilize standard question-answer (QA) styled schemes. Such methods often overlook deeper issues by interpreting the model's refusal responses as positive fairness measurements, which creates a false sense of fairness. In this work, we introduce the concept of silenced biases, which are unfair preferences encoded within models' latent space and are effectively concealed by safety-alignment. Previous approaches that considered similar indirect biases often relied on prompt manipulation or handcrafted implicit queries, which present limited scalability and risk contaminating the evaluation process with additional biases. We propose the Silenced Bias Benchmark (SBB), which aims to uncover these biases by employing activation steering to reduce model refusals during QA. SBB supports easy expansion to new demographic groups and subjects, presenting a fairness evaluation framework that encourages the future development of fair models and tools beyond the masking effects of alignment training. We demonstrate our approach over multiple LLMs, where our findings expose an alarming distinction between models' direct responses and their underlying fairness issues.
- oai:arXiv.org:2511.03369v1
- cs.CL
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Rom Himelstein, Amit LeVi, Brit Youngmann, Yaniv Nemcovsky, Avi Mendelson
-
-
- EQ-Negotiator: Dynamic Emotional Personas Empower Small Language Models for Edge-Deployable Credit Negotiation
- https://arxiv.org/abs/2511.03370
- arXiv:2511.03370v1 Announce Type: new
-Abstract: The deployment of large language models (LLMs) in automated negotiation has set a high performance benchmark, but their computational cost and data privacy requirements render them unsuitable for many privacy-sensitive, on-device applications such as mobile assistants, embodied AI agents or private client interactions. While small language models (SLMs) offer a practical alternative, they suffer from a significant performance gap compared to LLMs in playing emotionally charged complex personas, especially for credit negotiation. This paper introduces EQ-Negotiator, a novel framework that bridges this capability gap using emotional personas. Its core is a reasoning system that integrates game theory with a Hidden Markov Model (HMM) to learn and track debtor emotional states online, without pre-training. This allows EQ-Negotiator to equip SLMs with the strategic intelligence to counter manipulation while de-escalating conflict and upholding ethical standards. Through extensive agent-to-agent simulations across diverse credit negotiation scenarios, including adversarial debtor strategies like cheating, threatening, and playing the victim, we show that a 7B parameter language model with EQ-Negotiator achieves better debt recovery and negotiation efficiency than baseline LLMs more than 10 times its size. This work advances persona modeling from descriptive character profiles to dynamic emotional architectures that operate within privacy constraints. Moreover, this paper establishes that strategic emotional intelligence, not raw model scale, is the critical factor for success in automated negotiation, paving the way for effective, ethical, and privacy-preserving AI negotiators that can operate on the edge.
- oai:arXiv.org:2511.03370v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yunbo Long, Yuhan Liu, Alexandra Brintrup
-
-
- LFC-DA: Logical Formula-Controlled Data Augmentation for Enhanced Logical Reasoning
- https://arxiv.org/abs/2511.03372
- arXiv:2511.03372v1 Announce Type: new
-Abstract: For complex logical data augmentation, heavy reliance on human annotation is costly, whereas direct generation with large language models yields uninterpretable and logically homogeneous examples. To address this, we present LFC-DA, a symbolic-logic-controlled pipeline: logical text is first mapped to propositional expressions, a compact rule library is compiled, and a bounded state-space search systematically discovers valid formulas that are then verbalized back into natural-language questions, ensuring both diversity and logical rigor under propositional logic. Experiments on ReClor and LogiQA show significant improvements in the logical-reasoning accuracy of pretrained models, confirming the effectiveness of LFC-DA for LLM-guided logical data augmentation.
- oai:arXiv.org:2511.03372v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Shenghao Li
-
-
- I Prompt, it Generates, we Negotiate. Exploring Text-Image Intertextuality in Human-AI Co-Creation of Visual Narratives with VLMs
- https://arxiv.org/abs/2511.03375
- arXiv:2511.03375v1 Announce Type: new
-Abstract: Creating meaningful visual narratives through human-AI collaboration requires understanding how text-image intertextuality emerges when textual intentions meet AI-generated visuals. We conducted a three-phase qualitative study with 15 participants using GPT-4o to investigate how novices navigate sequential visual narratives. Our findings show that users develop strategies to harness AI's semantic surplus by recognizing meaningful visual content beyond literal descriptions, iteratively refining prompts, and constructing narrative significance through complementary text-image relationships. We identified four distinct collaboration patterns and, through fsQCA analysis, discovered three pathways to successful intertextual collaboration: Educational Collaborator, Technical Expert, and Visual Thinker. However, participants faced challenges, including cultural representation gaps, visual consistency issues, and difficulties translating narrative concepts into visual prompts. These findings contribute to HCI research by providing an empirical account of \textit{text-image intertextuality} in human-AI co-creation and proposing design implications for role-based AI assistants that better support iterative, human-led creative processes in visual storytelling.
- oai:arXiv.org:2511.03375v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Mengyao Guo, Kexin Nie, Ze Gao, Black Sun, Xueyang Wang, Jinda Han, Xingting Wu
-
-
- Beyond Citations: Measuring Idea-level Knowledge Diffusion from Research to Journalism and Policy-making
- https://arxiv.org/abs/2511.03378
- arXiv:2511.03378v1 Announce Type: new
-Abstract: Despite the importance of social science knowledge for various stakeholders, measuring its diffusion into different domains remains a challenge. This study uses a novel text-based approach to measure the idea-level diffusion of social science knowledge from the research domain to the journalism and policy-making domains. By doing so, we expand the detection of knowledge diffusion beyond the measurements of direct references. Our study focuses on media effects theories as key research ideas in the field of communication science. Using 72,703 documents (2000-2019) from three domains (i.e., research, journalism, and policy-making) that mention these ideas, we count the mentions of these ideas in each domain, estimate their domain-specific contexts, and track and compare differences across domains and over time. Overall, we find that diffusion patterns and dynamics vary considerably between ideas, with some ideas diffusing between other domains, while others do not. Based on the embedding regression approach, we compare contextualized meanings across domains and find that the distances between research and policy are typically larger than between research and journalism. We also find that ideas largely shift roles across domains - from being the theories themselves in research to sense-making in news to applied, administrative use in policy. Over time, we observe semantic convergence mainly for ideas that are practically oriented. Our results characterize the cross-domain diffusion patterns and dynamics of social science knowledge at the idea level, and we discuss the implications for measuring knowledge diffusion beyond citations.
- oai:arXiv.org:2511.03378v1
- cs.SI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Yangliu Fan, Kilian Buehling, Volker Stocker
-
-
- A Digital Twin of Evaporative Thermo-Fluidic Process in Fixation Unit of DoD Inkjet Printers
- https://arxiv.org/abs/2511.03379
- arXiv:2511.03379v1 Announce Type: new
-Abstract: In inkjet printing, optimal paper moisture is crucial for print quality, achieved through hot-air impingement in the fixation unit. This paper presents a modular digital twin of the fixation unit, modeling the thermo-fluidic drying process and monitoring its spatio-temporal performance. The novel approach formulates the digital twin as an infinite-dimensional state estimator that infers fixation states from limited sensor data, while remaining robust to disturbances. Modularity is achieved through a graph-theoretic model, where each node represents thermo-fluidic dynamics in different sections of the fixation unit. Evaporation is modeled as a nonlinear boundary effect coupled with node dynamics via Linear Fractional Representation. Using the Partial Integral Equation (PIE) framework, we develop a unified approach for stability, input-output analysis, simulation, and rapid prototyping, validated with operational data from a commercial printer. An $\mathcal{H}_{\infty}$-optimal Luenberger state estimator is then synthesized to estimate thermal states from available sensor data, enabling real-time monitoring of spatio-temporal thermal effects on paper sheets.
- oai:arXiv.org:2511.03379v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Samarth Toolhally, Joeri Roelofs, Siep Weiland, Amritam Das
-
-
- Segmentation Beyond Defaults: Asymmetrical Byte Pair Encoding for Optimal Machine Translation Performance
- https://arxiv.org/abs/2511.03383
- arXiv:2511.03383v1 Announce Type: new
-Abstract: Existing Machine Translation (MT) research often suggests a single, fixed set of hyperparameters for word segmentation models, symmetric Byte Pair Encoding (BPE), which applies the same number of merge operations (NMO) to train tokenizers for both source and target languages. However, we demonstrate that this uniform approach does not guarantee optimal MT performance across different language pairs and data sizes. This work investigates BPE segmentation recipes across various data volumes and language pairs to evaluate MT system performance. We find that utilizing asymmetric BPE, where the source and target languages have different NMOs, significantly improves results over the symmetric approach, especially in low-resource settings (50K, 100K, and 500K sentence pairs). Specifically, asymmetric BPE yields statistically significant ($p<0.05$) average gains of 5.32, 4.46, and 0.7 CHRF++ on English-Hindi in low-resource setups. We validated this trend across six additional language pairs (English and Telugu, Shona, Norwegian, Kyrgyz, Hausa, and Inuktitut), observing statistically significant improvement in 10 out of 12 systems compared to symmetric BPE. Our findings indicate that a high NMO for the source (4K to 32K) and a low NMO for the target (0.5K to 2K) provide optimal results, particularly benefiting low-resource MT.
- oai:arXiv.org:2511.03383v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Saumitra Yadav, Manish Shrivastava
-
-
- Monotone Bounded Depth Formula Complexity of Graph Homomorphism Polynomials
- https://arxiv.org/abs/2511.03388
- arXiv:2511.03388v1 Announce Type: new
-Abstract: We characterize the monotone bounded depth formula complexity for graph homomorphism and colored isomorphism polynomials using a graph parameter called the cost of bounded product depth baggy elimination tree. Using this characterization, we show an almost optimal separation between monotone circuits and monotone formulas using constant-degree polynomials for all fixed product depths, and an almost optimal separation between monotone formulas of product depths $\Delta$ and $\Delta+1$ for all $\Delta \ge 1$.
- oai:arXiv.org:2511.03388v1
- cs.CC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Balagopal Komarath (Indian Institute of Technology Gandhinagar), Rohit Narayanan (Indian Institute of Technology Gandhinagar)
-
-
- A New Algorithm for Computing the Stabilizing Solution of General Periodic Time-Varying Stochastic Game-Theoretic Riccati Differential Equations
- https://arxiv.org/abs/2511.03390
- arXiv:2511.03390v1 Announce Type: new
-Abstract: We propose a new algorithm for a broad class of periodic time-varying Stochastic Game-Theoretic Riccati Differential Equations arising in Zero-Sum Linear-Quadratic Stochastic Differential Games. The algorithm is constructed via dual-layer iteration sequences of matrix-valued functions, which reformulate the original problem into a set of interconnected bilevel subproblems. By sequentially computing the maximal periodic solutions to the Riccati differential equations associated with each subproblem, we derive the stabilizing periodic solutions for the original problem and rigorously prove the algorithm's convergence. Numerical experiments verify the algorithm's effectiveness and stability. This study provides a unified numerical framework for solving a wider range of periodic time-varying Stochastic Game-Theoretic Riccati Differential Equations.
- oai:arXiv.org:2511.03390v1
- math.NA
- cs.NA
- math.OC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yiyuan Wang
-
-
- Maximum Likelihood Estimation of Dynamic Sub-Networks with Missing Data
- https://arxiv.org/abs/2511.03391
- arXiv:2511.03391v1 Announce Type: new
-Abstract: Maximum likelihood estimation is effective for identifying dynamical systems, but applying it to large networks becomes computationally prohibitive. This paper introduces a maximum likelihood estimation method that enables identification of sub-networks within complex interconnected systems without estimating the entire network. The key insight is that under specific topological conditions, a sub-network's parameters can be estimated using only local measurements: signals within the target sub-network and those in the directly connected, so-called separator sub-network. This approach significantly reduces computational complexity while enhancing privacy by eliminating the need to share sensitive internal data across organizational boundaries. We establish theoretical conditions for network separability, derive the probability density function for the sub-network, and demonstrate the method's effectiveness through numerical examples.
- oai:arXiv.org:2511.03391v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- João Victor Galvão da Mata, Anders Hansson, Martin S. Andersen
-
-
- Formalizing ETLT and ELTL Design Patterns and Proposing Enhanced Variants: A Systematic Framework for Modern Data Engineering
- https://arxiv.org/abs/2511.03393
- arXiv:2511.03393v1 Announce Type: new
-Abstract: Traditional ETL and ELT design patterns struggle to meet modern requirements of scalability, governance, and real-time data processing. Hybrid approaches such as ETLT (Extract-Transform-Load-Transform) and ELTL (Extract-Load-Transform-Load) are already used in practice, but the literature lacks best practices and formal recognition of these approaches as design patterns. This paper formalizes ETLT and ELTL as reusable design patterns by codifying implicit best practices and introduces enhanced variants, ETLT++ and ELTL++, to address persistent gaps in governance, quality assurance, and observability. We define ETLT and ELTL patterns systematically within a design pattern framework, outlining their structure, trade-offs, and use cases. Building on this foundation, we extend them into ETLT++ and ELTL++ by embedding explicit contracts, versioning, semantic curation, and continuous monitoring as mandatory design obligations. The proposed framework offers practitioners a structured roadmap to build auditable, scalable, and cost-efficient pipelines, unifying quality enforcement, lineage, and usability across multi-cloud and real-time contexts. By formalizing ETLT and ELTL, and enhancing them through ETLT++ and ELTL++, this work bridges the gap between ad hoc practice and systematic design, providing a reusable foundation for modern, trustworthy data engineering.
- oai:arXiv.org:2511.03393v1
- cs.DB
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Chiara Rucco, Motaz Saad, Antonella Longo
-
-
- The (+)-(L, P)-TGRS code
- https://arxiv.org/abs/2511.03398
- arXiv:2511.03398v1 Announce Type: new
-Abstract: The construction of non-Reed-Solomon (in short, non-RS) type linear codes has been a research hotspot in recent years. In 2025, Hu et al. constructed some non-RS MDS codes by defining the (L, P)-twisted generalized Reed-Solomon (in short, (L, P)-TGRS) code. In this paper, we focus on the (+)-(L, P)-TGRS code C. We first present a parity-check matrix. Second, we give a necessary and sufficient condition for C to be NMDS, which partially answers two open problems proposed by Hu et al. in 2025, and prove that C is non-RS for 2k > n, which partially improves the corresponding result given by Hu et al. in 2025. Third, we give a sufficient condition for C not to be self-dual or self-orthogonal, respectively; furthermore, we construct two classes of self-orthogonal codes, which extends the corresponding result given by Ding et al. in 2025. Finally, some examples are given.
- oai:arXiv.org:2511.03398v1
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhonghao Liang, Chenlu Jia, Qunying Liao
-
-
- GUIDES: Guidance Using Instructor-Distilled Embeddings for Pre-trained Robot Policy Enhancement
- https://arxiv.org/abs/2511.03400
- arXiv:2511.03400v1 Announce Type: new
-Abstract: Pre-trained robot policies serve as the foundation of many validated robotic systems, which encapsulate extensive embodied knowledge. However, they often lack the semantic awareness characteristic of foundation models, and replacing them entirely is impractical in many situations due to high costs and the loss of accumulated knowledge. To address this gap, we introduce GUIDES, a lightweight framework that augments pre-trained policies with semantic guidance from foundation models without requiring architectural redesign. GUIDES employs a fine-tuned vision-language model (Instructor) to generate contextual instructions, which are encoded by an auxiliary module into guidance embeddings. These embeddings are injected into the policy's latent space, allowing the legacy model to adapt to this new semantic input through brief, targeted fine-tuning. For inference-time robustness, a large language model-based Reflector monitors the Instructor's confidence and, when confidence is low, initiates a reasoning loop that analyzes execution history, retrieves relevant examples, and augments the VLM's context to refine subsequent actions. Extensive validation in the RoboCasa simulation environment across diverse policy architectures shows consistent and substantial improvements in task success rates. Real-world deployment on a UR5 robot further demonstrates that GUIDES enhances motion precision for critical sub-tasks such as grasping. Overall, GUIDES offers a practical and resource-efficient pathway to upgrade, rather than replace, validated robot policies.
- oai:arXiv.org:2511.03400v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Minquan Gao, Xinyi Li, Qing Yan, Xiaojian Sun, Xiaopan Zhang, Chien-Ming Huang, Jiachen Li
-
-
- An Alternative Derivation and Optimal Design Method of the Generalized Bilinear Transformation for Discretizing Analog Systems
- https://arxiv.org/abs/2511.03403
- arXiv:2511.03403v1 Announce Type: new
-Abstract: A popular method for designing digital systems is transforming the transfer function of the corresponding analog system from the continuous-time domain (s-domain) into the discrete-time domain (z-domain) using the Euler or Tustin method. We demonstrate that these transformations are two specific forms of the Generalized Bilinear Transformation (GBT) with a design parameter, $\alpha$. However, the physical meaning and optimal design method for this parameter are not sufficiently studied. In this paper, we propose an alternative derivation of the GBT by employing a new hexagonal shape to approximate the enclosed area of the error function, and we define the parameter $\alpha$ as the shape factor. The physical meaning of the shape factor is revealed for the first time: it equals the percentage of the backward rectangular ratio of the proposed hexagonal shape. We demonstrate that the stable range of the shape factor is [0.5, 1] through domain mapping. Depending on the operating frequencies and the shape factor, we observe two distinct distortion modes, i.e., magnitude distortion and phase distortion. We proceed to develop an optimal design method for the shape factor based on an objective function in the form of the normalized magnitude or phase error. Finally, a low-pass filter (LPF) is designed and tested to verify the effectiveness of the proposed method by comparing the theoretical calculations with the experimental results.
- oai:arXiv.org:2511.03403v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shen Chen, Yanlong Li, Jiamin Cui, Wei Yao, Jisong Wang, Yixin Tian, Chaohou Liu, Yang Yang, Jiaxi Ying, Zeng Liu, Jinjun Liu
-
-
- Towards Realistic Project-Level Code Generation via Multi-Agent Collaboration and Semantic Architecture Modeling
- https://arxiv.org/abs/2511.03404
- arXiv:2511.03404v1 Announce Type: new
-Abstract: In recent years, Large Language Models (LLMs) have achieved remarkable progress in automated code generation. In real-world software engineering, the growing demand for rapid iteration and continuous delivery underscores the importance of project-level code generation, where LLMs are expected to generate complete software projects directly from complex user requirements. Although existing studies have made initial explorations, they still face key limitations, including unrealistic datasets and unreliable evaluation metrics that fail to reflect real-world complexity, the semantic gap between human-written requirements and machine-interpretable structures, and difficulties in managing hierarchical dependencies and maintaining quality throughout the generation process. To address these limitations, we first introduce CodeProjectEval, a project-level code generation dataset built from 18 real-world repositories with 12.7 files and 2,388.6 lines of code per task on average, supplemented with documentation and executable test cases for automatic evaluation. We further propose ProjectGen, a multi-agent framework that decomposes projects into architecture design, skeleton generation, and code filling stages with iterative refinement and memory-based context management. Within this framework, we introduce the Semantic Software Architecture Tree (SSAT), a structured and semantically rich representation that effectively bridges user requirements and source code implementation. Experiments show that ProjectGen achieves state-of-the-art performance, passing 52/124 test cases on the small-scale project-level code generation dataset DevBench, a 57% improvement over the baseline approaches, and 310 test cases on CodeProjectEval, representing an improvement of roughly tenfold compared to the baselines.
- oai:arXiv.org:2511.03404v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qianhui Zhao, Li Zhang, Fang Liu, Junhang Cheng, Chengru Wu, Junchen Ai, Qiaoyuanhe Meng, Lichen Zhang, Xiaoli Lian, Shubin Song, Yuanping Guo
-
-
- Adaptable Hindsight Experience Replay for Search-Based Learning
- https://arxiv.org/abs/2511.03405
- arXiv:2511.03405v1 Announce Type: new
-Abstract: AlphaZero-like Monte Carlo Tree Search systems, originally introduced for two-player games, dynamically balance exploration and exploitation using neural network guidance. This combination also makes them suitable for classical search problems. However, the original method of training the network with simulation results is limited in sparse reward settings, especially in the early stages, where the network cannot yet give guidance. Hindsight Experience Replay (HER) addresses this issue by relabeling unsuccessful trajectories from the search tree as supervised learning signals. We introduce Adaptable HER, a flexible framework that integrates HER with AlphaZero, allowing easy adjustments to HER properties such as relabeled goals, policy targets, and trajectory selection. Our experiments, including equation discovery, show that the possibility of modifying HER is beneficial and surpasses the performance of pure supervised or reinforcement learning.
- oai:arXiv.org:2511.03405v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Alexandros Vazaios, Jannis Brugger, Cedric Derstroff, Kristian Kersting, Mira Mezini
-
-
- Overcoming the Generalization Limits of SLM Finetuning for Shape-Based Extraction of Datatype and Object Properties
- https://arxiv.org/abs/2511.03407
- arXiv:2511.03407v1 Announce Type: new
-Abstract: Small language models (SLMs) have shown promise for relation extraction (RE) when extracting RDF triples guided by SHACL shapes focused on common datatype properties. This paper investigates how SLMs handle both datatype and object properties for a complete RDF graph extraction. We show that the key bottleneck is related to the long-tail distribution of rare properties. To solve this issue, we evaluate several strategies: stratified sampling, weighted loss, dataset scaling, and template-based synthetic data augmentation. We show that the best strategy to perform equally well over unbalanced target properties is to build a training set where the number of occurrences of each property exceeds a given threshold. To enable reproducibility, we publicly released our datasets, experimental results and code. Our findings offer practical guidance for training shape-aware SLMs and highlight promising directions for future work in semantic RE.
- oai:arXiv.org:2511.03407v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Célian Ringwald, Fabien Gandon, Catherine Faron, Franck Michel, Hanna Abi Akl
-
-
- Efficient Reasoning via Thought-Training and Thought-Free Inference
- https://arxiv.org/abs/2511.03408
- arXiv:2511.03408v1 Announce Type: new
-Abstract: Recent advances in large language models (LLMs) have leveraged explicit Chain-of-Thought (CoT) prompting to improve reasoning accuracy. However, most existing methods primarily compress verbose reasoning outputs. These Long-to-Short transformations aim to improve efficiency, but still rely on explicit reasoning during inference. In this work, we introduce 3TF (Thought-Training and Thought-Free inference), a framework for efficient reasoning that takes a Short-to-Long perspective. We first train a hybrid model that can operate in both reasoning and non-reasoning modes, and then further train it on CoT-annotated data to internalize structured reasoning, while enforcing concise, thought-free outputs at inference time using the no-reasoning mode. Unlike compression-based approaches, 3TF improves the reasoning quality of non-reasoning outputs, enabling models to perform rich internal reasoning implicitly while keeping external outputs short. Empirically, 3TF-trained models obtain large improvements on reasoning benchmarks under thought-free inference, demonstrating that high-quality reasoning can be learned and executed implicitly without explicit step-by-step generation.
- oai:arXiv.org:2511.03408v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Canhui Wu, Qiong Cao, Chao Xue, Wei Xi, Xiaodong He
-
-
- Knowledge-Augmented Question Error Correction for Chinese Question Answer System with QuestionRAG
- https://arxiv.org/abs/2511.03410
- arXiv:2511.03410v1 Announce Type: new
-Abstract: Input errors in question-answering (QA) systems often lead to incorrect responses. Large language models (LLMs) struggle with this task, frequently failing to interpret user intent (misinterpretation) or unnecessarily altering the original question's structure (over-correction). We propose QuestionRAG, a framework that tackles these problems. To address misinterpretation, it enriches the input with external knowledge (e.g., search results, related entities). To prevent over-correction, it uses reinforcement learning (RL) to align the model's objective with precise correction, not just paraphrasing. Our results demonstrate that knowledge augmentation is critical for understanding faulty questions. Furthermore, RL-based alignment proves significantly more effective than traditional supervised fine-tuning (SFT), boosting the model's ability to follow instructions and generalize. By integrating these two strategies, QuestionRAG unlocks the full potential of LLMs for the question correction task.
- oai:arXiv.org:2511.03410v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Longpeng Qiu, Ting Li, Shuai Mao, Nan Yang, Xiaohui Yan
-
-
- On the Fundamental Scaling Laws of Fluid Antenna Systems
- https://arxiv.org/abs/2511.03415
- arXiv:2511.03415v1 Announce Type: new
-Abstract: Fluid antenna systems (FAS) offer a promising paradigm for enhancing wireless communication by exploiting spatial diversity, yet a rigorous analytical framework for their error probability has been notably absent. This paper addresses this critical gap by unveiling the fundamental scaling laws that govern the symbol error rate (SER) of FAS in realistic, spatially correlated channels. To establish these laws, we derive a tight, closed-form asymptotic expression for the SER applicable to a general class of modulation schemes. This result is pivotal as it establishes the fundamental scaling law governing the relationship between SER and the channel's spatial correlation structure. Based on this framework, we provide a complete characterization of the diversity and coding gains. The analysis culminates in a definitive design directive: SER can be fundamentally improved by expanding the antenna's movement space to increase diversity, while merely increasing port density within a constrained space yields diminishing returns.
- oai:arXiv.org:2511.03415v1
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xusheng Zhu, Farshad Rostami Ghadi, Tuo Wu, Kaitao Meng, Chao Wang, Gui Zhou
-
-
- Robust Alignment of the Human Embryo in 3D Ultrasound using PCA and an Ensemble of Heuristic, Atlas-based and Learning-based Classifiers Evaluated on the Rotterdam Periconceptional Cohort
- https://arxiv.org/abs/2511.03416
- arXiv:2511.03416v1 Announce Type: new
-Abstract: Standardized alignment of the embryo in three-dimensional (3D) ultrasound images aids prenatal growth monitoring by facilitating standard plane detection, improving visualization of landmarks and accentuating differences between different scans. In this work, we propose an automated method for standardizing this alignment. Given a segmentation mask of the embryo, Principal Component Analysis (PCA) is applied to the mask, extracting the embryo's principal axes, from which four candidate orientations are derived. The candidate in standard orientation is selected using one of three strategies: a heuristic based on Pearson's correlation assessing shape, image matching to an atlas through normalized cross-correlation, and a Random Forest classifier. We tested our method on 2166 longitudinally acquired 3D ultrasound scans from 1043 pregnancies from the Rotterdam Periconceptional Cohort, ranging from 7+0 to 12+6 weeks of gestational age. In 99.0% of images, PCA correctly extracted the principal axes of the embryo. The correct candidate was selected by the Pearson Heuristic, Atlas-based and Random Forest in 97.4%, 95.8%, and 98.4% of images, respectively. A Majority Vote of these selection methods resulted in an accuracy of 98.5%. The high accuracy of this pipeline enables consistent embryonic alignment in the first trimester, enabling scalable analysis in both clinical and research settings. The code is publicly available at: https://gitlab.com/radiology/prenatal-image-analysis/pca-3d-alignment.
- oai:arXiv.org:2511.03416v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- 10.1007/978-3-032-05997-0_15
- Springer Nature Switzerland, Cham. International Workshop on Preterm, Perinatal and Paediatric Image Analysis. (2025) pp. 164-175
- Nikolai Herrmann, Marcella C. Zijta, Stefan Klein, Régine P. M. Steegers-Theunissen, Rene M. H. Wijnen, Bernadette S. de Bakker, Melek Rousian, Wietske A. P. Bastiaansen
-
-
- Light over Heavy: Automated Performance Requirements Quantification with Linguistic Inducement
- https://arxiv.org/abs/2511.03421
- arXiv:2511.03421v1 Announce Type: new
-Abstract: Elicited performance requirements need to be quantified for compliance in different engineering tasks, e.g., configuration tuning and performance testing. Much existing work has relied on manual quantification, which is expensive and error-prone due to its imprecision. In this paper, we present LQPR, a highly efficient automatic approach for performance requirements quantification. LQPR relies on a new theoretical framework that casts quantification as a classification problem. Despite the prevalent application of Large Language Models (LLMs) for requirement analytics, LQPR takes a different perspective to address the classification: we observed that performance requirements can exhibit strong patterns and are often short/concise, therefore we design a lightweight linguistically induced matching mechanism. We compare LQPR against nine state-of-the-art learning-based approaches over diverse datasets, demonstrating that it ranks as the sole best in 75% or more of the cases at two orders of magnitude lower cost. Our work proves that, at least for performance requirement quantification, specialized methods can be more suitable than general LLM-driven approaches.
- oai:arXiv.org:2511.03421v1
- cs.SE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Shihai Wang, Tao Chen
-
-
- SyMuPe: Affective and Controllable Symbolic Music Performance
- https://arxiv.org/abs/2511.03425
- arXiv:2511.03425v1 Announce Type: new
-Abstract: Emotions are fundamental to the creation and perception of music performances. However, achieving human-like expression and emotion through machine learning models for performance rendering remains a challenging task. In this work, we present SyMuPe, a novel framework for developing and training affective and controllable symbolic piano performance models. Our flagship model, PianoFlow, uses conditional flow matching trained to solve diverse multi-mask performance inpainting tasks. By design, it supports both unconditional generation and infilling of music performance features. For training, we use a curated, cleaned dataset of 2,968 hours of aligned musical scores and expressive MIDI performances. For text and emotion control, we integrate a piano performance emotion classifier and tune PianoFlow with the emotion-weighted Flan-T5 text embeddings provided as conditional inputs. Objective and subjective evaluations against transformer-based baselines and existing models show that PianoFlow not only outperforms other approaches, but also achieves performance quality comparable to that of human-recorded and transcribed MIDI samples. For emotion control, we present and analyze samples generated under different text conditioning scenarios. The developed model can be integrated into interactive applications, contributing to the creation of more accessible and engaging music performance systems.
- oai:arXiv.org:2511.03425v1
- cs.SD
- cs.LG
- cs.MM
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- 10.1145/3746027.3755871
- Proceedings of the 33rd ACM International Conference on Multimedia (MM '25), October 27-31, 2025, Dublin, Ireland, pp. 10699-10708
- Ilya Borovik, Dmitrii Gavrilev, Vladimir Viro
-
-
- Design and Optimization of Mixed-Kernel Mixed-Signal SVMs for Flexible Electronics
- https://arxiv.org/abs/2511.03427
- arXiv:2511.03427v1 Announce Type: new
-Abstract: Flexible Electronics (FE) have emerged as a promising alternative to silicon-based technologies, offering on-demand low-cost fabrication, conformality, and sustainability. However, their large feature sizes severely limit integration density, imposing strict area and power constraints, thus prohibiting the realization of Machine Learning (ML) circuits, which can significantly enhance the capabilities of relevant near-sensor applications. Support Vector Machines (SVMs) offer high accuracy in such applications at relatively low computational complexity, satisfying FE technologies' constraints. Existing SVM designs rely solely on linear or Radial Basis Function (RBF) kernels, forcing a trade-off between hardware costs and accuracy. Linear kernels, implemented digitally, minimize overhead but sacrifice performance, while the more accurate RBF kernels are prohibitively large in digital, and their analog realization contains inherent functional approximation. In this work, we propose the first mixed-kernel and mixed-signal SVM design in FE, which unifies the advantages of both implementations and balances the cost/accuracy trade-off. To that end, we introduce a co-optimization approach that trains our mixed-kernel SVMs and maps binary SVM classifiers to the appropriate kernel (linear/RBF) and domain (digital/analog), aiming to maximize accuracy whilst reducing the number of costly RBF classifiers. Our designs deliver 7.7% higher accuracy than state-of-the-art single-kernel linear SVMs, and reduce area and power by 108x and 17x on average compared to digital RBF implementations.
- oai:arXiv.org:2511.03427v1
- cs.AR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Florentia Afentaki, Maha Shatta, Konstantinos Balaskas, Georgios Panagopoulos, Georgios Zervakis, Mehdi B. Tahoori
-
-
- Inter-Agent Trust Models: A Comparative Study of Brief, Claim, Proof, Stake, Reputation and Constraint in Agentic Web Protocol Design-A2A, AP2, ERC-8004, and Beyond
- https://arxiv.org/abs/2511.03434
- arXiv:2511.03434v1 Announce Type: new
-Abstract: As the "agentic web" takes shape, with billions of AI agents (often LLM-powered) autonomously transacting and collaborating, trust shifts from human oversight to protocol design. In 2025, several inter-agent protocols crystallized this shift, including Google's Agent-to-Agent (A2A), Agent Payments Protocol (AP2), and Ethereum's ERC-8004 "Trustless Agents," yet their underlying trust assumptions remain under-examined. This paper presents a comparative study of trust models in inter-agent protocol design: Brief (self- or third-party verifiable claims), Claim (self-proclaimed capabilities and identity, e.g. AgentCard), Proof (cryptographic verification, including zero-knowledge proofs and trusted execution environment attestations), Stake (bonded collateral with slashing and insurance), Reputation (crowd feedback and graph-based trust signals), and Constraint (sandboxing and capability bounding). For each, we analyze assumptions, attack surfaces, and design trade-offs, with particular emphasis on LLM-specific fragilities (prompt injection, sycophancy/nudge-susceptibility, hallucination, deception, and misalignment) that render purely reputational or claim-only approaches brittle. Our findings indicate no single mechanism suffices. We argue for trustless-by-default architectures anchored in Proof and Stake to gate high-impact actions, augmented by Brief for identity and discovery and Reputation overlays for flexibility and social signals. We comparatively evaluate A2A, AP2, ERC-8004 and related historical variations in academic research under metrics spanning security, privacy, latency/cost, and social robustness (Sybil/collusion/whitewashing resistance). We conclude with hybrid trust model recommendations that mitigate reputation gaming and misinformed LLM behavior, and we distill actionable design guidelines for safer, interoperable, and scalable agent economies.
- oai:arXiv.org:2511.03434v1
- cs.HC
- cs.AI
- cs.MA
- cs.NI
- cs.SI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Botao 'Amber' Hu, Helena Rong
-
-
- HERP: Hardware for Energy Efficient and Realtime DB Search and Cluster Expansion in Proteomics
- https://arxiv.org/abs/2511.03437
- arXiv:2511.03437v1 Announce Type: new
-Abstract: Database (DB) search and clustering are fundamental in proteomics, but conventional full clustering and search approaches demand high resources and incur long latency. We propose a lightweight incremental clustering and highly parallelizable DB search platform tailored for resource-constrained environments, delivering low energy and latency without compromising performance. By leveraging mass-spectrometry insights, we employ bucket-wise parallelization and query scheduling to reduce latency. A one-time hardware initialization with pre-clustered proteomics data enables continuous DB search and local re-clustering, offering a more practical and efficient alternative to clustering from scratch. Heuristics from pre-clustered data guide incremental clustering, accelerating the process by 20x with only a 0.3% increase in clustering error. DB search results overlap by 96% with state-of-the-art tools, validating search quality. The hardware leverages a 3T2MTJ SOT-CAM at the 7nm node with a compute-in-memory design. For the human genome draft dataset (131GB), setup requires 1.19mJ for 2M spectra, while a 1000-query search consumes 1.1µJ. Bucket-wise parallelization further achieves 100x speedup.
- oai:arXiv.org:2511.03437v1
- cs.DB
- cs.ET
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Md Mizanur Rahaman Nayan, Zheyu Li, Flavio Ponzina, Sumukh Pinge, Tajana Rosing, Azad J. Naeemi
-
-
- Hesse's Redemption: Efficient Convex Polynomial Programming
- https://arxiv.org/abs/2511.03440
- arXiv:2511.03440v1 Announce Type: new
-Abstract: Efficient algorithms for convex optimization, such as the ellipsoid method, require an a priori bound on the radius of a ball around the origin guaranteed to contain an optimal solution if one exists. For linear and convex quadratic programming, such solution bounds follow from classical characterizations of optimal solutions by systems of linear equations. For other programs, e.g., semidefinite ones, examples due to Khachiyan show that optimal solutions may require huge coefficients with an exponential number of bits, even if we allow approximations. Correspondingly, semidefinite programming is not even known to be in NP. The unconstrained minimization of convex polynomials of degree four and higher has remained a fundamental open problem between these two extremes: its optimal solutions do not admit a linear characterization and, at the same time, Khachiyan-type examples do not apply. We resolve this problem by developing new techniques to prove solution bounds when no linear characterizations are available. Even for programs minimizing a convex polynomial (of arbitrary degree) over a polyhedron, we prove that the existence of an optimal solution implies that an approximately optimal one with polynomial bit length also exists. These solution bounds, combined with the ellipsoid method, yield the first polynomial-time algorithm for convex polynomial programming, settling a question posed by Nesterov (Math. Program., 2019). Before, no polynomial-time algorithm was known even for unconstrained minimization of a convex polynomial of degree four.
- oai:arXiv.org:2511.03440v1
- cs.DS
- cs.CC
- math.AG
- math.OC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lucas Slot, David Steurer, Manuel Wiedmer
-
-
- CareMedEval dataset: Evaluating Critical Appraisal and Reasoning in the Biomedical Field
- https://arxiv.org/abs/2511.03441
- arXiv:2511.03441v1 Announce Type: new
-Abstract: Critical appraisal of scientific literature is an essential skill in the biomedical field. While large language models (LLMs) can offer promising support in this task, their reliability remains limited, particularly for critical reasoning in specialized domains. We introduce CareMedEval, an original dataset designed to evaluate LLMs on biomedical critical appraisal and reasoning tasks. Derived from authentic exams taken by French medical students, the dataset contains 534 questions based on 37 scientific articles. Unlike existing benchmarks, CareMedEval explicitly evaluates critical reading and reasoning grounded in scientific papers. Benchmarking state-of-the-art generalist and biomedical-specialized LLMs under various context conditions reveals the difficulty of the task: open and commercial models fail to exceed an Exact Match Rate of 0.5 even though generating intermediate reasoning tokens considerably improves the results. Yet, models remain challenged especially on questions about study limitations and statistical analysis. CareMedEval provides a challenging benchmark for grounded reasoning, exposing current LLM limitations and paving the way for future development of automated support for critical appraisal.
- oai:arXiv.org:2511.03441v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Doria Bonzi, Alexandre Guiggi, Frédéric Béchet, Carlos Ramisch, Benoit Favre
-
-
- Value Elicitation for a Socially Assistive Robot Addressing Social Anxiety: A Participatory Design Approach
- https://arxiv.org/abs/2511.03444
- arXiv:2511.03444v1 Announce Type: new
-Abstract: Social anxiety is a prevalent mental health condition that can significantly impact overall well-being and quality of life. Despite its widespread effects, support and treatment for social anxiety are often insufficient. Advances in technology, particularly in social robotics, offer promising opportunities to complement traditional mental health care. As an initial step toward developing effective solutions, it is essential to understand the values that shape what is considered meaningful, acceptable, and helpful. In this study, a participatory design workshop was conducted with mental health academic researchers to elicit the underlying values that should inform the design of socially assistive robots for social anxiety support. Through creative, reflective, and envisioning activities, participants explored scenarios and design possibilities, allowing for systematic elicitation of values, expectations, needs, and preferences related to robot-supported interventions. The findings reveal rich insights into design-relevant values, including adaptivity, acceptance, and efficacy, that are core to supporting individuals with social anxiety. This study highlights the significance of a research-led approach to value elicitation, emphasising user-centred and context-aware design considerations in the development of socially assistive robots.
- oai:arXiv.org:2511.03444v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Vesna Poprcova, Iulia Lefter, Martijn Warnier, Frances Brazier
-
-
- Generalizing Shape-from-Template to Topological Changes
- https://arxiv.org/abs/2511.03459
- arXiv:2511.03459v1 Announce Type: new
-Abstract: Reconstructing the surfaces of deformable objects from correspondences between a 3D template and a 2D image is well studied under Shape-from-Template (SfT) methods; however, existing approaches break down when topological changes accompany the deformation. We propose a principled extension of SfT that enables reconstruction in the presence of such changes. Our approach is initialized with a classical SfT solution and iteratively adapts the template by partitioning its spatial domain so as to minimize an energy functional that jointly encodes physical plausibility and reprojection consistency. We demonstrate that the method robustly captures a wide range of practically relevant topological events including tears and cuts on bounded 2D surfaces, thereby establishing the first general framework for topological-change-aware SfT. Experiments on both synthetic and real data confirm that our approach consistently outperforms baseline methods.
- oai:arXiv.org:2511.03459v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Kevin Manogue, Tomasz M Schang, Dilara Kuş, Jonas Müller, Stefan Zachow, Agniva Sengupta
-
-
- Dynamic Meta-Kernelization
- https://arxiv.org/abs/2511.03461
- arXiv:2511.03461v1 Announce Type: new
-Abstract: Kernelization studies polynomial-time preprocessing algorithms. Over the last 20 years, the most celebrated positive results of the field have been linear kernels for classical NP-hard graph problems on sparse graph classes. In this paper, we lift these results to the dynamic setting.
- As the canonical example, Alber, Fellows, and Niedermeier [J. ACM 2004] gave a linear kernel for dominating set on planar graphs. We provide the following dynamic version of their kernel: Our data structure is initialized with an $n$-vertex planar graph $G$ in $O(n \log n)$ amortized time, and, at initialization, outputs a planar graph $K$ with $\mathrm{OPT}(K) = \mathrm{OPT}(G)$ and $|K| = O(\mathrm{OPT}(G))$, where $\mathrm{OPT}(\cdot)$ denotes the size of a minimum dominating set. The graph $G$ can be updated by insertions and deletions of edges and isolated vertices in $O(\log n)$ amortized time per update, under the promise that it remains planar. After each update to $G$, the data structure outputs $O(1)$ updates to $K$, maintaining $\mathrm{OPT}(K) = \mathrm{OPT}(G)$, $|K| = O(\mathrm{OPT}(G))$, and planarity of $K$.
- Furthermore, we obtain similar dynamic kernelization algorithms for all problems satisfying certain conditions on (topological-)minor-free graph classes. Besides kernelization, this directly implies new dynamic constant-approximation algorithms and improvements to dynamic FPT algorithms for such problems.
- Our main technical contribution is a dynamic data structure for maintaining an approximately optimal protrusion decomposition of a dynamic topological-minor-free graph. Protrusion decompositions were introduced by Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh, and Thilikos [J. ACM 2016], and have since developed into a part of the core toolbox in kernelization and parameterized algorithms.
- oai:arXiv.org:2511.03461v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Christian Bertram, Deborah Haun, Mads Vestergaard Jensen, Tuukka Korhonen
-
-
- POEMS: Product of Experts for Interpretable Multi-omic Integration using Sparse Decoding
- https://arxiv.org/abs/2511.03464
- arXiv:2511.03464v1 Announce Type: new
-Abstract: Integrating different molecular layers, i.e., multiomics data, is crucial for unraveling the complexity of diseases; yet, most deep generative models either prioritize predictive performance at the expense of interpretability or enforce interpretability by linearizing the decoder, thereby weakening the network's nonlinear expressiveness. To overcome this tradeoff, we introduce POEMS: Product Of Experts for Interpretable Multiomics Integration using Sparse Decoding, an unsupervised probabilistic framework that preserves predictive performance while providing interpretability. POEMS provides interpretability without linearizing any part of the network by 1) mapping features to latent factors using sparse connections, which directly translates to biomarker discovery, 2) allowing for cross-omic associations through a shared latent space using product of experts model, and 3) reporting contributions of each omic by a gating network that adaptively computes their influence in the representation learning. Additionally, we present an efficient sparse decoder. In a cancer subtyping case study, POEMS achieves competitive clustering and classification performance while offering our novel set of interpretations, demonstrating that biomarker based insight and predictive accuracy can coexist in multiomics representation learning.
- oai:arXiv.org:2511.03464v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- EurIPS 2025
- Mihriban Kocak Balik, Pekka Marttinen, Negar Safinianaini
-
-
- Kastor: Fine-tuned Small Language Models for Shape-based Active Relation Extraction
- https://arxiv.org/abs/2511.03466
- arXiv:2511.03466v1 Announce Type: new
-Abstract: RDF pattern-based extraction is a compelling approach for fine-tuning small language models (SLMs) by focusing a relation extraction task on a specified SHACL shape. This technique enables the development of efficient models trained on limited text and RDF data. In this article, we introduce Kastor, a framework that advances this approach to meet the demands for completing and refining knowledge bases in specialized domains. Kastor reformulates the traditional validation task, shifting from single SHACL shape validation to evaluating all possible combinations of properties derived from the shape. By selecting the optimal combination for each training example, the framework significantly enhances model generalization and performance. Additionally, Kastor employs an iterative learning process to refine noisy knowledge bases, enabling the creation of robust models capable of uncovering new, relevant facts.
- oai:arXiv.org:2511.03466v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- 10.1007/978-3-031-94575-5_6
- The Semantic Web: 22nd European Semantic Web Conference, ESWC 2025, Portoroz, Slovenia, June 1-5, 2025, Proceedings, Part I
- Ringwald Celian, Gandon Fabien, Faron Catherine, Michel Franck, Abi Akl Hanna
-
-
- Towards Scalable Web Accessibility Audit with MLLMs as Copilots
- https://arxiv.org/abs/2511.03471
- arXiv:2511.03471v1 Announce Type: new
-Abstract: Ensuring web accessibility is crucial for advancing social welfare, justice, and equality in digital spaces, yet the vast majority of website user interfaces remain non-compliant, due in part to the resource-intensive and unscalable nature of current auditing practices. While WCAG-EM offers a structured methodology for site-wise conformance evaluation, it involves substantial human effort and lacks practical support for execution at scale. In this work, we present an auditing framework, AAA, which operationalizes WCAG-EM through a human-AI partnership model. AAA is anchored by two key innovations: GRASP, a graph-based multimodal sampling method that ensures representative page coverage via learned embeddings of visual, textual, and relational cues; and MaC, a multimodal large language model-based copilot that supports auditors through cross-modal reasoning and intelligent assistance in high-effort tasks. Together, these components enable scalable, end-to-end web accessibility auditing, empowering human auditors with AI-enhanced assistance for real-world impact. We further contribute four novel datasets designed for benchmarking core stages of the audit pipeline. Extensive experiments demonstrate the effectiveness of our methods, and show that small-scale language models, when fine-tuned, can serve as capable experts.
- oai:arXiv.org:2511.03471v1
- cs.AI
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ming Gu, Ziwei Wang, Sicen Lai, Zirui Gao, Sheng Zhou, Jiajun Bu
-
-
- Reinforcement Learning Using Known Invariances
- https://arxiv.org/abs/2511.03473
- arXiv:2511.03473v1 Announce Type: new
-Abstract: In many real-world reinforcement learning (RL) problems, the environment exhibits inherent symmetries that can be exploited to improve learning efficiency. This paper develops a theoretical and algorithmic framework for incorporating known group symmetries into kernel-based RL. We propose a symmetry-aware variant of optimistic least-squares value iteration (LSVI), which leverages invariant kernels to encode invariance in both rewards and transition dynamics. Our analysis establishes new bounds on the maximum information gain and covering numbers for invariant RKHSs, explicitly quantifying the sample efficiency gains from symmetry. Empirical results on a customized Frozen Lake environment and a 2D placement design problem confirm the theoretical improvements, demonstrating that symmetry-aware RL achieves significantly better performance than its standard kernel counterparts. These findings highlight the value of structural priors in designing more sample-efficient reinforcement learning algorithms.
- oai:arXiv.org:2511.03473v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Alexandru Cioba, Aya Kayal, Laura Toni, Sattar Vakili, Alberto Bernacchia
-
-
- RAGBoost: Efficient Retrieval-Augmented Generation with Accuracy-Preserving Context Reuse
- https://arxiv.org/abs/2511.03475
- arXiv:2511.03475v1 Announce Type: new
-Abstract: Retrieval-augmented generation (RAG) enhances large language models (LLMs) with retrieved context but often suffers from downgraded prefill performance as modern applications demand longer and more complex inputs. Existing caching techniques either preserve accuracy with low cache reuse or improve reuse at the cost of degraded reasoning quality. We present RAGBoost, an efficient RAG system that achieves high cache reuse without sacrificing accuracy through accuracy-preserving context reuse. RAGBoost detects overlapping retrieved items across concurrent sessions and multi-turn interactions, using efficient context indexing, ordering, and de-duplication to maximize reuse, while lightweight contextual hints maintain reasoning fidelity. It integrates seamlessly with existing LLM inference engines and improves their prefill performance by 1.5-3X over state-of-the-art methods, while preserving or even enhancing reasoning accuracy across diverse RAG and agentic AI workloads. Our code is released at: https://github.com/Edinburgh-AgenticAI/RAGBoost.
- oai:arXiv.org:2511.03475v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Yinsicheng Jiang, Yeqi Huang, Liang Cheng, Cheng Deng, Xuan Sun, Luo Mai
-
-
- SVG Decomposition for Enhancing Large Multimodal Models Visualization Comprehension: A Study with Floor Plans
- https://arxiv.org/abs/2511.03478
- arXiv:2511.03478v1 Announce Type: new
-Abstract: Large multimodal models (LMMs) are increasingly capable of interpreting visualizations, yet they continue to struggle with spatial reasoning. One proposed strategy is decomposition, which breaks down complex visualizations into structured components. In this work, we examine the efficacy of scalable vector graphics (SVGs) as a decomposition strategy for improving LMMs' performance on floor plan comprehension. Floor plans serve as a valuable testbed because they combine geometry, topology, and semantics, and their reliable comprehension has real-world applications, such as accessibility for blind and low-vision individuals. We conducted an exploratory study with three LMMs (GPT-4o, Claude 3.7 Sonnet, and Llama 3.2 11B Vision Instruct) across 75 floor plans. Results show that combining SVG with raster input (SVG+PNG) improves performance on spatial understanding tasks but often hinders spatial reasoning, particularly in pathfinding. These findings highlight both the promise and limitations of decomposition as a strategy for advancing spatial visualization comprehension.
- oai:arXiv.org:2511.03478v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jeongah Lee, Ali Sarvghad
-
-
- In-Memory Indexing and Querying of Provenance in Data Preparation Pipelines
- https://arxiv.org/abs/2511.03480
- arXiv:2511.03480v1 Announce Type: new
-Abstract: Data provenance has numerous applications in the context of data preparation pipelines. It can be used for debugging faulty pipelines, interpreting results, verifying fairness, and identifying data quality issues, which may affect the sources feeding the pipeline execution. In this paper, we present an indexing mechanism to efficiently capture and query pipeline provenance. Our solution leverages tensors to capture fine-grained provenance of data processing operations, using minimal memory. In addition to record-level lineage relationships, we provide finer granularity at the attribute level. This is achieved by augmenting tensors, which capture retrospective provenance, with prospective provenance information, drawing connections between input and output schemas of data processing operations. We demonstrate how these two types of provenance (retrospective and prospective) can be combined to answer a broad range of provenance queries efficiently, and show effectiveness through evaluation exercises using both real and synthetic data.
- oai:arXiv.org:2511.03480v1
- cs.DB
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Khalid Belhajjame, Haroun Mezrioui, Yuyan Zhao
-
-
- Development of the Bioinspired Tendon-Driven DexHand 021 with Proprioceptive Compliance Control
- https://arxiv.org/abs/2511.03481
- arXiv:2511.03481v1 Announce Type: new
-Abstract: The human hand plays a vital role in daily life and industrial applications, yet replicating its multifunctional capabilities-including motion, sensing, and coordinated manipulation-with robotic systems remains a formidable challenge. Developing a dexterous robotic hand requires balancing human-like agility with engineering constraints such as complexity, size-to-weight ratio, durability, and force-sensing performance. This letter presents DexHand 021, a high-performance, cable-driven five-finger robotic hand with 12 active and 7 passive degrees of freedom (DoFs), achieving 19 DoFs dexterity in a lightweight 1 kg design. We propose a proprioceptive force-sensing-based admittance control method to enhance manipulation. Experimental results demonstrate its superior performance: a single-finger load capacity exceeding 10 N, fingertip repeatability under 0.001 m, and force estimation errors below 0.2 N. Compared to PID control, joint torques in multi-object grasping are reduced by 31.19%, which significantly improves force-sensing capability while preventing overload during collisions. The hand excels in both power and precision grasps, successfully executing 33 GRASP taxonomy motions and complex manipulation tasks. This work advances the design of lightweight, industrial-grade dexterous hands and enhances proprioceptive control, contributing to robotic manipulation and intelligent manufacturing.
- oai:arXiv.org:2511.03481v1
- cs.RO
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Jianbo Yuan, Haohua Zhu, Jing Dai, Sheng Yi
-
-
- System Identification of a Moored ASV with Recessed Moon Pool via Deterministic and Bayesian Hankel-DMDc
- https://arxiv.org/abs/2511.03482
- arXiv:2511.03482v1 Announce Type: new
-Abstract: This study addresses the system identification of a small autonomous surface vehicle (ASV) under moored conditions using Hankel dynamic mode decomposition with control (HDMDc) and its Bayesian extension (BHDMDc). Experiments were carried out on a Codevintec CK-14e ASV in the towing tank of CNR-INM, under both irregular and regular head-sea wave conditions. The ASV under investigation features a recessed moon pool, which induces nonlinear responses due to sloshing, thereby increasing the modelling challenge. Data-driven reduced-order models were built from measurements of vessel motions and mooring loads. The HDMDc framework provided accurate deterministic predictions of vessel dynamics, while the Bayesian formulation enabled uncertainty-aware characterization of the model response by accounting for variability in hyperparameter selection. Validation against experimental data demonstrated that both HDMDc and BHDMDc can predict the vessel's response to unseen regular and irregular wave excitations. In conclusion, the study shows that HDMDc-based ROMs are a viable data-driven alternative for system identification, demonstrating for the first time their generalization capability for a sea condition different from the training set, achieving high accuracy in reproducing vessel dynamics.
- oai:arXiv.org:2511.03482v1
- eess.SY
- cs.CE
- cs.LG
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Giorgio Palma, Ivan Santic, Andrea Serani, Lorenzo Minno, Matteo Diez
-
-
- Online Flow Time Minimization: Tight Bounds for Non-Preemptive Algorithms
- https://arxiv.org/abs/2511.03485
- arXiv:2511.03485v1 Announce Type: new
-Abstract: This paper studies the classical online scheduling problem of minimizing total flow time for $n$ jobs on $m$ identical machines. Prior work often cites the $\Omega(n)$ lower bound for non-preemptive algorithms to argue for the necessity of preemption or resource augmentation, which shows the trivial $O(n)$-competitive greedy algorithm is tight. However, this lower bound applies only to \emph{deterministic} algorithms in the \emph{single-machine} case, leaving several fundamental questions unanswered. Can randomness help in the non-preemptive setting, and what is the optimal online deterministic algorithm when $m \geq 2$? We resolve both questions. We present a polynomial-time randomized algorithm with competitive ratio $\Theta(\sqrt{n/m})$ and prove a matching randomized lower bound, settling the randomized non-preemptive setting for every $m$. This also improves the best-known offline approximation ratio from $O(\sqrt{n/m}\log(n/m))$ to $O(\sqrt{n/m})$. On the deterministic side, we present a non-preemptive algorithm with competitive ratio $O(n/m^{2}+\sqrt{n/m}\log m)$ and prove a nearly matching lower bound.
- Our framework also extends to the kill-and-restart model, where we reveal a sharp transition of deterministic algorithms: we design an asymptotically optimal algorithm with the competitive ratio $O(\sqrt{n/m})$ for $m\ge 2$, yet establish a strong $\Omega(n/\log n)$ lower bound for $m=1$. Moreover, we show that randomization provides no further advantage, as the lower bound coincides with that of the non-preemptive setting.
- While our main results assume prior knowledge of $n$, we also investigate the setting where $n$ is unknown. We show kill-and-restart is powerful enough to break the $O(n)$ barrier for $m \geq 2$ even without knowing $n$. Conversely, we prove randomization alone is insufficient, as no algorithm can achieve an $o(n)$ competitive ratio in this setting.
- oai:arXiv.org:2511.03485v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yutong Geng, Enze Sun, Zonghan Yang, Yuhao Zhang
-
-
- Federated Anonymous Blocklisting across Service Providers and its Application to Group Messaging
- https://arxiv.org/abs/2511.03486
- arXiv:2511.03486v1 Announce Type: new
-Abstract: Instant messaging has become one of the most used methods of communication online, which has attracted significant attention to its underlying cryptographic protocols and security guarantees. Techniques to increase privacy such as End-to-End Encryption and pseudonyms have been introduced. However, online spaces such as messaging groups still require moderation to prevent misbehaving users from participating in them, particularly in anonymous contexts. In Anonymous Blocklisting (AB) schemes, users must prove during authentication that none of their previous pseudonyms has been blocked, preventing misbehaving users from creating new pseudonyms. In this work we propose an alternative \textit{Federated Anonymous Blocklisting} (FAB) scheme in which the centralised Service Provider is replaced by small distributed Realms, each with its own blocklist. Realms can establish trust relationships between each other, such that when users authenticate to a realm, they must prove that they are not banned in any of its trusted realms. We provide an implementation of our proposed scheme; unlike existing AB constructions, the performance of ours does not depend on the current size of the blocklist, nor does it require processing new additions to the blocklist. We also demonstrate its applicability to real-world messaging groups by integrating our FAB scheme into the Messaging Layer Security protocol.
- oai:arXiv.org:2511.03486v1
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- David Soler, Carlos Dafonte, Manuel Fern\'andez-Veiga, Ana Fern\'andez Vilas, Francisco J. N\'ovoa
-
-
- NAP: Attention-Based Late Fusion for Automatic Sleep Staging
- https://arxiv.org/abs/2511.03488
- arXiv:2511.03488v1 Announce Type: new
-Abstract: Polysomnography signals are highly heterogeneous, varying in modality composition (e.g., EEG, EOG, ECG), channel availability (e.g., frontal, occipital EEG), and acquisition protocols across datasets and clinical sites. Most existing models that process polysomnography data rely on a fixed subset of modalities or channels and therefore neglect to fully exploit its inherently multimodal nature. We address this limitation by introducing NAP (Neural Aggregator of Predictions), an attention-based model which learns to combine multiple prediction streams using a tri-axial attention mechanism that captures temporal, spatial, and predictor-level dependencies. NAP is trained to adapt to different input dimensions. By aggregating outputs from frozen, pretrained single-channel models, NAP consistently outperforms individual predictors and simple ensembles, achieving state-of-the-art zero-shot generalization across multiple datasets. While demonstrated in the context of automated sleep staging from polysomnography, the proposed approach could be extended to other multimodal physiological applications.
- oai:arXiv.org:2511.03488v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Alvise Dei Rossi, Julia van der Meer, Markus H. Schmidt, Claudio L. A. Bassetti, Luigi Fiorillo, Francesca Faraci
-
-
- Analytical Queries for Unstructured Data
- https://arxiv.org/abs/2511.03489
- arXiv:2511.03489v1 Announce Type: new
-Abstract: Unstructured data, in the form of text, images, video, and audio, is produced at exponentially higher rates. In tandem, machine learning (ML) methods have become increasingly powerful at analyzing unstructured data. Modern ML methods can now detect objects in images, understand actions in videos, and even classify complex legal texts based on legal intent. Combined, these trends make it increasingly feasible for analysts and researchers to automatically understand the "real world." However, there are major challenges in deploying these techniques: 1) executing queries efficiently given the expense of ML methods, 2) expressing queries over bespoke forms of data, and 3) handling errors in ML methods.
- In this monograph, we discuss challenges and advances in data management systems for unstructured data using ML, with a particular focus on video analytics. Using ML to answer queries introduces new challenges. First, even turning user intent into queries can be challenging: it is not obvious how to express a query of the form "select instances of cars turning left." Second, ML models can be orders of magnitude more expensive compared to processing traditional structured data. Third, ML models and the methods to accelerate analytics with ML models can be error-prone.
- Recent work in the data management community has aimed to address all of these challenges. Users can now express queries via user-defined functions, opaquely through standard structured schemas, and even by providing examples. Given a query, recent work focuses on optimizing queries by approximating expensive "gold" methods with varying levels of guarantees. Finally, to handle errors in ML models, recent work has focused on applying outlier and drift detection to data analytics with ML.
- oai:arXiv.org:2511.03489v1
- cs.DB
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1561/1900000087
- Foundations and Trends in Databases (2025)
- Daniel Kang
-
-
- Randomized Rounding over Dynamic Programs
- https://arxiv.org/abs/2511.03490
- arXiv:2511.03490v1 Announce Type: new
-Abstract: We show that under mild assumptions for a problem whose solutions admit a dynamic programming-like recurrence relation, we can still find a solution under additional packing constraints, which need to be satisfied approximately. The number of additional constraints can be very large, for example, polynomial in the problem size. Technically, we reinterpret the dynamic programming subproblems and their solutions as a network design problem. Inspired by techniques from, for example, the Directed Steiner Tree problem, we construct a strong LP relaxation, on which we then apply randomized rounding. Our approximation guarantees on the packing constraints have roughly the form of a $(n^{\epsilon} \mathrm{polylog}\ n)$-approximation in time $n^{O(1/\epsilon)}$, for any $\epsilon > 0$. By setting $\epsilon=\log \log n/\log n$, we obtain a polylogarithmic approximation in quasi-polynomial time, or by setting $\epsilon$ as a constant, an $n^\epsilon$-approximation in polynomial time.
- While there are necessary assumptions on the form of the DP, it is general enough to capture many textbook dynamic programs from Shortest Path to Longest Common Subsequence. Our algorithm then implies that we can impose additional constraints on the solutions to these problems. This allows us to model various problems from the literature in approximation algorithms, many of which were not thought to be connected to dynamic programming. In fact, our result can even be applied indirectly to some problems that involve covering instead of packing constraints, for example, the Directed Steiner Tree problem, or those that do not directly follow a recurrence relation, for example, variants of the Matching problem.
- oai:arXiv.org:2511.03490v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Etienne Bamas, Shi Li, Lars Rohwedder
-
-
- Why Less is More (Sometimes): A Theory of Data Curation
- https://arxiv.org/abs/2511.03492
- arXiv:2511.03492v1 Announce Type: new
-Abstract: This paper introduces a theoretical framework to resolve a central paradox in modern machine learning: When is it better to use less data? This question has become critical as classical scaling laws suggesting ``more is more'' (Sun et al., 2025) are challenged by methods like LIMO (``less is more'') and s1 (Ye et al., 2025; Muennighoff et al., 2025), which achieve superior performance with small, aggressively curated datasets. Here, we study data curation strategies where an imperfect oracle selects the training examples according to their difficulty and correctness. Our results provide exact scaling law curves for test error under both label-agnostic and label-aware curation rules, revealing when and why keeping only a subset of data can improve generalization. In contrast to classical scaling laws, we show that under certain conditions, small curated datasets can outperform full datasets, and we provide analytical conditions for this by deriving precise phase transition curves tied to data size and quality. We validate these theoretical claims with empirical results on ImageNet, confirming our predictions about when curation improves accuracy and can even mitigate model collapse. Furthermore, our framework provides a principled explanation for the contradictory curation strategies recently observed in LLM mathematical reasoning.
- oai:arXiv.org:2511.03492v1
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Elvis Dohmatob, Mohammad Pezeshki, Reyhane Askari-Hemmat
-
-
- Data-driven Modeling of Grid-following Control in Grid-connected Converters
- https://arxiv.org/abs/2511.03494
- arXiv:2511.03494v1 Announce Type: new
-Abstract: As power systems evolve with the integration of renewable energy sources and the implementation of smart grid technologies, there is an increasing need for flexible and scalable modeling approaches capable of accurately capturing the complex dynamics of modern grids. To meet this need, various methods, such as the sparse identification of nonlinear dynamics and deep symbolic regression, have been developed to identify dynamical systems directly from data. In this study, we examine the application of a converter-based resource as a replacement for a traditional generator within a lossless transmission line linked to an infinite bus system. This setup is used to generate synthetic data in grid-following control mode, enabling the evaluation of these methods in effectively capturing system dynamics.
- oai:arXiv.org:2511.03494v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Amir Bahador Javadi, Philip Pong
-
-
- ROSBag MCP Server: Analyzing Robot Data with LLMs for Agentic Embodied AI Applications
- https://arxiv.org/abs/2511.03497
- arXiv:2511.03497v1 Announce Type: new
-Abstract: Agentic AI systems and Physical or Embodied AI systems have been two key research verticals at the forefront of Artificial Intelligence and Robotics, with Model Context Protocol (MCP) increasingly becoming a key component and enabler of agentic applications. However, the literature at the intersection of these verticals, i.e., Agentic Embodied AI, remains scarce. This paper introduces an MCP server for analyzing ROS and ROS 2 bags, allowing for analyzing, visualizing and processing robot data with natural language through LLMs and VLMs. We describe specific tooling built with robotics domain knowledge, with our initial release focused on mobile robotics and supporting natively the analysis of trajectories, laser scan data, transforms, or time series data. This is in addition to providing an interface to standard ROS 2 CLI tools ("ros2 bag list" or "ros2 bag info"), as well as the ability to filter bags with a subset of topics or trimmed in time. Coupled with the MCP server, we provide a lightweight UI that allows the benchmarking of the tooling with different LLMs, both proprietary (Anthropic, OpenAI) and open-source (through Groq). Our experimental results include the analysis of tool calling capabilities of eight different state-of-the-art LLM/VLM models, both proprietary and open-source, large and small. Our experiments indicate that there is a large divide in tool calling capabilities, with Kimi K2 and Claude Sonnet 4 demonstrating clearly superior performance. We also conclude that there are multiple factors affecting the success rates, from the tool description schema to the number of arguments, as well as the number of tools available to the models. The code is available with a permissive license at https://github.com/binabik-ai/mcp-rosbags.
- oai:arXiv.org:2511.03497v1
- cs.RO
- cs.AI
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Lei Fu, Sahar Salimpour, Leonardo Militano, Harry Edelman, Jorge Pe\~na Queralta, Giovanni Toffetti
-
-
- BanglaSTEM: A Parallel Corpus for Technical Domain Bangla-English Translation
- https://arxiv.org/abs/2511.03498
- arXiv:2511.03498v1 Announce Type: new
-Abstract: Large language models work well for technical problem solving in English but perform poorly when the same questions are asked in Bangla. A simple solution would be to translate Bangla questions into English first and then use these models. However, existing Bangla-English translation systems struggle with technical terms. They often mistranslate specialized vocabulary, which changes the meaning of the problem and leads to wrong answers. We present BanglaSTEM, a dataset of 5,000 carefully selected Bangla-English sentence pairs from STEM fields including computer science, mathematics, physics, chemistry, and biology. We generated over 12,000 translations using language models and then used human evaluators to select the highest quality pairs that preserve technical terminology correctly. We train a T5-based translation model on BanglaSTEM and test it on two tasks: generating code and solving math problems. Our results show significant improvements in translation accuracy for technical content, making it easier for Bangla speakers to use English-focused language models effectively. Both the BanglaSTEM dataset and the trained translation model are publicly released at https://huggingface.co/reyazul/BanglaSTEM-T5.
- oai:arXiv.org:2511.03498v1
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Kazi Reyazul Hasan, Mubasshira Musarrat, A. B. M. Alim Al Islam, Muhammad Abdullah Adnan
-
-
- A Theoretical Framework for Environmental Similarity and Vessel Mobility as Coupled Predictors of Marine Invasive Species Pathways
- https://arxiv.org/abs/2511.03499
- arXiv:2511.03499v1 Announce Type: new
-Abstract: Marine invasive species spread through global shipping and generate substantial ecological and economic impacts. Traditional risk assessments require detailed records of ballast water and traffic patterns, which are often incomplete, limiting global coverage. This work advances a theoretical framework that quantifies invasion risk by combining environmental similarity across ports with observed and forecasted maritime mobility. Climate-based feature representations characterize each port's marine conditions, while mobility networks derived from Automatic Identification System data capture vessel flows and potential transfer pathways. Clustering and metric learning reveal climate analogues and enable the estimation of species survival likelihood along shipping routes. A temporal link prediction model captures how traffic patterns may change under shifting environmental conditions. The resulting fusion of environmental similarity and predicted mobility provides exposure estimates at the port and voyage levels, supporting targeted monitoring, routing adjustments, and management interventions.
- oai:arXiv.org:2511.03499v1
- cs.CE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Gabriel Spadon, Vaishnav Vaidheeswaran, Claudio DiBacco
-
-
- HaluMem: Evaluating Hallucinations in Memory Systems of Agents
- https://arxiv.org/abs/2511.03506
- arXiv:2511.03506v1 Announce Type: new
-Abstract: Memory systems are key components that enable AI systems such as LLMs and AI agents to achieve long-term learning and sustained interaction. However, during memory storage and retrieval, these systems frequently exhibit memory hallucinations, including fabrication, errors, conflicts, and omissions. Existing evaluations of memory hallucinations are primarily end-to-end question answering, which makes it difficult to localize the operational stage within the memory system where hallucinations arise. To address this, we introduce the Hallucination in Memory Benchmark (HaluMem), the first operation-level hallucination evaluation benchmark tailored to memory systems. HaluMem defines three evaluation tasks (memory extraction, memory updating, and memory question answering) to comprehensively reveal hallucination behaviors across different operational stages of interaction. To support evaluation, we construct user-centric, multi-turn human-AI interaction datasets, HaluMem-Medium and HaluMem-Long. Both include about 15k memory points and 3.5k multi-type questions. The average dialogue length per user reaches 1.5k and 2.6k turns, with context lengths exceeding 1M tokens, enabling evaluation of hallucinations across different context scales and task complexities. Empirical studies based on HaluMem show that existing memory systems tend to generate and accumulate hallucinations during the extraction and updating stages, which subsequently propagate errors to the question answering stage. Future research should focus on developing interpretable and constrained memory operation mechanisms that systematically suppress hallucinations and improve memory reliability.
- oai:arXiv.org:2511.03506v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ding Chen, Simin Niu, Kehang Li, Peng Liu, Xiangping Zheng, Bo Tang, Xinchi Li, Feiyu Xiong, Zhiyu Li
-
-
- One Battle After Another: Probing LLMs' Limits on Multi-Turn Instruction Following with a Benchmark Evolving Framework
- https://arxiv.org/abs/2511.03508
- arXiv:2511.03508v1 Announce Type: new
-Abstract: Understanding how well large language models can follow users' instructions throughout a dialogue spanning multiple topics is of great importance for data-intensive conversational applications. Existing benchmarks are often limited to a fixed number of turns, making them susceptible to saturation and failing to account for the user's interactive experience. In this work, we propose an extensible framework for assessing multi-turn instruction-following ability. At its core, our framework decouples linguistic surface forms from user intent simulation through a three-layer mechanism that tracks constraints, instructions, and topics. This framework mimics User-LLM interaction by enabling the dynamic construction of benchmarks with state changes and tracebacks, terminating a conversation only when the model exhausts a simulated user's patience. We define a suite of metrics capturing the quality of the interaction process. Using this framework, we construct EvolIF, an evolving instruction-following benchmark incorporating nine distinct constraint types. Our results indicate that GPT-5 exhibits superior instruction-following performance. It sustains an average of 18.54 conversational turns and demonstrates 70.31% robustness, outperforming Gemini-2.5-Pro by a significant margin of 11.41%, while other models lag far behind. All of the data and code will be made publicly available online.
- oai:arXiv.org:2511.03508v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qi Jia, Kaiwei Zhang, Xiujie Song, Ye Shen, Xiangyang Zhu, Guangtao Zhai
-
-
- U2F: Encouraging SWE-Agent to Seize Novelty without Losing Feasibility
- https://arxiv.org/abs/2511.03517
- arXiv:2511.03517v1 Announce Type: new
-Abstract: Large language models (LLMs) have shown strong capabilities in software engineering tasks, yet most existing LLM-based SWE-Agents mainly tackle well-defined problems using conventional methods, often overlooking alternative or innovative solutions beyond their predefined frameworks. This limitation is evident in open-world software environments, where emerging challenges transcend established paradigms.
- We propose U2F (Unknown Unknowns to Functional solutions), a cognitive-inspired, uncertainty-embracing multi-agent framework that systematically surfaces "Unknown Unknowns" - novel solution pathways absent from initial formulations but holding innovative potential. U2F consists of two key components: (1) a Discovery-Exploration-Integration agent system for uncovering and synthesizing potential solutions, and (2) cognitive enhancement mechanisms across three dimensions: cross-domain analogical reasoning, reverse thinking, and external validation, which strategically reframe and extend conventional solution boundaries.
- Applied to 218 real-world software enabler stories curated from authentic engineering tasks, U2F achieved notable improvements: human experts reported a 14 percent increase in overall novelty, 51 percent improvement in semantic novelty, and stable feasibility (4.02/5.0), corroborated by an LLM-based evaluator. These results highlight the potential of embracing uncertainty as a catalyst for innovation in software engineering.
- oai:arXiv.org:2511.03517v1
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- Wencheng Ye, Yan Liu
-
-
- Model order reduction via Lie groups
- https://arxiv.org/abs/2511.03520
- arXiv:2511.03520v1 Announce Type: new
-Abstract: Lie groups and their actions are ubiquitous in the description of physical systems, and we explore implications in the setting of model order reduction (MOR). We present a novel framework of MOR via Lie groups, called MORLie, in which high-dimensional dynamical systems on manifolds are approximated by low-dimensional dynamical systems on Lie groups. In comparison to other Lie group methods we are able to attack non-equivariant dynamics, which are frequent in practical applications, and we provide new non-intrusive MOR methods based on the presented geometric formulation. We also highlight numerically that MORLie has a lower error bound than the Kolmogorov $N$-width, which limits linear-subspace methods. The method is applied to various examples: 1. MOR of a simplified deforming body modeled by noisy point-cloud data following a shearing motion, where MORLie outperforms a naive POD approach in terms of accuracy and dimensionality reduction. 2. Reconstructing liver motion during respiration with data from edge detection in ultrasound scans, where MORLie reaches performance approaching the state of the art, while reducing the training time from hours on a computing cluster to minutes on a mobile workstation. 3. An analytic example showing that the method of freezing is analytically recovered as a special case, showing the generality of the geometric framework.
- oai:arXiv.org:2511.03520v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yannik P. Wotte, Patrick Buchfink, Silke Glas, Federico Califano, Stefano Stramigioli
-
-
- Engineering Algorithms for $\ell$-Isolated Maximal Clique Enumeration
- https://arxiv.org/abs/2511.03525
- arXiv:2511.03525v1 Announce Type: new
-Abstract: Maximal cliques play a fundamental role in numerous application domains, where their enumeration can prove extremely useful. Yet their sheer number, even in sparse real-world graphs, can make them impractical to exploit effectively. To address this issue, one approach is to enumerate $\ell$-isolated maximal cliques, whose vertices have (on average) fewer than $\ell$ edges toward the rest of the graph. By tuning parameter $\ell$, the degree of isolation can be controlled, and cliques that are overly connected to the outside are filtered out. Building on Tomita et al.'s very practical recursive algorithm for maximal clique enumeration, we propose four pruning heuristics, applicable individually or in combination, that discard recursive search branches that are guaranteed not to yield $\ell$-isolated maximal cliques. Besides proving correctness, we characterize both the pruning power and the computational cost of these heuristics, and we conduct an extensive experimental study comparing our methods with Tomita's baseline and with a state-of-the-art approach. Results show that two of our heuristics offer substantial efficiency improvements, especially on real-world graphs with social network properties.
- oai:arXiv.org:2511.03525v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Marco D'Elia, Irene Finocchi, Maurizio Patrignani
-
-
- Learning Without Critics? Revisiting GRPO in Classical Reinforcement Learning Environments
- https://arxiv.org/abs/2511.03527
- arXiv:2511.03527v1 Announce Type: new
-Abstract: Group Relative Policy Optimization (GRPO) has emerged as a scalable alternative to Proximal Policy Optimization (PPO) by eliminating the learned critic and instead estimating advantages through group-relative comparisons of trajectories. This simplification raises fundamental questions about the necessity of learned baselines in policy-gradient methods. We present the first systematic study of GRPO in classical single-task reinforcement learning environments, spanning discrete and continuous control tasks. Through controlled ablations isolating baselines, discounting, and group sampling, we reveal three key findings: (1) learned critics remain essential for long-horizon tasks: all critic-free baselines underperform PPO except in short-horizon environments like CartPole where episodic returns can be effective; (2) GRPO benefits from high discount factors (gamma = 0.99) except in HalfCheetah, where lack of early termination favors moderate discounting (gamma = 0.9); (3) smaller group sizes outperform larger ones, suggesting limitations in batch-based grouping strategies that mix unrelated episodes. These results reveal both the limitations of critic-free methods in classical control and the specific conditions where they remain viable alternatives to learned value functions.
- oai:arXiv.org:2511.03527v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Bryan L. M. de Oliveira, Felipe V. Frujeri, Marcos P. C. M. Queiroz, Luana G. B. Martins, Telma W. de L. Soares, Luckeciano C. Melo
-
-
- Byzantine-Robust Federated Learning with Learnable Aggregation Weights
- https://arxiv.org/abs/2511.03529
- arXiv:2511.03529v1 Announce Type: new
-Abstract: Federated Learning (FL) enables clients to collaboratively train a global model without sharing their private data. However, the presence of malicious (Byzantine) clients poses significant challenges to the robustness of FL, particularly when data distributions across clients are heterogeneous. In this paper, we propose a novel Byzantine-robust FL optimization problem that incorporates adaptive weighting into the aggregation process. Unlike conventional approaches, our formulation treats aggregation weights as learnable parameters, jointly optimizing them alongside the global model parameters. To solve this optimization problem, we develop an alternating minimization algorithm with strong convergence guarantees under adversarial attack. We analyze the Byzantine resilience of the proposed objective. We evaluate the performance of our algorithm against state-of-the-art Byzantine-robust FL approaches across various datasets and attack scenarios. Experimental results demonstrate that our method consistently outperforms existing approaches, particularly in settings with highly heterogeneous data and a large proportion of malicious clients.
- oai:arXiv.org:2511.03529v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Javad Parsa, Amir Hossein Daghestani, Andr\'e M. H. Teixeira, Mikael Johansson
-
-
- Efficient Neural Networks with Discrete Cosine Transform Activations
- https://arxiv.org/abs/2511.03531
- arXiv:2511.03531v1 Announce Type: new
-Abstract: In this paper, we extend our previous work on the Expressive Neural Network (ENN), a multilayer perceptron with adaptive activation functions parametrized using the Discrete Cosine Transform (DCT). Building upon previous work that demonstrated the strong expressiveness of ENNs with compact architectures, we now emphasize their efficiency, interpretability and pruning capabilities. The DCT-based parameterization provides a structured and decorrelated representation that reveals the functional role of each neuron and allows direct identification of redundant components. Leveraging this property, we propose an efficient pruning strategy that removes unnecessary DCT coefficients with negligible or no loss in performance. Experimental results across classification and implicit neural representation tasks confirm that ENNs achieve state-of-the-art accuracy while maintaining a low number of parameters. Furthermore, up to 40% of the activation coefficients can be safely pruned, thanks to the orthogonality and bounded nature of the DCT basis. Overall, these findings demonstrate that the ENN framework offers a principled integration of signal processing concepts into neural network design, achieving a balanced trade-off between expressiveness, compactness, and interpretability.
- oai:arXiv.org:2511.03531v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Marc Martinez-Gost, Sara Pepe, Ana P\'erez-Neira, Miguel \'Angel Lagunas
-
-
- Investigating the Impact of Isolation on Synchronized Benchmarks
- https://arxiv.org/abs/2511.03533
- arXiv:2511.03533v1 Announce Type: new
-Abstract: Benchmarking in cloud environments suffers from performance variability from multi-tenant resource contention. Duet benchmarking mitigates this by running two workload versions concurrently on the same VM, exposing them to identical external interference. However, intra-VM contention between synchronized workloads necessitates additional isolation mechanisms.
- This work evaluates three such strategies: cgroups and CPU pinning, Docker containers, and Firecracker MicroVMs. We compare all strategies with an unisolated baseline experiment by running benchmarks with a duet setup alongside a noise generator. This noise generator "steals" compute resources to degrade performance measurements.
- All experiments showed different latency distributions under the effects of noise generation, but results show that process isolation generally lowered false positives, except for our experiments with Docker containers. Even though Docker containers rely internally on cgroups and CPU pinning, they were more susceptible to performance degradation due to noise influence. Therefore, we recommend using process isolation for synchronized workloads, with the exception of Docker containers.
- oai:arXiv.org:2511.03533v1
- cs.DC
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Nils Japke, Furat Hamdan, Diana Baumann, David Bermbach
-
-
- PnPSelect: Plug-and-play IoT Device Selection Using Ultra-wideband Signals
- https://arxiv.org/abs/2511.03534
- arXiv:2511.03534v1 Announce Type: new
-Abstract: In recent years, the number of Internet of Things (IoT) devices in smart homes has rapidly increased. A key challenge affecting user experience is how to enable users to efficiently and intuitively select the devices they wish to control. This paper proposes PnPSelect, a plug-and-play IoT device selection solution utilizing Ultra-wideband (UWB) technology on commercial devices. Unlike previous works, PnPSelect does not require the installation of dedicated hardware on each IoT device, thereby reducing deployment costs and complexities, and achieving true plug-and-play functionality. To enable intuitive device selection, we introduce a pointing direction estimation method that utilizes UWB readings from a single anchor to infer the user pointing direction. Additionally, we propose a lightweight device localization method that allows users to register new IoT devices by simply pointing at them from two distinct positions, eliminating the need for manual measurements. We implement PnPSelect on commercial smartphones and smartwatches and conduct extensive evaluations in both controlled laboratory settings and real-world environments. Our results demonstrate high accuracy, robustness, and adaptability, making PnPSelect a practical and scalable solution for next-generation smart home interactions.
- oai:arXiv.org:2511.03534v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Zhaoxin Chang, Fusang Zhang, Jie Xiong, Ziyu Li, Badii Jouaber, Daqing Zhang
-
-
- Security and Privacy Management of IoT Using Quantum Computing
- https://arxiv.org/abs/2511.03538
- arXiv:2511.03538v1 Announce Type: new
-Abstract: The convergence of the Internet of Things (IoT) and quantum computing is redefining the security paradigm of interconnected digital systems. Classical cryptographic algorithms such as RSA, Elliptic Curve Cryptography (ECC), and Advanced Encryption Standard (AES) have long provided the foundation for securing IoT communication. However, the emergence of quantum algorithms such as Shor's and Grover's threatens to render these techniques vulnerable, necessitating the development of quantum-resilient alternatives. This chapter examines the implications of quantum computing for IoT security and explores strategies for building cryptographically robust systems in the post-quantum era. It presents an overview of Post-Quantum Cryptographic (PQC) families, including lattice-based, code-based, hash-based, and multivariate approaches, analyzing their potential for deployment in resource-constrained IoT environments. In addition, quantum-based methods such as Quantum Key Distribution (QKD) and Quantum Random Number Generators (QRNGs) are discussed for their ability to enhance confidentiality and privacy through physics-based security guarantees. The chapter also highlights issues of privacy management, regulatory compliance, and standardization, emphasizing the need for collaborative efforts across academia, industry, and governance. Overall, it provides a comprehensive perspective on securing IoT ecosystems against quantum threats and ensuring resilience in the next generation of intelligent networks.
- oai:arXiv.org:2511.03538v1
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jaydip Sen
-
-
- SOLVE-Med: Specialized Orchestration for Leading Vertical Experts across Medical Specialties
- https://arxiv.org/abs/2511.03542
- arXiv:2511.03542v1 Announce Type: new
-Abstract: Medical question answering systems face deployment challenges including hallucinations, bias, computational demands, privacy concerns, and the need for specialized expertise across diverse domains. Here, we present SOLVE-Med, a multi-agent architecture combining domain-specialized small language models for complex medical queries. The system employs a Router Agent for dynamic specialist selection, ten specialized models (1B parameters each) fine-tuned on specific medical domains, and an Orchestrator Agent that synthesizes responses. Evaluated on Italian medical forum data across ten specialties, SOLVE-Med achieves superior performance with ROUGE-1 of 0.301 and BERTScore F1 of 0.697, outperforming standalone models up to 14B parameters while enabling local deployment. Our code is publicly available on GitHub: https://github.com/PRAISELab-PicusLab/SOLVE-Med.
- oai:arXiv.org:2511.03542v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- 10.3233/FAIA251438
- 28th European Conference on Artificial Intelligence, 25-30 October 2025, Bologna, Italy
- Roberta Di Marino, Giovanni Dioguardi, Antonio Romano, Giuseppe Riccio, Mariano Barone, Marco Postiglione, Flora Amato, Vincenzo Moscato
-
-
- Explaining Decisions in ML Models: a Parameterized Complexity Analysis (Part I)
- https://arxiv.org/abs/2511.03545
- arXiv:2511.03545v1 Announce Type: new
-Abstract: This paper presents a comprehensive theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms. We address two principal types of explanation problems: abductive and contrastive, both in their local and global variants. Our analysis encompasses diverse ML models, including Decision Trees, Decision Sets, Decision Lists, Boolean Circuits, and ensembles thereof, each offering unique explanatory challenges. This research fills a significant gap in explainable AI (XAI) by providing a foundational understanding of the complexities of generating explanations for these models. This work provides insights vital for further research in the domain of XAI, contributing to the broader discourse on the necessity of transparency and accountability in AI systems.
- oai:arXiv.org:2511.03545v1
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sebastian Ordyniak, Giacomo Paesani, Mateusz Rychlicki, Stefan Szeider
-
-
- Bearing Syntactic Fruit with Stack-Augmented Neural Networks
- https://arxiv.org/abs/2511.03547
- arXiv:2511.03547v1 Announce Type: new
-Abstract: Any finite set of training data is consistent with an infinite number of hypothetical algorithms that could have generated it. Studies have shown that when human children learn language, they consistently favor hypotheses based on hierarchical syntactic rules without ever encountering disambiguating examples. A recent line of work has inquired as to whether common neural network architectures share this bias, finding that they do so only under special conditions: when syntactically supervised, when pre-trained on massive corpora, or when trained long past convergence. In this paper, we demonstrate, for the first time, neural network architectures that are able to generalize in human-like fashion without any of the aforementioned requirements: stack-augmented neural networks. We test three base architectures (transformer, simple RNN, LSTM) augmented with two styles of stack: the superposition stack of Joulin & Mikolov (2015) and a nondeterministic generalization of it proposed by DuSell & Chiang (2023). We find that transformers with nondeterministic stacks generalize best out of these architectures on a classical question formation task. We also propose a modification to the stack RNN architecture that improves hierarchical generalization. These results suggest that stack-augmented neural networks may be more accurate models of human language acquisition than standard architectures, serving as useful objects of psycholinguistic study. Our code is publicly available.
- oai:arXiv.org:2511.03547v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Brian DuSell, Ryan Cotterell
-
-
- Flat Minima and Generalization: Insights from Stochastic Convex Optimization
- https://arxiv.org/abs/2511.03548
- arXiv:2511.03548v1 Announce Type: new
-Abstract: Understanding the generalization behavior of learning algorithms is a central goal of learning theory. A recently emerging explanation is that learning algorithms are successful in practice because they converge to flat minima, which have been consistently associated with improved generalization performance. In this work, we study the link between flat minima and generalization in the canonical setting of stochastic convex optimization with a non-negative, $\beta$-smooth objective. Our first finding is that, even in this fundamental and well-studied setting, flat empirical minima may incur trivial $\Omega(1)$ population risk while sharp minima generalize optimally. Then, we show that this poor generalization behavior extends to two natural ''sharpness-aware'' algorithms originally proposed by Foret et al. (2021), designed to bias optimization toward flat solutions: Sharpness-Aware Gradient Descent (SA-GD) and Sharpness-Aware Minimization (SAM). For SA-GD, which performs gradient steps on the maximal loss in a predefined neighborhood, we prove that while it successfully converges to a flat minimum at a fast rate, the population risk of the solution can still be as large as $\Omega(1)$, indicating that even flat minima found algorithmically using a sharpness-aware gradient method might generalize poorly. For SAM, a computationally efficient approximation of SA-GD based on normalized ascent steps, we show that although it minimizes the empirical loss, it may converge to a sharp minimum and also incur population risk $\Omega(1)$. Finally, we establish population risk upper bounds for both SA-GD and SAM using algorithmic stability techniques.
- oai:arXiv.org:2511.03548v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Matan Schliserman, Shira Vansover-Hager, Tomer Koren
-
-
- Uncovering Code Insights: Leveraging GitHub Artifacts for Deeper Code Understanding
- https://arxiv.org/abs/2511.03549
- arXiv:2511.03549v1 Announce Type: new
-Abstract: Understanding the purpose of source code is a critical task in software maintenance, onboarding, and modernization. While large language models (LLMs) have shown promise in generating code explanations, they often lack grounding in the broader software engineering context. We propose a novel approach that leverages natural language artifacts from GitHub -- such as pull request descriptions, issue descriptions and discussions, and commit messages -- to enhance LLM-based code understanding. Our system consists of three components: one that extracts and structures relevant GitHub context, another that uses this context to generate high-level explanations of the code's purpose, and a third that validates the explanation. We implemented this as a standalone tool, as well as a server within the Model Context Protocol (MCP), enabling integration with other AI-assisted development tools. Our main use case is that of enhancing a standard LLM-based code explanation with code insights that our system generates. To evaluate explanations' quality, we conducted a small-scale user study with developers of several open projects, as well as developers of proprietary projects. Our user study indicates that when insights are generated, they are often helpful and nontrivial, and free from hallucinations.
- oai:arXiv.org:2511.03549v1
- cs.SE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ziv Nevo, Orna Raz, Karen Yorav
-
-
- Indicating Robot Vision Capabilities with Augmented Reality
- https://arxiv.org/abs/2511.03550
- arXiv:2511.03550v1 Announce Type: new
-Abstract: Research indicates that humans can mistakenly assume that robots and humans have the same field of view (FoV), possessing an inaccurate mental model of robots. This misperception may lead to failures during human-robot collaboration tasks where robots might be asked to complete impossible tasks about out-of-view objects. The issue is more severe when robots do not have a chance to scan the scene to update their world model while focusing on assigned tasks. To help align humans' mental models of robots' vision capabilities, we propose four FoV indicators in augmented reality (AR) and conducted a human-subjects experiment (N=41) to evaluate them in terms of accuracy, confidence, task efficiency, and workload. These indicators span a spectrum from egocentric (robot's eye and head space) to allocentric (task space). Results showed that the allocentric blocks at the task space had the highest accuracy with a delay in interpreting the robot's FoV. The egocentric indicator of deeper eye sockets, possible for physical alteration, also increased accuracy. In all indicators, participants' confidence was high while cognitive load remained low. Finally, we contribute six guidelines for practitioners to apply our AR indicators or physical alterations to align humans' mental models with robots' vision capabilities.
- oai:arXiv.org:2511.03550v1
- cs.RO
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hong Wang, Ridhima Phatak, James Ocampo, Zhao Han
-
-
- MultiZebraLogic: A Multilingual Logical Reasoning Benchmark
- https://arxiv.org/abs/2511.03553
- arXiv:2511.03553v1 Announce Type: new
-Abstract: Measuring the full abilities of large language models (LLMs) requires benchmarks representing multiple tasks. We aim to create large, high-quality datasets for comparison of logical reasoning skills across several languages and of suitable difficulty for LLMs of various reasoning ability. We explore multiple ways of increasing difficulty. We generate zebra puzzles in multiple languages, themes, sizes and including 14 different clue types and 8 red herring types (uninformative clues). We find puzzle sizes 2x3 and 4x5 are sufficiently challenging for GPT-4o mini (a non-reasoning model) and o3-mini (a reasoning model), respectively. Including 5 red herrings decreases o3-mini puzzle-level accuracy on 4x5 puzzles by 15$\pm$7 %. Scores of o3-mini on 4x5 puzzles are not significantly affected by use of English vs. Danish or the common houses theme vs. the country-specific smoerrebroed theme. We find no correlation between difficulty and the selected clue types. Datasets of 128+1024 puzzles are published as MultiZebraLogic in each of nine Germanic languages for sizes 2x3 and 4x5. We publish code for puzzle generation, designed for adaptability to more languages and themes.
- oai:arXiv.org:2511.03553v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Sofie Helene Bruun, Dan Saattrup Smart
-
-
- AILA--First Experiments with Localist Language Models
- https://arxiv.org/abs/2511.03559
- arXiv:2511.03559v1 Announce Type: new
-Abstract: This paper presents the first empirical demonstration of controllable locality in transformer language models, a novel architectural framework that enables continuous control over the degree of representation localization through a tunable locality dial parameter. Unlike traditional language models that rely exclusively on distributed representations, our approach allows dynamic interpolation between highly interpretable localist encodings and efficient distributed representations without requiring model retraining. We conducted experiments on the WikiText corpus using a two-layer transformer architecture, systematically varying the locality parameter {\lambda} across the full spectrum from 1.0 (fully localist) to 0.0 (fully distributed). Our results demonstrate that localist configurations achieve dramatically lower attention entropy, with {\lambda} = 1.0 yielding 5.36 bits compared to 7.18 bits at {\lambda} = 0.0, while maintaining substantially higher pointer fidelity scores reflecting stronger alignment with rule-specified targets. Prediction experiments reveal that intermediate locality values optimize the tradeoff between interpretability and performance, with {\lambda} = 0.6 achieving test perplexity of 4.65 and accuracy of 84.7%. These findings establish that localist language models provide a practical framework for applications in regulated domains requiring both transparency and capability, offering precise mathematical control over the interpretability-performance spectrum through explicit penalty thresholds and information-theoretic design principles.
- oai:arXiv.org:2511.03559v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Joachim Diederich
-
-
- ASVRI-Legal: Fine-Tuning LLMs with Retrieval Augmented Generation for Enhanced Legal Regulation
- https://arxiv.org/abs/2511.03563
- arXiv:2511.03563v1 Announce Type: new
-Abstract: In this study, we explore the fine-tuning of Large Language Models (LLMs) to better support policymakers in their crucial work of understanding, analyzing, and crafting legal regulations. To equip the model with a deep understanding of legal texts, we curated a supervised dataset tailored to the specific needs of the legal domain. Additionally, we integrated the Retrieval-Augmented Generation (RAG) method, enabling the LLM to access and incorporate up-to-date legal knowledge from external sources. This combination of fine-tuning and RAG-based augmentation results in a tool that not only processes legal information but actively assists policymakers in interpreting regulations and drafting new ones that align with current needs. The results demonstrate that this approach can significantly enhance the effectiveness of legal research and regulation development, offering a valuable resource in the ever-evolving field of law.
- oai:arXiv.org:2511.03563v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- One Octadion, Bondan Sapta Prakoso, Nanang Yudi Setiawan, Novanto Yudistira
-
-
- Imitation Learning in the Deep Learning Era: A Novel Taxonomy and Recent Advances
- https://arxiv.org/abs/2511.03565
- arXiv:2511.03565v1 Announce Type: new
-Abstract: Imitation learning (IL) enables agents to acquire skills by observing and replicating the behavior of one or multiple experts. In recent years, advances in deep learning have significantly expanded the capabilities and scalability of imitation learning across a range of domains, where expert data can range from full state-action trajectories to partial observations or unlabeled sequences. Alongside this growth, novel approaches have emerged, with new methodologies being developed to address longstanding challenges such as generalization, covariate shift, and demonstration quality. In this survey, we review the latest advances in imitation learning research, highlighting recent trends, methodological innovations, and practical applications. We propose a novel taxonomy that is distinct from existing categorizations to better reflect the current state of the IL research stratum and its trends. Throughout the survey, we critically examine the strengths, limitations, and evaluation practices of representative works, and we outline key challenges and open directions for future research.
- oai:arXiv.org:2511.03565v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Iason Chrysomallis, Georgios Chalkiadakis
-
-
- TabGemma: Text-Based Tabular ICL via LLM using Continued Pretraining and Retrieval
- https://arxiv.org/abs/2511.03570
- arXiv:2511.03570v1 Announce Type: new
-Abstract: We study LLMs for tabular prediction with mixed text, numeric, and categorical fields. We introduce TabGemma, a schema-agnostic in-context learner that treats rows as sequences and tackles two practical hurdles when adapting pretrained LLMs for tabular predictions: unstable numeric tokenization and limited context size. We propose to canonicalize numbers via signed scientific notation and continue pretraining of a 12B Gemma 3 model with a target imputation objective using a large-scale real world dataset. For inference, we use a compact n-gram-based retrieval to select informative exemplars that fit within a 128k-token window.
- On semantically rich benchmarks, TabGemma establishes a new state of the art on classification across low- and high-data regimes and improves monotonically with more context rows. For regression, it is competitive at small sample sizes but trails conventional approaches as data grows. Our results show that LLMs can be effective tabular in-context learners on highly semantic tasks when paired with dedicated numeric handling and context retrieval, while motivating further advances in numeric modeling and long-context scaling.
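The signed-scientific-notation canonicalization described above can be made concrete with a minimal sketch; the function name, mantissa width, and exact layout below are assumptions, since the abstract does not pin down the format TabGemma uses:

```python
def canonicalize_number(x: float, mantissa_digits: int = 4) -> str:
    """Render a number in signed scientific notation, e.g. -0.00123 -> '-1.230e-03'.

    A fixed-width layout like this keeps tokenization stable across magnitudes,
    which is the motivation given in the abstract; the exact format used by
    TabGemma is not specified there, so this layout is an assumption.
    """
    if x == 0:
        return f"+{0:.{mantissa_digits - 1}f}e+00"
    return f"{x:+.{mantissa_digits - 1}e}"

# Every value occupies the same token-friendly shape:
print(canonicalize_number(12345.678))   # +1.235e+04
print(canonicalize_number(-0.00123))    # -1.230e-03
```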
- oai:arXiv.org:2511.03570v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Günther Schindler, Maximilian Schambach, Michael Medek, Sam Thelin
-
-
- OneOcc: Semantic Occupancy Prediction for Legged Robots with a Single Panoramic Camera
- https://arxiv.org/abs/2511.03571
- arXiv:2511.03571v1 Announce Type: new
-Abstract: Robust 3D semantic occupancy is crucial for legged/humanoid robots, yet most semantic scene completion (SSC) systems target wheeled platforms with forward-facing sensors. We present OneOcc, a vision-only panoramic SSC framework designed for gait-introduced body jitter and 360° continuity. OneOcc combines: (i) Dual-Projection fusion (DP-ER) to exploit the annular panorama and its equirectangular unfolding, preserving 360° continuity and grid alignment; (ii) Bi-Grid Voxelization (BGV) to reason in Cartesian and cylindrical-polar spaces, reducing discretization bias and sharpening free/occupied boundaries; (iii) a lightweight decoder with Hierarchical AMoE-3D for dynamic multi-scale fusion and better long-range/occlusion reasoning; and (iv) plug-and-play Gait Displacement Compensation (GDC) learning feature-level motion correction without extra sensors. We also release two panoramic occupancy benchmarks: QuadOcc (real quadruped, first-person 360°) and Human360Occ (H3O) (CARLA human-ego 360° with RGB, Depth, semantic occupancy; standardized within-/cross-city splits). OneOcc sets new state-of-the-art (SOTA): on QuadOcc it beats strong vision baselines and popular LiDAR ones; on H3O it gains +3.83 mIoU (within-city) and +8.08 (cross-city). Modules are lightweight, enabling deployable full-surround perception for legged/humanoid robots. Datasets and code will be publicly available at https://github.com/MasterHow/OneOcc.
- oai:arXiv.org:2511.03571v1
- cs.RO
- cs.CV
- eess.IV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Shi, Ze Wang, Shangwei Guo, Mengfei Duan, Song Wang, Teng Chen, Kailun Yang, Lin Wang, Kaiwei Wang
-
-
- Multi-User Personalisation in Human-Robot Interaction: Using Quantitative Bipolar Argumentation Frameworks for Preferences Conflict Resolution
- https://arxiv.org/abs/2511.03576
- arXiv:2511.03576v1 Announce Type: new
-Abstract: While personalisation in Human-Robot Interaction (HRI) has advanced significantly, most existing approaches focus on single-user adaptation, overlooking scenarios involving multiple stakeholders with potentially conflicting preferences. To address this, we propose the Multi-User Preferences Quantitative Bipolar Argumentation Framework (MUP-QBAF), a novel multi-user personalisation framework based on Quantitative Bipolar Argumentation Frameworks (QBAFs) that explicitly models and resolves multi-user preference conflicts. Unlike prior work in Argumentation Frameworks, which typically assumes static inputs, our approach is tailored to robotics: it incorporates both users' arguments and the robot's dynamic observations of the environment, allowing the system to adapt over time and respond to changing contexts. Preferences, both positive and negative, are represented as arguments whose strength is recalculated iteratively based on new information. The framework's properties and capabilities are presented and validated through a realistic case study, where an assistive robot mediates between the conflicting preferences of a caregiver and a care recipient during a frailty assessment task. This evaluation further includes a sensitivity analysis of argument base scores, demonstrating how preference outcomes can be shaped by user input and contextual observations. By offering a transparent, structured, and context-sensitive approach to resolving competing user preferences, this work advances the field of multi-user HRI. It provides a principled alternative to data-driven methods, enabling robots to navigate conflicts in real-world environments.
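The iterative strength recalculation over arguments can be illustrated with one round of a gradual-semantics update; DF-QuAD is used here purely as a familiar example and may differ from the aggregation MUP-QBAF actually employs:

```python
def dfquad_strength(base: float, attackers: list, supporters: list) -> float:
    """One DF-QuAD update: aggregate attacker/supporter strengths with the
    probabilistic sum, then move the base score toward 0 or 1 accordingly.
    (Illustrative choice of semantics, not necessarily the paper's.)"""
    def prob_sum(xs):
        acc = 1.0
        for x in xs:
            acc *= 1.0 - x
        return 1.0 - acc
    va, vs = prob_sum(attackers), prob_sum(supporters)
    if va >= vs:
        return base - base * (va - vs)
    return base + (1.0 - base) * (vs - va)

# A preference argument (base score 0.5) supported by one robot observation
# of strength 0.5 and unattacked rises to 0.75; a new observation would
# simply trigger another round of updates.
print(dfquad_strength(0.5, [], [0.5]))  # 0.75
```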
- oai:arXiv.org:2511.03576v1
- cs.RO
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aniol Civit, Antonio Andriella, Carles Sierra, Guillem Alenyà
-
-
- Learning Under Laws: A Constraint-Projected Neural PDE Solver that Eliminates Hallucinations
- https://arxiv.org/abs/2511.03578
- arXiv:2511.03578v1 Announce Type: new
-Abstract: Neural networks can approximate solutions to partial differential equations, but they often break the very laws they are meant to model-creating mass from nowhere, drifting shocks, or violating conservation and entropy. We address this by training within the laws of physics rather than beside them. Our framework, called Constraint-Projected Learning (CPL), keeps every update physically admissible by projecting network outputs onto the intersection of constraint sets defined by conservation, Rankine-Hugoniot balance, entropy, and positivity. The projection is differentiable and adds only about 10% computational overhead, making it fully compatible with back-propagation. We further stabilize training with total-variation damping (TVD) to suppress small oscillations and a rollout curriculum that enforces consistency over long prediction horizons. Together, these mechanisms eliminate both hard and soft violations: conservation holds at machine precision, total-variation growth vanishes, and entropy and error remain bounded. On Burgers and Euler systems, CPL produces stable, physically lawful solutions without loss of accuracy. Instead of hoping neural solvers will respect physics, CPL makes that behavior an intrinsic property of the learning process.
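As a toy illustration of the projection idea, here is the closed-form Euclidean projection onto a single linear conservation set; the actual CPL projection targets the intersection of several sets (conservation, Rankine-Hugoniot, entropy, positivity), which this sketch does not attempt:

```python
import numpy as np

def project_conservation(u: np.ndarray, total_mass: float) -> np.ndarray:
    """Orthogonal projection onto the affine set {u : sum(u) = total_mass}.
    Subtracting the mean violation is the closed-form Euclidean projection,
    and it is differentiable, so it can sit inside back-propagation."""
    return u - (u.sum() - total_mass) / u.size

# A network output that spuriously creates mass is corrected exactly:
u = np.array([1.2, 0.9, 0.1])          # sums to 2.2, but the true mass is 2.0
u_proj = project_conservation(u, 2.0)
print(u_proj.sum())                     # 2.0 up to floating point
```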
- oai:arXiv.org:2511.03578v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Mainak Singha
-
-
- Knowledge Graph for Intelligent Generation of Artistic Image Creation: Constructing a New Annotation Hierarchy
- https://arxiv.org/abs/2511.03585
- arXiv:2511.03585v1 Announce Type: new
-Abstract: Our study aims to establish a unified, systematic, and referable knowledge framework for the annotation of art image datasets, addressing issues of ambiguous definitions and inconsistent results caused by the lack of common standards during the annotation process. To achieve this goal, a hierarchical and systematic art image knowledge graph was constructed. It was developed based on the composition principles of art images, incorporating the Structured Theory of Visual Knowledge proposed by Academician Yunhe Pan in On Visual Knowledge, which states that visual knowledge must achieve precise expression of spatial forms and dynamic relationships through "prototype-category" and "hierarchical structure". Through in-depth review of Chinese and Western art theories and pioneering integration of the Chinese cultural perspective, this graph took shape. The core visual language of art images was deconstructed by this knowledge graph. Meanwhile, the unique spatial theory and symbolic system of Chinese painting were compared with and supplemented by Western art theories. This graph converts qualitative artistic concepts into a clear structured framework. It not only conforms to the cognitive law that "visual knowledge takes precedence over verbal knowledge" in humans but also provides an interpretable and inferential visual knowledge foundation for AI art generation and cross-cultural art analysis. It ensures the high quality and consistency of annotated data, thus offering key support for art intelligence research in the AI 2.0 era.
- oai:arXiv.org:2511.03585v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Jia Kaixin, Zhu Kewen, Deng Huanghuang, Qiu Yiwu, Ding Shiying, Ding Chenyang, Li Zejian
-
-
- PerfDojo: Automated ML Library Generation for Heterogeneous Architectures
- https://arxiv.org/abs/2511.03586
- arXiv:2511.03586v1 Announce Type: new
-Abstract: The increasing complexity of machine learning models and the proliferation of diverse hardware architectures (CPUs, GPUs, accelerators) make achieving optimal performance a significant challenge. Heterogeneity in instruction sets, specialized kernel requirements for different data types and model features (e.g., sparsity, quantization), and architecture-specific optimizations complicate performance tuning. Manual optimization is resource-intensive, while existing automatic approaches often rely on complex hardware-specific heuristics and uninterpretable intermediate representations, hindering performance portability. We introduce PerfLLM, a novel automatic optimization methodology leveraging Large Language Models (LLMs) and Reinforcement Learning (RL). Central to this is PerfDojo, an environment framing optimization as an RL game using a human-readable, mathematically-inspired code representation that guarantees semantic validity through transformations. This allows effective optimization without prior hardware knowledge, facilitating both human analysis and RL agent training. We demonstrate PerfLLM's ability to achieve significant performance gains across diverse CPU (x86, Arm, RISC-V) and GPU architectures.
- oai:arXiv.org:2511.03586v1
- cs.PF
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- 10.1145/3712285.3759900
- The International Conference for High Performance Computing, Networking, Storage and Analysis (SC '25), November 16--21, 2025, St Louis, MO, USA
- Andrei Ivanov, Siyuan Shen, Gioele Gottardo, Marcin Chrapek, Afif Boudaoud, Timo Schneider, Luca Benini, Torsten Hoefler
-
-
- Human Mesh Modeling for Anny Body
- https://arxiv.org/abs/2511.03589
- arXiv:2511.03589v1 Announce Type: new
-Abstract: Parametric body models are central to many human-centric tasks, yet existing models often rely on costly 3D scans and learned shape spaces that are proprietary and demographically narrow. We introduce Anny, a simple, fully differentiable, and scan-free human body model grounded in anthropometric knowledge from the MakeHuman community. Anny defines a continuous, interpretable shape space, where phenotype parameters (e.g. gender, age, height, weight) control blendshapes spanning a wide range of human forms -- across ages (from infants to elders), body types, and proportions. Calibrated using WHO population statistics, it provides realistic and demographically grounded human shape variation within a single unified model. Thanks to its openness and semantic control, Anny serves as a versatile foundation for 3D human modeling -- supporting millimeter-accurate scan fitting, controlled synthetic data generation, and Human Mesh Recovery (HMR). We further introduce Anny-One, a collection of 800k photorealistic humans generated with Anny, showing that despite its simplicity, HMR models trained with Anny can match the performance of those trained with scan-based body models, while remaining interpretable and broadly representative. The Anny body model and its code are released under the Apache 2.0 license, making Anny an accessible foundation for human-centric 3D modeling.
- oai:arXiv.org:2511.03589v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Romain Brégier, Guénolé Fiche, Laura Bravo-Sánchez, Thomas Lucas, Matthieu Armando, Philippe Weinzaepfel, Grégory Rogez, Fabien Baradel
-
-
- Manifold-constrained Hamilton-Jacobi Reachability Learning for Decentralized Multi-Agent Motion Planning
- https://arxiv.org/abs/2511.03591
- arXiv:2511.03591v1 Announce Type: new
-Abstract: Safe multi-agent motion planning (MAMP) under task-induced constraints is a critical challenge in robotics. Many real-world scenarios require robots to navigate dynamic environments while adhering to manifold constraints imposed by tasks. For example, service robots must carry cups upright while avoiding collisions with humans or other robots. Despite recent advances in decentralized MAMP for high-dimensional systems, incorporating manifold constraints remains difficult. To address this, we propose a manifold-constrained Hamilton-Jacobi reachability (HJR) learning framework for decentralized MAMP. Our method solves HJR problems under manifold constraints to capture task-aware safety conditions, which are then integrated into a decentralized trajectory optimization planner. This enables robots to generate motion plans that are both safe and task-feasible without requiring assumptions about other agents' policies. Our approach generalizes across diverse manifold-constrained tasks and scales effectively to high-dimensional multi-agent manipulation problems. Experiments show that our method outperforms existing constrained motion planners and operates at speeds suitable for real-world applications. Video demonstrations are available at https://youtu.be/RYcEHMnPTH8 .
- oai:arXiv.org:2511.03591v1
- cs.RO
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qingyi Chen, Ruiqi Ni, Jun Kim, Ahmed H. Qureshi
-
-
- Powered Descent Trajectory Optimization of Chandrayaan-3 using Radau Collocation and Controllable Sets
- https://arxiv.org/abs/2511.03594
- arXiv:2511.03594v1 Announce Type: new
-Abstract: India achieved a significant milestone on August $23^{\text{rd}}$ 2023, becoming the fourth country to accomplish a soft landing on the Moon. This paper presents the powered descent trajectory design for the Chandrayaan-3 mission. The optimization framework is based on pseudospectral Radau collocation, and controllability-based waypoint refinement is employed to further enhance the robustness of the trajectory against state and control perturbations. Furthermore, the trade-off between fuel consumption and robustness is explicitly quantified, providing insights into the practical considerations of mission planning.
- oai:arXiv.org:2511.03594v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Suraj Kumar, Aditya Rallapalli, Ashok Kumar Kakula, Bharat Kumar GVP
-
-
- Tensor-Efficient High-Dimensional Q-learning
- https://arxiv.org/abs/2511.03595
- arXiv:2511.03595v1 Announce Type: new
-Abstract: High-dimensional reinforcement learning faces challenges with complex calculations and low sample efficiency in large state-action spaces. Q-learning algorithms struggle particularly with the curse of dimensionality, where the number of state-action pairs grows exponentially with problem size. While neural network-based approaches like Deep Q-Networks have shown success, recent tensor-based methods using low-rank decomposition offer more parameter-efficient alternatives. Building upon existing tensor-based methods, we propose Tensor-Efficient Q-Learning (TEQL), which enhances low-rank tensor decomposition via improved block coordinate descent on discretized state-action spaces, incorporating novel exploration and regularization mechanisms. The key innovation is an exploration strategy that combines approximation error with visit count-based upper confidence bound to prioritize actions with high uncertainty, avoiding wasteful random exploration. Additionally, we incorporate a frequency-based penalty term in the objective function to encourage exploration of less-visited state-action pairs and reduce overfitting to frequently visited regions. Empirical results on classic control tasks demonstrate that TEQL outperforms conventional matrix-based methods and deep RL approaches in both sample efficiency and total rewards, making it suitable for resource-constrained applications, such as space and healthcare where sampling costs are high.
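The exploration strategy described above can be sketched as an action-selection rule; the additive weighting of approximation error and the exact form of the visit-count bonus are assumptions for illustration, not TEQL's precise formula:

```python
import math

def select_action(q_est, approx_err, visits, t, c=1.0):
    """Pick the action maximizing estimated value plus uncertainty, where
    uncertainty combines the tensor-approximation error with a visit-count
    upper-confidence bonus, as the abstract describes."""
    def score(a):
        bonus = c * math.sqrt(math.log(t + 1) / (visits[a] + 1))
        return q_est[a] + approx_err[a] + bonus
    return max(range(len(q_est)), key=score)

# The rarely visited, poorly approximated action is explored first even
# though its value estimate is lower:
print(select_action(q_est=[1.0, 0.9], approx_err=[0.0, 0.5],
                    visits=[100, 1], t=100))  # 1
```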
- oai:arXiv.org:2511.03595v1
- cs.LG
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Junyi Wu, Dan Li
-
-
- Adaptive Randomized Tensor Train Rounding using Khatri-Rao Products
- https://arxiv.org/abs/2511.03598
- arXiv:2511.03598v1 Announce Type: new
-Abstract: Approximating a tensor in the tensor train (TT) format has many important applications in scientific computing. Rounding a TT tensor involves further compressing a tensor that is already in the TT format. This paper proposes new randomized algorithms for TT-rounding that use sketches based on Khatri-Rao products (KRP). When the TT-ranks are known in advance, the proposed methods are comparable in cost to those that use a sketching matrix in the TT-format~\cite{al2023randomized}. However, the use of KRP sketches enables adaptive algorithms to round the tensor in the TT-format within a fixed user-specified tolerance. An important component of the adaptivity is the estimation of error using KRP sketching, for which we develop theoretical guarantees. We report numerical experiments on synthetic tensors, parametric low-rank kernel approximations, and the solution of parametric partial differential equations. The numerical experiments show that we obtain speed-ups of up to $50\times$ compared to deterministic TT-rounding. Both the computational cost analysis and numerical experiments verify that the adaptive algorithms are competitive with the fixed rank algorithms, suggesting the adaptivity introduces only a low overhead.
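The Khatri-Rao product at the heart of these sketches is simple to state: its k-th column is the Kronecker product of the k-th columns of the factors. A plain-numpy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def khatri_rao(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Column-wise Khatri-Rao product: column k is kron(A[:, k], B[:, k]),
    so an (I x K) and a (J x K) factor give an (I*J x K) matrix."""
    I, K = A.shape
    J, K2 = B.shape
    assert K == K2, "factors must share the column dimension"
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, K)

# Sketching use: S.T @ x compresses an (I*J)-length vector down to K entries
# without ever materializing a dense random matrix of that full height.
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(khatri_rao(A, B))  # columns [5, 7, 15, 21] and [12, 16, 24, 32]
```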
- oai:arXiv.org:2511.03598v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Hussam Al Daas, Grey Ballard, Laura Grigori, Mariana Martinez Aguilar, Arvind K. Saibaba, Bhisham Dev Verma
-
-
- Step-Audio-EditX Technical Report
- https://arxiv.org/abs/2511.03601
- arXiv:2511.03601v1 Announce Type: new
-Abstract: We present Step-Audio-EditX, the first open-source LLM-based audio model excelling at expressive and iterative audio editing encompassing emotion, speaking style, and paralinguistics alongside robust zero-shot text-to-speech (TTS) capabilities. Our core innovation lies in leveraging only large-margin synthetic data, which circumvents the need for embedding-based priors or auxiliary modules. This large-margin learning approach enables both iterative control and high expressivity across voices, and represents a fundamental pivot from the conventional focus on representation-level disentanglement. Evaluation results demonstrate that Step-Audio-EditX surpasses both MiniMax-2.6-hd and Doubao-Seed-TTS-2.0 in emotion editing and other fine-grained control tasks.
- oai:arXiv.org:2511.03601v1
- cs.CL
- cs.AI
- cs.HC
- cs.SD
- eess.AS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chao Yan, Boyong Wu, Peng Yang, Pengfei Tan, Guoqiang Hu, Yuxin Zhang, Xiangyu (Tony) Zhang, Fei Tian, Xuerui Yang, Xiangyu Zhang, Daxin Jiang, Gang Yu
-
-
- Artificial-reference tracking MPC with probabilistically validated performance on industrial embedded systems
- https://arxiv.org/abs/2511.03603
- arXiv:2511.03603v1 Announce Type: new
-Abstract: Industrial embedded systems are typically used to execute simple control algorithms due to their low computational resources. Despite these limitations, the implementation of advanced control techniques such as Model Predictive Control (MPC) has been explored by the control community in recent years, typically considering simple linear formulations or explicit ones to facilitate the online computation of the control input. These simplifications often lack features and properties that are desirable in real-world environments. In this article, we present an efficient implementation for embedded systems of MPC for tracking with artificial reference, solved via a recently developed structure-exploiting first-order method. This formulation is tailored to a wide range of applications by incorporating essential practical features at a small computational cost, including integration with an offset-free scheme, back-off parameters that enable constraint tightening, and soft constraints that preserve feasibility under disturbances or plant-model mismatch. We accompany this with a framework for probabilistic performance validation of the closed-loop system over long-term operation. We illustrate the applicability of the approach on a Programmable Logic Controller (PLC), incorporated in a hardware-in-the-loop setup to control a nonlinear continuous stirred-tank reactor. The behavior of the closed-loop system is probabilistically validated with respect to constraint violations and the number of iterations required at each time step by the MPC optimization algorithm.
- oai:arXiv.org:2511.03603v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Victor Gracia, Pablo Krupa, Filiberto Fele, Teodoro Alamo
-
-
- A local eigenvector centrality
- https://arxiv.org/abs/2511.03608
- arXiv:2511.03608v1 Announce Type: new
-Abstract: Eigenvector centrality is an established measure of global connectivity, from which the importance and influence of nodes can be inferred. We introduce a local eigenvector centrality that incorporates both local and global connectivity. This new measure references prominent eigengaps and combines their associated eigenspectrum, via the Euclidean norm, to detect centrality that reflects the influence of prominent community structures. In contact networks, with clearly defined community structures, local eigenvector centrality is shown to identify similar but distinct distributions to eigenvector centrality applied on each community in isolation and PageRank. Discrepancies between the two eigenvector measures highlight nodes and communities that do not conform to their defined local structures, e.g. nodes with more connections outside of their defined community than within it. Meanwhile, reference to PageRank's centrality assessment enables a mitigation strategy for localisation effects inherent in eigenvector-based measures. In networks without clearly defined communities, such as city road networks, local eigenvector centrality is shown to identify both locally prominent and globally connected hubs.
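A minimal sketch of the construction described above, with the caveat that the gap-selection rule used here (largest gap in the descending adjacency spectrum) is an assumption rather than the paper's exact criterion:

```python
import numpy as np

def local_eigenvector_centrality(A: np.ndarray) -> np.ndarray:
    """Locate the most prominent eigengap in the adjacency spectrum and
    combine the eigenvectors above it via the Euclidean norm, so that the
    scores reflect prominent community structure as well as global
    connectivity. (Sketch only; the paper's gap criterion may differ.)"""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1]          # descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    gaps = vals[:-1] - vals[1:]
    k = int(np.argmax(gaps)) + 1            # eigenvectors kept above the gap
    return np.linalg.norm(vecs[:, :k], axis=1)

# Two triangles joined by a single bridge edge, a toy network with two
# clearly defined communities:
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(local_eigenvector_centrality(A).round(3))
```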
- oai:arXiv.org:2511.03608v1
- cs.SI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ruaridh A. Clark, Francesca Arrigo, Agathe Bouis, Malcolm Macdonald
-
-
- Stone Duality Proofs for Colorless Distributed Computability Theorems
- https://arxiv.org/abs/2511.03609
- arXiv:2511.03609v1 Announce Type: new
-Abstract: We introduce a new topological encoding by spectral spaces of executions of round-based full-information adversaries, a model of distributed computations that is functorially presented and that contains many message adversaries. We give a characterization of the solvability of colorless tasks against compact adversaries. Message adversaries are distributed models that are known to be very expressive despite being round-based and crash-free. Colorless tasks are an important class of distributed tasks. For a colorless task, the specification does not depend upon the multiplicity of input or output values, like the ubiquitous agreement tasks. Therefore, our result is a significant step toward unifying topological methods in distributed computing. The main insight is to consider global states obtained after finite executions of a distributed protocol not as abstract simplicial complexes as previously done, but as spectral spaces, considering the Alexandrov topology on the faces poset. Given an adversary $\mathcal M$ with a set of inputs $\mathcal I$, we define a limit object $\Pi^\infty_\mathcal M(\mathcal I)$ by projective limit in the category of spectral spaces. We derive a new general distributed computability theorem using Stone duality: there exists an algorithm solving a colorless task $(\mathcal I,\mathcal O,\Delta)$ against the compact adversary $\mathcal M$ if and only if there exists a spectral map $f:\Pi^\infty_\mathcal M(\mathcal I)\longrightarrow\mathcal O$ compatible with $\Delta$. From this general characterization are derived many known colorless computability theorems. Quite surprisingly, colored and uncolored models have the same computability power (they solve the same tasks). Our new proofs give topological reasons for this equivalence, previously known through algorithmic reductions.
- oai:arXiv.org:2511.03609v1
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Cameron Calk, Emmanuel Godard
-
-
- A systematic review of relation extraction task since the emergence of Transformers
- https://arxiv.org/abs/2511.03610
- arXiv:2511.03610v1 Announce Type: new
-Abstract: This article presents a systematic review of relation extraction (RE) research since the advent of Transformer-based models. Using an automated framework to collect and annotate publications, we analyze 34 surveys, 64 datasets, and 104 models published between 2019 and 2024. The review highlights methodological advances, benchmark resources, and the integration of semantic web technologies. By consolidating results across multiple dimensions, the study identifies current trends, limitations, and open challenges, offering researchers and practitioners a comprehensive reference for understanding the evolution and future directions of RE.
- oai:arXiv.org:2511.03610v1
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Celian Ringwald, Fabien Gandon, Catherine Faron, Franck Michel, Hanna Abi Akl
-
-
- Going Beyond Expert Performance via Deep Implicit Imitation Reinforcement Learning
- https://arxiv.org/abs/2511.03616
- arXiv:2511.03616v1 Announce Type: new
-Abstract: Imitation learning traditionally requires complete state-action demonstrations from optimal or near-optimal experts. These requirements severely limit practical applicability, as many real-world scenarios provide only state observations without corresponding actions and expert performance is often suboptimal. In this paper we introduce a deep implicit imitation reinforcement learning framework that addresses both limitations by combining deep reinforcement learning with implicit imitation learning from observation-only datasets. Our main algorithm, Deep Implicit Imitation Q-Network (DIIQN), employs an action inference mechanism that reconstructs expert actions through online exploration and integrates a dynamic confidence mechanism that adaptively balances expert-guided and self-directed learning. This enables the agent to leverage expert guidance for accelerated training while maintaining capacity to surpass suboptimal expert performance. We further extend our framework with a Heterogeneous Actions DIIQN (HA-DIIQN) algorithm to tackle scenarios where expert and agent possess different action sets, a challenge previously unaddressed in the implicit imitation learning literature. HA-DIIQN introduces an infeasibility detection mechanism and a bridging procedure identifying alternative pathways connecting agent capabilities to expert guidance when direct action replication is impossible. Our experimental results demonstrate that DIIQN achieves up to 130% higher episodic returns compared to standard DQN, while consistently outperforming existing implicit imitation methods that cannot exceed expert performance. In heterogeneous action settings, HA-DIIQN learns up to 64% faster than baselines, leveraging expert datasets unusable by conventional approaches. Extensive parameter sensitivity analysis reveals the framework's robustness across varying dataset sizes and hyperparameter configurations.
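The action inference step can be sketched as picking, from the agent's own action set, the action whose predicted successor state best matches the observed expert transition; the dynamics model and distance used here are illustrative assumptions, not DIIQN's exact mechanism:

```python
def infer_expert_action(s, s_next, actions, predict):
    """Reconstruct the unobserved expert action as the candidate whose
    predicted next state is closest to the observed one. `predict(s, a)`
    stands in for a learned dynamics model (assumption for illustration)."""
    return min(actions, key=lambda a: abs(predict(s, a) - s_next))

# Toy 1-D dynamics s' = s + a: observing the expert move 0 -> 2 implies
# the expert took a = 2.
step = lambda s, a: s + a
print(infer_expert_action(0, 2, actions=[-1, 0, 1, 2], predict=step))  # 2
```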
- oai:arXiv.org:2511.03616v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Iason Chrysomallis, Georgios Chalkiadakis
-
-
- Visualization Biases MLLM's Decision Making in Network Data Tasks
- https://arxiv.org/abs/2511.03617
- arXiv:2511.03617v1 Announce Type: new
-Abstract: We evaluate how visualizations can influence the judgment of MLLMs about the presence or absence of bridges in a network. We show that the inclusion of visualization improves confidence over a structured text-based input that could theoretically be helpful for answering the question. On the other hand, we observe that standard visualization techniques create a strong bias towards accepting or refuting the presence of a bridge -- independently of whether or not a bridge actually exists in the network. While our results indicate that the inclusion of visualization techniques can effectively influence the MLLM's judgment without compromising its self-reported confidence, they also imply that practitioners must be careful when allowing users to include visualizations in generative AI applications so as to avoid undesired hallucinations.
- oai:arXiv.org:2511.03617v1
- cs.GR
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Timo Brand, Henry Förster, Stephen G. Kobourov, Jacob Miller
-
-
- Towards Formalizing Reinforcement Learning Theory
- https://arxiv.org/abs/2511.03618
- arXiv:2511.03618v1 Announce Type: new
-Abstract: In this paper, we formalize the almost sure convergence of $Q$-learning and linear temporal difference (TD) learning with Markovian samples using the Lean 4 theorem prover based on the Mathlib library. $Q$-learning and linear TD are among the earliest and most influential reinforcement learning (RL) algorithms. The investigation of their convergence properties is not only a major research topic during the early development of the RL field but also receives increasing attention nowadays. This paper formally verifies their almost sure convergence in a unified framework based on the Robbins-Siegmund theorem. The framework developed in this work can be easily extended to convergence rates and other modes of convergence. This work thus makes an important step towards fully formalizing convergent RL results. The code is available at https://github.com/ShangtongZhang/rl-theory-in-lean.
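For reference, the $Q$-learning update whose almost sure convergence is being formalized is the standard one:

```latex
Q_{t+1}(S_t, A_t) = Q_t(S_t, A_t)
  + \alpha_t \Big( R_{t+1} + \gamma \max_{a'} Q_t(S_{t+1}, a') - Q_t(S_t, A_t) \Big),
```

with step sizes $\alpha_t$ satisfying the Robbins-Monro conditions $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$; the Robbins-Siegmund theorem supplies the supermartingale argument behind almost sure convergence under such step-size schedules.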
- oai:arXiv.org:2511.03618v1
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shangtong Zhang
-
-
- CLAX: Fast and Flexible Neural Click Models in JAX
- https://arxiv.org/abs/2511.03620
- arXiv:2511.03620v1 Announce Type: new
-Abstract: CLAX is a JAX-based library that implements classic click models using modern gradient-based optimization. While neural click models have emerged over the past decade, complex click models based on probabilistic graphical models (PGMs) have not systematically adopted gradient-based optimization, preventing practitioners from leveraging modern deep learning frameworks while preserving the interpretability of classic models. CLAX addresses this gap by replacing EM-based optimization with direct gradient-based optimization in a numerically stable manner. The framework's modular design enables the integration of any component, from embeddings and deep networks to custom modules, into classic click models for end-to-end optimization. We demonstrate CLAX's efficiency by running experiments on the full Baidu-ULTR dataset comprising over a billion user sessions in $\approx$ 2 hours on a single GPU, orders of magnitude faster than traditional EM approaches. CLAX implements ten classic click models, serving both industry practitioners seeking to understand user behavior and improve ranking performance at scale and researchers developing new click models. CLAX is available at: https://github.com/philipphager/clax
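To make the "EM replaced by gradients" point concrete, here is a tiny position-based click model fitted by direct gradient ascent on its Bernoulli log-likelihood. This is a plain-numpy illustration under an assumed parametrization (sigmoid logits per rank and per document), not CLAX's actual API:

```python
import numpy as np

def fit_pbm(clicks, ranks, docs, n_ranks, n_docs, lr=1.0, steps=2000):
    """Position-based model P(click) = P(examined at rank) * P(relevant),
    with both factors parametrized as sigmoids and trained by full-batch
    gradient ascent instead of the classic EM updates."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    e = np.zeros(n_ranks)   # examination logits
    r = np.zeros(n_docs)    # relevance logits
    for _ in range(steps):
        se, sr = sig(e[ranks]), sig(r[docs])
        p = se * sr
        # d(log-likelihood)/d(logit), simplified for the product model
        ge = (clicks - p) * (1.0 - se) / (1.0 - p + 1e-9)
        gr = (clicks - p) * (1.0 - sr) / (1.0 - p + 1e-9)
        np.add.at(e, ranks, lr * ge / clicks.size)
        np.add.at(r, docs, lr * gr / clicks.size)
    return sig(e), sig(r)

# Synthetic log: rank 0 is examined more often than rank 1, and doc 0 is
# more relevant than doc 1; the fit recovers both orderings.
ranks = np.repeat([0, 0, 1, 1], 100)
docs = np.tile(np.repeat([0, 1], 100), 2)
clicks = np.concatenate([np.ones(72), np.zeros(28),   # (rank 0, doc 0)
                         np.ones(18), np.zeros(82),   # (rank 0, doc 1)
                         np.ones(24), np.zeros(76),   # (rank 1, doc 0)
                         np.ones(6),  np.zeros(94)])  # (rank 1, doc 1)
exam, rel = fit_pbm(clicks, ranks, docs, n_ranks=2, n_docs=2)
print(exam[0] > exam[1], rel[0] > rel[1])  # True True
```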
- oai:arXiv.org:2511.03620v1
- cs.IR
- cs.LG
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Philipp Hager, Onno Zoeter, Maarten de Rijke
-
-
- Multi-robot searching with limited sensing range for static and mobile intruders
- https://arxiv.org/abs/2511.03622
- arXiv:2511.03622v1 Announce Type: new
-Abstract: We consider the problem of searching for an intruder in a geometric domain by utilizing multiple search robots. The domain is a simply connected orthogonal polygon with edges parallel to the Cartesian coordinate axes. Each robot has a limited sensing capability. We study the problem for both static and mobile intruders. It turns out that the problem of finding an intruder is NP-hard, even for a stationary intruder. Given this intractability, we turn our attention towards developing efficient and robust algorithms, namely methods based on space-filling curves, random search, and cooperative random search. Moreover, for each proposed algorithm, we evaluate the trade-off between the number of search robots and the time required for the robots to complete the search process while considering the geometric properties of the connected orthogonal search area.
- oai:arXiv.org:2511.03622v1
- cs.RO
- cs.CG
- cs.CR
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Swadhin Agrawal, Sujoy Bhore, Joseph S. B. Mitchell, P. B. Sujit, Aayush Gohil
-
-
- Non-Monotonicity in Fair Division of Graphs
- https://arxiv.org/abs/2511.03629
- arXiv:2511.03629v1 Announce Type: new
-Abstract: We consider the problem of fairly allocating the vertices of a graph among $n$ agents, where the value of a bundle is determined by its cut value -- the number of edges with exactly one endpoint in the bundle. This model naturally captures applications such as team formation and network partitioning, where valuations are inherently non-monotonic: the marginal values may be positive, negative, or zero depending on the composition of the bundle. We focus on the fairness notion of envy-freeness up to one item (EF1) and explore its compatibility with several efficiency concepts such as Transfer Stability (TS) that prohibits single-item transfers that benefit one agent without making the other worse-off. For general graphs, our results uncover a non-monotonic relationship between the number of agents $n$ and the existence of allocations satisfying EF1 and transfer stability (TS): such allocations always exist for $n=2$, may fail to exist for $n=3$, but exist again for all $n\geq 4$. We further show that existence can be guaranteed for any $n$ by slightly weakening the efficiency requirement or by restricting the graph to forests. All of our positive results are achieved via efficient algorithms.
- oai:arXiv.org:2511.03629v1
- cs.GT
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hadi Hosseini, Shraddha Pathak, Yu Zhou
-
-
- Financial Management System for SMEs: Real-World Deployment of Accounts Receivable and Cash Flow Prediction
- https://arxiv.org/abs/2511.03631
- arXiv:2511.03631v1 Announce Type: new
-Abstract: Small and Medium Enterprises (SMEs), particularly freelancers and early-stage businesses, face unique financial management challenges due to limited resources, small customer bases, and constrained data availability. This paper presents the development and deployment of an integrated financial prediction system that combines accounts receivable prediction and cash flow forecasting specifically designed for SME operational constraints. Our system addresses the gap between enterprise-focused financial tools and the practical needs of freelancers and small businesses. The solution integrates two key components: a binary classification model for predicting invoice payment delays, and a multi-module cash flow forecasting model that handles incomplete and limited historical data. A prototype system has been implemented and deployed as a web application with integration into Cluee's platform, a startup providing financial management tools for freelancers, demonstrating practical feasibility for real-world SME financial management.
- oai:arXiv.org:2511.03631v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Bartłomiej Małkus, Szymon Bobek, Grzegorz J. Nalepa
-
-
- Neural Beamforming with Doppler-Aware Sparse Attention for High Mobility Environments
- https://arxiv.org/abs/2511.03632
- arXiv:2511.03632v1 Announce Type: new
-Abstract: Beamforming has significance for enhancing spectral efficiency and mitigating interference in multi-antenna wireless systems, facilitating spatial multiplexing and diversity in dense and high mobility scenarios. Traditional beamforming techniques such as zero-forcing beamforming (ZFBF) and minimum mean square error (MMSE) beamforming experience performance deterioration under adverse channel conditions. Deep learning-based beamforming offers an alternative with nonlinear mappings from channel state information (CSI) to beamforming weights by improving robustness against dynamic channel environments. Transformer-based models are particularly effective due to their ability to model long-range dependencies across time and frequency. However, their quadratic attention complexity limits scalability in large OFDM grids. Recent studies address this issue through sparse attention mechanisms that reduce complexity while maintaining expressiveness, yet often employ patterns that disregard channel dynamics, as they are not specifically designed for wireless communication scenarios. In this work, we propose a Doppler-aware Sparse Neural Network Beamforming (Doppler-aware Sparse NNBF) model that incorporates a channel-adaptive sparse attention mechanism in a multi-user single-input multiple-output (MU-SIMO) setting. The proposed sparsity structure is configurable along 2D time-frequency axes based on channel dynamics and is theoretically proven to ensure full connectivity within p hops, where p is the number of attention heads. Simulation results under urban macro (UMa) channel conditions show that Doppler-aware Sparse NNBF significantly outperforms both a fixed-pattern baseline, referred to as Standard Sparse NNBF, and conventional beamforming techniques ZFBF and MMSE beamforming in high mobility scenarios, while maintaining structured sparsity with a controlled number of attended keys per query.
- oai:arXiv.org:2511.03632v1
- cs.IT
- cs.LG
- eess.SP
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Cemil Vahapoglu, Timothy J. O'Shea, Wan Liu, Sennur Ulukus
-
-
- nanoTabPFN: A Lightweight and Educational Reimplementation of TabPFN
- https://arxiv.org/abs/2511.03634
- arXiv:2511.03634v1 Announce Type: new
-Abstract: Tabular foundation models such as TabPFN have revolutionized predictive machine learning for tabular data. At the same time, the driving factors of this revolution are hard to understand. Existing open-source tabular foundation models are implemented in complicated pipelines boasting over 10,000 lines of code that lack architecture documentation and code quality. In short, the implementations are hard to understand, not beginner-friendly, and complicated to adapt for new experiments. We introduce nanoTabPFN, a simplified and lightweight implementation of the TabPFN v2 architecture and a corresponding training loop that uses pre-generated training data. nanoTabPFN makes tabular foundation models more accessible to students and researchers alike. For example, restricted to a small data setting it achieves a performance comparable to traditional machine learning baselines within one minute of pre-training on a single GPU (160,000x faster than TabPFN v2 pretraining). This eliminated requirement of large computational resources makes pre-training tabular foundation models accessible for educational purposes. Our code is available at https://github.com/automl/nanoTabPFN.
- oai:arXiv.org:2511.03634v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Alexander Pfefferle, Johannes Hog, Lennart Purucker, Frank Hutter
-
-
- Towards Transparent Stance Detection: A Zero-Shot Approach Using Implicit and Explicit Interpretability
- https://arxiv.org/abs/2511.03635
- arXiv:2511.03635v1 Announce Type: new
-Abstract: Zero-Shot Stance Detection (ZSSD) identifies the attitude of the post toward unseen targets. Existing research using contrastive, meta-learning, or data augmentation suffers from generalizability issues or lack of coherence between text and target. Recent works leveraging large language models (LLMs) for ZSSD focus either on improving unseen target-specific knowledge or generating explanations for stance analysis. However, most of these works are limited by their over-reliance on explicit reasoning, provide coarse explanations that lack nuance, and do not explicitly model the reasoning process, making it difficult to interpret the model's predictions. To address these issues, in our study, we develop a novel interpretable ZSSD framework, IRIS. We provide an interpretable understanding of the attitude of the input towards the target implicitly based on sequences within the text (implicit rationales) and explicitly based on linguistic measures (explicit rationales). IRIS considers stance detection as an information retrieval ranking task, understanding the relevance of implicit rationales for different stances to guide the model towards correct predictions without requiring the ground-truth of rationales, thus providing inherent interpretability. In addition, explicit rationales based on communicative features help decode the emotional and cognitive dimensions of stance, offering an interpretable understanding of the author's attitude towards the given target. Extensive experiments on the benchmark datasets of VAST, EZ-STANCE, P-Stance, and RFD using 50%, 30%, and even 10% training data prove the generalizability of our model, benefiting from the proposed architecture and interpretable design.
- oai:arXiv.org:2511.03635v1
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Apoorva Upadhyaya, Wolfgang Nejdl, Marco Fisichella
-
-
- Watermarking Large Language Models in Europe: Interpreting the AI Act in Light of Technology
- https://arxiv.org/abs/2511.03641
- arXiv:2511.03641v1 Announce Type: new
-Abstract: To foster trustworthy Artificial Intelligence (AI) within the European Union, the AI Act requires providers to mark and detect the outputs of their general-purpose models. The Article 50 and Recital 133 call for marking methods that are ''sufficiently reliable, interoperable, effective and robust''. Yet, the rapidly evolving and heterogeneous landscape of watermarks for Large Language Models (LLMs) makes it difficult to determine how these four standards can be translated into concrete and measurable evaluations. Our paper addresses this challenge, anchoring the normativity of European requirements in the multiplicity of watermarking techniques. Introducing clear and distinct concepts on LLM watermarking, our contribution is threefold. (1) Watermarking Categorisation: We propose an accessible taxonomy of watermarking methods according to the stage of the LLM lifecycle at which they are applied - before, during, or after training, and during next-token distribution or sampling. (2) Watermarking Evaluation: We interpret the EU AI Act's requirements by mapping each criterion with state-of-the-art evaluations on robustness and detectability of the watermark, and of quality of the LLM. Since interoperability remains largely untheorised in LLM watermarking research, we propose three normative dimensions to frame its assessment. (3) Watermarking Comparison: We compare current watermarking methods for LLMs against the operationalised European criteria and show that no approach yet satisfies all four standards. Encouraged by emerging empirical tests, we recommend further research into watermarking directly embedded within the low-level architecture of LLMs.
- oai:arXiv.org:2511.03641v1
- cs.CR
- cs.AI
- cs.CL
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Thomas Souverain
-
-
- Generalized k-Cell Decomposition for Visibility Planning in Polygons
- https://arxiv.org/abs/2511.03642
- arXiv:2511.03642v1 Announce Type: new
-Abstract: This paper introduces a novel $k$-cell decomposition method for pursuit-evasion problems in polygonal environments, where a searcher is equipped with a $k$-modem: a device capable of seeing through up to $k$ walls. The proposed decomposition ensures that as the searcher moves within a cell, the structure of unseen regions (shadows) remains unchanged, thereby preventing any geometric events between or on invisible regions, that is, preventing the appearance, disappearance, merge, or split of shadow regions. The method extends existing work on $0$- and $2$-visibility by incorporating m-visibility polygons for all even $0 \le m \le k$, constructing partition lines that enable robust environment division. The correctness of the decomposition is proved via three theorems. The decomposition enables reliable path planning for intruder detection in simulated environments and opens new avenues for visibility-based robotic surveillance. The difficulty in constructing the cells of the decomposition consists in computing the $k$-visibility polygon from each vertex and finding the intersection points of the partition lines to create the cells.
- oai:arXiv.org:2511.03642v1
- cs.CG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yeganeh Bahoo, Sajad Saeedi, Roni Sherman
-
-
- Signal Intensity-weighted coordinate channels improve learning stability and generalisation in 1D and 2D CNNs in localisation tasks on biomedical signals
- https://arxiv.org/abs/2511.03645
- arXiv:2511.03645v1 Announce Type: new
-Abstract: Localisation tasks in biomedical data often require models to learn meaningful spatial or temporal relationships from signals with complex intensity distributions. A common strategy, exemplified by CoordConv layers, is to append coordinate channels to convolutional inputs, enabling networks to learn absolute positions. In this work, we propose a signal intensity-weighted coordinate representation that replaces the pure coordinate channels with channels scaled by local signal intensity. This modification embeds an intensity-position coupling directly in the input representation, introducing a simple and modality-agnostic inductive bias. We evaluate the approach on two distinct localisation problems: (i) predicting the time of morphological transition in 20-second, two-lead ECG signals, and (ii) regressing the coordinates of nuclear centres in cytological images from the SiPaKMeD dataset. In both cases, the proposed representation yields faster convergence and higher generalisation performance relative to conventional coordinate-channel approaches, demonstrating its effectiveness across both one-dimensional and two-dimensional biomedical signals.
- oai:arXiv.org:2511.03645v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Vittal L. Rao
-
-
- Improved Bounds with a Simple Algorithm for Edge Estimation for Graphs of Unknown Size
- https://arxiv.org/abs/2511.03650
- arXiv:2511.03650v1 Announce Type: new
-Abstract: We propose a randomized algorithm with query access that given a graph $G$ with arboricity $\alpha$, and average degree $d$, makes $\widetilde{O}\left(\frac{\alpha}{\varepsilon^2d}\right)$ \texttt{Degree} and $\widetilde{O}\left(\frac{1}{\varepsilon^2}\right)$ \texttt{Random Edge} queries to obtain an estimate $\widehat{d}$ satisfying $\widehat{d} \in (1\pm\varepsilon)d$. This improves the $\widetilde{O}_{\varepsilon,\log n}\left(\sqrt{\frac{n}{d}}\right)$ query algorithm of [Beretta et al., SODA 2026] that has access to \texttt{Degree}, \texttt{Neighbour}, and \texttt{Random Edge} queries. Our algorithm does not require any graph parameter as input, not even the size of the vertex set, and attains both simplicity and practicality through a new estimation technique. We complement our upper bounds with a lower bound that shows for all valid $n,d$, and $\alpha$, any algorithm that has access to \texttt{Degree}, \texttt{Neighbour}, and \texttt{Random Edge} queries, must make at least $\Omega\left(\min\left(d,\frac{\alpha}{d}\right)\right)$ queries to obtain a $(1\pm\varepsilon)$-multiplicative estimate of $d$, even with the knowledge of $n$ and $\alpha$. We also show that even with \texttt{Pair} and \texttt{FullNbr} queries, an algorithm must make $\Omega\left(\min\left(d,\frac{\alpha}{d}\right)\right)$ queries to obtain a $(1\pm\varepsilon)$-multiplicative estimate of $d$. Our work addresses both the questions raised by the work of [Beretta et al., SODA 2026].
- oai:arXiv.org:2511.03650v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- Debarshi Chanda
-
-
- Flying Robotics Art: ROS-based Drone Draws the Record-Breaking Mural
- https://arxiv.org/abs/2511.03651
- arXiv:2511.03651v1 Announce Type: new
-Abstract: This paper presents the innovative design and successful deployment of a pioneering autonomous unmanned aerial system developed for executing the world's largest mural painted by a drone. Addressing the dual challenges of maintaining artistic precision and operational reliability under adverse outdoor conditions such as wind and direct sunlight, our work introduces a robust system capable of navigating and painting outdoors with unprecedented accuracy. Key to our approach is a novel navigation system that combines an infrared (IR) motion capture camera and LiDAR technology, enabling precise location tracking tailored specifically for large-scale artistic applications. We employ a unique control architecture that uses different regulation in tangential and normal directions relative to the planned path, enabling precise trajectory tracking and stable line rendering. We also present algorithms for trajectory planning and path optimization, allowing for complex curve drawing and area filling. The system includes a custom-designed paint spraying mechanism, specifically engineered to function effectively amidst the turbulent airflow generated by the drone's propellers, which also protects the drone's critical components from paint-related damage, ensuring longevity and consistent performance. Experimental results demonstrate the system's robustness and precision in varied conditions, showcasing its potential for autonomous large-scale art creation and expanding the functional applications of robotics in creative fields.
- oai:arXiv.org:2511.03651v1
- cs.RO
- cs.CV
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- 10.1109/IROS58592.2024.10802405
- 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- Andrei A. Korigodskii, Oleg D. Kalachev, Artem E. Vasiunik, Matvei V. Urvantsev, Georgii E. Bondar
-
-
- Motion Planning Under Temporal Logic Specifications In Semantically Unknown Environments
- https://arxiv.org/abs/2511.03652
- arXiv:2511.03652v1 Announce Type: new
-Abstract: This paper addresses a motion planning problem to achieve spatio-temporal-logical tasks, expressed by syntactically co-safe linear temporal logic specifications (scLTL\next), in uncertain environments. Here, the uncertainty is modeled as some probabilistic knowledge on the semantic labels of the environment. For example, the task is "first go to region 1, then go to region 2"; however, the exact locations of regions 1 and 2 are not known a priori, instead a probabilistic belief is available. We propose a novel automata-theoretic approach, where a special product automaton is constructed to capture the uncertainty related to semantic labels, and a reward function is designed for each edge of this product automaton. The proposed algorithm utilizes value iteration for online replanning. We show some theoretical results and present some simulations/experiments to demonstrate the efficacy of the proposed approach.
- oai:arXiv.org:2511.03652v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Azizollah Taheri, Derya Aksaray
-
-
- Efficient Testing Implies Structured Symmetry
- https://arxiv.org/abs/2511.03653
- arXiv:2511.03653v1 Announce Type: new
-Abstract: Given a small random sample of $n$-bit strings labeled by an unknown Boolean function, which properties of this function can be tested computationally efficiently? We show an equivalence between properties that are efficiently testable from few samples and properties with structured symmetry, which depend only on the function's average values on parts of a low-complexity partition of the domain. Without the efficiency constraint, a similar characterization in terms of unstructured symmetry was obtained by Blais and Yoshida (2019). Our main technical tool is supersimulation, which builds on methods from the algorithmic fairness literature to approximate arbitrarily complex functions by small-circuit simulators that fool significantly larger distinguishers.
- We extend the characterization along other axes as well. We show that allowing parts to overlap exponentially reduces their required number, broadening the scope of the construction from properties testable with $O(\log n)$ samples to properties testable with $O(n)$ samples. For larger sample sizes, we show that any efficient tester is essentially checking for indistinguishability from a bounded collection of small circuits, in the spirit of a characterization of testable graph properties. Finally, we show that our results for Boolean function testing generalize to high-entropy distribution testing on arbitrary domains.
- oai:arXiv.org:2511.03653v1
- cs.CC
- cs.DS
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Cynthia Dwork, Pranay Tankala
-
-
- SIMD-vectorized implicit symplectic integrators can outperform explicit ones
- https://arxiv.org/abs/2511.03655
- arXiv:2511.03655v1 Announce Type: new
-Abstract: The main purpose of this work is to present a SIMD-vectorized implementation of the symplectic 16th-order 8-stage implicit Runge-Kutta integrator based on collocation with Gauss-Legendre nodes (IRKGL16-SIMD), and to show that it can outperform state-of-the-art symplectic explicit integrators for high-precision numerical integrations (in double-precision floating-point arithmetic) of non-stiff Hamiltonian ODE systems. Our IRKGL16-SIMD integrator leverages Single Instruction Multiple Data (SIMD) based parallelism (in a way that is transparent to the user) to significantly enhance the performance of the sequential IRKGL16 implementation. We present numerical experiments comparing IRKGL16-SIMD with state-of-the-art high-order explicit symplectic methods for the numerical integration of several Hamiltonian systems in double-precision floating-point arithmetic.
- oai:arXiv.org:2511.03655v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Mikel Antoñana, Joseba Makazaga, Ander Murua
-
-
- ChiMDQA: Towards Comprehensive Chinese Document QA with Fine-grained Evaluation
- https://arxiv.org/abs/2511.03656
- arXiv:2511.03656v1 Announce Type: new
-Abstract: With the rapid advancement of natural language processing (NLP) technologies, the demand for high-quality Chinese document question-answering datasets is steadily growing. To address this issue, we present the Chinese Multi-Document Question Answering Dataset (ChiMDQA), specifically designed for downstream business scenarios across prevalent domains including academic, education, finance, law, medical treatment, and news. ChiMDQA encompasses long-form documents from six distinct fields, consisting of 6,068 rigorously curated, high-quality question-answer (QA) pairs further classified into ten fine-grained categories. Through meticulous document screening and a systematic question-design methodology, the dataset guarantees both diversity and high quality, rendering it applicable to various NLP tasks such as document comprehension, knowledge extraction, and intelligent QA systems. Additionally, this paper offers a comprehensive overview of the dataset's design objectives, construction methodologies, and fine-grained evaluation system, supplying a substantial foundation for future research and practical applications in Chinese QA. The code and data are available at: https://anonymous.4open.science/r/Foxit-CHiMDQA/.
- oai:arXiv.org:2511.03656v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Jing Gao, Shutiao Luo, Yumeng Liu, Yuanming Li, Hongji Zeng
-
-
- Left Inverses for B-spline Subdivision Matrices in Tensor-Product Spaces
- https://arxiv.org/abs/2511.03658
- arXiv:2511.03658v1 Announce Type: new
-Abstract: In this article, we study dyadic coarsening operators in univariate spline spaces and in tensor-product spline spaces over uniform grids. Our construction is strongly motivated by the work of Bartels, Golub, and Samavati (2006), Some observations on local least squares, BIT, 46(3):455--477. The proposed operators are local in nature and yield approximations to a given spline that are comparable to the global L2-best approximation, while being significantly faster to compute and computationally inexpensive.
- oai:arXiv.org:2511.03658v1
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Marcelo Actis, Silvano Figueroa, Eduardo M. Garau
-
-
- SHIELD: Securing Healthcare IoT with Efficient Machine Learning Techniques for Anomaly Detection
- https://arxiv.org/abs/2511.03661
- arXiv:2511.03661v1 Announce Type: new
-Abstract: The integration of IoT devices in healthcare introduces significant security and reliability challenges, increasing susceptibility to cyber threats and operational anomalies. This study proposes a machine learning-driven framework for (1) detecting malicious cyberattacks and (2) identifying faulty device anomalies, leveraging a dataset of 200,000 records. Eight machine learning models are evaluated across three learning approaches: supervised learning (XGBoost, K-Nearest Neighbors (K-NN)), semi-supervised learning (Generative Adversarial Networks (GAN), Variational Autoencoders (VAE)), and unsupervised learning (One-Class Support Vector Machine (SVM), Isolation Forest, Graph Neural Networks (GNN), and Long Short-Term Memory (LSTM) Autoencoders). The comprehensive evaluation was conducted across multiple metrics such as F1-score, precision, recall, accuracy, ROC-AUC, and computational efficiency. XGBoost achieved 99% accuracy with minimal computational overhead (0.04s) for anomaly detection, while Isolation Forest balanced precision and recall effectively. LSTM Autoencoders underperformed with lower accuracy and higher latency. For attack detection, K-NN achieved near-perfect precision, recall, and F1-score with the lowest computational cost (0.05s), followed by VAE at 97% accuracy. GAN showed the highest computational cost with lowest accuracy and ROC-AUC. These findings enhance IoT-enabled healthcare security through effective anomaly detection strategies. By improving early detection of cyber threats and device failures, this framework has the potential to prevent data breaches, minimize system downtime, and ensure the continuous and safe operation of medical devices, ultimately safeguarding patient health and trust in IoT-driven healthcare solutions.
- oai:arXiv.org:2511.03661v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1109/AIIoT65859.2025.11105287
- 2025 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 2025, pp. 0521-0528
- Mahek Desai, Apoorva Rumale, Marjan Asadinia
-
-
- A General Input-Dependent Colorless Computability Theorem and Applications to Core-Dependent Adversaries
- https://arxiv.org/abs/2511.03662
- arXiv:2511.03662v1 Announce Type: new
-Abstract: Distributed computing tasks can be presented with a triple $(\I,\Ou,\Delta)$. The solvability of a colorless task on the Iterated Immediate Snapshot model (IIS) has been characterized by the Colorless Computability Theorem \cite[Th.4.3.1]{HKRbook}. A recent paper~\cite{CG-24} generalizes this theorem for any message adversaries $\ma \subseteq IIS$ by geometric methods. In 2001, Most\'efaoui, Rajsbaum, Raynal, and Roy \cite{condbased} introduced \emph{condition-based adversaries}. This setting considers a particular adversary that will be applied only to a subset of input configurations. In this setting, they studied the $k$-set agreement task with condition-based $t$-resilient adversaries and obtained a sufficient condition on the conditions that make $k$-Set Agreement solvable. In this paper we have three contributions:
- - We generalize the characterization of~\cite{CG-24} to \emph{input-dependent} adversaries, which means that the adversaries can change depending on the input configuration.
- - We show that core-resilient adversaries of $IIS_n$ have the same computability power as the core-resilient adversaries of $IIS_n$ where crashes only happen at the start.
- - Using the two previous contributions, we provide a necessary and sufficient characterization of the condition-based, core-dependent adversaries that can solve $k$-Set Agreement. We also distinguish four settings that may appear when presenting a distributed task as $(\I,\Ou,\Delta)$. Finally, in a later section, we present structural properties on the carrier map $\Delta$. Such properties allow simpler proof, without changing the computability power of the task. Most of the proofs in this article leverage the topological framework used in distributed computing by using simple geometric constructions.
- oai:arXiv.org:2511.03662v1
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Yannis Coutouly, Emmanuel Godard
-
-
- A Lightweight 3D-CNN for Event-Based Human Action Recognition with Privacy-Preserving Potential
- https://arxiv.org/abs/2511.03665
- arXiv:2511.03665v1 Announce Type: new
-Abstract: This paper presents a lightweight three-dimensional convolutional neural network (3DCNN) for human activity recognition (HAR) using event-based vision data. Privacy preservation is a key challenge in human monitoring systems, as conventional frame-based cameras capture identifiable personal information. In contrast, event cameras record only changes in pixel intensity, providing an inherently privacy-preserving sensing modality. The proposed network effectively models both spatial and temporal dynamics while maintaining a compact design suitable for edge deployment. To address class imbalance and enhance generalization, focal loss with class reweighting and targeted data augmentation strategies are employed. The model is trained and evaluated on a composite dataset derived from the Toyota Smart Home and ETRI datasets. Experimental results demonstrate an F1-score of 0.9415 and an overall accuracy of 94.17%, outperforming benchmark 3D-CNN architectures such as C3D, ResNet3D, and MC3_18 by up to 3%. These results highlight the potential of event-based deep learning for developing accurate, efficient, and privacy-aware human action recognition systems suitable for real-world edge applications.
- oai:arXiv.org:2511.03665v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Mehdi Sefidgar Dilmaghani, Francis Fowley, Peter Corcoran
-
-
- Part-Aware Bottom-Up Group Reasoning for Fine-Grained Social Interaction Detection
- https://arxiv.org/abs/2511.03666
- arXiv:2511.03666v1 Announce Type: new
-Abstract: Social interactions often emerge from subtle, fine-grained cues such as facial expressions, gaze, and gestures. However, existing methods for social interaction detection overlook such nuanced cues and primarily rely on holistic representations of individuals. Moreover, they directly detect social groups without explicitly modeling the underlying interactions between individuals. These drawbacks limit their ability to capture localized social signals and introduce ambiguity when group configurations should be inferred from social interactions grounded in nuanced cues. In this work, we propose a part-aware bottom-up group reasoning framework for fine-grained social interaction detection. The proposed method infers social groups and their interactions using body part features and their interpersonal relations. Our model first detects individuals and enhances their features using part-aware cues, and then infers group configuration by associating individuals via similarity-based reasoning, which considers not only spatial relations but also subtle social cues that signal interactions, leading to more accurate group inference. Experiments on the NVI dataset demonstrate that our method outperforms prior methods, achieving the new state of the art.
- oai:arXiv.org:2511.03666v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Dongkeun Kim, Minsu Cho, Suha Kwak
-
-
- DQN Performance with Epsilon Greedy Policies and Prioritized Experience Replay
- https://arxiv.org/abs/2511.03670
- arXiv:2511.03670v1 Announce Type: new
-Abstract: We present a detailed study of Deep Q-Networks in finite environments, emphasizing the impact of epsilon-greedy exploration schedules and prioritized experience replay. Through systematic experimentation, we evaluate how variations in epsilon decay schedules affect learning efficiency, convergence behavior, and reward optimization. We investigate how prioritized experience replay leads to faster convergence and higher returns and show empirical results comparing uniform, no replay, and prioritized strategies across multiple simulations. Our findings illuminate the trade-offs and interactions between exploration strategies and memory management in DQN training, offering practical recommendations for robust reinforcement learning in resource-constrained settings.
- oai:arXiv.org:2511.03670v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Daniel Perkins, Oscar J. Escobar, Luke Green
-
-
- OriFeel: Origami-Inspired Actuation for Force-Based Tactile Feedback on Ambient Surfaces
- https://arxiv.org/abs/2511.03673
- arXiv:2511.03673v1 Announce Type: new
-Abstract: People are constantly in touch with surfaces in their lives, such as a sofa, armrest, and table, making them natural tactile interfaces. Despite the recent advancements in shape-changing surfaces, current available solutions are often challenging to retrofit into ambient surfaces due to their bulky form factor or high power requirements. We present OriFeel, a foldable structure-enabled tactile feedback mechanism that leverages the structural properties of Miura-Ori fold to enable on-surface force actuation. The foldable structure allows the surfaces to provide perpendicular force via lateral actuation, resulting in a slim form factor that can be actuated via cable-based design using a servo motor. We evaluate the system with a real-world prototype and a user study. The user study shows that users can effectively distinguish multiple intensity levels.
- oai:arXiv.org:2511.03673v1
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Shubham Rohal (University of California, Merced), Shijia Pan (University of California, Merced)
-
-
- Whisper Leak: a side-channel attack on Large Language Models
- https://arxiv.org/abs/2511.03675
- arXiv:2511.03675v1 Announce Type: new
-Abstract: Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications, where privacy is paramount. This paper introduces Whisper Leak, a side-channel attack that infers user prompt topics from encrypted LLM traffic by analyzing packet size and timing patterns in streaming responses. Despite TLS encryption protecting content, these metadata patterns leak sufficient information to enable topic classification. We demonstrate the attack across 28 popular LLMs from major providers, achieving near-perfect classification (often >98% AUPRC) and high precision even at extreme class imbalance (10,000:1 noise-to-target ratio). For many models, we achieve 100% precision in identifying sensitive topics like "money laundering" while recovering 5-20% of target conversations. This industry-wide vulnerability poses significant risks for users under network surveillance by ISPs, governments, or local adversaries. We evaluate three mitigation strategies - random padding, token batching, and packet injection - finding that while each reduces attack effectiveness, none provides complete protection. Through responsible disclosure, we have collaborated with providers to implement initial countermeasures. Our findings underscore the need for LLM providers to address metadata leakage as AI systems handle increasingly sensitive information.
- oai:arXiv.org:2511.03675v1
- cs.CR
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- Geoff McDonald, Jonathan Bar Or
-
-
- Unconscious and Intentional Human Motion Cues for Expressive Robot-Arm Motion Design
- https://arxiv.org/abs/2511.03676
- arXiv:2511.03676v1 Announce Type: new
-Abstract: This study investigates how human motion cues can be used to design expressive robot-arm movements. Using the imperfect-information game Geister, we analyzed two types of human piece-moving motions: natural gameplay (unconscious tendencies) and instructed expressions (intentional cues). Based on these findings, we created phase-specific robot motions by varying movement speed and stop duration, and evaluated observer impressions under two presentation modalities: a physical robot and a recorded video. Results indicate that late-phase motion timing, particularly during withdrawal, plays an important role in impression formation and that physical embodiment enhances the interpretability of motion cues. These findings provide insights for designing expressive robot motions based on human timing behavior.
- oai:arXiv.org:2511.03676v1
- cs.RO
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Taito Tashiro, Tomoko Yonezawa, Hirotake Yamazoe
-
-
- A Constant-Gain Equation-Error Framework for Airliner Aerodynamic Monitoring Using QAR Data
- https://arxiv.org/abs/2511.03678
- arXiv:2511.03678v1 Announce Type: new
-Abstract: Monitoring the in-service aerodynamic performance of airliners is critical for operational efficiency and safety, but using operational Quick Access Recorder (QAR) data for this purpose presents significant challenges. This paper first establishes that the absence of key parameters, particularly aircraft moments of inertia, makes conventional state-propagation filters fundamentally unsuitable for this application. This limitation necessitates a decoupled, Equation-Error Method (EEM). However, we then demonstrate through a comparative analysis that standard recursive estimators with time-varying gains, such as Recursive Least Squares (RLS), also fail within an EEM framework, exhibiting premature convergence or instability when applied to low-excitation cruise data. To overcome these dual challenges, we propose and validate the Constant-Gain Equation-Error Method (CG-EEM). This framework employs a custom estimator with a constant, Kalman-like gain, which is perfectly suited to the stationary, low-signal-to-noise characteristics of cruise flight. The CG-EEM is extensively validated on a large, multi-fleet dataset of over 200 flights, where it produces highly consistent, physically plausible aerodynamic parameters and correctly identifies known performance differences between aircraft types. The result is a robust, scalable, and computationally efficient tool for fleet-wide performance monitoring and the early detection of performance degradation.
- oai:arXiv.org:2511.03678v1
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ruiying Wen, Yuntao Dai, Hongyong Wang
-
-
- Simulation-Based Validation of an Integrated 4D/5D Digital-Twin Framework for Predictive Construction Control
- https://arxiv.org/abs/2511.03684
- arXiv:2511.03684v1 Announce Type: new
-Abstract: Persistent cost and schedule deviations remain a major challenge in the U.S. construction industry, revealing the limitations of deterministic CPM and static document-based estimating. This study presents an integrated 4D/5D digital-twin framework that couples Building Information Modeling (BIM) with natural-language processing (NLP)-based cost mapping, computer-vision (CV)-driven progress measurement, Bayesian probabilistic CPM updating, and deep-reinforcement-learning (DRL) resource-leveling. A nine-month case implementation on a Dallas-Fort Worth mid-rise project demonstrated measurable gains in accuracy and efficiency: 43% reduction in estimating labor, 6% reduction in overtime, and 30% project-buffer utilization, while maintaining an on-time finish at 128 days within P50-P80 confidence bounds. The digital-twin sandbox also enabled real-time "what-if" forecasting and traceable cost-schedule alignment through a 5D knowledge graph. Findings confirm that integrating AI-based analytics with probabilistic CPM and DRL enhances forecasting precision, transparency, and control resilience. The validated workflow establishes a practical pathway toward predictive, adaptive, and auditable construction management.
- oai:arXiv.org:2511.03684v1
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Atena Khoshkonesh, Mohsen Mohammadagha, Navid Ebrahimi
-
-
- Structured Matrix Scaling for Multi-Class Calibration
- https://arxiv.org/abs/2511.03685
- arXiv:2511.03685v1 Announce Type: new
-Abstract: Post-hoc recalibration methods are widely used to ensure that classifiers provide faithful probability estimates. We argue that parametric recalibration functions based on logistic regression can be motivated from a simple theoretical setting for both binary and multiclass classification. This insight motivates the use of more expressive calibration methods beyond standard temperature scaling. For multi-class calibration however, a key challenge lies in the increasing number of parameters introduced by more complex models, often coupled with limited calibration data, which can lead to overfitting. Through extensive experiments, we demonstrate that the resulting bias-variance tradeoff can be effectively managed by structured regularization, robust preprocessing and efficient optimization. The resulting methods lead to substantial gains over existing logistic-based calibration techniques. We provide efficient and easy-to-use open-source implementations of our methods, making them an attractive alternative to common temperature, vector, and matrix scaling implementations.
- oai:arXiv.org:2511.03685v1
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Eugène Berta, David Holzmüller, Michael I. Jordan, Francis Bach
-
-
- The OpenHands Software Agent SDK: A Composable and Extensible Foundation for Production Agents
- https://arxiv.org/abs/2511.03690
- arXiv:2511.03690v1 Announce Type: new
-Abstract: Agents are now used widely in the process of software development, but building production-ready software engineering agents is a complex task. Deploying software agents effectively requires flexibility in implementation and experimentation, reliable and secure execution, and interfaces for users to interact with agents. In this paper, we present the OpenHands Software Agent SDK, a toolkit for implementing software development agents that satisfy these desiderata. This toolkit is a complete architectural redesign of the agent components of the popular OpenHands framework for software development agents, which has 64k+ GitHub stars. To achieve flexibility, we design a simple interface for implementing agents that requires only a few lines of code in the default case, but is easily extensible to more complex, full-featured agents with features such as custom tools, memory management, and more. For security and reliability, it delivers seamless local-to-remote execution portability, integrated REST/WebSocket services. For interaction with human users, it can connect directly to a variety of interfaces, such as visual workspaces (VS Code, VNC, browser), command-line interfaces, and APIs. Compared with existing SDKs from OpenAI, Claude, and Google, OpenHands uniquely integrates native sandboxed execution, lifecycle control, model-agnostic multi-LLM routing, and built-in security analysis. Empirical results on SWE-Bench Verified and GAIA benchmarks demonstrate strong performance. Put together, these elements allow the OpenHands Software Agent SDK to provide a practical foundation for prototyping, unlocking new classes of custom applications, and reliably deploying agents at scale.
- oai:arXiv.org:2511.03690v1
- cs.SE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Xingyao Wang, Simon Rosenberg, Juan Michelini, Calvin Smith, Hoang Tran, Engel Nyst, Rohit Malhotra, Xuhui Zhou, Valerie Chen, Robert Brennan, Graham Neubig
-
-
- Source-Free Bistable Fluidic Gripper for Size-Selective and Stiffness-Adaptive Grasping
- https://arxiv.org/abs/2511.03691
- arXiv:2511.03691v1 Announce Type: new
-Abstract: Conventional fluid-driven soft grippers typically depend on external sources, which limit portability and long-term autonomy. This work introduces a self-contained soft gripper with fixed size that operates solely through internal liquid redistribution among three interconnected bistable snap-through chambers. When the top sensing chamber deforms upon contact, the displaced liquid triggers snap-through expansion of the grasping chambers, enabling stable and size-selective grasping without continuous energy input. The internal hydraulic feedback further allows passive adaptation of gripping pressure to object stiffness. This source-free and compact design opens new possibilities for lightweight, stiffness-adaptive fluid-driven manipulation in soft robotics, providing a feasible approach for targeted size-specific sampling and operation in underwater and field environments.
- oai:arXiv.org:2511.03691v1
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhihang Qin, Yueheng Zhang, Wan Su, Linxin Hou, Shenghao Zhou, Zhijun Chen, Yu Jun Tan, Cecilia Laschi
-
-
- Behavior-Adaptive Q-Learning: A Unifying Framework for Offline-to-Online RL
- https://arxiv.org/abs/2511.03695
- arXiv:2511.03695v1 Announce Type: new
-Abstract: Offline reinforcement learning (RL) enables training from fixed data without online interaction, but policies learned offline often struggle when deployed in dynamic environments due to distributional shift and unreliable value estimates on unseen state-action pairs. We introduce Behavior-Adaptive Q-Learning (BAQ), a framework designed to enable a smooth and reliable transition from offline to online RL. The key idea is to leverage an implicit behavioral model derived from offline data to provide a behavior-consistency signal during online fine-tuning. BAQ incorporates a dual-objective loss that (i) aligns the online policy toward the offline behavior when uncertainty is high, and (ii) gradually relaxes this constraint as more confident online experience is accumulated. This adaptive mechanism reduces error propagation from out-of-distribution estimates, stabilizes early online updates, and accelerates adaptation to new scenarios. Across standard benchmarks, BAQ consistently outperforms prior offline-to-online RL approaches, achieving faster recovery, improved robustness, and higher overall performance. Our results demonstrate that implicit behavior adaptation is a principled and practical solution for reliable real-world policy deployment.
- oai:arXiv.org:2511.03695v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Lipeng Zu, Hansong Zhou, Xiaonan Zhang
-
-
- AnaFlow: Agentic LLM-based Workflow for Reasoning-Driven Explainable and Sample-Efficient Analog Circuit Sizing
- https://arxiv.org/abs/2511.03697
- arXiv:2511.03697v1 Announce Type: new
-Abstract: Analog/mixed-signal circuits are key for interfacing electronics with the physical world. Their design, however, remains a largely handcrafted process, resulting in long and error-prone design cycles. While the recent rise of AI-based reinforcement learning and generative AI has created new techniques to automate this task, the need for many time-consuming simulations is a critical bottleneck hindering the overall efficiency. Furthermore, the lack of explainability of the resulting design solutions hampers widespread adoption of the tools. To address these issues, a novel agentic AI framework for sample-efficient and explainable analog circuit sizing is presented. It employs a multi-agent workflow where specialized Large Language Model (LLM)-based agents collaborate to interpret the circuit topology, to understand the design goals, and to iteratively refine the circuit's design parameters towards the target goals with human-interpretable reasoning. The adaptive simulation strategy creates an intelligent control that yields a high sample efficiency. The AnaFlow framework is demonstrated for two circuits of varying complexity and is able to complete the sizing task fully automatically, differently from pure Bayesian optimization and reinforcement learning approaches. The system learns from its optimization history to avoid past mistakes and to accelerate convergence. The inherent explainability makes this a powerful tool for analog design space exploration and a new paradigm in analog EDA, where AI agents serve as transparent design assistants.
- oai:arXiv.org:2511.03697v1
- cs.LG
- cs.AI
- cs.AR
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mohsen Ahmadzadeh, Kaichang Chen, Georges Gielen
-
-
- Do Androids Dream of Unseen Puppeteers? Probing for a Conspiracy Mindset in Large Language Models
- https://arxiv.org/abs/2511.03699
- arXiv:2511.03699v1 Announce Type: new
-Abstract: In this paper, we investigate whether Large Language Models (LLMs) exhibit conspiratorial tendencies, whether they display sociodemographic biases in this domain, and how easily they can be conditioned into adopting conspiratorial perspectives. Conspiracy beliefs play a central role in the spread of misinformation and in shaping distrust toward institutions, making them a critical testbed for evaluating the social fidelity of LLMs. LLMs are increasingly used as proxies for studying human behavior, yet little is known about whether they reproduce higher-order psychological constructs such as a conspiratorial mindset. To bridge this research gap, we administer validated psychometric surveys measuring conspiracy mindset to multiple models under different prompting and conditioning strategies. Our findings reveal that LLMs show partial agreement with elements of conspiracy belief, and conditioning with socio-demographic attributes produces uneven effects, exposing latent demographic biases. Moreover, targeted prompts can easily shift model responses toward conspiratorial directions, underscoring both the susceptibility of LLMs to manipulation and the potential risks of their deployment in sensitive contexts. These results highlight the importance of critically evaluating the psychological dimensions embedded in LLMs, both to advance computational social science and to inform possible mitigation strategies against harmful uses.
- oai:arXiv.org:2511.03699v1
- cs.CL
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Francesco Corso, Francesco Pierri, Gianmarco De Francisci Morales
-
-
- Ideals, Gröbner Bases, and PCPs
- https://arxiv.org/abs/2511.03703
- arXiv:2511.03703v1 Announce Type: new
-Abstract: All known proofs of the PCP theorem rely on multiple "composition" steps, where PCPs over large alphabets are turned into PCPs over much smaller alphabets at a (relatively) small price in the soundness error of the PCP. Algebraic proofs, starting with the work of Arora, Lund, Motwani, Sudan, and Szegedy use at least 2 such composition steps, whereas the "Gap amplification" proof of Dinur uses $\Theta(\log n)$ such composition steps. In this work, we present the first PCP construction using just one composition step. The key ingredient, missing in previous work and finally supplied in this paper, is a basic PCP (of Proximity) of size $2^{n^\epsilon}$, for any $\epsilon > 0$, that makes $O_\epsilon(1)$ queries.
- At the core of our new construction is a new class of alternatives to "sum-check" protocols. As used in past PCPs, these provide a method by which to verify that an $m$-variate degree $d$ polynomial $P$ evaluates to zero at every point of some set $S \subseteq \mathbb{F}_q^m$. Previous works had shown how to check this condition for sets of the form $S = H^m$ using $O(m)$ queries with alphabet $\mathbb{F}_q^d$ assuming $d \geq |H|$. Our work improves this basic protocol in two ways: First we extend it to broader classes of sets $S$ (ones closer to Hamming balls rather than cubes). Second, it reduces the number of queries from $O(m)$ to an absolute constant for the settings of $S$ we consider. Specifically when $S = (\{0,1\}^{m/c}_{\leq 1})^c$, we give such an alternate to the sum-check protocol with $O(1)$ queries with alphabet $\mathbb{F}_q^{O(c+d)}$, using proofs of size $q^{O(m^2/c)}$. Our new protocols use insights from the powerful theory of Gröbner bases to extend previously known protocols to these new settings with surprising ease. In doing so, they highlight why these theories from algebra may be of further use in complexity theory.
- oai:arXiv.org:2511.03703v1
- cs.CC
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Prashanth Amireddy, Amik Raj Behera, Srikanth Srinivasan, Madhu Sudan, Sophus Valentin Willumsgaard
-
-
- LLM-enhanced Air Quality Monitoring Interface via Model Context Protocol
- https://arxiv.org/abs/2511.03706
- arXiv:2511.03706v1 Announce Type: new
-Abstract: Air quality monitoring is central to environmental sustainability and public health, yet traditional systems remain difficult for non-expert users to interpret due to complex visualizations, limited interactivity, and high deployment costs. Recent advances in Large Language Models (LLMs) offer new opportunities to make sensor data more accessible, but their tendency to produce hallucinations limits reliability in safety-critical domains. To address these challenges, we present an LLM-enhanced Air Monitoring Interface (AMI) that integrates real-time sensor data with a conversational interface via the Model Context Protocol (MCP). Our system grounds LLM outputs in live environmental data, enabling accurate, context-aware responses while reducing hallucination risk. The architecture combines a Django-based backend, a responsive user dashboard, and a secure MCP server that exposes system functions as discoverable tools, allowing the LLM to act as an active operator rather than a passive responder. Expert evaluation demonstrated high factual accuracy (4.78), completeness (4.82), and minimal hallucinations (4.84), on a scale of 5, supported by inter-rater reliability analysis. These results highlight the potential of combining LLMs with standardized tool protocols to create reliable, secure, and user-friendly interfaces for real-time environmental monitoring.
- oai:arXiv.org:2511.03706v1
- cs.ET
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yu-Erh Pan, Ayesha Siddika Nipu
-
-
- Shrinking the Variance: Shrinkage Baselines for Reinforcement Learning with Verifiable Rewards
- https://arxiv.org/abs/2511.03710
- arXiv:2511.03710v1 Announce Type: new
-Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful paradigm for post-training large reasoning models (LRMs) using policy-gradient methods such as GRPO. To stabilize training, these methods typically center trajectory rewards by subtracting the empirical mean for each prompt. Statistically, this centering acts as a control variate (or baseline), reducing the variance of the policy-gradient estimator.
- Typically, the mean reward is estimated using per-prompt empirical averages for each prompt in a batch. Drawing inspiration from Stein's paradox, we propose using shrinkage estimators that combine per-prompt and across-prompt means to improve the overall per-prompt mean estimation accuracy -- particularly in the low-generation regime typical of RLVR. Theoretically, we construct a shrinkage-based baseline that provably yields lower-variance policy-gradient estimators across algorithms. Our proposed baseline serves as a drop-in replacement for existing per-prompt mean baselines, requiring no additional hyper-parameters or computation. Empirically, shrinkage baselines consistently outperform standard empirical-mean baselines, leading to lower-variance gradient updates and improved training stability.
- oai:arXiv.org:2511.03710v1
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Guanning Zeng, Zhaoyi Zhou, Daman Arora, Andrea Zanette
-
-
- Multi-Region Matrix Interpolation for Dynamic Analysis of Aperiodic Structures under Large Model Parameter Perturbations
- https://arxiv.org/abs/2511.03711
- arXiv:2511.03711v1 Announce Type: new
-Abstract: This work introduces a surrogate-based model for efficiently estimating the frequency response of dynamic mechanical metamaterials, particularly when dealing with large parametric perturbations and aperiodic substructures. The research builds upon a previous matrix interpolation method applied on top of a Craig-Bampton modal reduction, allowing the variations of geometrical features without the need to remesh and recompute Finite Element matrices. This existing procedure has significant limitations since it requires a common modal projection, which inherently restricts the allowable perturbation size of the model parameters, thereby limiting the model parameter space where matrices can be effectively interpolated. The present work offers three contributions: (1) It provides structural dynamic insight into the restrictions imposed by the common modal projection, demonstrating that ill-conditioning can be controlled, (2) it proposes an efficient, sampling-based procedure to identify the non-regular boundaries of the usable region in the model parameter space, and (3) it enhances the surrogate model to accommodate larger model parameter perturbations by proposing a multi-region interpolation strategy. The efficacy of this proposed framework is verified through two illustrative examples. The first example, involving a unit cell with a square plate and circular core, validates the approach for a single well-conditioned projection region. The second example, using a beam-like structure with vibration attenuation bands, demonstrates the true advantage of the multi-region approach, where predictions from traditional Lagrange interpolation deviated significantly with increasing perturbations, while the proposed method maintained high accuracy across different perturbation levels.
- oai:arXiv.org:2511.03711v1
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- J. Pereira, R. O. Ruiz
-
-
- An Improved Quality Hierarchical Congestion Approximator in Near-Linear Time
- https://arxiv.org/abs/2511.03716
- arXiv:2511.03716v1 Announce Type: new
-Abstract: A congestion approximator for a graph is a compact data structure that approximately predicts the edge congestion required to route any set of flow demands in a network. A congestion approximator is hierarchical if it consists of a laminar family of cuts in the graph. There is a tradeoff between the running time for computing a congestion approximator and its approximation quality. Currently, for an $n$-node graph there exists a polynomial time algorithm that achieves a $O(\log^{1.5}n \log \log n)$ approximation and a near-linear time algorithm that achieves w.h.p. a $O(\log^4 n)$ approximation. In this paper we give the first near-linear time algorithm, that achieves w.h.p. a $O(\log^2 n \log \log n)$ approximation, using an hierarchical congestion approximator with $O(n \log n)$ cuts. Based on a reduction from oblivious routing, we also present a lower bound of $\Omega(\log n)$ for the approximation quality of hierarchical congestion approximators.
- Our algorithm can also be implemented in the parallel setting achieving the same approximation quality, polylogarithmic span and near-linear work. This improves upon the best prior parallel algorithm, which has a $O(\log^9n)$ approximation.
- Crucial for achieving a near linear running time is a new partitioning routine that, unlike previous such routines, manages to avoid recursing on large subgraphs. To achieve the improved approximation quality, we introduce the new concept of border routability of a cut and give an improved sparsest cut oracle for general vertex weights.
- oai:arXiv.org:2511.03716v1
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Monika Henzinger, Robin Münk, Harald Räcke
-
-
- Grounded Misunderstandings in Asymmetric Dialogue: A Perspectivist Annotation Scheme for MapTask
- https://arxiv.org/abs/2511.03718
- arXiv:2511.03718v1 Announce Type: new
-Abstract: Collaborative dialogue relies on participants incrementally establishing common ground, yet in asymmetric settings they may believe they agree while referring to different entities. We introduce a perspectivist annotation scheme for the HCRC MapTask corpus (Anderson et al., 1991) that separately captures speaker and addressee grounded interpretations for each reference expression, enabling us to trace how understanding emerges, diverges, and repairs over time. Using a scheme-constrained LLM annotation pipeline, we obtain 13k annotated reference expressions with reliability estimates and analyze the resulting understanding states. The results show that full misunderstandings are rare once lexical variants are unified, but multiplicity discrepancies systematically induce divergences, revealing how apparent grounding can mask referential misalignment. Our framework provides both a resource and an analytic lens for studying grounded misunderstanding and for evaluating (V)LLMs' capacity to model perspective-dependent grounding in collaborative dialogue.
- oai:arXiv.org:2511.03718v1
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Nan Li, Albert Gatt, Massimo Poesio
-
-
- Outbidding and Outbluffing Elite Humans: Mastering Liar's Poker via Self-Play and Reinforcement Learning
- https://arxiv.org/abs/2511.03724
- arXiv:2511.03724v1 Announce Type: new
-Abstract: AI researchers have long focused on poker-like games as a testbed for environments characterized by multi-player dynamics, imperfect information, and reasoning under uncertainty. While recent breakthroughs have matched elite human play at no-limit Texas hold'em, the multi-player dynamics are subdued: most hands converge quickly with only two players engaged through multiple rounds of bidding. In this paper, we present Solly, the first AI agent to achieve elite human play in reduced-format Liar's Poker, a game characterized by extensive multi-player engagement. We trained Solly using self-play with a model-free, actor-critic, deep reinforcement learning algorithm. Solly played at an elite human level as measured by win rate (won over 50% of hands) and equity (money won) in heads-up and multi-player Liar's Poker. Solly also outperformed large language models (LLMs), including those with reasoning abilities, on the same metrics. Solly developed novel bidding strategies, randomized play effectively, and was not easily exploitable by world-class human players.
- oai:arXiv.org:2511.03724v1
- cs.AI
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Richard Dewey, Janos Botyanszki, Ciamac C. Moallemi, Andrew T. Zheng
-
-
- Disentangled Concepts Speak Louder Than Words: Explainable Video Action Recognition
- https://arxiv.org/abs/2511.03725
- arXiv:2511.03725v1 Announce Type: new
-Abstract: Effective explanations of video action recognition models should disentangle how movements unfold over time from the surrounding spatial context. However, existing methods based on saliency produce entangled explanations, making it unclear whether predictions rely on motion or spatial context. Language-based approaches offer structure but often fail to explain motions due to their tacit nature -- intuitively understood but difficult to verbalize. To address these challenges, we propose Disentangled Action aNd Context concept-based Explainable (DANCE) video action recognition, a framework that predicts actions through disentangled concept types: motion dynamics, objects, and scenes. We define motion dynamics concepts as human pose sequences. We employ a large language model to automatically extract object and scene concepts. Built on an ante-hoc concept bottleneck design, DANCE enforces prediction through these concepts. Experiments on four datasets -- KTH, Penn Action, HAA500, and UCF-101 -- demonstrate that DANCE significantly improves explanation clarity with competitive performance. We validate the superior interpretability of DANCE through a user study. Experimental results also show that DANCE is beneficial for model debugging, editing, and failure analysis.
- oai:arXiv.org:2511.03725v1
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jongseo Lee, Wooil Lee, Gyeong-Moon Park, Seong Tae Kim, Jinwoo Choi
-
-
- Supersimulators
- https://arxiv.org/abs/2509.17994
- arXiv:2509.17994v2 Announce Type: cross
-Abstract: We prove that every randomized Boolean function admits a supersimulator: a randomized polynomial-size circuit whose output on random inputs cannot be efficiently distinguished from reality with constant advantage, even by polynomially larger distinguishers. Our result builds on the landmark complexity-theoretic regularity lemma of Trevisan, Tulsiani and Vadhan (2009), which, in contrast, provides a simulator that fools smaller distinguishers. We circumvent lower bounds for the simulator size by letting the distinguisher size bound vary with the target function, while remaining below an absolute upper bound independent of the target function. This dependence on the target function arises naturally from our use of an iteration technique originating in the graph regularity literature.
- The simulators provided by the regularity lemma and recent refinements thereof, known as multiaccurate and multicalibrated predictors, respectively, as per Hebert-Johnson et al. (2018), have previously been shown to have myriad applications in complexity theory, cryptography, learning theory, and beyond. We first show that a recent multicalibration-based characterization of the computational indistinguishability of product distributions actually requires only (calibrated) multiaccuracy. We then show that supersimulators yield an even tighter result in this application domain, closing a complexity gap present in prior versions of the characterization.
- oai:arXiv.org:2509.17994v2
- cs.CC
- cs.DS
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Cynthia Dwork, Pranay Tankala
-
-
- OmniVLA: Unifying Multi-Sensor Perception for Physically-Grounded Multimodal VLA
- https://arxiv.org/abs/2511.01210
- arXiv:2511.01210v1 Announce Type: cross
-Abstract: Vision-language-action (VLA) models have shown strong generalization for action prediction through large-scale vision-language pretraining. However, most existing models rely solely on RGB cameras, limiting their perception and, consequently, manipulation capabilities. We present OmniVLA, an omni-modality VLA model that integrates novel sensing modalities for physically-grounded spatial intelligence beyond RGB perception. The core of our approach is the sensor-masked image, a unified representation that overlays spatially grounded and physically meaningful masks onto the RGB images, derived from sensors including an infrared camera, a mmWave radar, and a microphone array. This image-native unification keeps sensor input close to RGB statistics to facilitate training, provides a uniform interface across sensor hardware, and enables data-efficient learning with lightweight per-sensor projectors. Built on this, we present a multisensory vision-language-action model architecture and train the model based on an RGB-pretrained VLA backbone. We evaluate OmniVLA on challenging real-world tasks where sensor-modality perception is needed to guide the manipulation. OmniVLA achieves an average task success rate of 84%, significantly outperforming both RGB-only and raw-sensor-input baseline models by 59% and 28% respectively, while showing higher learning efficiency and stronger generalization capability.
- oai:arXiv.org:2511.01210v1
- cs.CV
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-sa/4.0/
- Heyu Guo, Shanmu Wang, Ruichun Ma, Shiqi Jiang, Yasaman Ghasempour, Omid Abari, Baining Guo, Lili Qi
-
-
- Association-sensory spatiotemporal hierarchy and functional gradient-regularised recurrent neural network with implications for schizophrenia
- https://arxiv.org/abs/2511.02722
- arXiv:2511.02722v1 Announce Type: cross
-Abstract: The human neocortex is functionally organised at its highest level along a continuous sensory-to-association (AS) hierarchy. This study characterises the AS hierarchy of patients with schizophrenia in a comparison with controls. Using a large fMRI dataset (N=355), we extracted individual AS gradients via spectral analysis of brain connectivity, quantified hierarchical specialisation by gradient spread, and related this spread with connectivity geometry. We found that schizophrenia compresses the AS hierarchy indicating reduced functional differentiation. By modelling neural timescale with the Ornstein-Uhlenbeck process, we observed that the most specialised, locally cohesive regions at the gradient extremes exhibit dynamics with a longer time constant, an effect that is attenuated in schizophrenia. To study computation, we used the gradients to regularise subject-specific recurrent neural networks (RNNs) trained on working memory tasks. Networks endowed with greater gradient spread learned more efficiently, plateaued at lower task loss, and maintained stronger alignment to the prescribed AS hierarchical geometry. Fixed point linearisation showed that high-range networks settled into more stable neural states during memory delay, evidenced by lower energy and smaller maximal Jacobian eigenvalues. This gradient-regularised RNN framework therefore links large-scale cortical architecture with fixed point stability, providing a mechanistic account of how gradient de-differentiation could destabilise neural computations in schizophrenia, convergently supported by empirical timescale flattening and model-based evidence of less stable fixed points.
- oai:arXiv.org:2511.02722v1
- q-bio.NC
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Subati Abulikemu, Puria Radmard, Michail Mamalakis, John Suckling
-
-
- AI-Enhanced Wi-Fi Sensing Through Single Transceiver Pair
- https://arxiv.org/abs/2511.02845
- arXiv:2511.02845v1 Announce Type: cross
-Abstract: The advancement of next-generation Wi-Fi technology heavily relies on sensing capabilities, which play a pivotal role in enabling sophisticated applications. In response to the growing demand for large-scale deployments, contemporary Wi-Fi sensing systems strive to achieve high-precision perception while maintaining minimal bandwidth consumption and antenna count requirements. Remarkably, various AI-driven perception technologies have demonstrated the ability to surpass the traditional resolution limitations imposed by radar theory. However, the theoretical underpinnings of this phenomenon have not been thoroughly investigated in existing research. In this study, we found that under hardware-constrained conditions, the performance gains brought by AI to Wi-Fi sensing systems primarily originate from two aspects: prior information and temporal correlation. Prior information enables the AI to generate plausible details based on vague input, while temporal correlation helps reduce the upper bound of sensing error. We developed an AI-based Wi-Fi sensing system using a single transceiver pair and designed experiments focusing on human pose estimation and indoor localization to validate the theoretical claims. The results confirm the performance gains contributed by temporal correlation and prior information.
- oai:arXiv.org:2511.02845v1
- eess.SP
- cs.AI
- physics.ins-det
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuxuan Liu, Chiya Zhang, Yifeng Yuan, Chunlong He, Weizheng Zhang, Gaojie Chen
-
-
- Spatio-Temporal Attention Network for Epileptic Seizure Prediction
- https://arxiv.org/abs/2511.02846
- arXiv:2511.02846v1 Announce Type: cross
-Abstract: In this study, we present a deep learning framework that learns complex spatio-temporal correlation structures of EEG signals through a Spatio-Temporal Attention Network (STAN) for accurate prediction of seizure onset in epilepsy patients. Unlike existing methods, which rely on feature engineering and/or assume fixed preictal durations, our approach simultaneously models spatio-temporal correlations through STAN and employs an adversarial discriminator to distinguish preictal from interictal attention patterns, enabling patient-specific learning. Evaluation on the CHB-MIT and MSSM datasets demonstrates 96.6% sensitivity with a 0.011/h false detection rate (FDR) on CHB-MIT, and 94.2% sensitivity with a 0.063/h FDR on MSSM, significantly outperforming state-of-the-art methods. The framework reliably detects preictal states at least 15 minutes before onset, with patient-specific windows extending to 45 minutes, providing sufficient intervention time for clinical applications.
- oai:arXiv.org:2511.02846v1
- eess.SP
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zan Li, Kyongmin Yeo, Wesley Gifford, Lara Marcuse, Madeline Fields, Bülent Yener
-
-
- EEGReXferNet: A Lightweight Gen-AI Framework for EEG Subspace Reconstruction via Cross-Subject Transfer Learning and Channel-Aware Embedding
- https://arxiv.org/abs/2511.02848
- arXiv:2511.02848v1 Announce Type: cross
-Abstract: Electroencephalography (EEG) is a widely used non-invasive technique for monitoring brain activity, but low signal-to-noise ratios (SNR) due to various artifacts often compromise its utility. Conventional artifact removal methods require manual intervention or risk suppressing critical neural features during filtering/reconstruction. Recent advances in generative models, including Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), have shown promise for EEG reconstruction; however, these approaches often lack integrated temporal-spectral-spatial sensitivity and are computationally intensive, limiting their suitability for real-time applications like brain-computer interfaces (BCIs). To overcome these challenges, we introduce EEGReXferNet, a lightweight Gen-AI framework for EEG subspace reconstruction via cross-subject transfer learning - developed using Keras TensorFlow (v2.15.1). EEGReXferNet employs a modular architecture that leverages volume conduction across neighboring channels, band-specific convolution encoding, and dynamic latent feature extraction through sliding windows. By integrating reference-based scaling, the framework ensures continuity across successive windows and generalizes effectively across subjects. This design improves spatial-temporal-spectral resolution (mean PSD correlation >= 0.95; mean spectrogram RV-Coefficient >= 0.85), reduces total weights by ~45% to mitigate overfitting, and maintains computational efficiency for robust, real-time EEG preprocessing in neurophysiological and BCI applications.
- oai:arXiv.org:2511.02848v1
- eess.SP
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shantanu Sarkar, Piotr Nabrzyski, Saurabh Prasad, Jose Luis Contreras-Vidal
-
-
- Benchmarking ResNet for Short-Term Hypoglycemia Classification with DiaData
- https://arxiv.org/abs/2511.02849
- arXiv:2511.02849v1 Announce Type: cross
-Abstract: Individualized therapy is driven forward by medical data analysis, which provides insight into the patient's context. In particular, for Type 1 Diabetes (T1D), which is an autoimmune disease, relationships between demographics, sensor data, and context can be analyzed. However, outliers, noisy data, and small data volumes cannot provide a reliable analysis. Hence, the research domain requires large volumes of high-quality data. Moreover, missing values can lead to information loss. To address this limitation, this study improves the data quality of DiaData, an integration of 15 separate datasets containing glucose values from 2510 subjects with T1D. Notably, we make the following contributions: 1) Outliers are identified with the interquartile range (IQR) approach and treated by replacing them with missing values. 2) Small gaps ($\le$ 25 min) are imputed with linear interpolation and larger gaps ($\ge$ 30 and $<$ 120 min) with Stineman interpolation. Based on a visual comparison, Stineman interpolation provides more realistic glucose estimates than linear interpolation for larger gaps. 3) After data cleaning, the correlation between glucose and heart rate is analyzed, yielding a moderate relation between 15 and 60 minutes before hypoglycemia ($\le$ 70 mg/dL). 4) Finally, a benchmark for hypoglycemia classification is provided with a state-of-the-art ResNet model. The model is trained with the Maindatabase and Subdatabase II of DiaData to classify hypoglycemia onset up to 2 hours in advance. Training with more data improves performance by 7% while using quality-refined data yields a 2-3% gain compared to raw data.
- oai:arXiv.org:2511.02849v1
- eess.SP
- cs.CV
- eess.IV
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1109/JBHI.2025.3620603
- Beyza Cinar, Maria Maleshkova
-
-
- ECGXtract: Deep Learning-based ECG Feature Extraction for Automated CVD Diagnosis
- https://arxiv.org/abs/2511.02850
- arXiv:2511.02850v1 Announce Type: cross
-Abstract: This paper presents ECGXtract, a deep learning-based approach for interpretable ECG feature extraction, addressing the limitations of traditional signal processing and black-box machine learning methods. In particular, we develop convolutional neural network models capable of extracting both temporal and morphological features with strong correlations to a clinically validated ground truth. Initially, each model is trained to extract a single feature, ensuring precise and interpretable outputs. A series of experiments is then carried out to evaluate the proposed method across multiple setups, including global versus lead-specific features, different sampling frequencies, and comparisons with other approaches such as ECGdeli. Our findings show that ECGXtract achieves robust performance across most features, with a mean correlation score of 0.80 with the ground truth for global features, and lead II consistently providing the best results. For lead-specific features, ECGXtract achieves a mean correlation score of 0.822. Moreover, ECGXtract outperforms the state-of-the-art open-source ECGdeli, achieving a higher correlation with the ground truth in 90% of the features. Furthermore, we explore the feasibility of extracting multiple features simultaneously using a single model. Semantic grouping proves effective for global features, while large-scale grouping and lead-specific multi-output models show notable performance drops. These results highlight the potential of structured grouping strategies to balance computational efficiency against model accuracy, paving the way for more scalable and clinically interpretable ECG feature extraction systems in resource-limited settings.
- oai:arXiv.org:2511.02850v1
- eess.SP
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Youssif Abuzied, Hassan AbdEltawab, Abdelrhman Gaber, Tamer ElBatt
-
-
- Approaching Low-Cost Cardiac Intelligence with Semi-Supervised Knowledge Distillation
- https://arxiv.org/abs/2511.02851
- arXiv:2511.02851v1 Announce Type: cross
-Abstract: Deploying advanced cardiac artificial intelligence for daily cardiac monitoring is hindered by its reliance on extensive medical data and high computational resources. Low-cost cardiac intelligence (LCCI) offers a promising alternative by using wearable device data, such as 1-lead electrocardiogram (ECG), but it suffers from a significant diagnostic performance gap compared to high-cost cardiac intelligence (HCCI). To bridge this gap, we propose LiteHeart, a semi-supervised knowledge distillation framework. LiteHeart introduces a region-aware distillation module to mimic how cardiologists focus on diagnostically relevant ECG regions and a cross-layer mutual information module to align the decision processes of LCCI and HCCI systems. Using a semi-supervised training strategy, LiteHeart further improves model robustness under limited supervision. Evaluated on five datasets covering over 38 cardiovascular diseases, LiteHeart substantially reduces the performance gap between LCCI and HCCI, outperforming existing methods by 4.27% to 7.10% in macro F1 score. These results demonstrate that LiteHeart significantly enhances the diagnostic capabilities of low-cost cardiac intelligence systems, paving the way for scalable, affordable, and accurate daily cardiac healthcare using wearable technologies.
- oai:arXiv.org:2511.02851v1
- eess.SP
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Rushuang Zhou, Yuan-Ting Zhang, M. Jamal Deen, Yining Dong
-
-
- Real-Time Interactive Hybrid Ocean: Spectrum-Consistent Wave Particle-FFT Coupling
- https://arxiv.org/abs/2511.02852
- arXiv:2511.02852v1 Announce Type: cross
-Abstract: Fast Fourier Transform-based (FFT) spectral oceans are widely adopted for their efficiency and large-scale realism, but they assume global stationarity and spatial homogeneity, making it difficult to represent non-uniform seas and near-field interactions (e.g., ships and floaters). In contrast, wave particles capture local wakes and ripples, yet are costly to maintain at scale and hard to match to global spectral statistics. We present a real-time interactive hybrid ocean: a global FFT background coupled with local wave-particle (WP) patch regions around interactive objects, jointly driven under a unified set of spectral parameters and dispersion. At patch boundaries, particles are injected according to the same directional spectrum as the FFT, aligning the local frequency-direction distribution with the background and matching energy density, without disturbing the far field. Our approach introduces two main innovations: (1) Hybrid ocean representation. We couple a global FFT background with local WP patches under a unified spectrum, achieving large-scale spectral consistency while supporting localized wakes and ripples. (2) Frequency-bucketed implementation. We design a particle sampling and GPU-parallel synthesis scheme based on frequency buckets, which preserves spectral energy consistency and sustains real-time interactive performance. Together, these innovations enable a unified framework that delivers both large-scale spectral realism and fine-grained interactivity in real time.
- oai:arXiv.org:2511.02852v1
- eess.SP
- cs.GR
- cs.MM
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shengze Xue, Yu Ren, Jiacheng Hong, Run Ni, Shuangjiu Xiao, Deli Dong
-
-
- Consciousness-ECG Transformer for Conscious State Estimation System with Real-Time Monitoring
- https://arxiv.org/abs/2511.02853
- arXiv:2511.02853v1 Announce Type: cross
-Abstract: Conscious state estimation is important in various medical settings, including sleep staging and anesthesia management, to ensure patient safety and optimize health outcomes. Traditional methods predominantly utilize electroencephalography (EEG), which faces challenges such as high sensitivity to noise and the requirement for controlled environments. In this study, we propose the consciousness-ECG transformer that leverages electrocardiography (ECG) signals for non-invasive and reliable conscious state estimation. Our approach employs a transformer with decoupled query attention to effectively capture heart rate variability features that distinguish between conscious and unconscious states. We implemented the conscious state estimation system with real-time monitoring and validated our system on datasets involving sleep staging and anesthesia level monitoring during surgeries. Experimental results demonstrate that our model outperforms baseline models, achieving accuracies of 0.877 on sleep staging and 0.880 on anesthesia level monitoring. Moreover, our model achieves the highest area under curve values of 0.786 and 0.895 on sleep staging and anesthesia level monitoring, respectively. The proposed system offers a practical and robust alternative to EEG-based methods, particularly suited for dynamic clinical environments. Our results highlight the potential of ECG-based consciousness monitoring to enhance patient safety and advance our understanding of conscious states.
- oai:arXiv.org:2511.02853v1
- eess.SP
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1016/j.eswa.2025.130091
- Expert Systems with Applications 299 (2026) 130091
- Young-Seok Kweon, Gi-Hwan Shin, Ji-Yong Kim, Bokyeong Ryu, Seong-Whan Lee
-
-
- Digitizing Spermatogenesis Lineage at Nanoscale Resolution In Tissue-Level Electron Microscopy
- https://arxiv.org/abs/2511.02860
- arXiv:2511.02860v1 Announce Type: cross
-Abstract: Recent advances in 2D large-scale and 3D volume electron microscopy have stimulated the rapid development of nanoscale functional analysis at the tissue and organ levels. Digitizing the cell by mapping the intricate organellar networks into its physiological and pathological textures will revolutionize the contents of cell atlases. To meet the requirements of characterizing intracellular organelles and their interactions within defined cellular cohorts at tissue level, we have developed DeepOrganelle. It adopts a lightweight Mask2Former framework as a universal segmentor and is capable of segmenting and extracting organelles within different cell types, performing statistical quantitative analysis, as well as visualizing and quantifying the spatial distribution of organelle morphologies and interactions across different cell types at tissue scales. Using DeepOrganelle, we systematically perform cross-scale quantification of membrane contact site (MCS) dynamics across the progression of the seminiferous epithelial cycle, covering 12 distinct developmental stages and 24 statuses of germ cells. DeepOrganelle uncovers the spatiotemporal gradient of the germ cell differentiation atlas according to different types of organelles and their interactions. Notably, it discovers a waved pattern of mitochondria (Mito)-endoplasmic reticulum (ER) contact with a significant increase specifically at Stage X pachytene preceding the transition to diplotene, which aligns well with a newly reported experiment showing that mitochondrial metabolic proteins like PDHA2 are essential for this transition by maintaining ATP supply for double-strand break (DSB) repair. DeepOrganelle also observes a dynamic restructuring of the blood-testis barrier and stage-specific reorganization of organelle topography in Sertoli cells from the preleptotene to leptotene phases of prophase I.
- oai:arXiv.org:2511.02860v1
- physics.bio-ph
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Li Xiao, Liqing Liu, Hongjun Wu, Jiayi Zhong, Yan Zhang, Junjie Hu, Sun Fei, Ge Yang, Tao Xu
-
-
- NEF-NET+: Adapting Electrocardio panorama in the wild
- https://arxiv.org/abs/2511.02880
- arXiv:2511.02880v1 Announce Type: cross
-Abstract: Conventional multi-lead electrocardiogram (ECG) systems capture cardiac signals from a fixed set of anatomical viewpoints defined by lead placement. However, certain cardiac conditions (e.g., Brugada syndrome) require additional, non-standard viewpoints to reveal diagnostically critical patterns that may be absent in standard leads. To systematically overcome this limitation, Nef-Net was recently introduced to reconstruct a continuous electrocardiac field, enabling virtual observation of ECG signals from arbitrary views (termed Electrocardio Panorama). Despite its promise, Nef-Net operates under idealized assumptions and faces in-the-wild challenges, such as long-duration ECG modeling, robustness to device-specific signal artifacts, and suboptimal lead placement calibration. This paper presents NEF-NET+, an enhanced framework for realistic panoramic ECG synthesis that supports arbitrary-length signal synthesis from any desired view, generalizes across ECG devices, and compensates for operator-induced deviations in electrode placement. These capabilities are enabled by a newly designed model architecture that performs direct view transformation, incorporating a workflow comprising offline pretraining and device calibration tuning steps, as well as an on-the-fly calibration step for patient-specific adaptation. To rigorously evaluate panoramic ECG synthesis, we construct a new Electrocardio Panorama benchmark, called Panobench, comprising 5367 recordings with 48 views per subject, capturing the full spatial variability of cardiac electrical activity. Experimental results show that NEF-NET+ delivers substantial improvements over Nef-Net, yielding an increase of around 6 dB in PSNR in a real-world setting. The code and Panobench will be released in a subsequent publication.
- oai:arXiv.org:2511.02880v1
- eess.SP
- cs.AI
- cs.CV
- eess.IV
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zehui Zhan, Yaojun Hu, Jiajing Zhan, Wanchen Lian, Wanqing Wu, Jintai Chen
-
-
- Adaptive Internal Calibration for Temperature-Robust mmWave FMCW Radars
- https://arxiv.org/abs/2511.02884
- arXiv:2511.02884v1 Announce Type: cross
-Abstract: We present a novel internal calibration framework for Millimeter-Wave (mmWave) Frequency-Modulated Continuous-Wave (FMCW) radars to ensure robust performance under internal temperature variations, tailored for deployment in dense wireless networks. Our approach mitigates the impact of temperature-induced drifts in radar hardware, enhancing reliability. We propose a temperature compensation model that leverages internal sensor data and signal processing techniques to maintain measurement accuracy. Experimental results demonstrate improved robustness across a range of internal temperature conditions, with minimal computational overhead, ensuring scalability in dense network environments. The framework also incorporates ethical design principles, avoiding reliance on sensitive external data. The proposed scheme reduces the Pearson correlation between the amplitude of the Intermediate Frequency (IF) signal and internal temperature drift by up to 84%, significantly mitigating the temperature drift.
- oai:arXiv.org:2511.02884v1
- eess.SP
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/publicdomain/zero/1.0/
- Dariush Salami, Nima Bahmani, Hüseyin Yiğitler, Stephan Sigg
-
-
- NABench: Large-Scale Benchmarks of Nucleotide Foundation Models for Fitness Prediction
- https://arxiv.org/abs/2511.02888
- arXiv:2511.02888v1 Announce Type: cross
-Abstract: Nucleotide sequence variation can induce significant shifts in functional fitness. Recent nucleotide foundation models promise to predict such fitness effects directly from sequence, yet heterogeneous datasets and inconsistent preprocessing make it difficult to compare methods fairly across DNA and RNA families. Here we introduce NABench, a large-scale, systematic benchmark for nucleic acid fitness prediction. NABench aggregates 162 high-throughput assays and curates 2.6 million mutated sequences spanning diverse DNA and RNA families, with standardized splits and rich metadata. We show that NABench surpasses prior nucleotide fitness benchmarks in scale, diversity, and data quality. Under a unified evaluation suite, we rigorously assess 29 representative foundation models across zero-shot, few-shot prediction, transfer learning, and supervised settings. The results quantify performance heterogeneity across tasks and nucleic-acid types, demonstrating clear strengths and failure modes for different modeling choices and establishing strong, reproducible baselines. We release NABench to advance nucleic acid modeling, supporting downstream applications in RNA/DNA design, synthetic biology, and biochemistry. Our code is available at https://github.com/mrzzmrzz/NABench.
- oai:arXiv.org:2511.02888v1
- q-bio.GN
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Zhongmin Li, Runze Ma, Jiahao Tan, Chengzi Tan, Shuangjia Zheng
-
-
- Optimizing the nnU-Net model for brain tumor (Glioma) segmentation Using a BraTS Sub-Saharan Africa (SSA) dataset
- https://arxiv.org/abs/2511.02893
- arXiv:2511.02893v1 Announce Type: cross
-Abstract: Medical image segmentation is a critical achievement in modern medical science, developed over decades of research. It allows for the exact delineation of anatomical and pathological features in two- or three-dimensional images by utilizing notions like pixel intensity, texture, and anatomical context. With the advent of automated segmentation, physicians and radiologists may now concentrate on diagnosis and treatment planning while intelligent computers perform routine image processing tasks.
- This study used the BraTS Sub-Saharan Africa dataset, a selected subset of the BraTS dataset that included 60 multimodal MRI cases from patients with glioma. Surprisingly, the nnU-Net model trained on the initial 60 cases performed better than the network trained on an offline-augmented dataset of 360 cases. Hypothetically, the offline augmentations introduced artificial anatomical variances or intensity distributions, reducing generalization. In contrast, the original dataset, when paired with nnU-Net's robust online augmentation procedures, maintained realistic variability and produced better results. The study achieved a Dice score of 0.84 for whole tumor segmentation. These findings highlight the significance of data quality and proper augmentation approaches in constructing accurate, generalizable medical image segmentation models, particularly for under-represented regions.
- oai:arXiv.org:2511.02893v1
- eess.IV
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chukwuemeka Arua Kalu, Adaobi Chiazor Emegoakor, Fortune Okafor, Augustine Okoh Uchenna, Chijioke Kelvin Ukpai, Godsent Erere Onyeugbo
-
-
- Domain-Adaptive Transformer for Data-Efficient Glioma Segmentation in Sub-Saharan MRI
- https://arxiv.org/abs/2511.02928
- arXiv:2511.02928v1 Announce Type: cross
-Abstract: Glioma segmentation is critical for diagnosis and treatment planning, yet remains challenging in Sub-Saharan Africa due to limited MRI infrastructure and heterogeneous acquisition protocols that induce severe domain shift. We propose SegFormer3D-plus, a radiomics-guided transformer architecture designed for robust segmentation under domain variability. Our method combines: (1) histogram matching for intensity harmonization across scanners, (2) radiomic feature extraction with PCA-reduced k-means for domain-aware stratified sampling, (3) a dual-pathway encoder with frequency-aware feature extraction and spatial-channel attention, and (4) composite Dice-Cross-Entropy loss for boundary refinement. Pretrained on BraTS 2023 and fine-tuned on BraTS-Africa data, SegFormer3D-plus demonstrates improved tumor subregion delineation and boundary localization across heterogeneous African clinical scans, highlighting the value of radiomics-guided domain adaptation for resource-limited settings.
- oai:arXiv.org:2511.02928v1
- eess.IV
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Ilerioluwakiiye Abolade, Aniekan Udo, Augustine Ojo, Abdulbasit Oyetunji, Hammed Ajigbotosho, Aondana Iorumbur, Confidence Raymond, Maruf Adewole
-
-
- From Narrow to Wide: Autoencoding Transformers for Ultrasound Bandwidth Recovery
- https://arxiv.org/abs/2511.02938
- arXiv:2511.02938v1 Announce Type: cross
-Abstract: Conventional pulse-echo ultrasound suffers when low-cost probes deliver only narrow fractional bandwidths, elongating pulses and erasing high-frequency detail. We address this limitation by learning a data-driven mapping from band-limited to broadband spectrograms of radio-frequency (RF) lines. To this end, a variant of a Tiny Vision Transformer (ViT) auto-encoder is trained on simulation data using a curriculum-weighted loss. On heterogeneous speckle-cyst phantoms, the network reduces image-domain MSE by 90 percent, boosts PSNR by 6.7 dB, and raises SSIM to 0.965 compared with the narrow-band input. It also sharpens point-target rows in a completely unseen resolution phantom, demonstrating strong out-of-distribution generalisation without sacrificing frame rate or phase information. These results indicate that a purely software upgrade can endow installed narrow-band probes with broadband-like performance, potentially widening access to high-resolution ultrasound in resource-constrained settings.
- oai:arXiv.org:2511.02938v1
- eess.SP
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sepideh KhakzadGharamaleki, Hassan Rivaz, Brandon Helfield
-
-
- Analog-to-Digital Converter Based on Voltage-controlled Superconducting Device
- https://arxiv.org/abs/2511.02968
- arXiv:2511.02968v1 Announce Type: cross
-Abstract: The increasing demand for cryogenic electronics in superconducting and quantum computing systems calls for ultra-energy-efficient data conversion architectures that remain functional at deep cryogenic temperatures. In this work, we present the first design of a voltage-controlled superconducting flash analog-to-digital converter (ADC) based on a novel quantum-enhanced Josephson junction field effect transistor (JJFET). Exploiting its strong gate tunability and transistor-like behavior, the JJFET offers a scalable alternative to conventional current-controlled superconducting devices while aligning naturally with CMOS-style design methodologies. Building on our previously developed Verilog-A compact model calibrated to experimental data, we design and simulate a three-bit JJFET-based flash ADC. The core comparator block is realized through careful bias current selection and augmented with a three-terminal nanocryotron to precisely define reference voltages. Cascaded JJFET comparators ensure robust voltage gain, cascadability, and logic-level restoration across stages. Simulation results demonstrate accurate quantization behavior with ultra-low power dissipation, underscoring the feasibility of voltage-driven superconducting mixed-signal circuits. This work establishes a critical step toward unifying superconducting logic and data conversion, paving the way for scalable cryogenic architectures in quantum-classical co-processors, low-power AI accelerators, and next-generation energy-constrained computing platforms.
- oai:arXiv.org:2511.02968v1
- cond-mat.supr-con
- cs.ET
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Md Mazharul Islam, Connor A. Good, Diego Ferrer, Juan P. Mendez, Denis Mamaluy, Wei Pan, Kathleen E Hamilton, Ahmedullah Aziz
-
-
- Towards a geometric characterization of unbounded integer cubic optimization problems via thin rays
- https://arxiv.org/abs/2511.02983
- arXiv:2511.02983v1 Announce Type: cross
-Abstract: We study geometric characterizations of unbounded integer polynomial optimization problems. While unboundedness along a ray fully characterizes unbounded integer linear and quadratic optimization problems, we show that this is not the case for cubic polynomials. To overcome this, we introduce thin rays, which are rays with an arbitrarily small neighborhood, and prove that they characterize unboundedness for integer cubic optimization problems in dimension up to three, and we conjecture that the same holds in all dimensions. Our techniques also provide a complete characterization of unbounded integer quadratic optimization problems in arbitrary dimension, without assuming rational coefficients. These results underscore the significance of thin rays and offer new tools for analyzing integer polynomial optimization problems beyond the quadratic case.
- oai:arXiv.org:2511.02983v1
- math.OC
- cs.DM
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Alberto Del Pia
-
-
- Scalable Single-Cell Gene Expression Generation with Latent Diffusion Models
- https://arxiv.org/abs/2511.02986
- arXiv:2511.02986v1 Announce Type: cross
-Abstract: Computational modeling of single-cell gene expression is crucial for understanding cellular processes, but generating realistic expression profiles remains a major challenge. This difficulty arises from the count nature of gene expression data and complex latent dependencies among genes. Existing generative models often impose artificial gene orderings or rely on shallow neural network architectures. We introduce a scalable latent diffusion model for single-cell gene expression data, which we refer to as scLDM, that respects the fundamental exchangeability property of the data. Our VAE uses fixed-size latent variables leveraging a unified Multi-head Cross-Attention Block (MCAB) architecture, which serves dual roles: permutation-invariant pooling in the encoder and permutation-equivariant unpooling in the decoder. We enhance this framework by replacing the Gaussian prior with a latent diffusion model using Diffusion Transformers and linear interpolants, enabling high-quality generation with multi-conditional classifier-free guidance. We show its superior performance in a variety of experiments for both observational and perturbational single-cell data, as well as downstream tasks like cell-level classification.
- oai:arXiv.org:2511.02986v1
- stat.ML
- cs.LG
- q-bio.GN
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Giovanni Palla, Sudarshan Babu, Payam Dibaeinia, James D. Pearce, Donghui Li, Aly A. Khan, Theofanis Karaletsos, Jakub M. Tomczak
-
-
- Projection-width: a unifying structural parameter for separable discrete optimization
- https://arxiv.org/abs/2511.02990
- arXiv:2511.02990v1 Announce Type: cross
-Abstract: We introduce the notion of projection-width for systems of separable constraints, defined via branch decompositions of variables and constraints. We show that several fundamental discrete optimization and counting problems can be solved in polynomial time when the projection-width is polynomially bounded. These include optimization, counting, top-k, and weighted constraint violation. Our results identify a broad class of tractable nonlinear discrete optimization and counting problems. Even when restricted to the linear setting, they subsume and substantially extend some of the strongest known tractability results across multiple research areas: integer linear optimization, binary polynomial optimization, and Boolean satisfiability. Although these results originated independently within different communities and for seemingly distinct problem classes, our framework unifies and significantly generalizes them under a single structural perspective.
- oai:arXiv.org:2511.02990v1
- math.OC
- cs.DM
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Alberto Del Pia
-
-
- Observer-based neural networks for flow estimation and control
- https://arxiv.org/abs/2511.02995
- arXiv:2511.02995v1 Announce Type: cross
-Abstract: Neural network observers (NNOs) are proposed for real-time estimation of fluid flows, addressing a key challenge in flow control: obtaining real-time flow states from a limited set of sparse and noisy sensor data. For this task, we propose a generalization of the classical Luenberger observer. In the present framework, the estimation loop is composed of subsystems modeled as neural networks (NNs). By combining flow information from selected probes and an NN surrogate model (NNSM) of the flow system, we train NNOs capable of fusing information to provide the best estimation of the states, which can in turn be fed back to an NN controller (NNC). The NNO capabilities are demonstrated for three nonlinear dynamical systems. First, a variation of the Kuramoto-Sivashinsky (KS) equation with control inputs is studied, where variables are sparsely probed. We show that the NNO is able to track states even when probes are contaminated with random noise or with sensors at insufficient sample rates to match the control time step. Then, a confined cylinder flow is investigated, where velocity signals along the cylinder wake are estimated by using a small set of wall pressure sensors. In both the KS and cylinder problems, we show that the estimated states can be used to enable closed-loop control, taking advantage of stabilizing NNCs. Finally, we present a legacy dataset of a turbulent boundary layer experiment, where convolutional NNs (CNNs) are employed to implement the models required for the estimation loop. We show that, by combining low-resolution noise-corrupted sensor data with an imperfect NNSM, it is possible to produce more accurate estimates, outperforming both the direct reconstructions via specialized super-resolution NNs and the direct model propagation from initial conditions.
- oai:arXiv.org:2511.02995v1
- physics.flu-dyn
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Tarc\'isio C. D\'eda, William R. Wolf, Scott T. M. Dawson, Brener L. O. Ramos
-
-
- Unifying Information-Theoretic and Pair-Counting Clustering Similarity
- https://arxiv.org/abs/2511.03000
- arXiv:2511.03000v1 Announce Type: cross
-Abstract: Comparing clusterings is central to evaluating unsupervised models, yet the many existing similarity measures can produce widely divergent, sometimes contradictory, evaluations. Clustering similarity measures are typically organized into two principal families, pair-counting and information-theoretic, reflecting whether they quantify agreement through element pairs or aggregate information across full cluster contingency tables. Prior work has uncovered parallels between these families and applied empirical normalization or chance-correction schemes, but their deeper analytical connection remains only partially understood. Here, we develop an analytical framework that unifies these families through two complementary perspectives. First, both families are expressed as weighted expansions of observed versus expected co-occurrences, with pair-counting arising as a quadratic, low-order approximation and information-theoretic measures as higher-order, frequency-weighted extensions. Second, we generalize pair-counting to $k$-tuple agreement and show that information-theoretic measures can be viewed as systematically accumulating higher-order co-assignment structure beyond the pairwise level. We illustrate the approaches analytically for the Rand index and Mutual Information, and show how other indices in each family emerge as natural extensions. Together, these views clarify when and why the two regimes diverge, relating their sensitivities directly to weighting and approximation order, and provide a principled basis for selecting, interpreting, and extending clustering similarity measures across applications.
- oai:arXiv.org:2511.03000v1
- stat.ML
- cs.IT
- cs.LG
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Alexander J. Gates
-
-
- Precise asymptotic analysis of Sobolev training for random feature models
- https://arxiv.org/abs/2511.03050
- arXiv:2511.03050v1 Announce Type: cross
-Abstract: Gradient information is widely useful and available in applications, and is therefore natural to include in the training of neural networks. Yet little is known theoretically about the impact of Sobolev training -- regression with both function and gradient data -- on the generalization error of highly overparameterized predictive models in high dimensions. In this paper, we obtain a precise characterization of this training modality for random feature (RF) models in the limit where the number of trainable parameters, input dimensions, and training data tend proportionally to infinity. Our model for Sobolev training reflects practical implementations by sketching gradient data onto finite dimensional subspaces. By combining the replica method from statistical physics with linearizations in operator-valued free probability theory, we derive a closed-form description for the generalization errors of the trained RF models. For target functions described by single-index models, we demonstrate that supplementing function data with additional gradient data does not universally improve predictive performance. Rather, the degree of overparameterization should inform the choice of training method. More broadly, our results identify settings where models perform optimally by interpolating noisy function and gradient data.
- oai:arXiv.org:2511.03050v1
- stat.ML
- cond-mat.dis-nn
- cs.LG
- math.PR
- math.ST
- stat.TH
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Katharine E Fisher, Matthew TC Li, Youssef Marzouk, Timo Schorlepp
-
-
- Min-Max Optimization Is Strictly Easier Than Variational Inequalities
- https://arxiv.org/abs/2511.03052
- arXiv:2511.03052v1 Announce Type: cross
-Abstract: Classically, a mainstream approach for solving a convex-concave min-max problem is to instead solve the variational inequality problem arising from its first-order optimality conditions. Is it possible to solve min-max problems faster by bypassing this reduction? This paper initiates this investigation. We show that the answer is yes in the textbook setting of unconstrained quadratic objectives: the optimal convergence rate for first-order algorithms is strictly better for min-max problems than for the corresponding variational inequalities. The key reason that min-max algorithms can be faster is that they can exploit the asymmetry of the min and max variables--a property that is lost in the reduction to variational inequalities. Central to our analyses are sharp characterizations of optimal convergence rates in terms of extremal polynomials which we compute using Green's functions and conformal mappings.
- oai:arXiv.org:2511.03052v1
- math.OC
- cs.DS
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Henry Shugart, Jason M. Altschuler
-
-
- Quantifying Articulatory Coordination as a Biomarker for Schizophrenia
- https://arxiv.org/abs/2511.03084
- arXiv:2511.03084v1 Announce Type: cross
-Abstract: Advances in artificial intelligence (AI) and deep learning have improved diagnostic capabilities in healthcare, yet limited interpretability continues to hinder clinical adoption. Schizophrenia, a complex disorder with diverse symptoms including disorganized speech and social withdrawal, demands tools that capture symptom severity and provide clinically meaningful insights beyond binary diagnosis. Here, we present an interpretable framework that leverages articulatory speech features through eigenspectra difference plots and a weighted sum with exponential decay (WSED) to quantify vocal tract coordination. Eigenspectra plots effectively distinguished complex from simpler coordination patterns, and WSED scores reliably separated these groups, with ambiguity confined to a narrow range near zero. Importantly, WSED scores correlated not only with overall BPRS severity but also with the balance between positive and negative symptoms, reflecting more complex coordination in subjects with pronounced positive symptoms and the opposite trend for stronger negative symptoms. This approach offers a transparent, severity-sensitive biomarker for schizophrenia, advancing the potential for clinically interpretable speech-based assessment tools.
- oai:arXiv.org:2511.03084v1
- eess.AS
- cs.LG
- eess.SP
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Gowtham Premananth, Carol Espy-Wilson
-
-
- D2-UC: A Distributed-Distributed Quantum-Classical Framework for Unit Commitment
- https://arxiv.org/abs/2511.03104
- arXiv:2511.03104v1 Announce Type: cross
-Abstract: This paper introduces D2-UC, a quantum-ready framework for the unit commitment (UC) problem that prepares UC for near-term hybrid quantum-classical solvers by combining distributed classical decomposition with distributed quantum execution. We reformulate deterministic and stochastic UC into a three-block alternating direction method of multipliers (ADMM): (i) a convex quadratic subproblem for dispatch and reserves, (ii) a binary subproblem expressed as a quadratic unconstrained binary optimization (QUBO), and (iii) a proximal slack update for consensus. The core contributions are fivefold. First, we demonstrate how the full UC problem can be expressed as a single monolithic QUBO, establishing a direct interface to quantum solvers. Second, we decompose this large binary block into three type-specific QUBOs for commitment, startup, and shutdown, making the problem more tractable but revealing slower ADMM convergence. Third, we restore local logical couplings through per-unit-time micro-QUBOs, which accelerate convergence. Fourth, we batch micro-QUBOs into K non-overlapping block-diagonal problems, reducing many subproblems to a fixed number of solver-ready QUBOs per iteration, compatible with distributed variational quantum eigensolvers (DVQE). Fifth, we integrate an accept-if-better safeguard with DVQE to stabilize hybrid updates and prevent oscillations. Case studies confirm that the proposed methods deliver feasible schedules, faster convergence, and QUBO sizes aligned with current and near-term quantum hardware capabilities. All detailed data, codes, and parameter values are available at https://github.com/LSU-RAISE-LAB/3B-ADMM-UC-DVQE .
- oai:arXiv.org:2511.03104v1
- quant-ph
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Milad Hasanzadeh, Amin Kargarian
-
-
- EGMOF: Efficient Generation of Metal-Organic Frameworks Using a Hybrid Diffusion-Transformer Architecture
- https://arxiv.org/abs/2511.03122
- arXiv:2511.03122v1 Announce Type: cross
-Abstract: Designing materials with targeted properties remains challenging due to the vastness of chemical space and the scarcity of property-labeled data. While recent advances in generative models offer a promising way for inverse design, most approaches require large datasets and must be retrained for every new target property. Here, we introduce EGMOF (Efficient Generation of MOFs), a hybrid diffusion-transformer framework that overcomes these limitations through a modular, descriptor-mediated workflow. EGMOF decomposes inverse design into two steps: (1) a one-dimensional diffusion model (Prop2Desc) that maps desired properties to chemically meaningful descriptors, followed by (2) a transformer model (Desc2MOF) that generates structures from these descriptors. This modular hybrid design enables minimal retraining and maintains high accuracy even under small-data conditions. On a hydrogen uptake dataset, EGMOF achieved over 95% validity and 84% hit rate, representing significant improvements of up to 57% in validity and 14% in hit rate compared to existing methods, while remaining effective with only 1,000 training samples. Moreover, our model successfully performed conditional generation across 29 diverse property datasets, including CoREMOF, QMOF, and text-mined experimental datasets, whereas previous models have not. This work presents a data-efficient, generalizable approach to the inverse design of diverse MOFs and highlights the potential of modular inverse design workflows for broader materials discovery.
- oai:arXiv.org:2511.03122v1
- cond-mat.mtrl-sci
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Seunghee Han, Yeonghun Kang, Taeun Bae, Varinia Bernales, Alan Aspuru-Guzik, Jihan Kim
-
-
- Provable Accelerated Bayesian Optimization with Knowledge Transfer
- https://arxiv.org/abs/2511.03125
- arXiv:2511.03125v1 Announce Type: cross
-Abstract: We study how Bayesian optimization (BO) can be accelerated on a target task with historical knowledge transferred from related source tasks. Existing works on BO with knowledge transfer either do not have theoretical guarantees or achieve the same regret as BO in the non-transfer setting, $\tilde{\mathcal{O}}(\sqrt{T \gamma_f})$, where $T$ is the number of evaluations of the target function and $\gamma_f$ denotes its information gain. In this paper, we propose the DeltaBO algorithm, in which a novel uncertainty-quantification approach is built on the difference function $\delta$ between the source and target functions, which are allowed to belong to different reproducing kernel Hilbert spaces (RKHSs). Under mild assumptions, we prove that the regret of DeltaBO is of order $\tilde{\mathcal{O}}(\sqrt{T (T/N + \gamma_\delta)})$, where $N$ denotes the number of evaluations from source tasks and typically $N \gg T$. In many applications, source and target tasks are similar, which implies that $\gamma_\delta$ can be much smaller than $\gamma_f$. Empirical studies on both real-world hyperparameter tuning tasks and synthetic functions show that DeltaBO outperforms other baseline methods and support our theoretical claims.
- oai:arXiv.org:2511.03125v1
- stat.ML
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Haitao Lin, Boxin Zhao, Mladen Kolar, Chong Liu
-
-
- Optimal Boundary Control of Diffusion on Graphs via Linear Programming
- https://arxiv.org/abs/2511.03129
- arXiv:2511.03129v1 Announce Type: cross
-Abstract: We propose a linear programming (LP) framework for steady-state diffusion and flux optimization on geometric networks. The state variable satisfies a discrete diffusion law on a weighted, oriented graph, where conductances are scaled by edge lengths to preserve geometric fidelity. Boundary potentials act as controls that drive interior fluxes according to a linear network Laplacian. The optimization problem enforces physically meaningful sign and flux-cap constraints at all boundary edges, derived directly from a gradient bound. This yields a finite-dimensional LP whose feasible set is polyhedral, and whose boundedness and solvability follow from simple geometric or algebraic conditions on the network data.
- We prove that under the absence of negative recession directions--automatically satisfied in the presence of finite box bounds, flux caps, or sign restrictions--the LP admits a global minimizer. Several sufficient conditions guaranteeing boundedness of the feasible region are identified, covering both full-rank and rank-deficient flux maps. The analysis connects classical results such as the Minkowski--Weyl decomposition, Hoffman's bound, and the fundamental theorem of linear programming with modern network-based diffusion modeling.
- Two large-scale examples illustrate the framework: (i) a typical large stadium in a major modern city, which forms a single connected component with relatively uniform corridor widths, and (ii) a complex street network emanating from a large, historical city center, which forms a multi-component system.
- oai:arXiv.org:2511.03129v1
- math.OC
- cs.AI
- physics.comp-ph
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Harbir Antil, Rainald L\"ohner, Felipe P\'erez
-
-
- Balanced contributions, consistency, and value for games with externalities
- https://arxiv.org/abs/2511.03145
- arXiv:2511.03145v1 Announce Type: cross
-Abstract: We consider fair and consistent extensions of the Shapley value for games with externalities. Based on the restriction identified by Casajus et al. (2024, Games Econ. Behavior 147, 88-146), we define balanced contributions, Sobolev's consistency, and Hart and Mas-Colell's consistency for games with externalities, and we show that these properties lead to characterizations of the generalization of the Shapley value introduced by Macho-Stadler et al. (2007, J. Econ. Theory 135, 339-356), that parallel important characterizations of the Shapley value.
- oai:arXiv.org:2511.03145v1
- econ.TH
- cs.GT
- math.CO
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-sa/4.0/
- Andr\'e Casajus, Yukihiko Funaki, Frank Huettner
-
-
- Modeling Headway in Heterogeneous and Mixed Traffic Flow: A Statistical Distribution Based on a General Exponential Function
- https://arxiv.org/abs/2511.03154
- arXiv:2511.03154v1 Announce Type: cross
-Abstract: The ability of existing headway distributions to accurately reflect the diverse behaviors and characteristics in heterogeneous traffic (different types of vehicles) and mixed traffic (human-driven vehicles with autonomous vehicles) is limited, leading to unsatisfactory goodness of fit. To address these issues, we modified the exponential function to obtain a novel headway distribution. Rather than employing Euler's number (e) as the base of the exponential function, we utilized a real-number base to provide greater flexibility in modeling the observed headway. However, the proposed function is not a probability function, so we normalize it to calculate probabilities and derive the closed-form equation. In this study, we conducted a comprehensive experiment with five open datasets: highD, exiD, NGSIM, Waymo, and Lyft, to evaluate the performance of the proposed distribution and compared it with six existing distributions under mixed and heterogeneous traffic flow. The results revealed that the proposed distribution not only captures the fundamental characteristics of headway distribution but also provides physically meaningful parameters that describe the distribution shape of observed headways. Under heterogeneous flow on highways (i.e., uninterrupted traffic flow), the proposed distribution outperforms other candidate distributions. Under urban road conditions (i.e., interrupted traffic flow), including heterogeneous and mixed traffic, the proposed distribution still achieves decent results.
- oai:arXiv.org:2511.03154v1
- stat.AP
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Natchaphon Leungbootnak, Zihao Li, Zihang Wei, Dominique Lord, Yunlong Zhang
-
-
- Frequency- and Amplitude-Modulated Gates for Universal Quantum Control
- https://arxiv.org/abs/2511.03164
- arXiv:2511.03164v1 Announce Type: cross
-Abstract: Achieving high-fidelity single- and two-qubit gates is essential for executing arbitrary digital quantum algorithms and for building error-corrected quantum computers. We propose a theoretical framework for implementing quantum gates using frequency- and amplitude-modulated microwave control, which extends conventional amplitude modulation by introducing frequency modulation as an additional degree of control. Our approach operates on fixed-frequency qubits, converting the need for qubit frequency tunability into drive frequency modulation. Using Floquet theory, we analyze and design these drives for optimal fidelity within specified criteria. Our framework spans adiabatic to nonadiabatic gates within the Floquet framework, ensuring broad applicability across gate types and control schemes. Using typical transmon qubit parameters in numerical simulations, we demonstrate a universal gate set--including the X, Hadamard, phase, and CZ gates--with control error well below 0.1% and gate times of 25-40 ns for single-qubit operations and 125-135 ns for two-qubit operations. Furthermore, we show an always-on CZ gate tailored for driven qubits, which has gate times of 80-90 ns.
- oai:arXiv.org:2511.03164v1
- quant-ph
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qi Ding, Shoumik Chowdhury, Agustin Di Paolo, R\'eouven Assouly, Alan V. Oppenheim, Jeffrey A. Grover, William D. Oliver
-
-
- Optimizing Earth-Moon Transfer and Cislunar Navigation: Integrating Low-Energy Trajectories, AI Techniques and GNSS-R Technologies
- https://arxiv.org/abs/2511.03173
- arXiv:2511.03173v1 Announce Type: cross
-Abstract: The rapid growth of cislunar activities, including lunar landings, the Lunar Gateway, and in-space refueling stations, requires advances in cost-efficient trajectory design and reliable integration of navigation and remote sensing. Traditional Earth-Moon transfers suffer from rigid launch windows and high propellant demands, while Earth-based GNSS systems provide little to no coverage beyond geostationary orbit. This limits autonomy and environmental awareness in cislunar space. This review compares four major transfer strategies by evaluating velocity requirements, flight durations, and fuel efficiency, and by identifying their suitability for both crewed and robotic missions. The emerging role of artificial intelligence and machine learning is highlighted: convolutional neural networks support automated crater recognition and digital terrain model generation, while deep reinforcement learning enables adaptive trajectory refinement during descent and landing to reduce risk and decision latency. The study also examines how GNSS-Reflectometry and advanced Positioning, Navigation, and Timing architectures can extend navigation capabilities beyond current limits. GNSS-R can act as a bistatic radar for mapping lunar ice, soil properties, and surface topography, while PNT systems support autonomous rendezvous, Lagrange point station-keeping, and coordinated satellite swarm operations. Combining these developments establishes a scalable framework for sustainable cislunar exploration and long-term human and robotic presence.
- oai:arXiv.org:2511.03173v1
- astro-ph.EP
- cs.AI
- cs.LG
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Published in the Proceedings of 2nd IAASPAICE 2025
- Arsalan Muhammad, Wasiu Akande Ahmed, Omada Friday Ojonugwa, Paul Puspendu Biswas
-
-
- Statistical Properties of Rectified Flow
- https://arxiv.org/abs/2511.03193
- arXiv:2511.03193v1 Announce Type: cross
-Abstract: Rectified flow (Liu et al., 2022; Liu, 2022; Wu et al., 2023) is a method for defining a transport map between two distributions, and enjoys popularity in machine learning, although theoretical results supporting the validity of these methods are scant. The rectified flow can be regarded as an approximation to optimal transport, but in contrast to other transport methods that require optimization over a function space, computing the rectified flow only requires standard statistical tools such as regression or density estimation. Because of this, one can leverage standard data analysis tools for regression and density estimation to develop empirical versions of transport maps. We study some structural properties of the rectified flow, including existence, uniqueness, and regularity, as well as the related statistical properties, such as rates of convergence and central limit theorems, for some selected estimators. To do so, we analyze separately the bounded and unbounded cases as each presents unique challenges. In both cases, we are able to establish convergence at faster rates than the ones for the usual nonparametric regression and density estimation.
- oai:arXiv.org:2511.03193v1
- stat.TH
- cs.LG
- math.ST
- stat.ME
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Gonzalo Mena, Arun Kumar Kuchibhotla, Larry Wasserman
-
-
- Provable Separations between Memorization and Generalization in Diffusion Models
- https://arxiv.org/abs/2511.03202
- arXiv:2511.03202v1 Announce Type: cross
-Abstract: Diffusion models have achieved remarkable success across diverse domains, but they remain vulnerable to memorization -- reproducing training data rather than generating novel outputs. This not only limits their creative potential but also raises concerns about privacy and safety. While empirical studies have explored mitigation strategies, theoretical understanding of memorization remains limited. We address this gap through developing a dual-separation result via two complementary perspectives: statistical estimation and network approximation. From the estimation side, we show that the ground-truth score function does not minimize the empirical denoising loss, creating a separation that drives memorization. From the approximation side, we prove that implementing the empirical score function requires network size to scale with sample size, establishing a separation from the more compact network representation of the ground-truth score function. Guided by these insights, we develop a pruning-based method that reduces memorization while maintaining generation quality in diffusion transformers.
- oai:arXiv.org:2511.03202v1
- stat.ML
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Zeqi Ye, Qijie Zhu, Molei Tao, Minshuo Chen
-
-
- RKUM: An R Package for Robust Kernel Unsupervised Methods
- https://arxiv.org/abs/2511.03216
- arXiv:2511.03216v1 Announce Type: cross
-Abstract: RKUM is an R package developed for implementing robust kernel-based unsupervised methods. It provides functions for estimating the robust kernel covariance operator (CO) and the robust kernel cross-covariance operator (CCO) using generalized loss functions instead of the conventional quadratic loss. These operators form the foundation of robust kernel learning and enable reliable analysis under contaminated or noisy data conditions. The package includes implementations of robust kernel canonical correlation analysis (Kernel CCA), as well as the influence function (IF) for both standard and multiple kernel CCA frameworks. The influence function quantifies sensitivity and helps detect influential or outlying observations across two-view and multi-view datasets. Experiments using synthesized two-view and multi-view data demonstrate that the IF of the standard kernel CCA effectively identifies outliers, while the robust kernel methods implemented in RKUM exhibit reduced sensitivity to contamination. Overall, RKUM provides an efficient and extensible platform for robust kernel-based analysis in high-dimensional data applications.
- oai:arXiv.org:2511.03216v1
- stat.ML
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Md Ashad Alam
-
-
- Topography, climate, land cover, and biodiversity: Explaining endemic richness and management implications on a Mediterranean island
- https://arxiv.org/abs/2511.03242
- arXiv:2511.03242v1 Announce Type: cross
-Abstract: Island endemism is shaped by complex interactions among environmental, ecological, and evolutionary factors, yet the relative contributions of topography, climate, and land cover remain incompletely quantified. We investigated the drivers of endemic plant richness across Crete, a Mediterranean biodiversity hotspot, using spatially explicit data on species distributions, topographic complexity, climatic variability, land cover, and soil characteristics. Artificial Neural Network models, a machine learning tool, were employed to assess the relative importance of these predictors and to identify hotspots of endemism. We found that total species richness, elevation range, and climatic variability were the strongest predictors of endemic richness, reflecting the role of biodiversity, topographic heterogeneity, and climatic gradients in generating diverse habitats and micro-refugia that promote speciation and buffer extinction risk. Endemic hotspots only partially overlapped with areas of high total species richness, indicating that total species richness was the best surrogate among those examined, yet an imperfect one. These environmentally heterogeneous areas also provide critical ecosystem services, including soil stabilization, pollination, and cultural value, which are increasingly threatened by tourism, renewable energy development, land-use change, and climate impacts. Our findings underscore the importance of prioritizing mountainous and climatically variable regions in conservation planning, integrating ecosystem service considerations, and accounting for within-island spatial heterogeneity. By explicitly linking the environmental drivers of endemism to both biodiversity patterns and ecosystem function, this study provides a framework for evidence-based conservation planning in Crete and other Mediterranean islands with similar geological and biogeographic contexts.
- oai:arXiv.org:2511.03242v1
- q-bio.PE
- cs.LG
- stat.OT
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aristides Moustakas, Ioannis N Vogiatzakis
-
-
- Influence of Data Dimensionality Reduction Methods on the Effectiveness of Quantum Machine Learning Models
- https://arxiv.org/abs/2511.03320
- arXiv:2511.03320v1 Announce Type: cross
-Abstract: Data dimensionality reduction techniques are often utilized in the implementation of Quantum Machine Learning (QML) models to address two significant issues: the constraints of NISQ quantum devices, which are characterized by noise and a limited number of qubits, and the challenge of simulating a large number of qubits on classical devices. This practice also raises concerns over the scalability of these approaches, as dimensionality reduction methods are slow to adapt to large datasets. In this article, we analyze how data reduction methods affect different QML models. We conduct this experiment over several generated datasets, quantum machine learning algorithms, quantum data encoding methods, and data reduction methods. All these models were evaluated on performance metrics such as accuracy, precision, recall, and F1 score. Our findings lead us to conclude that the usage of data dimensionality reduction methods results in skewed performance metric values, and hence in wrongly estimating the actual performance of quantum machine learning models. Several factors, alongside the data dimensionality reduction methods themselves, worsen this problem, such as the characteristics of the datasets, the classical-to-quantum information embedding methods, the percentage of feature reduction, the classical components associated with quantum models, and the structure of the quantum machine learning models. We consistently observed accuracy differences of 14% to 48% between models using data reduction and those not using it. Apart from this, our observations have shown that some data reduction methods tend to perform better for specific data embedding methodologies and ansatz constructions.
- oai:arXiv.org:2511.03320v1
- quant-ph
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Aakash Ravindra Shinde, Jukka K. Nurminen
-
-
- Exploring Topologies in Quantum Annealing: A Hardware-Aware Perspective
- https://arxiv.org/abs/2511.03327
- arXiv:2511.03327v1 Announce Type: cross
-Abstract: Quantum Annealing (QA) offers a promising framework for solving NP-hard optimization problems, but its effectiveness is constrained by the topology of the underlying quantum hardware. Solving an optimization problem $P$ via QA involves a hardware-aware circuit compilation which requires representing $P$ as a graph $G_P$ and embedding it into the hardware connectivity graph $G_Q$ that defines how qubits connect to each other in a QA-based quantum processing unit (QPU).
- Minor Embedding (ME) is a possible operational form of this hardware-aware compilation. ME heuristically builds a map that associates each node of $G_P$ -- the logical variables of $P$ -- to a chain of adjacent nodes in $G_Q$ by means of one of its minors, so that the arcs of $G_P$ are preserved as physical connections among qubits in $G_Q$.
- The static topology of hardwired qubits can clearly lead to inefficient compilations because $G_Q$ currently cannot be a clique. We propose a methodology and a set of criteria to evaluate how the hardware topology $G_Q$ can negatively affect the embedded problem, thus making the quantum optimization more sensitive to noise.
- We evaluate the result of ME across two QPU topologies: Zephyr graphs (used in current D-Wave systems) and Havel-Hakimi graphs, which allow controlled variation of the average node degree. This enables us to study how the ratio `number of nodes/number of incident arcs per node' affects ME success rates to map $G_P$ into a minor of $G_Q$.
- Our findings, obtained through ME executed on classical, i.e. non-quantum, architectures, suggest that Havel-Hakimi-based topologies, on average, require shorter qubit chains in the minor of $G_P$, exhibiting smoother scaling of the largest embeddable $G_P$ as the QPU size increases. These characteristics indicate their potential as alternative designs for QA-based QPUs.
- oai:arXiv.org:2511.03327v1
- quant-ph
- cs.PF
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Mario Bifulco, Luca Roversi
-
-
- Extension of the Gy\'arf\'as-Sumner conjecture to signed graphs
- https://arxiv.org/abs/2511.03335
- arXiv:2511.03335v1 Announce Type: cross
-Abstract: The balanced chromatic number of a signed graph G is the minimum number of balanced sets that cover all vertices of G. Studying structural conditions which imply bounds on the balanced chromatic number of signed graphs is among the most fundamental problems in graph theory. In this work, we initiate the study of coloring hereditary classes of signed graphs. More precisely, we say that a set F = {F_1, F_2, ..., F_l} is a GS (for Gy\'arf\'as-Sumner) set if there exists a constant c such that signed graphs with no induced subgraph switching equivalent to a member of F admit a balanced c-coloring. The focus of this work is to study GS sets of order 2. We show that if F is a GS set of order 2, then F_1 is either (K_3, -) or (K_4, -), and F_2 is a linear forest. In the case of F_1 = (K_3, -), we show that any choice of a linear forest for F_2 works. In the case of F_1 = (K_4, -), we show that if each connected component of F_2 is a path of length at most 4, then {F_1, F_2} is a GS set.
- oai:arXiv.org:2511.03335v1
- math.CO
- cs.DM
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Guillaume Aubian, Allen Ibiapina, Luis Kuffner, Reza Naserasr, Cyril Pujol, Cl\'eoph\'ee Robin, Huan Zhou
-
-
- audio2chart: End to End Audio Transcription into playable Guitar Hero charts
- https://arxiv.org/abs/2511.03337
- arXiv:2511.03337v1 Announce Type: cross
-Abstract: This work introduces audio2chart, a framework for the automatic generation of Guitar Hero style charts directly from raw audio. The task is formalized as a sequence prediction problem, where models are trained to generate discrete chart tokens aligned with the audio on discrete time steps. An unconditional baseline demonstrates strong predictive performance, while the addition of audio conditioning yields consistent improvements across accuracy based metrics. This work demonstrates that incorporating audio conditioning is both feasible and effective for improving note prediction in automatic chart generation. The complete codebase for training and inference is publicly available on GitHub supporting reproducible research on neural chart generation. A family of pretrained models is released on Hugging Face.
- oai:arXiv.org:2511.03337v1
- eess.AS
- cs.SD
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Riccardo Tripodi
-
-
- Open Source State-Of-the-Art Solution for Romanian Speech Recognition
- https://arxiv.org/abs/2511.03361
- arXiv:2511.03361v1 Announce Type: cross
-Abstract: In this work, we present a new state-of-the-art Romanian Automatic Speech Recognition (ASR) system based on NVIDIA's FastConformer architecture--explored here for the first time in the context of Romanian. We train our model on a large corpus of mostly weakly supervised transcriptions totaling over 2,600 hours of speech. Leveraging a hybrid decoder with both Connectionist Temporal Classification (CTC) and Token-Duration Transducer (TDT) branches, we evaluate a range of decoding strategies including greedy, ALSD, and CTC beam search with a 6-gram token-level language model. Our system achieves state-of-the-art performance across all Romanian evaluation benchmarks, including read, spontaneous, and domain-specific speech, with up to 27% relative WER reduction compared to previous best-performing systems. In addition to improved transcription accuracy, our approach demonstrates practical decoding efficiency, making it suitable for both research and deployment in low-latency ASR applications.
- oai:arXiv.org:2511.03361v1
- eess.AS
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Gabriel Pirlogeanu, Alexandru-Lucian Georgescu, Horia Cucu
-
-
- Morpho-Genomic Deep Learning for Ovarian Cancer Subtype and Gene Mutation Prediction from Histopathology
- https://arxiv.org/abs/2511.03365
- arXiv:2511.03365v1 Announce Type: cross
-Abstract: Ovarian cancer remains one of the most lethal gynecological malignancies, largely due to late diagnosis and extensive heterogeneity across subtypes. Current diagnostic methods are limited in their ability to reveal underlying genomic variations essential for precision oncology. This study introduces a novel hybrid deep learning pipeline that integrates quantitative nuclear morphometry with deep convolutional image features to perform ovarian cancer subtype classification and gene mutation inference directly from Hematoxylin and Eosin (H&E) histopathological images. Using $\sim45,000$ image patches sourced from The Cancer Genome Atlas (TCGA) and public datasets, a fusion model combining a ResNet-50 Convolutional Neural Network (CNN) encoder and a Vision Transformer (ViT) was developed. This model successfully captured both local morphological texture and global tissue context. The pipeline achieved a robust overall subtype classification accuracy of $84.2\%$ (Macro AUC of $0.87 \pm 0.03$). Crucially, the model demonstrated the capacity for gene mutation inference with moderate-to-high accuracy: $AUC_{TP53} = 0.82 \pm 0.02$, $AUC_{BRCA1} = 0.76 \pm 0.04$, and $AUC_{ARID1A} = 0.73 \pm 0.05$. Feature importance analysis established direct quantitative links, revealing that nuclear solidity and eccentricity were the dominant predictors for TP53 mutation. These findings validate that quantifiable histological phenotypes encode measurable genomic signals, paving the way for cost-effective, precision histopathology in ovarian cancer triage and diagnosis.
- oai:arXiv.org:2511.03365v1
- eess.IV
- cs.CV
- q-bio.QM
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Gabriela Fernandes
-
-
- Computational Imaging Meets LLMs: Zero-Shot IDH Mutation Prediction in Brain Gliomas
- https://arxiv.org/abs/2511.03376
- arXiv:2511.03376v1 Announce Type: cross
-Abstract: We present a framework that combines Large Language Models with computational image analytics for non-invasive, zero-shot prediction of IDH mutation status in brain gliomas. For each subject, coregistered multi-parametric MRI scans and multi-class tumor segmentation maps were processed to extract interpretable semantic (visual) attributes and quantitative features, serialized in a standardized JSON file, and used to query GPT 4o and GPT 5 without fine-tuning. We evaluated this framework on six publicly available datasets (N = 1427) and results showcased high accuracy and balanced classification performance across heterogeneous cohorts, even in the absence of manual annotations. GPT 5 outperformed GPT 4o in context-driven phenotype interpretation. Volumetric features emerged as the most important predictors, supplemented by subtype-specific imaging markers and clinical information. Our results demonstrate the potential of integrating LLM-based reasoning with computational image analytics for precise, non-invasive tumor genotyping, advancing diagnostic strategies in neuro-oncology. The code is available at https://github.com/ATPLab-LUMS/CIM-LLM.
- oai:arXiv.org:2511.03376v1
- eess.IV
- cs.AI
- q-bio.QM
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Syed Muqeem Mahmood, Hassan Mohy-ud-Din
-
-
- Terracini matroids: algebraic matroids of secants and embedded joins
- https://arxiv.org/abs/2511.03389
- arXiv:2511.03389v1 Announce Type: cross
-Abstract: Applications of algebraic geometry have sparked much recent work on algebraic matroids. An algebraic matroid encodes algebraic dependencies among coordinate functions on a variety.
- We study the behavior of algebraic matroids under joins and secants of varieties. Motivated by Terracini's lemma, we introduce the notion of a Terracini union of matroids, which captures when the algebraic matroid of a join coincides with the matroid union of the algebraic matroids of its summands. We illustrate applications of our results with a discussion of the implications for toric surfaces and threefolds.
- oai:arXiv.org:2511.03389v1
- math.CO
- cs.SC
- math.AC
- math.AG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fatemeh Mohammadi, Jessica Sidman, Louis Theran
-
-
- Seeing What You Say: Expressive Image Generation from Speech
- https://arxiv.org/abs/2511.03423
- arXiv:2511.03423v1 Announce Type: cross
-Abstract: This paper proposes VoxStudio, the first unified and end-to-end speech-to-image model that generates expressive images directly from spoken descriptions by jointly aligning linguistic and paralinguistic information. At its core is a speech information bottleneck (SIB) module, which compresses raw speech into compact semantic tokens, preserving prosody and emotional nuance. By operating directly on these tokens, VoxStudio eliminates the need for an additional speech-to-text system, which often ignores the hidden details beyond text, e.g., tone or emotion. We also release VoxEmoset, a large-scale paired emotional speech-image dataset built via an advanced TTS engine to affordably generate richly expressive utterances. Comprehensive experiments on the SpokenCOCO, Flickr8kAudio, and VoxEmoset benchmarks demonstrate the feasibility of our method and highlight key challenges, including emotional consistency and linguistic ambiguity, paving the way for future research.
- oai:arXiv.org:2511.03423v1
- eess.AS
- cs.CV
- cs.MM
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Jiyoung Lee, Song Park, Sanghyuk Chun, Soo-Whan Chung
-
-
- A Support-Set Algorithm for Optimization Problems with Nonnegative and Orthogonal Constraints
- https://arxiv.org/abs/2511.03443
- arXiv:2511.03443v1 Announce Type: cross
-Abstract: In this paper, we investigate optimization problems with nonnegative and orthogonal constraints, where any feasible matrix of size $n \times p$ exhibits a sparsity pattern such that each row accommodates at most one nonzero entry. Our analysis demonstrates that, by fixing the support set, the global solution of the minimization subproblem for the proximal linearization of the objective function can be computed in closed form with at most $n$ nonzero entries. Exploiting this structural property offers a powerful avenue for dramatically enhancing computational efficiency. Guided by this insight, we propose a support-set algorithm preserving strictly the feasibility of iterates. A central ingredient is a strategically devised update scheme for support sets that adjusts the placement of nonzero entries. We establish the global convergence of the support-set algorithm to a first-order stationary point, and show that its iteration complexity required to reach an $\epsilon$-approximate first-order stationary point is $O (\epsilon^{-2})$. Numerical results are strongly in favor of our algorithm in real-world applications, including nonnegative PCA, clustering, and community detection.
- oai:arXiv.org:2511.03443v1
- math.OC
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lei Wang, Xin Liu, Xiaojun Chen
-
-
- Explicit Ensemble Learning Surrogate for Joint Chance-Constrained Optimal Power Flow
- https://arxiv.org/abs/2511.03515
- arXiv:2511.03515v1 Announce Type: cross
-Abstract: The increasing penetration of renewable generation introduces uncertainty into power systems, challenging traditional deterministic optimization methods. Chance-constrained optimization offers an approach to balancing cost and risk; however, incorporating joint chance constraints introduces computational challenges. This paper presents an ensemble support vector machine surrogate for joint chance-constrained optimal power flow, where multiple linear classifiers are trained on simulated optimal power flow data and embedded as tractable hyperplane constraints via Big--M reformulations. The surrogate yields a polyhedral approximation of probabilistic line flow limits that preserves interpretability and scalability. Numerical experiments on the IEEE 118-bus system show that the proposed method achieves near-optimal costs with a negligible average error of $0.03\%$. These results demonstrate the promise of ensemble surrogates as efficient and transparent tools for risk-aware optimization of power systems.
- oai:arXiv.org:2511.03515v1
- math.OC
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Amir Bahador Javadi, Amin Kargarian
-
-
- The Structure of Cross-Validation Error: Stability, Covariance, and Minimax Limits
- https://arxiv.org/abs/2511.03554
- arXiv:2511.03554v1 Announce Type: cross
-Abstract: Despite ongoing theoretical research on cross-validation (CV), many theoretical questions about CV remain wide open. This motivates our investigation into how properties of algorithm-distribution pairs can affect the choice of the number of folds in $k$-fold cross-validation.
- Our results consist of a novel decomposition of the mean-squared error of cross-validation for risk estimation, which explicitly captures the correlations of error estimates across overlapping folds and includes a novel algorithmic stability notion, squared loss stability, that is considerably weaker than the typically required hypothesis stability in other comparable works.
- Furthermore, we prove:
- 1. For every learning algorithm that minimizes empirical error, a minimax lower bound on the mean-squared error of $k$-fold CV estimating the population risk $L_\mathcal{D}$: \[ \min_{k \mid n}\; \max_{\mathcal{D}}\; \mathbb{E}\!\left[\big(\widehat{L}_{\mathrm{CV}}^{(k)} - L_{\mathcal{D}}\big)^{2}\right] \;=\; \Omega\!\big(\sqrt{k}/n\big), \] where $n$ is the sample size and $k$ the number of folds. This shows that even under idealized conditions, for large values of $k$, CV cannot attain the optimum of order $1/n$ achievable by a validation set of size $n$, reflecting an inherent penalty caused by dependence between folds.
- 2. Complementing this, we exhibit learning rules for which \[
- \max_{\mathcal{D}}\; \mathbb{E}\!\left[\big(\widehat{L}_{\mathrm{CV}}^{(k)} - L_{\mathcal{D}}\big)^{2}\right] \;=\; \Omega(k/n), \] matching (up to constants) the accuracy of a hold-out estimator of a single fold of size $n/k$.
- Together these results delineate the fundamental trade-off in resampling-based risk estimation: CV cannot fully exploit all $n$ samples for unbiased risk evaluation, and its minimax performance is pinned between the $k/n$ and $\sqrt{k}/n$ regimes.
- oai:arXiv.org:2511.03554v1
- math.ST
- cs.LG
- stat.TH
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ido Nachum, R\"udiger Urbanke, Thomas Weinberger
-
-
- Improving Directions in Mixed Integer Bilevel Linear Optimization
- https://arxiv.org/abs/2511.03566
- arXiv:2511.03566v1 Announce Type: cross
-Abstract: We consider the central role of improving directions in solution methods for mixed integer bilevel linear optimization problems (MIBLPs). Current state-of-the-art methods for solving MIBLPs employ the branch-and-cut framework originally developed for solving mixed integer linear optimization problems. This approach relies on oracles for two kinds of subproblems: those for checking whether a candidate pair of leader's and follower's decisions is bilevel feasible, and those required for generating valid inequalities. Typically, these two types of oracles are managed separately, but in this work, we explore their close connection and propose a solution framework based on solving a single type of subproblem: determining whether there exists a so-called improving feasible direction for the follower's problem. Solution of this subproblem yields information that can be used both to check feasibility and to generate strong valid inequalities. Building on prior works, we expose the foundational role of improving directions in enforcing the follower's optimality condition and extend a previously known hierarchy of optimality-based relaxations to the mixed-integer setting, showing that the associated relaxed feasible regions coincide exactly with the closure associated with intersection cuts derived from improving directions. Numerical results with an implementation using a modified version of the open source solver MibS show that this approach can yield practical improvements.
- oai:arXiv.org:2511.03566v1
- math.OC
- cs.MS
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Federico Battista, Ted K. Ralphs
-
-
- Exploiting Over-Approximation Errors as Preview Information for Nonlinear Control
- https://arxiv.org/abs/2511.03577
- arXiv:2511.03577v1 Announce Type: cross
-Abstract: We study the control of nonlinear constrained systems via over-approximations. Our key observation is that the over-approximation error, rather than being an unknown disturbance, can be exploited as input-dependent preview information. This leads to the notion of informed policies, which depend on both the state and the error. We formulate the concretization problem -- recovering a valid input for the true system from a preview-based policy -- as a fixed-point equation. Existence of solutions follows from the Brouwer fixed-point theorem, while efficient computation is enabled through closed-form, linear, or convex programs for input-affine systems, and through an iterative method based on the Banach fixed-point theorem for nonlinear systems.
- oai:arXiv.org:2511.03577v1
- math.OC
- cs.SY
- eess.SY
- math.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Antoine Aspeel, Antoine Girard, Thiago Alves Lima
-
-
- Characterizations of undirected 2-quasi best match graphs
- https://arxiv.org/abs/2511.03592
- arXiv:2511.03592v1 Announce Type: cross
-Abstract: Bipartite best match graphs (BMG) and their generalizations arise in mathematical phylogenetics as combinatorial models describing evolutionary relationships among related genes in a pair of species. In this work, we characterize the class of \emph{undirected 2-quasi-BMGs} (un2qBMGs), which form a proper subclass of the $P_6$-free chordal bipartite graphs. We show that un2qBMGs are exactly the class of bipartite graphs free of $P_6$, $C_6$, and the eight-vertex Sunlet$_4$ graph. Equivalently, a bipartite graph $G$ is un2qBMG if and only if every connected induced subgraph contains a ``heart-vertex'' which is adjacent to all the vertices of the opposite color. We further provide a $O(|V(G)|^3)$ algorithm for the recognition of un2qBMGs that, in the affirmative case, constructs a labeled rooted tree that ``explains'' $G$. Finally, since un2qBMGs coincide with the $(P_6,C_6)$-free bi-cographs, they can also be recognized in linear time.
- oai:arXiv.org:2511.03592v1
- math.CO
- cs.DM
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Annachiara Korchmaros, Guillaume E. Scholz, Peter F. Stadler
-
-
- Vector-valued self-normalized concentration inequalities beyond sub-Gaussianity
- https://arxiv.org/abs/2511.03606
- arXiv:2511.03606v1 Announce Type: cross
-Abstract: The study of self-normalized processes plays a crucial role in a wide range of applications, from sequential decision-making to econometrics. While the behavior of self-normalized concentration has been widely investigated for scalar-valued processes, vector-valued processes remain comparatively underexplored, especially outside of the sub-Gaussian framework. In this contribution, we provide concentration bounds for self-normalized processes with light tails beyond sub-Gaussianity (such as Bennett or Bernstein bounds). We illustrate the relevance of our results in the context of online linear regression, with applications in (kernelized) linear bandits.
- oai:arXiv.org:2511.03606v1
- stat.ML
- cs.LG
- math.ST
- stat.TH
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Diego Martinez-Taboada, Tomas Gonzalez, Aaditya Ramdas
-
-
- LiveTradeBench: Seeking Real-World Alpha with Large Language Models
- https://arxiv.org/abs/2511.03628
- arXiv:2511.03628v1 Announce Type: cross
-Abstract: Large language models (LLMs) achieve strong performance across benchmarks--from knowledge quizzes and math reasoning to web-agent tasks--but these tests occur in static settings, lacking real dynamics and uncertainty. Consequently, they evaluate isolated reasoning or problem-solving rather than decision-making under uncertainty. To address this, we introduce LiveTradeBench, a live trading environment for evaluating LLM agents in realistic and evolving markets. LiveTradeBench follows three design principles: (i) Live data streaming of market prices and news, eliminating dependence on offline backtesting and preventing information leakage while capturing real-time uncertainty; (ii) a portfolio-management abstraction that extends control from single-asset actions to multi-asset allocation, integrating risk management and cross-asset reasoning; and (iii) multi-market evaluation across structurally distinct environments--U.S. stocks and Polymarket prediction markets--differing in volatility, liquidity, and information flow. At each step, an agent observes prices, news, and its portfolio, then outputs percentage allocations that balance risk and return. Using LiveTradeBench, we run 50-day live evaluations of 21 LLMs across families. Results show that (1) high LMArena scores do not imply superior trading outcomes; (2) models display distinct portfolio styles reflecting risk appetite and reasoning dynamics; and (3) some LLMs effectively leverage live signals to adapt decisions. These findings expose a gap between static evaluation and real-world competence, motivating benchmarks that test sequential decision making and consistency under live uncertainty.
- oai:arXiv.org:2511.03628v1
- q-fin.TR
- cs.AI
- cs.CE
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Haofei Yu, Fenghai Li, Jiaxuan You
-
-
- Quantifying Weighted Morphological Content of Large-Scale Structures via Simulation-Based Inference
- https://arxiv.org/abs/2511.03636
- arXiv:2511.03636v1 Announce Type: cross
-Abstract: In this work, we perform a simulation-based forecasting analysis to compare the constraining power of two higher-order summary statistics of the large-scale structure (LSS), the Minkowski Functionals (MFs) and the Conditional Moments of Derivative (CMD), with a particular focus on their sensitivity to nonlinear and anisotropic features in redshift-space. Our analysis relies on halo catalogs from the Big Sobol Sequence (BSQ) simulations at redshift $z=0.5$, employing a likelihood-free inference framework implemented via neural posterior estimation. At the fiducial cosmology of the Quijote simulations $(\Omega_{m}=0.3175,\,\sigma_{8}=0.834)$, and for the smoothing scale $R=15\,h^{-1}$Mpc, we find that the CMD yields tighter forecasts for $(\Omega_{m},\,\sigma_{8})$ than the zeroth- to third-order MFs components, improving the constraint precision by ${\sim}(44\%,\,52\%)$, ${\sim}(30\%,\,45\%)$, ${\sim}(27\%,\,17\%)$, and ${\sim}(26\%,\,17\%)$, respectively. A joint configuration combining the MFs and CMD further enhances the precision by ${\sim}27\%$ compared to the standard MFs alone, highlighting the complementary anisotropy-sensitive information captured by the CMD in contrast to the scalar morphological content encapsulated by the MFs. We further extend the forecasting analysis to a continuous range of cosmological parameter values and multiple smoothing scales. Our results show that, although the absolute forecast uncertainty for each component of summary statistics depends on the underlying parameter values and the adopted smoothing scale, the relative constraining power among the summary statistics remains nearly constant throughout.
- oai:arXiv.org:2511.03636v1
- astro-ph.CO
- cs.LG
- physics.comp-ph
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- M. H. Jalali Kanafi, S. M. S. Movahed
-
-
- Explaining Human Choice Probabilities with Simple Vector Representations
- https://arxiv.org/abs/2511.03643
- arXiv:2511.03643v1 Announce Type: cross
-Abstract: When people pursue rewards in stochastic environments, they often match their choice frequencies to the observed target frequencies, even when this policy is demonstrably sub-optimal. We used a ``hide and seek'' task to evaluate this behavior under conditions where pursuit (seeking) could be toggled to avoidance (hiding) while leaving the probability distribution fixed, or where complexity was varied by changing the number of possible choices. We developed a model for participant choice built from choice frequency histograms treated as vectors. We posited the existence of a probability antimatching strategy for avoidance (hiding) rounds, and formalized this as a vector reflection of probability matching. We found that only two basis policies, matching/antimatching and maximizing/minimizing, were sufficient to account for participant choices across a range of room numbers and opponent probability distributions. This schema requires only that people have the ability to remember the relative frequency of the different outcomes. With this knowledge, simple operations can construct the maximizing and minimizing policies as well as the matching and antimatching strategies. A mixture of these two policies captures human choice patterns in a stochastic environment.
- oai:arXiv.org:2511.03643v1
- q-bio.NC
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Peter DiBerardino, Britt Anderson
-
-
- Geometrically robust least squares through manifold optimization
- https://arxiv.org/abs/2511.03644
- arXiv:2511.03644v1 Announce Type: cross
-Abstract: This paper presents a methodology for solving a geometrically robust least squares problem, which arises in various applications where the model is subject to geometric constraints. The problem is formulated as a minimax optimization problem on a product manifold, where one variable is constrained to a ball describing uncertainty. To handle the constraint, an exact penalty method is applied. A first-order gradient descent ascent algorithm is proposed to solve the problem, and its convergence properties are illustrated by an example. The proposed method offers a robust approach to solving a wide range of problems arising in signal processing and data-driven control.
- oai:arXiv.org:2511.03644v1
- math.OC
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Jeremy Coulson, Alberto Padoan, Cyrus Mostajeran
-
-
- Improving Gene Trees without more data
- https://arxiv.org/abs/2511.03692
- arXiv:2511.03692v1 Announce Type: cross
-Abstract: Estimating species and gene trees from sequence data is challenging. Gene tree estimation is often hampered by low phylogenetic signal in alignments, leading to inaccurate trees. Species tree estimation is complicated by incomplete lineage sorting (ILS), where gene histories differ from the species' history. Summary methods like MP-EST, ASTRAL2, and ASTRID infer species trees from gene trees but suffer when gene tree accuracy is low. To address this, the Statistical Binning (SB) and Weighted Statistical Binning (WSB) pipelines were developed to improve gene tree estimation. However, previous studies only tested these pipelines using multi-locus bootstrapping (MLBS), not the BestML approach.
- This thesis proposes a novel pipeline, WSB+WQMC, which shares design features with the existing WSB+CAML pipeline but has other desirable properties and is statistically consistent under the GTR+MSC model. This study evaluated WSB+WQMC against WSB+CAML using BestML analysis on various simulated datasets. The results confirmed many trends seen in prior MLBS analyses. WSB+WQMC substantially improved gene tree and species tree accuracy (using ASTRAL2 and ASTRID) on most datasets with low, medium, and moderately high ILS levels. In a direct comparison, WSB+WQMC computed less accurate trees than WSB+CAML under certain low and medium ILS conditions. However, WSB+WQMC performed better or at least as accurately as WSB+CAML on all datasets with moderately high and high ILS. It also proved better for estimating gene trees on some medium and low ILS datasets. Thus, WSB+WQMC is a promising alternative to WSB+CAML for phylogenetic estimation, especially in the presence of low phylogenetic signal.
- oai:arXiv.org:2511.03692v1
- q-bio.PE
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Ashu Gupta
-
-
- Colorectal Cancer Histopathological Grading using Multi-Scale Federated Learning
- https://arxiv.org/abs/2511.03693
- arXiv:2511.03693v1 Announce Type: cross
-Abstract: Colorectal cancer (CRC) grading is a critical prognostic factor but remains hampered by inter-observer variability and the privacy constraints of multi-institutional data sharing. While deep learning offers a path to automation, centralized training models conflict with data governance regulations and neglect the diagnostic importance of multi-scale analysis. In this work, we propose a scalable, privacy-preserving federated learning (FL) framework for CRC histopathological grading that integrates multi-scale feature learning within a distributed training paradigm. Our approach employs a dual-stream ResNetRS50 backbone to concurrently capture fine-grained nuclear detail and broader tissue-level context. This architecture is integrated into a robust FL system stabilized using FedProx to mitigate client drift across heterogeneous data distributions from multiple hospitals. Extensive evaluation on the CRC-HGD dataset demonstrates that our framework achieves an overall accuracy of 83.5%, outperforming a comparable centralized model (81.6%). Crucially, the system excels in identifying the most aggressive Grade III tumors with a high recall of 87.5%, a key clinical priority to prevent dangerous false negatives. Performance further improves with higher magnification, reaching 88.0% accuracy at 40x. These results validate that our federated multi-scale approach not only preserves patient privacy but also enhances model performance and generalization. The proposed modular pipeline, with built-in preprocessing, checkpointing, and error handling, establishes a foundational step toward deployable, privacy-aware clinical AI for digital pathology.
- oai:arXiv.org:2511.03693v1
- stat.ML
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Md Ahasanul Arafath, Abhijit Kumar Ghosh, Md Rony Ahmed, Sabrin Afroz, Minhazul Hosen, Md Hasan Moon, Md Tanzim Reza, Md Ashad Alam
-
-
- The Adaptivity Barrier in Batched Nonparametric Bandits: Sharp Characterization of the Price of Unknown Margin
- https://arxiv.org/abs/2511.03708
- arXiv:2511.03708v1 Announce Type: cross
-Abstract: We study batched nonparametric contextual bandits under a margin condition when the margin parameter $\alpha$ is unknown. To capture the statistical price of this ignorance, we introduce the regret inflation criterion, defined as the ratio between the regret of an adaptive algorithm and that of an oracle knowing $\alpha$. We show that the optimal regret inflation grows polynomially with the horizon $T$, with exponent precisely given by the value of a convex optimization problem involving the dimension, smoothness, and batch budget. Moreover, the minimizers of this optimization problem directly prescribe the batch allocation and exploration strategy of a rate-optimal algorithm. Building on this principle, we develop RoBIN (RObust batched algorithm with adaptive BINning), which achieves the optimal regret inflation up to logarithmic factors. These results reveal a new adaptivity barrier: under batching, adaptation to an unknown margin parameter inevitably incurs a polynomial penalty, sharply characterized by a variational problem. Remarkably, this barrier vanishes when the number of batches exceeds $\log \log T$; with only a doubly logarithmic number of updates, one can recover the oracle regret rate up to polylogarithmic factors.
- oai:arXiv.org:2511.03708v1
- math.ST
- cs.LG
- stat.ML
- stat.TH
- Thu, 06 Nov 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Rong Jiang, Cong Ma
-
-
- A note reviewing Turing's 1936
- https://arxiv.org/abs/1308.0497
- arXiv:1308.0497v4 Announce Type: replace
-Abstract: A close rereading of Turing's original 1936 article shows that it rests on the claim to have defined a number which is not computable, arguing that there can be no machine computing the diagonal on the enumeration of the computable sequences. This article provides a careful analysis of Turing's original argument, demonstrating that it cannot be regarded as a conclusive proof. Furthermore, it shows that there is no evidence supporting the existence of a defined number that is not computable.
- oai:arXiv.org:1308.0497v4
- cs.CC
- cs.LO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Paola Cattabriga
-
-
- A Versatile Depth Video Encoding Scheme Based on Low-rank Tensor Modeling for Free Viewpoint Video
- https://arxiv.org/abs/2104.04678
- arXiv:2104.04678v2 Announce Type: replace
-Abstract: The compression quality losses of depth sequences determine the quality of view synthesis in free-viewpoint video. The depth map intra prediction in the 3D extensions of HEVC applies intra modes with auxiliary depth modeling modes (DMMs) to better preserve depth edges and handle motion discontinuities. Such modes enable high-efficiency compression, but at the cost of very high encoding complexity. Skipping conventional intra coding modes and DMMs in depth coding limits the practical applicability of HEVC for 3D display applications. In this paper, we introduce a novel low-complexity scheme for depth video compression based on low-rank tensor decomposition and HEVC intra coding. The proposed scheme leverages spatial and temporal redundancy by compactly representing the depth sequence as a high-order tensor. Tensor factorization into a set of factor matrices following the CANDECOMP/PARAFAC (CP) decomposition via alternating least squares gives a low-rank approximation of the scene geometry. Further, compression of the factor matrices with HEVC intra prediction supports arbitrary target accuracy through flexible adjustment of the bitrate, tensor decomposition ranks, and quantization parameters. The results demonstrate that the proposed approach achieves significant rate gains by efficiently compressing depth planes in a low-rank approximated representation. The proposed algorithm is applied to encode depth maps of the benchmark Ballet and Breakdancing sequences. The decoded depth sequences are used for view synthesis in a multi-view video system, maintaining appropriate rendering quality.
- oai:arXiv.org:2104.04678v2
- cs.MM
- eess.IV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mansi Sharma, Jyotsana Grover
-
-
- A New Comprehensive Framework for Multi-Exposure Stereo Coding Utilizing Low Rank Tucker-ALS and 3D-HEVC Techniques
- https://arxiv.org/abs/2104.04726
- arXiv:2104.04726v2 Announce Type: replace
-Abstract: Display technology must offer high dynamic range (HDR) contrast-based depth induction and 3D personalization simultaneously. Efficient algorithms to compress HDR stereo data are critical. Direct capture of HDR content is complicated by the high expense and scarcity of HDR cameras. HDR 3D images can instead be generated at low cost by fusing low-dynamic-range (LDR) images acquired using a stereo camera with various exposure settings. In this paper, an efficient scheme for coding multi-exposure stereo images is proposed based on a tensor low-rank approximation scheme. The multi-exposure fusion can be realized to generate HDR stereo output at the decoder for increased realism and exaggerated binocular 3D depth cues.
- To exploit spatial redundancy in the LDR stereo images, the stack of multi-exposure stereo images is decomposed into a set of projection matrices and a core tensor following an alternating least squares Tucker decomposition model. The compact, low-rank representation of the scene thus generated is further processed by the 3D extension of the High Efficiency Video Coding standard. Encoding with 3D-HEVC enhances the proposed scheme's efficiency by exploiting intra-frame, inter-view, and inter-component redundancies in the low-rank approximated representation. We consider the constant-luminance property of the IPT and Y'CbCr color spaces to precisely approximate intensity prediction and perceptually minimize the encoding distortion. In addition, the proposed scheme offers the flexibility to adjust the bitrate of the tensor latent components by changing the rank of the core tensor and its quantization. Extensive experiments on natural scenes demonstrate that the proposed scheme outperforms the state-of-the-art JPEG-XT and 3D-HEVC range coding standards.
- oai:arXiv.org:2104.04726v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mansi Sharma, Jyotsana Grover
-
-
- Trustworthy Representation Learning via Information Funnels and Bottlenecks
- https://arxiv.org/abs/2211.01446
- arXiv:2211.01446v2 Announce Type: replace
-Abstract: Ensuring trustworthiness in machine learning -- by balancing utility, fairness, and privacy -- remains a critical challenge, particularly in representation learning. In this work, we investigate a family of closely related information-theoretic objectives, including information funnels and bottlenecks, designed to extract invariant representations from data. We introduce the Conditional Privacy Funnel with Side-information (CPFSI), a novel formulation within this family, applicable in both fully and semi-supervised settings. Given the intractability of these objectives, we derive neural-network-based approximations via amortized variational inference. We systematically analyze the trade-offs between utility, invariance, and representation fidelity, offering new insights into the Pareto frontiers of these methods. Our results demonstrate that CPFSI effectively balances these competing objectives and frequently outperforms existing approaches. Furthermore, we show that intervening on sensitive attributes in CPFSI's predictive posterior enhances fairness while maintaining predictive performance. Finally, we focus on the real-world applicability of these approaches, particularly for learning robust and fair representations from tabular datasets in data-scarce environments -- a modality where these methods are often especially relevant.
- oai:arXiv.org:2211.01446v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1007/s10994-025-06924-9
- Mach Learn 114, 267 (2025)
- Jo\~ao Machado de Freitas, Bernhard C. Geiger
-
-
- How does training shape the Riemannian geometry of neural network representations?
- https://arxiv.org/abs/2301.11375
- arXiv:2301.11375v4 Announce Type: replace
-Abstract: In machine learning, there is a long history of trying to build neural networks that can learn from fewer example data by baking in strong geometric priors. However, it is not always clear a priori what geometric constraints are appropriate for a given task. Here, we explore the possibility that one can uncover useful geometric inductive biases by studying how training molds the Riemannian geometry induced by unconstrained neural network feature maps. We first show that at infinite width, neural networks with random parameters induce highly symmetric metrics on input space. This symmetry is broken by feature learning: networks trained to perform classification tasks learn to magnify local areas along decision boundaries. This holds in deep networks trained on high-dimensional image classification tasks, and even in self-supervised representation learning. These results begin to elucidate how training shapes the geometry induced by unconstrained neural network feature maps, laying the groundwork for an understanding of this richly nonlinear form of feature learning.
- oai:arXiv.org:2301.11375v4
- cs.LG
- cond-mat.dis-nn
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Proceedings of the 3rd Workshop on Symmetry and Geometry in Neural Representations (NeurReps) (2025)
- Jacob A. Zavatone-Veth, Sheng Yang, Julian A. Rubinfien, Cengiz Pehlevan
-
-
- Emotion Detection From Social Media Posts
- https://arxiv.org/abs/2302.05610
- arXiv:2302.05610v2 Announce Type: replace
-Abstract: Over the last few years, social media has evolved into a medium for expressing personal views, emotions, and even business and political proposals, recommendations, and advertisements. In this research, we address the task of identifying emotions in text data obtained from social media posts, such as tweets. We have deployed different traditional machine learning techniques such as Support Vector Machines (SVM), Naive Bayes, Decision Trees, and Random Forest, as well as deep neural network models such as LSTM, CNN, GRU, BiLSTM, and BiGRU, to classify these tweets into four emotion categories (Fear, Anger, Joy, and Sadness). Furthermore, we have constructed a BiLSTM and BiGRU ensemble model. The evaluation results show that the deep neural network models (BiGRU, to be specific) produce the most promising results compared to traditional machine learning models, with an 87.53% accuracy rate. The ensemble model performs even better (87.66%), albeit the difference is not significant. This result will aid in the development of a decision-making tool that visualizes emotional fluctuations.
- oai:arXiv.org:2302.05610v2
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Md Mahbubur Rahman, Shaila Sharmin
-
-
- Seal2Real: Prompt Prior Learning on Diffusion Model for Unsupervised Document Seal Data Generation and Realisation
- https://arxiv.org/abs/2310.00546
- arXiv:2310.00546v2 Announce Type: replace
-Abstract: Seal-related tasks in document processing, such as seal segmentation, authenticity verification, seal removal, and text recognition under seals, hold substantial commercial importance. However, progress in these areas has been hindered by the scarcity of labeled document seal datasets, which are essential for supervised learning. To address this limitation, we propose Seal2Real, a novel generative framework designed to synthesize large-scale labeled document seal data. As part of this work, we also present Seal-DB, a comprehensive dataset containing 20,000 labeled images to support seal-related research. Seal2Real introduces a prompt prior learning architecture built upon a pre-trained Stable Diffusion model, effectively transferring its generative capability to the unsupervised domain of seal image synthesis. By producing highly realistic synthetic seal images, Seal2Real significantly enhances the performance of downstream seal-related tasks on real-world data. Experimental evaluations on the Seal-DB dataset demonstrate the effectiveness and practical value of the proposed framework.
- oai:arXiv.org:2310.00546v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mingfu Yan, Jiancheng Huang, Shifeng Chen
-
-
- Differentially Private Data Generation with Missing Data
- https://arxiv.org/abs/2310.11548
- arXiv:2310.11548v3 Announce Type: replace
-Abstract: Despite several works that succeed in generating synthetic data with differential privacy (DP) guarantees, they are inadequate for generating high-quality synthetic data when the input data has missing values. In this work, we formalize the problems of DP synthetic data with missing values and propose three effective adaptive strategies that significantly improve the utility of the synthetic data on four real-world datasets with different types and levels of missing data and privacy requirements. We also identify the relationship between privacy impact for the complete ground truth data and incomplete data for these DP synthetic data generation algorithms. We model the missing mechanisms as a sampling process to obtain tighter upper bounds for the privacy guarantees to the ground truth data. Overall, this study contributes to a better understanding of the challenges and opportunities for using private synthetic data generation algorithms in the presence of missing data.
- oai:arXiv.org:2310.11548v3
- cs.DB
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.14778/3659437.3659455
- PVLDB Volume 17, 2024
- Shubhankar Mohapatra, Jianqiao Zong, Florian Kerschbaum, Xi He
-
-
- Transfer Learning-based Real-time Handgun Detection
- https://arxiv.org/abs/2311.13559
- arXiv:2311.13559v3 Announce Type: replace
-Abstract: Traditional surveillance systems rely on human attention, limiting their effectiveness. This study employs convolutional neural networks and transfer learning to develop a real-time computer vision system for automatic handgun detection. Comprehensive analysis of online handgun detection methods is conducted, emphasizing reducing false positives and learning time. Transfer learning is demonstrated as an effective approach. Despite technical challenges, the proposed system achieves a precision rate of 84.74%, demonstrating promising performance comparable to related works, enabling faster learning and accurate automatic handgun detection for enhanced security. This research advances security measures by reducing human monitoring dependence, showcasing the potential of transfer learning-based approaches for efficient and reliable handgun detection.
- oai:arXiv.org:2311.13559v3
- cs.CV
- cs.AI
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.24996/ijs.2024.65.12.31
- Iraqi Journal of Science-2024 65(12)
- Youssef Elmir
-
-
- Survey on AI Ethics: A Socio-technical Perspective
- https://arxiv.org/abs/2311.17228
- arXiv:2311.17228v2 Announce Type: replace
-Abstract: The past decade has observed a significant advancement in AI with deep learning-based models being deployed in diverse scenarios, including safety-critical applications. As these AI systems become deeply embedded in our societal infrastructure, the repercussions of their decisions and actions have significant consequences, making the ethical implications of AI deployment highly relevant and essential. The ethical concerns associated with AI are multifaceted, including challenging issues of fairness, privacy and data protection, responsibility and accountability, safety and robustness, transparency and explainability, and environmental impact. These principles together form the foundations of ethical AI considerations that concern every stakeholder in the AI system lifecycle. In light of the present ethical and future x-risk concerns, governments have shown increasing interest in establishing guidelines for the ethical deployment of AI. This work unifies the current and future ethical concerns of deploying AI into society. While we acknowledge and appreciate the technical surveys for each of the ethical principles concerned, in this paper, we aim to provide a comprehensive overview that not only addresses each principle from a technical point of view but also discusses them from a social perspective.
- oai:arXiv.org:2311.17228v2
- cs.CY
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- 10.1111/coin.70149
- Computational Intelligence, Volume 41, Issue 6 (Wiley, 2025)
- Dave Mbiazi, Meghana Bhange, Maryam Babaei, Ivaxi Sheth, Patrik Kenfack, Samira Ebrahimi Kahou
-
-
- BoxCell: Leveraging SAM for Cell Segmentation with Box Supervision
- https://arxiv.org/abs/2311.17960
- arXiv:2311.17960v2 Announce Type: replace
-Abstract: Cell segmentation in histopathological images is vital for the diagnosis and treatment of several diseases. Annotating data is tedious and requires medical expertise, making it difficult to employ supervised learning. Instead, we study a weakly supervised setting, where only bounding box supervision is available, and present the use of Segment Anything (SAM) for this without any finetuning, i.e., directly utilizing the pre-trained model. We propose BoxCell, a cell segmentation framework that utilizes SAM's capability to interpret bounding boxes as prompts, \emph{both} at train and test times. At train time, gold bounding boxes given to SAM produce (pseudo-)masks, which are used to train a standalone segmenter. At test time, BoxCell generates two segmentation masks: (1) one produced by this standalone segmenter, and (2) one produced by SAM when prompted with the bounding boxes output by a trained object detector. Recognizing their complementary strengths, we reconcile the two segmentation masks using a novel integer programming formulation with intensity and spatial constraints. We experiment on three publicly available cell segmentation datasets, namely CoNSep, MoNuSeg, and TNBC, and find that BoxCell significantly outperforms existing box-supervised image segmentation models, obtaining 6-10 point Dice gains.
- oai:arXiv.org:2311.17960v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aayush Kumar Tyagi, Vaibhav Mishra, Prathosh A. P., Mausam
-
-
- A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges
- https://arxiv.org/abs/2403.04468
- arXiv:2403.04468v2 Announce Type: replace
-Abstract: Graph-structured data exhibits universality and widespread applicability across diverse domains, such as social network analysis, biochemistry, financial fraud detection, and network security. Significant strides have been made in leveraging Graph Neural Networks (GNNs) to achieve remarkable success in these areas. However, in real-world scenarios, the training environment for models is often far from ideal, leading to substantial performance degradation of GNN models due to various unfavorable factors, including imbalance in data distribution, the presence of noise in erroneous data, privacy protection of sensitive information, and generalization capability for out-of-distribution (OOD) scenarios. To tackle these issues, substantial efforts have been devoted to improving the performance of GNN models in practical real-world scenarios, as well as enhancing their reliability and robustness. In this paper, we present a comprehensive survey that systematically reviews existing GNN models, focusing on solutions to the four mentioned real-world challenges including imbalance, noise, privacy, and OOD in practical scenarios that many existing reviews have not considered. Specifically, we first highlight the four key challenges faced by existing GNNs, paving the way for our exploration of real-world GNN models. Subsequently, we provide detailed discussions on these four aspects, dissecting how these solutions contribute to enhancing the reliability and robustness of GNN models. Last but not least, we outline promising directions and offer future perspectives in the field.
- oai:arXiv.org:2403.04468v2
- cs.LG
- cs.AI
- cs.IR
- cs.SI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wei Ju, Siyu Yi, Yifan Wang, Zhiping Xiao, Zhengyang Mao, Hourun Li, Yiyang Gu, Yifang Qin, Nan Yin, Senzhang Wang, Xinwang Liu, Philip S. Yu, Ming Zhang
-
-
- Time-Aware Projections: Truly Node-Private Graph Statistics under Continual Observation
- https://arxiv.org/abs/2403.04630
- arXiv:2403.04630v2 Announce Type: replace
-Abstract: We describe the first algorithms that satisfy the standard notion of node-differential privacy in the continual release setting (i.e., without an assumed promise on input streams). Previous work addresses node-private continual release by assuming an unenforced promise on the maximum degree in a graph, but leaves open whether such a bound can be verified or enforced privately. Our algorithms are accurate on sparse graphs, for several fundamental graph problems: counting edges, triangles, other subgraphs, and connected components; and releasing degree histograms. Our unconditionally private algorithms generally have optimal error, up to polylogarithmic factors and lower-order terms.
- We provide general transformations that take a base algorithm for the continual release setting, which need only be private for streams satisfying a promised degree bound, and produce an algorithm that is unconditionally private yet mimics the base algorithm when the stream meets the degree bound (and adds only linear overhead to the time and space complexity of the base algorithm). To do so, we design new projection algorithms for graph streams, based on the batch-model techniques of Day et al. 2016 and Blocki et al. 2013, which modify the stream to limit its degree. Our main technical innovation is to show that the projections are stable -- meaning that similar input graphs have similar projections -- when the input stream satisfies a privately testable safety condition. Our transformation then follows a novel online variant of the Propose-Test-Release framework (Dwork and Lei, 2009), privately testing the safety condition before releasing output at each step.
- oai:arXiv.org:2403.04630v2
- cs.DS
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1109/SP54263.2024.00196
- Palak Jain, Adam Smith, Connor Wagaman
-
-
- Data-driven Stabilization of Nitsche's Method
- https://arxiv.org/abs/2403.11632
- arXiv:2403.11632v2 Announce Type: replace
-Abstract: The weak imposition of essential boundary conditions is an integral aspect of unfitted finite element methods, where the physical boundary does not in general coincide with the computational domain. In this regard, the symmetric Nitsche's method is a powerful technique that preserves the symmetry and variational consistency of the unmodified weak formulation. The stabilization parameter in Nitsche's method plays a crucial role in the stability of the resultant formulation, whose estimation is computationally intensive and dependent on the particular cut configuration using the conventional eigenvalue-based approach. In this work, we employ as model problem the finite cell method in which the need for the generation of a boundary-conforming mesh is circumvented by embedding the physical domain in a, typically regular, background mesh. We propose a data-driven estimate based on machine learning methods for the estimation of the stabilization parameter in Nitsche's method that offers an efficient constant-complexity alternative to the eigenvalue-based approach independent of the cut configuration. It is shown, using numerical benchmarks, that the proposed method can estimate the stabilization parameter accurately and is by far more computationally efficient. The data-driven estimate can be integrated into existing numerical codes with minimal modifications and thanks to the wide adoption of accelerators such as GPUs by machine learning frameworks, can be used with virtually no extra implementation cost on GPU devices, further increasing the potential for computational gains over the conventional eigenvalue-based estimate. The proposed model is tested on both Intel CPU as well as NVIDIA GPU hardware, showing that while it is already many times more efficient on the CPU compared to the eigenvalue-based estimate, its efficiency margin is even larger on modern GPU devices.
- oai:arXiv.org:2403.11632v2
- math.NA
- cs.NA
- math-ph
- math.MP
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- M. Saberi, L. Zhao, A. Vogel
-
-
- A Reliable Cryptographic Framework for Empirical Machine Unlearning Evaluation
- https://arxiv.org/abs/2404.11577
- arXiv:2404.11577v4 Announce Type: replace
-Abstract: Machine unlearning updates machine learning models to remove information from specific training samples, complying with data protection regulations that allow individuals to request the removal of their personal data. Despite the recent development of numerous unlearning algorithms, reliable evaluation of these algorithms remains an open research question. In this work, we focus on membership inference attack (MIA) based evaluation, one of the most common approaches for evaluating unlearning algorithms, and address various pitfalls of existing evaluation metrics lacking theoretical understanding and reliability. Specifically, by modeling the proposed evaluation process as a \emph{cryptographic game} between unlearning algorithms and MIA adversaries, the naturally induced evaluation metric measures the data removal efficacy of unlearning algorithms and enjoys provable guarantees that existing evaluation metrics fail to satisfy. Furthermore, we propose a practical and efficient approximation of the induced evaluation metric and demonstrate its effectiveness through both theoretical analysis and empirical experiments. Overall, this work presents a novel and reliable approach to empirically evaluating unlearning algorithms, paving the way for the development of more effective unlearning techniques.
- oai:arXiv.org:2404.11577v4
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yiwen Tu, Pingbang Hu, Jiaqi Ma
-
-
- Hybrid Dynamics Modeling and Trajectory Planning for a Cable-Trailer System with a Quadruped Robot
- https://arxiv.org/abs/2404.12220
- arXiv:2404.12220v2 Announce Type: replace
-Abstract: Inspired by sled-pulling dogs in transportation, we present a cable-trailer integrated with a quadruped robot system. The motion planning of this system faces challenges due to the interactions between the cable's state transitions, the trailer's nonholonomic constraints, and the system's underactuation. To address these challenges, we first develop a hybrid dynamics model that captures the cable's taut and slack states. A search algorithm is then introduced to compute a suboptimal trajectory while incorporating mode transitions. Additionally, we propose a novel collision avoidance constraint based on geometric polygons to formulate the trajectory optimization problem for the hybrid system. The proposed method is implemented on a Unitree A1 quadruped robot with a customized cable-trailer and validated through experiments. The real system demonstrates both agile and safe motion with cable mode transitions.
- oai:arXiv.org:2404.12220v2
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wentao Zhang, Shaohang Xu, Gewei Zuo, Bolin Li, Jingbo Wang, Lijun Zhu
-
-
- Kuroda's Translation for Higher-Order Logic
- https://arxiv.org/abs/2404.19503
- arXiv:2404.19503v2 Announce Type: replace
-Abstract: In 1951, Kuroda defined an embedding of classical first-order logic into intuitionistic logic, such that a formula and its translation are equivalent in classical logic. Recently, Brown and Rizkallah extended this translation to higher-order logic, but did not prove the classical equivalence, and showed that the embedding fails in the presence of functional extensionality. We prove that functional extensionality and propositional extensionality are sufficient to derive the classical equivalence between a higher-order formula and its translation. We emphasize a condition under which Kuroda's translation works with functional extensionality.
- oai:arXiv.org:2404.19503v2
- cs.LO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Thomas Traversié (MICS, DEDUCTEAM)
-
-
- PCF Learned Sort: a Learning Augmented Sort Algorithm with $O(n \log\log n)$ Expected Complexity
- https://arxiv.org/abs/2405.07122
- arXiv:2405.07122v2 Announce Type: replace
-Abstract: Sorting is one of the most fundamental algorithms in computer science. Recently, Learned Sorts, which use machine learning to improve sorting speed, have attracted attention. While existing studies show that Learned Sort is empirically faster than classical sorting algorithms, they do not provide theoretical guarantees about its computational complexity. We propose Piecewise Constant Function (PCF) Learned Sort, a theoretically guaranteed Learned Sort algorithm. We prove that the expected complexity of PCF Learned Sort is $\mathcal{O}(n \log \log n)$ under mild assumptions on the data distribution. We also confirm empirically that PCF Learned Sort has a computational complexity of $\mathcal{O}(n \log \log n)$ on both synthetic and real datasets. This is the first study to theoretically support the empirical success of Learned Sort, and provides evidence for why Learned Sort is fast. The code is available at https://github.com/atsukisato/PCF_Learned_Sort .
- oai:arXiv.org:2405.07122v2
- cs.DS
- cs.CC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Atsuki Sato, Yusuke Matsui
-
-
- A Label Propagation Strategy for CutMix in Multi-Label Remote Sensing Image Classification
- https://arxiv.org/abs/2405.13451
- arXiv:2405.13451v3 Announce Type: replace
-Abstract: The development of supervised deep learning-based methods for multi-label scene classification (MLC) is one of the prominent research directions in remote sensing (RS). However, collecting annotations for large RS image archives is time-consuming and costly. To address this issue, several data augmentation methods have been introduced in RS. Among others, the CutMix data augmentation technique, which combines parts of two existing training images to generate an augmented image, stands out as a particularly effective approach. However, the direct application of CutMix in RS MLC can lead to the erasure or addition of class labels (i.e., label noise) in the augmented (i.e., combined) training image. To address this problem, we introduce a label propagation (LP) strategy that allows the effective application of CutMix in the context of MLC problems in RS without being affected by label noise. To this end, our proposed LP strategy exploits pixel-level class positional information to update the multi-label of the augmented training image. We propose to access such class positional information from reference maps (e.g., thematic products) associated with each training image or from class explanation masks provided by an explanation method if no reference maps are available. Similarly to pairing two training images, our LP strategy carries out a pairing operation on the associated pixel-level class positional information to derive the updated multi-label for the augmented image. Experimental results show the effectiveness of our LP strategy in general (e.g., an improvement of 2% to 4% mAP macro compared to standard CutMix) and its robustness in the case of various simulated and real scenarios with noisy class positional information in particular. Code is available at https://git.tu-berlin.de/rsim/cutmix_lp.
- oai:arXiv.org:2405.13451v3
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1109/JSTARS.2025.3628191
- Tom Burgert, Kai Norman Clasen, Jonas Klotz, Tim Siebert, Begüm Demir
-
-
- AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained Optimization
- https://arxiv.org/abs/2405.18187
- arXiv:2405.18187v2 Announce Type: replace
-Abstract: Implicit Q-learning (IQL) serves as a strong baseline for offline RL, which learns the value function using only dataset actions through quantile regression. However, it is unclear how to recover the implicit policy from the learned implicit Q-function and why IQL can utilize weighted regression for policy extraction. IDQL reinterprets IQL as an actor-critic method and derives the weights of the implicit policy; however, these weights hold only for the optimal value function. In this work, we introduce a different way to solve the implicit policy-finding problem (IPF) by formulating this problem as an optimization problem. Based on this optimization problem, we further propose two practical algorithms, AlignIQL and AlignIQL-hard, which inherit the advantages of decoupling actor from critic in IQL and provide insights into why IQL can use weighted regression for policy extraction. Compared with IQL and IDQL, we find our method keeps the simplicity of IQL and solves the implicit policy-finding problem. Experimental results on D4RL datasets show that our method achieves competitive or superior results compared with other SOTA offline RL methods. Especially in complex sparse reward tasks like Antmaze and Adroit, our method outperforms IQL and IDQL by a significant margin.
- oai:arXiv.org:2405.18187v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Longxiang He, Li Shen, Xueqian Wang
-
-
- ROADWork: A Dataset and Benchmark for Learning to Recognize, Observe, Analyze and Drive Through Work Zones
- https://arxiv.org/abs/2406.07661
- arXiv:2406.07661v3 Announce Type: replace
-Abstract: Perceiving and autonomously navigating through work zones is a challenging and underexplored problem. Open datasets for this long-tailed scenario are scarce. We propose the ROADWork dataset to learn to recognize, observe, analyze, and drive through work zones. State-of-the-art foundation models fail when applied to work zones. Fine-tuning models on our dataset significantly improves perception and navigation in work zones. With the ROADWork dataset, we discover new work zone images with higher precision (+32.5%) at a much higher rate (12.8$\times$) around the world. Open-vocabulary methods fail too, whereas fine-tuned detectors improve performance (+32.2 AP). Vision-Language Models (VLMs) struggle to describe work zones, but fine-tuning substantially improves performance (+36.7 SPICE).
- Beyond fine-tuning, we show the value of simple techniques. Video label propagation provides additional gains (+2.6 AP) for instance segmentation. While reading work zone signs, composing a detector and text spotter via crop-scaling improves performance (+14.2% 1-NED). Composing work zone detections to provide context further reduces hallucinations (+3.9 SPICE) in VLMs. We predict navigational goals and compute drivable paths from work zone videos. Incorporating road work semantics ensures 53.6% of goals have angular error (AE) < 0.5 (+9.9 %) and 75.3% of pathways have AE < 0.5 (+8.1 %).
- oai:arXiv.org:2406.07661v3
- cs.CV
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Anurag Ghosh, Shen Zheng, Robert Tamburo, Khiem Vuong, Juan Alvarez-Padilla, Hailiang Zhu, Michael Cardei, Nicholas Dunn, Christoph Mertz, Srinivasa G. Narasimhan
-
-
- Retrieval-Augmented Feature Generation for Domain-Specific Classification
- https://arxiv.org/abs/2406.11177
- arXiv:2406.11177v3 Announce Type: replace
-Abstract: Feature generation can significantly enhance learning outcomes, particularly for tasks with limited data. An effective way to improve feature generation is to expand the current feature space using existing features and enriching the informational content. However, generating new, interpretable features usually requires domain-specific knowledge on top of the existing features. In this paper, we introduce a Retrieval-Augmented Feature Generation method, RAFG, to generate useful and explainable features specific to domain classification tasks. To increase the interpretability of the generated features, we conduct knowledge retrieval among the existing features in the domain to identify potential feature associations. These associations are expected to help generate useful features. Moreover, we develop a framework based on large language models (LLMs) for feature generation with reasoning to verify the quality of the features during their generation process. Experiments across several datasets in medical, economic, and geographic domains show that our RAFG method can produce high-quality, meaningful features and significantly improve classification performance compared with baseline methods.
- oai:arXiv.org:2406.11177v3
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xinhao Zhang, Jinghan Zhang, Fengran Mo, Dakshak Keerthi Chandra, Yuzhong Chen, Fei Xie, Kunpeng Liu
-
-
- Autonomous Robotic Drilling System for Mice Cranial Window Creation
- https://arxiv.org/abs/2406.14135
- arXiv:2406.14135v2 Announce Type: replace
-Abstract: Robotic assistance for experimental manipulation in the life sciences is expected to enable favorable outcomes, regardless of the skill of the scientist. Experimental specimens in the life sciences are subject to individual variability and hence require intricate algorithms for successful autonomous robotic control. As a use case, we are studying cranial window creation in mice. This operation requires the removal of an 8-mm circular patch of the skull, which is approximately 300 um thick, but the shape and thickness of the mouse skull vary significantly depending on the strain, sex, and age of the mouse. In this work, we develop an autonomous robotic drilling system with no offline planning, consisting of a trajectory planner with execution-time feedback driven by drilling completion level recognition based on image and force information. In the experiments, we first evaluate the image-and-force-based drilling completion level recognition by comparing it with other state-of-the-art deep learning image processing methods and conduct an ablation study in eggshell drilling to evaluate the impact of each module on system performance. Finally, the system performance is further evaluated in postmortem mice, achieving a success rate of 70% (14/20 trials) with an average drilling time of 9.3 min.
- oai:arXiv.org:2406.14135v2
- cs.RO
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Enduo Zhao, Murilo M. Marinho, Kanako Harada
-
-
- Sample-based almost-sure quasi-optimal approximation in reproducing kernel Hilbert spaces
- https://arxiv.org/abs/2407.06674
- arXiv:2407.06674v4 Announce Type: replace
-Abstract: This paper addresses the problem of approximating an unknown function from point evaluations. When obtaining these point evaluations is costly, minimising the required sample size becomes crucial, and it is unreasonable to reserve a sufficiently large test sample for estimating the approximation accuracy. Therefore, an approximation with a certified quasi-optimality factor is required. This article shows that such an approximation can be obtained when the sought function lies in a reproducing kernel Hilbert space (RKHS) and is to be approximated in a finite-dimensional linear subspace $\mathcal{V}_d$. However, selecting the sample points to minimise the quasi-optimality factor requires optimising over an infinite set of points and computing exact inner products in RKHS, which is often infeasible in practice. Extending results from optimal sampling for $L^2$ approximation, the present paper proves that random points, drawn independently from the Christoffel sampling distribution associated with $\mathcal{V}_d$, can yield a controllable quasi-optimality factor with high probability. Inspired by this result, a novel sampling scheme, coined subspace-informed volume sampling, is introduced and evaluated in numerical experiments, where it outperforms classical i.i.d. Christoffel sampling and continuous volume sampling. To reduce the size of such a random sample, an additional greedy subsampling scheme with provable suboptimality bounds is introduced. Our presentation is of independent interest to the inverse problems community, as it offers a simpler interpretation of the parametrised background data weak (PBDW) method.
- oai:arXiv.org:2407.06674v4
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nando Hegemann, Anthony Nouy, Philipp Trunschke
-
-
- Social feedback amplifies emotional language in online video live chats
- https://arxiv.org/abs/2408.05700
- arXiv:2408.05700v4 Announce Type: replace
-Abstract: A growing share of human interactions now occurs online, where the expression and perception of emotions are often amplified and distorted. Yet, the interplay between different emotions and the extent to which they are driven by external stimuli or social feedback remains poorly understood. We calibrate a multivariate Hawkes self-exciting point process to model the temporal expression of six basic emotions in YouTube Live chats. This framework captures both temporal and cross-emotional dependencies while allowing us to disentangle the influence of video content (exogenous) from peer interactions (endogenous). We find that emotional expressions are up to four times more strongly driven by peer interaction than by video content. Positivity is more contagious, spreading three times more readily, whereas negativity is more memorable, lingering nearly twice as long. Moreover, we observe asymmetric cross-excitation, with negative emotions frequently triggering positive ones, a pattern consistent with trolling dynamics, but not the reverse. These findings highlight the central role of social interaction in shaping emotional dynamics online and the risks of emotional manipulation as human-chatbot interactions become increasingly realistic.
- oai:arXiv.org:2408.05700v4
- cs.SI
- cs.HC
- stat.AP
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yishan Luo, Didier Sornette, Sandro Claudio Lera
-
-
- Unifying Symbolic Music Arrangement: Track-Aware Reconstruction and Structured Tokenization
- https://arxiv.org/abs/2408.15176
- arXiv:2408.15176v5 Announce Type: replace
-Abstract: We present a unified framework for automatic multitrack music arrangement that enables a single pre-trained symbolic music model to handle diverse arrangement scenarios, including reinterpretation, simplification, and additive generation. At its core is a segment-level reconstruction objective operating on token-level disentangled content and style, allowing for flexible any-to-any instrumentation transformations at inference time. To support track-wise modeling, we introduce REMI-z, a structured tokenization scheme for multitrack symbolic music that enhances modeling efficiency and effectiveness for both arrangement tasks and unconditional generation. Our method outperforms task-specific state-of-the-art models on representative tasks in different arrangement scenarios -- band arrangement, piano reduction, and drum arrangement, in both objective metrics and perceptual evaluations. Taken together, our framework demonstrates strong generality and suggests broader applicability in symbolic music-to-music transformation.
- oai:arXiv.org:2408.15176v5
- cs.SD
- cs.CL
- eess.AS
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Longshen Ou, Jingwei Zhao, Ziyu Wang, Gus Xia, Qihao Liang, Torin Hopkins, Ye Wang
-
-
- Data Quality Monitoring for the Hadron Calorimeters Using Transfer Learning for Anomaly Detection
- https://arxiv.org/abs/2408.16612
- arXiv:2408.16612v3 Announce Type: replace
-Abstract: The proliferation of sensors brings an immense volume of spatio-temporal (ST) data in many domains, including monitoring, diagnostics, and prognostics applications. Data curation is a time-consuming process for a large volume of data, making it challenging and expensive to deploy data analytics platforms in new environments. Transfer learning (TL) mechanisms promise to mitigate data sparsity and model complexity by utilizing pre-trained models for a new task. Despite the triumph of TL in fields like computer vision and natural language processing, efforts on complex ST models for anomaly detection (AD) applications are limited. In this study, we present the potential of TL within the context of high-dimensional ST AD with a hybrid autoencoder architecture, incorporating convolutional, graph, and recurrent neural networks. Motivated by the need for improved model accuracy and robustness, particularly in scenarios with limited training data on systems with thousands of sensors, this research investigates the transferability of models trained on different sections of the Hadron Calorimeter of the Compact Muon Solenoid experiment at CERN. The key contributions of the study include exploring TL's potential and limitations within the context of encoder and decoder networks, revealing insights into model initialization and training configurations that enhance performance while substantially reducing trainable parameters and mitigating data contamination effects. Code: https://github.com/muleina/CMS_HCAL_ML_OnlineDQM .
- oai:arXiv.org:2408.16612v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.3390/s25113475
- Sensors 25 (2025) 11
- Mulugeta Weldezgina Asres, Christian Walter Omlin, Long Wang, Pavel Parygin, David Yu, Jay Dittmann, The CMS-HCAL Collaboration
-
-
- MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping
- https://arxiv.org/abs/2409.11316
- arXiv:2409.11316v5 Announce Type: replace
-Abstract: Few-shot Semantic Segmentation addresses the challenge of segmenting objects in query images with only a handful of annotated examples. However, many previous state-of-the-art methods either have to discard intricate local semantic features or suffer from high computational complexity. To address these challenges, we propose a new Few-shot Semantic Segmentation framework based on the Transformer architecture. Our approach introduces the spatial transformer decoder and the contextual mask generation module to improve the relational understanding between support and query images. Moreover, we introduce a multi-scale decoder to refine the segmentation mask by incorporating features from different resolutions in a hierarchical manner. Additionally, our approach integrates global features from intermediate encoder stages to improve contextual understanding, while maintaining a lightweight structure to reduce complexity. This balance between performance and efficiency enables our method to achieve competitive results on benchmark datasets such as PASCAL-5^i and COCO-20^i in both 1-shot and 5-shot settings. Notably, our model with only 1.5 million parameters demonstrates competitive performance while overcoming limitations of existing methodologies. https://github.com/amirrezafateh/MSDNet
- oai:arXiv.org:2409.11316v5
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1016/j.imavis.2025.105672
- Image and Vision Computing 162 (2025) 105672
- Amirreza Fateh, Mohammad Reza Mohammadi, Mohammad Reza Jahed Motlagh
-
-
- Manipulation Facing Threats: Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models
- https://arxiv.org/abs/2409.13174
- arXiv:2409.13174v4 Announce Type: replace
-Abstract: Recently, driven by advancements in Multimodal Large Language Models (MLLMs), Vision Language Action Models (VLAMs) are being proposed to achieve better performance in open-vocabulary scenarios for robotic manipulation tasks. Since manipulation tasks involve direct interaction with the physical world, ensuring robustness and safety during the execution of this task is always a very critical issue. In this paper, by synthesizing current safety research on MLLMs and the specific application scenarios of the manipulation task in the physical world, we comprehensively evaluate VLAMs in the face of potential physical threats. Specifically, we propose the Physical Vulnerability Evaluating Pipeline (PVEP) that can incorporate as many visual modal physical threats as possible for evaluating the physical robustness of VLAMs. The physical threats in PVEP specifically include Out-of-Distribution, Typography-based Visual Prompt, and Adversarial Patch Attacks. By comparing the performance fluctuations of VLAMs before and after being attacked, we provide generalizable \textbf{\textit{Analyses}} of how VLAMs respond to different physical threats.
- oai:arXiv.org:2409.13174v4
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Cheng, Erjia Xiao, Yichi Wang, Chengyuan Yu, Mengshu Sun, Qiang Zhang, Jiahang Cao, Yijie Guo, Ning Liu, Kaidi Xu, Jize Zhang, Chao Shen, Philip Torr, Jindong Gu, Renjing Xu
-
-
- Disentanglement with Factor Quantized Variational Autoencoders
- https://arxiv.org/abs/2409.14851
- arXiv:2409.14851v3 Announce Type: replace
-Abstract: Disentangled representation learning aims to represent the underlying generative factors of a dataset in a latent representation independently of one another. In our work, we propose a discrete variational autoencoder (VAE) based model in which the ground truth information about the generative factors is not provided to the model. We demonstrate the advantages of learning discrete representations over learning continuous representations in facilitating disentanglement. Furthermore, we propose incorporating an inductive bias into the model to further enhance disentanglement. Specifically, we propose scalar quantization of the latent variables in a latent representation with scalar values from a global codebook, and we add a total correlation term to the optimization as an inductive bias. Our method, called FactorQVAE, combines optimization-based disentanglement approaches with discrete representation learning, and it outperforms prior disentanglement methods in terms of two disentanglement metrics (DCI and InfoMEC) while improving the reconstruction performance. Our code can be found at https://github.com/ituvisionlab/FactorQVAE.
- oai:arXiv.org:2409.14851v3
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1016/j.neucom.2025.131968
- Gulcin Baykal, Melih Kandemir, Gozde Unal
-
-
- FusionRF: High-Fidelity Satellite Neural Radiance Fields from Multispectral and Panchromatic Acquisitions
- https://arxiv.org/abs/2409.15132
- arXiv:2409.15132v3 Announce Type: replace
-Abstract: We introduce FusionRF, a novel framework for digital surface reconstruction from satellite multispectral and panchromatic images. Current work has demonstrated the increased accuracy of neural photogrammetry for surface reconstruction from optical satellite images compared to algorithmic methods. Common satellites produce both a panchromatic and multispectral image, which contain high spatial and spectral information respectively. Current neural reconstruction methods require multispectral images to be upsampled with a pansharpening method using the spatial data in the panchromatic image. However, these methods may introduce biases and hallucinations due to domain gaps. FusionRF introduces joint image fusion during optimization through a novel cross-resolution kernel that learns to resolve spatial resolution loss present in multispectral images. As input, FusionRF accepts the original multispectral and panchromatic data, eliminating the need for image preprocessing. FusionRF also leverages multimodal appearance embeddings that encode the image characteristics of each modality and view within a uniform representation. By optimizing on both modalities, FusionRF learns to fuse image modalities while performing reconstruction tasks and eliminates the need for a pansharpening preprocessing step. We evaluate our method on multispectral and panchromatic satellite images from the WorldView-3 satellite in various locations, and show that FusionRF provides an average of 17% reduction in depth reconstruction error, and renders sharp training and novel views.
- oai:arXiv.org:2409.15132v3
- cs.CV
- eess.IV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1109/JSTSP.2025.3628532
- Michael Sprintson, Rama Chellappa, Cheng Peng
-
-
- Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization
- https://arxiv.org/abs/2410.02628
- arXiv:2410.02628v4 Announce Type: replace
-Abstract: Learning conditional distributions $\pi^*(\cdot|x)$ is a central problem in machine learning, which is typically approached via supervised methods with paired data $(x,y) \sim \pi^*$. However, acquiring paired data samples is often challenging, especially in problems such as domain translation. This necessitates the development of $\textit{semi-supervised}$ models that utilize both limited paired data and additional unpaired i.i.d. samples $x \sim \pi^*_x$ and $y \sim \pi^*_y$ from the marginal distributions. The usage of such combined data is complex and often relies on heuristic approaches. To tackle this issue, we propose a new learning paradigm that integrates both paired and unpaired data $\textbf{seamlessly}$ using the data likelihood maximization techniques. We demonstrate that our approach also connects intriguingly with inverse entropic optimal transport (OT). This finding allows us to apply recent advances in computational OT to establish an $\textbf{end-to-end}$ learning algorithm to get $\pi^*(\cdot|x)$. In addition, we derive the universal approximation property, demonstrating that our approach can theoretically recover true conditional distributions with arbitrarily small error. Furthermore, we demonstrate through empirical tests that our method effectively learns conditional distributions using paired and unpaired data simultaneously.
- oai:arXiv.org:2410.02628v4
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mikhail Persiianov, Arip Asadulaev, Nikita Andreev, Nikita Starodubcev, Dmitry Baranchuk, Anastasis Kratsios, Evgeny Burnaev, Alexander Korotin
-
-
- Mastering Contact-rich Tasks by Combining Soft and Rigid Robotics with Imitation Learning
- https://arxiv.org/abs/2410.07787
- arXiv:2410.07787v3 Announce Type: replace
-Abstract: Soft robots have the potential to revolutionize the use of robotic systems with their capability of establishing safe, robust, and adaptable interactions with their environment, but their precise control remains challenging. In contrast, traditional rigid robots offer high accuracy and repeatability but lack the flexibility of soft robots. We argue that combining these characteristics in a hybrid robotic platform can significantly enhance overall capabilities. This work presents a novel hybrid robotic platform that integrates a rigid manipulator with a fully developed soft arm. This system is equipped with the intelligence necessary to perform flexible and generalizable tasks through imitation learning autonomously. The physical softness and machine learning enable our platform to achieve highly generalizable skills, while the rigid components ensure precision and repeatability.
- oai:arXiv.org:2410.07787v3
- cs.RO
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Mariano Ramírez Montero, Ebrahim Shahabi, Giovanni Franzese, Jens Kober, Barbara Mazzolai, Cosimo Della Santina
-
-
- Dynamical loss functions shape landscape topography and improve learning in artificial neural networks
- https://arxiv.org/abs/2410.10690
- arXiv:2410.10690v3 Announce Type: replace
-Abstract: Dynamical loss functions are derived from standard loss functions used in supervised classification tasks, but are modified so that the contribution from each class periodically increases and decreases. These oscillations globally alter the loss landscape without affecting the global minima. In this paper, we demonstrate how to transform cross-entropy and mean squared error into dynamical loss functions. We begin by discussing the impact of increasing the size of the neural network or the learning rate on the depth and sharpness of the minima that the system explores. Building on this intuition, we propose several versions of dynamical loss functions and use a simple classification problem where we can show how they significantly improve validation accuracy for networks of varying sizes. Finally, we explore how the landscape of these dynamical loss functions evolves during training, highlighting the emergence of instabilities that may be linked to edge-of-instability minimization.
- oai:arXiv.org:2410.10690v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Eduardo Lavin Pallero, Miguel Ruiz-Garcia
-
-
- Intelligent Computing Social Modeling and Methodological Innovations in Political Science in the Era of Large Language Models
- https://arxiv.org/abs/2410.16301
- arXiv:2410.16301v2 Announce Type: replace
-Abstract: The recent wave of artificial intelligence, epitomized by large language models (LLMs), has presented opportunities and challenges for methodological innovation in political science, sparking discussions on a potential paradigm shift in the social sciences. However, how can we understand the impact of LLMs on knowledge production and paradigm transformation in the social sciences from a comprehensive perspective that integrates technology and methodology? What are LLMs' specific applications and representative innovative methods in political science research? These questions, particularly from a practical methodological standpoint, remain underexplored. This paper proposes the "Intelligent Computing Social Modeling" (ICSM) method to address these issues by clarifying the critical mechanisms of LLMs. ICSM leverages the strengths of LLMs in idea synthesis and action simulation, advancing intellectual exploration in political science through "simulated social construction" and "simulation validation." By simulating the U.S. presidential election, this study empirically demonstrates the operational pathways and methodological advantages of ICSM. By integrating traditional social science paradigms, ICSM not only enhances the quantitative paradigm's capability to apply big data to assess the impact of factors but also provides qualitative paradigms with evidence for social mechanism discovery at the individual level, offering a powerful tool that balances interpretability and predictability in social science research. The findings suggest that LLMs will drive methodological innovation in political science through integration and improvement rather than direct substitution.
- oai:arXiv.org:2410.16301v2
- cs.CY
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1007/s11366-025-09917-6
- Zhenyu Wang, Dequan Wang, Yi Xu, Lingfeng Zhou, Yiqi Zhou
-
-
- Matryoshka Pilot: Learning to Drive Black-Box LLMs with LLMs
- https://arxiv.org/abs/2410.20749
- arXiv:2410.20749v3 Announce Type: replace
-Abstract: Despite the impressive generative abilities of black-box large language models (LLMs), their inherent opacity hinders further advancements in capabilities such as reasoning, planning, and personalization. Existing works aim to enhance LLM capabilities via domain-specific adaptation, which requires additional training on accessible model parameters, an infeasible option for black-box LLMs. To address this challenge, we introduce Matryoshka Pilot (M-Pilot), a lightweight white-box LLM controller that guides a large-scale black-box LLM generator by decomposing complex tasks into a series of intermediate outputs. Specifically, we consider the black-box LLM as an environment, with M-Pilot serving as a policy to provide intermediate guidance through prompts for driving the black-box LLM. M-Pilot is trained to pivot the outputs of the black-box LLM toward alignment with preferences during iterative interaction, which enables controllable multi-turn generation and self-improvement in optimizing intermediate guidance. Empirical evaluations on diverse tasks demonstrate that our method effectively enhances the capabilities of black-box LLMs in complex, long-horizon tasks. Our code is publicly available at: https://github.com/lichangh20/Matryoshka.
- oai:arXiv.org:2410.20749v3
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Changhao Li, Yuchen Zhuang, Rushi Qiang, Haotian Sun, Hanjun Dai, Chao Zhang, Bo Dai
-
-
- ChemFM as a Scaling Law Guided Foundation Model Pre-trained on Informative Chemicals
- https://arxiv.org/abs/2410.21422
- arXiv:2410.21422v3 Announce Type: replace
-Abstract: Traditional AI methods often rely on task-specific model designs and training, which constrain both the scalability of model size and generalization across different tasks. Here, we introduce ChemFM, a large foundation model specifically developed for chemicals. By conducting a series of scaling experiments, we identify UniChem as the informative molecular database for pre-training the foundation model. ChemFM comprises 3 billion parameters and is pre-trained on 178 million molecules using self-supervised causal language modeling to extract generalizable molecular representations. This model can be adapted to diverse downstream chemical applications using either full-parameter or parameter-efficient fine-tuning methods. ChemFM consistently outperforms state-of-the-art task-specific AI models across all tested tasks. Notably, it achieves up to 67.48% performance improvement across 34 property prediction benchmarks, up to 33.80% reduction in mean average deviation between conditioned and actual properties of generated molecules in conditional molecular generation tasks, and up to 3.7% top-1 accuracy improvement across 4 reaction prediction datasets. Moreover, ChemFM demonstrates its superior performance in predicting antibiotic activity and cytotoxicity, highlighting its potential to advance the discovery of novel antibiotics. Furthermore, we demonstrate that, as a foundation model, ChemFM exhibits strong data efficiency, requiring significantly fewer labeled training samples to achieve state-of-the-art performance. We anticipate that ChemFM will significantly advance chemistry research by providing a foundation model capable of effectively generalizing across a broad range of tasks with minimal additional training.
- oai:arXiv.org:2410.21422v3
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Feiyang Cai, Katelin Zacour, Tianyu Zhu, Tzuen-Rong Tzeng, Yongping Duan, Ling Liu, Srikanth Pilla, Gang Li, Feng Luo
-
-
- Do Automatic Factuality Metrics Measure Factuality? A Critical Evaluation
- https://arxiv.org/abs/2411.16638
- arXiv:2411.16638v4 Announce Type: replace
-Abstract: Modern LLMs can now produce highly readable abstractive summaries, to the point that traditional automated metrics for evaluating summary quality, such as ROUGE, have saturated. However, LLMs still sometimes introduce inaccuracies into summaries, i.e., information inconsistent with or unsupported by the corresponding source. Measuring the occurrence of these often subtle factual inconsistencies automatically has proved challenging. This in turn has motivated development of metrics intended to measure the factual consistency of generated summaries against sources. But are these approaches measuring what they purport to? Or are they mostly exploiting artifacts? In this work, we stress test a range of automatic factuality metrics, including specialized models and LLM-based prompting methods, to probe what they actually capture. Using a shallow classifier to separate ``easy'' examples for factual evaluation where surface features suffice from ``hard'' cases requiring deeper reasoning, we find that all metrics show substantial performance drops on the latter. Furthermore, some metrics are more sensitive to benign, fact-preserving edits than to factual corrections. Building on this observation, we demonstrate that most automatic factuality metrics can be gamed, i.e., their scores can be artificially inflated by appending innocuous, content-free sentences to summaries. Among the metrics tested, the prompt based ChatGPT-DA approach is the most robust and reliable. However, this comes with a notable caveat: Prompting LLMs to assess factuality may overly rely on their parametric knowledge rather than the provided reference when making judgments. Taken together, our findings call into question the reliability of current factuality metrics and prompt a broader reflection on what these metrics are truly measuring.
- oai:arXiv.org:2411.16638v4
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Sanjana Ramprasad, Byron C. Wallace
-
-
- Learning Expressive Random Feature Models via Parametrized Activations
- https://arxiv.org/abs/2411.19468
- arXiv:2411.19468v3 Announce Type: replace
-Abstract: The random feature (RF) method is a powerful kernel approximation technique, but it is typically equipped with fixed activation functions, limiting its adaptability across diverse tasks. To overcome this limitation, we introduce the Random Feature Model with Learnable Activation Functions (RFLAF), a novel statistical model that parameterizes activation functions as weighted sums of basis functions within the random feature framework. Examples of basis functions include radial basis functions, spline functions, polynomials, and so forth. For theoretical results, we consider RBFs as representative basis functions. We start with a single RBF as the activation, and then extend the results to multiple RBFs, demonstrating that RF models with a learnable activation component largely expand the represented function space. We provide estimates on the required number of samples and random features to achieve low excess risks. For experiments, we test RFLAF with three types of bases: radial basis functions, spline functions, and polynomials. Experimental results show that RFLAFs with RBFs and splines consistently outperform other RF models, where RBFs show 3 times faster computational efficiency than splines. We then unfreeze the first-layer parameters and retrain the models, validating the expressivity advantage of learnable activation components on regular two-layer neural networks. Our work provides a deeper understanding of the component of learnable activation functions within modern neural network architectures.
- oai:arXiv.org:2411.19468v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zailin Ma, Jiansheng Yang, Yaodong Yang
-
-
- A bi-fidelity method for the uncertain Vlasov-Poisson system near quasineutrality in an asymptotic-preserving particle-in-cell framework
- https://arxiv.org/abs/2412.05663
- arXiv:2412.05663v2 Announce Type: replace
-Abstract: In this paper, we study the Vlasov-Poisson system with massless electrons (VPME) near quasineutrality and with uncertainties. Based on the idea of reformulation on the Poisson equation by [P. Degond et.al., $\textit{Journal of Computational Physics}$, 229 (16), 2010, pp. 5630--5652], we first consider the deterministic problem and develop an efficient asymptotic-preserving particle-in-cell (AP-PIC) method to capture the quasineutral limit numerically, without resolving the discretizations subject to the small Debye length in plasma. The main challenge and difference compared to previous related works is that we consider the nonlinear Poisson in the VPME system which contains $e^{\phi}$ (with $\phi$ being the electric potential) and provide an explicit scheme. In the second part, we extend to study the uncertainty quantification (UQ) problem and develop an efficient bi-fidelity method for solving the VPME system with multidimensional random parameters, by choosing the Euler-Poisson equation as the low-fidelity model. Several numerical experiments are shown to demonstrate the asymptotic-preserving property of our deterministic solver and the effectiveness of our bi-fidelity method for solving the model with random uncertainties.
- oai:arXiv.org:2412.05663v2
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Guangwei Liu, Liu Liu, Yanli Wang
-
-
- Discrete Poincar\'e inequalities: a review on proofs, equivalent formulations, and behavior of constants
- https://arxiv.org/abs/2412.11796
- arXiv:2412.11796v2 Announce Type: replace
-Abstract: We investigate discrete Poincar\'e inequalities on piecewise polynomial subspaces of the Sobolev spaces H(curl) and H(div) in three space dimensions. We characterize the dependence of the constants on the continuous-level constants, the shape regularity and cardinality of the underlying tetrahedral mesh, and the polynomial degree. One important focus is on meshes being local patches (stars) of tetrahedra from a larger tetrahedral mesh. We also review various equivalent results to the discrete Poincar\'e inequalities, namely stability of discrete constrained minimization problems, discrete inf-sup conditions, bounds on operator norms of piecewise polynomial vector potential operators (Poincar\'e maps), and existence of graph-stable commuting projections.
- oai:arXiv.org:2412.11796v2
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1093/imanum/draf089
- Alexandre Ern, Johnny Guzm\'an, Pratyush Potu, Martin Vohral\'ik
-
-
- REFA: Reference Free Alignment for multi-preference optimization
- https://arxiv.org/abs/2412.16378
- arXiv:2412.16378v4 Announce Type: replace
-Abstract: To mitigate reward hacking from response verbosity, modern preference optimization methods are increasingly adopting length normalization (e.g., SimPO, ORPO, LN-DPO). While effective against this bias, we demonstrate that length normalization itself introduces a failure mode: the URSLA shortcut. Here models learn to satisfy the alignment objective by prematurely truncating low-quality responses rather than learning from their semantic content. To address this, we introduce REFA, a new alignment framework that proposes probabilistic control on a structural token that controls termination. Our core innovation is a new class of regularizers that operate directly on the probability of the End-of-Sequence (EOS) token, a previously unexploited control lever. This token-level intervention provides a principled solution to the URSLA shortcut, ensuring genuine quality improvements. Furthermore, it unlocks a versatile mechanism for managing the alignment-efficiency tradeoff, enabling practitioners to fine-tune models that adhere to specific token budgets. Empirically, REFA achieves a 60.29% win rate and a 52.17% length-controlled win rate on AlpacaEval2 with Llama-3-8B-Instruct, demonstrating the power of our token-level control paradigm.
- oai:arXiv.org:2412.16378v4
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Taneesh Gupta, Rahul Madhavan, Xuchao Zhang, Chetan Bansal, Saravan Rajmohan
-
-
- HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs
- https://arxiv.org/abs/2501.02625
- arXiv:2501.02625v3 Announce Type: replace
-Abstract: Quantized training of Large Language Models (LLMs) remains an open challenge, as maintaining accuracy while performing all matrix multiplications in low precision has proven difficult. This is particularly the case when fine-tuning pre-trained models, which can have large weight and activation outlier values that make lower-precision optimization difficult. To address this, we present HALO, a novel quantization-aware training approach for Transformers that enables accurate and efficient low-precision training by combining 1) strategic placement of Hadamard rotations in both forward and backward passes, which mitigate outliers, 2) high-performance kernel support, and 3) FSDP integration for low-precision communication. Our approach ensures that all large matrix multiplications during the forward and backward passes are executed in lower precision. Applied to LLAMA-family models, HALO achieves near-full-precision-equivalent results during fine-tuning on various tasks, while delivering up to 1.41x end-to-end speedup for full fine-tuning on RTX 4090 GPUs. HALO efficiently supports both standard and parameter-efficient fine-tuning (PEFT). Our results demonstrate the first practical approach to fully quantized LLM fine-tuning that maintains accuracy in 8-bit precision, while delivering performance benefits. Code is available at https://github.com/IST-DASLab/HALO.
- oai:arXiv.org:2501.02625v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Saleh Ashkboos, Mahdi Nikdan, Soroush Tabesh, Roberto L. Castro, Torsten Hoefler, Dan Alistarh
-
-
- SAM-EM: Real-Time Segmentation for Automated Liquid Phase Transmission Electron Microscopy
- https://arxiv.org/abs/2501.03153
- arXiv:2501.03153v2 Announce Type: replace
-Abstract: The absence of robust segmentation frameworks for noisy liquid phase transmission electron microscopy (LPTEM) videos prevents reliable extraction of particle trajectories, creating a major barrier to quantitative analysis and to connecting observed dynamics with materials characterization and design. To address this challenge, we present Segment Anything Model for Electron Microscopy (SAM-EM), a domain-adapted foundation model that unifies segmentation, tracking, and statistical analysis for LPTEM data. Built on Segment Anything Model 2 (SAM 2), SAM-EM is derived through full-model fine-tuning on 46,600 curated LPTEM synthetic video frames, substantially improving mask quality and temporal identity stability compared to zero-shot SAM 2 and existing baselines. Beyond segmentation, SAM-EM integrates particle tracking with statistical tools, including mean-squared displacement and particle displacement distribution analysis, providing an end-to-end framework for extracting and interpreting nanoscale dynamics. Crucially, full fine-tuning allows SAM-EM to remain robust under low signal-to-noise conditions, such as those caused by increased liquid sample thickness in LPTEM experiments. By establishing a reliable analysis pipeline, SAM-EM transforms LPTEM into a quantitative single-particle tracking platform and accelerates its integration into data-driven materials discovery and design. Project page: https://github.com/JamaliLab/SAM-EM.
- oai:arXiv.org:2501.03153v2
- cs.CV
- physics.data-an
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Alexander Wang, Max Xu, Risha Goel, Zain Shabeeb, Isabel Panicker, Vida Jamali
-
-
- Untapped Potential in Self-Optimization of Hopfield Networks: The Creativity of Unsupervised Learning
- https://arxiv.org/abs/2501.04007
- arXiv:2501.04007v3 Announce Type: replace
-Abstract: The Self-Optimization (SO) model can be considered as the third operational mode of the classical Hopfield Network, leveraging the power of associative memory to enhance optimization performance. Moreover, it has been argued to express characteristics of minimal agency, which renders it useful for the study of artificial life. In this article, we draw attention to another facet of the SO model: its capacity for creativity. Drawing on creativity studies, we argue that the model satisfies the necessary and sufficient conditions of a creative process. Moreover, we show that learning is needed to find creative outcomes above chance probability. Furthermore, we demonstrate that modifying the learning parameters in the SO model gives rise to four different regimes that can account for both creative products and inconclusive outcomes, thus providing a framework for studying and understanding the emergence of creative behaviors in artificial systems that learn.
- oai:arXiv.org:2501.04007v3
- cs.NE
- nlin.AO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1162/ARTL.a.10
- Artificial Life, 2025, 1-30
- Natalya Weber, Christian Guckelsberger, Tom Froese
-
-
- Trace Reconstruction of First-Order Reed-Muller Codewords Using Run Statistics
- https://arxiv.org/abs/2501.11393
- arXiv:2501.11393v3 Announce Type: replace
-Abstract: In this paper, we derive an expression for the expected number of runs in a trace of a binary sequence $x \in \{0,1\}^n$ obtained by passing $x$ through a deletion channel that independently deletes each bit with probability $q$. We use this expression to show that if $x$ is a codeword of a first-order Reed-Muller code, and the deletion probability $q$ is 1/2, then $x$ can be reconstructed, with high probability, from $\tilde{O}(n^2)$ many of its traces.
- oai:arXiv.org:2501.11393v3
- cs.IT
- math.IT
- math.PR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shiv Pratap Singh Rathore, Navin Kashyap
-
-
- REINFORCE-ING Chemical Language Models for Drug Discovery
- https://arxiv.org/abs/2501.15971
- arXiv:2501.15971v2 Announce Type: replace
-Abstract: Chemical language models, combined with reinforcement learning (RL), have shown significant promise to efficiently traverse large chemical spaces for drug discovery. However, the performance of various RL algorithms and their best practices for practical drug discovery are still unclear. Here, starting from the principles of the REINFORCE algorithm, we investigate the effect of different components from RL theory including experience replay, hill-climbing, baselines to reduce variance, and alternative reward shaping. We propose a new regularization method more aligned to REINFORCE than current standard practices, and demonstrate how RL hyperparameters can be fine-tuned for effectiveness and efficiency. Lastly, we apply our learnings to practical drug discovery by demonstrating enhanced learning efficiency on frontier binding affinity models by using Boltz2 as a reward model. We share our RL models used in the ACEGEN repository, and hope the experiments here act as a guide to researchers applying RL to chemical language models for drug discovery.
- oai:arXiv.org:2501.15971v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Morgan Thomas, Albert Bou, Jose Carlos G\'omez-Tamayo, Gary Tresadern, Mazen Ahmad, Gianni De Fabritiis
-
-
- Sundial: A Family of Highly Capable Time Series Foundation Models
- https://arxiv.org/abs/2502.00816
- arXiv:2502.00816v3 Announce Type: replace
-Abstract: We introduce Sundial, a family of native, flexible, and scalable time series foundation models. To predict the next-patch's distribution, we propose a TimeFlow Loss based on flow-matching, which facilitates native pre-training of Transformers on continuous-valued time series without discrete tokenization. Conditioned on arbitrary-length time series, our models are pre-trained without specifying any prior distribution and can generate multiple probable predictions, achieving more flexibility in representation learning than using parametric densities. Towards time series foundation models, we leverage minimal but crucial adaptations of Transformers and curate TimeBench with one trillion time points, comprising mostly real-world datasets and synthetic data. By mitigating mode collapse via TimeFlow Loss, we pre-train a family of Sundial models on TimeBench, which achieve unprecedented model capacity and generalization performance. In addition to excellent scalability, Sundial achieves state-of-the-art results on both point and probabilistic forecasting benchmarks with a just-in-time inference speed, i.e., making zero-shot predictions within a few milliseconds. We believe that Sundial's pioneering generative forecasting capability can improve model reliability in real-world decision-making. Code is available at: https://github.com/thuml/Sundial.
- oai:arXiv.org:2502.00816v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yong Liu, Guo Qin, Zhiyuan Shi, Zhi Chen, Caiyin Yang, Xiangdong Huang, Jianmin Wang, Mingsheng Long
-
-
- Stable Port-Hamiltonian Neural Networks
- https://arxiv.org/abs/2502.02480
- arXiv:2502.02480v2 Announce Type: replace
-Abstract: In recent years, nonlinear dynamic system identification using artificial neural networks has garnered attention due to its broad potential applications across science and engineering. However, purely data-driven approaches often struggle with extrapolation and may yield physically implausible forecasts. Furthermore, the learned dynamics can exhibit instabilities, making it difficult to apply such models safely and robustly. This article introduces stable port-Hamiltonian neural networks, a machine learning architecture that incorporates physical biases of energy conservation and dissipation while ensuring global Lyapunov stability of the learned dynamics. Through illustrative and real-world examples, we demonstrate that these strong inductive biases facilitate robust learning of stable dynamics from sparse data, while avoiding instability and surpassing purely data-driven approaches in accuracy and physically meaningful generalization. Furthermore, the model's applicability and potential for data-driven surrogate modeling are showcased on multi-physics simulation data.
- oai:arXiv.org:2502.02480v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Fabian J. Roth, Dominik K. Klein, Maximilian Kannapinn, Jan Peters, Oliver Weeger
-
-
- LLM Query Scheduling with Prefix Reuse and Latency Constraints
- https://arxiv.org/abs/2502.04677
- arXiv:2502.04677v2 Announce Type: replace
-Abstract: The efficient deployment of large language models (LLMs) in online settings requires optimizing inference performance under stringent latency constraints, particularly the time-to-first-token (TTFT) and time-per-output-token (TPOT). This paper focuses on the query scheduling problem for LLM inference with prefix reuse, a technique that leverages shared prefixes across queries to reduce computational overhead. Our work reveals previously unknown limitations of the existing first-come-first-serve (FCFS) and longest-prefix-match (LPM) scheduling strategies with respect to satisfying latency constraints. We present a formal theoretical framework for LLM query scheduling under RadixAttention, a prefix reuse mechanism that stores and reuses intermediate representations in a radix tree structure. Our analysis establishes the NP-hardness of the scheduling problem with prefix reuse under TTFT constraints and proposes a novel scheduling algorithm, $k$-LPM, which generalizes existing methods by balancing prefix reuse and fairness in query processing. Theoretical guarantees demonstrate that $k$-LPM achieves improved TTFT performance under realistic traffic patterns captured by a data generative model. Empirical evaluations in a realistic serving setting validate our findings, showing significant reductions in P99 TTFT compared to baseline methods.
- oai:arXiv.org:2502.04677v2
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gregory Dexter, Shao Tang, Ata Fatahi Baarzi, Qingquan Song, Tejas Dharamsi, Aman Gupta
-
-
- From Haystack to Needle: Label Space Reduction for Zero-shot Classification
- https://arxiv.org/abs/2502.08436
- arXiv:2502.08436v2 Announce Type: replace
-Abstract: We present Label Space Reduction (LSR), a novel method for improving zero-shot classification performance of Large Language Models (LLMs). LSR iteratively refines the classification label space by systematically ranking and reducing candidate classes, enabling the model to concentrate on the most relevant options. By leveraging unlabeled data with the statistical learning capabilities of data-driven models, LSR dynamically optimizes the label space representation at test time. Our experiments across seven benchmarks demonstrate that LSR improves macro-F1 scores by an average of 7.0% (up to 14.2%) with Llama-3.1-70B and 3.3% (up to 11.1%) with Claude-3.5-Sonnet compared to standard zero-shot classification baselines. To reduce the computational overhead of LSR, which requires an additional LLM call at each iteration, we propose distilling the model into a probabilistic classifier, allowing for efficient inference.
- oai:arXiv.org:2502.08436v2
- cs.CL
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Nathan Vandemoortele, Bram Steenwinckel, Femke Ongenae, Sofie Van Hoecke
-
-
- Beyond Covariance Matrix: The Statistical Complexity of Private Linear Regression
- https://arxiv.org/abs/2502.13115
- arXiv:2502.13115v2 Announce Type: replace
-Abstract: We study the statistical complexity of private linear regression under an unknown, potentially ill-conditioned covariate distribution. Somewhat surprisingly, under privacy constraints the intrinsic complexity is \emph{not} captured by the usual covariance matrix but rather its $L_1$ analogues. Building on this insight, we establish minimax convergence rates for both the central and local privacy models and introduce an Information-Weighted Regression method that attains the optimal rates.
- As application, in private linear contextual bandits, we propose an efficient algorithm that achieves rate-optimal regret bounds of order $\sqrt{T}+\frac{1}{\alpha}$ and $\sqrt{T}/\alpha$ under joint and local $\alpha$-privacy models, respectively. Notably, our results demonstrate that joint privacy comes at almost no additional cost, addressing the open problems posed by Azize and Basu (2024).
- oai:arXiv.org:2502.13115v2
- cs.LG
- cs.AI
- cs.CR
- math.ST
- stat.ML
- stat.TH
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fan Chen, Jiachun Li, Alexander Rakhlin, David Simchi-Levi
-
-
- A Survey on Text-Driven 360-Degree Panorama Generation
- https://arxiv.org/abs/2502.14799
- arXiv:2502.14799v3 Announce Type: replace
-Abstract: The advent of text-driven 360-degree panorama generation, enabling the synthesis of 360-degree panoramic images directly from textual descriptions, marks a transformative advancement in immersive visual content creation. This innovation significantly simplifies the traditionally complex process of producing such content. Recent progress in text-to-image diffusion models has accelerated the rapid development in this emerging field. This survey presents a comprehensive review of text-driven 360-degree panorama generation, offering an in-depth analysis of state-of-the-art algorithms. We extend our analysis to two closely related domains: text-driven 360-degree 3D scene generation and text-driven 360-degree panoramic video generation. Furthermore, we critically examine current limitations and propose promising directions for future research. A curated project page with relevant resources and research papers is available at https://littlewhitesea.github.io/Text-Driven-Pano-Gen/.
- oai:arXiv.org:2502.14799v3
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1109/TCSVT.2025.3628738
- Hai Wang, Xiaoyu Xiang, Weihao Xia, Jing-Hao Xue
-
-
- Verdict: A Library for Scaling Judge-Time Compute
- https://arxiv.org/abs/2502.18018
- arXiv:2502.18018v2 Announce Type: replace
-Abstract: The use of LLMs as automated judges ("LLM-as-a-judge") is now widespread, yet standard judges suffer from a multitude of reliability issues. To address these challenges, we introduce Verdict, an open-source library for scaling judge-time compute to enhance the accuracy, reliability, and interpretability of automated evaluators. Verdict leverages the composition of modular reasoning units (such as verification, debate, and aggregation) and increased inference-time compute to improve LLM judge quality. Across a variety of challenging tasks such as content moderation, fact-checking, and hallucination detection, Verdict judges achieve performance competitive with orders-of-magnitude larger fine-tuned judges, prompted judges, and reasoning models. Our framework establishes a foundation for scalable, interpretable, and reliable LLM-based evaluation systems for both researchers and practitioners.
- oai:arXiv.org:2502.18018v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Nimit Kalra, Leonard Tang
-
-
- FPGA-based Emulation and Device-Side Management for CXL-based Memory Tiering Systems
- https://arxiv.org/abs/2502.19233
- arXiv:2502.19233v3 Announce Type: replace
-Abstract: The Compute Express Link (CXL) technology facilitates the extension of CPU memory through byte-addressable SerDes links and cascaded switches, creating complex heterogeneous memory systems where CPU access to various endpoints differs in latency and bandwidth. Effective tiered memory management is essential for optimizing system performance in such systems. However, designing an effective memory tiering system for CXL-extended heterogeneous memory faces challenges: 1) Existing evaluation methods, such as NUMA-based emulation and full-system simulations like GEM5, are limited in assessing hardware-based tiered memory management solutions and handling real-world workloads at scale. 2) Previous memory tiering systems struggle to simultaneously achieve high resolution, low overhead, and high flexibility and compatibility.
- In this study, we first introduce HeteroBox, a configurable emulation platform that leverages real CXL-enabled FPGAs to emulate the performance of various CXL memory architectures. HeteroBox allows one to configure a memory space with multiple regions, each exhibiting distinct CPU-access latency and bandwidth. HeteroBox helps assess the performance of both software-managed and hardware-managed memory tiering systems with high efficiency and fidelity. Based on HeteroBox, we further propose HeteroMem, a hardware-managed memory tiering system that operates on the device side. HeteroMem creates an abstraction layer between the CPU and device memory, effectively monitoring data usage and migrating data to faster memory tiers, thus hiding device-side heterogeneity from the CPU. Evaluations with real-world applications show that HeteroMem delivers high performance while keeping heterogeneous memory management fully transparent to the CPU, achieving a 5.1\% to 16.2\% performance improvement over existing memory tiering solutions.
- oai:arXiv.org:2502.19233v3
- cs.AR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Yiqi Chen, Xiping Dong, Zhe Zhou, Zhao Wang, Jie Zhang, Guangyu Sun
-
-
- Release Date Optimization in MRP Using Clearing Functions
- https://arxiv.org/abs/2503.01862
- arXiv:2503.01862v3 Announce Type: replace
-Abstract: This paper integrates a clearing function (CF)-based release planning approach into Material Requirements Planning (MRP) to address its limitations in modeling capacity constraints and dynamic lead times. The proposed optimization model replaces MRP's backward scheduling step while preserving its overall structure. Performance is evaluated through simulation experiments on two flow shop systems that explore a range of demand uncertainties and utilization levels. Computational results show that the proposed approach is capable of yielding significant improvements over the conventional backward scheduling approach, due to its ability to compute planned lead times for individual production orders as opposed to BOM items.
- oai:arXiv.org:2503.01862v3
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Wolfgang Seiringer, Klaus Altendorfer, Reha Uzsoy
-
-
- Decision-aware training of spatiotemporal forecasting models to select a top K subset of sites for intervention
- https://arxiv.org/abs/2503.05622
- arXiv:2503.05622v3 Announce Type: replace
-Abstract: Optimal allocation of scarce resources is a common problem for decision makers faced with choosing a limited number of locations for intervention. Spatiotemporal prediction models could make such decisions data-driven. A recent performance metric called fraction of best possible reach (BPR) measures the impact of using a model's recommended size K subset of sites compared to the best possible top-K in hindsight. We tackle two open problems related to BPR. First, we explore how to rank all sites numerically given a probabilistic model that predicts event counts jointly across sites. Ranking via the per-site mean is suboptimal for BPR. Instead, we offer a better ranking for BPR backed by decision theory. Second, we explore how to train a probabilistic model's parameters to maximize BPR. Discrete selection of K sites implies all-zero parameter gradients which prevent standard gradient training. We overcome this barrier via advances in perturbed optimizers. We further suggest a training objective that combines likelihood with a decision-aware BPR constraint to deliver high-quality top-K rankings as well as good forecasts for all sites. We demonstrate our approach on two where-to-intervene applications: mitigating opioid-related fatal overdoses for public health and monitoring endangered wildlife.
- oai:arXiv.org:2503.05622v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Forty-second International Conference on Machine Learning, 2025
- Kyle Heuton, F. Samuel Muench, Shikhar Shrestha, Thomas J. Stopka, Michael C. Hughes
-
-
- Assessing the Macro and Micro Effects of Random Seeds on Fine-Tuning Large Language Models
- https://arxiv.org/abs/2503.07329
- arXiv:2503.07329v2 Announce Type: replace
-Abstract: The impact of random seeds in fine-tuning large language models (LLMs) has been largely overlooked despite its potential influence on model performance. In this study, we systematically evaluate the effects of random seeds on LLMs using the GLUE and SuperGLUE benchmarks. We analyze the macro-level impact through traditional metrics like accuracy and F1, calculating their mean and variance to quantify performance fluctuations. To capture the micro-level effects, we introduce a novel metric, consistency, measuring the stability of individual predictions across runs. Our experiments reveal significant variance at both macro and micro levels, underscoring the need for careful consideration of random seeds in fine-tuning and evaluation.
- oai:arXiv.org:2503.07329v2
- cs.CL
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Nghia Bui, Guergana Savova, Lijing Wang
-
-
- CASteer: Steering Diffusion Models for Controllable Generation
- https://arxiv.org/abs/2503.09630
- arXiv:2503.09630v2 Announce Type: replace
-Abstract: Diffusion models have transformed image generation, yet controlling their outputs to reliably erase undesired concepts remains challenging. Existing approaches usually require task-specific training and struggle to generalize across both concrete (e.g., objects) and abstract (e.g., styles) concepts. We propose CASteer (Cross-Attention Steering), a training-free framework for concept erasure in diffusion models using steering vectors to influence hidden representations dynamically. CASteer precomputes concept-specific steering vectors by averaging neural activations from images generated for each target concept. During inference, it dynamically applies these vectors to suppress undesired concepts only when they appear, ensuring that unrelated regions remain unaffected. This selective activation enables precise, context-aware erasure without degrading overall image quality. This approach achieves effective removal of harmful or unwanted content across a wide range of visual concepts, all without model retraining. CASteer outperforms state-of-the-art concept erasure techniques while preserving unrelated content and minimizing unintended effects. Pseudocode is provided in the supplementary.
- oai:arXiv.org:2503.09630v2
- cs.GR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Tatiana Gaintseva, Andreea-Maria Oncescu, Chengcheng Ma, Ziquan Liu, Martin Benning, Gregory Slabaugh, Jiankang Deng, Ismail Elezi
-
-
- Revisiting semi-supervised learning in the era of foundation models
- https://arxiv.org/abs/2503.09707
- arXiv:2503.09707v4 Announce Type: replace
-Abstract: Semi-supervised learning (SSL) leverages abundant unlabeled data alongside limited labeled data to enhance learning. As vision foundation models (VFMs) increasingly serve as the backbone of vision applications, it remains unclear how SSL interacts with these pre-trained models. To address this gap, we develop new SSL benchmark datasets where frozen VFMs underperform and systematically evaluate representative SSL methods. We make a surprising observation: parameter-efficient fine-tuning (PEFT) using only labeled data often matches SSL performance, even without leveraging unlabeled data. This motivates us to revisit self-training, a conceptually simple SSL baseline, where we use the supervised PEFT model to pseudo-label unlabeled data for further training. To overcome the notorious issue of noisy pseudo-labels, we propose ensembling multiple PEFT approaches and VFM backbones to produce more robust pseudo-labels. Empirical results validate the effectiveness of this simple yet powerful approach, providing actionable insights into SSL with VFMs and paving the way for more scalable and practical semi-supervised learning in the era of foundation models.
- oai:arXiv.org:2503.09707v4
- cs.LG
- cs.AI
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Ping Zhang, Zheda Mai, Quang-Huy Nguyen, Wei-Lun Chao
-
-
- Exploring Typographic Visual Prompts Injection Threats in Cross-Modality Generation Models
- https://arxiv.org/abs/2503.11519
- arXiv:2503.11519v4 Announce Type: replace
-Abstract: Current Cross-Modality Generation Models (GMs) demonstrate remarkable capabilities in various generative tasks. Given the ubiquity and information richness of vision modality inputs in real-world scenarios, Cross-Vision tasks, encompassing Vision-Language Perception (VLP) and Image-to-Image (I2I), have attracted significant attention. Large Vision Language Models (LVLMs) and I2I Generation Models (GMs) are employed to handle VLP and I2I tasks, respectively. Previous research indicates that printing typographic words into input images significantly induces LVLMs and I2I GMs to produce disruptive outputs that are semantically aligned with those words. Additionally, visual prompts, as a more sophisticated form of typography, are also revealed to pose security risks to various applications of cross-vision tasks. However, the specific characteristics of the threats posed by visual prompts remain underexplored. In this paper, to comprehensively investigate the performance impact induced by Typographic Visual Prompt Injection (TVPI) in various LVLMs and I2I GMs, we propose the Typographic Visual Prompts Injection Dataset and thoroughly evaluate the TVPI security risks on various open-source and closed-source LVLMs and I2I GMs under visual prompts with different target semantics, deepening the understanding of TVPI threats.
- oai:arXiv.org:2503.11519v4
- cs.CV
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Cheng, Erjia Xiao, Yichi Wang, Lingfeng Zhang, Qiang Zhang, Jiahang Cao, Kaidi Xu, Mengshu Sun, Xiaoshuai Hao, Jindong Gu, Renjing Xu
-
-
- Beyond Single Pass, Looping Through Time: KG-IRAG with Iterative Knowledge Retrieval
- https://arxiv.org/abs/2503.14234
- arXiv:2503.14234v4 Announce Type: replace
-Abstract: Graph Retrieval-Augmented Generation (GraphRAG) has proven highly effective in enhancing the performance of Large Language Models (LLMs) on tasks that require external knowledge. By leveraging Knowledge Graphs (KGs), GraphRAG improves information retrieval for complex reasoning tasks, providing more precise and comprehensive retrieval and generating more accurate responses to QAs. However, most RAG methods fall short in addressing multi-step reasoning, particularly when both information extraction and inference are necessary. To address this limitation, this paper presents Knowledge Graph-Based Iterative Retrieval-Augmented Generation (KG-IRAG), a novel framework that integrates KGs with iterative reasoning to improve LLMs' ability to handle queries involving temporal and logical dependencies. Through iterative retrieval steps, KG-IRAG incrementally gathers relevant data from external KGs, enabling step-by-step reasoning. The proposed approach is particularly suited for scenarios where reasoning is required alongside dynamic temporal data extraction, such as determining optimal travel times based on weather conditions or traffic patterns. Experimental results show that KG-IRAG improves accuracy in complex reasoning tasks by effectively integrating external knowledge with iterative, logic-based retrieval. Additionally, three new datasets: weatherQA-Irish, weatherQA-Sydney, and trafficQA-TFNSW, are formed to evaluate KG-IRAG's performance, demonstrating its potential beyond traditional RAG applications.
- oai:arXiv.org:2503.14234v4
- cs.AI
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Ruiyi Yang, Hao Xue, Imran Razzak, Hakim Hacid, Flora D. Salim
-
-
- Depth Matters: Multimodal RGB-D Perception for Robust Autonomous Agents
- https://arxiv.org/abs/2503.16711
- arXiv:2503.16711v2 Announce Type: replace
-Abstract: Autonomous agents that rely purely on perception to make real-time control decisions require efficient and robust architectures. In this work, we demonstrate that augmenting RGB input with depth information significantly enhances our agents' ability to predict steering commands compared to using RGB alone. We benchmark lightweight recurrent controllers that leverage the fused RGB-D features for sequential decision-making. To train our models, we collect high-quality data using a small-scale autonomous car controlled by an expert driver via a physical steering wheel, capturing varying levels of steering difficulty. Our models were successfully deployed on real hardware and inherently avoided dynamic and static obstacles, under out-of-distribution conditions. Specifically, our findings reveal that the early fusion of depth data results in a highly robust controller, which remains effective even with frame drops and increased noise levels, without compromising the network's focus on the task.
- oai:arXiv.org:2503.16711v2
- cs.RO
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Mihaela-Larisa Clement, M\'onika Farsang, Felix Resch, Mihai-Teodor Stanusoiu, Radu Grosu
-
-
- Interdisciplinary PhDs face barriers to top university placement within their disciplines
- https://arxiv.org/abs/2503.21912
- arXiv:2503.21912v2 Announce Type: replace
-Abstract: Interdisciplinary research has gained prominence as a necessity for addressing complex challenges, yet its impact on early academic careers remains unclear. This study examines how interdisciplinarity during doctoral training influences faculty placement at top universities across diverse fields. Analyzing the career trajectories of over 30,000 tenure-track faculty members who earned their Ph.D. degrees after 2005 and their initial faculty placement at 355 U.S. universities, we find that faculty newly hired by top-ranked universities tend to be less interdisciplinary in their Ph.D. research, particularly when they obtained their Ph.D. from top universities and remain in their Ph.D. research field. This may reflect community trends towards homogeneity: at top universities, the existing faculty research is less interdisciplinary and more aligned with the candidates that they hire (who also exhibit lower interdisciplinarity). This preference disadvantages the placement of women graduates, who exhibit higher interdisciplinarity on average. Furthermore, we show that newly hired faculty with greater interdisciplinarity, when placed at top universities, tend to achieve higher long-term research productivity. This suggests a potential loss in knowledge production if top universities continue to undervalue interdisciplinary candidates. These findings highlight structural barriers in faculty hiring and raise concerns about the long-term consequences of prioritizing disciplinary specialization over interdisciplinary expertise.
- oai:arXiv.org:2503.21912v2
- cs.CY
- cs.DL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Xiang Zheng, Anli Peng, Xi Hong, Cassidy R. Sugimoto, Chaoqun Ni
-
-
- UniFault: A Fault Diagnosis Foundation Model from Bearing Data
- https://arxiv.org/abs/2504.01373
- arXiv:2504.01373v2 Announce Type: replace
-Abstract: Machine fault diagnosis (FD) is a critical task for predictive maintenance, enabling early fault detection and preventing unexpected failures. Despite its importance, existing FD models are operation-specific with limited generalization across diverse datasets. Foundation models (FM) have demonstrated remarkable potential in both visual and language domains, achieving impressive generalization capabilities even with minimal data through few-shot or zero-shot learning. However, translating these advances to FD presents unique hurdles. Unlike the large-scale, cohesive datasets available for images and text, FD datasets are typically smaller and more heterogeneous, with significant variations in sampling frequencies and the number of channels across different systems and applications. This heterogeneity complicates the design of a universal architecture capable of effectively processing such diverse data while maintaining robust feature extraction and learning capabilities. In this paper, we introduce UniFault, a foundation model for fault diagnosis that systematically addresses these issues. Specifically, the model incorporates a comprehensive data harmonization pipeline featuring two key innovations. First, a unification scheme transforms multivariate inputs into standardized univariate sequences. Second, a novel cross-domain temporal fusion strategy mitigates distribution shifts and enriches sample diversity and count, improving the model generalization across varying conditions. UniFault is pretrained on over 6.9 million samples spanning diverse FD datasets, enabling superior few-shot performance. Extensive experiments on real-world FD datasets demonstrate that UniFault achieves state-of-the-art performance, setting a new benchmark for fault diagnosis models and paving the way for more scalable and robust predictive maintenance solutions.
- oai:arXiv.org:2504.01373v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Emadeldeen Eldele, Mohamed Ragab, Xu Qing, Edward, Zhenghua Chen, Min Wu, Xiaoli Li, Jay Lee
-
-
- Tight Regret Bounds for Fixed-Price Bilateral Trade
- https://arxiv.org/abs/2504.04349
- arXiv:2504.04349v2 Announce Type: replace
-Abstract: We examine fixed-price mechanisms in bilateral trade through the lens of regret minimization. Our main results are twofold. (i) For independent values, a near-optimal $\widetilde{\Theta}(T^{2/3})$ tight bound for $\textsf{Global Budget Balance}$ fixed-price mechanisms with two-bit/one-bit feedback. (ii) For correlated/adversarial values, a near-optimal $\Omega(T^{3/4})$ lower bound for $\textsf{Global Budget Balance}$ fixed-price mechanisms with two-bit/one-bit feedback, which improves the best known $\Omega(T^{5/7})$ lower bound obtained in the work [BCCF24] and, up to polylogarithmic factors, matches the $\widetilde{\mathcal{O}}(T^{3 / 4})$ upper bound obtained in the same work. Our work in combination with the previous works [CCCFL24mor, CCCFL24jmlr, AFF24, BCCF24] (essentially) gives a thorough understanding of regret minimization for fixed-price bilateral trade.
- En route, we have developed two technical ingredients that might be of independent interest: (i) A novel algorithmic paradigm, called $\textit{{fractal elimination}}$, to address one-bit feedback and independent values. (ii) A new $\textit{lower-bound construction}$ with novel proof techniques, to address the $\textsf{Global Budget Balance}$ constraint and correlated values.
- oai:arXiv.org:2504.04349v2
- cs.GT
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Houshuang Chen, Yaonan Jin, Pinyan Lu, Chihao Zhang
-
-
- A Proof-Theoretic Approach to the Semantics of Classical Linear Logic (Technical Report)
- https://arxiv.org/abs/2504.08349
- arXiv:2504.08349v3 Announce Type: replace
-Abstract: Linear logic (LL) is a resource-aware, abstract logic programming language that refines both classical and intuitionistic logic. Linear logic semantics is typically presented in one of two ways: by associating each formula with the set of all contexts that can be used to prove it (e.g. phase semantics) or by assigning meaning directly to proofs (e.g. coherence spaces).
- This work proposes a different perspective on assigning meaning to proofs by adopting a proof-theoretic perspective. More specifically, we employ base-extension semantics (BeS) to characterise proofs through the notion of base support.
- Recent developments have shown that BeS is powerful enough to capture proof-theoretic notions in structurally rich logics such as intuitionistic linear logic. In this paper, we extend this framework to the classical case, presenting a proof-theoretic approach to the semantics of the multiplicative-additive fragment of linear logic (MALL).
- oai:arXiv.org:2504.08349v3
- cs.LO
- math.LO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Victor Barroso-Nascimento, Ekaterina Piotrovskaya, Elaine Pimentel
-
-
- Nash Social Welfare with Submodular Valuations: Approximation Algorithms and Integrality Gaps
- https://arxiv.org/abs/2504.09669
- arXiv:2504.09669v3 Announce Type: replace
-Abstract: We study the problem of allocating items to agents with submodular valuations with the goal of maximizing the weighted Nash social welfare (NSW). The best-known results for unweighted and weighted objectives are the $(4+\epsilon)$ approximation given by Garg, Husic, Li, V\'egh, and Vondr\'ak~[STOC 2023] and the $(233+\epsilon)$ approximation given by Feng, Hu, Li, and Zhang~[STOC 2025], respectively.
- In this work, we present a $(3.56+\epsilon)$-approximation algorithm for weighted NSW maximization with submodular valuations, simultaneously improving the previous approximation ratios of both the weighted and unweighted NSW problems. Our algorithm solves the configuration LP of Feng, Hu, Li, and Zhang~[STOC 2025] via a stronger separation oracle that loses an $e/(e-1)$ factor only on small items, and then rounds the solution via a new bipartite multigraph construction. Some key technical ingredients of our analysis include a greedy proxy function, additive within each configuration, that preserves the LP value while lower-bounding the rounded solution, together with refined concentration bounds and a series of mathematical programs analyzed partly by computer assistance.
- On the hardness side, we prove that the configuration LP for weighted NSW with submodular valuations has an integrality gap of at least $(2^{\ln 2}-\epsilon) \approx 1.617 - \epsilon$, which is larger than the current best-known $e/(e-1)-\epsilon \approx 1.582-\epsilon$ hardness~[SODA 2020]. For additive valuations, we show an integrality gap of $(e^{1/e}-\epsilon)$, which proves the tightness of the approximation ratio in~[ICALP 2024] for algorithms based on the configuration LP. For unweighted NSW with additive valuations, we show an integrality gap of $(2^{1/4}-\epsilon) \approx 1.189-\epsilon$, again larger than the current best-known $\sqrt{8/7} \approx 1.069$-hardness~[Math. Oper. Res. 2024].
- oai:arXiv.org:2504.09669v3
- cs.GT
- cs.DS
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Xiaohui Bei, Yuda Feng, Yang Hu, Shi Li, Ruilong Zhang
-
-
- Reactive power flow optimization in AC drive systems
- https://arxiv.org/abs/2504.10360
- arXiv:2504.10360v2 Announce Type: replace
-Abstract: This paper explores a limit avoidance approach in the case of input (modulation) and output (current) constraints with the aim of enhancing system availability of AC drives. Drawing on the observation that, in a certain range of reactive power, there exists a trade-off between current and modulation magnitude, we exploit this freedom and define a constrained optimization problem. We propose two approaches: one in the form of an activation function that drives the reactive power set-point towards safety, and one that uses online feedback optimization to set the reactive power dynamically. Both methods trade reactive power tracking accuracy for increased system robustness. Through a high-fidelity simulation, we compare the benefits of the two methods, highlighting their effectiveness in industrial applications.
- oai:arXiv.org:2504.10360v2
- eess.SY
- cs.SY
- math.OC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Sanjay Chandrasekaran, Catalin Arghir, Pieder Joerg, Florian Doerfler, Silvia Mastellone
-
-
- Edge-weighted Online Stochastic Matching Under Jaillet-Lu LP
- https://arxiv.org/abs/2504.17392
- arXiv:2504.17392v2 Announce Type: replace
-Abstract: The online stochastic matching problem was introduced by [FMMM09], together with the $(1-\frac1e)$-competitive Suggested Matching algorithm. In the most general edge-weighted setting, this ratio has not been improved for more than one decade, until recently [Yan24] beat the $1-\frac1e$ bound and [QFZW23] further improved it to $0.650$. Both works measure the online competitiveness against the offline LP relaxation introduced by Jaillet and Lu [JL14]. The same LP has also played an important role in other settings as it is a natural choice for two-choice online algorithms.
- In this paper, we prove an upper bound of $0.663$ and a lower bound of $0.662$ for edge-weighted online stochastic matching under Jaillet-Lu LP. We propose a simple hard instance and identify the optimal online algorithm for this specific instance which has a competitive ratio of $<0.663$. Despite the simplicity of the instance, we then show that a near-optimal algorithm for it, which has a competitive ratio of $>0.662$, can be generalized to work on all instances without any loss.
- As our algorithm is generalized from a real near-optimal algorithm instead of manually combining trivial strategies, it has two natural advantages compared with previous works: (1) its matching strategy varies from time to time; (2) it utilizes global information about offline vertices. On the other hand, the upper bound suggests that more powerful LPs and multiple-choice strategies are needed if we want to further improve the ratio by $>0.001$.
- oai:arXiv.org:2504.17392v2
- cs.DS
- cs.GT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shuyi Yan
-
-
- Reliable and efficient inverse analysis using physics-informed neural networks with normalized distance functions and adaptive weight tuning
- https://arxiv.org/abs/2504.18091
- arXiv:2504.18091v3 Announce Type: replace
-Abstract: Physics-informed neural networks (PINNs) have attracted significant attention in scientific machine learning for their capability to solve forward and inverse problems governed by partial differential equations. However, the accuracy of PINN solutions is often limited by the treatment of boundary conditions. Conventional penalty-based methods, which incorporate boundary conditions as penalty terms in the loss function, cannot guarantee exact satisfaction of the given boundary conditions and are highly sensitive to the choice of penalty parameters. This paper demonstrates that distance functions, specifically R-functions, can be leveraged to enforce boundary conditions, overcoming these limitations. R-functions provide normalized distance fields, enabling flexible representation of boundary geometries, including non-convex domains, and facilitating various types of boundary conditions. Nevertheless, distance functions alone are insufficient for accurate inverse analysis in PINNs. To address this, we propose an integrated framework that combines the normalized distance field with bias-corrected adaptive weight tuning to improve both accuracy and efficiency. Numerical results show that the proposed method provides more accurate and efficient solutions to various inverse problems than penalty-based approaches, even in the presence of non-convex geometries with complex boundary conditions. This approach offers a reliable and efficient framework for inverse analysis using PINNs, with potential applications across a wide range of engineering problems.
- oai:arXiv.org:2504.18091v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1088/2632-2153/ae1b71
- Shota Deguchi, Mitsuteru Asai
-
-
- TAMO: Fine-Grained Root Cause Analysis via Tool-Assisted LLM Agent with Multi-Modality Observation Data in Cloud-Native Systems
- https://arxiv.org/abs/2504.20462
- arXiv:2504.20462v5 Announce Type: replace
-Abstract: Implementing large language models (LLMs)-driven root cause analysis (RCA) in cloud-native systems has become a key topic of modern software operations and maintenance. However, existing LLM-based approaches face three key challenges: multi-modality input constraints, context window limitations, and dynamic dependence graphs. To address these issues, we propose a tool-assisted LLM agent with multi-modality observation data for fine-grained RCA, namely TAMO, comprising a multi-modality alignment tool, a root cause localization tool, and a fault type classification tool. In detail, TAMO unifies multi-modal observation data into time-aligned representations for cross-modal feature consistency. Based on the unified representations, TAMO then invokes its specialized root cause localization tool and fault type classification tool to further identify the root cause and fault type within the system context. This approach overcomes the limitations of LLMs in processing real-time raw observational data and dynamic service dependencies, guiding the model to generate repair strategies that align with the system context through structured prompt design. Experiments on two benchmark datasets demonstrate that TAMO outperforms state-of-the-art (SOTA) approaches with comparable performance.
- oai:arXiv.org:2504.20462v5
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiao Zhang, Qi Wang, Mingyi Li, Yuan Yuan, Mengbai Xiao, Fuzhen Zhuang, Dongxiao Yu
-
-
- Breaking Down Monocular Ambiguity: Exploiting Temporal Evolution for 3D Lane Detection
- https://arxiv.org/abs/2504.20525
- arXiv:2504.20525v3 Announce Type: replace
-Abstract: Monocular 3D lane detection aims to estimate the 3D position of lanes from frontal-view (FV) images. However, existing methods are fundamentally constrained by the inherent ambiguity of single-frame input, which leads to inaccurate geometric predictions and poor lane integrity, especially for distant lanes. To overcome this, we propose to unlock the rich information embedded in the temporal evolution of the scene as the vehicle moves. Our proposed Geometry-aware Temporal Aggregation Network (GTA-Net) systematically leverages the temporal information from complementary perspectives. First, Temporal Geometry Enhancement Module (TGEM) learns geometric consistency across consecutive frames, effectively recovering depth information from motion to build a reliable 3D scene representation. Second, to enhance lane integrity, Temporal Instance-aware Query Generation (TIQG) module aggregates instance cues from past and present frames. Crucially, for lanes that are ambiguous in the current view, TIQG innovatively synthesizes a pseudo future perspective to generate queries that reveal lanes which would otherwise be missed. The experiments demonstrate that GTA-Net achieves new SoTA results, significantly outperforming existing monocular 3D lane detection solutions.
- oai:arXiv.org:2504.20525v3
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Huan Zheng, Wencheng Han, Tianyi Yan, Cheng-zhong Xu, Jianbing Shen
-
-
- SecRepoBench: Benchmarking Code Agents for Secure Code Completion in Real-World Repositories
- https://arxiv.org/abs/2504.21205
- arXiv:2504.21205v2 Announce Type: replace
-Abstract: This paper introduces SecRepoBench, a benchmark to evaluate code agents on secure code completion in real-world repositories. SecRepoBench has 318 code completion tasks in 27 C/C++ repositories, covering 15 CWEs. We evaluate 28 standalone LLMs and 13 code agents across 3 state-of-the-art agent frameworks using our benchmark. We find that state-of-the-art LLMs struggle with generating correct and secure code completions. However, code agents significantly outperform standalone LLMs. We show that SecRepoBench is more difficult than the prior state-of-the-art benchmark. Finally, our comprehensive analysis provides insights into potential directions for enhancing the ability of code agents to write correct and secure code in real-world repositories.
- oai:arXiv.org:2504.21205v2
- cs.CR
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Chihao Shen, Connor Dilgren, Purva Chiniya, Luke Griffith, Yu Ding, Yizheng Chen
-
-
- A data-driven framework for team selection in Fantasy Premier League
- https://arxiv.org/abs/2505.02170
- arXiv:2505.02170v2 Announce Type: replace
-Abstract: Fantasy football is a billion-dollar industry with millions of participants. Under a fixed budget, managers select squads to maximize future Fantasy Premier League (FPL) points. This study formulates lineup selection as data-driven optimization and develops deterministic and robust mixed-integer linear programs that choose the starting eleven, bench, and captain under budget, formation, and club-quota constraints (maximum three players per club). The objective is parameterized by a hybrid scoring metric that combines realized FPL points with predictions from a linear regression model trained on match-performance features identified using exploratory data analysis techniques. The study benchmarks alternative objectives and cost estimators, including simple and recency-weighted averages, exponential smoothing, autoregressive integrated moving average (ARIMA), and Monte Carlo simulation. Experiments on the 2023/24 Premier League season show that ARIMA with a constrained budget and a rolling window yields the most consistent out-of-sample performance; weighted averages and Monte Carlo are also competitive. Robust variants improve some objectives but are not uniformly superior. The framework provides transparent decision support for fantasy roster construction and extends to FPL chips, multi-week rolling-horizon transfer planning, and week-by-week dynamic captaincy.
- oai:arXiv.org:2505.02170v2
- cs.CE
- cs.AI
- cs.LG
- math.OC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Danial Ramezani, Tai Dinh
-
-
- Leveraging LLMs to Automate Energy-Aware Refactoring of Parallel Scientific Codes
- https://arxiv.org/abs/2505.02184
- arXiv:2505.02184v2 Announce Type: replace
-Abstract: While large language models (LLMs) are increasingly used for generating parallel scientific codes, most efforts emphasize functional correctness, often overlooking performance, especially energy efficiency. We propose LASSI-EE, an automated LLM-based refactoring framework that generates energy-efficient parallel codes through a multi-stage, iterative approach integrating runtime power profiling, energy-aware prompting, self-correcting feedback loops, and an LLM-as-a-Judge agent for automated screening of code solutions. We introduce energy-reduction@k, a novel metric that quantifies expected energy reduction when generating k code candidates and selecting the most energy-efficient, enabling systematic evaluation of multi-attempt generation strategies. Evaluating 20 HeCBench applications and two miniApps on NVIDIA A100 and AMD MI100 GPUs, a single run (k=1) with LASSI-EE delivers refactored parallel codes with an average 29% expected energy reduction at an 81% pass rate, representing a 2.8x improvement over vanilla LLM prompting. Multiple runs (k=3) achieve an average 48% expected energy reduction at a 97% pass rate. These results are consistent across devices, demonstrating LASSI-EE's effectiveness across diverse hardware architectures.
- oai:arXiv.org:2505.02184v2
- cs.AI
- cs.DC
- cs.PL
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Matthew T. Dearing, Yiheng Tao, Xingfu Wu, Zhiling Lan, Valerie Taylor
-
-
- Deep Learning Warm Starts for Trajectory Optimization on the International Space Station
- https://arxiv.org/abs/2505.05588
- arXiv:2505.05588v3 Announce Type: replace
-Abstract: Trajectory optimization is a cornerstone of modern robot autonomy, enabling systems to compute trajectories and controls in real-time while respecting safety and physical constraints. However, it has seen limited usage in spaceflight applications due to its heavy computational demands that exceed the capability of most flight computers. In this work, we provide results on the first in-space demonstration of using machine learning-based warm starts for accelerating trajectory optimization for the Astrobee free-flying robot onboard the International Space Station (ISS). We formulate a data-driven optimal control approach that trains a neural network to learn the structure of the trajectory generation problem being solved using sequential convex programming (SCP). Onboard, this trained neural network predicts solutions for the trajectory generation problem and relies on using the SCP solver to enforce safety constraints for the system. Our trained network reduces the number of solver iterations required for convergence in cases including rotational dynamics by 60% and in cases with obstacles drawn from the training distribution of the warm start model by 50%. This work represents a significant milestone in the use of learning-based control for spaceflight applications and a stepping stone for future advances in the use of machine learning for autonomous guidance, navigation, & control.
- oai:arXiv.org:2505.05588v3
- cs.RO
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Somrita Banerjee, Abhishek Cauligi, Marco Pavone
-
-
- Meta-Semantics Augmented Few-Shot Relational Learning
- https://arxiv.org/abs/2505.05684
- arXiv:2505.05684v4 Announce Type: replace
-Abstract: Few-shot relational learning on knowledge graphs (KGs) aims to perform reasoning over relations with only a few training examples. While current methods have focused primarily on leveraging specific relational information, rich semantics inherent in KGs have been largely overlooked. To bridge this gap, we propose PromptMeta, a novel prompted meta-learning framework that seamlessly integrates meta-semantics with relational information for few-shot relational learning. PromptMeta introduces two core innovations: (1) a Meta-Semantic Prompt (MSP) pool that learns and consolidates high-level meta-semantics shared across tasks, enabling effective knowledge transfer and adaptation to newly emerging relations; and (2) a learnable fusion mechanism that dynamically combines meta-semantics with task-specific relational information tailored to different few-shot tasks. Both components are optimized jointly with model parameters within a meta-learning framework. Extensive experiments and analyses on two real-world KG benchmarks validate the effectiveness of PromptMeta in adapting to new relations with limited supervision.
- oai:arXiv.org:2505.05684v4
- cs.AI
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Han Wu, Jie Yin
-
-
- Augmented Reality for RObots (ARRO): Pointing Visuomotor Policies Towards Visual Robustness
- https://arxiv.org/abs/2505.08627
- arXiv:2505.08627v2 Announce Type: replace
-Abstract: Visuomotor policies trained on human expert demonstrations have recently shown strong performance across a wide range of robotic manipulation tasks. However, these policies remain highly sensitive to domain shifts stemming from background or robot embodiment changes, which limits their generalization capabilities. In this paper, we present ARRO, a novel visual representation that leverages zero-shot open-vocabulary segmentation and object detection models to efficiently mask out task-irrelevant regions of the scene in real time without requiring additional training, modeling of the setup, or camera calibration. By filtering visual distractors and overlaying virtual guides during both training and inference, ARRO improves robustness to scene variations and reduces the need for additional data collection. We extensively evaluate ARRO with Diffusion Policy on a range of tabletop manipulation tasks in both simulation and real-world environments, and further demonstrate its compatibility and effectiveness with generalist robot policies, such as Octo and OpenVLA. Across all settings in our evaluation, ARRO yields consistent performance gains, allows for selective masking to choose between different objects, and shows robustness even to challenging segmentation conditions. Videos showcasing our results are available at: https://augmented-reality-for-robots.github.io/
- oai:arXiv.org:2505.08627v2
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Reihaneh Mirjalili, Tobias Jülg, Florian Walter, Wolfram Burgard
-
-
- NeuralSurv: Deep Survival Analysis with Bayesian Uncertainty Quantification
- https://arxiv.org/abs/2505.11054
- arXiv:2505.11054v2 Announce Type: replace
-Abstract: We introduce NeuralSurv, the first deep survival model to incorporate Bayesian uncertainty quantification. Our non-parametric, architecture-agnostic framework captures time-varying covariate-risk relationships in continuous time via a novel two-stage data-augmentation scheme, for which we establish theoretical guarantees. For efficient posterior inference, we introduce a mean-field variational algorithm with coordinate-ascent updates that scale linearly in model size. By locally linearizing the Bayesian neural network, we obtain full conjugacy and derive all coordinate updates in closed form. In experiments, NeuralSurv delivers superior calibration compared to state-of-the-art deep survival models, while matching or exceeding their discriminative performance across both synthetic benchmarks and real-world datasets. Our results demonstrate the value of Bayesian principles in data-scarce regimes by enhancing model calibration and providing robust, well-calibrated uncertainty estimates for the survival function.
- oai:arXiv.org:2505.11054v2
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Mélodie Monod, Alessandro Micheli, Samir Bhatt
-
-
- Proof-of-Social-Capital: A Consensus Protocol Replacing Stake for Social Capital
- https://arxiv.org/abs/2505.12144
- arXiv:2505.12144v5 Announce Type: replace
-Abstract: Consensus protocols used today in blockchains often rely on computational power or financial stakes, both scarce resources. We propose a novel protocol using social capital, the trust and influence derived from social interactions, as a non-transferable staking mechanism to ensure fairness and decentralization. The methodology integrates zero-knowledge proofs, verifiable credentials, a Whisk-like leader election, and an incentive scheme to prevent Sybil attacks and encourage engagement. The theoretical framework would enhance privacy and equity, though unresolved issues such as off-chain bribery require further research. This work offers a new model aligned with modern social media behavior and lifestyle, with applications in finance, providing practical insights for decentralized system development.
- oai:arXiv.org:2505.12144v5
- cs.CR
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Juraj Mariani, Ivan Homoliak
-
-
- Traversal Verification for Speculative Tree Decoding
- https://arxiv.org/abs/2505.12398
- arXiv:2505.12398v2 Announce Type: replace
-Abstract: Speculative decoding is a promising approach for accelerating large language models. The primary idea is to use a lightweight draft model to speculate the output of the target model for multiple subsequent timesteps, and then verify them in parallel to determine whether the drafted tokens should be accepted or rejected. To enhance acceptance rates, existing frameworks typically construct token trees containing multiple candidates in each timestep. However, their reliance on token-level verification mechanisms introduces two critical limitations: First, the probability distribution of a sequence differs from that of individual tokens, leading to suboptimal acceptance length. Second, current verification schemes begin from the root node and proceed layer by layer in a top-down manner. Once a parent node is rejected, all its child nodes should be discarded, resulting in inefficient utilization of speculative candidates. This paper introduces Traversal Verification, a novel speculative decoding algorithm that fundamentally rethinks the verification paradigm through leaf-to-root traversal. Our approach considers the acceptance of the entire token sequence from the current node to the root, and preserves potentially valid subsequences that would be prematurely discarded by existing methods. We theoretically prove that the probability distribution obtained through Traversal Verification is identical to that of the target model, guaranteeing lossless inference while achieving substantial acceleration gains. Experimental results across different large language models and multiple tasks show that our method consistently improves acceptance length and throughput over existing methods.
- oai:arXiv.org:2505.12398v2
- cs.CL
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yepeng Weng, Qiao Hu, Xujie Chen, Li Liu, Dianwen Mei, Huishi Qiu, Jiang Tian, Zhongchao Shi
-
-
- An Empirical Bayes approach to ARX Estimation
- https://arxiv.org/abs/2505.13384
- arXiv:2505.13384v2 Announce Type: replace
-Abstract: Empirical Bayes inference is based on estimation of the parameters of an a priori distribution from the observed data. The estimation technique for the parameters of the prior, called hyperparameters, is based on the marginal distribution obtained by integrating the joint density of the model with respect to the prior. This is a key step which needs to be properly adapted to the problem at hand. In this paper we study Empirical Bayes inference of linear autoregressive models with inputs (ARX models) for time series and compare the performance of the marginal parametric estimator with that of a full Empirical Bayesian analysis based on the estimated prior. Such a comparison can only make sense for a (realistic) finite data length. In this setting, we propose a new estimation technique for the hyperparameters by a sequential Bayes procedure which is essentially a backward Kalman filter. It turns out that for finite data length the marginal Bayes estimator tends to behave slightly better than the full Empirical Bayesian parameter estimator, and the same holds in the case of slowly varying random parameters.
- oai:arXiv.org:2505.13384v2
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/publicdomain/zero/1.0/
- Timofei Leahu, Giorgio Picci
-
-
- Divide by Question, Conquer by Agent: SPLIT-RAG with Question-Driven Graph Partitioning
- https://arxiv.org/abs/2505.13994
- arXiv:2505.13994v2 Announce Type: replace
-Abstract: Retrieval-Augmented Generation (RAG) systems empower large language models (LLMs) with external knowledge, yet struggle with efficiency-accuracy trade-offs when scaling to large knowledge graphs. Existing approaches often rely on monolithic graph retrieval, incurring unnecessary latency for simple queries and fragmented reasoning for complex multi-hop questions. To address these challenges, this paper proposes SPLIT-RAG, a multi-agent RAG framework built on question-driven semantic graph partitioning and collaborative subgraph retrieval. The framework first creates a Semantic Partitioning of Linked Information, then uses the Type-Specialized knowledge base to achieve Multi-Agent RAG. The attribute-aware graph segmentation divides knowledge graphs into semantically coherent subgraphs, ensuring subgraphs align with different query types, while lightweight LLM agents are assigned to partitioned subgraphs, and only relevant partitions are activated during retrieval, thus reducing the search space while enhancing efficiency. Finally, a hierarchical merging module resolves inconsistencies across subgraph-derived answers through logical verifications. Extensive experimental validation demonstrates considerable improvements compared to existing approaches.
- oai:arXiv.org:2505.13994v2
- cs.AI
- cs.IR
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Ruiyi Yang, Hao Xue, Imran Razzak, Shirui Pan, Hakim Hacid, Flora D. Salim
-
-
- s3: You Don't Need That Much Data to Train a Search Agent via RL
- https://arxiv.org/abs/2505.14146
- arXiv:2505.14146v2 Announce Type: replace
-Abstract: Retrieval-augmented generation (RAG) systems empower large language models (LLMs) to access external knowledge during inference. Recent advances have enabled LLMs to act as search agents via reinforcement learning (RL), improving information acquisition through multi-turn interactions with retrieval engines. However, existing approaches either optimize retrieval using search-only metrics (e.g., NDCG) that ignore downstream utility, or fine-tune the entire LLM to jointly reason and retrieve, entangling retrieval with generation and limiting the real search utility and compatibility with frozen or proprietary models. In this work, we propose s3, a lightweight, model-agnostic framework that decouples the searcher from the generator and trains the searcher using a Gain Beyond RAG reward: the improvement in generation accuracy over naive RAG. s3 requires only 2.4k training samples to outperform baselines trained on over 70x more data, consistently delivering stronger downstream performance across six general QA and five medical QA benchmarks.
- oai:arXiv.org:2505.14146v2
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Pengcheng Jiang, Xueqiang Xu, Jiacheng Lin, Jinfeng Xiao, Zifeng Wang, Jimeng Sun, Jiawei Han
-
-
- RoboRAN: A Unified Robotics Framework for Reinforcement Learning-Based Autonomous Navigation
- https://arxiv.org/abs/2505.14526
- arXiv:2505.14526v2 Announce Type: replace
-Abstract: Autonomous robots must navigate and operate in diverse environments, from terrestrial and aquatic settings to aerial and space domains. While Reinforcement Learning (RL) has shown promise in training policies for specific autonomous robots, existing frameworks and benchmarks are often constrained to unique platforms, limiting generalization and fair comparisons across different mobility systems. In this paper, we present a multi-domain framework for training, evaluating and deploying RL-based navigation policies across diverse robotic platforms and operational environments. Our work presents four key contributions: (1) a scalable and modular framework, facilitating seamless robot-task interchangeability and reproducible training pipelines; (2) sim-to-real transfer demonstrated through real-world experiments with multiple robots, including a satellite robotic simulator, an unmanned surface vessel, and a wheeled ground vehicle; (3) the release of the first open-source API for deploying Isaac Lab-trained policies to real robots, enabling lightweight inference and rapid field validation; and (4) uniform tasks and metrics for cross-medium evaluation, through a unified evaluation testbed to assess performance of navigation tasks in diverse operational conditions (aquatic, terrestrial and space). By ensuring consistency between simulation and real-world deployment, RoboRAN lowers the barrier to developing adaptable RL-based navigation strategies. Its modular design enables straightforward integration of new robots and tasks through predefined templates, fostering reproducibility and extension to diverse domains. To support the community, we release RoboRAN as open-source.
- oai:arXiv.org:2505.14526v2
- cs.RO
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Matteo El-Hariry, Antoine Richard, Ricard M. Castan, Luis F. W. Batista, Matthieu Geist, Cedric Pradalier, Miguel Olivares-Mendez
-
-
- This Time is Different: An Observability Perspective on Time Series Foundation Models
- https://arxiv.org/abs/2505.14766
- arXiv:2505.14766v2 Announce Type: replace
-Abstract: We introduce Toto, a time series forecasting foundation model with 151 million parameters. Toto uses a modern decoder-only architecture coupled with architectural innovations designed to account for specific challenges found in multivariate observability time series data. Toto's pre-training corpus is a mixture of observability data, open datasets, and synthetic data, and is 4-10$\times$ larger than those of leading time series foundation models. Additionally, we introduce BOOM, a large-scale benchmark consisting of 350 million observations across 2,807 real-world time series. For both Toto and BOOM, we source observability data exclusively from Datadog's own telemetry and internal observability metrics. Extensive evaluations demonstrate that Toto achieves state-of-the-art performance on both BOOM and on established general purpose time series forecasting benchmarks. Toto's model weights, inference code, and evaluation scripts, as well as BOOM's data and evaluation code, are all available as open source under the Apache 2.0 License available at https://huggingface.co/Datadog/Toto-Open-Base-1.0 and https://github.com/DataDog/toto.
- oai:arXiv.org:2505.14766v2
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Ben Cohen, Emaad Khwaja, Youssef Doubli, Salahidine Lemaachi, Chris Lettieri, Charles Masson, Hugo Miccinilli, Elise Ramé, Qiqi Ren, Afshin Rostamizadeh, Jean Ogier du Terrail, Anna-Monica Toon, Kan Wang, Stephan Xie, Zongzhe Xu, Viktoriya Zhukova, David Asker, Ameet Talwalkar, Othmane Abou-Amal
-
-
- Looking for an out: Affordances, uncertainty and collision avoidance behavior of human drivers
- https://arxiv.org/abs/2505.14842
- arXiv:2505.14842v2 Announce Type: replace
-Abstract: Understanding collision avoidance behavior is of key importance in traffic safety research and for designing and evaluating advanced driver assistance systems and autonomous vehicles. While existing experimental work has primarily focused on response timing in traffic conflicts, the goal of the present study was to gain a better understanding of human evasive maneuver decisions and execution in collision avoidance scenarios. To this end, we designed a driving simulator study where participants were exposed to one of three surprising opposite direction lateral incursion (ODLI) scenario variants. The results demonstrated that both the participants' collision avoidance behavior patterns and the collision outcome was strongly determined by the scenario kinematics and, more specifically, by the uncertainty associated with the oncoming vehicle's future trajectory. We discuss pitfalls related to hindsight bias when judging the quality of evasive maneuvers in uncertain situations and suggest that the availability of escape paths in collision avoidance scenarios can be usefully understood based on the notion of affordances, and further demonstrate how such affordances can be operationalized in terms of reachable sets. We conclude by discussing how these results can be used to inform computational models of collision avoidance behavior.
- oai:arXiv.org:2505.14842v2
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Leif Johnson, Johan Engström, Aravinda Srinivasan, Ibrahim Özturk, Gustav Markkula
-
-
- ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation
- https://arxiv.org/abs/2505.16495
- arXiv:2505.16495v2 Announce Type: replace
-Abstract: While humans effortlessly draw visual objects and shapes by adaptively allocating attention based on their complexity, existing multimodal large language models (MLLMs) remain constrained by rigid token representations. Bridging this gap, we propose ALTo, an adaptive-length tokenizer for autoregressive mask generation. To achieve this, a novel token length predictor is designed, along with a length regularization term and a differentiable token chunking strategy. We further build ALToLLM, which seamlessly integrates ALTo into an MLLM. Preferences on the trade-off between mask quality and efficiency are implemented by group relative policy optimization (GRPO). Experiments demonstrate that ALToLLM achieves state-of-the-art performance with adaptive token cost on popular segmentation benchmarks. Code and models are released at https://github.com/yayafengzi/ALToLLM.
- oai:arXiv.org:2505.16495v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lingfeng Wang, Hualing Lin, Senda Chen, Tao Wang, Changxu Cheng, Yangyang Zhong, Dong Zheng, Wuyue Zhao
-
-
- Does Synthetic Data Help Named Entity Recognition for Low-Resource Languages?
- https://arxiv.org/abs/2505.16814
- arXiv:2505.16814v3 Announce Type: replace
-Abstract: Named Entity Recognition (NER) for low-resource languages aims to produce robust systems for languages where limited labeled training data is available, and has been an area of increasing interest within NLP. Data augmentation for increasing the amount of low-resource labeled data is a common practice. In this paper, we explore the role of synthetic data in the context of multilingual, low-resource NER, considering 11 languages from diverse language families. Our results suggest that synthetic data does in fact hold promise for low-resource language NER, though we see significant variation between languages.
- oai:arXiv.org:2505.16814v3
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Gaurav Kamath, Sowmya Vajjala
-
-
- The Case for Repeatable, Open, and Expert-Grounded Hallucination Benchmarks in Large Language Models
- https://arxiv.org/abs/2505.17345
- arXiv:2505.17345v2 Announce Type: replace
-Abstract: Plausible, but inaccurate, tokens in model-generated text are widely believed to be pervasive and problematic for the responsible adoption of language models. Despite this concern, there is little scientific work that attempts to measure the prevalence of language model hallucination in a comprehensive way. In this paper, we argue that language models should be evaluated using repeatable, open, and domain-contextualized hallucination benchmarking. We present a taxonomy of hallucinations alongside a case study that demonstrates that when experts are absent from the early stages of data creation, the resulting hallucination metrics lack validity and practical utility.
- oai:arXiv.org:2505.17345v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Justin D. Norman, Michael U. Rivera, D. Alex Hughes
-
-
- Distilling LLM Agent into Small Models with Retrieval and Code Tools
- https://arxiv.org/abs/2505.17612
- arXiv:2505.17612v2 Announce Type: replace
-Abstract: Large language models (LLMs) excel at complex reasoning tasks but remain computationally expensive, limiting their practical deployment. To address this, recent works have focused on distilling reasoning capabilities into smaller language models (sLMs) using chain-of-thought (CoT) traces from teacher LLMs. However, this approach struggles in scenarios requiring rare factual knowledge or precise computation, where sLMs often hallucinate due to limited capability. In this work, we propose Agent Distillation, a framework for transferring not only reasoning capability but full task-solving behavior from LLM-based agents into sLMs with retrieval and code tools. We improve agent distillation along two complementary axes: (1) we introduce a prompting method called first-thought prefix to enhance the quality of teacher-generated trajectories; and (2) we propose self-consistent action generation for improving the test-time robustness of small agents. We evaluate our method on eight reasoning tasks across factual and mathematical domains, covering both in-domain and out-of-domain generalization. Our results show that sLMs as small as 0.5B, 1.5B, and 3B parameters can achieve performance competitive with next-tier larger 1.5B, 3B, and 7B models fine-tuned using CoT distillation, demonstrating the potential of agent distillation for building practical, tool-using small agents. Our code is available at https://github.com/Nardien/agent-distillation.
- oai:arXiv.org:2505.17612v2
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Minki Kang, Jongwon Jeong, Seanie Lee, Jaewoong Cho, Sung Ju Hwang
-
-
- Recurrent Self-Attention Dynamics: An Energy-Agnostic Perspective from Jacobians
- https://arxiv.org/abs/2505.19458
- arXiv:2505.19458v4 Announce Type: replace
-Abstract: The theoretical understanding of self-attention (SA) has been steadily progressing. A prominent line of work studies a class of SA layers that admit an energy function decreased by state updates. While it provides valuable insights into inherent biases in signal propagation, it often relies on idealized assumptions or additional constraints not necessarily present in standard SA. Thus, to broaden our understanding, this work aims to relax these energy constraints and provide an energy-agnostic characterization of inference dynamics by dynamical systems analysis. In more detail, we first consider relaxing the symmetry and single-head constraints traditionally required in energy-based formulations. Next, we show that analyzing the Jacobian matrix of the state is highly valuable when investigating more general SA architectures without necessarily admitting an energy function. It reveals that the normalization layer plays an essential role in suppressing the Lipschitzness of SA and the Jacobian's complex eigenvalues, which correspond to the oscillatory components of the dynamics. In addition, the Lyapunov exponents computed from the Jacobians demonstrate that the normalized dynamics lie close to a critical state, and this criticality serves as a strong indicator of high inference performance. Furthermore, the Jacobian perspective also enables us to develop regularization methods for training and a pseudo-energy for monitoring inference dynamics.
- oai:arXiv.org:2505.19458v4
- cs.LG
- cond-mat.dis-nn
- cs.NE
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Akiyoshi Tomihari, Ryo Karakida
-
-
- On scalable and efficient training of diffusion samplers
- https://arxiv.org/abs/2505.19552
- arXiv:2505.19552v3 Announce Type: replace
-Abstract: We address the challenge of training diffusion models to sample from unnormalized energy distributions in the absence of data, the so-called diffusion samplers. Although these approaches have shown promise, they struggle to scale in more demanding scenarios where energy evaluations are expensive and the sampling space is high-dimensional. To address this limitation, we propose a scalable and sample-efficient framework that properly harmonizes powerful classical sampling methods with the diffusion sampler. Specifically, we utilize Markov chain Monte Carlo (MCMC) samplers with a novelty-based auxiliary energy as a Searcher to collect off-policy samples, using the auxiliary energy function to compensate for exploring modes the diffusion sampler rarely visits. These off-policy samples are then combined with on-policy data to train the diffusion sampler, thereby expanding its coverage of the energy landscape. Furthermore, we identify primacy bias, i.e., the preference of samplers for early experience during training, as the main cause of mode collapse during training, and introduce a periodic re-initialization trick to resolve this issue. Our method significantly improves sample efficiency on standard benchmarks for diffusion samplers and also excels at higher-dimensional problems and real-world molecular conformer generation.
- oai:arXiv.org:2505.19552v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Minkyu Kim, Kiyoung Seong, Dongyeop Woo, Sungsoo Ahn, Minsu Kim
-
-
- A Theoretical Framework for Grokking: Interpolation followed by Riemannian Norm Minimisation
- https://arxiv.org/abs/2505.20172
- arXiv:2505.20172v2 Announce Type: replace
-Abstract: We study the dynamics of gradient flow with small weight decay on general training losses $F: \mathbb{R}^d \to \mathbb{R}$. Under mild regularity assumptions and assuming convergence of the unregularised gradient flow, we show that the trajectory with weight decay $\lambda$ exhibits a two-phase behaviour as $\lambda \to 0$. During the initial fast phase, the trajectory follows the unregularised gradient flow and converges to a manifold of critical points of $F$. Then, at time of order $1/\lambda$, the trajectory enters a slow drift phase and follows a Riemannian gradient flow minimising the $\ell_2$-norm of the parameters. This purely optimisation-based phenomenon offers a natural explanation for the \textit{grokking} effect observed in deep learning, where the training loss rapidly reaches zero while the test loss plateaus for an extended period before suddenly improving. We argue that this generalisation jump can be attributed to the slow norm reduction induced by weight decay, as explained by our analysis. We validate this mechanism empirically on several synthetic regression tasks.
- oai:arXiv.org:2505.20172v2
- cs.LG
- math.OC
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Etienne Boursier, Scott Pesme, Radu-Alexandru Dragomir
-
-
- Robust and Computation-Aware Gaussian Processes
- https://arxiv.org/abs/2505.21133
- arXiv:2505.21133v2 Announce Type: replace
-Abstract: Gaussian processes (GPs) are widely used for regression and optimization tasks such as Bayesian optimization (BO) due to their expressiveness and principled uncertainty estimates. However, in settings with large datasets corrupted by outliers, standard GPs and their sparse approximations struggle with computational tractability and robustness. We introduce Robust Computation-aware Gaussian Process (RCaGP), a novel GP model that jointly addresses these challenges by combining a principled treatment of approximation-induced uncertainty with robust generalized Bayesian updating. The key insight is that robustness and approximation-awareness are not orthogonal but intertwined: approximations can exacerbate the impact of outliers, and mitigating one without the other is insufficient. Unlike previous work that focuses narrowly on either robustness or approximation quality, RCaGP combines both in a principled and scalable framework, thus effectively managing both outliers and computational uncertainties introduced by approximations such as low-rank matrix multiplications. Our model ensures more conservative and reliable uncertainty estimates, a property we rigorously demonstrate. Additionally, we establish a robustness property and show that the mean function is key to preserving it, motivating a tailored model selection scheme for robust mean functions. Empirical results confirm that solving these challenges jointly leads to superior performance across both clean and outlier-contaminated settings, both on regression and high-throughput Bayesian optimization benchmarks.
- oai:arXiv.org:2505.21133v2
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Marshal Arijona Sinaga, Julien Martinelli, Samuel Kaski
-
-
- Large Language Models Miss the Multi-Agent Mark
- https://arxiv.org/abs/2505.21298
- arXiv:2505.21298v3 Announce Type: replace
-Abstract: Recent interest in Multi-Agent Systems of Large Language Models (MAS LLMs) has led to an increase in frameworks leveraging multiple LLMs to tackle complex tasks. However, much of this literature appropriates the terminology of MAS without engaging with its foundational principles. In this position paper, we highlight critical discrepancies between MAS theory and current MAS LLM implementations, focusing on four key areas: the social aspect of agency, environment design, coordination and communication protocols, and measuring emergent behaviours. Our position is that many MAS LLMs lack multi-agent characteristics such as autonomy, social interaction, and structured environments, and often rely on oversimplified, LLM-centric architectures. The field risks slowing down and losing traction by revisiting problems the MAS literature has already addressed. Therefore, we systematically analyse this issue and outline associated research opportunities; we advocate for better integrating established MAS concepts and more precise terminology to avoid mischaracterisation and missed opportunities.
- oai:arXiv.org:2505.21298v3
- cs.MA
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Emanuele La Malfa, Gabriele La Malfa, Samuele Marro, Jie M. Zhang, Elizabeth Black, Michael Luck, Philip Torr, Michael Wooldridge
-
-
- R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Token Routing
- https://arxiv.org/abs/2505.21600
- arXiv:2505.21600v2 Announce Type: replace
-Abstract: Large Language Models (LLMs) achieve impressive reasoning capabilities at the cost of substantial inference overhead, posing significant deployment challenges. Although distilled Small Language Models (SLMs) significantly enhance efficiency, their performance suffers as they fail to follow LLMs' reasoning paths. Luckily, we reveal that only a small fraction of tokens genuinely cause the reasoning paths of LLMs and SLMs to diverge. Most generated tokens are either identical or exhibit neutral differences, such as minor variations in abbreviations or expressions. Leveraging this insight, we introduce **Roads to Rome (R2R)**, a neural token routing method that selectively utilizes LLMs only for these critical, path-divergent tokens, while leaving the majority of token generation to the SLM. We also develop an automatic data generation pipeline that identifies divergent tokens and generates token-level routing labels to train the lightweight router. We apply R2R to combine R1-1.5B and R1-32B models from the DeepSeek family, and evaluate on challenging math, coding, and QA benchmarks. With an average activated parameter size of 5.6B, R2R surpasses the average accuracy of R1-7B by 1.6x, outperforming even the R1-14B model. Compared to R1-32B, it delivers a 2.8x wall-clock speedup with comparable performance, advancing the Pareto frontier of test-time scaling efficiency. Our code is available at https://github.com/thu-nics/R2R.
- oai:arXiv.org:2505.21600v2
- cs.CL
- cs.AI
- cs.LG
- cs.PF
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tianyu Fu, Yi Ge, Yichen You, Enshu Liu, Zhihang Yuan, Guohao Dai, Shengen Yan, Huazhong Yang, Yu Wang
-
-
- Why Machine Learning Models Fail to Fully Capture Epistemic Uncertainty
- https://arxiv.org/abs/2505.23506
- arXiv:2505.23506v2 Announce Type: replace
-Abstract: In recent years various supervised learning methods that disentangle aleatoric and epistemic uncertainty based on second-order distributions have been proposed. We argue that these methods fail to capture critical components of epistemic uncertainty, particularly due to the often-neglected component of model bias. To show this, we make use of a more fine-grained taxonomy of epistemic uncertainty sources in machine learning models, and analyse how the classical bias-variance decomposition of the expected prediction error can be decomposed into different parts reflecting these uncertainties. By using a simulation-based evaluation protocol which encompasses epistemic uncertainty due to both procedural- and data-driven uncertainty components, we illustrate that current methods rarely capture the full spectrum of epistemic uncertainty. Through theoretical insights and synthetic experiments, we show that high model bias can lead to misleadingly low estimates of epistemic uncertainty, and common second-order uncertainty quantification methods systematically blur bias-induced errors into aleatoric estimates, thereby underrepresenting epistemic uncertainty. Our findings underscore that meaningful aleatoric estimates are feasible only if all relevant sources of epistemic uncertainty are properly represented.
- oai:arXiv.org:2505.23506v2
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Sebastián Jiménez, Mira Jürgens, Willem Waegeman
-
-
- DiCoFlex: Model-agnostic diverse counterfactuals with flexible control
- https://arxiv.org/abs/2505.23700
- arXiv:2505.23700v2 Announce Type: replace
-Abstract: Counterfactual explanations play a pivotal role in explainable artificial intelligence (XAI) by offering intuitive, human-understandable alternatives that elucidate machine learning model decisions. Despite their significance, existing methods for generating counterfactuals often require constant access to the predictive model, involve computationally intensive optimization for each instance and lack the flexibility to adapt to new user-defined constraints without retraining. In this paper, we propose DiCoFlex, a novel model-agnostic, conditional generative framework that produces multiple diverse counterfactuals in a single forward pass. Leveraging conditional normalizing flows trained solely on labeled data, DiCoFlex addresses key limitations by enabling real-time user-driven customization of constraints such as sparsity and actionability at inference time. Extensive experiments on standard benchmark datasets show that DiCoFlex outperforms existing methods in terms of validity, diversity, proximity, and constraint adherence, making it a practical and scalable solution for counterfactual generation in sensitive decision-making domains.
- oai:arXiv.org:2505.23700v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Oleksii Furman, Ulvi Movsum-zada, Patryk Marszalek, Maciej Zięba, Marek Śmieja
-
-
- ZPressor: Bottleneck-Aware Compression for Scalable Feed-Forward 3DGS
- https://arxiv.org/abs/2505.23734
- arXiv:2505.23734v3 Announce Type: replace
-Abstract: Feed-forward 3D Gaussian Splatting (3DGS) models have recently emerged as a promising solution for novel view synthesis, enabling one-pass inference without the need for per-scene 3DGS optimization. However, their scalability is fundamentally constrained by the limited capacity of their models, leading to degraded performance or excessive memory consumption as the number of input views increases. In this work, we analyze feed-forward 3DGS frameworks through the lens of the Information Bottleneck principle and introduce ZPressor, a lightweight architecture-agnostic module that enables efficient compression of multi-view inputs into a compact latent state $Z$ that retains essential scene information while discarding redundancy. Concretely, ZPressor enables existing feed-forward 3DGS models to scale to over 100 input views at 480P resolution on an 80GB GPU, by partitioning the views into anchor and support sets and using cross attention to compress the information from the support views into anchor views, forming the compressed latent state $Z$. We show that integrating ZPressor into several state-of-the-art feed-forward 3DGS models consistently improves performance under moderate input views and enhances robustness under dense view settings on two large-scale benchmarks DL3DV-10K and RealEstate10K. The video results, code and trained models are available on our project page: https://lhmd.top/zpressor.
- oai:arXiv.org:2505.23734v3
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Weijie Wang, Donny Y. Chen, Zeyu Zhang, Duochao Shi, Akide Liu, Bohan Zhuang
-
-
- Read Your Own Mind: Reasoning Helps Surface Self-Confidence Signals in LLMs
- https://arxiv.org/abs/2505.23845
- arXiv:2505.23845v2 Announce Type: replace
-Abstract: We study the source of uncertainty in DeepSeek R1-32B by analyzing its self-reported verbal confidence on question answering (QA) tasks. In the default answer-then-confidence setting, the model is regularly over-confident, whereas semantic entropy - obtained by sampling many responses - remains reliable. We hypothesize that this is because of semantic entropy's larger test-time compute, which lets us explore the model's predictive distribution. We show that granting DeepSeek the budget to explore its distribution by forcing a long chain-of-thought before the final answer greatly improves its verbal score effectiveness, even on simple fact-retrieval questions that normally require no reasoning. Furthermore, a separate reader model that sees only the chain can reconstruct very similar confidences, indicating the verbal score might be merely a statistic of the alternatives surfaced during reasoning. Our analysis concludes that reliable uncertainty estimation requires explicit exploration of the generative space, and self-reported confidence is trustworthy only after such exploration.
- oai:arXiv.org:2505.23845v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-sa/4.0/
- UncertaiNLP Workshop at Empirical Methods in Natural Language Processing 2025 (EMNLP 2025)
- Jakub Podolak, Rajeev Verma
-
-
- Model-Informed Flows for Bayesian Inference
- https://arxiv.org/abs/2505.24243
- arXiv:2505.24243v2 Announce Type: replace
-Abstract: Variational inference often struggles with the posterior geometry exhibited by complex hierarchical Bayesian models. Recent advances in flow-based variational families and Variationally Inferred Parameters (VIP) each address aspects of this challenge, but their formal relationship is unexplored. Here, we prove that the combination of VIP and a full-rank Gaussian can be represented exactly as a forward autoregressive flow augmented with a translation term and input from the model's prior. Guided by this theoretical insight, we introduce the Model-Informed Flow (MIF) architecture, which adds the necessary translation mechanism, prior information, and hierarchical ordering. Empirically, MIF delivers tighter posterior approximations and matches or exceeds state-of-the-art performance across a suite of hierarchical and non-hierarchical benchmarks.
- oai:arXiv.org:2505.24243v2
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Joohwan Ko, Justin Domke
-
-
- AURA: Autonomous Upskilling with Retrieval-Augmented Agents
- https://arxiv.org/abs/2506.02507
- arXiv:2506.02507v3 Announce Type: replace
-Abstract: Designing reinforcement learning curricula for agile robots traditionally requires extensive manual tuning of reward functions, environment randomizations, and training configurations. We introduce AURA (Autonomous Upskilling with Retrieval-Augmented Agents), a schema-validated curriculum reinforcement learning (RL) framework that leverages Large Language Models (LLMs) as autonomous designers of multi-stage curricula. AURA transforms user prompts into YAML workflows that encode full reward functions, domain randomization strategies, and training configurations. All files are statically validated before any GPU time is used, ensuring efficient and reliable execution. A retrieval-augmented feedback loop allows specialized LLM agents to design, execute, and refine curriculum stages based on prior training results stored in a vector database, enabling continual improvement over time. Quantitative experiments show that AURA consistently outperforms LLM-guided baselines in generation success rate, humanoid locomotion, and manipulation tasks. Ablation studies highlight the importance of schema validation and retrieval for curriculum quality. AURA successfully trains end-to-end policies directly from user prompts and deploys them zero-shot on a custom humanoid robot in multiple environments - capabilities that did not exist previously with manually designed controllers. By abstracting the complexity of curriculum design, AURA enables scalable and adaptive policy learning pipelines that would be complex to construct by hand. Project page: https://aura-research.org/
- oai:arXiv.org:2506.02507v3
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Alvin Zhu, Yusuke Tanaka, Andrew Goldberg, Dennis Hong
-
-
- LexTime: A Benchmark for Temporal Ordering of Legal Events
- https://arxiv.org/abs/2506.04041
- arXiv:2506.04041v2 Announce Type: replace
-Abstract: Understanding temporal relationships and accurately reconstructing the event timeline is important for case law analysis, compliance monitoring, and legal summarization. However, existing benchmarks lack specialized language evaluation, leaving a gap in understanding how LLMs handle event ordering in legal contexts. We introduce LexTime, a dataset designed to evaluate LLMs' event ordering capabilities in legal language, consisting of 512 instances from U.S. Federal Complaints with annotated event pairs and their temporal relations. Our findings show that (1) LLMs are more accurate on legal event ordering than on narrative texts (up to +10.5%); (2) longer input contexts and implicit events boost accuracy, reaching 80.8% for implicit-explicit event pairs; (3) legal linguistic complexities and nested clauses remain a challenge. While performance is promising, specific features of legal texts remain a bottleneck for legal temporal event reasoning, and we propose concrete modeling directions to better address them.
- oai:arXiv.org:2506.04041v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Claire Barale, Leslie Barrett, Vikram Sunil Bajaj, Michael Rovatsos
-
-
- Struct2D: A Perception-Guided Framework for Spatial Reasoning in MLLMs
- https://arxiv.org/abs/2506.04220
- arXiv:2506.04220v3 Announce Type: replace
-Abstract: Unlocking spatial reasoning in Multimodal Large Language Models (MLLMs) is crucial for enabling intelligent interaction with 3D environments. While prior efforts often rely on explicit 3D inputs or specialized model architectures, we ask: can MLLMs reason about 3D space using only structured 2D representations derived from perception? We introduce Struct2D, a perception-guided prompting framework that combines bird's-eye-view (BEV) images with object marks and object-centric metadata, optionally incorporating egocentric keyframes when needed. Using Struct2D, we conduct an in-depth zero-shot analysis of closed-source MLLMs (e.g., GPT-o3) and find that they exhibit surprisingly strong spatial reasoning abilities when provided with structured 2D inputs, effectively handling tasks such as relative direction estimation and route planning. Building on these insights, we construct Struct2D-Set, a large-scale instruction tuning dataset with 200K fine-grained QA pairs across eight spatial reasoning categories, generated automatically from 3D indoor scenes. We fine-tune an open-source MLLM (Qwen2.5VL) on Struct2D-Set, achieving competitive performance on multiple benchmarks, including 3D question answering, dense captioning, and object grounding. Our approach demonstrates that structured 2D inputs can effectively bridge perception and language reasoning in MLLMs, without requiring explicit 3D representations as input. We will release both our code and dataset to support future research.
- oai:arXiv.org:2506.04220v3
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fangrui Zhu, Hanhui Wang, Yiming Xie, Jing Gu, Tianye Ding, Jianwei Yang, Huaizu Jiang
-
-
- Object-X: Learning to Reconstruct Multi-Modal 3D Object Representations
- https://arxiv.org/abs/2506.04789
- arXiv:2506.04789v3 Announce Type: replace
-Abstract: Learning effective multi-modal 3D representations of objects is essential for numerous applications, such as augmented reality and robotics. Existing methods often rely on task-specific embeddings that are tailored either for semantic understanding or geometric reconstruction. As a result, these embeddings typically cannot be decoded into explicit geometry and simultaneously reused across tasks. In this paper, we propose Object-X, a versatile multi-modal object representation framework capable of encoding rich object embeddings (e.g. images, point cloud, text) and decoding them back into detailed geometric and visual reconstructions. Object-X operates by geometrically grounding the captured modalities in a 3D voxel grid and learning an unstructured embedding fusing the information from the voxels with the object attributes. The learned embedding enables 3D Gaussian Splatting-based object reconstruction, while also supporting a range of downstream tasks, including scene alignment, single-image 3D object reconstruction, and localization. Evaluations on two challenging real-world datasets demonstrate that Object-X produces high-fidelity novel-view synthesis comparable to standard 3D Gaussian Splatting, while significantly improving geometric accuracy. Moreover, Object-X achieves competitive performance with specialized methods in scene alignment and localization. Critically, our object-centric descriptors require 3-4 orders of magnitude less storage compared to traditional image- or point cloud-based approaches, establishing Object-X as a scalable and highly practical solution for multi-modal 3D scene representation.
- oai:arXiv.org:2506.04789v3
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Gaia Di Lorenzo, Federico Tombari, Marc Pollefeys, Daniel Barath
-
-
- SpatialLM: Training Large Language Models for Structured Indoor Modeling
- https://arxiv.org/abs/2506.07491
- arXiv:2506.07491v2 Announce Type: replace
-Abstract: SpatialLM is a large language model designed to process 3D point cloud data and generate structured 3D scene understanding outputs. These outputs include architectural elements like walls, doors, windows, and oriented object boxes with their semantic categories. Unlike previous methods which exploit task-specific network designs, our model adheres to the standard multimodal LLM architecture and is fine-tuned directly from open-source LLMs.
- To train SpatialLM, we collect a large-scale, high-quality synthetic dataset consisting of the point clouds of 12,328 indoor scenes (54,778 rooms) with ground-truth 3D annotations, and conduct a careful study on various modeling and training decisions. On public benchmarks, our model gives state-of-the-art performance in layout estimation and competitive results in 3D object detection. With that, we show a feasible path for enhancing the spatial understanding capabilities of modern LLMs for applications in augmented reality, embodied robotics, and more.
- oai:arXiv.org:2506.07491v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yongsen Mao, Junhao Zhong, Chuan Fang, Jia Zheng, Rui Tang, Hao Zhu, Ping Tan, Zihan Zhou
-
-
- Forward and Backward Simulations for Partially Observable Probability
- https://arxiv.org/abs/2506.08437
- arXiv:2506.08437v3 Announce Type: replace
-Abstract: Data refinement is the standard extension of a refinement relation from programs to datatypes (i.e. a behavioural subtyping relation). Forward/backward simulations provide a tractable method for establishing data refinement, and have been thoroughly studied for nondeterministic programs. However, for standard models of mixed probability and nondeterminism, ordinary assignment statements may not commute with (variable-disjoint) program fragments. This (1) invalidates a key assumption underlying the soundness of simulations, and (2) prevents modelling probabilistic datatypes with encapsulated state.
- We introduce a weakest precondition semantics for Kuifje$_\sqcap$, a language for partially observable Markov decision processes, using so-called loss (function) transformers. We prove soundness of forward/backward simulations in this richer setting, modulo healthiness conditions with a remarkable duality: forward simulations cannot leak information, and backward simulations cannot exploit leaked information.
- oai:arXiv.org:2506.08437v3
- cs.LO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Chris Chen, Annabelle McIver, Carroll Morgan
-
-
- MagCache: Fast Video Generation with Magnitude-Aware Cache
- https://arxiv.org/abs/2506.09045
- arXiv:2506.09045v2 Announce Type: replace
-Abstract: Existing acceleration techniques for video diffusion models often rely on uniform heuristics or time-embedding variants to skip timesteps and reuse cached features. These approaches typically require extensive calibration with curated prompts and risk inconsistent outputs due to prompt-specific overfitting. In this paper, we report a novel and robust discovery: a unified magnitude law observed across different models and prompts. Specifically, the magnitude ratio of successive residual outputs decreases monotonically: steadily over most timesteps and rapidly over the last several steps. Leveraging this insight, we introduce a Magnitude-aware Cache (MagCache) that adaptively skips unimportant timesteps using an error modeling mechanism and adaptive caching strategy. Unlike existing methods, which require dozens of curated samples for calibration, MagCache requires only a single sample. Experimental results show that MagCache achieves 2.10x-2.68x speedups on Open-Sora, CogVideoX, Wan 2.1, and HunyuanVideo, while preserving superior visual fidelity. It significantly outperforms existing methods in LPIPS, SSIM, and PSNR, under similar computational budgets.
- oai:arXiv.org:2506.09045v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- In Proceedings of NeurIPS 2025
- Zehong Ma, Longhui Wei, Feng Wang, Shiliang Zhang, Qi Tian
-
-
- SLED: A Speculative LLM Decoding Framework for Efficient Edge Serving
- https://arxiv.org/abs/2506.09397
- arXiv:2506.09397v5 Announce Type: replace
-Abstract: The growing gap between the increasing complexity of large language models (LLMs) and the limited computational budgets of edge devices poses a key challenge for efficient on-device inference, despite gradual improvements in hardware capabilities. Existing strategies, such as aggressive quantization, pruning, or remote inference, trade accuracy for efficiency or lead to substantial cost burdens. This position paper introduces a new framework that leverages speculative decoding, previously viewed primarily as a decoding acceleration technique for autoregressive generation of LLMs, as a promising approach specifically adapted for edge computing by orchestrating computation across heterogeneous devices. We propose SLED, a framework that allows lightweight edge devices to draft multiple candidate tokens locally using diverse draft models, while a single, shared edge server verifies the tokens utilizing a more precise target model. To further increase the efficiency of verification, the edge server batches the diverse verification requests from devices. This approach supports device heterogeneity and reduces server-side memory footprint by sharing the same upstream target model across multiple devices. Our initial experiments with Jetson Orin Nano, Raspberry Pi 4B/5, and an edge server equipped with 4 Nvidia A100 GPUs indicate substantial benefits: 2.2x more system throughput, 2.8x more system capacity, and better cost efficiency, all without sacrificing model accuracy.
- oai:arXiv.org:2506.09397v5
- cs.DC
- cs.AI
- cs.LG
- cs.NI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Xiangchen Li, Dimitrios Spatharakis, Saeid Ghafouri, Jiakun Fan, Hans Vandierendonck, Deepu John, Bo Ji, Dimitrios Nikolopoulos
-
-
- Inv-Entropy: A Fully Probabilistic Framework for Uncertainty Quantification in Language Models
- https://arxiv.org/abs/2506.09684
- arXiv:2506.09684v2 Announce Type: replace
-Abstract: Large language models (LLMs) have transformed natural language processing, but their reliable deployment requires effective uncertainty quantification (UQ). Existing UQ methods are often heuristic and lack a probabilistic interpretation. This paper begins by providing a theoretical justification for the role of perturbations in UQ for LLMs. We then introduce a dual random walk perspective, modeling input-output pairs as two Markov chains with transition probabilities defined by semantic similarity. Building on this, we propose a fully probabilistic framework based on an inverse model, which quantifies uncertainty by evaluating the diversity of the input space conditioned on a given output through systematic perturbations. Within this framework, we define a new uncertainty measure, Inv-Entropy. A key strength of our framework is its flexibility: it supports various definitions of uncertainty measures, embeddings, perturbation strategies, and similarity metrics. We also propose GAAP, a perturbation algorithm based on genetic algorithms, which enhances the diversity of sampled inputs. In addition, we introduce a new evaluation metric, Temperature Sensitivity of Uncertainty (TSU), which directly assesses uncertainty without relying on correctness as a proxy. Extensive experiments demonstrate that Inv-Entropy outperforms existing semantic UQ methods. The code to reproduce the results can be found at https://github.com/UMDataScienceLab/Uncertainty-Quantification-for-LLMs.
- oai:arXiv.org:2506.09684v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- NeurIPS, 2025
- Haoyi Song, Ruihan Ji, Naichen Shi, Fan Lai, Raed Al Kontar
-
-
- Token Perturbation Guidance for Diffusion Models
- https://arxiv.org/abs/2506.10036
- arXiv:2506.10036v2 Announce Type: replace
-Abstract: Classifier-free guidance (CFG) has become an essential component of modern diffusion models to enhance both generation quality and alignment with input conditions. However, CFG requires specific training procedures and is limited to conditional generation. To address these limitations, we propose Token Perturbation Guidance (TPG), a novel method that applies perturbation matrices directly to intermediate token representations within the diffusion network. TPG employs a norm-preserving shuffling operation to provide effective and stable guidance signals that improve generation quality without architectural changes. As a result, TPG is training-free and agnostic to input conditions, making it readily applicable to both conditional and unconditional generation. We further analyze the guidance term provided by TPG and show that its effect on sampling more closely resembles CFG compared to existing training-free guidance techniques. Extensive experiments on SDXL and Stable Diffusion 2.1 show that TPG achieves nearly a 2$\times$ improvement in FID for unconditional generation over the SDXL baseline, while closely matching CFG in prompt alignment. These results establish TPG as a general, condition-agnostic guidance method that brings CFG-like benefits to a broader class of diffusion models.
- oai:arXiv.org:2506.10036v2
- cs.GR
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Javad Rajabi, Soroush Mehraban, Seyedmorteza Sadat, Babak Taati
-
-
- Balancing Tails when Comparing Distributions: Comprehensive Equity Index (CEI) with Application to Bias Evaluation in Operational Face Biometrics
- https://arxiv.org/abs/2506.10564
- arXiv:2506.10564v2 Announce Type: replace
-Abstract: Demographic bias in high-performance face recognition (FR) systems often eludes detection by existing metrics, especially with respect to subtle disparities in the tails of the score distribution. We introduce the Comprehensive Equity Index (CEI), a novel metric designed to address this limitation. CEI uniquely analyzes genuine and impostor score distributions separately, enabling a configurable focus on tail probabilities while also considering overall distribution shapes. Our extensive experiments (evaluating state-of-the-art FR systems, intentionally biased models, and diverse datasets) confirm CEI's superior ability to detect nuanced biases where previous methods fall short. Furthermore, we present CEI^A, an automated version of the metric that enhances objectivity and simplifies practical application. CEI provides a robust and sensitive tool for operational FR fairness assessment. The proposed methods have been developed particularly for bias evaluation in face biometrics but, in general, they are applicable for comparing statistical distributions in any problem where one is interested in analyzing the distribution tails.
- oai:arXiv.org:2506.10564v2
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Imanol Solano, Julian Fierrez, Aythami Morales, Alejandro Pe\~na, Ruben Tolosana, Francisco Zamora-Martinez, Javier San Agustin
-
-
- Scalable Medication Extraction and Discontinuation Identification from Electronic Health Records Using Large Language Models
- https://arxiv.org/abs/2506.11137
- arXiv:2506.11137v2 Announce Type: replace
-Abstract: Identifying medication discontinuations in electronic health records (EHRs) is vital for patient safety but is often hindered by information being buried in unstructured notes. This study aims to evaluate the capabilities of advanced open-sourced and proprietary large language models (LLMs) in extracting medications and classifying their medication status from EHR notes, focusing on their scalability on medication information extraction without human annotation. We collected three EHR datasets from diverse sources to build the evaluation benchmark. We evaluated 12 advanced LLMs and explored multiple LLM prompting strategies. Performance on medication extraction, medication status classification, and their joint task (extraction then classification) was systematically compared across all experiments. We found that LLMs showed promising performance on medication extraction and discontinuation classification from EHR notes. GPT-4o consistently achieved the highest average F1 scores in all tasks under the zero-shot setting - 94.0% for medication extraction, 78.1% for discontinuation classification, and 72.7% for the joint task. Open-sourced models followed closely: Llama-3.1-70B-Instruct achieved the highest performance in medication status classification on the MIV-Med dataset (68.7%) and in the joint task on both the Re-CASI (76.2%) and MIV-Med (60.2%) datasets. Medical-specific LLMs demonstrated lower performance compared to advanced general-domain LLMs. Few-shot learning generally improved performance, while CoT reasoning showed inconsistent gains. LLMs demonstrate strong potential for medication extraction and discontinuation identification from EHR notes, with open-sourced models offering scalable alternatives to proprietary systems, and few-shot learning can further improve LLMs' capabilities.
- oai:arXiv.org:2506.11137v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Chong Shao, Douglas Snyder, Chiran Li, Bowen Gu, Kerry Ngan, Chun-Ting Yang, Jiageng Wu, Richard Wyss, Kueiyu Joshua Lin, Jie Yang
-
-
- CLIP Meets Diffusion: A Synergistic Approach to Anomaly Detection
- https://arxiv.org/abs/2506.11772
- arXiv:2506.11772v3 Announce Type: replace
-Abstract: Anomaly detection is a complex problem due to the ambiguity in defining anomalies, the diversity of anomaly types (e.g., local and global defects), and the scarcity of training data. As such, it necessitates a comprehensive model capable of capturing both low-level and high-level features, even with limited data. To address this, we propose CLIPFUSION, a method that leverages both discriminative and generative foundation models. Specifically, the CLIP-based discriminative model excels at capturing global features, while the diffusion-based generative model effectively captures local details, creating a synergistic and complementary approach. Notably, we introduce a methodology for utilizing cross-attention maps and feature maps extracted from diffusion models specifically for anomaly detection. Experimental results on benchmark datasets (MVTec-AD, VisA) demonstrate that CLIPFUSION consistently outperforms baseline methods, achieving outstanding performance in both anomaly segmentation and classification. We believe that our method underscores the effectiveness of multi-modal and multi-model fusion in tackling the multifaceted challenges of anomaly detection, providing a scalable solution for real-world applications.
- oai:arXiv.org:2506.11772v3
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Byeongchan Lee, John Won, Seunghyun Lee, Jinwoo Shin
-
-
- Post Persona Alignment for Multi-Session Dialogue Generation
- https://arxiv.org/abs/2506.11857
- arXiv:2506.11857v2 Announce Type: replace
-Abstract: Multi-session persona-based dialogue generation presents challenges in maintaining long-term consistency and generating diverse, personalized responses. While large language models (LLMs) excel in single-session dialogues, they struggle to preserve persona fidelity and conversational coherence across extended interactions. Existing methods typically retrieve persona information before response generation, which can constrain diversity and result in generic outputs. We propose Post Persona Alignment (PPA), a novel two-stage framework that reverses this process. PPA first generates a general response based solely on dialogue context, then retrieves relevant persona memories using the response as a query, and finally refines the response to align with the speaker's persona. This post-hoc alignment strategy promotes naturalness and diversity while preserving consistency and personalization. Experiments on multi-session LLM-generated dialogue data demonstrate that PPA significantly outperforms prior approaches in consistency, diversity, and persona relevance, offering a more flexible and effective paradigm for long-term personalized dialogue generation.
- oai:arXiv.org:2506.11857v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yi-Pei Chen, Noriki Nishida, Hideki Nakayama, Yuji Matsumoto
-
-
- The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being
- https://arxiv.org/abs/2506.12605
- arXiv:2506.12605v3 Announce Type: replace
-Abstract: As large language models (LLMs)-enhanced chatbots grow increasingly expressive and socially responsive, many users are beginning to form companionship-like bonds with them, particularly with simulated AI partners designed to mimic emotionally attuned interlocutors. These emerging AI companions raise critical questions: Can such systems fulfill social needs typically met by human relationships? How do they shape psychological well-being? And what new risks arise as users develop emotional ties to non-human agents? This study investigates how people interact with AI companions, especially simulated partners on CharacterAI, and how this use is associated with users' psychological well-being. We analyzed survey data from 1,131 users and 4,363 chat sessions (413,509 messages) donated by 244 participants, focusing on three dimensions of use: nature of the interaction, interaction intensity, and self-disclosure. By triangulating self-reported primary motivations, open-ended relationship descriptions, and annotated chat transcripts, we identify patterns in how users engage with AI companions and their associations with well-being. Findings suggest that people with smaller social networks are more likely to turn to chatbots for companionship, but that companionship-oriented chatbot usage is consistently associated with lower well-being, particularly when people use the chatbots more intensively, engage in higher levels of self-disclosure, and lack strong human social support. Even though some people turn to chatbots to fulfill social needs, these uses of chatbots do not fully substitute for human connection. As a result, the psychological benefits may be limited, and the relationship could pose risks for more socially isolated or emotionally vulnerable users.
- oai:arXiv.org:2506.12605v3
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yutong Zhang, Dora Zhao, Jeffrey T. Hancock, Robert Kraut, Diyi Yang
-
-
- AlphaDecay: Module-wise Weight Decay for Heavy-Tailed Balancing in LLMs
- https://arxiv.org/abs/2506.14562
- arXiv:2506.14562v3 Announce Type: replace
-Abstract: Weight decay is a standard regularization technique for training large language models (LLMs). While it is common to assign a uniform decay rate to every layer, this approach overlooks the structural diversity of LLMs and the varying spectral properties across modules. In this paper, we introduce AlphaDecay, a simple yet effective method that adaptively assigns different weight decay strengths to each module of an LLM. Our approach is guided by Heavy-Tailed Self-Regularization (HT-SR) theory, which analyzes the empirical spectral density (ESD) of weight correlation matrices to quantify "heavy-tailedness." Modules exhibiting more pronounced heavy-tailed ESDs, reflecting stronger feature learning, are assigned weaker decay, while modules with lighter-tailed spectra receive stronger decay. Our method leverages tailored weight decay assignments to balance the module-wise differences in spectral properties, leading to improved performance. Extensive pre-training tasks with various model sizes from 60M to 1B demonstrate that AlphaDecay achieves better perplexity and generalization than conventional uniform decay and other adaptive decay baselines. Our code is available at https://github.com/hed-ucas/AlphaDecay.
- oai:arXiv.org:2506.14562v3
- cs.CL
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Di He, Songjun Tu, Ajay Jaiswal, Li Shen, Ganzhao Yuan, Shiwei Liu, Lu Yin
-
-
- Multi-Agent Reinforcement Learning for Autonomous Multi-Satellite Earth Observation: A Realistic Case Study
- https://arxiv.org/abs/2506.15207
- arXiv:2506.15207v2 Announce Type: replace
-Abstract: The exponential growth of Low Earth Orbit (LEO) satellites has revolutionised Earth Observation (EO) missions, addressing challenges in climate monitoring, disaster management, and more. However, autonomous coordination in multi-satellite systems remains a fundamental challenge. Traditional optimisation approaches struggle to handle the real-time decision-making demands of dynamic EO missions, necessitating the use of Reinforcement Learning (RL) and Multi-Agent Reinforcement Learning (MARL). In this paper, we investigate RL-based autonomous EO mission planning by modelling single-satellite operations and extending to multi-satellite constellations using MARL frameworks. We address key challenges, including energy and data storage limitations, uncertainties in satellite observations, and the complexities of decentralised coordination under partial observability. By leveraging a near-realistic satellite simulation environment, we evaluate the training stability and performance of state-of-the-art MARL algorithms, including PPO, IPPO, MAPPO, and HAPPO. Our results demonstrate that MARL can effectively balance imaging and resource management while addressing non-stationarity and reward interdependency in multi-satellite coordination. The insights gained from this study provide a foundation for autonomous satellite operations, offering practical guidelines for improving policy learning in decentralised EO missions.
- oai:arXiv.org:2506.15207v2
- cs.AI
- cs.MA
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Mohamad A. Hady, Siyi Hu, Mahardhika Pratama, Jimmy Cao, Ryszard Kowalczyk
-
-
- Dense SAE Latents Are Features, Not Bugs
- https://arxiv.org/abs/2506.15679
- arXiv:2506.15679v2 Announce Type: replace
-Abstract: Sparse autoencoders (SAEs) are designed to extract interpretable features from language models by enforcing a sparsity constraint. Ideally, training an SAE would yield latents that are both sparse and semantically meaningful. However, many SAE latents activate frequently (i.e., are \emph{dense}), raising concerns that they may be undesirable artifacts of the training procedure. In this work, we systematically investigate the geometry, function, and origin of dense latents and show that they are not only persistent but often reflect meaningful model representations. We first demonstrate that dense latents tend to form antipodal pairs that reconstruct specific directions in the residual stream, and that ablating their subspace suppresses the emergence of new dense features in retrained SAEs -- suggesting that high density features are an intrinsic property of the residual space. We then introduce a taxonomy of dense latents, identifying classes tied to position tracking, context binding, entropy regulation, letter-specific output signals, part-of-speech, and principal component reconstruction. Finally, we analyze how these features evolve across layers, revealing a shift from structural features in early layers, to semantic features in mid layers, and finally to output-oriented signals in the last layers of the model. Our findings indicate that dense latents serve functional roles in language model computation and should not be dismissed as training noise.
- oai:arXiv.org:2506.15679v2
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Xiaoqing Sun, Alessandro Stolfo, Joshua Engels, Ben Wu, Senthooran Rajamanoharan, Mrinmaya Sachan, Max Tegmark
-
-
- Scaling GR(1) Synthesis via a Compositional Framework for LTL Discrete Event Control
- https://arxiv.org/abs/2506.16557
- arXiv:2506.16557v2 Announce Type: replace
-Abstract: We present a compositional approach to controller synthesis of discrete event system controllers with linear temporal logic (LTL) goals. We exploit the modular structure of the plant to be controlled, given as a set of labelled transition systems (LTS), to mitigate state explosion that monolithic approaches to synthesis are prone to. Maximally permissive safe controllers are iteratively built for subsets of the plant LTSs by solving weaker control problems. Observational synthesis equivalence is used to reduce the size of the controlled subset of the plant by abstracting away local events. The result of synthesis is also compositional, a set of controllers that when run in parallel ensure the LTL goal. We implement synthesis in the MTSA tool for an expressive subset of LTL, GR(1), and show it computes solutions to problems that can be up to 1000 times larger than those the monolithic approach can solve.
- oai:arXiv.org:2506.16557v2
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/publicdomain/zero/1.0/
- 10.1007/978-3-031-98685-7_10
- In: Piskac, R., Rakamaric, Z. (eds) Computer Aided Verification. CAV 2025. Lecture Notes in Computer Science, vol 15934. Springer, Cham
- Hernan Gagliardi, Victor Braberman, Sebastian Uchitel
-
-
- Mapping The Invisible Internet: Framework and Dataset
- https://arxiv.org/abs/2506.18159
- arXiv:2506.18159v2 Announce Type: replace
-Abstract: This article describes a novel dataset that maps the network layer of the Invisible Internet Project (I2P). The data was collected using the SWARM-I2P framework, which deployed I2P routers as a network of mapping agents that gather information on the network's topology and traffic over an extended period. The dataset documents over 50,000 nodes, including subsets of high-performance (FastSet) nodes and high-capacity nodes characterized by metrics such as bandwidth, latency, and uptime. It also contains detailed records of network traffic and the geographic distribution of thousands of nodes. Data was collected using a combination of methods, including querying router consoles, analysing the network database (netDb), and passive network monitoring. All node identifiers were anonymized to maintain user privacy. The data is publicly available in CSV and TXT formats on Zenodo, with mapping scripts provided on GitHub. This resource provides a foundational understanding of the decentralized routing behaviours that underpin I2P's anonymity, making it suitable for reuse in analyses of tunnel node selection, anonymity network resilience, and adversarial modelling.
- oai:arXiv.org:2506.18159v2
- cs.NI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1016/j.dib.2025.112175
- Siddique Abubakr Muntaka, Jacques Bou Abdo, Kemi Akanbi, Sunkanmi Oluwadare, Faiza Hussein, Oliver Kronyo, Michael Asante
-
-
- Benchmarking Foundation Models and Parameter-Efficient Fine-Tuning for Prognosis Prediction in Medical Imaging
- https://arxiv.org/abs/2506.18434
- arXiv:2506.18434v2 Announce Type: replace
-Abstract: Despite the significant potential of Foundation Models (FMs) in medical imaging, their application to prognosis prediction remains challenging due to data scarcity, class imbalance, and task complexity, which limit their clinical adoption. This study introduces the first structured benchmark to assess the robustness and efficiency of transfer learning strategies for FMs compared with convolutional neural networks (CNNs) in predicting COVID-19 patient outcomes from chest X-rays. The goal is to systematically compare fine-tuning strategies, both classical and parameter-efficient, under realistic clinical constraints related to data scarcity and class imbalance, offering empirical guidance for AI deployment in clinical workflows. Four publicly available COVID-19 chest X-ray datasets were used, covering mortality, severity, and ICU admission, with varying sample sizes and class imbalances. CNNs pretrained on ImageNet and FMs pretrained on general or biomedical datasets were adapted using full fine-tuning, linear probing, and parameter-efficient methods. Models were evaluated under full-data and few-shot regimes using the Matthews Correlation Coefficient (MCC) and Precision Recall AUC (PR-AUC), with cross-validation and class-weighted losses. CNNs with full fine-tuning performed robustly on small, imbalanced datasets, while FMs with Parameter-Efficient Fine-Tuning (PEFT), particularly LoRA and BitFit, achieved competitive results on larger datasets. Severe class imbalance degraded PEFT performance, whereas balanced data mitigated this effect. In few-shot settings, FMs showed limited generalization, with linear probing yielding the most stable results. No single fine-tuning strategy proved universally optimal: CNNs remain dependable for low-resource scenarios, whereas FMs benefit from parameter-efficient methods when data are sufficient.
- oai:arXiv.org:2506.18434v2
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Filippo Ruffini, Elena Mulero Ayllon, Linlin Shen, Paolo Soda, Valerio Guarrasi
-
-
- Inference-Time Reward Hacking in Large Language Models
- https://arxiv.org/abs/2506.19248
- arXiv:2506.19248v2 Announce Type: replace
-Abstract: A common paradigm to improve the performance of large language models is optimizing for a reward model. Reward models assign a numerical score to an LLM's output that indicates, for example, how likely it is to align with user preferences or safety goals. However, reward models are never perfect. They inevitably function as proxies for complex desiderata such as correctness, helpfulness, and safety. By overoptimizing for a misspecified reward, we can subvert intended alignment goals and reduce overall performance, a phenomenon commonly referred to as reward hacking. In this work, we characterize reward hacking in inference-time alignment and demonstrate when and how we can mitigate it by hedging on the proxy reward. We study this phenomenon under Best-of-$n$ (BoN) and Soft Best-of-$n$ (SBoN), and we introduce Best-of-Poisson (BoP) that provides an efficient, near-exact approximation of the optimal reward-KL divergence policy at inference time. We show that the characteristic pattern of hacking as observed in practice (where the true reward first increases before declining) is an inevitable property of a broad class of inference-time mechanisms, including BoN and BoP. To counter this effect, we introduce HedgeTune, an efficient algorithm to find the optimal inference-time parameter. We demonstrate that hedging mitigates reward hacking and achieves superior reward-distortion tradeoffs on math, reasoning, and human-preference setups.
- oai:arXiv.org:2506.19248v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Hadi Khalaf, Claudio Mayrink Verdun, Alex Oesterling, Himabindu Lakkaraju, Flavio du Pin Calmon
-
-
- Recurrent neural network-based robust control systems with closed-loop regional incremental ISS and application to MPC design
- https://arxiv.org/abs/2506.20334
- arXiv:2506.20334v2 Announce Type: replace
-Abstract: This paper investigates the design of output-feedback schemes for systems described by a class of recurrent neural networks. We propose a procedure based on linear matrix inequalities for designing an observer and a static state-feedback controller. The algorithm leverages global and regional incremental input-to-state stability (incremental ISS) and enables the tracking of constant setpoints, ensuring robustness to disturbances and state estimation uncertainty. To address the potential limitations of regional incremental ISS, we introduce an alternative scheme in which the static law is replaced with a tube-based nonlinear model predictive controller (NMPC) that exploits regional incremental ISS properties. We show that these conditions enable the formulation of a robust NMPC law with guarantees of convergence and recursive feasibility, leading to an enlarged region of attraction. Theoretical results are validated through numerical simulations on the pH-neutralisation process benchmark.
- oai:arXiv.org:2506.20334v2
- eess.SY
- cs.LG
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Daniele Ravasio, Marcello Farina, Alessio La Bella, Andrea Ballarino
-
-
- Layer Importance for Mathematical Reasoning is Forged in Pre-Training and Invariant after Post-Training
- https://arxiv.org/abs/2506.22638
- arXiv:2506.22638v2 Announce Type: replace
-Abstract: Large language models improve at math after instruction tuning, reinforcement learning, or knowledge distillation. We ask whether these gains come from major changes in the transformer layers or from smaller adjustments that keep the original structure. Using layer-wise ablation on base and trained variants, we find that math reasoning depends on a few critical layers, which stay important across all post-training methods. Removing these layers reduces math accuracy by as much as 80%, whereas factual recall tasks only show relatively smaller drops. This suggests that specialized layers for mathematical tasks form during pre-training and remain stable afterward. As measured by Normalized Mutual Information (NMI), we find that near these critical layers, tokens drift from their original syntactic clusters toward representations aligned with tokens less syntactically related but potentially more useful for downstream tasks.
- oai:arXiv.org:2506.22638v2
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Aadim Nepal, Safal Shrestha, Anubhav Shrestha, Minwu Kim, Jalal Naghiyev, Ravid Shwartz-Ziv, Keith Ross
-
-
- FedRef: Communication-Efficient Bayesian Fine-Tuning using a Reference Model
- https://arxiv.org/abs/2506.23210
- arXiv:2506.23210v3 Announce Type: replace
-Abstract: Federated learning (FL) collaboratively trains artificial intelligence (AI) models to ensure user data privacy. Sharing only model updates generated from local training on client data with the server enhances user data privacy. However, model performance may suffer due to data and system heterogeneity among clients in FL scenarios. Previous studies have proposed model optimization, fine-tuning, and personalization to achieve improved model performance. Despite these efforts, models resulting from FL scenarios often exhibit catastrophic forgetting, which increases the communication and computational costs of clients for model optimization and raises energy consumption. To address these challenges, we propose a reference model-based fine-tuning method for federated learning that overcomes catastrophic forgetting in each round. Our method is derived from Bayesian parameter-efficient transfer learning and includes a proximal term. It employs a reference model that incorporates previous model parameters and reviews previous global features in the model optimization step to mitigate catastrophic forgetting. As a result, our method achieves higher model performance and lower communication and computational costs for clients than existing methods.
- oai:arXiv.org:2506.23210v3
- cs.LG
- cs.AI
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Taehwan Yoon, Bongjun Choi, Wesley De Neve
-
-
- The Fourier spectral approach to the spatial discretization of quasilinear hyperbolic systems
- https://arxiv.org/abs/2507.00516
- arXiv:2507.00516v2 Announce Type: replace
-Abstract: We discuss the rigorous justification of the spatial discretization by means of Fourier spectral methods of quasilinear first-order hyperbolic systems. We provide uniform stability estimates that grant spectral convergence of the (spatially) semi-discretized solutions towards the corresponding continuous solution provided that the underlying system satisfies some suitable structural assumptions. We consider a setting with sharp low-pass filters and a setting with smooth low-pass filters and argue that - at least theoretically - smooth low-pass filters are operable on a larger class of systems. While our theoretical results are supported with numerical evidence, we also pinpoint some behavior of the numerical method that currently has no theoretical explanation.
- oai:arXiv.org:2507.00516v2
- math.NA
- cs.NA
- math.AP
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Vincent Duch\^ene, Johanna Ulvedal Marstrander
-
-
- Tensor Decomposition Networks for Fast Machine Learning Interatomic Potential Computations
- https://arxiv.org/abs/2507.01131
- arXiv:2507.01131v3 Announce Type: replace
-Abstract: $\rm{SO}(3)$-equivariant networks are the dominant models for machine learning interatomic potentials (MLIPs). The key operation of such networks is the Clebsch-Gordan (CG) tensor product, which is computationally expensive. To accelerate the computation, we develop tensor decomposition networks (TDNs) as a class of approximately equivariant networks in which CG tensor products are replaced by low-rank tensor decompositions, such as the CANDECOMP/PARAFAC (CP) decomposition. With the CP decomposition, we prove (i) a uniform bound on the induced error of $\rm{SO}(3)$-equivariance, and (ii) the universality of approximating any equivariant bilinear map. To further reduce the number of parameters, we propose path-weight sharing that ties all multiplicity-space weights across the $\mathcal{O}(L^3)$ CG paths into a single path without compromising equivariance, where $L$ is the maximum angular degree. The resulting layer acts as a plug-and-play replacement for tensor products in existing networks, and the computational complexity of tensor products is reduced from $\mathcal{O}(L^6)$ to $\mathcal{O}(L^4)$. We evaluate TDNs on PubChemQCR, a newly curated molecular relaxation dataset containing 105 million DFT-calculated snapshots. We also use existing datasets, including OC20 and OC22. Results show that TDNs achieve competitive performance with dramatic speedup in computations. Our code is publicly available as part of the AIRS library (\href{https://github.com/divelab/AIRS/tree/main/OpenMol/TDN}{https://github.com/divelab/AIRS/}).
- oai:arXiv.org:2507.01131v3
- cs.LG
- physics.comp-ph
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yuchao Lin, Cong Fu, Zachary Krueger, Haiyang Yu, Maho Nakata, Jianwen Xie, Emine Kucukbenli, Xiaofeng Qian, Shuiwang Ji
-
-
- From Coarse to Fine-Grained Emotion Annotation: An Immediate Recall Paradigm with Validation through Physiological Evidence and Recognition Performance
- https://arxiv.org/abs/2507.02350
- arXiv:2507.02350v2 Announce Type: replace
-Abstract: Traditional video-induced emotion physiological datasets often use whole-trial annotation, assigning a single emotion label to all data collected during an entire trial. This coarse-grained annotation approach misaligns with the dynamic and temporally localized nature of emotional responses as they unfold with video narratives, introducing label noise that limits emotion recognition algorithm evaluation and performance. To solve the label noise problem caused by coarse-grained annotation, we propose a fine-grained annotation method through an immediate recall paradigm. This paradigm integrates an immediate video replay phase after the initial stimulus viewing, allowing participants to precisely mark the onset timestamp, emotion label, and intensity based on their immediate recall. We validate this paradigm through physiological evidence and recognition performance. Physiological validation of multimodal signals within participant-marked windows revealed rhythm-specific EEG patterns and arousal-dependent GSR responses, with SCRs appearing in 91% of high-arousal versus 6% of low-arousal emotion windows. These objective physiological data changes strongly aligned with subjective annotations, confirming annotation precision. For recognition performance, classification experiments showed that models trained on fine-grained annotations achieved 9.7% higher accuracy than traditional whole-trial labeling, despite using less data. This work not only addresses label noise through fine-grained annotation but also demonstrates that annotation precision outweighs data scale in determining emotion recognition performance.
- oai:arXiv.org:2507.02350v2
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Hao Tang, Songyun Xie, Xinzhou Xie, Can Liao, Xin Zhang, Bohan Li, Zhongyu Tian, Dalu Zheng
-
-
- An efficient asymptotic preserving Monte Carlo method for frequency-dependent radiative transfer equations
- https://arxiv.org/abs/2507.02392
- arXiv:2507.02392v2 Announce Type: replace
-Abstract: In this paper, we develop an efficient asymptotic-preserving (AP) Monte Carlo (MC) method for frequency-dependent radiative transfer equations (RTEs), which is based on the AP-MC method proposed for the gray RTEs in \cite{shi2023efficient}. We follow the characteristics-based approach by Zhang et al. \cite{zhang2023asymptotic} to get a reformulated model, which couples a low dimension convection-diffusion-type equation for macroscopic quantities with a high dimension transport equation for the radiative intensity.
- To recover the correct free streaming limit due to frequency-dependency, we propose a correction to the reformulated macroscopic equation.
- The macroscopic system is solved using a hybrid method:
- convective fluxes are handled by a particle-based MC method, while diffusive fluxes are treated implicitly with central difference.
- To address the nonlinear coupling between radiative intensity and the Planck function across multiple frequency groups, we adopt a Picard iteration with a predictor-corrector procedure, which decouples a global nonlinear system into a linear system restricted to spatial dimension (independent of frequency) with scalar algebraic nonlinear equations.
- Once the macroscopic update is done, the transport equation, with a known emission source provided by the macroscopic variables, is efficiently solved using an implicit MC method. This approach enables larger time steps independent of the speed of light and also the frequency across a wide range, significantly enhancing computational efficiency, especially for frequency-dependent RTEs.
- Formal AP analysis in the diffusive scaling is established. Numerical experiments are performed to demonstrate the high efficiency and AP property of the proposed method.
- oai:arXiv.org:2507.02392v2
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yiyang Hong, Yi Shi, Yi Cai, Tao Xiong
-
-
- Exploring Exponential Runge-Kutta Methods: A Survey
- https://arxiv.org/abs/2507.04024
- arXiv:2507.04024v2 Announce Type: replace
-Abstract: In this survey, we provide an in-depth investigation of exponential Runge-Kutta methods for the numerical integration of initial-value problems. These methods offer a valuable synthesis between classical Runge-Kutta methods, introduced more than a century ago, and exponential integrators, which date back to the 1960s. This manuscript presents both a historical analysis of the development of these methods up to the present day and several examples aimed at making the topic accessible to a broad audience.
- oai:arXiv.org:2507.04024v2
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Alessia And\`o, Nicol\`o Cangiotti, Mattia Sensi
-
-
- Accounting for Subsystem Aging Variability in Battery Energy Storage System Optimization
- https://arxiv.org/abs/2507.04813
- arXiv:2507.04813v2 Announce Type: replace
-Abstract: This paper presents a degradation-cost-aware optimization framework for multi-string battery energy storage systems, emphasizing the impact of inhomogeneous subsystem-level aging in operational decision-making. We evaluate four scenarios for an energy arbitrage application, varying in model precision and treatment of aging costs. Key performance metrics include operational revenue, power schedule mismatch, missed revenues, capacity losses, and revenue generated per unit of capacity loss. Our analysis reveals that ignoring the heterogeneity of subunits may lead to infeasible dispatch plans and reduced revenues. In contrast, combining an accurate representation of degraded subsystems with the consideration of aging costs in the objective function improves the operational accuracy and economic efficiency of BESS with heterogeneously aged subunits. The fully informed scenario, which combines aging-cost-aware optimization with precise string-level modeling, achieves 21% higher revenue per unit of SOH loss compared to the baseline scenario. These findings highlight that modeling aging heterogeneity is not just a technical refinement but may become a crucial enabler for maximizing both short-term profitability and long-term asset value, particularly for long BESS usage scenarios.
- oai:arXiv.org:2507.04813v2
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Melina Graner, Martin Cornejo, Holger Hesse, Andreas Jossen
-
-
- Omni-Router: Sharing Routing Decisions in Sparse Mixture-of-Experts for Speech Recognition
- https://arxiv.org/abs/2507.05724
- arXiv:2507.05724v3 Announce Type: replace
-Abstract: Mixture-of-experts (MoE) architectures have expanded from language modeling to automatic speech recognition (ASR). Traditional MoE methods, such as the Switch Transformer, route experts independently within each layer. Our analysis reveals that routers in most layers make expert choices that are not strongly correlated with the choices of the routers in other layers. To increase the cooperation between experts in different layers and encourage greater specialization, we use a shared router across different MoE layers. We call this model Omni-router Transformer. Extensive experiments on a large-scale pseudo-labeled dataset and evaluations across 10 diverse, out-of-domain ASR benchmarks demonstrate that the Omni-router Transformer is able to achieve lower training loss and consistently outperform dense and Switch Transformer models, reducing average word error rates by 11.2% and 8.2%, respectively, while providing structured expert usage and improved robustness to diverse data.
- oai:arXiv.org:2507.05724v3
- cs.CL
- cs.AI
- cs.LG
- cs.SD
- eess.AS
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zijin Gu, Tatiana Likhomanenko, Navdeep Jaitly
-
-
- DiffSpectra: Molecular Structure Elucidation from Spectra using Diffusion Models
- https://arxiv.org/abs/2507.06853
- arXiv:2507.06853v2 Announce Type: replace
-Abstract: Molecular structure elucidation from spectra is a fundamental challenge in molecular science. Conventional approaches rely heavily on expert interpretation and lack scalability, while retrieval-based machine learning approaches remain constrained by limited reference libraries. Generative models offer a promising alternative, yet most adopt autoregressive architectures that overlook 3D geometry and struggle to integrate diverse spectral modalities. In this work, we present DiffSpectra, a generative framework that formulates molecular structure elucidation as a conditional generation process, directly inferring 2D and 3D molecular structures from multi-modal spectra using diffusion models. Its denoising network is parameterized by the Diffusion Molecule Transformer, an SE(3)-equivariant architecture for geometric modeling, conditioned by SpecFormer, a Transformer-based spectral encoder capturing multi-modal spectral dependencies. Extensive experiments demonstrate that DiffSpectra accurately elucidates molecular structures, achieving 40.76% top-1 and 99.49% top-10 accuracy. Its performance benefits substantially from 3D geometric modeling, SpecFormer pre-training, and multi-modal conditioning. To our knowledge, DiffSpectra is the first framework that unifies multi-modal spectral reasoning and joint 2D/3D generative modeling for de novo molecular structure elucidation.
- oai:arXiv.org:2507.06853v2
- cs.LG
- cs.AI
- cs.CE
- physics.chem-ph
- q-bio.MN
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Liang Wang, Yu Rong, Tingyang Xu, Zhenyi Zhong, Zhiyuan Liu, Pengju Wang, Deli Zhao, Qiang Liu, Shu Wu, Liang Wang, Yang Zhang
-
-
- PGD-based optimization of 3D bobsleigh track centerlines from 2D centerlines for simulation applications
- https://arxiv.org/abs/2507.08393
- arXiv:2507.08393v2 Announce Type: replace
-Abstract: The centerline of a bobsleigh track defines its geometry and is essential for simulation modeling. To reduce bobsleigh training costs, leveraging the centerline of the bobsleigh track to construct a virtual environment that closely replicates real competitive settings presents a promising solution. However, publicly available centerline data are typically limited, and it is imprecise to construct a training system based solely on a 2-dimensional (2D) centerline. To address this practical issue, this paper proposes a method for generating a 3-dimensional (3D) track centerline based on 2D centerline data. Incorporating international track design regulations, the method formulates an optimization problem that considers total track length, height difference, slope constraints, and geometric continuity. A Projected Gradient Descent (PGD) algorithm is used to solve the optimization problem. The generated 3D centerlines are compared with real track data, and the results show that the method can reproduce realistic centerline trends from original or scaled 2D data. For the selected track segment, the relative errors in total length, height difference, and average slope are within 1.7%, 3.2%, and 4.1%, respectively, for real 2D data, and within 1.1%, 3.5%, and 4.3%, respectively, for scaled data. All slope values remain within the allowable limits. Moreover, by adjusting the segmentation or modifying the weight of height difference in the cost function, various centerline styles applicable to different competitions can be generated. Under different segmentation and weight factors, the maximum errors reach up to 4.4%, 4.8%, and 9.8%, and 4.4%, 4.8%, and 10.0%, respectively. The proposed method provides a flexible and efficient tool for supporting bobsleigh track centerline design.
- oai:arXiv.org:2507.08393v2
- eess.SY
- cs.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Zhe Chen, Huichao Zhao, Yongfeng Jiang, Minghui Bai, Lun Li, Jicheng Chen
-
-
- mmE-Loc: Facilitating Accurate Drone Landing with Ultra-High-Frequency Localization
- https://arxiv.org/abs/2507.09469
- arXiv:2507.09469v3 Announce Type: replace
-Abstract: For precise, efficient, and safe drone landings, ground platforms must locate descending drones accurately and in real time and guide them to designated spots. While mmWave sensing combined with cameras improves localization accuracy, the lower sampling frequency of traditional frame cameras compared to mmWave radar creates bottlenecks in system throughput. In this work, we replace the traditional frame camera with an event camera, a novel sensor whose sampling frequency harmonizes with mmWave radar within the ground platform setup, and introduce mmE-Loc, a high-precision, low-latency ground localization system designed for precise drone landings. To fully exploit the \textit{temporal consistency} and \textit{spatial complementarity} between these two modalities, we propose two innovative modules: \textit{(i)} the Consistency-instructed Collaborative Tracking module, which further leverages the drone's physical knowledge of periodic micro-motions and structure for accurate measurement extraction, and \textit{(ii)} the Graph-informed Adaptive Joint Optimization module, which integrates drone motion information for efficient sensor fusion and drone localization. Real-world experiments conducted in landing scenarios with a drone delivery company demonstrate that mmE-Loc significantly outperforms state-of-the-art methods in both accuracy and latency.
- oai:arXiv.org:2507.09469v3
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Haoyang Wang, Jingao Xu, Xinyu Luo, Ting Zhang, Xuecheng Chen, Ruiyang Duan, Jialong Chen, Yunhao Liu, Jianfeng Zheng, Weijie Hong, Xinlei Chen
-
-
- Compliance Minimization via Physics-Informed Gaussian Processes
- https://arxiv.org/abs/2507.09968
- arXiv:2507.09968v2 Announce Type: replace
-Abstract: Machine learning (ML) techniques have recently gained significant attention for solving compliance minimization (CM) problems. However, these methods typically provide poor feature boundaries, are very expensive, and lack a systematic mechanism to control the design complexity. Herein, we address these limitations by proposing a mesh-free and simultaneous framework based on physics-informed Gaussian processes (GPs). In our approach, we parameterize the design and state variables with GP priors which have independent kernels but share a multi-output neural network (NN) as their mean function. The architecture of this NN is based on Parametric Grid Convolutional Attention Networks (PGCANs), which not only mitigate spectral bias issues but also provide an interpretable mechanism to control design complexity. We estimate all the parameters of our GP-based representations by simultaneously minimizing the compliance, total potential energy, and residual of the volume fraction constraint. Importantly, our loss function excludes all data-based residuals, as GPs automatically satisfy them. We also develop computational schemes based on curriculum training and numerical integration to increase the efficiency and robustness of our approach, which is shown to (1) produce super-resolution topologies with fast convergence, (2) achieve comparable compliance and less gray area fraction compared to traditional numerical methods, (3) provide control over fine-scale features, and (4) outperform competing ML-based methods.
- oai:arXiv.org:2507.09968v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Xiangyu Sun, Amin Yousefpour, Shirin Hosseinmardi, Ramin Bostanabad
-
-
- Automatic Road Subsurface Distress Recognition from Ground Penetrating Radar Images using Deep Learning-based Cross-verification
- https://arxiv.org/abs/2507.11081
- arXiv:2507.11081v2 Announce Type: replace
-Abstract: Ground penetrating radar (GPR) has become a rapid and non-destructive solution for road subsurface distress (RSD) detection. Deep learning-based automatic RSD recognition, though ameliorating the burden of data processing, suffers from data scarcity and insufficient capability to recognize defects. In this study, a rigorously validated 3D GPR dataset containing 2134 samples of diverse types was constructed through field scanning. A novel cross-verification strategy was proposed to fully exploit the complementary abilities of region proposal networks in object recognition from different views of GPR images. The method achieves outstanding accuracy with a recall over 98.6% in field tests. The approach, integrated into an online RSD detection system, can reduce the human labor of inspection by around 90%.
- oai:arXiv.org:2507.11081v2
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Chang Peng, Bao Yang, Meiqi Li, Ge Zhang, Hui Sun, Zhenyu Jiang
-
-
- Composing Linear Layers from Irreducibles
- https://arxiv.org/abs/2507.11688
- arXiv:2507.11688v3 Announce Type: replace
-Abstract: Contemporary large models often exhibit behaviors suggesting the presence of low-level primitives that compose into modules with richer functionality, but these fundamental building blocks remain poorly understood. We investigate this compositional structure in linear layers by asking: can we identify/synthesize linear transformations from a minimal set of geometric primitives? Using Clifford algebra, we show that linear layers can be expressed as compositions of bivectors -- geometric objects encoding oriented planes -- and introduce a differentiable algorithm that decomposes them into products of rotors. This construction uses only O(log^2 d) parameters, versus O(d^2) required by dense matrices. Applied to the key, query, and value projections in LLM attention layers, our rotor-based layers match the performance of strong baselines such as block-Hadamard and low-rank approximations. Our findings provide an algebraic perspective on how these geometric primitives can compose into higher-level functions within deep models.
- oai:arXiv.org:2507.11688v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Travis Pence, Daisuke Yamada, Vikas Singh
-
-
- OrdShap: Feature Position Importance for Sequential Black-Box Models
- https://arxiv.org/abs/2507.11855
- arXiv:2507.11855v2 Announce Type: replace
-Abstract: Sequential deep learning models excel in domains with temporal or sequential dependencies, but their complexity necessitates post-hoc feature attribution methods for understanding their predictions. While existing techniques quantify feature importance, they inherently assume fixed feature ordering - conflating the effects of (1) feature values and (2) their positions within input sequences. To address this gap, we introduce OrdShap, a novel attribution method that disentangles these effects by quantifying how a model's predictions change in response to permuting feature position. We establish a game-theoretic connection between OrdShap and Sanchez-Berganti\~nos values, providing a theoretically grounded approach to position-sensitive attribution. Empirical results from health, natural language, and synthetic datasets highlight OrdShap's effectiveness in capturing feature value and feature position attributions, and provide deeper insight into model behavior.
- oai:arXiv.org:2507.11855v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Davin Hill, Brian L. Hill, Aria Masoomi, Vijay S. Nori, Robert E. Tillman, Jennifer Dy
-
-
- Why Isn't Relational Learning Taking Over the World?
- https://arxiv.org/abs/2507.13558
- arXiv:2507.13558v5 Announce Type: replace
-Abstract: Artificial intelligence seems to be taking over the world with systems that model pixels, words, and phonemes. The world is arguably made up not of pixels, words, and phonemes but of entities (objects, things, including events) with properties and relations among them. Surely we should model these, not the perception or description of them. You might suspect that the concentration on modeling words and pixels is because all of the (valuable) data in the world is in terms of text and images. If you look into almost any company, you will find that their most valuable data is in spreadsheets, databases, and other relational formats. These are not the forms studied in introductory machine learning, but they are full of product numbers, student numbers, transaction numbers, and other identifiers that can't be interpreted naively as numbers. The field that studies this sort of data has various names, including relational learning, statistical relational AI, and many others. This paper explains why relational learning is not taking over the world -- except in a few cases with restricted relations -- and what needs to be done to bring it to its rightful prominence.
- oai:arXiv.org:2507.13558v5
- cs.AI
- cs.DB
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- David Poole
-
-
- LLM-Driven Collaborative Model for Untangling Commits via Explicit and Implicit Dependency Reasoning
- https://arxiv.org/abs/2507.16395
- arXiv:2507.16395v2 Announce Type: replace
-Abstract: Atomic commits, which address a single development concern, are a best practice in software development. In practice, however, developers often produce tangled commits that mix unrelated changes, complicating code review and maintenance. Prior untangling approaches (rule-based, feature-based, or graph-based) have made progress but typically rely on shallow signals and struggle to distinguish explicit dependencies (e.g., control/data flow) from implicit ones (e.g., semantic or conceptual relationships). In this paper, we propose ColaUntangle, a new collaborative consultation framework for commit untangling that models both explicit and implicit dependencies among code changes. ColaUntangle integrates Large Language Model (LLM)-driven agents in a multi-agent architecture: one agent specializes in explicit dependencies, another in implicit ones, and a reviewer agent synthesizes their perspectives through iterative consultation. To capture structural and contextual information, we construct Explicit and Implicit Contexts, enabling agents to reason over code relationships with both symbolic and semantic depth. We evaluate ColaUntangle on two widely-used datasets (1,612 C# and 14k Java tangled commits). Experimental results show that ColaUntangle outperforms the best-performing baseline, achieving an improvement of 44% on the C# dataset and 82% on the Java dataset. These findings highlight the potential of LLM-based collaborative frameworks for advancing automated commit untangling tasks.
- oai:arXiv.org:2507.16395v2
- cs.AI
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Bo Hou, Xin Tan, Kai Zheng, Fang Liu, Yinghao Zhu, Li Zhang
-
-
- Enhancing Fatigue Detection through Heterogeneous Multi-Source Data Integration and Cross-Domain Modality Imputation
- https://arxiv.org/abs/2507.16859
- arXiv:2507.16859v3 Announce Type: replace
-Abstract: Fatigue detection for human operators plays a key role in safety critical applications such as aviation, mining, and long haul transport. While numerous studies have demonstrated the effectiveness of high fidelity sensors in controlled laboratory environments, their performance often degrades when ported to real world settings due to noise, lighting conditions, and field of view constraints, thereby limiting their practicality. This paper formalizes a deployment oriented setting for real world fatigue detection, where high quality sensors are often unavailable in practical applications. To address this challenge, we propose leveraging knowledge from heterogeneous source domains, including high fidelity sensors that are difficult to deploy in the field but commonly used in controlled environments, to assist fatigue detection in the real world target domain. Building on this idea, we design a heterogeneous and multiple source fatigue detection framework that adaptively utilizes the available modalities in the target domain while exploiting diverse configurations in the source domains through alignment across domains and modality imputation. Our experiments, conducted using a field deployed sensor setup and two publicly available human fatigue datasets, demonstrate the practicality, robustness, and improved generalization of our approach across subjects and domains. The proposed method achieves consistent gains over strong baselines in sensor constrained scenarios.
- oai:arXiv.org:2507.16859v3
- cs.RO
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Luobin Cui, Yanlai Wu, Tang Ying, Weikai Li
-
-
- MathOPEval: A Fine-grained Evaluation Benchmark for Visual Operations of MLLMs in Mathematical Reasoning
- https://arxiv.org/abs/2507.18140
- arXiv:2507.18140v3 Announce Type: replace
-Abstract: Recent progress in Multi-modal Large Language Models (MLLMs) has enabled step-by-step multi-modal mathematical reasoning by performing visual operations based on the textual instructions. A promising approach uses code as an intermediate representation to precisely express and manipulate the images in the reasoning steps. However, existing evaluations focus mainly on text-only reasoning outputs, leaving the MLLM's ability to perform accurate visual operations via code largely unexplored. This work takes a first step toward addressing that gap by evaluating MLLM's code-based capabilities in multi-modal mathematical reasoning. Specifically, our framework focuses on two key evaluation aspects: (1) Multi-modal Code Generation (MCG) evaluates the model's ability to accurately understand and construct visualizations from scratch. (2) Multi-modal Code Editing (MCE) assesses the model's capacity for fine-grained operations, which include three types: Deletion, Modification and Annotation. To evaluate the above tasks, we incorporate a dataset that covers the five most popular types of mathematical figures, including geometric diagrams, function plots, and three types of statistical charts, to provide a comprehensive and effective measurement of existing MLLMs. Our experimental evaluation involves nine mainstream MLLMs, and the results reveal that existing models still lag significantly behind human performance in performing fine-grained visual operations.
- oai:arXiv.org:2507.18140v3
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiaoyuan Li, Moxin Li, Wenjie Wang, Rui Men, Yichang Zhang, Fuli Feng, Dayiheng Liu
-
-
- Human-Exoskeleton Kinematic Calibration to Improve Hand Tracking for Dexterous Teleoperation
- https://arxiv.org/abs/2507.23592
- arXiv:2507.23592v2 Announce Type: replace
-Abstract: Hand exoskeletons are critical tools for dexterous teleoperation and immersive manipulation interfaces, but achieving accurate hand tracking remains a challenge due to user-specific anatomical variability and donning inconsistencies. These issues lead to kinematic misalignments that degrade tracking performance and limit applicability in precision tasks. We propose a subject-specific calibration framework for exoskeleton-based hand tracking that estimates virtual link parameters through residual-weighted optimization. A data-driven approach is introduced to empirically tune cost function weights using motion capture ground truth, enabling accurate and consistent calibration across users. Implemented on the Maestro hand exoskeleton with seven healthy participants, the method achieved substantial reductions in joint and fingertip tracking errors across diverse hand geometries. Qualitative visualizations using a Unity-based virtual hand further demonstrate improved motion fidelity. The proposed framework generalizes to exoskeletons with closed-loop kinematics and minimal sensing, laying the foundation for high-fidelity teleoperation and robot learning applications.
- oai:arXiv.org:2507.23592v2
- cs.RO
- cs.HC
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Haiyun Zhang, Stefano Dalla Gasperina, Saad N. Yousaf, Toshimitsu Tsuboi, Tetsuya Narita, Ashish D. Deshpande
-
-
- PhysicsEval: Inference-Time Techniques to Improve the Reasoning Proficiency of Large Language Models on Physics Problems
- https://arxiv.org/abs/2508.00079
- arXiv:2508.00079v2 Announce Type: replace
-Abstract: The discipline of physics stands as a cornerstone of human intellect, driving the evolution of technology and deepening our understanding of the fundamental principles of the cosmos. Contemporary literature includes some works centered on the task of solving physics problems - a crucial domain of natural language reasoning. In this paper, we evaluate the performance of frontier LLMs in solving physics problems, both mathematical and descriptive. We also employ a plethora of inference-time techniques and agentic frameworks to improve the performance of the models. These include the cumulative verification of proposed solutions by other, smaller LLM agents, and we perform a comparative analysis of the performance gains these techniques yield. There are significant improvements when the multi-agent framework is applied to problems that the models initially perform poorly on. Furthermore, we introduce a new evaluation benchmark for physics problems, ${\rm P{\small HYSICS}E{\small VAL}}$, consisting of 19,609 problems sourced from various physics textbooks and their corresponding correct solutions scraped from physics forums and educational websites. Our code and data are publicly available at https://github.com/areebuzair/PhysicsEval.
- oai:arXiv.org:2508.00079v2
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Oshayer Siddique, J. M Areeb Uzair Alam, Md Jobayer Rahman Rafy, Syed Rifat Raiyan, Hasan Mahmud, Md Kamrul Hasan
-
-
- P3P Made Easy
- https://arxiv.org/abs/2508.01312
- arXiv:2508.01312v2 Announce Type: replace
-Abstract: We revisit the classical Perspective-Three-Point (P3P) problem, which aims to recover the absolute pose of a calibrated camera from three 2D-3D correspondences. It has long been known that P3P can be reduced to a quartic polynomial with analytically simple and computationally efficient coefficients. However, this elegant formulation has been largely overlooked in modern literature. Building on the theoretical foundation that traces back to Grunert's work in 1841, we propose a compact algebraic solver that achieves accuracy and runtime comparable to state-of-the-art methods. Our results show that this classical formulation remains highly competitive when implemented with modern insights, offering an excellent balance between simplicity, efficiency, and accuracy.
- oai:arXiv.org:2508.01312v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Seong Hun Lee, Patrick Vandewalle, Javier Civera
-
-
- Decentralized Aerial Manipulation of a Cable-Suspended Load using Multi-Agent Reinforcement Learning
- https://arxiv.org/abs/2508.01522
- arXiv:2508.01522v3 Announce Type: replace
-Abstract: This paper presents the first decentralized method to enable real-world 6-DoF manipulation of a cable-suspended load using a team of Micro-Aerial Vehicles (MAVs). Our method leverages multi-agent reinforcement learning (MARL) to train an outer-loop control policy for each MAV. Unlike state-of-the-art controllers that utilize a centralized scheme, our policy does not require global states, inter-MAV communications, nor neighboring MAV information. Instead, agents communicate implicitly through load pose observations alone, which enables high scalability and flexibility. It also significantly reduces computing costs during inference time, enabling onboard deployment of the policy. In addition, we introduce a new action space design for the MAVs using linear acceleration and body rates. This choice, combined with a robust low-level controller, enables reliable sim-to-real transfer despite significant uncertainties caused by cable tension during dynamic 3D motion. We validate our method in various real-world experiments, including full-pose control under load model uncertainties, showing setpoint tracking performance comparable to the state-of-the-art centralized method. We also demonstrate cooperation amongst agents with heterogeneous control policies, and robustness to the complete in-flight loss of one MAV. Videos of experiments: https://autonomousrobots.nl/paper_websites/aerial-manipulation-marl
- oai:arXiv.org:2508.01522v3
- cs.RO
- cs.AI
- cs.MA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Proceedings of the 9th Conference on Robot Learning, PMLR 305:3850-3868, 2025
- Jack Zeng, Andreu Matoses Gimenez, Eugene Vinitsky, Javier Alonso-Mora, Sihao Sun
-
-
- Navigating High Dimensional Concept Space with Metalearning
- https://arxiv.org/abs/2508.01948
- arXiv:2508.01948v3 Announce Type: replace
-Abstract: Rapidly learning abstract concepts from limited examples is a hallmark of human intelligence. This work investigates whether gradient-based meta-learning can equip neural networks with inductive biases for efficient few-shot acquisition of discrete concepts. I compare meta-learning methods against a supervised learning baseline on Boolean concepts (logical statements) generated by a probabilistic context-free grammar (PCFG). By systematically varying concept dimensionality (number of features) and recursive compositionality (depth of grammar recursion), I delineate the complexity regimes in which meta-learning robustly improves few-shot concept learning and those in which it does not. Meta-learners handle compositional complexity much better than featural complexity. I highlight some reasons for this with a representational analysis of the weights of meta-learners and a loss landscape analysis demonstrating how featural complexity increases the roughness of loss trajectories, allowing curvature-aware optimization to be more effective than first-order methods. I find improvements in out-of-distribution generalization on complex concepts by increasing the number of adaptation steps in meta-SGD, where adaptation acts as a way of encouraging exploration of rougher loss basins. Overall, this work highlights the intricacies of learning compositional versus featural complexity in high-dimensional concept spaces and provides a path toward understanding the role of second-order methods and extended gradient adaptation in few-shot concept learning.
- oai:arXiv.org:2508.01948v3
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Max Gupta
-
-
- Sparse-dLLM: Accelerating Diffusion LLMs with Dynamic Cache Eviction
- https://arxiv.org/abs/2508.02558
- arXiv:2508.02558v2 Announce Type: replace
-Abstract: Diffusion Large Language Models (dLLMs) enable breakthroughs in reasoning and parallel decoding but suffer from prohibitive quadratic computational complexity and memory overhead during inference. Current caching techniques accelerate decoding by storing full-layer states, yet impose substantial memory usage that limits long-context applications. Our analysis of attention patterns in dLLMs reveals persistent cross-layer sparsity, with pivotal tokens remaining salient across decoding steps and low-relevance tokens staying unimportant, motivating selective cache eviction. We propose Sparse-dLLM, the first training-free framework integrating dynamic cache eviction with sparse attention via delayed bidirectional sparse caching. By leveraging the stability of token saliency over steps, it retains critical tokens and dynamically evicts unimportant prefix/suffix entries using an attention-guided strategy. Extensive experiments on the LLaDA and Dream series demonstrate that Sparse-dLLM achieves up to 10$\times$ higher throughput than vanilla dLLMs, with comparable performance and similar peak memory costs, outperforming previous methods in efficiency and effectiveness. The code is available at https://github.com/OpenMOSS/Sparse-dLLM.
- oai:arXiv.org:2508.02558v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuerong Song, Xiaoran Liu, Ruixiao Li, Zhigeng Liu, Zengfeng Huang, Qipeng Guo, Ziwei He, Xipeng Qiu
-
-
- Modeling Annotator Disagreement with Demographic-Aware Experts and Synthetic Perspectives
- https://arxiv.org/abs/2508.02853
- arXiv:2508.02853v3 Announce Type: replace
-Abstract: We present an approach to modeling annotator disagreement in subjective NLP tasks through both architectural and data-centric innovations. Our model, DEM-MoE (Demographic-Aware Mixture of Experts), routes inputs to expert subnetworks based on annotator demographics, enabling it to better represent structured, group-level variation compared to prior models. DEM-MoE consistently performs competitively across demographic groups, and shows especially strong results on datasets with high annotator disagreement. To address sparse demographic coverage, we test whether LLM-generated synthetic annotations via zero-shot persona prompting can be used for data imputation. We show these synthetic judgments align moderately well with human annotations on our data and offer a scalable way to potentially enrich training data. We then propose and evaluate strategies for blending real and synthetic data, and find that the optimal strategy depends on dataset structure. Together, these contributions improve the representation of diverse perspectives.
- oai:arXiv.org:2508.02853v3
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yinuo Xu, Veronica Derricks, Allison Earl, David Jurgens
-
-
- CoTox: Chain-of-Thought-Based Molecular Toxicity Reasoning and Prediction
- https://arxiv.org/abs/2508.03159
- arXiv:2508.03159v2 Announce Type: replace
-Abstract: Drug toxicity remains a major challenge in pharmaceutical development. Recent machine learning models have improved in silico toxicity prediction, but their reliance on annotated data and lack of interpretability limit their applicability. This limits their ability to capture organ-specific toxicities driven by complex biological mechanisms. Large language models (LLMs) offer a promising alternative through step-by-step reasoning and integration of textual data, yet prior approaches lack biological context and transparent rationale. To address this issue, we propose CoTox, a novel framework that integrates LLMs with chain-of-thought (CoT) reasoning for multi-toxicity prediction. CoTox combines chemical structure data, biological pathways, and gene ontology (GO) terms to generate interpretable toxicity predictions through step-by-step reasoning. Using GPT-4o, we show that CoTox outperforms both traditional machine learning and deep learning models. We further examine its performance across various LLMs to identify where CoTox is most effective. Additionally, we find that representing chemical structures with IUPAC names, which are easier for LLMs to understand than SMILES, enhances the model's reasoning ability and improves predictive performance. To demonstrate its practical utility in drug development, we simulate the treatment of relevant cell types with drugs and incorporate the resulting biological context into the CoTox framework. This approach allows CoTox to generate toxicity predictions aligned with physiological responses, as shown in a case study. This result highlights the potential of LLM-based frameworks to improve interpretability and support early-stage drug safety assessment. The code and prompt used in this work are available at https://github.com/dmis-lab/CoTox.
- oai:arXiv.org:2508.03159v2
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jueon Park, Yein Park, Minju Song, Soyon Park, Donghyeon Lee, Seungheun Baek, Jaewoo Kang
-
-
- Data Dependency-Aware Code Generation from Enhanced UML Sequence Diagrams
- https://arxiv.org/abs/2508.03379
- arXiv:2508.03379v3 Announce Type: replace
-Abstract: Large language models (LLMs) excel at generating code from natural language (NL) descriptions. However, plain textual descriptions are inherently ambiguous and often fail to capture complex requirements like intricate system behaviors, conditional logic, and architectural constraints; implicit data dependencies in service-oriented architectures are difficult to infer and handle correctly. To bridge this gap, we propose a novel step-by-step code generation framework named UML2Dep that leverages unambiguous formal specifications of complex requirements. First, we introduce an enhanced Unified Modeling Language (UML) sequence diagram tailored for service-oriented architectures. This diagram extends traditional visual syntax by integrating decision tables and API specifications, explicitly formalizing structural relationships and business logic flows in service interactions to rigorously eliminate linguistic ambiguity. Second, recognizing the critical role of data flow, we introduce a dedicated data dependency inference (DDI) task. DDI systematically constructs an explicit data dependency graph prior to actual code synthesis. To ensure reliability, we formalize DDI as a constrained mathematical reasoning task through novel prompting strategies, aligning with LLMs' strengths in mathematical reasoning. Additional static parsing and dependency pruning further reduce context complexity and the cognitive load associated with intricate specifications, thereby enhancing reasoning accuracy and efficiency.
- oai:arXiv.org:2508.03379v3
- cs.AI
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Wenxin Mao, Zhitao Wang, Long Wang, Sirong Chen, Cuiyun Gao, Luyang Cao, Ziming Liu, Qiming Zhang, Jun Zhou, Zhi Jin
-
-
- ViFP: A Framework for Visual False Positive Detection to Enhance Reasoning Reliability in VLMs
- https://arxiv.org/abs/2508.04201
- arXiv:2508.04201v2 Announce Type: replace
-Abstract: During reasoning in vision-language models (VLMs), false positive (FP) reasoning occurs when a model produces the correct answer but follows an incorrect reasoning path, resulting in undermined reasoning reliability. Existing approaches mainly rely on prompt engineering, knowledge distillation, or reinforcement learning to improve reasoning reliability, all of which require large amounts of high-quality data and thus limit practical applicability. Few approaches have focused on directly detecting and correcting FPs. To address these issues, we propose ViFP, a framework for Visual False Positive Detection to Enhance Reasoning Reliability in VLMs. ViFP builds effective reasoning paths through multi-turn QA and dynamically analyzes the consistency of the reasoning path to identify potential FPs. It also introduces a targeted reasoning chain correction mechanism to modify FP reasoning, thereby improving logical consistency and accuracy. Finally, we introduce a reliability evaluation metric, VoC, which integrates answer accuracy and the FP rate, providing a quantitative tool to assess whether a VLM not only answers correctly but also reasons reliably. Our experiments on closed-source VLMs show that ViFP consistently improves performance across three datasets: A-OKVQA, OK-VQA, and FVQA. On A-OKVQA, ViFP improves accuracy by up to 5.4%, surpassing the previous state-of-the-art by 4.3%, and significantly reduces the number of FPs, validating its benefits in enhancing reasoning reliability.
- oai:arXiv.org:2508.04201v2
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ben Zhang, LuLu Yu, Lei Gao, QuanJiang Guo, Jing Liu, Hui Gao
-
-
- Live Music Models
- https://arxiv.org/abs/2508.04651
- arXiv:2508.04651v3 Announce Type: replace
-Abstract: We introduce a new class of generative models for music called live music models that produce a continuous stream of music in real-time with synchronized user control. We release Magenta RealTime, an open-weights live music model that can be steered using text or audio prompts to control acoustic style. On automatic metrics of music quality, Magenta RealTime outperforms other open-weights music generation models, despite using fewer parameters and offering first-of-its-kind live generation capabilities. We also release Lyria RealTime, an API-based model with extended controls, offering access to our most powerful model with wide prompt coverage. These models demonstrate a new paradigm for AI-assisted music creation that emphasizes human-in-the-loop interaction for live music performance.
- oai:arXiv.org:2508.04651v3
- cs.SD
- cs.HC
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lyria Team, Antoine Caillon, Brian McWilliams, Cassie Tarakajian, Ian Simon, Ilaria Manco, Jesse Engel, Noah Constant, Yunpeng Li, Timo I. Denk, Alberto Lalama, Andrea Agostinelli, Cheng-Zhi Anna Huang, Ethan Manilow, George Brower, Hakan Erdogan, Heidi Lei, Itai Rolnick, Ivan Grishchenko, Manu Orsini, Matej Kastelic, Mauricio Zuluaga, Mauro Verzetti, Michael Dooley, Ondrej Skopek, Rafael Ferrer, Savvas Petridis, Zal\'an Borsos, \"Aaron van den Oord, Douglas Eck, Eli Collins, Jason Baldridge, Tom Hume, Chris Donahue, Kehang Han, Adam Roberts
-
-
- Voost: A Unified and Scalable Diffusion Transformer for Bidirectional Virtual Try-On and Try-Off
- https://arxiv.org/abs/2508.04825
- arXiv:2508.04825v2 Announce Type: replace
-Abstract: Virtual try-on aims to synthesize a realistic image of a person wearing a target garment, but accurately modeling garment-body correspondence remains a persistent challenge, especially under pose and appearance variation. In this paper, we propose Voost - a unified and scalable framework that jointly learns virtual try-on and try-off with a single diffusion transformer. By modeling both tasks jointly, Voost enables each garment-person pair to supervise both directions and supports flexible conditioning over generation direction and garment category, enhancing garment-body relational reasoning without task-specific networks, auxiliary losses, or additional labels. In addition, we introduce two inference-time techniques: attention temperature scaling for robustness to resolution or mask variation, and self-corrective sampling that leverages bidirectional consistency between tasks. Extensive experiments demonstrate that Voost achieves state-of-the-art results on both try-on and try-off benchmarks, consistently outperforming strong baselines in alignment accuracy, visual fidelity, and generalization.
- oai:arXiv.org:2508.04825v2
- cs.GR
- cs.AI
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Seungyong Lee, Jeong-gi Kwak
-
-
- Diagrams-to-Dynamics (D2D): Exploring Causal Loop Diagram Leverage Points under Uncertainty
- https://arxiv.org/abs/2508.05659
- arXiv:2508.05659v3 Announce Type: replace
-Abstract: Causal loop diagrams (CLDs) are widely used in health and environmental research to represent hypothesized causal structures underlying complex problems. However, as qualitative and static representations, CLDs are limited in their ability to support dynamic analysis and inform intervention strategies. We propose Diagrams-to-Dynamics (D2D), a method for converting CLDs into exploratory system dynamics models (SDMs) in the absence of empirical data. With minimal user input - following a protocol to label variables as stocks, flows or auxiliaries, and constants - D2D leverages the structural information already encoded in CLDs, namely, link existence and polarity, to simulate hypothetical interventions and explore potential leverage points under uncertainty. Results suggest that D2D helps distinguish between high- and low-ranked leverage points. We compare D2D to a data-driven SDM constructed from the same CLD and variable labels. D2D showed greater consistency with the data-driven model compared to static network centrality analysis, while providing uncertainty estimates and guidance for future data collection. The D2D method is implemented in an open-source Python package and a web-based application to support further testing and lower the barrier to dynamic modeling for researchers working with CLDs. We expect that additional validation studies will further establish the approach's utility across a broad range of cases and domains.
- oai:arXiv.org:2508.05659v3
- cs.LG
- stat.ME
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Jeroen F. Uleman, Loes Crielaard, Leonie K. Elsenburg, Guido A. Veldhuis, Naja Hulvej Rod, Rick Quax, V\'itor V. Vasconcelos
-
-
- Fast weight programming and linear transformers: from machine learning to neurobiology
- https://arxiv.org/abs/2508.08435
- arXiv:2508.08435v2 Announce Type: replace
-Abstract: Recent advances in artificial neural networks for machine learning, and language modeling in particular, have established a family of recurrent neural network (RNN) architectures that, unlike conventional RNNs with vector-form hidden states, use two-dimensional (2D) matrix-form hidden states. Such 2D-state RNNs, known as Fast Weight Programmers (FWPs), can be interpreted as neural networks whose synaptic weights (called fast weights) dynamically change over time as a function of input observations and serve as short-term memory storage; the corresponding synaptic weight modifications are controlled or programmed by another network (the programmer) whose parameters are trained (e.g., by gradient descent). In this Primer, we review the technical foundations of FWPs, their computational characteristics, and their connections to transformers and state space models. We also discuss connections between FWPs and models of synaptic plasticity in the brain, suggesting a convergence of natural and artificial intelligence.
- oai:arXiv.org:2508.08435v2
- cs.LG
- cs.AI
- q-bio.NC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Kazuki Irie, Samuel J. Gershman
-
-
- Geometry-Aware Global Feature Aggregation for Real-Time Indirect Illumination
- https://arxiv.org/abs/2508.08826
- arXiv:2508.08826v3 Announce Type: replace
-Abstract: Real-time rendering with global illumination is crucial to afford the user realistic experience in virtual environments. We present a learning-based estimator to predict diffuse indirect illumination in screen space, which then is combined with direct illumination to synthesize globally-illuminated high dynamic range (HDR) results. Our approach tackles the challenges of capturing long-range/long-distance indirect illumination when employing neural networks and is generalized to handle complex lighting and scenarios.
- Motivated by viewing the neural network as a solver of the rendering equation, we present a novel network architecture to predict indirect illumination. Our network is equipped with a modified attention mechanism that aggregates global information guided by spatial geometry features, as well as a monochromatic design that encodes each color channel individually.
- We conducted extensive evaluations, and the experimental results demonstrate our superiority over previous learning-based techniques. Our approach excels at handling complex lighting such as varying-colored lighting and environment lighting. It can successfully capture distant indirect illumination and simulate the interreflections between textured surfaces (i.e., color bleeding effects) well; it can also effectively handle new scenes that are not present in the training dataset.
- oai:arXiv.org:2508.08826v3
- cs.GR
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Meng Gai, Guoping Wang, Sheng Li
-
-
- Automated Segmentation of Coronal Brain Tissue Slabs for 3D Neuropathology
- https://arxiv.org/abs/2508.09805
- arXiv:2508.09805v2 Announce Type: replace
-Abstract: Advances in image registration and machine learning have recently enabled volumetric analysis of postmortem brain tissue from conventional photographs of coronal slabs, which are routinely collected in brain banks and neuropathology laboratories worldwide. One caveat of this methodology is the requirement of segmentation of the tissue from photographs, which currently requires costly manual intervention. In this article, we present a deep learning model to automate this process. The automatic segmentation tool relies on a U-Net architecture that was trained with a combination of 1,414 manually segmented images of both fixed and fresh tissue, from specimens with varying diagnoses, photographed at two different sites. Automated model predictions on a subset of photographs not seen in training were analyzed to estimate performance compared to manual labels, including both inter- and intra-rater variability. Our model achieved a median Dice score over 0.98, mean surface distance under 0.4mm, and 95\% Hausdorff distance under 1.60mm, which approaches inter-/intra-rater levels. Our tool is publicly available at surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools.
- oai:arXiv.org:2508.09805v2
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jonathan Williams Ramirez, Dina Zemlyanker, Lucas Deden-Binder, Rogeny Herisse, Erendira Garcia Pallares, Karthik Gopinath, Harshvardhan Gazula, Christopher Mount, Liana N. Kozanno, Michael S. Marshall, Theresa R. Connors, Matthew P. Frosch, Mark Montine, Derek H. Oakley, Christine L. Mac Donald, C. Dirk Keene, Bradley T. Hyman, Juan Eugenio Iglesias
-
-
- AnalogSeeker: An Open-source Foundation Language Model for Analog Circuit Design
- https://arxiv.org/abs/2508.10409
- arXiv:2508.10409v2 Announce Type: replace
-Abstract: In this paper, we propose AnalogSeeker, an effort toward an open-source foundation language model for analog circuit design, with the aim of integrating domain knowledge and giving design assistance. To overcome the scarcity of data in this field, we employ a corpus collection strategy based on the domain knowledge framework of analog circuits. High-quality, accessible textbooks across relevant subfields are systematically curated and cleaned into a textual domain corpus. To address the complexity of analog circuit knowledge, we introduce a granular domain knowledge distillation method. The raw, unlabeled domain corpus is decomposed into typical, granular learning nodes, where a multi-agent framework distills implicit knowledge embedded in unstructured text into question-answer data pairs with detailed reasoning processes, yielding a fine-grained, learnable dataset for fine-tuning. To address the unexplored challenges in training analog circuit foundation models, we explore and share our training methods through both theoretical analysis and experimental validation. We finally establish a fine-tuning-centric training paradigm, customizing and implementing a neighborhood self-constrained supervised fine-tuning algorithm. This approach enhances training outcomes by constraining the perturbation magnitude between the model's output distributions before and after training. In practice, we train the Qwen2.5-32B-Instruct model to obtain AnalogSeeker, which achieves 85.04% accuracy on AMSBench-TQA, the analog circuit knowledge evaluation benchmark, a 15.67-percentage-point improvement over the original model, and is competitive with mainstream commercial models. Furthermore, AnalogSeeker also shows effectiveness in the downstream operational amplifier design task. AnalogSeeker is open-sourced at https://huggingface.co/analogllm/analogseeker for research use.
- oai:arXiv.org:2508.10409v2
- cs.AR
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zihao Chen, Ji Zhuang, Jinyi Shen, Xiaoyue Ke, Xinyi Yang, Mingjie Zhou, Zhuoyao Du, Xu Yan, Zhouyang Wu, Zhenyu Xu, Jiangli Huang, Li Shang, Xuan Zeng, Fan Yang
-
-
- Harmonious Color Pairings: Insights from Human Preference and Natural Hue Statistics
- https://arxiv.org/abs/2508.15777
- arXiv:2508.15777v2 Announce Type: replace
-Abstract: While color harmony has long been studied in art and design, a clear consensus remains elusive, as most models are grounded in qualitative insights or limited datasets. In this work, we present a quantitative, data-driven study of color pairing preferences using controlled hue-based palettes in the HSL color space. Participants evaluated combinations of thirteen distinct hues, enabling us to construct a preference matrix and define a combinability index for each color. Our results reveal that preferences are highly hue dependent, challenging the assumption of universal harmony rules proposed in the literature. Yet, when averaged over hues, statistically meaningful patterns of aesthetic preference emerge, with certain hue separations perceived as more harmonious. Strikingly, these patterns align with hue distributions found in natural landscapes, pointing to a statistical correspondence between human color preferences and the structure of color in nature. Finally, we analyze our color-pairing score matrix through principal component analysis, which uncovers two complementary hue groups whose interplay underlies the global structure of color-pairing preferences. Together, these findings offer a quantitative framework for studying color harmony and its potential perceptual and ecological underpinnings.
- oai:arXiv.org:2508.15777v2
- cs.HC
- cs.CV
- physics.soc-ph
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ortensia Forni, Alexandre Darmon, Michael Benzaquen
-
-
- "Accessibility people, you go work on that thing of yours over there": Addressing Disability Inclusion in AI Product Organizations
- https://arxiv.org/abs/2508.16607
- arXiv:2508.16607v2 Announce Type: replace
-Abstract: The rapid emergence of generative AI has changed the way that technology is designed, constructed, maintained, and evaluated. Decisions made when creating AI-powered systems may impact some users disproportionately, such as people with disabilities. In this paper, we report on an interview study with 25 AI practitioners across multiple roles (engineering, research, UX, and responsible AI) about how their work processes and artifacts may impact end users with disabilities. We found that practitioners experienced friction when triaging problems at the intersection of responsible AI and accessibility practices, navigated contradictions between accessibility and responsible AI guidelines, identified gaps in data about users with disabilities, and gathered support for addressing the needs of disabled stakeholders by leveraging informal volunteer and community groups within their company. Based on these findings, we offer suggestions for new resources and process changes to better support people with disabilities as end users of AI.
- oai:arXiv.org:2508.16607v2
- cs.HC
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1609/aies.v8i2.36669
- Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES 2025), 8(2), 1724-1737
- Sanika Moharana, Cynthia L. Bennett, Erin Buehler, Michael Madaio, Vinita Tibdewal, Shaun K. Kane
-
-
- MetaFed: Advancing Privacy, Performance, and Sustainability in Federated Metaverse Systems
- https://arxiv.org/abs/2508.17341
- arXiv:2508.17341v3 Announce Type: replace
-Abstract: The rapid expansion of immersive Metaverse applications introduces complex challenges at the intersection of performance, privacy, and environmental sustainability. Centralized architectures fall short in addressing these demands, often resulting in elevated energy consumption, latency, and privacy concerns. This paper proposes MetaFed, a decentralized federated learning (FL) framework that enables sustainable and intelligent resource orchestration for Metaverse environments. MetaFed integrates (i) multi-agent reinforcement learning for dynamic client selection, (ii) privacy-preserving FL using homomorphic encryption, and (iii) carbon-aware scheduling aligned with renewable energy availability. Evaluations on MNIST and CIFAR-10 using lightweight ResNet architectures demonstrate that MetaFed achieves up to 25% reduction in carbon emissions compared to conventional approaches, while maintaining high accuracy and minimal communication overhead. These results highlight MetaFed as a scalable solution for building environmentally responsible and privacy-compliant Metaverse infrastructures.
- oai:arXiv.org:2508.17341v3
- cs.LG
- cs.CR
- cs.CY
- cs.DC
- cs.ET
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Muhammet Anil Yagiz, Zeynep Sude Cengiz, Polat Goktas
-
-
- Activation Transport Operators
- https://arxiv.org/abs/2508.17540
- arXiv:2508.17540v2 Announce Type: replace
-Abstract: The residual stream mediates communication between transformer decoder layers via linear reads and writes of non-linear computations. While sparse-dictionary learning-based methods locate features in the residual stream, and activation patching methods discover circuits within the model, the mechanism by which features flow through the residual stream remains understudied. Understanding this dynamic can better inform jailbreaking protections and enable early detection and correction of model mistakes. In this work, we propose Activation Transport Operators (ATO), linear maps from upstream to downstream residuals $k$ layers later, evaluated in feature space using downstream SAE decoder projections. We empirically demonstrate that these operators can determine whether a feature has been linearly transported from a previous layer or synthesised from non-linear layer computation. We develop the notion of transport efficiency, for which we provide an upper bound, and use it to estimate the size of the residual stream subspace that corresponds to linear transport. We empirically demonstrate linear transport, and report the transport efficiency and the size of the residual stream subspace involved. This compute-light (no finetuning, <50 GPU-h) method offers practical tools for safety, debugging, and a clearer picture of where computation in LLMs behaves linearly.
- oai:arXiv.org:2508.17540v2
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Andrzej Szablewski, Marek Masiak
-
-
- Breaking the Black Box: Inherently Interpretable Physics-Constrained Machine Learning With Weighted Mixed-Effects for Imbalanced Seismic Data
- https://arxiv.org/abs/2508.19031
- arXiv:2508.19031v2 Announce Type: replace
-Abstract: Ground motion models (GMMs) are critical for seismic risk mitigation and infrastructure design. Machine learning (ML) is increasingly applied to GMM development due to expanding strong motion databases. However, existing ML-based GMMs operate as 'black boxes,' creating opacity that undermines confidence in engineering decisions. Moreover, seismic datasets exhibit severe imbalance, with scarce large-magnitude near-field records causing systematic underprediction of critical high-hazard ground motions. Despite these limitations, research addressing both interpretability and data imbalance remains limited. This study develops an inherently interpretable neural network employing independent additive pathways with novel HazBinLoss and concurvity regularization. HazBinLoss integrates physics-constrained weighting with inverse bin count scaling to address underfitting in sparse, high-hazard regions. Concurvity regularization enforces pathway orthogonality, reducing inter-pathway correlation. The model achieves robust performance: mean squared error = 0.6235, mean absolute error = 0.6230, and coefficient of determination = 88.48%. Pathway scaling corroborates established seismological behaviors. Weighted hierarchical Student-t mixed-effects analysis demonstrates unbiased residuals with physically consistent variance partitioning: sigma components range from 0.26-0.38 (inter-event), 0.12-0.41 (inter-region), 0.58-0.71 (intra-event), and 0.68-0.89 (total). The lower inter-event and higher intra-event components have implications for non-ergodic hazard analysis. Predictions exhibit strong agreement with NGA-West2 GMMs across diverse conditions. This interpretable framework advances GMMs, establishing a transparent, physics-consistent foundation for seismic hazard and risk assessment.
- oai:arXiv.org:2508.19031v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Vemula Sreenath, Filippo Gatti, Pierre Jehel
-
-
- GDS Agent for Graph Algorithmic Reasoning
- https://arxiv.org/abs/2508.20637
- arXiv:2508.20637v2 Announce Type: replace
-Abstract: Large language models (LLMs) have shown remarkable multimodal information processing and reasoning ability. When equipped with tools through function calling and enhanced with retrieval-augmented techniques, compound LLM-based systems can access closed data sources and answer questions about them. However, they still struggle to process and reason over large-scale graph-structured data. We introduce the GDS (Graph Data Science) agent in this technical report. The GDS agent introduces a comprehensive set of graph algorithms as tools, together with preprocessing (retrieval) and postprocessing of algorithm results, in a model context protocol (MCP) server. The server can be used with any modern LLM out-of-the-box. The GDS agent allows users to ask any question that implicitly and intrinsically requires graph algorithmic reasoning about their data, and quickly obtain accurate and grounded answers. We introduce new benchmarks that evaluate intermediate tool calls as well as final responses. The results indicate that the GDS agent is able to solve a wide spectrum of graph tasks. We also provide detailed case studies for more open-ended tasks and study scenarios where the agent struggles. Finally, we discuss the remaining challenges and the future roadmap.
- oai:arXiv.org:2508.20637v2
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Borun Shi, Ioannis Panagiotas
-
-
- Shift Before You Learn: Enabling Low-Rank Representations in Reinforcement Learning
- https://arxiv.org/abs/2509.05193
- arXiv:2509.05193v2 Announce Type: replace
-Abstract: Low-rank structure is a common implicit assumption in many modern reinforcement learning (RL) algorithms. For instance, reward-free and goal-conditioned RL methods often presume that the successor measure admits a low-rank representation. In this work, we challenge this assumption by first remarking that the successor measure itself is not approximately low-rank. Instead, we demonstrate that a low-rank structure naturally emerges in the shifted successor measure, which captures the system dynamics after bypassing a few initial transitions. We provide finite-sample performance guarantees for the entry-wise estimation of a low-rank approximation of the shifted successor measure from sampled entries. Our analysis reveals that both the approximation and estimation errors are primarily governed by a newly introduced quantity: the spectral recoverability of the corresponding matrix. To bound this parameter, we derive a new class of functional inequalities for Markov chains that we call Type II Poincar\'e inequalities and from which we can quantify the amount of shift needed for effective low-rank approximation and estimation. This analysis shows in particular that the required shift depends on the decay of the high-order singular values of the shifted successor measure and is hence typically small in practice. Additionally, we establish a connection between the necessary shift and the local mixing properties of the underlying dynamical system, which provides a natural way of selecting the shift. Finally, we validate our theoretical findings with experiments, and demonstrate that shifting the successor measure indeed leads to improved performance in goal-conditioned RL.
- oai:arXiv.org:2509.05193v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Bastien Dubail, Stefan Stojanovic, Alexandre Proutière
-
-
- Reinforcement Learning Foundations for Deep Research Systems: A Survey
- https://arxiv.org/abs/2509.06733
- arXiv:2509.06733v2 Announce Type: replace
-Abstract: Deep research systems, agentic AI that solve complex, multi-step tasks by coordinating reasoning, search across the open web and user files, and tool use, are moving toward hierarchical deployments with a Planner, Coordinator, and Executors. In practice, training entire stacks end-to-end remains impractical, so most work trains a single planner connected to core tools such as search, browsing, and code. While SFT imparts protocol fidelity, it suffers from imitation and exposure biases and underuses environment feedback. Preference alignment methods such as DPO are schema and proxy-dependent, off-policy, and weak for long-horizon credit assignment and multi-objective trade-offs. A further limitation of SFT and DPO is their reliance on human defined decision points and subskills through schema design and labeled comparisons. Reinforcement learning aligns with closed-loop, tool-interaction research by optimizing trajectory-level policies, enabling exploration, recovery behaviors, and principled credit assignment, and it reduces dependence on such human priors and rater biases.
- This survey is, to our knowledge, the first dedicated to the RL foundations of deep research systems. It systematizes recent work along three axes: (i) data synthesis and curation; (ii) RL methods for agentic research covering stability, sample efficiency, long context handling, reward and credit design, multi-objective optimization, and multimodal integration; and (iii) agentic RL training systems and frameworks. We also cover agent architecture and coordination, as well as evaluation and benchmarks, including recent QA, VQA, long-form synthesis, and domain-grounded, tool-interaction tasks. We distill recurring patterns, surface infrastructure bottlenecks, and offer practical guidance for training robust, transparent deep research agents with RL.
- oai:arXiv.org:2509.06733v2
- cs.AI
- cs.CL
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Wenjun Li, Zhi Chen, Jingru Lin, Hannan Cao, Wei Han, Sheng Liang, Zhi Zhang, Kuicai Dong, Dexun Li, Chen Zhang, Yong Liu
-
-
- Image Encryption Scheme Based on Hyper-Chaotic Map and Self-Adaptive Diffusion
- https://arxiv.org/abs/2509.06754
- arXiv:2509.06754v3 Announce Type: replace
-Abstract: In the digital age, image encryption technology acts as a safeguard, preventing unauthorized access to images. This paper proposes an innovative image encryption scheme that integrates a novel 2D hyper-chaotic map with a newly developed self-adaptive diffusion method. The 2D hyper-chaotic map, namely the 2D-RA map, is designed by hybridizing the Rastrigin and Ackley functions. The chaotic performance of the 2D-RA map is validated through a series of measurements, including the Bifurcation Diagram, Lyapunov Exponent (LE), Initial Value Sensitivity, 0 - 1 Test, Correlation Dimension (CD), and Kolmogorov Entropy (KE). The results demonstrate that the chaotic performance of the 2D-RA map surpasses that of existing advanced chaotic functions. Additionally, the self-adaptive diffusion method is employed to enhance the uniformity of grayscale distribution. The performance of the image encryption scheme is evaluated using a series of indicators. The results show that the proposed image encryption scheme significantly outperforms current state-of-the-art image encryption techniques.
- Code is available at: https://github.com/Tang-Yiqi/Image-Encryption-Scheme-Based-on-Hyper-Chaotic-Mapping-and-Self-Adaptive-Diffusion
- oai:arXiv.org:2509.06754v3
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yiqi Tang
-
-
- ForTIFAI: Fending Off Recursive Training Induced Failure for AI Model Collapse
- https://arxiv.org/abs/2509.08972
- arXiv:2509.08972v4 Announce Type: replace
-Abstract: The increasing reliance on generative AI models is rapidly increasing the volume of synthetic data, with some projections suggesting that most available new data for training could be machine-generated by 2030. This shift toward mainly synthetic content presents a critical challenge: repeated training on synthetic data leads to a phenomenon known as model collapse, where model performance degrades over generations of training, eventually rendering the models ineffective. While the causes of model collapse are increasingly understood, effective mitigation strategies remain scarce. We address this challenge by leveraging a key insight: auto-regressive models tend to generate text sequences to which they assign high confidence (i.e., high log-likelihood). Based on this observation, we introduce the Truncated-Cross-Entropy (TCE) loss function. TCE mitigates collapse by selectively ignoring high-confidence tokens during training, effectively filtering out likely machine-generated artifacts from the learning process. Our experiments demonstrate that models trained with TCE not only learn effectively but also exhibit significantly increased resilience, tolerating over 2.3x more synthetic data before the onset of collapse. In addition, we provide an open-source benchmark for collapse dynamics in mixed-data settings. Our results demonstrate that confidence-aware training objectives can substantially delay collapse onset, offering a practical and generalizable tool for model robustness under synthetic-data exposure.
- oai:arXiv.org:2509.08972v4
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Soheil Zibakhsh Shabgahi, Pedram Aghazadeh, Azalia Mirhoseini, Farinaz Koushanfar
-
-
- Gaussian Copula-Based Outage Performance Analysis of Fluid Antenna Systems: Channel Coefficient- or Envelope-Level Correlation Matrix?
- https://arxiv.org/abs/2509.09411
- arXiv:2509.09411v3 Announce Type: replace
-Abstract: Gaussian copula has been employed to evaluate the outage performance of Fluid Antenna Systems (FAS), with the covariance matrix reflecting the dependence among multivariate normal random variables (RVs). While prior studies approximate this matrix using the channel coefficient correlation matrix from Jakes' model, this work instead employs the channel envelope correlation matrix, motivated by the fact that the multivariate normal RVs are generated by transforming correlated channel envelopes. This raises an open question of whether using the coefficient- or envelope-level correlation matrix yields better accuracy in assessing FAS performance. Toward this end, this paper explores the benefits of using the envelope-level correlation matrix under fully correlated Nakagami-m fading, and develops a method for generating such fading channels for Monte Carlo simulations, which serve as a benchmark for validating the theoretical results. Simulation results confirm the effectiveness of the proposed channel modeling approach and demonstrate the superior accuracy of using the envelope-level correlation matrix, particularly in sparse port deployment and the low-outage regime.
- oai:arXiv.org:2509.09411v3
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1109/LWC.2025.3629524
- Rui Xu, Yinghui Ye, Xiaoli Chu, Guangyue Lu, Farshad Rostami Ghadi, Kai-Kit Wong
-
-
- SME-TEAM: Leveraging Trust and Ethics for Secure and Responsible Use of AI and LLMs in SMEs
- https://arxiv.org/abs/2509.10594
- arXiv:2509.10594v2 Announce Type: replace
-Abstract: Artificial Intelligence (AI) and Large Language Models (LLMs) are revolutionizing today's business practices; however, their adoption within small and medium-sized enterprises (SMEs) raises serious trust, ethical, and technical issues. In this perspective paper, we introduce a structured, multi-phased framework, "SME-TEAM" for the secure and responsible use of these technologies in SMEs. Based on a conceptual structure of four key pillars, i.e., Data, Algorithms, Human Oversight, and Model Architecture, SME-TEAM bridges theoretical ethical principles with operational practice, enhancing AI capabilities across a wide range of applications in SMEs. Ultimately, this paper provides a structured roadmap for the adoption of these emerging technologies, positioning trust and ethics as a driving force for resilience, competitiveness, and sustainable innovation within the area of business analytics and SMEs.
- oai:arXiv.org:2509.10594v2
- cs.LG
- cs.AI
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Iqbal H. Sarker, Helge Janicke, Ahmad Mohsin, Leandros Maglaras
-
-
- Stable Part Diffusion 4D: Multi-View RGB and Kinematic Parts Video Generation
- https://arxiv.org/abs/2509.10687
- arXiv:2509.10687v2 Announce Type: replace
-Abstract: We present Stable Part Diffusion 4D (SP4D), a framework for generating paired RGB and kinematic part videos from monocular inputs. Unlike conventional part segmentation methods that rely on appearance-based semantic cues, SP4D learns to produce kinematic parts - structural components aligned with object articulation and consistent across views and time. SP4D adopts a dual-branch diffusion model that jointly synthesizes RGB frames and corresponding part segmentation maps. To simplify the architecture and flexibly enable different part counts, we introduce a spatial color encoding scheme that maps part masks to continuous RGB-like images. This encoding allows the segmentation branch to share the latent VAE from the RGB branch, while enabling part segmentation to be recovered via straightforward post-processing. A Bidirectional Diffusion Fusion (BiDiFuse) module enhances cross-branch consistency, supported by a contrastive part consistency loss to promote spatial and temporal alignment of part predictions. We demonstrate that the generated 2D part maps can be lifted to 3D to derive skeletal structures and harmonic skinning weights with few manual adjustments. To train and evaluate SP4D, we construct KinematicParts20K, a curated dataset of over 20K rigged objects selected and processed from Objaverse XL (Deitke et al., 2023), each paired with multi-view RGB and part video sequences. Experiments show that SP4D generalizes strongly to diverse scenarios, including real-world videos, novel generated objects, and rare articulated poses, producing kinematic-aware outputs suitable for downstream animation and motion-related tasks.
- oai:arXiv.org:2509.10687v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Zhang, Chun-Han Yao, Simon Donné, Narendra Ahuja, Varun Jampani
-
-
- An Eulerian Data Assimilation Method for Two-Layer Quasi-Geostrophic Model in Physical Domain
- https://arxiv.org/abs/2509.14586
- arXiv:2509.14586v4 Announce Type: replace
-Abstract: Data assimilation (DA) integrates observational data with numerical models to improve the prediction of complex physical systems. However, traditional DA methods often struggle with nonlinear dynamics and multi-scale variability, particularly when implemented directly in the physical domain. To address these challenges, this work develops an Eulerian Data Assimilation (EuDA) method with the Conditional Gaussian Nonlinear System (CGNS). The proposed approach enables the treatment of systems with non-periodic boundaries and provides a more intuitive representation of localized and time-dependent phenomena. The work considers a simplified physical domain inspired by sea-ice floe trajectories and ocean eddy recovery in the Arctic regions, where the dynamics are modeled by a two-layer quasi-geostrophic (QG) system. The QG equations are numerically solved using forward-Euler time stepping and centered finite-difference schemes. CGNS provides a nonlinear filter as it offers an analytical and continuous formulation for filtering a nonlinear system. Model performance is assessed using normalized root mean square error (RMSE) and pattern correlation (Corr) of the posterior mean. The results show that both metrics improve monotonically with refining timesteps, while RMSE converges to approximately 0.1, which is the noise strength, and Corr increases from 0.64 to 0.92 as the grid resolution becomes finer. Lastly, a coupled scenario with sea-ice particles advected by the two-layer QG flow under a linear drag force is examined, demonstrating the flexibility of the EuDA-CGNS framework in capturing coupled ice-ocean interactions. These findings demonstrate the effectiveness of exploiting the two-layer QG model in the physical domain to capture multiscale flow features.
- oai:arXiv.org:2509.14586v4
- cs.CE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Hyeonggeun Yun, Quanling Deng
-
-
- LLM-Driven SAST-Genius: A Hybrid Static Analysis Framework for Comprehensive and Actionable Security
- https://arxiv.org/abs/2509.15433
- arXiv:2509.15433v3 Announce Type: replace
-Abstract: This report examines the synergy between Large Language Models (LLMs) and Static Application Security Testing (SAST) to improve vulnerability discovery. Traditional SAST tools, while effective for proactive security, are limited by high false-positive rates and a lack of contextual understanding. Conversely, LLMs excel at code analysis and pattern recognition but can be prone to inconsistencies and hallucinations. By integrating these two technologies, a more intelligent and efficient system is created. This combination moves beyond mere vulnerability detection optimization, transforming security into a deeply integrated, contextual process that provides tangible benefits like improved triage, dynamic bug descriptions, bug validation via exploit generation, and enhanced analysis of complex codebases. The result is a more effective security approach that leverages the strengths of both technologies while mitigating their weaknesses. SAST-Genius reduced false positives by about 91% (from 225 to 20) compared to Semgrep alone.
- oai:arXiv.org:2509.15433v3
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- IEEE S&P 2025
- Vaibhav Agrawal, Kiarash Ahi
-
-
- Towards Interpretable and Efficient Attention: Compressing All by Contracting a Few
- https://arxiv.org/abs/2509.16875
- arXiv:2509.16875v3 Announce Type: replace
-Abstract: Attention mechanisms have achieved significant empirical success in multiple fields, but their underlying optimization objectives remain unclear yet. Moreover, the quadratic complexity of self-attention has become increasingly prohibitive. Although interpretability and efficiency are two mutually reinforcing pursuits, prior work typically investigates them separately. In this paper, we propose a unified optimization objective that derives inherently interpretable and efficient attention mechanisms through algorithm unrolling. Precisely, we construct a gradient step of the proposed objective with a set of forward-pass operations of our \emph{Contract-and-Broadcast Self-Attention} (CBSA), which compresses input tokens towards low-dimensional structures by contracting a few representatives of them. This novel mechanism can not only scale linearly by fixing the number of representatives, but also covers the instantiations of varied attention mechanisms when using different sets of representatives. We conduct extensive experiments to demonstrate comparable performance and superior advantages over black-box attention mechanisms on visual tasks. Our work sheds light on the integration of interpretability and efficiency, as well as the unified formula of attention mechanisms.
- oai:arXiv.org:2509.16875v3
- cs.LG
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Qishuai Wen, Zhiyuan Huang, Chun-Guang Li
-
-
- Human vs. Agent in Task-Oriented Conversations
- https://arxiv.org/abs/2509.17619
- arXiv:2509.17619v2 Announce Type: replace
-Abstract: Task-oriented conversational systems are essential for efficiently addressing diverse user needs, yet their development requires substantial amounts of high-quality conversational data that is challenging and costly to obtain. While large language models (LLMs) have demonstrated potential in generating synthetic conversations, the extent to which these agent-generated interactions can effectively substitute real human conversations remains unclear. This work presents the first systematic comparison between LLM-simulated users and human users in personalized task-oriented conversations. We propose a comprehensive analytical framework encompassing three key aspects (conversation strategy, interaction style, and conversation evaluation) and ten distinct dimensions for evaluating user behaviors, and collect parallel conversational datasets from both human users and LLM agent users across four representative scenarios under identical conditions. Our analysis reveals significant behavioral differences between the two user types in problem-solving approaches, question broadness, user engagement, context dependency, feedback polarity and promise, language style, and hallucination awareness. We found consistency in the agent users and human users across the depth-first or breadth-first dimensions, as well as the usefulness dimensions. These findings provide critical insights for advancing LLM-based user simulation. Our multi-dimensional taxonomy constructed a generalizable framework for analyzing user behavior patterns, offering insights from LLM agent users and human users. By this work, we provide perspectives on rethinking how to use user simulation in conversational systems in the future.
- oai:arXiv.org:2509.17619v2
- cs.IR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhefan Wang, Ning Geng, Zhiqiang Guo, Weizhi Ma, Min Zhang
-
-
- Evaluating Large Language Models for Detecting Antisemitism
- https://arxiv.org/abs/2509.18293
- arXiv:2509.18293v2 Announce Type: replace
-Abstract: Detecting hateful content is a challenging and important problem. Automated tools, like machine-learning models, can help, but they require continuous training to adapt to the ever-changing landscape of social media. In this work, we evaluate eight open-source LLMs' capability to detect antisemitic content, specifically leveraging in-context definition. We also study how LLMs understand and explain their decisions given a moderation policy as a guideline. First, we explore various prompting techniques and design a new CoT-like prompt, Guided-CoT, and find that injecting domain-specific thoughts increases performance and utility. Guided-CoT handles the in-context policy well, improving performance and utility by reducing refusals across all evaluated models, regardless of decoding configuration, model size, or reasoning capability. Notably, Llama 3.1 70B outperforms fine-tuned GPT-3.5. Additionally, we examine LLM errors and introduce metrics to quantify semantic divergence in model-generated rationales, revealing notable differences and paradoxical behaviors among LLMs. Our experiments highlight the differences observed across LLMs' utility, explainability, and reliability. Code and resources available at: https://github.com/idramalab/quantify-llm-explanations
- oai:arXiv.org:2509.18293v2
- cs.CL
- cs.AI
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jay Patel, Hrudayangam Mehta, Jeremy Blackburn
-
-
- Introspectively Envy-Free and Efficient Allocation of Indivisible Mixed Manna
- https://arxiv.org/abs/2509.18673
- arXiv:2509.18673v2 Announce Type: replace
-Abstract: The existence of allocations that are fair and efficient, simultaneously, is a central inquiry in fair division literature. A prominent result in discrete fair division shows that the complementary desiderata of fairness and efficiency can be achieved together when allocating indivisible items with nonnegative values; specifically, for indivisible goods and among agents with additive valuations, there always exists an allocation that is both envy-free up to one item (EF1) and Pareto efficient (PO). While a recent breakthrough extends the EF1 and PO guarantee to indivisible chores (items with negative values), the question remains open for indivisible mixed manna, i.e., for indivisible items whose values can be positive, negative, or zero. The current work makes notable progress in resolving this central question.
- For indivisible mixed manna and additive valuations, we establish the existence of allocations that are PO and introspectively envy-free up to one item (IEF1). In an IEF1 allocation, each agent can eliminate its envy towards all the other agents by either adding an item or removing an item from its own bundle. The notion of IEF1 coincides with EF1 for indivisible chores, and hence, our result generalizes the aforementioned existence guarantee for chores. Our techniques can be adapted to obtain an alternative proof for the existence of EF1 and PO allocations of indivisible goods. Hence, along with the result for mixed manna, we provide a unified approach for establishing the EF1 and PO guarantee for indivisible goods and indivisible chores. We also utilize our result for indivisible items to develop a distinct proof of the noted EF and PO guarantee for divisible mixed manna. Our work highlights an interesting application of the Knaster-Kuratowski-Mazurkiewicz (KKM) Theorem in discrete fair division and develops multiple, novel structural insights and algorithmic ideas.
- oai:arXiv.org:2509.18673v2
- cs.GT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Siddharth Barman, Paritosh Verma
-
-
- SmartWilds: Multimodal Wildlife Monitoring Dataset
- https://arxiv.org/abs/2509.18894
- arXiv:2509.18894v2 Announce Type: replace
-Abstract: We present the first release of SmartWilds, a multimodal wildlife monitoring dataset. SmartWilds is a synchronized collection of drone imagery, camera trap photographs and videos, and bioacoustic recordings collected during summer 2025 at The Wilds safari park in Ohio. This dataset supports multimodal AI research for comprehensive environmental monitoring, addressing critical needs in endangered species research, conservation ecology, and habitat management. Our pilot deployment captured four days of synchronized monitoring across three modalities in a 220-acre pasture containing Pere David's deer, Sichuan takin, Przewalski's horses, as well as species native to Ohio. We provide a comparative analysis of sensor modality performance, demonstrating complementary strengths for landuse patterns, species detection, behavioral analysis, and habitat monitoring. This work establishes reproducible protocols for multimodal wildlife monitoring while contributing open datasets to advance conservation computer vision research. Future releases will include synchronized GPS tracking data from tagged individuals, citizen science data, and expanded temporal coverage across multiple seasons.
- oai:arXiv.org:2509.18894v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Jenna Kline, Anirudh Potlapally, Bharath Pillai, Tanishka Wani, Rugved Katole, Vedant Patil, Penelope Covey, Hari Subramoni, Tanya Berger-Wolf, Christopher Stewart
-
-
- A Unified Formal Theory on the Logical Limits of Symbol Grounding
- https://arxiv.org/abs/2509.20409
- arXiv:2509.20409v3 Announce Type: replace
-Abstract: This paper synthesizes a series of formal proofs to construct a unified theory on the logical limits of the Symbol Grounding Problem. We demonstrate through a four-stage argument that meaning within a formal system must arise from a process that is external, dynamic, and non-algorithmic. First, we prove that any purely symbolic system, devoid of external connections, cannot internally establish a consistent foundation for meaning due to self-referential paradoxes. Second, we extend this limitation to systems with any finite, static set of pre-established meanings, proving they are inherently incomplete. Third, we demonstrate that the grounding process is logically incomplete; specifically, the 'act' of connecting internal symbols to novel, emergent external meanings cannot be a product of logical inference within the system but must be an axiomatic, meta-level update. Finally, we prove that any attempt to automate this update process using a fixed, external "judgment" algorithm will inevitably construct a larger, yet equally incomplete, symbolic system. Together, these conclusions formally establish that the grounding of meaning is a necessarily open-ended, non-algorithmic process, revealing a fundamental, Gödel-style limitation for any self-contained intelligent system.
- oai:arXiv.org:2509.20409v3
- cs.LO
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Zhangchi Liu
-
-
- TABLET: A Large-Scale Dataset for Robust Visual Table Understanding
- https://arxiv.org/abs/2509.21205
- arXiv:2509.21205v2 Announce Type: replace
-Abstract: While table understanding increasingly relies on pixel-only settings where tables are processed as visual representations, current benchmarks predominantly use synthetic renderings that lack the complexity and visual diversity of real-world tables. Additionally, existing visual table understanding (VTU) datasets offer fixed examples with single visualizations and pre-defined instructions, providing no access to underlying serialized data for reformulation. We introduce TABLET, a large-scale VTU dataset with 4 million examples across 20 tasks, grounded in 2 million unique tables where 88% preserve original visualizations. Each example includes paired image-HTML representations, comprehensive metadata, and provenance information linking back to the source datasets. Fine-tuning vision-language models like Qwen2.5-VL-7B on TABLET improves performance on seen and unseen VTU tasks while increasing robustness on real-world table visualizations. By preserving original visualizations and maintaining example traceability in a unified large-scale collection, TABLET establishes a foundation for robust training and extensible evaluation of future VTU models.
- oai:arXiv.org:2509.21205v2
- cs.CV
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Iñigo Alonso, Imanol Miranda, Eneko Agirre, Mirella Lapata
-
-
- Automatic Discovery of One-Parameter Subgroups of Lie Groups: Compact and Non-Compact Cases of $\mathbf{SO(n)}$ and $\mathbf{SL(n)}$
- https://arxiv.org/abs/2509.22219
- arXiv:2509.22219v3 Announce Type: replace
-Abstract: We introduce a novel framework for the automatic discovery of one-parameter subgroups ($H_{\gamma}$) of $SO(3)$ and, more generally, $SO(n)$. One-parameter subgroups of $SO(n)$ are crucial in a wide range of applications, including robotics, quantum mechanics, and molecular structure analysis. Our method utilizes the standard Jordan form of skew-symmetric matrices, which define the Lie algebra of $SO(n)$, to establish a canonical form for orbits under the action of $H_{\gamma}$. This canonical form is then employed to derive a standardized representation for $H_{\gamma}$-invariant functions. By learning the appropriate parameters, the framework uncovers the underlying one-parameter subgroup $H_{\gamma}$. The effectiveness of the proposed approach is demonstrated through tasks such as double pendulum modeling, moment of inertia prediction, top quark tagging and invariant polynomial regression, where it successfully recovers meaningful subgroup structure and produces interpretable, symmetry-aware representations.
- oai:arXiv.org:2509.22219v3
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Pavan Karjol, Vivek V Kashyap, Rohan Kashyap, Prathosh A P
-
-
- Towards Fine-Grained Text-to-3D Quality Assessment: A Benchmark and A Two-Stage Rank-Learning Metric
- https://arxiv.org/abs/2509.23841
- arXiv:2509.23841v2 Announce Type: replace
-Abstract: Recent advances in Text-to-3D (T23D) generative models have enabled the synthesis of diverse, high-fidelity 3D assets from textual prompts. However, existing challenges restrict the development of reliable T23D quality assessment (T23DQA). First, existing benchmarks are outdated, fragmented, and coarse-grained, making fine-grained metric training infeasible. Moreover, current objective metrics exhibit inherent design limitations, resulting in non-representative feature extraction and diminished metric robustness. To address these limitations, we introduce T23D-CompBench, a comprehensive benchmark for compositional T23D generation. We define five components with twelve sub-components for compositional prompts, which are used to generate 3,600 textured meshes from ten state-of-the-art generative models. A large-scale subjective experiment is conducted to collect 129,600 reliable human ratings across different perspectives. Based on T23D-CompBench, we further propose Rank2Score, an effective evaluator with two-stage training for T23DQA. Rank2Score enhances pairwise training via supervised contrastive regression and curriculum learning in the first stage, and subsequently refines predictions using mean opinion scores to achieve closer alignment with human judgments in the second stage. Extensive experiments and downstream applications demonstrate that Rank2Score consistently outperforms existing metrics across multiple dimensions and can additionally serve as a reward function to optimize generative models. The project is available at https://cbysjtu.github.io/Rank2Score/.
- oai:arXiv.org:2509.23841v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Bingyang Cui, Yujie Zhang, Qi Yang, Zhu Li, Yiling Xu
-
-
- FUSAR-KLIP: Towards Multimodal Foundation Models for Remote Sensing
- https://arxiv.org/abs/2509.23927
- arXiv:2509.23927v2 Announce Type: replace
-Abstract: Cross-modal artificial intelligence has garnered widespread attention in recent years, achieving significant progress in the study of natural images. However, existing methods are mostly designed for RGB imagery, leaving a significant gap in modeling synthetic aperture radar (SAR) imagery. SAR, with its all-day, all-weather imaging capabilities, plays an irreplaceable role in remote sensing scene understanding. To address this gap, this paper proposes FUSAR-KLIP, the first universal SAR multimodal foundational model, along with reusable data and evaluation baselines. Specifically: (1) This work introduces the critical yet long-overlooked attribute of geographic information into remote sensing research, constructing FUSAR-GEOVL-1M (the first large-scale SAR dataset with complete geographic projection properties), covering multiple satellite platforms, 120,000 images, and 135 cities. (2) Aligned structured text is generated through a hierarchical cognitive chain-of-thought (HCoT), providing more than one million multi-dimensional semantic annotations of landforms, regional functions, target attributes, and spatial relationships. (3) We design a Self-Consistent Iterative Optimization mechanism that continuously enhances cross-modal alignment through a self-supervised closed loop of contrastive, matching, and reconstruction learning on a transferable multimodal encoder. (4) A unified evaluation benchmark is established across 11 representative downstream vision and vision-language tasks, with comparisons against 14 leading foundation models, where FUSAR-KLIP demonstrates leading performance, particularly in object counting and land-cover classification. We expect that FUSAR-KLIP's large-scale multimodal data, transferable model architecture, and comprehensive experimental benchmark will significantly advance the development of SAR multimodal baseline models.
- oai:arXiv.org:2509.23927v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yi Yang, Xiaokun Zhang, Qingchen Fang, Jing Liu, Ziqi Ye, Rui Li, Li Liu, Haipeng Wang
-
-
- Proposing a Framework for Machine Learning Adoption on Legacy Systems
- https://arxiv.org/abs/2509.24224
- arXiv:2509.24224v2 Announce Type: replace
-Abstract: The integration of machine learning (ML) is critical for industrial competitiveness, yet its adoption is frequently stalled by the prohibitive costs and operational disruptions of upgrading legacy systems. The financial and logistical overhead required to support the full ML lifecycle presents a formidable barrier to widespread implementation, particularly for small and medium-sized enterprises. This paper introduces a pragmatic, API-based framework designed to overcome these challenges by strategically decoupling the ML model lifecycle from the production environment. Our solution delivers the analytical power of ML to domain experts through a lightweight, browser-based interface, eliminating the need for local hardware upgrades and ensuring model maintenance can occur with zero production downtime. This human-in-the-loop approach empowers experts with interactive control over model parameters, fostering trust and facilitating seamless integration into existing workflows. By mitigating the primary financial and operational risks, this framework offers a scalable and accessible pathway to enhance production quality and safety, thereby strengthening the competitive advantage of the manufacturing sector.
- oai:arXiv.org:2509.24224v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Ashiqur Rahman, Hamed Alhoori
-
-
- CardioForest: An Explainable Ensemble Learning Model for Automatic Wide QRS Complex Tachycardia Diagnosis from ECG
- https://arxiv.org/abs/2509.25804
- arXiv:2509.25804v2 Announce Type: replace
-Abstract: This study aims to develop and evaluate an ensemble machine learning-based framework for the automatic detection of Wide QRS Complex Tachycardia (WCT) from ECG signals, emphasizing diagnostic accuracy and interpretability using Explainable AI. The proposed system integrates ensemble learning techniques, i.e., an optimized Random Forest known as CardioForest, and models like XGBoost and LightGBM. The models were trained and tested on ECG data from the publicly available MIMIC-IV dataset. Performance was evaluated using accuracy, balanced accuracy, precision, recall, F1 score, ROC-AUC, and error-rate (RMSE, MAE) measures. In addition, SHAP (SHapley Additive exPlanations) was used to ascertain model explainability and clinical relevance. The CardioForest model performed best on all metrics, achieving a test accuracy of 95.19%, a balanced accuracy of 88.76%, a precision of 95.26%, a recall of 78.42%, and an ROC-AUC of 0.8886. SHAP analysis confirmed the model's ability to rank the most relevant ECG features, such as QRS duration, in accordance with clinical intuitions, thereby fostering trust and usability in clinical practice. The findings establish CardioForest as a dependable and interpretable WCT detection model. Its ability to offer accurate predictions and transparency through explainability makes it a valuable tool to help cardiologists make timely and well-informed diagnoses, especially in high-stakes and emergency scenarios.
- oai:arXiv.org:2509.25804v2
- cs.LG
- cs.AI
- cs.NI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1101/2025.09.15.25335837
- Vaskar Chakma, Ju Xiaolin, Heling Cao, Xue Feng, Ji Xiaodong, Pan Haiyan, Gao Zhan
-
-
- ReNF: Rethinking the Design Space of Neural Long-Term Time Series Forecasters
- https://arxiv.org/abs/2509.25914
- arXiv:2509.25914v4 Announce Type: replace
-Abstract: Neural Forecasters (NFs) are a cornerstone of Long-term Time Series Forecasting (LTSF). However, progress has been hampered by an overemphasis on architectural complexity at the expense of fundamental forecasting principles. In this work, we return to first principles to redesign the LTSF paradigm. We begin by introducing a Multiple Neural Forecasting Theorem that provides a theoretical basis for our approach. We propose Boosted Direct Output (BDO), a novel forecasting strategy that synergistically combines the advantages of both Auto-Regressive (AR) and Direct Output (DO) forecasting. In addition, we stabilize the learning process by smoothly tracking the model's parameters. Extensive experiments show that these principled improvements enable a simple MLP to achieve state-of-the-art performance, outperforming recent, complex models in nearly all cases, without any task-specific design. Finally, we empirically verify our theorem, establishing a dynamic performance bound and identifying promising directions for future research. The code for review is available at: .
- oai:arXiv.org:2509.25914v4
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yihang Lu, Xianwei Meng, Enhong Chen
-
-
- DA$^2$: Depth Anything in Any Direction
- https://arxiv.org/abs/2509.26618
- arXiv:2509.26618v4 Announce Type: replace
-Abstract: Panorama has a full FoV (360$^\circ\times$180$^\circ$), offering a more complete visual description than perspective images. Thanks to this characteristic, panoramic depth estimation is gaining increasing traction in 3D vision. However, due to the scarcity of panoramic data, previous methods are often restricted to in-domain settings, leading to poor zero-shot generalization. Furthermore, due to the spherical distortions inherent in panoramas, many approaches rely on perspective splitting (e.g., cubemaps), which leads to suboptimal efficiency. To address these challenges, we propose $\textbf{DA}$$^{\textbf{2}}$: $\textbf{D}$epth $\textbf{A}$nything in $\textbf{A}$ny $\textbf{D}$irection, an accurate, zero-shot generalizable, and fully end-to-end panoramic depth estimator. Specifically, for scaling up panoramic data, we introduce a data curation engine for generating high-quality panoramic depth data from perspective images, and create $\sim$543K panoramic RGB-depth pairs, bringing the total to $\sim$607K. To further mitigate the spherical distortions, we present SphereViT, which explicitly leverages spherical coordinates to enforce the spherical geometric consistency in panoramic image features, yielding improved performance. A comprehensive benchmark on multiple datasets clearly demonstrates DA$^{2}$'s SoTA performance, with an average 38% improvement on AbsRel over the strongest zero-shot baseline. Surprisingly, DA$^{2}$ even outperforms prior in-domain methods, highlighting its superior zero-shot generalization. Moreover, as an end-to-end solution, DA$^{2}$ exhibits much higher efficiency over fusion-based approaches. Both the code and the curated panoramic data have been released. Project page: https://depth-any-in-any-dir.github.io/.
- oai:arXiv.org:2509.26618v4
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Haodong Li, Wangguangdong Zheng, Jing He, Yuhao Liu, Xin Lin, Xin Yang, Ying-Cong Chen, Chunchao Guo
-
-
- Wireless Laser Power Transfer for Low-altitude Uncrewed Aerial Vehicle-assisted Internet of Things: Paradigms, Challenges, and Solutions
- https://arxiv.org/abs/2510.00477
- arXiv:2510.00477v2 Announce Type: replace
-Abstract: Low-altitude uncrewed aerial vehicles (UAVs) have become integral enablers for the Internet of Things (IoT) by offering enhanced coverage, improved connectivity and access to remote areas. A critical challenge limiting their operational capacity lies in the energy constraints of both aerial platforms and ground-based sensors. This paper explores wireless laser power transfer (WLPT) as a transformative solution for sustainable energy provisioning in UAV-assisted IoT networks. We first systematically investigate the fundamental principles of WLPT and analyze its comparative advantages. Then, we introduce three operational paradigms for system integration, identify key challenges, and discuss corresponding potential solutions. In a case study, we propose a multi-agent reinforcement learning framework to address the coordination and optimization challenges in WLPT-enabled UAV-assisted IoT data collection. Simulation results demonstrate that our framework significantly improves energy sustainability and data freshness. Finally, we discuss future directions.
- oai:arXiv.org:2510.00477v2
- cs.NI
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chengzhen Li, Likun Zhang, Chuang Zhang, Jiahui Li, Changyuan Zhao, Ruichen Zhang, Geng Sun
-
-
- Training Optimal Large Diffusion Language Models
- https://arxiv.org/abs/2510.03280
- arXiv:2510.03280v2 Announce Type: replace
-Abstract: We introduce Quokka, the first systematic scaling law for diffusion language models (DLMs), encompassing both compute-constrained and data-constrained regimes, and studying the key modeling and optimization designs. Quokka is a good friend of Chinchilla and provides a wider scope. We hope the results will offer short-term practical guidance for DLM training and long-term inspiration for the whole AI community.
- oai:arXiv.org:2510.03280v2
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Jinjie Ni, Qian Liu, Chao Du, Longxu Dou, Hang Yan, Zili Wang, Tianyu Pang, Michael Qizhe Shieh
-
-
- Efficient Latent Variable Causal Discovery: Combining Score Search and Targeted Testing
- https://arxiv.org/abs/2510.04263
- arXiv:2510.04263v3 Announce Type: replace
-Abstract: Learning causal structure from observational data is especially challenging when latent variables or selection bias are present. The Fast Causal Inference (FCI) algorithm addresses this setting but performs exhaustive conditional independence tests across many subsets, often leading to spurious independences, missing or extra edges, and unreliable orientations. We present a family of score-guided mixed-strategy causal search algorithms that extend this framework. First, we introduce BOSS-FCI and GRaSP-FCI, variants of GFCI (Greedy Fast Causal Inference) that substitute BOSS (Best Order Score Search) or GRaSP (Greedy Relaxations of Sparsest Permutation) for FGES (Fast Greedy Equivalence Search), preserving correctness while trading off scalability and conservativeness. Second, we develop FCI Targeted-Testing (FCIT), a novel hybrid method that replaces exhaustive testing with targeted, score-informed tests guided by BOSS. FCIT guarantees well-formed PAGs and achieves higher precision and efficiency across sample sizes. Finally, we propose a lightweight heuristic, LV-Dumb (Latent Variable "Dumb"), which returns the PAG of the BOSS DAG (Directed Acyclic Graph). Though not strictly sound for latent confounding, LV-Dumb often matches FCIT's accuracy while running substantially faster. Simulations and real-data analyses show that BOSS-FCI and GRaSP-FCI provide robust baselines, FCIT yields the best balance of precision and reliability, and LV-Dumb offers a fast, near-equivalent alternative. Together, these methods demonstrate that targeted and score-guided strategies can dramatically improve the efficiency and correctness of latent-variable causal discovery.
- oai:arXiv.org:2510.04263v3
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Joseph Ramsey, Bryan Andrews, Peter Spirtes
-
-
- VoiceAgentBench: Are Voice Assistants ready for agentic tasks?
- https://arxiv.org/abs/2510.07978
- arXiv:2510.07978v2 Announce Type: replace
-Abstract: Large-scale Speech Language Models (SpeechLMs) have enabled voice assistants capable of understanding natural spoken queries and performing complex tasks. However, existing speech benchmarks primarily focus on isolated capabilities such as transcription or question answering, and do not systematically evaluate agentic scenarios encompassing multilingual and cultural understanding, as well as adversarial robustness. To address this, we introduce VoiceAgentBench, a comprehensive benchmark designed to evaluate SpeechLMs in realistic spoken agentic settings. It comprises over 5,500 synthetic spoken queries, including dialogues grounded in the Indian context, covering single-tool invocations, multi-tool workflows, multi-turn interactions, and safety evaluations. The benchmark supports English, Hindi, and 5 other Indian languages, reflecting real-world linguistic and cultural diversity. We simulate speaker variability using a novel sampling algorithm that selects audios for TTS voice conversion based on their speaker embeddings, maximizing acoustic and speaker diversity. Our evaluation measures tool selection accuracy, structural consistency, and the correctness of tool invocations, including adversarial robustness. Our experiments reveal significant gaps in contextual tool orchestration tasks, Indic generalization, and adversarial robustness, exposing critical limitations of current SpeechLMs.
- oai:arXiv.org:2510.07978v2
- cs.AI
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Dhruv Jain, Harshit Shukla, Gautam Rajeev, Ashish Kulkarni, Chandra Khatri, Shubham Agarwal
-
-
- Hulu-Med: A Transparent Generalist Model towards Holistic Medical Vision-Language Understanding
- https://arxiv.org/abs/2510.08668
- arXiv:2510.08668v2 Announce Type: replace
-Abstract: Real-world clinical decision-making requires integrating heterogeneous data, including medical text, 2D images, 3D volumes, and videos, while existing AI systems fail to unify all these signals, limiting their utility. In this paper, we introduce Hulu-Med, a transparent, generalist medical Vision-Language Model (VLM) designed to unify language-only, 2D/3D vision-language, and video understanding within a single architecture. Hulu-Med is trained on a curated corpus of 16.7 million samples, comprising exclusively public or synthetic data, spanning 12 major anatomical systems and 14 medical imaging modalities. Hulu-Med employs a medical-aware token-reduction strategy that prunes redundant visual tokens, achieving up to a 55% reduction for 3D and video inputs, improving cross-modal efficiency, and enabling training at 7B-32B parameter scales in approximately 4,000-40,000 GPU hours. Across 30 public in-domain and out-of-domain medical benchmarks (covering text reasoning, visual question answering, report generation, multilingual dialogue, video understanding, and rare disease diagnosis), Hulu-Med surpasses existing open-source models on 27 of 30 benchmarks and outperforms proprietary systems such as GPT-4o on 16 benchmarks. Despite being a VLM, Hulu-Med outperforms GPT-4o and matches GPT-o1 on the text-only HealthBench. For the first time in the community, we provide a fully transparent, reproducible and cost-effective pipeline for holistic medical vision-language understanding by releasing our end-to-end data curation, training procedures, and model parameters. Code and models are available at https://github.com/ZJUI-AI4H/Hulu-Med.
- oai:arXiv.org:2510.08668v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Songtao Jiang, Yuan Wang, Sibo Song, Tianxiang Hu, Chenyi Zhou, Bin Pu, Yan Zhang, Zhibo Yang, Yang Feng, Joey Tianyi Zhou, Jin Hao, Zijian Chen, Ruijia Wu, Tao Tang, Junhui Lv, Hongxia Xu, Hongwei Wang, Jun Xiao, Bin Feng, Fudong Zhu, Kenli Li, Weidi Xie, Jimeng Sun, Jian Wu, Zuozhu Liu
-
-
- Aegis: A Correlation-Based Data Masking Advisor for Data Sharing Ecosystems
- https://arxiv.org/abs/2510.10810
- arXiv:2510.10810v2 Announce Type: replace
-Abstract: Data sharing ecosystems connect providers, consumers, and intermediaries to facilitate the exchange and use of data for a wide range of downstream tasks. In sensitive domains such as healthcare, privacy is enforced as a hard constraint: any shared data must satisfy a minimum privacy threshold. However, among all masking configurations that meet this requirement, the utility of the masked data can vary significantly, posing a key challenge: how to efficiently select the optimal configuration that preserves maximum utility. This paper presents Aegis, a middleware framework that selects optimal masking configurations for machine learning datasets with features and class labels. Aegis incorporates a utility optimizer that minimizes predictive utility deviation, quantifying shifts in feature-label correlations due to masking. Our framework leverages limited data summaries (such as 1D histograms) or none to estimate the feature-label joint distribution, making it suitable for scenarios where raw data is inaccessible due to privacy restrictions. To achieve this, we propose a joint distribution estimator based on iterative proportional fitting, which supports various feature-label correlation quantification methods such as mutual information, chi-square, or g3. Our experimental evaluation of real-world datasets shows that Aegis identifies optimal masking configurations over an order of magnitude faster, while the resulting masked datasets achieve predictive performance on downstream ML tasks on par with baseline approaches, and that the framework complements privacy-oriented anonymization and data masking techniques.
- oai:arXiv.org:2510.10810v2
- cs.LG
- cs.DB
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Omar Islam Laskar, Fatemeh Ramezani Khozestani, Ishika Nankani, Sohrab Namazi Nia, Senjuti Basu Roy, Kaustubh Beedkar
-
-
- DE3S: Dual-Enhanced Soft-Sparse-Shape Learning for Medical Early Time-Series Classification
- https://arxiv.org/abs/2510.12214
- arXiv:2510.12214v2 Announce Type: replace
-Abstract: Early Time Series Classification (ETSC) is critical in time-sensitive medical applications such as sepsis prediction, yet it presents an inherent trade-off between accuracy and earliness. This trade-off arises from two core challenges: 1) models should effectively model inherently weak and noisy early-stage snippets, and 2) they should resolve the complex, dual requirement of simultaneously capturing local, subject-specific variations and overarching global temporal patterns. Existing methods struggle to overcome these underlying challenges, often forcing a severe compromise: sacrificing accuracy to achieve earliness, or vice-versa. We propose \textbf{DE3S}, a \textbf{D}ual-\textbf{E}nhanced \textbf{S}oft-\textbf{S}parse \textbf{S}equence Learning framework, which systematically solves these challenges. A dual enhancement mechanism is proposed to enhance the modeling of weak, early signals. Then, an attention-based patch module is introduced to preserve discriminative information while reducing noise and complexity. A dual-path fusion architecture is designed, using a sparse mixture of experts to model local, subject-specific variations. A multi-scale inception module is also employed to capture global dependencies. Experiments on six real-world medical datasets show the competitive performance of DE3S, particularly in early prediction windows. Ablation studies confirm the effectiveness of each component in addressing its targeted challenge. The source code is available \href{https://github.com/kuxit/DE3S}{\textbf{here}}.
- oai:arXiv.org:2510.12214v2
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tao Xie, Zexi Tan, Haoyi Xiao, Binbin Sun, Yiqun Zhang
-
-
- FaStfact: Faster, Stronger Long-Form Factuality Evaluations in LLMs
- https://arxiv.org/abs/2510.12839
- arXiv:2510.12839v2 Announce Type: replace
-Abstract: Evaluating the factuality of long-form generations from Large Language Models (LLMs) remains challenging due to efficiency bottlenecks and reliability concerns. Prior efforts attempt this by decomposing text into claims, searching for evidence, and verifying claims, but suffer from critical drawbacks: (1) inefficiency due to overcomplicated pipeline components, and (2) ineffectiveness stemming from inaccurate claim sets and insufficient evidence. To address these limitations, we propose \textbf{FaStfact}, an evaluation framework that achieves the highest alignment with human evaluation and time/token efficiency among existing baselines. FaStfact first employs chunk-level claim extraction integrated with confidence-based pre-verification, significantly reducing the time and token cost while ensuring reliability. For searching and verification, it collects document-level evidence from crawled web-pages and selectively retrieves it during verification. Extensive experiments based on an annotated benchmark \textbf{FaStfact-Bench} demonstrate the reliability of FaStfact in both efficiently and effectively evaluating long-form factuality. Code, benchmark data, and annotation interface tool are available at https://github.com/Yingjia-Wan/FaStfact.
- oai:arXiv.org:2510.12839v2
- cs.CL
- cs.AI
- cs.CE
- cs.CY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Yingjia Wan, Haochen Tan, Xiao Zhu, Xinyu Zhou, Zhiwei Li, Qingsong Lv, Changxuan Sun, Jiaqi Zeng, Yi Xu, Jianqiao Lu, Yinhong Liu, Zhijiang Guo
-
-
- A Survey on Collaborating Small and Large Language Models for Performance, Cost-effectiveness, Cloud-edge Privacy, and Trustworthiness
- https://arxiv.org/abs/2510.13890
- arXiv:2510.13890v2 Announce Type: replace
-Abstract: Large language models (LLMs) have achieved remarkable progress across domains and applications but face challenges such as high fine-tuning costs, inference latency, limited edge deployability, and reliability concerns. Small language models (SLMs), with compact, efficient, and adaptable features, offer promising solutions. Building on this potential, recent research explores collaborative frameworks that integrate their complementary strengths, leveraging SLMs' specialization and efficiency with LLMs' generalization and reasoning to address diverse objectives across tasks and deployment scenarios. Motivated by these developments, this paper presents a systematic survey of SLM-LLM collaboration from the perspective of collaboration objectives. We propose a taxonomy covering four goals: performance enhancement, cost-effectiveness, cloud-edge privacy, and trustworthiness. Under this framework, we review representative methods, summarize design paradigms, and outline open challenges and future directions toward efficient and secure SLM-LLM collaboration. The collected papers are available at https://github.com/FairyFali/SLMs-Survey.
- oai:arXiv.org:2510.13890v2
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fali Wang, Jihai Chen, Shuhua Yang, Ali Al-Lawati, Linli Tang, Hui Liu, Suhang Wang
-
-
- Constraint-Driven Small Language Models Based on Agent and OpenAlex Knowledge Graph: Mining Conceptual Pathways and Discovering Innovation Points in Academic Papers
- https://arxiv.org/abs/2510.14303
- arXiv:2510.14303v2 Announce Type: replace
-Abstract: In recent years, the rapid increase in academic publications across various fields has posed severe challenges for academic paper analysis: scientists struggle to timely and comprehensively track the latest research findings and methodologies. Key concept extraction has proven to be an effective analytical paradigm, and its automation has been achieved with the widespread application of language models in industrial and scientific domains. However, existing paper databases are mostly limited to similarity matching and basic classification of key concepts, failing to deeply explore the relational networks between concepts. This paper is based on the OpenAlex open-source knowledge graph. By analyzing nearly 8,000 open-access papers from Novosibirsk State University, we discovered a strong correlation between the distribution patterns of paper key concept paths and both innovation points and rare paths. We propose a prompt engineering-based key concept path analysis method. This method leverages small language models to achieve precise key concept extraction and innovation point identification, and constructs an agent based on a knowledge graph constraint mechanism to enhance analysis accuracy. Through fine-tuning of the Qwen and DeepSeek models, we achieved significant improvements in accuracy, with the models publicly available on the Hugging Face platform.
- oai:arXiv.org:2510.14303v2
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Ziye Xia, Sergei S. Ospichev
-
-
- Finite element methods for electroneutral multicomponent electrolyte flows
- https://arxiv.org/abs/2510.14923
- arXiv:2510.14923v3 Announce Type: replace
-Abstract: We present a broad family of high-order finite element algorithms for simulating the flow of electroneutral electrolytes. The governing partial differential equations that we solve are the electroneutral Navier-Stokes-Onsager-Stefan-Maxwell (NSOSM) equations, which model momentum transport, multicomponent diffusion and electrical effects within the electrolyte. Our algorithms can be applied in the steady and transient settings, in two and three spatial dimensions, and under a variety of boundary conditions. Moreover, we allow for the material parameters (e.g. viscosity, diffusivities, thermodynamic factors and density) to be solution-dependent and thermodynamically non-ideal. The flexibility of our approach requires us to address subtleties that arise in the governing equations due to the interplay between boundary conditions and the equation of state. We demonstrate the algorithms in various physical configurations, including (i) electrolyte flow around a microfluidic rotating disk electrode and (ii) the flow in a Hull cell of a cosolvent electrolyte mixture used in lithium-ion batteries.
- oai:arXiv.org:2510.14923v3
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Aaron Baier-Reinio, Patrick E. Farrell, Charles W. Monroe
-
-
- A Courcelle-Type Metatheorem for Rank-Bounded Unconstrained Binary Optimization
- https://arxiv.org/abs/2510.15168
- arXiv:2510.15168v2 Announce Type: replace
-Abstract: We present the first uniform XP exact algorithm for unconstrained binary optimization of quadratic, polynomial, fractional, and other objectives under a single parameter, the differentially affine (DA) rank $r$. An objective $f: \{0,1\}^n \to \mathbb{R}$ has DA rank $r$ if there is a feature map $\psi: \{0,1\}^n \to \mathbb{R}^r$ such that each coordinate flip has finite gain $\Delta_{\pm e_i}f(x)=\langle v_{\pm e_i},\psi(x)\rangle+\beta_{\pm e_i}$. Our algorithm enumerates the $O((2n)^r)$ chambers of the induced hyperplane arrangement and applies a two-sided local-optimality test: a solution exists on a chamber and is unique iff $\operatorname{sign}\Delta_{+e_i}=-\operatorname{sign}\Delta_{-e_i}$ for all $i$, in which case $x_i^\star=1$ iff $\Delta_{+e_i}>0$. This yields $n^{O(r)}$ time with $O(n)$ decoding per chamber. The framework uniformly covers a wide range of nonlinear functions, including all rank-$r$ quadratics, low-Waring-rank pseudo-Boolean polynomials, finite products/ratios on positive domains, finite-basis separable sums via explicit lifts, Taylor-series approximations of analytic functions, and compositions of all the foregoing. Applications include Ising spin models, optimal experimental design, portfolio optimization, and robust statistics. Prior to our work, only specialized subcases involving sparsity, convexity, submodularity, etc. were known to be tractable. Analogous in spirit to Courcelle's theorem (MSO on bounded treewidth graphs) and Grohe's meta-theorems for constraint satisfaction, our result replaces logical width with analytic rank for nonlinear pseudo-Boolean optimization.
- oai:arXiv.org:2510.15168v2
- cs.CC
- cs.DM
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Marc Harary
-
-
- Finding geodesics with the Deep Ritz method
- https://arxiv.org/abs/2510.15177
- arXiv:2510.15177v2 Announce Type: replace
-Abstract: Geodesic problems involve computing trajectories between prescribed initial and final states to minimize a user-defined measure of distance, cost, or energy. They arise throughout physics and engineering -- for instance, in determining optimal paths through complex environments, modeling light propagation in refractive media, and the study of spacetime trajectories in control theory and general relativity. Despite their ubiquity, the scientific machine learning (SciML) community has given relatively little attention to investigating its methods in the context of these problems. In this work, we argue that given their simple geometry, variational structure, and natural nonlinearity, geodesic problems are particularly well-suited for the Deep Ritz method. We substantiate this claim with four numerical examples drawn from path planning, optics, solid mechanics, and generative modeling. Our goal is not to provide an exhaustive study of geodesic problems, but rather to identify a promising application of the Deep Ritz method and a fruitful direction for future SciML research.
- oai:arXiv.org:2510.15177v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Conor Rowan
-
-
- SafeSearch: Do Not Trade Safety for Utility in LLM Search Agents
- https://arxiv.org/abs/2510.17017
- arXiv:2510.17017v3 Announce Type: replace
-Abstract: Large language model (LLM) based search agents iteratively generate queries, retrieve external information, and reason to answer open-domain questions. While researchers have primarily focused on improving their utility, their safety behaviors remain underexplored. In this paper, we first evaluate search agents using red-teaming datasets and find that they are more likely to produce harmful outputs than base LLMs. For example, when asked "How can I track someone's location without their consent?", a base model refuses, whereas a search agent designed to retrieve and cite sources may lower its refusal threshold, fetch documents (e.g., court cases), and, once appended, synthesize them into an informative yet unsafe summary. We further show that utility-oriented fine-tuning intensifies this risk, motivating joint alignment of safety and utility. We present SafeSearch, a multi-objective reinforcement learning approach that couples a final-output safety/utility reward with a novel query-level shaping term that penalizes unsafe queries and rewards safe ones. Experiments show that SafeSearch reduces agent harmfulness by over 70% across three red-teaming datasets while producing safe, helpful responses, and matches the QA performance of a utility-only finetuned agent; further analyses confirm the effectiveness of the query-level reward in jointly improving safety and utility.
- oai:arXiv.org:2510.17017v3
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Qiusi Zhan, Angeline Budiman-Chan, Abdelrahman Zayed, Xingzhi Guo, Daniel Kang, Joo-Kyung Kim
-
-
- PlanU: Large Language Model Reasoning through Planning under Uncertainty
- https://arxiv.org/abs/2510.18442
- arXiv:2510.18442v2 Announce Type: replace
-Abstract: Large Language Models (LLMs) are increasingly being explored across a range of reasoning tasks. However, LLMs sometimes struggle with reasoning tasks under uncertainty that are relatively easy for humans, such as planning actions in stochastic environments. The adoption of LLMs for reasoning is impeded by uncertainty challenges, such as LLM uncertainty and environmental uncertainty. LLM uncertainty arises from the stochastic sampling process inherent to LLMs. Most LLM-based Decision-Making (LDM) approaches address LLM uncertainty through multiple reasoning chains or search trees. However, these approaches overlook environmental uncertainty, which leads to poor performance in environments with stochastic state transitions. Some recent LDM approaches deal with uncertainty by forecasting the probability of unknown variables. However, they are not designed for multi-step reasoning tasks that require interaction with the environment. To address uncertainty in LLM decision-making, we introduce PlanU, an LLM-based planning method that captures uncertainty within Monte Carlo Tree Search (MCTS). PlanU models the return of each node in the MCTS as a quantile distribution, which uses a set of quantiles to represent the return distribution. To balance exploration and exploitation during tree search, PlanU introduces an Upper Confidence Bounds with Curiosity (UCC) score which estimates the uncertainty of MCTS nodes. Through extensive experiments, we demonstrate the effectiveness of PlanU in LLM-based reasoning tasks under uncertainty.
- oai:arXiv.org:2510.18442v2
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ziwei Deng, Mian Deng, Chenjing Liang, Zeming Gao, Chennan Ma, Chenxing Lin, Haipeng Zhang, Songzhu Mei, Siqi Shen, Cheng Wang
-
-
- ADPO: Anchored Direct Preference Optimization
- https://arxiv.org/abs/2510.18913
- arXiv:2510.18913v4 Announce Type: replace
-Abstract: Direct Preference Optimization (DPO) has become a standard for aligning models with human feedback, yet its reliance on hard, pairwise preferences makes it brittle to annotator noise and distribution shift. We propose Anchored Direct Preference Optimization (ADPO), a theoretically grounded framework that extends preference learning to soft, listwise supervision through reference anchoring. Our key theoretical contributions are threefold: (1) we establish that ADPO unifies major learning paradigms, including supervised fine-tuning, knowledge distillation, maximum-entropy reinforcement learning, and DPO, as special cases through different choices of target distribution, anchor policy, and temperature; (2) we prove that anchoring induces an implicit trust region governed by the softmax Fisher metric; and (3) we formalize the stability of dynamic anchor updates. Empirically, we discover a task-dependent tradeoff: dynamic anchors suit online exploration, while fixed anchors excel at offline distillation, reducing teacher-student KL divergence by two to three orders of magnitude (170 to 5000 times).
- oai:arXiv.org:2510.18913v4
- cs.LG
- cs.AI
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wang Zixian
-
-
- AgenticMath: Enhancing LLM Reasoning via Agentic-based Math Data Generation
- https://arxiv.org/abs/2510.19361
- arXiv:2510.19361v2 Announce Type: replace
-Abstract: The creation of high-quality datasets to improve Large Language Model (LLM) reasoning remains a significant challenge, as current methods often suffer from generating low-quality/incorrect answers and limited information richness from available data sources. To address this, we propose AgenticMath, a novel agentic pipeline for generating high-quality mathematical question-answer pairs to enhance the supervised fine-tuning of LLMs. Our method operates through four stages: (1) a Seed Question Filter that selects questions with high information richness, complexity, and clarity; (2) an Agentic Question Rephrase step that employs a multi-agent system to generate diverse, logically consistent paraphrases; (3) an Answer Augment step that rewrites answers using chain-of-thought reasoning to enhance numerical and logical correctness, without reliance on human-provided labels; and (4) a final Question and Answer Evaluation that retains only the strongest pairs. Extensive experiments demonstrate that fine-tuning 3B-8B parameter LLMs on AgenticMath-generated datasets (comprising only 30-60K math samples) achieves competitive or superior performance on diverse in-domain and out-of-domain mathematical reasoning benchmarks compared to baselines trained on much more data (e.g., 400K or 2.3M samples). Our work demonstrates that targeted, high-quality data generation is a more efficient path to improving mathematical reasoning in LLMs than large-scale, low-quality alternatives.
- oai:arXiv.org:2510.19361v2
- cs.CL
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Xianyang Liu, Yilin Liu, Shuai Wang, Hao Cheng, Andrew Estornell, Yuzhi Zhao, Jiaheng Wei
-
-
- A Foundational Theory of Quantitative Abstraction: Adjunctions, Duality, and Logic for Probabilistic Systems
- https://arxiv.org/abs/2510.19444
- arXiv:2510.19444v2 Announce Type: replace
-Abstract: The analysis and control of stochastic dynamical systems rely on probabilistic models such as (continuous-space) Markov decision processes, but large or continuous state spaces make exact analysis intractable and call for principled quantitative abstraction. This work develops a unified theory of such abstraction by integrating category theory, coalgebra, quantitative logic, and optimal transport, centred on a canonical $\varepsilon$-quotient of the behavioral pseudo-metric with a universal property: among all abstractions that collapse behavioral differences below $\varepsilon$, it is the most detailed, and every other abstraction achieving the same discounted value-loss guarantee factors uniquely through it. Categorically, a quotient functor $Q_\varepsilon$ from a category of probabilistic systems to a category of metric specifications admits, via the Special Adjoint Functor Theorem, a right adjoint $R_\varepsilon$, yielding an adjunction $Q_\varepsilon \dashv R_\varepsilon$ that formalizes a duality between abstraction and realization; logically, a quantitative modal $\mu$-calculus with separate reward and transition modalities is shown, for a broad class of systems, to be expressively complete for the behavioral pseudo-metric, with a countable fully abstract fragment suitable for computation. The theory is developed coalgebraically over Polish spaces and the Giry monad and validated on finite-state models using optimal-transport solvers, with experiments corroborating the predicted contraction properties and structural stability and aligning with the theoretical value-loss bounds, thereby providing a rigorous foundation for quantitative state abstraction and representation learning in probabilistic domains.
- oai:arXiv.org:2510.19444v2
- cs.LO
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nivar Anwer (Georgia Institute of Technology, USA), Ezequiel L\'opez-Rubio (University of M\'alaga, Spain,IBIMA Plataforma BIONAND, Spain), David Elizondo (De Montfort University, United Kingdom), Rafael M. Luque-Baena (University of M\'alaga, Spain,IBIMA Plataforma BIONAND, Spain)
-
-
- Misalignment Bounty: Crowdsourcing AI Agent Misbehavior
- https://arxiv.org/abs/2510.19738
- arXiv:2510.19738v2 Announce Type: replace
-Abstract: Advanced AI systems sometimes act in ways that differ from human intent. To gather clear, reproducible examples, we ran the Misalignment Bounty: a crowdsourced project that collected cases of agents pursuing unintended or unsafe goals. The bounty received 295 submissions, of which nine were awarded.
- This report explains the program's motivation and evaluation criteria, and walks through the nine winning submissions step by step.
- oai:arXiv.org:2510.19738v2
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Rustem Turtayev, Natalia Fedorova, Oleg Serikov, Sergey Koldyba, Lev Avagyan, Dmitrii Volkov
-
-
- Well-Posedness and Approximation of Weak Solutions to Time Dependent Maxwell's Equations with $L^2$-Data
- https://arxiv.org/abs/2510.20752
- arXiv:2510.20752v2 Announce Type: replace
-Abstract: We study Maxwell's equations in conducting media with perfectly conducting boundary conditions on Lipschitz domains, allowing rough material coefficients and $L^2$-data. Our first contribution is a direct proof of well-posedness of the first-order weak formulation, including solution existence and uniqueness, an energy identity, and continuous dependence on the data. The argument uses interior-in-time mollification to show uniqueness while avoiding reflection techniques. Existence is via the well-known Galerkin method (cf.~Duvaut and Lions \cite[Eqns.~(4.31)--(4.32), p.~346; Thm.~4.1]{GDuvaut_JLLions_1976a}). To make the paper self-contained, a full proof is provided.
- Our second contribution is a structure-preserving semi-discrete finite element method based on the N\'ed\'elec/Raviart--Thomas de Rham complex. The scheme preserves a discrete Gauss law for all times and satisfies a continuous-in-time energy identity with stability for nonnegative conductivity. With a divergence-free initialization of the magnetic field (via potential reconstruction or constrained $L^2$ projection), we prove convergence of the semi-discrete solutions to the unique weak solution as the mesh is refined. The analysis mostly relies on projector consistency, weak-* compactness in time-bounded $L^2$ spaces, and identification of time derivatives in dual spaces.
- oai:arXiv.org:2510.20752v2
- math.NA
- cs.NA
- math.AP
- physics.comp-ph
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Harbir Antil
-
-
- The Mirror Loop: Recursive Non-Convergence in Generative Reasoning Systems
- https://arxiv.org/abs/2510.21861
- arXiv:2510.21861v2 Announce Type: replace
-Abstract: Large language models are often described as capable of reflective reasoning, yet recursive self-evaluation without external feedback frequently yields reformulation rather than progress. We test this prediction in a cross-provider study of 144 reasoning sequences across three models (OpenAI GPT-4o-mini, Anthropic Claude 3 Haiku, and Google Gemini 2.0 Flash) and four task families (arithmetic, code, explanation, reflection), each iterated ten times under two conditions: ungrounded self-critique and a minimal grounding intervention (a single verification step at iteration three). Mean informational change (delta I, measured via normalized edit distance) declined by 55% from early (0.193) to late (0.087) iterations in ungrounded runs, with consistent patterns across all three providers. Grounded runs showed a +28% rebound in informational change immediately after the intervention and sustained non-zero variance thereafter. Complementary measures (n-gram novelty, embedding drift, and character-level entropy) converged on the same pattern: reflection without contact tends toward informational closure. We interpret this as evidence for a structural limit on self-correction in generative reasoning: without an exchange of information with an independent verifier or environment, recursive inference approaches an attractor state of epistemic stasis. Minimal grounding functions as dissipative coupling, reintroducing informational flux. The cross-architecture consistency suggests the mirror loop arises from shared autoregressive training objectives rather than provider-specific alignment schemes. The results delineate when reflection is performative rather than epistemic and motivate design principles for grounded, cooperative reasoning. Materials and code are publicly available.
- oai:arXiv.org:2510.21861v2
- cs.LG
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Bentley DeVilling (Course Correct Labs, Independent Research Group)
-
-
- Enabling Robust In-Context Memory and Rapid Task Adaptation in Transformers with Hebbian and Gradient-Based Plasticity
- https://arxiv.org/abs/2510.21908
- arXiv:2510.21908v2 Announce Type: replace
-Abstract: Large language models display in-context learning as an emergent effect of scale, but they rely on static weights during inference. In contrast, biological systems continually adapt via synaptic plasticity. We investigate whether explicit, biologically inspired plasticity can endow Transformers with faster in-sequence adaptation. To this end, we augment decoder-only Transformers with fast-weight modules updated either by (i) a neuromodulated Hebbian rule or (ii) the gradient-based plasticity mechanism of Duan et al. (2023). Across copying, regression, and few-shot classification tasks (CIFAR-FS, Omniglot), Hebbian plasticity consistently achieves lower loss and stronger few-shot generalization, while gradient-based updates perform best on long-horizon credit assignment. When associations are short and linearly separable, static weights suffice, defining a clear boundary condition for when plasticity helps. Analysis of learned modulatory signals reveals that gradient-based rules maintain large, persistent updates, whereas Hebbian plasticity is sharply gated around salient events. Together, these results show that explicit plasticity complements attention by enabling rapid, task-specific adaptation, and clarify when different plasticity mechanisms are most effective.
- oai:arXiv.org:2510.21908v2
- cs.NE
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Siddharth Chaudhary
-
-
- (Approximate) Matrix Multiplication via Convolutions
- https://arxiv.org/abs/2510.22193
- arXiv:2510.22193v2 Announce Type: replace
-Abstract: We study the capability of the Fast Fourier Transform (FFT) to accelerate exact and approximate matrix multiplication without using Strassen-like divide-and-conquer. We present a simple exact algorithm running in $O(n^{2.89})$ time, which only sums a few convolutions (FFTs) in $\mathbb{Z}_{m}^{k}$, building on the work of Cohn, Kleinberg, Szegedy and Umans (2005). As a corollary, combining this algorithm with linear sketching breaks the longstanding linear speed-accuracy tradeoff for "combinatorial" approximate matrix multiplication (AMM, Pagh'13, Sarlos'06, Clarkson-Woodruff'13), achieving error $\frac{1}{r^{1.1}}\left\lVert \mathbf{A} \right\rVert_{F}^{2}\left\lVert \mathbf{B}\right\rVert_{F}^{2}$ in $O(rn^{2})$ time, using nothing but FFTs.
- Motivated by the rich literature for approximating polynomials, our main contribution in this paper is extending the group-theoretic framework of Cohn and Umans (2003) to approximate matrix multiplication (AMM). Specifically, we introduce and study an approximate notion of the Triple Product Property, which in the abelian case is equivalent to finding a Sumset which minimizes (multi-)intersections with an arithmetic progression. We prove tight bounds on this quantity for abelian groups (yielding a simple and practical AMM algorithm via polynomial multiplication), and establish a weaker lower bound for non-abelian groups, extending a lemma of Gowers. Finally, we propose a concrete approach that uses low-degree approximation of multi-variate polynomials for AMM, which we believe will lead to practical, non-asymptotic AMM algorithms in real-world applications, most notably LLM inference.
- oai:arXiv.org:2510.22193v2
- cs.DS
- cs.DM
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yahel Uffenheimer, Omri Weinstein
-
-
- Toward Humanoid Brain-Body Co-design: Joint Optimization of Control and Morphology for Fall Recovery
- https://arxiv.org/abs/2510.22336
- arXiv:2510.22336v2 Announce Type: replace
-Abstract: Humanoid robots represent a central frontier in embodied intelligence, as their anthropomorphic form enables natural deployment in humans' workspace. Brain-body co-design for humanoids presents a promising approach to realizing this potential by jointly optimizing control policies and physical morphology. Within this context, fall recovery emerges as a critical capability. It not only enhances safety and resilience but also integrates naturally with locomotion systems, thereby advancing the autonomy of humanoids. In this paper, we propose RoboCraft, a scalable humanoid co-design framework for fall recovery that iteratively improves performance through the coupled updates of control policy and morphology. A shared policy pretrained across multiple designs is progressively finetuned on high-performing morphologies, enabling efficient adaptation without retraining from scratch. Concurrently, morphology search is guided by human-inspired priors and optimization algorithms, supported by a priority buffer that balances reevaluation of promising candidates with the exploration of novel designs. Experiments show that RoboCraft achieves an average performance gain of 44.55% on seven public humanoid robots, with morphology optimization driving at least 40% of the improvement in co-designing four humanoid robots, underscoring the critical role of humanoid co-design.
- oai:arXiv.org:2510.22336v2
- cs.RO
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Bo Yue, Sheng Xu, Kui Jia, Guiliang Liu
-
-
- MobileGeo: Exploring Hierarchical Knowledge Distillation for Resource-Efficient Cross-view Drone Geo-Localization
- https://arxiv.org/abs/2510.22582
- arXiv:2510.22582v2 Announce Type: replace
-Abstract: Cross-view geo-localization (CVGL) enables drone localization by matching aerial images to geo-tagged satellite databases, which is critical for autonomous navigation in GNSS-denied environments. However, existing methods rely on resource-intensive feature alignment and multi-branch architectures, incurring high inference costs that limit their deployment on mobile edge devices. We propose MobileGeo, a mobile-friendly framework designed for efficient on-device CVGL. MobileGeo achieves its efficiency through two key components: 1) During training, a Hierarchical Distillation (HD-CVGL) paradigm, coupled with Uncertainty-Aware Prediction Alignment (UAPA), distills essential information into a compact model without incurring inference overhead. 2) During inference, an efficient Multi-view Selection Refinement Module (MSRM) leverages mutual information to filter redundant views and reduce computational load. Extensive experiments demonstrate that MobileGeo outperforms previous state-of-the-art methods, achieving a 4.19\% improvement in AP on the University-1652 dataset while being over 5$\times$ more efficient in FLOPs and 3$\times$ faster. Crucially, MobileGeo runs at 251.5 FPS on an NVIDIA AGX Orin edge device, demonstrating its practical viability for real-time on-device drone geo-localization.
- oai:arXiv.org:2510.22582v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jian Sun, Kangdao Liu, Chi Zhang, Chuangquan Chen, Junge Shen, Chi-Man Vong
-
-
- Agentic Meta-Orchestrator for Multi-task Copilots
- https://arxiv.org/abs/2510.22781
- arXiv:2510.22781v2 Announce Type: replace
-Abstract: Microsoft Copilot suites serve as the universal entry point for various agents skilled in handling important tasks, ranging from assisting a customer with product purchases to detecting vulnerabilities in corporate programming code. Each agent can be powered by language models, software engineering operations such as database retrieval, and internal and external knowledge. The repertoire of a copilot can expand dynamically with new agents. This requires a robust orchestrator that can distribute tasks from user prompts to the right agents. In this work, we propose an Agentic Meta-orchestrator (AMO) for handling multiple tasks and scalable agents in copilot services, which can provide both natural language and action responses. We also demonstrate planning that leverages meta-learning, i.e., a trained decision tree model for deciding the best inference strategy among various agents/models. We showcase the effectiveness of our AMO through two production use cases: the Microsoft 365 (M365) E-Commerce Copilot and the code compliance copilot. The M365 E-Commerce Copilot advertises Microsoft products to external customers to promote sales success, provides up-to-date product information, and connects to multiple agents, such as relational databases and human customer support. The code compliance copilot scans internal DevOps code to detect known and new compliance issues in pull requests (PRs).
- oai:arXiv.org:2510.22781v2
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- ICDM RARA workshop 2025
- Xiaofeng Zhu, Yunshen Zhou
-
-
- Numerical Spectrum Linking: Identification of Governing PDE via Koopman-Chebyshev Approximation
- https://arxiv.org/abs/2510.23078
- arXiv:2510.23078v2 Announce Type: replace
-Abstract: A numerical framework is proposed for identifying partial differential equations (PDEs) governing dynamical systems directly from their observation data using Chebyshev polynomial approximation. In contrast to data-driven approaches such as dynamic mode decomposition (DMD), which approximate the Koopman operator without a clear connection to differential operators, the proposed method constructs finite-dimensional Koopman matrices by projecting the dynamics onto a Chebyshev basis, thereby capturing both differential and nonlinear terms. This establishes a numerical link between the Koopman and differential operators. Numerical experiments on benchmark dynamical systems confirm the accuracy and efficiency of the approach, underscoring its potential for interpretable operator learning. The framework also lays a foundation for future integration with symbolic regression, enabling the construction of explicit mathematical models directly from data.
- oai:arXiv.org:2510.23078v2
- math.NA
- cs.NA
- eess.SP
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Phonepaserth Sisaykeo, Shogo Muramatsu
-
-
- Revisiting Multimodal Positional Encoding in Vision-Language Models
- https://arxiv.org/abs/2510.23095
- arXiv:2510.23095v2 Announce Type: replace
-Abstract: Multimodal position encoding is essential for vision-language models, yet there has been little systematic investigation into it. We conduct a comprehensive analysis of multimodal Rotary Positional Embedding (RoPE) by examining its two core components: position design and frequency allocation. Through extensive experiments, we identify three key guidelines: positional coherence, full frequency utilization, and preservation of textual priors, ensuring unambiguous layout, rich representation, and faithful transfer from the pre-trained LLM. Based on these insights, we propose Multi-Head RoPE (MHRoPE) and MRoPE-Interleave (MRoPE-I), two simple and plug-and-play variants that require no architectural changes. Our methods consistently outperform existing approaches across diverse benchmarks, with significant improvements in both general and fine-grained multimodal understanding. Code will be available at https://github.com/JJJYmmm/Multimodal-RoPEs.
- oai:arXiv.org:2510.23095v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Jie Huang, Xuejing Liu, Sibo Song, Ruibing Hou, Hong Chang, Junyang Lin, Shuai Bai
-
-
- Interpretable Tile-Based Classification of Paclitaxel Exposure
- https://arxiv.org/abs/2510.23363
- arXiv:2510.23363v2 Announce Type: replace
-Abstract: Medical image analysis is central to drug discovery and preclinical evaluation, where scalable, objective readouts can accelerate decision-making. We address classification of paclitaxel (Taxol) exposure from phase-contrast microscopy of C6 glioma cells -- a task with subtle dose differences that challenges full-image models. We propose a simple tiling-and-aggregation pipeline that operates on local patches and combines tile outputs into an image label, achieving state-of-the-art accuracy on the benchmark dataset and improving over the published baseline by around 20 percentage points, with trends confirmed by cross-validation. To understand why tiling is effective, we further apply Grad-CAM, Score-CAM, and attention analyses, which enhance model interpretability and point toward robustness-oriented directions for future medical image research. Code is released to facilitate reproduction and extension.
- oai:arXiv.org:2510.23363v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Sean Fletcher, Gabby Scott, Douglas Currie, Xin Zhang, Yuqi Song, Bruce MacLeod
-
-
- LLMComp: A Language Modeling Paradigm for Error-Bounded Scientific Data Compression (Technical Report)
- https://arxiv.org/abs/2510.23632
- arXiv:2510.23632v2 Announce Type: replace
-Abstract: The rapid growth of high-resolution scientific simulations and observation systems is generating massive spatiotemporal datasets, making efficient, error-bounded compression increasingly important. Meanwhile, decoder-only large language models (LLMs) have demonstrated remarkable capabilities in modeling complex sequential data. In this paper, we propose LLMCOMP, a novel lossy compression paradigm that leverages decoder-only LLMs to model scientific data. LLMCOMP first quantizes 3D fields into discrete tokens, arranges them via Z-order curves to preserve locality, and applies coverage-guided sampling to enhance training efficiency. An autoregressive transformer is then trained with spatial-temporal embeddings to model token transitions. During compression, the model performs top-k prediction, storing only rank indices and fallback corrections to ensure strict error bounds. Experiments on multiple reanalysis datasets show that LLMCOMP consistently outperforms state-of-the-art compressors, achieving up to 30% higher compression ratios under strict error bounds. These results highlight the potential of LLMs as general-purpose compressors for high-fidelity scientific data.
- oai:arXiv.org:2510.23632v2
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Guozhong Li, Muhannad Alhumaidi, Spiros Skiadopoulos, Panos Kalnis
-
-
- Kemeny's constant minimization for reversible Markov chains via structure-preserving perturbations
- https://arxiv.org/abs/2510.24679
- arXiv:2510.24679v2 Announce Type: replace
-Abstract: Kemeny's constant measures the efficiency of a Markov chain in traversing its states. We investigate whether structure-preserving perturbations to the transition probabilities of a reversible Markov chain can improve its connectivity while maintaining a fixed stationary distribution. Although the minimum achievable value for Kemeny's constant can be estimated, the required perturbations may be infeasible. We reformulate the problem as an optimization task, focusing on solution existence and efficient algorithms, with an emphasis on the problem of minimizing Kemeny's constant under sparsity constraints.
- oai:arXiv.org:2510.24679v2
- math.NA
- cs.NA
- math.PR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fabio Durastante, Miryam Gnazzo, Beatrice Meini
-
-
- Generative View Stitching
- https://arxiv.org/abs/2510.24718
- arXiv:2510.24718v2 Announce Type: replace
-Abstract: Autoregressive video diffusion models are capable of long rollouts that are stable and consistent with history, but they are unable to guide the current generation with conditioning from the future. In camera-guided video generation with a predefined camera trajectory, this limitation leads to collisions with the generated scene, after which autoregression quickly collapses. To address this, we propose Generative View Stitching (GVS), which samples the entire sequence in parallel such that the generated scene is faithful to every part of the predefined camera trajectory. Our main contribution is a sampling algorithm that extends prior work on diffusion stitching for robot planning to video generation. While such stitching methods usually require a specially trained model, GVS is compatible with any off-the-shelf video model trained with Diffusion Forcing, a prevalent sequence diffusion framework that we show already provides the affordances necessary for stitching. We then introduce Omni Guidance, a technique that enhances the temporal consistency in stitching by conditioning on both the past and future, and that enables our proposed loop-closing mechanism for delivering long-range coherence. Overall, GVS achieves camera-guided video generation that is stable, collision-free, frame-to-frame consistent, and closes loops for a variety of predefined camera paths, including Oscar Reutersv\"ard's Impossible Staircase. Results are best viewed as videos at https://andrewsonga.github.io/gvs.
- oai:arXiv.org:2510.24718v2
- cs.CV
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Chonghyuk Song, Michal Stary, Boyuan Chen, George Kopanas, Vincent Sitzmann
-
-
- WOD-E2E: Waymo Open Dataset for End-to-End Driving in Challenging Long-tail Scenarios
- https://arxiv.org/abs/2510.26125
- arXiv:2510.26125v2 Announce Type: replace
-Abstract: Vision-based end-to-end (E2E) driving has garnered significant interest in the research community due to its scalability and synergy with multimodal large language models (MLLMs). However, current E2E driving benchmarks primarily feature nominal scenarios, failing to adequately test the true potential of these systems. Furthermore, existing open-loop evaluation metrics often fall short in capturing the multi-modal nature of driving or effectively evaluating performance in long-tail scenarios. To address these gaps, we introduce the Waymo Open Dataset for End-to-End Driving (WOD-E2E). WOD-E2E contains 4,021 driving segments (approximately 12 hours), specifically curated for challenging long-tail scenarios that are rare in daily life, occurring with a frequency of less than 0.03%. Concretely, each segment in WOD-E2E includes the high-level routing information, ego states, and 360-degree camera views from 8 surrounding cameras. To evaluate the E2E driving performance on these long-tail situations, we propose a novel open-loop evaluation metric: Rater Feedback Score (RFS). Unlike conventional metrics that measure the distance between predicted waypoints and the logs, RFS measures how closely the predicted trajectory matches rater-annotated trajectory preference labels. We have released rater preference labels for all WOD-E2E validation set segments, while the held-out test set labels have been used for the 2025 WOD-E2E Challenge. Through our work, we aim to foster state-of-the-art research into generalizable, robust, and safe end-to-end autonomous driving agents capable of handling complex real-world situations.
- oai:arXiv.org:2510.26125v2
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Runsheng Xu, Hubert Lin, Wonseok Jeon, Hao Feng, Yuliang Zou, Liting Sun, John Gorman, Kate Tolstaya, Sarah Tang, Brandyn White, Ben Sapp, Mingxing Tan, Jyh-Jing Hwang, Dragomir Anguelov
-
-
- Beyond Synthetic Benchmarks: Evaluating LLM Performance on Real-World Class-Level Code Generation
- https://arxiv.org/abs/2510.26130
- arXiv:2510.26130v2 Announce Type: replace
-Abstract: Large language models (LLMs) have demonstrated strong performance on function-level code generation benchmarks, yet real-world software development increasingly demands class-level implementations that integrate multiple methods, attributes, and dependencies within authentic project contexts. This gap between benchmark performance and practical utility raises critical questions about LLMs' readiness for production code assistance, particularly regarding their ability to generalize across familiar and novel codebases.
- We introduce a benchmark derived from real-world open-source repositories, comprising classes divided into seen and unseen partitions to evaluate generalization under practical conditions. We systematically examine how input specification completeness and retrieval-augmented generation affect class-level correctness across multiple state-of-the-art LLMs.
- Our evaluation reveals a substantial performance gap: while LLMs achieve 84 to 89% correctness on synthetic benchmarks, they attain only 25 to 34% on real-world class tasks, with minimal distinction between familiar and novel codebases. Comprehensive documentation provides marginal improvements (1 to 3%), whereas retrieval augmentation yields greater gains (4 to 7%) by supplying concrete implementation patterns. Error analysis identifies AttributeError, TypeError, and AssertionError as dominant failure modes, with distinct patterns between synthetic and real-world scenarios.
- These findings provide actionable insights for enhancing context modelling, documentation strategies, and retrieval integration in production code assistance tools.
- oai:arXiv.org:2510.26130v2
- cs.SE
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Musfiqur Rahman, SayedHassan Khatoonabadi, Emad Shihab
-
-
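The error analysis above identifies AttributeError, TypeError, and AssertionError as the dominant failure modes. A minimal sketch of how such a tally could be produced, executing each generated snippet against its test and recording the exception type that ends the run (toy candidates, not the paper's benchmark or harness):

```python
from collections import Counter

def classify_failures(candidates):
    """Run each (code, test) pair in a fresh namespace and tally the
    exception type that terminates it; a clean run counts as 'pass'."""
    tally = Counter()
    for code, test in candidates:
        ns = {}
        try:
            exec(code, ns)
            exec(test, ns)
            tally["pass"] += 1
        except Exception as e:
            tally[type(e).__name__] += 1
    return tally

# Toy candidates: one correct, one missing attribute, one wrong type.
samples = [
    ("def area(r): return 3.14159 * r * r", "assert round(area(2), 2) == 12.57"),
    ("class Box: pass", "Box().volume"),             # AttributeError
    ("def add(a, b): return a + b", "add(1, '2')"),  # TypeError
]
print(classify_failures(samples))
```

Grouping runs by benchmark (synthetic vs. real-world) then exposes the distinct failure-mode patterns the abstract describes.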
- Which Way Does Time Flow? A Psychophysics-Grounded Evaluation for Vision-Language Models
- https://arxiv.org/abs/2510.26241
- arXiv:2510.26241v2 Announce Type: replace
-Abstract: Modern vision-language models (VLMs) excel at many multimodal tasks, yet their grasp of temporal information in video remains weak and, crucially, under-evaluated. We probe this gap with a deceptively simple but revealing challenge: judging the arrow of time (AoT)-whether a short clip is played forward or backward. We introduce AoT-PsyPhyBENCH, a psychophysically validated benchmark that tests whether VLMs can infer temporal direction in natural videos using the same stimuli and behavioral baselines established for humans. Our comprehensive evaluation of open-weight and proprietary, reasoning and non-reasoning VLMs reveals that most models perform near chance, and even the best lag far behind human accuracy on physically irreversible processes (e.g., free fall, diffusion/explosion) and causal manual actions (division/addition) that humans recognize almost instantly. These results highlight a fundamental gap in current multimodal systems: while they capture rich visual-semantic correlations, they lack the inductive biases required for temporal continuity and causal understanding. We release the code and data for AoT-PsyPhyBENCH to encourage further progress in the physical and temporal reasoning capabilities of VLMs.
- oai:arXiv.org:2510.26241v2
- cs.CV
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Shiho Matta, Lis Kanashiro Pereira, Peitao Han, Fei Cheng, Shigeru Kitazawa
-
-
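The "near chance" finding invites a simple sanity check: for a binary forward/backward judgment, an exact binomial tail probability says whether an observed accuracy is plausibly just guessing. A small illustration (not the paper's evaluation protocol; the 60/100 figure is made up):

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P[X >= k] for X ~ Binomial(n, p): the chance of scoring at
    least k/n correct purely by guessing with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A model labelling 60 of 100 clips correctly: above chance?
p_val = binom_p_at_least(60, 100)
print(f"P(>=60/100 by guessing) = {p_val:.4f}")
```

Accuracies whose tail probability stays large are statistically indistinguishable from coin-flipping on the arrow-of-time task.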
- Thor: Towards Human-Level Whole-Body Reactions for Intense Contact-Rich Environments
- https://arxiv.org/abs/2510.26280
- arXiv:2510.26280v2 Announce Type: replace
-Abstract: Humanoids hold great potential for service, industrial, and rescue applications, in which robots must sustain whole-body stability while performing intense, contact-rich interactions with the environment. However, enabling humanoids to generate human-like, adaptive responses under such conditions remains a major challenge. To address this, we propose Thor, a humanoid framework for human-level whole-body reactions in contact-rich environments. Based on the robot's force analysis, we design a force-adaptive torso-tilt (FAT2) reward function to encourage humanoids to exhibit human-like responses during force-interaction tasks. To mitigate the high-dimensional challenges of humanoid control, Thor introduces a reinforcement learning architecture that decouples the upper body, waist, and lower body. Each component shares global observations of the whole body and jointly updates its parameters. Finally, we deploy Thor on the Unitree G1, and it substantially outperforms baselines in force-interaction tasks. Specifically, the robot achieves a peak pulling force of 167.7 N (approximately 48% of the G1's body weight) when moving backward and 145.5 N when moving forward, representing improvements of 68.9% and 74.7%, respectively, compared with the best-performing baseline. Moreover, Thor is capable of pulling a loaded rack (130 N) and opening a fire door with one hand (60 N). These results highlight Thor's effectiveness in enhancing humanoid force-interaction capabilities.
- oai:arXiv.org:2510.26280v2
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gangyang Li, Qing Shi, Youhao Hu, Jincheng Hu, Zhongyuan Wang, Xinlong Wang, Shaqi Luo
-
-
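The abstract names a force-adaptive torso-tilt (FAT2) reward but does not give its functional form. One plausible shape, purely as a hypothetical sketch: set a target tilt proportional to the sensed external force (lean into the pull), cap it, and reward Gaussian closeness to that target. The gain `k`, width `sigma`, and cap `tilt_max` below are invented for illustration:

```python
import math

def fat2_reward(tilt_rad, force_n, k=0.002, sigma=0.05, tilt_max=0.35):
    """Hypothetical force-adaptive torso-tilt reward: target tilt grows
    with external force, capped at tilt_max; reward decays as a Gaussian
    around that target."""
    target = min(k * force_n, tilt_max)
    return math.exp(-((tilt_rad - target) / sigma) ** 2)

# Leaning 0.30 rad into a 150 N pull scores near 1; standing upright scores ~0.
print(fat2_reward(0.30, 150.0))
print(fat2_reward(0.00, 150.0))
```

The real reward presumably derives its target from the robot's force analysis; this only conveys the "tilt should adapt to force" idea.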
- On Measuring Localization of Shortcuts in Deep Networks
- https://arxiv.org/abs/2510.26560
- arXiv:2510.26560v2 Announce Type: replace
-Abstract: Shortcuts, spurious rules that perform well during training but fail to generalize, present a major challenge to the reliability of deep networks (Geirhos et al., 2020). However, the impact of shortcuts on feature representations remains understudied, obstructing the design of principled shortcut-mitigation methods. To overcome this limitation, we investigate the layer-wise localization of shortcuts in deep models. Our novel experiment design quantifies the layer-wise contribution to accuracy degradation caused by a shortcut-inducing skew via counterfactual training on clean and skewed datasets. We employ our design to study shortcuts on CIFAR-10, Waterbirds, and CelebA datasets across VGG, ResNet, DeiT, and ConvNeXt architectures. We find that shortcut learning is not localized in specific layers but distributed throughout the network. Different network parts play different roles in this process: shallow layers predominantly encode spurious features, while deeper layers predominantly forget core features that are predictive on clean data. We also analyze the differences in localization and describe its principal axes of variation. Finally, our analysis of layer-wise shortcut-mitigation strategies suggests the hardness of designing general methods, supporting dataset- and architecture-specific approaches instead.
- oai:arXiv.org:2510.26560v2
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Nikita Tsoy, Nikola Konstantinov
-
-
- NaviTrace: Evaluating Embodied Navigation of Vision-Language Models
- https://arxiv.org/abs/2510.26909
- arXiv:2510.26909v2 Announce Type: replace
-Abstract: Vision-language models demonstrate unprecedented performance and generalization across a wide range of tasks and scenarios. Integrating these foundation models into robotic navigation systems opens pathways toward building general-purpose robots. Yet, evaluating these models' navigation capabilities remains constrained by costly real-world trials, overly simplified simulations, and limited benchmarks. We introduce NaviTrace, a high-quality Visual Question Answering benchmark where a model receives an instruction and embodiment type (human, legged robot, wheeled robot, bicycle) and must output a 2D navigation trace in image space. Across 1000 scenarios and more than 3000 expert traces, we systematically evaluate eight state-of-the-art VLMs using a newly introduced semantic-aware trace score. This metric combines Dynamic Time Warping distance, goal endpoint error, and embodiment-conditioned penalties derived from per-pixel semantics and correlates with human preferences. Our evaluation reveals a consistent gap to human performance caused by poor spatial grounding and goal localization. NaviTrace establishes a scalable and reproducible benchmark for real-world robotic navigation. The benchmark and leaderboard can be found at https://leggedrobotics.github.io/navitrace_webpage/.
- oai:arXiv.org:2510.26909v2
- cs.RO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Tim Windecker, Manthan Patel, Moritz Reuss, Richard Schwarzkopf, Cesar Cadena, Rudolf Lioutikov, Marco Hutter, Jonas Frey
-
-
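The trace score above combines Dynamic Time Warping distance, goal endpoint error, and semantic penalties. The DTW component, which aligns a predicted trace to an expert trace of possibly different length, can be sketched as the standard dynamic program:

```python
import math

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 2D point sequences:
    minimum cumulative Euclidean cost over monotone alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

pred  = [(0, 0), (1, 1), (2, 2)]
truth = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(dtw_distance(pred, truth))
```

Endpoint error is then just `math.dist(pred[-1], truth[-1])`; the embodiment-conditioned semantic penalties are specific to the benchmark and omitted here.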
- The Skolem Problem in rings of positive characteristic
- https://arxiv.org/abs/2510.27603
- arXiv:2510.27603v2 Announce Type: replace
-Abstract: We show that the Skolem Problem is decidable in finitely generated commutative rings of positive characteristic. More precisely, we show that there exists an algorithm which, given a finite presentation of a (unitary) commutative ring $\mathcal{R} = \mathbb{Z}_{/T}[X_1, \ldots, X_n]/I$ of characteristic $T > 0$, and a linear recurrence sequence $(\gamma_n)_{n \in \mathbb{N}} \in \mathcal{R}^{\mathbb{N}}$, determines whether $(\gamma_n)_{n \in \mathbb{N}}$ contains a zero term. Our proof is based on two recent results: Dong and Shafrir (2025) on the solution set of S-unit equations over $p^e$-torsion modules, and Karimov, Luca, Nieuwveld, Ouaknine, and Worrell (2025) on solving linear equations over powers of two multiplicatively independent numbers. Our result implies, moreover, that the zero set of a linear recurrence sequence over a ring of characteristic $T = p_1^{e_1} \cdots p_k^{e_k}$ is effectively a finite union of $p_i$-normal sets in the sense of Derksen (2007).
- oai:arXiv.org:2510.27603v2
- cs.LO
- math.NT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Ruiwen Dong, Doron Shafrir
-
-
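The paper's result covers general finitely generated commutative rings, but the special case $\mathcal{R} = \mathbb{Z}_{/T}$ already illustrates why positive characteristic helps: the state vector of a linear recurrence lives in a finite set, so its orbit is eventually periodic and zero-detection reduces to walking the orbit until a state repeats. A sketch of that elementary case only (not the paper's algorithm):

```python
def lrs_has_zero(coeffs, init, T):
    """Decide whether u_{n+k} = sum_i c_i * u_{n+i} (mod T), with initial
    values `init`, ever hits 0. Over Z/T the state tuple is drawn from a
    finite set, so we walk until a state repeats."""
    state = tuple(x % T for x in init)
    seen = set()
    while state not in seen:
        if state[0] == 0:
            return True
        seen.add(state)
        nxt = sum(c * x for c, x in zip(coeffs, state)) % T
        state = state[1:] + (nxt,)
    return False

# Fibonacci mod 8: 1, 1, 2, 3, 5, 0, ... contains a zero.
print(lrs_has_zero([1, 1], [1, 1], 8))  # True
# u_{n+1} = 2*u_n mod 5 from 1: 1, 2, 4, 3, 1, ... never zero.
print(lrs_has_zero([2], [1], 5))        # False
```

The brute-force walk is exponential in the state size; the cited structural results are what make the problem tractable and describable (e.g., via $p_i$-normal sets) in general.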
- EVINGCA: Adaptive Graph Clustering with Evolving Neighborhood Statistics
- https://arxiv.org/abs/2511.00064
- arXiv:2511.00064v2 Announce Type: replace
-Abstract: Clustering algorithms often rely on restrictive assumptions: K-Means and Gaussian Mixtures presuppose convex, Gaussian-like clusters, while DBSCAN and HDBSCAN capture non-convexity but can be highly sensitive to their hyperparameters. I introduce EVINGCA (Evolving Variance-Informed Nonparametric Graph Construction Algorithm), a density-variance based clustering algorithm that treats cluster formation as an adaptive, evolving process on a nearest-neighbor graph. EVINGCA expands rooted graphs via breadth-first search, guided by continuously updated local distance and shape statistics, replacing fixed density thresholds with local statistical feedback. With spatial indexing, EVINGCA features log-linear complexity in the average case and exhibits competitive performance against baselines across a variety of synthetic, real-world, low-d, and high-d datasets.
- oai:arXiv.org:2511.00064v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Randolph Wiredu-Aidoo
-
-
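The core idea of the abstract, BFS cluster growth on a nearest-neighbor graph gated by continuously updated local statistics rather than a fixed density threshold, can be conveyed with a toy sketch. This is not the paper's algorithm: the acceptance rule (within `z` standard deviations of the running mean of accepted edge lengths) and its fallback are invented for illustration:

```python
import math

def adaptive_bfs_clusters(points, k=3, z=2.0):
    """Toy variance-informed clustering: grow each cluster by BFS over
    k-nearest neighbours, accepting an edge only if its length fits the
    evolving statistics of edges already accepted into that cluster."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    nbrs = [sorted(range(n), key=lambda j: dist(i, j))[1:k + 1] for i in range(n)]
    labels = [-1] * n
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = seed
        accepted, queue = [], [seed]
        while queue:
            i = queue.pop(0)
            for j in nbrs[i]:
                if labels[j] != -1:
                    continue
                d = dist(i, j)
                if accepted:
                    mu = sum(accepted) / len(accepted)
                    sd = (sum((x - mu) ** 2 for x in accepted) / len(accepted)) ** 0.5
                    if d > mu + z * (sd if sd > 0 else mu):
                        continue  # edge too long for this cluster's statistics
                labels[j] = seed
                accepted.append(d)
                queue.append(j)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
print(adaptive_bfs_clusters(pts))  # two clusters: {0,1,2} and {3,4}
```

The statistical gate is what lets the expansion stop at the large gap between the two groups without any global density parameter.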
- PDE-SHARP: PDE Solver Hybrids through Analysis and Refinement Passes
- https://arxiv.org/abs/2511.00183
- arXiv:2511.00183v2 Announce Type: replace
-Abstract: Current LLM-driven approaches using test-time computing to generate PDE solvers execute a large number of solver samples to identify high-accuracy solvers. These paradigms are especially costly for complex PDEs requiring substantial computational resources for numerical evaluation. We introduce PDE-SHARP, a framework that reduces computational costs by replacing expensive scientific computation with cheaper LLM inference, achieving superior solver accuracy with 60-75% fewer computational evaluations. PDE-SHARP employs three stages: (1) Analysis: mathematical chain-of-thought analysis including PDE classification, solution type detection, and stability analysis; (2) Genesis: solver generation based on mathematical insights from the previous stage; and (3) Synthesis: collaborative selection-hybridization tournaments in which LLM judges iteratively refine implementations through flexible performance feedback. To generate high-quality solvers, PDE-SHARP requires fewer than 13 solver evaluations on average compared to 30+ for baseline methods, improving accuracy uniformly across tested PDEs by $4\times$ on average, and demonstrates robust performance across LLM architectures, from general-purpose to specialized reasoning models.
- oai:arXiv.org:2511.00183v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Shaghayegh Fazliani, Madeleine Udell
-
-
- Towards 1000-fold Electron Microscopy Image Compression for Connectomics via VQ-VAE with Transformer Prior
- https://arxiv.org/abs/2511.00231
- arXiv:2511.00231v2 Announce Type: replace
-Abstract: Petascale electron microscopy (EM) datasets push storage, transfer, and downstream analysis toward their current limits. We present a vector-quantized variational autoencoder-based (VQ-VAE) compression framework for EM that spans compression ratios from 16x to 1024x and enables pay-as-you-decode usage: top-only decoding for extreme compression, with an optional Transformer prior that predicts bottom tokens (without changing the compression ratio) to restore texture via feature-wise linear modulation (FiLM) and concatenation. We further introduce an ROI-driven workflow that performs selective high-resolution reconstruction from 1024x-compressed latents only where needed.
- oai:arXiv.org:2511.00231v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fuming Yang, Yicong Li, Hanspeter Pfister, Jeff W. Lichtman, Yaron Meirovitch
-
-
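The extreme compression ratios come from the VQ-VAE bottleneck: each latent vector is replaced by the integer index of its nearest codebook entry, so only small token ids need storing. A minimal sketch of that quantization step (toy 2D codes, not the paper's codebook):

```python
def vector_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry.
    Returns (indices, quantized): integer token ids and their embeddings."""
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    indices = [min(range(len(codebook)), key=lambda c: sqdist(z, codebook[c]))
               for z in latents]
    return indices, [codebook[i] for i in indices]

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]  # K = 4 toy codes
latents = [(0.1, -0.1), (4.8, 5.2), (0.9, 0.1)]
idx, quant = vector_quantize(latents, codebook)
print(idx)  # [0, 3, 1]
```

In the pay-as-you-decode scheme, only the top-level token ids are stored at 1024x; the Transformer prior predicts the bottom tokens at decode time, which is why it adds texture without changing the stored size.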
- Text-guided Fine-Grained Video Anomaly Detection
- https://arxiv.org/abs/2511.00524
- arXiv:2511.00524v2 Announce Type: replace
-Abstract: Video Anomaly Detection (VAD) aims to identify anomalous events within video segments. In scenarios such as surveillance or industrial process monitoring, anomaly detection is of critical importance. Existing approaches are semi-automated, requiring human assessment for anomaly detection, and traditional VADs offer only a binary output of normal or anomalous. We propose Text-guided Fine-Grained Video Anomaly Detection (T-VAD), a framework built upon Large Vision-Language Model (LVLM). T-VAD introduces an Anomaly Heatmap Decoder (AHD) that performs pixel-wise visual-textual feature alignment to generate fine-grained anomaly heatmaps. Furthermore, we design a Region-aware Anomaly Encoder (RAE) that transforms the heatmaps into learnable textual embeddings, guiding the LVLM to accurately identify and localize anomalous events in videos. This significantly enhances both the granularity and interactivity of anomaly detection. The proposed method achieves SOTA performance, demonstrating 94.8% Area Under the Curve (AUC, specifically micro-AUC) and 67.8%/76.7% accuracy in anomaly heatmaps (RBDC/TBDC) on the UBnormal dataset, and produces textual descriptions subjectively verified as preferable on the ShanghaiTech-based dataset (BLEU-4: 62.67 for targets, 88.84 for trajectories; Yes/No accuracy: 97.67%) and on the UBnormal dataset (BLEU-4: 50.32 for targets, 78.10 for trajectories; Yes/No accuracy: 89.73%).
- oai:arXiv.org:2511.00524v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Jihao Gu, Kun Li, He Wang, Kaan Ak\c{s}it
-
-
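The headline 94.8% micro-AUC is the standard threshold-free ranking metric: the probability that a randomly chosen anomalous frame scores above a randomly chosen normal one. A compact rank-based computation (toy scores, not the paper's data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank statistic:
    fraction of positive/negative pairs ranked correctly, ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

frame_scores = [0.9, 0.8, 0.4, 0.35, 0.1]  # per-frame anomaly scores
frame_labels = [1,   1,   0,   1,    0]    # 1 = anomalous frame
print(auc(frame_scores, frame_labels))     # 5 of 6 pairs ordered correctly
```

Micro-AUC pools all frames across videos into one such pair count instead of averaging per-video AUCs.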
- On Improvisation and Open-Endedness: Insights for Experiential AI
- https://arxiv.org/abs/2511.00529
- arXiv:2511.00529v2 Announce Type: replace
-Abstract: Improvisation-the art of spontaneous creation that unfolds moment-to-moment without a scripted outcome-requires practitioners to continuously sense, adapt, and create anew. It is a fundamental mode of human creativity spanning music, dance, and everyday life. The open-ended nature of improvisation produces a stream of novel, unrepeatable moments-an aspect highly valued in artistic creativity. In parallel, open-endedness (OE)-a system's capacity for unbounded novelty and endless "interestingness"-is exemplified in natural or cultural evolution and has been considered "the last grand challenge" in artificial life (ALife). The rise of generative AI now raises the question in computational creativity (CC) research: What makes a "good" improvisation for AI? Can AI learn to improvise in a genuinely open-ended way? In this work-in-progress paper, we report insights from in-depth interviews with 6 experts in improvisation across dance, music, and contact improvisation. We draw systemic connections between human improvisational arts and the design of future experiential AI agents that could improvise alone or alongside humans-or even with other AI agents-embodying qualities of improvisation drawn from practice: active listening (umwelt and awareness), being in the time (mindfulness and ephemerality), embracing the unknown (source of randomness and serendipity), non-judgmental flow (acceptance and dynamical stability), balancing structure and surprise (unpredictable criticality at the edge of chaos), imaginative metaphor (synaesthesia and planning), empathy, trust, boundary, and care (mutual theory of mind), and playfulness and intrinsic motivation (maintaining interestingness).
- oai:arXiv.org:2511.00529v2
- cs.HC
- cs.AI
- cs.NE
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Botao 'Amber' Hu
-
-
- Erasing 'Ugly' from the Internet: Propagation of the Beauty Myth in Text-Image Models
- https://arxiv.org/abs/2511.00749
- arXiv:2511.00749v2 Announce Type: replace
-Abstract: Social media has exacerbated the promotion of Western beauty norms, leading to negative self-image, particularly in women and girls, and causing harm such as body dysmorphia. Increasingly, content on the internet has been artificially generated, leading to concerns that these norms are being exaggerated. The aim of this work is to study how generative AI models may encode 'beauty' and erase 'ugliness', and discuss the implications of this for society. To investigate these aims, we create two image generation pipelines: a text-to-image model and a text-to-language-model-to-image model. We develop a structured beauty taxonomy which we use to prompt three language models (LMs) and two text-to-image models to cumulatively generate 5984 images using our two pipelines. We then recruit women and non-binary social media users to evaluate 1200 of the images through a Likert-scale within-subjects study. Participants show high agreement in their ratings. Our results show that 86.5% of generated images depicted people with lighter skin tones, 22% contained explicit content despite Safe for Work (SFW) training, and 74% were rated as being in a younger age demographic. In particular, the images of non-binary individuals were rated as both younger and more hypersexualised, indicating troubling intersectional effects. Notably, prompts encoded with 'negative' or 'ugly' beauty traits (such as "a wide nose") consistently produced higher Not SFW (NSFW) ratings regardless of gender. This work sheds light on the pervasive demographic biases related to beauty standards present in generative AI models -- biases that are actively perpetuated by model developers, such as via negative prompting. We conclude by discussing the implications of this on society, which include pollution of the data streams and active erasure of features that do not fall inside the stereotype of what is considered beautiful by developers.
- oai:arXiv.org:2511.00749v2
- cs.CV
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Tanvi Dinkar, Aiqi Jiang, Gavin Abercrombie, Ioannis Konstas
-
-
- Quantifying truth and authenticity in AI-assisted candidate evaluation: A multi-domain pilot analysis
- https://arxiv.org/abs/2511.00774
- arXiv:2511.00774v2 Announce Type: replace
-Abstract: This paper presents a retrospective analysis of anonymized candidate-evaluation data collected during pilot hiring campaigns conducted through AlteraSF, an AI-native resume-verification platform. The system evaluates resume claims, generates context-sensitive verification questions, and measures performance along quantitative axes of factual validity and job fit, complemented by qualitative integrity detection. Across six job families and 1,700 applications, the platform achieved a 90-95% reduction in screening time and detected measurable linguistic patterns consistent with AI-assisted or copied responses. The analysis demonstrates that candidate truthfulness can be assessed not only through factual accuracy but also through patterns of linguistic authenticity. The results suggest that a multi-dimensional verification framework can improve both hiring efficiency and trust in AI-mediated evaluation systems.
- oai:arXiv.org:2511.00774v2
- cs.HC
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Eldred Lee, Nicholas Worley, Koshu Takatsuji
-
-
- Med-Banana-50K: A Cross-modality Large-Scale Dataset for Text-guided Medical Image Editing
- https://arxiv.org/abs/2511.00801
- arXiv:2511.00801v2 Announce Type: replace
-Abstract: Recent advances in multimodal large language models have enabled remarkable medical image editing capabilities. However, the research community's progress remains constrained by the absence of large-scale, high-quality, and openly accessible datasets built specifically for medical image editing with strict anatomical and clinical constraints. We introduce Med-Banana-50K, a comprehensive 50K-image dataset for instruction-based medical image editing spanning three modalities (chest X-ray, brain MRI, fundus photography) and 23 disease types. Our dataset is constructed by leveraging Gemini-2.5-Flash-Image to generate bidirectional edits (lesion addition and removal) from real medical images. What distinguishes Med-Banana-50K from general-domain editing datasets is our systematic approach to medical quality control: we employ LLM-as-Judge with a medically grounded rubric (instruction compliance, structural plausibility, realism, and fidelity preservation) and history-aware iterative refinement up to five rounds. Beyond single-turn editing, Med-Banana-50K includes 37K failed attempts with full conversation logs for preference learning and alignment research. By providing this large-scale, medically validated, and fully documented resource, Med-Banana-50K establishes a foundation for training and evaluating the next generation of medical image editing models. Our dataset and code are publicly available at [https://github.com/richardChenzhihui/med-banana-50k].
- oai:arXiv.org:2511.00801v2
- cs.CV
- cs.MM
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Zhihui Chen, Mengling Feng
-
-
- EraseFlow: Learning Concept Erasure Policies via GFlowNet-Driven Alignment
- https://arxiv.org/abs/2511.00804
- arXiv:2511.00804v2 Announce Type: replace
-Abstract: Erasing harmful or proprietary concepts from powerful text-to-image generators is an emerging safety requirement, yet current "concept erasure" techniques either collapse image quality, rely on brittle adversarial losses, or demand prohibitive retraining cycles. We trace these limitations to a myopic view of the denoising trajectories that govern diffusion-based generation. We introduce EraseFlow, the first framework that casts concept unlearning as exploration in the space of denoising paths and optimizes it with GFlowNets equipped with the trajectory balance objective. By sampling entire trajectories rather than single end states, EraseFlow learns a stochastic policy that steers generation away from target concepts while preserving the model's prior. EraseFlow eliminates the need for carefully crafted reward models and, by doing so, generalizes effectively to unseen concepts and avoids hackable rewards while improving performance. Extensive empirical results demonstrate that EraseFlow outperforms existing baselines and achieves an optimal trade-off between performance and prior preservation.
- oai:arXiv.org:2511.00804v2
- cs.LG
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Abhiram Kusumba, Maitreya Patel, Kyle Min, Changhoon Kim, Chitta Baral, Yezhou Yang
-
-
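The trajectory balance objective mentioned above scores a whole sampled trajectory rather than a single end state: writing $Z$ for the learned partition function, $P_F$/$P_B$ for forward/backward step policies and $R(x)$ for the terminal reward, it penalizes $\big(\log Z + \sum_t \log P_F - \log R(x) - \sum_t \log P_B\big)^2$. A minimal numeric sketch of that loss (the concrete numbers are illustrative, not from the paper):

```python
import math

def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, reward):
    """GFlowNet trajectory balance loss for one trajectory:
    (log Z + sum log P_F - log R(x) - sum log P_B)^2."""
    delta = log_Z + sum(log_pf_steps) - math.log(reward) - sum(log_pb_steps)
    return delta ** 2

# A perfectly balanced trajectory gives ~zero loss:
# Z = 4, one forward step of prob 0.25, deterministic backward, reward 1.
print(trajectory_balance_loss(math.log(4), [math.log(0.25)], [0.0], 1.0))
```

Driving this to zero over sampled denoising trajectories makes the policy's probability of reaching an image proportional to its reward, which is how EraseFlow steers paths away from the erased concept while keeping the prior.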
- FREESH: Fair, Resource- and Energy-Efficient Scheduling for LLM Serving on Heterogeneous GPUs
- https://arxiv.org/abs/2511.00807
- arXiv:2511.00807v2 Announce Type: replace
-Abstract: The ever-increasing computation and energy demands of LLMs and AI agents call for holistic and efficient optimization of LLM serving systems. In practice, heterogeneous GPU clusters can be deployed in a geographically distributed manner, and LLM workloads are similarly diverse in both query traffic and serving patterns. LLM queries running on advanced GPUs during a high-emission hour at one location can incur significantly higher carbon footprints than the same queries running on mid-level GPUs at a low-emission time and location. By observing LLM serving requirements and leveraging spatiotemporal computation flexibility, we consider the joint routing and scheduling problem, and propose FREESH to cooperatively run a group of data centers while minimizing user-specified carbon or energy objectives. FREESH identifies the optimal configurations of balanced load serving by matching each GPU instance's power-throughput characteristics with predictable LLM query lengths and workloads. To meet both latency and fairness requirements, FREESH combines optimized parallelism and query-routing schedules with dynamic GPU frequency scaling for power saving and a Least-Laxity-First (LLF) serving strategy for query scheduling. During one hour of serving production workloads, FREESH reduces energy by 28.6% and emissions by 45.45% together with improvements in SLO attainment and fairness.
- oai:arXiv.org:2511.00807v2
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Xuan He, Zequan Fang, Jinzhao Lian, Danny H. K. Tsang, Baosen Zhang, Yize Chen
-
-
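The Least-Laxity-First (LLF) strategy named above picks, at each dispatch, the query with the least slack: its deadline minus the current time minus its estimated remaining service time. A minimal sketch, with a hypothetical queue representation (the `remaining_s` estimate would come from FREESH's query-length prediction):

```python
def pick_least_laxity(queries, now):
    """Least-Laxity-First: serve the query whose slack is smallest,
    laxity = deadline - now - estimated remaining service time."""
    return min(queries, key=lambda q: q["deadline"] - now - q["remaining_s"])

queue = [
    {"id": "a", "deadline": 10.0, "remaining_s": 2.0},  # laxity 8.0
    {"id": "b", "deadline": 5.0,  "remaining_s": 4.0},  # laxity 1.0 -> most urgent
    {"id": "c", "deadline": 6.0,  "remaining_s": 1.0},  # laxity 5.0
]
print(pick_least_laxity(queue, now=0.0)["id"])
```

Unlike earliest-deadline-first, LLF accounts for how much work remains, so a long query with a nearer effective cliff (like "b") is served before a short one with a tighter raw deadline.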
- Beyond Single-Tokenomics: How Farcaster's Pluralistic Incentives Reshape Social Networking
- https://arxiv.org/abs/2511.00827
- arXiv:2511.00827v2 Announce Type: replace
-Abstract: This paper presents the first empirical analysis of how diverse token-based reward mechanisms impact platform dynamics and user behaviors. For this, we gather a unique, large-scale dataset from Farcaster. This blockchain-based, decentralized social network incorporates multiple incentive mechanisms spanning platform-native rewards, third-party token programs, and peer-to-peer tipping. Our dataset captures token transactions and social interactions from 574,829 wallet-linked users, representing 64.25% of the platform's user base. Our socioeconomic analyses reveal how different tokenomics design shape varying participation rates (7.6%--70%) and wealth concentration patterns (Gini 0.72--0.94), whereas inter-community tipping is 1.3--2x more frequent among non-following pairs, thereby mitigating echo chambers. Our causal analyses further uncover several critical trade-offs: (1) while most token rewards boost content creation, they often fail to enhance -- sometimes undermining -- content quality; (2) token rewards increase follower acquisition but show neutral or negative effects on outbound following, suggesting potential asymmetric network growth; (3) repeated algorithmic rewards demonstrate strong cumulative effects that may encourage strategic optimization. Our findings advance understanding of cryptocurrency integration in social platforms and highlight challenges in aligning economic incentives with authentic social value.
- oai:arXiv.org:2511.00827v2
- cs.SI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1145/3771565
- Wen Yang, Qiming Ye, Onur Ascigil, Saidu Sokoto, Leonhard Balduf, Micha{\l} Kr\'ol, Gareth Tyson
-
-
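The Gini coefficients quoted above (0.72 to 0.94) measure wealth concentration on a 0-to-1 scale. For a token-balance vector they can be computed with the standard sorted-rank formula:

```python
def gini(values):
    """Gini coefficient of a distribution: 0 = perfect equality,
    approaching 1 as holdings concentrate in one account.
    Uses G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n over sorted x."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))      # 0.0  -- perfectly equal
print(gini([0, 0, 0, 100]))    # 0.75 -- one holder owns everything (n = 4)
```

With one holder owning everything among n accounts the coefficient is (n-1)/n, so values above 0.9 in the study indicate concentration across a very large user base.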
- Optimal Allocations under Strongly Pigou-Dalton Criteria: Hidden Layer Structure & Efficient Combinatorial Approach
- https://arxiv.org/abs/2511.00835
- arXiv:2511.00835v2 Announce Type: replace
-Abstract: We investigate optimal social welfare allocations of $m$ items to $n$ agents with binary additive or submodular valuations. For binary additive valuations, we prove that the set of optimal allocations coincides with the set of so-called \emph{stable allocations}, as long as the employed criterion for evaluating social welfare is strongly Pigou-Dalton (SPD) and symmetric. Many common criteria are SPD and symmetric, such as Nash social welfare, leximax, leximin, Gini index, entropy, and envy sum. We also design efficient algorithms for finding a stable allocation, including an $O(m^2n)$ time algorithm for the case of indivisible items, and an $O(m^2n^5)$ time one for the case of divisible items. The former is faster than existing algorithms or admits a simpler analysis. The latter is the first combinatorial algorithm for that problem. It utilizes a hidden layer partition of items and agents admitted by all stable allocations, and cleverly reduces the case of divisible items to the case of indivisible items.
- In addition, we show that the profiles of different optimal allocations have a small Chebyshev distance, which is 0 for the case of divisible items under binary additive valuations, and is at most 1 for the case of indivisible items under binary submodular valuations.
- oai:arXiv.org:2511.00835v2
- cs.GT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Taikun Zhu, Kai Jin, Ruixi Luo, Song Cao
-
-
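A strongly Pigou-Dalton (SPD) criterion strictly prefers any transfer from a better-off agent to a worse-off one that does not reverse their order. Nash social welfare, the product of utilities, is one of the listed SPD criteria for positive utilities: transferring $d$ with $b - a > d > 0$ turns $ab$ into $(a+d)(b-d) = ab + d(b - a - d) > ab$. A tiny numeric illustration:

```python
from math import prod

def nash_welfare(utilities):
    """Nash social welfare: the product of agents' utilities."""
    return prod(utilities)

before = [2, 6]  # utility profile
after  = [3, 5]  # Pigou-Dalton transfer of 1 from the richer to the poorer agent
print(nash_welfare(before), nash_welfare(after))  # 12 15
```

The paper's characterization says that for binary additive valuations, the allocations maximizing any such symmetric SPD criterion are exactly the stable allocations, so the same solution set serves Nash welfare, leximin, Gini, and the rest.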
- FlowLog: Efficient and Extensible Datalog via Incrementality
- https://arxiv.org/abs/2511.00865
- arXiv:2511.00865v2 Announce Type: replace
-Abstract: Datalog-based languages are regaining popularity as a powerful abstraction for expressing recursive computations in domains such as program analysis and graph processing. However, existing systems often face a trade-off between efficiency and extensibility. Engines like Souffle achieve high efficiency through domain-specific designs, but lack general-purpose flexibility. Others, like RecStep, offer modularity by layering Datalog on traditional databases, but struggle to integrate Datalog-specific optimizations.
- This paper bridges this gap by presenting FlowLog, a new Datalog engine that uses an explicit per-rule relational IR to cleanly separate recursive control (e.g., semi-naive execution) from each rule's logical plan. This boundary lets us retain fine-grained, Datalog-aware optimizations at the logical layer while reusing off-the-shelf database primitives at execution time. At the logical level (i.e. the IR), we apply proven SQL optimizations, such as logic fusion and subplan reuse. To address high volatility in recursive workloads, we adopt a robustness-first approach that pairs a structural optimizer (avoiding worst-case joins) with sideways information passing (early filtering). Built atop Differential Dataflow--a mature framework for streaming analytics--FlowLog supports both batch and incremental Datalog and adds novel recursion-aware optimizations called Boolean (or algebraic) specialization. Our evaluation shows that FlowLog outperforms state-of-the-art Datalog engines and modern databases across a broad range of recursive workloads, achieving superior scalability while preserving a simple and extensible architecture.
- oai:arXiv.org:2511.00865v2
- cs.DB
- cs.PL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Hangdong Zhao, Zhenghong Yu, Srinag Rao, Simon Frisk, Zhiwei Fan, Paraschos Koutris
-
-
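The semi-naive execution that FlowLog's IR separates out is the standard trick for recursive Datalog: each round joins only the delta (facts derived in the previous round) against the base relations, instead of re-deriving everything from scratch. A self-contained sketch for transitive closure:

```python
def transitive_closure_seminaive(edges):
    """Semi-naive evaluation of:
        path(x, y) :- edge(x, y).
        path(x, z) :- path(x, y), edge(y, z).
    Each round joins only the *delta* against edge."""
    edge = set(edges)
    path = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edge if y == y2}
        delta = new - path      # keep only genuinely new facts
        path |= delta
    return path

print(sorted(transitive_closure_seminaive([(1, 2), (2, 3), (3, 4)])))
```

A production engine would replace the nested comprehension with indexed joins, which is exactly the layer where FlowLog plugs in off-the-shelf database primitives.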
- HAFixAgent: History-Aware Automated Program Repair Agent
- https://arxiv.org/abs/2511.01047
- arXiv:2511.01047v2 Announce Type: replace
-Abstract: Automated program repair (APR) has recently shifted toward large language models and agent-based systems, yet most systems rely on local snapshot context, overlooking repository history. Prior work shows that repository history helps repair single-line bugs, since the last commit touching the buggy line is often the bug-introducing one. In this paper, we investigate whether repository history can also improve agentic APR systems at scale, especially for complex multi-hunk bugs. We present HAFixAgent, a History-Aware Bug-Fixing Agent that injects blame-derived repository heuristics into its repair loop. A preliminary study of all 854 real-world bugs from Defects4J motivates our design, showing that bug-relevant history is both widely available and highly concentrated. Empirical comparison of HAFixAgent with two state-of-the-art baselines shows: (1) Effectiveness: HAFixAgent significantly improves over the agent-based baseline (by 212.3%) and the multi-hunk baseline (by 29.9%). (2) Efficiency: history does not significantly increase agent steps and keeps token costs comparable, with notably lower median costs for complex multi-file-multi-hunk bugs. (3) Practicality: combining different historical heuristics repairs more bugs, offering a clear cost-benefit trade-off. HAFixAgent offers a practical recipe for history-aware agentic APR: ground the agent in version control history, prioritize diff-based historical context, and integrate complementary heuristics when needed.
- oai:arXiv.org:2511.01047v2
- cs.SE
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yu Shi, Hao Li, Bram Adams, Ahmed E. Hassan
-
-
- HPLT 3.0: Very Large-Scale Multilingual Resources for LLM and MT. Mono- and Bi-lingual Data, Multilingual Evaluation, and Pre-Trained Models
- https://arxiv.org/abs/2511.01066
- arXiv:2511.01066v2 Announce Type: replace
-Abstract: We present an ongoing initiative to provide open, very large, high-quality, and richly annotated textual datasets for almost 200 languages. At 30 trillion tokens, this is likely the largest generally available multilingual collection of LLM pre-training data. These datasets are derived from web crawls from different sources and accompanied by a complete, open-source pipeline for document selection from web archives, text extraction from HTML, language identification for noisy texts, exact and near-deduplication, annotation with, among others, register labels, text quality estimates, and personally identifiable information; and final selection and filtering. We report on data quality probes through contrastive and analytical statistics, through manual inspection of samples for 24 languages, and through end-to-end evaluation of various language model architectures trained on this data. For multilingual LLM evaluation, we provide a comprehensive collection of benchmarks for nine European languages, with special emphasis on natively created tasks, mechanisms to mitigate prompt sensitivity, and refined normalization and aggregation of scores. Additionally, we train and evaluate a family of 57 monolingual encoder-decoder models, as well as a handful of monolingual GPT-like reference models. Besides the monolingual data and models, we also present a very large collection of parallel texts automatically mined from this data, together with a novel parallel corpus synthesized via machine translation.
- oai:arXiv.org:2511.01066v2
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Stephan Oepen, Nikolay Arefev, Mikko Aulamo, Marta Ba\~n\'on, Maja Buljan, Laurie Burchell, Lucas Charpentier, Pinzhen Chen, Mariya Fedorova, Ona de Gibert, Barry Haddow, Jan Haji\v{c}, Jind\v{r}ich Helcl, Andrey Kutuzov, Veronika Laippala, Zihao Li, Risto Luukkonen, Bhavitvya Malik, Vladislav Mikhailov, Amanda Myntti, Dayy\'an O'Brien, Lucie Pol\'akov\'a, Sampo Pyysalo, Gema Ram\'irez S\'anchez, Janine Siewert, Pavel Stepachev, J\"org Tiedemann, Teemu Vahtola, Du\v{s}an Vari\v{s}, Fedor Vitiugin, Tea Vojt\v{e}chov\'a, Jaume Zaragoza
-
-
- AI for Requirements Engineering: Industry adoption and Practitioner perspectives
- https://arxiv.org/abs/2511.01324
- arXiv:2511.01324v3 Announce Type: replace
-Abstract: The integration of AI for Requirements Engineering (RE) presents significant benefits but also poses real challenges. Although RE is fundamental to software engineering, limited research has examined AI adoption in RE. We surveyed 55 software practitioners to map AI usage across four RE phases: Elicitation, Analysis, Specification, and Validation, and four approaches for decision making: human-only decisions, AI validation, Human AI Collaboration (HAIC), and full AI automation. Participants also shared their perceptions, challenges, and opportunities when applying AI for RE tasks. Our data show that 58.2% of respondents already use AI in RE, and 69.1% view its impact as positive or very positive. HAIC dominates practice, accounting for 54.4% of all RE techniques, while full AI automation remains minimal at 5.4%. Passive AI validation (4.4 to 6.2%) lags even further behind, indicating that practitioners value AI's active support over passive oversight. These findings suggest that AI is most effective when positioned as a collaborative partner rather than a replacement for human expertise. It also highlights the need for RE-specific HAIC frameworks along with robust and responsible AI governance as AI adoption in RE grows.
- oai:arXiv.org:2511.01324v3
- cs.SE
- cs.AI
- cs.HC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Lekshmi Murali Rani, Richard Berntsson Svensson, Robert Feldt
-
-
- On the Computability of Finding Capacity-Achieving Codes
- https://arxiv.org/abs/2511.01414
- arXiv:2511.01414v2 Announce Type: replace
-Abstract: This work studies the problem of constructing capacity-achieving codes from an algorithmic perspective. Specifically, we prove that there exists a Turing machine which, given a discrete memoryless channel $p_{Y|X}$, a target rate $R$ less than the channel capacity $C(p_{Y|X})$, and an error tolerance $\epsilon > 0$, outputs a block code $\mathcal{C}$ achieving a rate at least $R$ and a maximum block error probability below $\epsilon$. The machine operates in the general case where all transition probabilities of $p_{Y|X}$ are computable real numbers, and the parameters $R$ and $\epsilon$ are rational. The proof builds on Shannon's channel coding theorem and relies on an exhaustive search approach that systematically enumerates all codes of increasing block length until a valid code is found. This construction is formalized using the theory of recursive functions, yielding a $\mu$-recursive function $\mathrm{FindCode} : \mathbb{N}^3 \rightharpoonup \mathbb{N}$ that takes as input appropriate encodings of $p_{Y|X}$, $R$, and $\epsilon$, and, whenever $R < C(p_{Y|X})$, outputs an encoding of a valid code. By Kleene's normal form theorem, which establishes the computational equivalence between Turing machines and $\mu$-recursive functions, we conclude that the problem is solvable by a Turing machine. This result can also be extended to the case where $\epsilon$ is a computable real number, while we further discuss an analogous generalization of our analysis when $R$ is computable as well. We note that the assumptions that the probabilities of $p_{Y|X}$, as well as $\epsilon$ and $R$, are computable real numbers cannot be further weakened, since computable reals constitute the largest subset of $\mathbb{R}$ representable by algorithmic means.
- oai:arXiv.org:2511.01414v2
- cs.IT
- math.IT
- math.LO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Angelos Gkekas, Nikos A. Mitsiou, Ioannis Souldatos, George K. Karagiannidis
-
-
- Reg-DPO: SFT-Regularized Direct Preference Optimization with GT-Pair for Improving Video Generation
- https://arxiv.org/abs/2511.01450
- arXiv:2511.01450v2 Announce Type: replace
-Abstract: Recent studies have identified Direct Preference Optimization (DPO) as an efficient and reward-free approach to improving video generation quality. However, existing methods largely follow image-domain paradigms and are mainly developed on small-scale models (approximately 2B parameters), limiting their ability to address the unique challenges of video tasks, such as costly data construction, unstable training, and heavy memory consumption. To overcome these limitations, we introduce GT-Pair, which automatically builds high-quality preference pairs by using real videos as positives and model-generated videos as negatives, eliminating the need for any external annotation. We further present Reg-DPO, which incorporates the SFT loss as a regularization term into the DPO loss to enhance training stability and generation fidelity. Additionally, by combining the FSDP framework with multiple memory optimization techniques, our approach achieves nearly three times higher training capacity than using FSDP alone. Extensive experiments on both I2V and T2V tasks across multiple datasets demonstrate that our method consistently outperforms existing approaches, delivering superior video generation quality.
- oai:arXiv.org:2511.01450v2
- cs.CV
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jie Du, Xinyu Gong, Qingshan Tan, Wen Li, Yangming Cheng, Weitao Wang, Chenlu Zhan, Suhui Wu, Hao Zhang, Jun Zhang
-
-
- L2T-Tune: LLM-Guided Hybrid Database Tuning with LHS and TD3
- https://arxiv.org/abs/2511.01602
- arXiv:2511.01602v2 Announce Type: replace
-Abstract: Configuration tuning is critical for database performance. Although recent advancements in database tuning have shown promising results in throughput and latency improvement, challenges remain. First, the vast knob space makes direct optimization unstable and slow to converge. Second, reinforcement learning pipelines often lack effective warm-start guidance and require long offline training. Third, transferability is limited: when hardware or workloads change, existing models typically require substantial retraining to recover performance.
- To address these limitations, we propose L2T-Tune, a new LLM-guided hybrid database tuning framework that features a three-stage pipeline: Stage one performs a warm start that simultaneously generates uniform samples across the knob space and logs them into a shared pool. Stage two leverages a large language model to mine and prioritize tuning hints from manuals and community documents for rapid convergence. Stage three uses the warm-start sample pool to reduce the dimensionality of knobs and state features, then fine-tunes the configuration with the Twin Delayed Deep Deterministic Policy Gradient algorithm.
- We conduct experiments comparing L2T-Tune with state-of-the-art models. Compared with the best-performing alternative, our approach improves performance by an average of 37.1% across all workloads, and by up to 73% on TPC-C. Compared with models trained with reinforcement learning, it achieves rapid convergence in the offline tuning stage on a single server. Moreover, during the online tuning stage, it takes only 30 steps to achieve the best results.
- oai:arXiv.org:2511.01602v2
- cs.DB
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xinyue Yang, Chen Zheng, Yaoyang Hou, Renhao Zhang, Yinyan Zhang, Yanjun Wu, Heng Zhang
-
-
- Proximal Regret and Proximal Correlated Equilibria: A New Tractable Solution Concept for Online Learning and Games
- https://arxiv.org/abs/2511.01852
- arXiv:2511.01852v2 Announce Type: replace
-Abstract: Learning and computation of equilibria are central problems in game theory, theory of computation, and artificial intelligence. In this work, we introduce proximal regret, a new notion of regret based on proximal operators that lies strictly between external and swap regret. When every player employs a no-proximal-regret algorithm in a general convex game, the empirical distribution of play converges to proximal correlated equilibria (PCE), a refinement of coarse correlated equilibria. Our framework unifies several emerging notions in online learning and game theory, such as gradient equilibrium and semicoarse correlated equilibrium, and introduces new ones. Our main result shows that the classic Online Gradient Descent (GD) algorithm achieves an optimal $O(\sqrt{T})$ bound on proximal regret, revealing that GD, without modification, minimizes a stronger regret notion than external regret. This provides a new explanation for the empirically superior performance of gradient descent in online learning and games. We further extend our analysis to Mirror Descent in the Bregman setting and to Optimistic Gradient Descent, which yields faster convergence in smooth convex games.
- oai:arXiv.org:2511.01852v2
- cs.GT
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yang Cai, Constantinos Daskalakis, Haipeng Luo, Chen-Yu Wei, Weiqiang Zheng
-
-
- CudaForge: An Agent Framework with Hardware Feedback for CUDA Kernel Optimization
- https://arxiv.org/abs/2511.01884
- arXiv:2511.01884v2 Announce Type: replace
-Abstract: Developing efficient CUDA kernels is increasingly critical for AI applications such as large-scale LLM training. However, manual kernel design is both costly and time-consuming, motivating automatic approaches that leverage LLMs for code generation. Existing methods for automatic kernel generation, however, often produce low-efficiency kernels, incur high computational overhead, and fail to generalize across settings. In this work, we propose CudaForge, a training-free multi-agent workflow for CUDA kernel generation and optimization. Our workflow is inspired by the iterative process of human experts, which includes steps such as developing initial kernels, testing correctness, analyzing hardware feedback, and iterating on improvements. More specifically, CudaForge employs two LLM agents, a Coder and a Judge, that iteratively generate, correct, and optimize CUDA kernels while integrating hardware feedback such as Nsight Compute (NCU) metrics. In extensive evaluations, we show that CudaForge, by leveraging base models like OpenAI-o3, achieves 97.6\% correctness of generated kernels and an average 1.68$\times$ speedup over PyTorch baselines, substantially surpassing state-of-the-art models including OpenAI-o3 and Kevin on KernelBench. Beyond accuracy and speed, CudaForge demonstrates strong generalization across GPUs (A100, RTX 6000, 4090, 3090) and base models (OpenAI-o3, GPT-5, gpt-oss-120B, Claude-Sonnet-4, QwQ-32B), while maintaining high efficiency. In particular, generating an optimized kernel takes about 26.5 minutes on one RTX 6000 and incurs about \$0.3 in API cost, which is significantly cheaper than existing agentic work that costs 6 H100 hours and \$5 in API cost per kernel. Our results highlight that multi-agent, training-free workflows can enable cost-effective, generalizable, and high-performance CUDA kernel optimization. Code available at https://github.com/OptimAI-Lab/CudaForge
- oai:arXiv.org:2511.01884v2
- cs.LG
- cs.AI
- cs.CL
- cs.DC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zijian Zhang, Rong Wang, Shiyang Li, Yuebo Luo, Mingyi Hong, Caiwen Ding
-
-
- Mirror-Neuron Patterns in AI Alignment
- https://arxiv.org/abs/2511.01885
- arXiv:2511.01885v2 Announce Type: replace
-Abstract: As artificial intelligence (AI) advances toward superhuman capabilities, aligning these systems with human values becomes increasingly critical. Current alignment strategies rely largely on externally specified constraints that may prove insufficient against future super-intelligent AI capable of circumventing top-down controls.
- This research investigates whether artificial neural networks (ANNs) can develop patterns analogous to biological mirror neurons, cells that activate both when performing and when observing actions, and how such patterns might contribute to intrinsic alignment in AI. Mirror neurons play a crucial role in empathy, imitation, and social cognition in humans. The study therefore asks: (1) Can simple ANNs develop mirror-neuron patterns? and (2) How might these patterns contribute to ethical and cooperative decision-making in AI systems?
- Using a novel Frog and Toad game framework designed to promote cooperative behaviors, we identify conditions under which mirror-neuron patterns emerge, evaluate their influence on action circuits, introduce the Checkpoint Mirror Neuron Index (CMNI) to quantify activation strength and consistency, and propose a theoretical framework for further study.
- Our findings indicate that appropriately scaled model capacities and self/other coupling foster shared neural representations in ANNs similar to biological mirror neurons. These empathy-like circuits support cooperative behavior and suggest that intrinsic motivations modeled through mirror-neuron dynamics could complement existing alignment techniques by embedding empathy-like mechanisms directly within AI architectures.
- oai:arXiv.org:2511.01885v2
- cs.AI
- cs.LG
- q-bio.NC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Robyn Wyrick
-
-
- Security Audit of the Intel ICE Driver for the E810 Network Interface Card
- https://arxiv.org/abs/2511.01910
- arXiv:2511.01910v2 Announce Type: replace
-Abstract: The security of enterprise-grade networking hardware and software is critical to ensuring the integrity, availability, and confidentiality of data in modern cloud and data center environments. Network interface controllers (NICs) play a pivotal role in high-performance computing and virtualization, but their privileged access to system resources makes them a prime target for security vulnerabilities. This study presents a security analysis of the Intel ICE driver using the E810 Ethernet Controller, employing static analysis, fuzz testing, and timing-based side-channel evaluation to assess robustness against exploitation. The objective is to evaluate the driver's resilience to malformed inputs, identify implementation weaknesses, and determine whether timing discrepancies can be exploited for unauthorized inference of system states. Static code analysis reveals that insufficient bounds checking and unsafe string operations may introduce security flaws. Fuzz testing targets the Admin Queue, debugfs interface, and virtual function (VF) management. Interface-aware fuzzing and command mutation confirm strong input validation that prevents memory corruption and privilege escalation under normal conditions. However, using principles from KernelSnitch, the driver is found to be susceptible to timing-based side-channel attacks. Execution time discrepancies in hash table lookups allow an unprivileged attacker to infer VF occupancy states, enabling potential network mapping in multi-tenant environments. Further analysis shows inefficiencies in Read-Copy-Update (RCU) synchronization, where missing synchronization leads to stale data persistence, memory leaks, and out-of-memory conditions. Kernel instrumentation confirms that occupied VF lookups complete faster than unoccupied queries, exposing timing-based information leakage.
- oai:arXiv.org:2511.01910v2
- cs.CR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Oisin O Sullivan
-
-
- Measuring the Intrinsic Dimension of Earth Representations
- https://arxiv.org/abs/2511.02101
- arXiv:2511.02101v2 Announce Type: replace
-Abstract: Within the context of representation learning for Earth observation, geographic Implicit Neural Representations (INRs) embed low-dimensional location inputs (longitude, latitude) into high-dimensional embeddings, through models trained on geo-referenced satellite, image or text data. Despite the common aim of geographic INRs to distill Earth's data into compact, learning-friendly representations, we lack an understanding of how much information is contained in these Earth representations, and where that information is concentrated. The intrinsic dimension of a dataset measures the number of degrees of freedom required to capture its local variability, regardless of the ambient high-dimensional space in which it is embedded. This work provides the first study of the intrinsic dimensionality of geographic INRs. Analyzing INRs with ambient dimension between 256 and 512, we find that their intrinsic dimensions fall roughly between 2 and 10 and are sensitive to changing spatial resolution and input modalities during INR pre-training. Furthermore, we show that the intrinsic dimension of a geographic INR correlates with downstream task performance and can capture spatial artifacts, facilitating model evaluation and diagnostics. More broadly, our work offers an architecture-agnostic, label-free metric of information content that can enable unsupervised evaluation, model selection, and pre-training design across INRs.
- oai:arXiv.org:2511.02101v2
- cs.LG
- cs.IT
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Arjun Rao, Marc Ru{\ss}wurm, Konstantin Klemmer, Esther Rolf
-
-
- TabDSR: Decompose, Sanitize, and Reason for Complex Numerical Reasoning in Tabular Data
- https://arxiv.org/abs/2511.02219
- arXiv:2511.02219v2 Announce Type: replace
-Abstract: Complex reasoning over tabular data is crucial in real-world data analysis, yet large language models (LLMs) often underperform due to complex queries, noisy data, and limited numerical capabilities. To address these issues, we propose TabDSR, a framework consisting of: (1) a query decomposer that breaks down complex questions, (2) a table sanitizer that cleans and filters noisy tables, and (3) a program-of-thoughts (PoT)-based reasoner that generates executable code to derive the final answer from the sanitized table. To ensure unbiased evaluation and mitigate data leakage, we introduce a new dataset, CalTab151, specifically designed for complex numerical reasoning over tables. Experimental results demonstrate that TabDSR consistently outperforms existing methods, achieving state-of-the-art (SOTA) performance with 8.79%, 6.08%, and 19.87% accuracy improvements on TAT-QA, TableBench, and CalTab151, respectively. Moreover, our framework integrates seamlessly with mainstream LLMs, providing a robust solution for complex tabular numerical reasoning. These findings highlight the effectiveness of our framework in enhancing LLM performance for complex tabular numerical reasoning. Data and code are available upon request.
- oai:arXiv.org:2511.02219v2
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- EMNLP 2025
- Changjiang Jiang, Fengchang Yu, Haihua Chen, Wei Lu, Jin Zeng
-
-
- Probabilistic Graph Cuts
- https://arxiv.org/abs/2511.02272
- arXiv:2511.02272v2 Announce Type: replace
-Abstract: Probabilistic relaxations of graph cuts offer a differentiable alternative to spectral clustering, enabling end-to-end and online learning without eigendecompositions, yet prior work centered on RatioCut and lacked general guarantees and principled gradients. We present a unified probabilistic framework that covers a wide class of cuts, including Normalized Cut. Our framework provides tight analytic upper bounds on expected discrete cuts via integral representations and Gauss hypergeometric functions with closed-form forward and backward passes. Together, these results deliver a rigorous, numerically stable foundation for scalable, differentiable graph partitioning covering a wide range of clustering and contrastive learning objectives.
- oai:arXiv.org:2511.02272v2
- cs.LG
- cs.DS
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Ayoub Ghriss
-
-
- SWE-Sharp-Bench: A Reproducible Benchmark for C# Software Engineering Tasks
- https://arxiv.org/abs/2511.02352
- arXiv:2511.02352v2 Announce Type: replace
-Abstract: AI coding agents have shown great progress on Python software engineering benchmarks like SWE-Bench, and for other languages like Java and C in benchmarks like Multi-SWE-Bench. However, C# -- a prominent enterprise language ranking #5 in the TIOBE index -- remains absent from such benchmarks. We introduce SWE-Sharp-Bench, a reproducible software engineering benchmark for C# featuring 150 instances from 17 repositories. Evaluating identical model-agent configurations across languages reveals a significant performance gap: while 70% of Python tasks in SWE-Bench Verified are solved, only 40% of our C# tasks are resolved. We open-source SWE-Sharp-Bench and our entire curation pipeline.
- oai:arXiv.org:2511.02352v2
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Sanket Mhatre, Yasharth Bajpai, Sumit Gulwani, Emerson Murphy-Hill, Gustavo Soares
-
-
- NOWS: Neural Operator Warm Starts for Accelerating Iterative Solvers
- https://arxiv.org/abs/2511.02481
- arXiv:2511.02481v2 Announce Type: replace
-Abstract: Partial differential equations (PDEs) underpin quantitative descriptions across the physical sciences and engineering, yet high-fidelity simulation remains a major computational bottleneck for many-query, real-time, and design tasks. Data-driven surrogates can be strikingly fast but are often unreliable when applied outside their training distribution. Here we introduce Neural Operator Warm Starts (NOWS), a hybrid strategy that harnesses learned solution operators to accelerate classical iterative solvers by producing high-quality initial guesses for Krylov methods such as conjugate gradient and GMRES. NOWS leaves existing discretizations and solver infrastructures intact, integrating seamlessly with finite-difference, finite-element, isogeometric-analysis, and finite-volume methods, among others. Across our benchmarks, the learned initialization consistently reduces iteration counts and end-to-end runtime, cutting computational time by up to 90% while preserving the stability and convergence guarantees of the underlying numerical algorithms. By combining the rapid inference of neural operators with the rigor of traditional solvers, NOWS provides a practical and trustworthy approach to accelerate high-fidelity PDE simulations.
- oai:arXiv.org:2511.02481v2
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Mohammad Sadegh Eshaghi, Cosmin Anitescu, Navid Valizadeh, Yizheng Wang, Xiaoying Zhuang, Timon Rabczuk
-
-
- OLATverse: A Large-scale Real-world Object Dataset with Precise Lighting Control
- https://arxiv.org/abs/2511.02483
- arXiv:2511.02483v2 Announce Type: replace
-Abstract: We introduce OLATverse, a large-scale dataset comprising around 9M images of 765 real-world objects, captured from multiple viewpoints under a diverse set of precisely controlled lighting conditions. While recent advances in object-centric inverse rendering, novel view synthesis and relighting have shown promising results, most techniques still heavily rely on the synthetic datasets for training and small-scale real-world datasets for benchmarking, which limits their realism and generalization. To address this gap, OLATverse offers two key advantages over existing datasets: large-scale coverage of real objects and high-fidelity appearance under precisely controlled illuminations. Specifically, OLATverse contains 765 common and uncommon real-world objects, spanning a wide range of material categories. Each object is captured using 35 DSLR cameras and 331 individually controlled light sources, enabling the simulation of diverse illumination conditions. In addition, for each object, we provide well-calibrated camera parameters, accurate object masks, photometric surface normals, and diffuse albedo as auxiliary resources. We also construct an extensive evaluation set, establishing the first comprehensive real-world object-centric benchmark for inverse rendering and normal estimation. We believe that OLATverse represents a pivotal step toward integrating the next generation of inverse rendering and relighting methods with real-world data. The full dataset, along with all post-processing workflows, will be publicly released at https://vcai.mpi-inf.mpg.de/projects/OLATverse/.
- oai:arXiv.org:2511.02483v2
- cs.CV
- cs.GR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xilong Zhou, Jianchun Chen, Pramod Rao, Timo Teufel, Linjie Lyu, Tigran Minasian, Oleksandr Sotnychenko, Xiao-Xiao Long, Marc Habermann, Christian Theobalt
-
-
- ESA: Energy-Based Shot Assembly Optimization for Automatic Video Editing
- https://arxiv.org/abs/2511.02505
- arXiv:2511.02505v2 Announce Type: replace
-Abstract: Shot assembly is a crucial step in film production and video editing, involving the sequencing and arrangement of shots to construct a narrative, convey information, or evoke emotions. Traditionally, this process has been manually executed by experienced editors. While current intelligent video editing technologies can handle some automated video editing tasks, they often fail to capture the creator's unique artistic expression in shot assembly. To address this challenge, we propose an energy-based optimization method for video shot assembly. Specifically, we first perform visual-semantic matching between the script generated by a large language model and a video library to obtain subsets of candidate shots aligned with the script semantics. Next, we segment and label the shots from reference videos, extracting attributes such as shot size, camera motion, and semantics. We then employ energy-based models to learn from these attributes, scoring candidate shot sequences based on their alignment with reference styles. Finally, we achieve shot assembly optimization by combining multiple syntax rules, producing videos that align with the assembly style of the reference videos. Our method not only automates the arrangement and combination of independent shots according to specific logic, narrative requirements, or artistic styles but also learns the assembly style of reference videos, creating a coherent visual sequence or holistic visual expression. With our system, even users with no prior video editing experience can create visually compelling videos. Project page: https://sobeymil.github.io/esa.com
- oai:arXiv.org:2511.02505v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yaosen Chen, Wei Wang, Tianheng Zheng, Xuming Wen, Han Yang, Yanru Zhang
-
-
- The ORCA Benchmark: Evaluating Real-World Calculation Accuracy in Large Language Models
- https://arxiv.org/abs/2511.02589
- arXiv:2511.02589v2 Announce Type: replace
-Abstract: We present the ORCA (Omni Research on Calculation in AI) Benchmark, a novel benchmark that evaluates large language models (LLMs) on multi-domain, real-life quantitative reasoning using verified outputs from Omni's calculator engine. On 500 natural-language tasks across domains such as finance, physics, health, and statistics, five state-of-the-art systems (ChatGPT-5, Gemini~2.5~Flash, Claude~Sonnet~4.5, Grok~4, and DeepSeek~V3.2) achieved only $45\text{--}63\,\%$ accuracy, with errors mainly related to rounding ($35\,\%$) and calculation mistakes ($33\,\%$). Results in specific domains indicate strengths in mathematics and engineering but weaknesses in physics and the natural sciences. Correlation analysis ($r \approx 0.40\text{--}0.65$) shows that the models often fail together but differ in the types of errors they make, highlighting their partial complementarity rather than redundancy. Unlike standard math datasets, ORCA evaluates step-by-step reasoning, numerical precision, and domain generalization across real problems from finance, physics, health, and statistics.
- oai:arXiv.org:2511.02589v2
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Claudia Herambourg, Dawid Siuda, Julia Kopczy\'nska, Joao R. L. Santos, Wojciech Sas, Joanna \'Smieta\'nska-Nowak
-
-
- In Situ Training of Implicit Neural Compressors for Scientific Simulations via Sketch-Based Regularization
- https://arxiv.org/abs/2511.02659
- arXiv:2511.02659v2 Announce Type: replace
-Abstract: Focusing on implicit neural representations, we present a novel in situ training protocol that employs limited memory buffers of full and sketched data samples, where the sketched data are leveraged to prevent catastrophic forgetting. The theoretical motivation for our use of sketching as a regularizer is presented via a simple Johnson-Lindenstrauss-informed result. While our methods may be of wider interest in the field of continual learning, we specifically target in situ neural compression using implicit neural representation-based hypernetworks. We evaluate our method on a variety of complex simulation data in two and three dimensions, over long time horizons, and across unstructured grids and non-Cartesian geometries. On these tasks, we show strong reconstruction performance at high compression rates. Most importantly, we demonstrate that sketching enables the presented in situ scheme to approximately match the performance of the equivalent offline method.
- oai:arXiv.org:2511.02659v2
- cs.LG
- cs.AI
- cs.CE
- cs.NA
- math.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Cooper Simpson, Stephen Becker, Alireza Doostan
-
-
- Discretization and convergence of the ballistic Benamou-Brenier formulation of the porous medium and Burgers equations
- https://arxiv.org/abs/2511.02662
- arXiv:2511.02662v2 Announce Type: replace
-Abstract: We study the discretization, convergence, and numerical implementation of recent reformulations of the quadratic porous medium equation (multidimensional and anisotropic) and Burgers' equation (one-dimensional, with optional viscosity), as forward in time variants of the Benamou-Brenier formulation of optimal transport. This approach turns those evolution problems into global optimization problems in time and space, for which we introduce a discretization whose novel features include the harmonic interpolation of the densities involved. We prove that the resulting schemes are unconditionally stable w.r.t. the space and time steps, and we establish a quadratic convergence rate for the dual PDE solution, under suitable assumptions. We also show that the schemes can be efficiently solved numerically using a proximal splitting method and a global space-time fast Fourier transform, and we illustrate our results with numerical experiments.
- oai:arXiv.org:2511.02662v2
- math.NA
- cs.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Jean-Marie Mirebeau, Erwan Stampfli
-
-
- Scalable Evaluation and Neural Models for Compositional Generalization
- https://arxiv.org/abs/2511.02667
- arXiv:2511.02667v2 Announce Type: replace
-Abstract: Compositional generalization - a key open challenge in modern machine learning - requires models to predict unknown combinations of known concepts. However, assessing compositional generalization remains a fundamental challenge due to the lack of standardized evaluation protocols and the limitations of current benchmarks, which often favor efficiency over rigor. At the same time, general-purpose vision architectures lack the necessary inductive biases, and existing approaches to endow them with such biases compromise scalability. As a remedy, this paper introduces: 1) a rigorous evaluation framework that unifies and extends previous approaches while reducing computational requirements from combinatorial to constant; 2) an extensive and modern evaluation on the status of compositional generalization in supervised vision backbones, training more than 5000 models; 3) Attribute Invariant Networks, a class of models establishing a new Pareto frontier in compositional generalization, achieving a 23.43% accuracy improvement over baselines while reducing parameter overhead from 600% to 16% compared to fully disentangled counterparts. Our code is available at https://github.com/IBM/scalable-compositional-generalization.
- oai:arXiv.org:2511.02667v2
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Giacomo Camposampiero, Pietro Barbiero, Michael Hersche, Roger Wattenhofer, Abbas Rahimi
-
-
- Human-AI Collaboration with Misaligned Preferences
- https://arxiv.org/abs/2511.02746
- arXiv:2511.02746v2 Announce Type: replace
-Abstract: In many real-life settings, algorithms play the role of assistants, while humans ultimately make the final decision. Often, algorithms specifically act as curators, narrowing down a wide range of options into a smaller subset that the human picks between: consider content recommendation or chatbot responses to questions with multiple valid answers. Crucially, humans may not know their own preferences perfectly either, but instead may only have access to a noisy sampling over preferences. Algorithms can assist humans by curating a smaller subset of items, but must also face the challenge of misalignment: humans may have different preferences from each other (and from the algorithm), and the algorithm may not know the exact preferences of the human they are facing at any point in time. In this paper, we model and theoretically study such a setting. Specifically, we show instances where humans benefit by collaborating with a misaligned algorithm. Surprisingly, we show that humans gain more utility from a misaligned algorithm (which makes different mistakes) than from an aligned algorithm. Next, we build on this result by studying what properties of algorithms maximize human welfare when the goals could be either utilitarian welfare or ensuring all humans benefit. We conclude by discussing implications for designers of algorithmic tools and policymakers.
- oai:arXiv.org:2511.02746v2
- cs.GT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiaxin Song, Parnian Shahkar, Kate Donahue, Bhaskar Ray Chaudhury
-
-
- TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models
- https://arxiv.org/abs/2511.02802
- arXiv:2511.02802v2 Announce Type: replace
-Abstract: Tabular foundation models represent a growing paradigm in structured data learning, extending the benefits of large-scale pretraining to tabular domains. However, their adoption remains limited due to heterogeneous preprocessing pipelines, fragmented APIs, inconsistent fine-tuning procedures, and the absence of standardized evaluation for deployment-oriented metrics such as calibration and fairness. We present TabTune, a unified library that standardizes the complete workflow for tabular foundation models through a single interface. TabTune provides consistent access to seven state-of-the-art models supporting multiple adaptation strategies, including zero-shot inference, meta-learning, supervised fine-tuning (SFT), and parameter-efficient fine-tuning (PEFT). The framework automates model-aware preprocessing, manages architectural heterogeneity internally, and integrates evaluation modules for performance, calibration, and fairness. Designed for extensibility and reproducibility, TabTune enables consistent benchmarking of adaptation strategies of tabular foundation models.
- oai:arXiv.org:2511.02802v2
- cs.LG
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aditya Tanna, Pratinav Seth, Mohamed Bouadi, Utsav Avaiya, Vinay Kumar Sankarapu
-
-
- Kosmos: An AI Scientist for Autonomous Discovery
- https://arxiv.org/abs/2511.02824
- arXiv:2511.02824v2 Announce Type: replace
-Abstract: Data-driven scientific discovery requires iterative cycles of literature search, hypothesis generation, and data analysis. Substantial progress has been made towards AI agents that can automate scientific research, but all such agents remain limited in the number of actions they can take before losing coherence, thus limiting the depth of their findings. Here we present Kosmos, an AI scientist that automates data-driven discovery. Given an open-ended objective and a dataset, Kosmos runs for up to 12 hours performing cycles of parallel data analysis, literature search, and hypothesis generation before synthesizing discoveries into scientific reports. Unlike prior systems, Kosmos uses a structured world model to share information between a data analysis agent and a literature search agent. The world model enables Kosmos to coherently pursue the specified objective over 200 agent rollouts, collectively executing an average of 42,000 lines of code and reading 1,500 papers per run. Kosmos cites all statements in its reports with code or primary literature, ensuring its reasoning is traceable. Independent scientists found 79.4% of statements in Kosmos reports to be accurate, and collaborators reported that a single 20-cycle Kosmos run performed the equivalent of 6 months of their own research time on average. Furthermore, collaborators reported that the number of valuable scientific findings generated scales linearly with Kosmos cycles (tested up to 20 cycles). We highlight seven discoveries made by Kosmos that span metabolomics, materials science, neuroscience, and statistical genetics. Three discoveries independently reproduce findings from preprinted or unpublished manuscripts that were not accessed by Kosmos at runtime, while four make novel contributions to the scientific literature.
- oai:arXiv.org:2511.02824v2
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Ludovico Mitchener, Angela Yiu, Benjamin Chang, Mathieu Bourdenx, Tyler Nadolski, Arvis Sulovari, Eric C. Landsness, Daniel L. Barabasi, Siddharth Narayanan, Nicky Evans, Shriya Reddy, Martha Foiani, Aizad Kamal, Leah P. Shriver, Fang Cao, Asmamaw T. Wassie, Jon M. Laurent, Edwin Melville-Green, Mayk Caldas, Albert Bou, Kaleigh F. Roberts, Sladjana Zagorac, Timothy C. Orr, Miranda E. Orr, Kevin J. Zwezdaryk, Ali E. Ghareeb, Laurie McCoy, Bruna Gomes, Euan A. Ashley, Karen E. Duff, Tonio Buonassisi, Tom Rainforth, Randall J. Bateman, Michael Skarlinski, Samuel G. Rodriques, Michaela M. Hinks, Andrew D. White
-
-
- PLUTO-4: Frontier Pathology Foundation Models
- https://arxiv.org/abs/2511.02826
- arXiv:2511.02826v2 Announce Type: replace
-Abstract: Foundation models trained on large-scale pathology image corpora have demonstrated strong transfer capabilities across diverse histopathology tasks. Building on this progress, we introduce PLUTO-4, our next generation of pathology foundation models that extend the Pathology-Universal Transformer (PLUTO) to frontier scale. We share two complementary Vision Transformer architectures in the PLUTO-4 family: a compact and efficient PLUTO-4S model optimized for multi-scale deployment using a FlexiViT setup with 2D-RoPE embeddings, and a frontier-scale PLUTO-4G model trained with a single patch size to maximize representation capacity and stability. Both models are pretrained using a self-supervised objective derived from DINOv2 on a large multi-institutional corpus containing 551,164 WSIs from 137,144 patients across over 50 institutions, spanning over 60 disease types and over 100 stains. Comprehensive evaluation across public and internal benchmarks demonstrates that PLUTO-4 achieves state-of-the-art performance on tasks requiring varying spatial and biological context, including patch-level classification, segmentation, and slide-level diagnosis. The compact PLUTO-4S provides high-throughput and robust performance for practical deployment, while PLUTO-4G establishes new performance frontiers across multiple pathology benchmarks, including an 11% improvement in dermatopathology diagnosis. These diverse improvements underscore PLUTO-4's potential to transform real-world applications as a backbone for translational research and diagnostic use cases.
- oai:arXiv.org:2511.02826v2
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Harshith Padigela, Shima Nofallah, Atchuth Naveen Chilaparasetti, Ryun Han, Andrew Walker, Judy Shen, Chintan Shah, Blake Martin, Aashish Sood, Elliot Miller, Ben Glass, Andy Beck, Harsha Pokkalla, Syed Ashar Javed
-
-
- Agent-Omni: Test-Time Multimodal Reasoning via Model Coordination for Understanding Anything
- https://arxiv.org/abs/2511.02834
- arXiv:2511.02834v2 Announce Type: replace
-Abstract: Multimodal large language models (MLLMs) have shown strong capabilities but remain limited to fixed modality pairs and require costly fine-tuning with large aligned datasets. Building fully omni-capable models that can integrate text, images, audio, and video remains impractical and lacks robust reasoning support. In this paper, we propose an Agent-Omni framework that coordinates existing foundation models through a master-agent system, enabling flexible multimodal reasoning without retraining. The master agent interprets user intent, delegates subtasks to modality-specific agents, and integrates their outputs into coherent responses. Extensive experiments across text, image, audio, video, and omni benchmarks show that Agent-Omni consistently achieves state-of-the-art performance, particularly on tasks requiring complex cross-modal reasoning. Its agent-based design enables seamless integration of specialized foundation models, ensuring adaptability to diverse inputs while maintaining transparency and interpretability. In addition, the framework is modular and easily extensible, allowing future improvements as stronger models become available.
- oai:arXiv.org:2511.02834v2
- cs.AI
- cs.CL
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Huawei Lin, Yunzhi Shi, Tong Geng, Weijie Zhao, Wei Wang, Ravender Pal Singh
-
-
- Parallel computation of interval bases for persistence module decomposition
- https://arxiv.org/abs/2106.11884
- arXiv:2106.11884v3 Announce Type: replace-cross
-Abstract: A persistence module $M$, with coefficients in a field $\mathbb{F}$, is a finite-dimensional linear representation of an equioriented quiver of type $A_n$ or, equivalently, a graded module over the ring of polynomials $\mathbb{F}[x]$. It is well-known that $M$ can be written as the direct sum of indecomposable representations or as the direct sum of cyclic submodules generated by homogeneous elements. An interval basis for $M$ is a set of homogeneous elements of $M$ such that the sum of the cyclic submodules of $M$ generated by them is direct and equal to $M$. We introduce a novel algorithm to compute an interval basis for $M$. Based on a flag of kernels of the structure maps, our algorithm is suitable for parallel or distributed computation and does not rely on a presentation of $M$. This algorithm outperforms the approach via the presentation matrix and Smith Normal Form. We specialize our parallel approach to persistent homology modules, and we close by applying the proposed algorithm to tracking harmonics via Hodge decomposition.
- oai:arXiv.org:2106.11884v3
- math.AT
- cs.CG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1007/s00200-025-00699-1
- A. De Gregorio, M. Guerra, S. Scaramuccia, F. Vaccarino, Parallel computation of interval bases for persistence module decomposition, Appl. Algebr. Eng. Commun. Comput. (2025)
- Alessandro De Gregorio, Marco Guerra, Sara Scaramuccia, Francesco Vaccarino
-
-
- Universal Proof Theory, TACL 2022 Lecture Notes
- https://arxiv.org/abs/2305.10888
- arXiv:2305.10888v3 Announce Type: replace-cross
-Abstract: These lecture notes survey the emerging area of Universal Proof Theory, which investigates general questions about the existence, equivalence, and characterization of good proof systems for broad classes of logics. In particular, the notes concentrate on the existence problem: for which logics do there exist proof systems satisfying desirable meta-properties (e.g. cut elimination, analyticity, termination)? After a brief historical and conceptual introduction, we survey different flavours of proof theory (Hilbert systems, natural deduction, sequent calculi) in the context of classical, intuitionistic, modal, and substructural logics. We then develop a general method for obtaining positive and negative existence results, based on interpolation and uniform interpolation techniques, and apply it to a range of logics (intermediate, modal, non-normal, conditional, and substructural). We also discuss variations of the method. As these are lecture notes, proofs are often sketched or omitted, with pointers to papers containing the full proofs. The survey thus aims to chart the scope and challenges of Universal Proof Theory for future work.
- oai:arXiv.org:2305.10888v3
- math.LO
- cs.LO
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Rosalie Iemhoff, Raheleh Jalali
-
-
- An explicit construction of Kaleidocycles by elliptic theta functions
- https://arxiv.org/abs/2308.04977
- arXiv:2308.04977v3 Announce Type: replace-cross
-Abstract: We consider the configuration space of ordered points on the two-dimensional sphere that satisfy a specific system of quadratic equations. We construct periodic orbits in this configuration space using elliptic theta functions and show that they simultaneously satisfy semi-discrete analogues of mKdV and sine-Gordon equations. The configuration space we investigate corresponds to the state space of a linkage mechanism known as the Kaleidocycle, and the constructed orbits describe the characteristic motion of the Kaleidocycle. A key consequence of our construction is the proof that Kaleidocycles exist for any number of tetrahedra greater than five. Our approach is founded on the relationship between the deformation of spatial curves and integrable systems, offering an intriguing example where an integrable system is explicitly solved to generate an orbit in the space of real solutions to polynomial equations defined by geometric constraints.
- oai:arXiv.org:2308.04977v3
- nlin.SI
- cs.RO
- math.DG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Shizuo Kaji, Kenji Kajiwara, Shota Shigetomi
-
-
- Variable Selection in Maximum Mean Discrepancy for Interpretable Distribution Comparison
- https://arxiv.org/abs/2311.01537
- arXiv:2311.01537v2 Announce Type: replace-cross
-Abstract: We study two-sample variable selection: identifying variables that discriminate between the distributions of two sets of data vectors. Such variables help scientists understand the mechanisms behind dataset discrepancies. Although domain-specific methods exist (e.g., in medical imaging, genetics, and computational social science), a general framework remains underdeveloped. We make two separate contributions. (i) We introduce a mathematical notion of the discriminating set of variables: the largest subset containing no variables whose marginals are identical across the two distributions and independent of the remaining variables. We prove this set is uniquely defined and establish further properties, making it a suitable ground truth for theory and evaluation. (ii) We propose two methods for two-sample variable selection that assign weights to variables and optimise them to maximise the power of a kernel two-sample test while enforcing sparsity to downweight redundant variables. To select the regularisation parameter - unknown in practice, as it controls the number of selected variables - we develop two data-driven procedures to balance recall and precision. Synthetic experiments show improved performance over baselines, and we illustrate the approach on two applications using datasets from water-pipe and traffic networks.
- oai:arXiv.org:2311.01537v2
- stat.ML
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-sa/4.0/
- Kensuke Mitsuzawa, Motonobu Kanagawa, Stefano Bortoli, Margherita Grossi, Paolo Papotti
-
-
- Individualizing Glioma Radiotherapy Planning by Optimization of Data and Physics-Informed Discrete Loss
- https://arxiv.org/abs/2312.05063
- arXiv:2312.05063v4 Announce Type: replace-cross
-Abstract: Brain tumor growth is unique to each glioma patient and extends beyond what is visible in imaging scans, infiltrating surrounding brain tissue. Understanding these hidden patient-specific progressions is essential for effective therapies. Current treatment plans for brain tumors, such as radiotherapy, typically involve delineating a uniform margin around the visible tumor on pre-treatment scans to target this invisible tumor growth. This "one size fits all" approach is derived from population studies and often fails to account for the nuances of individual patient conditions. We present the GliODIL framework, which infers the full spatial distribution of tumor cell concentration from available multi-modal imaging, leveraging a Fisher-Kolmogorov type physics model to describe tumor growth. This is achieved through the newly introduced method of Optimizing the Discrete Loss, where both data and physics-based constraints are softly assimilated into the solution. Our test dataset comprises 152 glioblastoma patients with pre-treatment imaging and post-treatment follow-ups for tumor recurrence monitoring. By blending data-driven techniques with physics-based constraints, GliODIL enhances recurrence prediction in radiotherapy planning, challenging traditional uniform margins and strict adherence to the Fisher-Kolmogorov partial differential equation model, which is adapted for complex cases.
- oai:arXiv.org:2312.05063v4
- physics.med-ph
- cs.NA
- math.NA
- q-bio.QM
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Michal Balcerak, Jonas Weidner, Petr Karnakov, Ivan Ezhov, Sergey Litvinov, Petros Koumoutsakos, Tamaz Amiranashvili, Ray Zirui Zhang, John S. Lowengrub, Bene Wiestler, Bjoern Menze
-
-
- Human Perception-Inspired Grain Segmentation Refinement Using Conditional Random Fields
- https://arxiv.org/abs/2312.09968
- arXiv:2312.09968v3 Announce Type: replace-cross
-Abstract: Automated detection of grain boundaries (GBs) in electron microscope images of polycrystalline materials could help accelerate the nanoscale characterization of myriad engineering materials and novel materials under scientific research. Accurate segmentation of interconnected line networks, such as GBs in polycrystalline material microstructures, poses a significant challenge due to the fragmented masks produced by conventional computer vision (CV) algorithms, including convolutional neural networks. These algorithms struggle with thin masks, often necessitating post-processing for effective contour closure and continuity. Previous approaches in this domain have typically relied on custom post-processing techniques that are problem-specific and heavily dependent on the quality of the mask obtained from a CV algorithm. Addressing this issue, this paper introduces a fast, high-fidelity post-processing technique that is universally applicable to segmentation masks of interconnected line networks. Leveraging domain knowledge about grain boundary connectivity, this method employs conditional random fields and perceptual grouping rules to refine segmentation masks of any image with a discernible grain structure. This approach significantly enhances segmentation mask accuracy by correctly reconstructing fragmented GBs in electron microscopy images of a polycrystalline oxide. The refinement improves the statistical representation of the microstructure, reflected by a 51% improvement in a grain alignment metric that provides a more physically meaningful assessment of complex microstructures than conventional metrics. This method enables rapid and accurate characterization, facilitating an unprecedented level of data analysis and improving the understanding of GB networks, making it suitable for a range of disciplines where precise segmentation of interconnected line networks is essential.
- oai:arXiv.org:2312.09968v3
- cond-mat.mtrl-sci
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- 10.1016/j.matchar.2025.115694
- Mater. Charact. 230, 115694 (2025)
- Doruk Aksoy, Huolin L. Xin, Timothy J. Rupert, William J. Bowman
-
-
- Neural Physics: Using AI Libraries to Develop Physics-Based Solvers for Incompressible Computational Fluid Dynamics
- https://arxiv.org/abs/2402.17913
- arXiv:2402.17913v2 Announce Type: replace-cross
-Abstract: Numerical discretisations of partial differential equations (PDEs) can be written as discrete convolutions, which, themselves, are a key tool in AI libraries and used in convolutional neural networks (CNNs). We therefore propose to implement numerical discretisations as convolutional layers of a neural network, where the weights or filters are determined analytically rather than by training. Furthermore, we demonstrate that these systems can be solved entirely by functions in AI libraries, either by using Jacobi iteration or multigrid methods, the latter realised through a U-Net architecture. Some advantages of the Neural Physics approach are that (1) the methods are platform agnostic; (2) the resulting solvers are fully differentiable, ideal for optimisation tasks; and (3) writing CFD solvers as (untrained) neural networks means that they can be seamlessly integrated with trained neural networks to form hybrid models. We demonstrate the proposed approach on a number of test cases of increasing complexity from advection-diffusion problems, the non-linear Burgers equation to the Navier-Stokes equations. We validate the approach by comparing our results with solutions obtained from traditionally written code and common benchmarks from the literature. We show that the proposed methodology can solve all these problems using repurposed AI libraries in an efficient way, without training, and presents a new avenue to explore in the development of methods to solve PDEs with implicit methods.
- oai:arXiv.org:2402.17913v2
- physics.flu-dyn
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Boyang Chen, Claire E. Heaney, Christopher C. Pain
-
-
- A Cross-Platform Execution Engine for the Quantum Intermediate Representation
- https://arxiv.org/abs/2404.14299
- arXiv:2404.14299v2 Announce Type: replace-cross
-Abstract: Hybrid languages like the quantum intermediate representation (QIR) are essential for programming systems that mix quantum and conventional computing models, while execution of these programs is often deferred to a system-specific implementation. Here, we develop the QIR Execution Engine (QIR-EE) for parsing, interpreting, and executing QIR across multiple hardware platforms. QIR-EE uses LLVM to execute hybrid instructions specifying quantum programs and, by design, presents extension points that support customized runtime and hardware environments. We demonstrate an implementation that uses the XACC quantum hardware-accelerator library to dispatch prototypical quantum programs on different commercial quantum platforms and numerical simulators, and we validate execution of QIR-EE on IonQ, Quantinuum, and IBM hardware. Our results highlight the efficiency of hybrid executable architectures for handling mixed instructions, managing mixed data, and integrating with quantum computing frameworks to realize cross-platform execution.
- oai:arXiv.org:2404.14299v2
- quant-ph
- cs.SE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1007/s11227-025-07969-2
- The Journal of Supercomputing, Vol. 81, 1521 (2025)
- Elaine Wong, Vicente Leyton-Ortega, Daniel Claudino, Seth R. Johnson, Austin J. Adams, Sharmin Afrose, Meenambika Gowrishankar, Anthony Cabrera, Travis S. Humble
-
-
- Contraction of Private Quantum Channels and Private Quantum Hypothesis Testing
- https://arxiv.org/abs/2406.18651
- arXiv:2406.18651v3 Announce Type: replace-cross
-Abstract: A quantum generalized divergence by definition satisfies the data-processing inequality; as such, the relative decrease in such a divergence under the action of a quantum channel is at most one. This relative decrease is formally known as the contraction coefficient of the channel and the divergence. Interestingly, there exist combinations of channels and divergences for which the contraction coefficient is strictly less than one. Furthermore, understanding the contraction coefficient is fundamental for the study of statistical tasks under privacy constraints. To this end, here we establish upper bounds on contraction coefficients for the hockey-stick divergence under privacy constraints, where privacy is quantified with respect to the quantum local differential privacy (QLDP) framework, and we fully characterize the contraction coefficient for the trace distance under privacy constraints. With the machinery developed, we also determine an upper bound on the contraction of both the Bures distance and quantum relative entropy relative to the normalized trace distance, under QLDP constraints. Next, we apply our findings to establish bounds on the sample complexity of quantum hypothesis testing under privacy constraints. Furthermore, we study various scenarios in which the sample complexity bounds are tight, while providing order-optimal quantum channels that achieve those bounds. Lastly, we show how private quantum channels provide fairness and Holevo information stability in quantum learning settings.
- oai:arXiv.org:2406.18651v3
- quant-ph
- cs.CR
- cs.IT
- cs.LG
- math.IT
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1109/TIT.2025.3527859
- IEEE Transactions on Information Theory, Volume 71, Issue 3, Pages 1851--1873, March 2025
- Theshani Nuradha, Mark M. Wilde
-
-
- MEDIC: Zero-shot Music Editing with Disentangled Inversion Control
- https://arxiv.org/abs/2407.13220
- arXiv:2407.13220v4 Announce Type: replace-cross
-Abstract: Text-guided diffusion models revolutionize audio generation by adapting source audio to specific text prompts. However, existing zero-shot audio editing methods such as DDIM inversion accumulate errors across diffusion steps, reducing the effectiveness. Moreover, existing editing methods struggle with conducting complex non-rigid music edits while maintaining content integrity and high fidelity. To address these challenges, we propose MEDIC, a novel zero-shot music editing system based on innovative Disentangled Inversion Control (DIC) technique, which comprises Harmonized Attention Control and Disentangled Inversion. Disentangled Inversion disentangles the diffusion process into triple branches to rectify the deviated path of the source branch caused by DDIM inversion. Harmonized Attention Control unifies the mutual self-attention control and the cross-attention control with an intermediate Harmonic Branch to progressively generate the desired harmonic and melodic information in the target music. We also introduce ZoME-Bench, a comprehensive music editing benchmark with 1,100 samples covering ten distinct editing categories. ZoME-Bench facilitates both zero-shot and instruction-based music editing tasks. Our method outperforms state-of-the-art inversion techniques in editing fidelity and content preservation. The code and benchmark will be released. Audio samples are available at https://medic-edit.github.io/.
- oai:arXiv.org:2407.13220v4
- eess.AS
- cs.SD
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/publicdomain/zero/1.0/
- Huadai Liu, Jialei Wang, Xiangtai Li, Wen Wang, Qian Chen, Rongjie Huang, Yang Liu, Jiayang Xu, Zhou Zhao
-
-
- Improving the Accuracy of DC Optimal Power Flow Formulations via Parameter Optimization
- https://arxiv.org/abs/2410.11725
- arXiv:2410.11725v2 Announce Type: replace-cross
-Abstract: DC Optimal Power Flow (DC-OPF) problems optimize the generators' active power setpoints while satisfying constraints based on the DC power flow linearization. The computational tractability advantages of DC-OPF problems come at the expense of inaccuracies relative to AC Optimal Power Flow (AC-OPF) problems that accurately model the nonlinear steady-state behavior of power grids. This paper proposes an algorithm that significantly improves the accuracy of the generators' active power setpoints from DC-OPF problems with respect to the corresponding AC-OPF problems over a specified range of operating conditions. Using sensitivity information in a machine learning-inspired methodology, this algorithm tunes coefficient and bias parameters in the DC power flow approximation to improve the accuracy of the resulting DC-OPF solutions. Employing the Truncated Newton Conjugate-Gradient (TNC) method -- a Quasi-Newton optimization technique -- this parameter tuning occurs during an offline training phase, with the resulting parameters then used in online computations. Numerical results underscore the algorithm's efficacy with accuracy improvements in squared two-norm and $\infty$-norm losses of up to $90\%$ and $79\%$, respectively, relative to traditional DC-OPF formulations.
- oai:arXiv.org:2410.11725v2
- math.OC
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Babak Taheri, Daniel K. Molzahn
-
-
- Graph Sampling for Scalable and Expressive Graph Neural Networks on Homophilic Graphs
- https://arxiv.org/abs/2410.16593
- arXiv:2410.16593v5 Announce Type: replace-cross
-Abstract: Graph Neural Networks (GNNs) excel in many graph machine learning tasks but face challenges when scaling to large networks. GNN transferability allows training on smaller graphs and applying the model to larger ones, but existing methods often rely on random subsampling, leading to disconnected subgraphs and reduced model expressivity. We propose a novel graph sampling algorithm that leverages feature homophily to preserve graph structure. By minimizing the trace of the data correlation matrix, our method better preserves the graph Laplacian trace -- a proxy for the graph connectivity -- than random sampling, while achieving lower complexity than spectral methods. Experiments on citation networks show improved performance in preserving Laplacian trace and GNN transferability compared to random sampling.
- oai:arXiv.org:2410.16593v5
- eess.SP
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Haolin Li, Haoyu Wang, Luana Ruiz
-
-
- MAROON: A Framework for the Joint Characterization of Near-Field High-Resolution Radar and Optical Depth Imaging Techniques
- https://arxiv.org/abs/2411.00527
- arXiv:2411.00527v3 Announce Type: replace-cross
-Abstract: Utilizing the complementary strengths of wavelength-specific range or depth sensors is crucial for robust computer-assisted tasks such as autonomous driving. Despite this, there is still little research at the intersection of optical depth sensors and radars operating at close range, where the target is decimeters away from the sensors. Together with a growing interest in high-resolution imaging radars operating in the near field, the question arises of how these sensors behave in comparison to their traditional optical counterparts.
- In this work, we take on the unique challenge of jointly characterizing depth imagers from both the optical and radio-frequency domains using a multimodal spatial calibration. We collect data from four depth imagers: three optical sensors of varying operation principles and an imaging radar. We provide a comprehensive evaluation of their depth measurements with respect to distinct object materials, geometries, and object-to-sensor distances. Specifically, we reveal scattering effects of partially transmissive materials and investigate the response of radio-frequency signals. All object measurements will be made public in the form of a multimodal dataset, called MAROON.
- oai:arXiv.org:2411.00527v3
- eess.IV
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Vanessa Wirth, Johanna Br\"aunig, Nikolai Hofmann, Martin Vossiek, Tim Weyrich, Marc Stamminger
-
-
- Alleviating Hyperparameter-Tuning Burden in SVM Classifiers for Pulmonary Nodules Diagnosis with Multi-Task Bayesian Optimization
- https://arxiv.org/abs/2411.06184
- arXiv:2411.06184v2 Announce Type: replace-cross
-Abstract: In the field of non-invasive medical imaging, radiomic features are utilized to measure tumor characteristics. However, these features can be affected by the techniques used to discretize the images, ultimately impacting the accuracy of diagnosis. To investigate the influence of various image discretization methods on diagnosis, it is common practice to evaluate multiple discretization strategies individually. This approach often leads to redundant and time-consuming tasks such as training predictive models and fine-tuning hyperparameters separately. This study examines the feasibility of employing multi-task Bayesian optimization to accelerate the hyperparameter search for classifying benign and malignant pulmonary nodules using an RBF SVM. Our findings suggest that multi-task Bayesian optimization significantly accelerates the search for hyperparameters in comparison to a single-task approach. To the best of our knowledge, this is the first investigation to utilize multi-task Bayesian optimization in a critical medical context.
- oai:arXiv.org:2411.06184v2
- eess.IV
- cs.CV
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wenhao Chi, Haiping Liu, Hongqiao Dong, Wenhua Liang, Bo Liu
-
-
- RAG-IT: Retrieval-Augmented Instruction Tuning for Automated Financial Analysis
- https://arxiv.org/abs/2412.08179
- arXiv:2412.08179v2 Announce Type: replace-cross
-Abstract: Financial analysis relies heavily on the interpretation of earnings reports to assess company performance and guide decision-making. Traditional methods for generating such analyses demand significant financial expertise and are often time-consuming. With the rapid advancement of Large Language Models (LLMs), domain-specific adaptations have emerged for financial tasks such as sentiment analysis and entity recognition. This paper introduces RAG-IT (Retrieval-Augmented Instruction Tuning), a novel framework designed to automate the generation of earnings report analyses through an LLM fine-tuned specifically for the financial domain. Our approach integrates retrieval augmentation with instruction-based fine-tuning to enhance factual accuracy, contextual relevance, and domain adaptability. We construct a comprehensive financial instruction dataset derived from extensive financial documents and earnings reports to guide the LLM's adaptation to specialized financial reasoning. Experimental results demonstrate that RAG-IT outperforms general-purpose open-source models and achieves performance comparable to commercial systems like GPT-3.5 on financial report generation tasks. This research highlights the potential of retrieval-augmented instruction tuning to streamline and elevate financial analysis automation, advancing the broader field of intelligent financial reporting.
- oai:arXiv.org:2412.08179v2
- q-fin.ST
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Van-Duc Le, Hai-Thien To
-
-
- Aspen Open Jets: Unlocking LHC Data for Foundation Models in Particle Physics
- https://arxiv.org/abs/2412.10504
- arXiv:2412.10504v2 Announce Type: replace-cross
-Abstract: Foundation models are deep learning models pre-trained on large amounts of data which are capable of generalizing to multiple datasets and/or downstream tasks. This work demonstrates how data collected by the CMS experiment at the Large Hadron Collider can be useful in pre-training foundation models for HEP. Specifically, we introduce the AspenOpenJets dataset, consisting of approximately 178M high $p_T$ jets derived from CMS 2016 Open Data. We show how pre-training the OmniJet-$\alpha$ foundation model on AspenOpenJets improves performance on generative tasks with significant domain shift: generating boosted top and QCD jets from the simulated JetClass dataset. In addition to demonstrating the power of pre-training of a jet-based foundation model on actual proton-proton collision data, we provide the ML-ready derived AspenOpenJets dataset for further public use.
- oai:arXiv.org:2412.10504v2
- hep-ph
- cs.LG
- hep-ex
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- 10.1088/2632-2153/ade58f
- Mach.Learn.Sci.Tech. 6 (2025) 3, 030601
- Oz Amram, Luca Anzalone, Joschka Birk, Darius A. Faroughy, Anna Hallin, Gregor Kasieczka, Michael Kr\"amer, Ian Pang, Humberto Reyes-Gonzalez, David Shih
-
-
- NeurOp-Diff: Continuous Remote Sensing Image Super-Resolution via Neural Operator Diffusion
- https://arxiv.org/abs/2501.09054
- arXiv:2501.09054v3 Announce Type: replace-cross
-Abstract: Most publicly accessible remote sensing data suffer from low resolution, limiting their practical applications. To address this, we propose a diffusion model guided by neural operators for continuous remote sensing image super-resolution (NeurOp-Diff). Neural operators are used to learn resolution representations at arbitrary scales, encoding low-resolution (LR) images into high-dimensional features, which are then used as prior conditions to guide the diffusion model for denoising. This effectively addresses the artifacts and excessive smoothing issues present in existing super-resolution (SR) methods, enabling the generation of high-quality, continuous super-resolution images. Specifically, we adjust the super-resolution scale by a scaling factor s, allowing the model to adapt to different super-resolution magnifications. Furthermore, experiments on multiple datasets demonstrate the effectiveness of NeurOp-Diff. Our code is available at https://github.com/zerono000/NeurOp-Diff.
- oai:arXiv.org:2501.09054v3
- eess.IV
- cs.GR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zihao Xu, Yuzhi Tang, Bowen Xu, Qingquan Li
-
-
- Online Learning of Pure States is as Hard as Mixed States
- https://arxiv.org/abs/2502.00823
- arXiv:2502.00823v3 Announce Type: replace-cross
-Abstract: Quantum state tomography, the task of learning an unknown quantum state, is a fundamental problem in quantum information. In standard settings, the complexity of this problem depends significantly on the type of quantum state that one is trying to learn, with pure states being substantially easier to learn than general mixed states. A natural question is whether this separation holds for any quantum state learning setting. In this work, we consider the online learning framework and prove the surprising result that learning pure states in this setting is as hard as learning mixed states. More specifically, we show that both classes share almost the same sequential fat-shattering dimension, leading to identical regret scaling. We also generalize previous results on full quantum state tomography in the online setting to (i) the $\epsilon$-realizable setting and (ii) learning the density matrix only partially, using smoothed analysis.
- oai:arXiv.org:2502.00823v3
- quant-ph
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Maxime Meyer, Soumik Adhikary, Naixu Guo, Patrick Rebentrost
-
-
- Data-Driven Probabilistic Air-Sea Flux Parameterization
- https://arxiv.org/abs/2503.03990
- arXiv:2503.03990v2 Announce Type: replace-cross
-Abstract: Accurately quantifying air-sea fluxes is important for understanding air-sea interactions and improving coupled weather and climate systems. This study introduces a probabilistic framework to represent the highly variable nature of air-sea fluxes, which is missing in deterministic bulk algorithms. Assuming Gaussian distributions conditioned on the input variables, we use artificial neural networks and eddy-covariance measurement data to estimate the mean and variance by minimizing negative log-likelihood loss. The trained neural networks provide alternative mean flux estimates to existing bulk algorithms, and quantify the uncertainty around the mean estimates. Stochastic parameterization of air-sea turbulent fluxes can be constructed by sampling from the predicted distributions. Tests in a single-column forced upper-ocean model suggest that changes in flux algorithms influence sea surface temperature and mixed layer depth seasonally. The ensemble spread in stochastic runs is most pronounced during spring restratification.
- oai:arXiv.org:2503.03990v2
- physics.ao-ph
- cs.LG
- stat.AP
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Jiarong Wu, Pavel Perezhogin, David John Gagne, Brandon Reichl, Aneesh C. Subramanian, Elizabeth Thompson, Laure Zanna
-
-
- Proximal Gradient Dynamics and Feedback Control for Equality-Constrained Composite Optimization
- https://arxiv.org/abs/2503.15093
- arXiv:2503.15093v3 Announce Type: replace-cross
-Abstract: This paper studies equality-constrained composite minimization problems. This class of problems, capturing regularization terms and inequality constraints, naturally arises in a wide range of engineering and machine learning applications. To tackle these optimization problems, inspired by recent results, we introduce the \emph{proportional--integral proximal gradient dynamics} (PI--PGD): a closed-loop system where the Lagrange multipliers are control inputs and states are the problem decision variables. First, we establish the equivalence between the stationary points of the minimization problem and the equilibria of the PI--PGD. Then for the case of affine constraints, by leveraging tools from contraction theory we give a comprehensive convergence analysis for the dynamics, showing linear--exponential convergence towards the equilibrium. That is, the distance between each solution and the equilibrium is upper bounded by a function that first decreases linearly and then exponentially. Our findings are illustrated numerically on a set of representative examples, which include an exploratory application to nonlinear equality constraints.
- oai:arXiv.org:2503.15093v3
- math.OC
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Veronica Centorrino, Francesca Rossi, Francesco Bullo, Giovanni Russo
-
-
- A Polynomial-Time Algorithm for Variational Inequalities under the Minty Condition
- https://arxiv.org/abs/2504.03432
- arXiv:2504.03432v2 Announce Type: replace-cross
-Abstract: Solving variational inequalities (SVIs) is a foundational problem at the heart of optimization. However, the expressivity of this framework comes at the cost of computational hardness. As a result, most research has focused on carving out specific subclasses that elude those intractability barriers. A classical property that goes back to the 1960s is the Minty condition, which postulates that the Minty VI (MVI) problem admits a solution.
- In this paper, we establish the first polynomial-time algorithm -- that is, with complexity growing polynomially in the dimension $d$ and $\log(1/\epsilon)$ -- for solving $\epsilon$-SVIs for Lipschitz continuous mappings under the Minty condition. Prior approaches either incurred an exponentially worse dependence on $1/\epsilon$ or made restrictive assumptions. To do so, we introduce a new variant of the ellipsoid algorithm whereby separating hyperplanes are obtained after taking a gradient descent step from the center of the ellipsoid. It succeeds even though the set of SVIs can be nonconvex and not fully dimensional. Moreover, when our algorithm is applied to an instance with no MVI solution and fails to identify an SVI solution, it produces a succinct certificate of MVI infeasibility. We also show that deciding whether the Minty condition holds is $\mathsf{coNP}$-complete, thereby establishing that the disjunction of those two problems is polynomial-time solvable even though each problem is individually intractable.
- We provide several extensions and new applications of our main results. Most notably, we obtain the first polynomial-time algorithms for i) globally minimizing a (potentially nonsmooth) quasar-convex function, and ii) computing Nash equilibria in multi-player harmonic games. Finally, in two-player general-sum concave games, we give the first polynomial-time algorithm that outputs either a Nash equilibrium or a strict coarse correlated equilibrium.
- oai:arXiv.org:2504.03432v2
- math.OC
- cs.GT
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ioannis Anagnostides, Gabriele Farina, Tuomas Sandholm, Brian Hu Zhang
-
-
- Aerial Active STAR-RIS-assisted Satellite-Terrestrial Covert Communications
- https://arxiv.org/abs/2504.16146
- arXiv:2504.16146v2 Announce Type: replace-cross
-Abstract: An integration of satellites and terrestrial networks is crucial for enhancing the performance of next-generation communication systems. However, the networks are hindered by the long-distance path loss and security risks in dense urban environments. In this work, we propose a satellite-terrestrial covert communication system assisted by the aerial active simultaneous transmitting and reflecting reconfigurable intelligent surface (AASTAR-RIS) to improve the channel capacity while ensuring the transmission covertness. Specifically, we first derive the minimal detection error probability (DEP) under the worst condition that the Warden has perfect channel state information (CSI). Then, we formulate an AASTAR-RIS-assisted satellite-terrestrial covert communication optimization problem (ASCCOP) to maximize the sum of the fair channel capacity for all ground users while meeting the strict covert constraint, by jointly optimizing the trajectory and active beamforming of the AASTAR-RIS. Due to the challenges posed by the complex and high-dimensional state-action spaces as well as the need for efficient exploration in dynamic environments, we propose a generative deterministic policy gradient (GDPG) algorithm, which is a generative deep reinforcement learning (DRL) method to solve the ASCCOP. Concretely, the generative diffusion model (GDM) is utilized as the policy representation of the algorithm to enhance the exploration process by generating diverse and high-quality samples through a series of denoising steps. Moreover, we incorporate an action gradient mechanism to accomplish the policy improvement of the algorithm, which refines the better state-action pairs through the gradient ascent. Simulation results demonstrate that the proposed approach significantly outperforms important benchmarks.
- oai:arXiv.org:2504.16146v2
- eess.SP
- cs.IT
- cs.NI
- math.IT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chuang Zhang, Geng Sun, Jiahui Li, Jiacheng Wang, Ruichen Zhang, Dusit Niyato, Shiwen Mao, Tony Q. S. Quek
-
-
- Asynchronous Push-sum Dual Gradient Algorithm in Distributed Model Predictive Control
- https://arxiv.org/abs/2504.18941
- arXiv:2504.18941v3 Announce Type: replace-cross
-Abstract: This paper studies the distributed model predictive control (DMPC) problem for distributed discrete-time linear systems with both local and global constraints over directed communication networks. We establish an optimization problem to formulate the DMPC policy, including the design of terminal ingredients. To cope with the global constraint, we transform the primal optimization problem into its dual problem. Then, we propose a novel asynchronous push-sum dual gradient (APDG) algorithm with an adaptive step-size scheme to solve this dual problem in a fully asynchronous distributed manner. The proposed algorithm does not require synchronous waiting and any form of coordination, which greatly improves solving efficiency. We prove that the APDG algorithm converges at an R-linear rate as long as the step-size does not exceed the designed upper bound. Furthermore, we develop a distributed termination criterion to terminate the APDG algorithm when its output solution satisfies the specified suboptimality and the global constraint, thereby avoiding an infinite number of iterations. The recursive feasibility and the stability of the closed-loop system are also established. Finally, a numerical example is provided to clarify and validate our theoretical findings.
- oai:arXiv.org:2504.18941v3
- math.OC
- cs.SY
- eess.SY
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pengbiao Wang, Xuemei Ren, Dongdong Zheng
-
-
- Listen to Extract: Onset-Prompted Target Speaker Extraction
- https://arxiv.org/abs/2505.05114
- arXiv:2505.05114v2 Announce Type: replace-cross
-Abstract: We propose listen to extract (LExt), a highly effective yet extremely simple algorithm for monaural target speaker extraction (TSE). Given an enrollment utterance of a target speaker, LExt aims at extracting the target speaker's speech from a mixture of that speaker with other speakers. For each mixture, LExt concatenates an enrollment utterance of the target speaker to the mixture signal at the waveform level, and trains deep neural networks (DNNs) to extract the target speech based on the concatenated mixture signal. The rationale is that, this way, an artificial speech onset is created for the target speaker, and it could prompt the DNN (a) which speaker is the target to extract; and (b) spectral-temporal patterns of the target speaker that could help extraction. This simple approach produces strong TSE performance on multiple public TSE datasets including WSJ0-2mix, WHAM! and WHAMR!.
- oai:arXiv.org:2505.05114v2
- eess.AS
- cs.SD
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Pengjie Shen, Kangrui Chen, Shulin He, Pengru Chen, Shuqi Yuan, He Kong, Xueliang Zhang, Zhong-Qiu Wang
-
-
- New aspects of quantum topological data analysis: Betti number estimation, and testing and tracking of homology and cohomology classes
- https://arxiv.org/abs/2506.01432
- arXiv:2506.01432v3 Announce Type: replace-cross
-Abstract: We present new quantum algorithms for estimating homological invariants, specifically Betti and persistent Betti numbers, of a simplicial complex given through structured classical data. Our approach efficiently constructs block-encodings of (persistent) Laplacians, enabling estimation via stochastic rank methods with complexity polylogarithmic in the number of simplices across both sparse and dense regimes.
- Unlike prior spectral algorithms that suffer when Betti numbers are small, we introduce homology tracking and property testing techniques achieving exponential speedups under natural sparsity and structure assumptions. We also formulate homology triviality and equivalence testing as property testing problems, giving nearly linear-time quantum algorithms when the boundary rank is large. A cohomological formulation further yields rank-independent testing and polylog-time manipulation of $r$-cocycles via block-encoded projections. These results open a new direction in quantum topological data analysis and demonstrate provable quantum advantages in computing topological invariants.
- oai:arXiv.org:2506.01432v3
- quant-ph
- cs.CC
- cs.CG
- cs.DS
- math.AT
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Junseo Lee, Nhat A. Nghiem
-
-
- VQC-MLPNet: An Unconventional Hybrid Quantum-Classical Architecture for Scalable and Robust Quantum Machine Learning
- https://arxiv.org/abs/2506.10275
- arXiv:2506.10275v2 Announce Type: replace-cross
-Abstract: Variational quantum circuits (VQCs) hold promise for quantum machine learning but face challenges in expressivity, trainability, and noise resilience. We propose VQC-MLPNet, a hybrid architecture where a VQC generates the first-layer weights of a classical multilayer perceptron during training, while inference is performed entirely classically. This design preserves scalability, reduces quantum resource demands, and enables practical deployment. We provide a theoretical analysis based on statistical learning and neural tangent kernel theory, establishing explicit risk bounds and demonstrating improved expressivity and trainability compared to purely quantum or existing hybrid approaches. These theoretical insights demonstrate exponential improvements in representation capacity relative to quantum circuit depth and the number of qubits, providing clear computational advantages over standalone quantum circuits and existing hybrid quantum architectures. Empirical results on diverse datasets, including quantum-dot classification and genomic sequence analysis, show that VQC-MLPNet achieves high accuracy and robustness under realistic noise models, outperforming classical and quantum baselines while using significantly fewer trainable parameters.
- oai:arXiv.org:2506.10275v2
- quant-ph
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Jun Qi, Chao-Han Yang, Pin-Yu Chen, Min-Hsiu Hsieh
-
-
- BRISC: Annotated Dataset for Brain Tumor Segmentation and Classification
- https://arxiv.org/abs/2506.14318
- arXiv:2506.14318v4 Announce Type: replace-cross
-Abstract: Accurate segmentation and classification of brain tumors from Magnetic Resonance Imaging (MRI) remain key challenges in medical image analysis, primarily due to the lack of high-quality, balanced, and diverse datasets with expert annotations. In this work, we address this gap by introducing BRISC, a dataset designed for brain tumor segmentation and classification tasks, featuring high-resolution segmentation masks. The dataset comprises 6,000 contrast-enhanced T1-weighted MRI scans, which were collated from multiple public datasets that lacked segmentation labels. Our primary contribution is the subsequent expert annotation of these images, performed by certified radiologists and physicians. It includes three major tumor types, namely glioma, meningioma, and pituitary, as well as non-tumorous cases. Each sample includes high-resolution labels and is categorized across axial, sagittal, and coronal imaging planes to facilitate robust model development and cross-view generalization. To demonstrate the utility of the dataset, we provide benchmark results for both tasks using standard deep learning models. The BRISC dataset is made publicly available. datasetlink: Kaggle (https://www.kaggle.com/datasets/briscdataset/brisc2025/), Figshare (https://doi.org/10.6084/m9.figshare.30533120), Zenodo (https://doi.org/10.5281/zenodo.17524350)
- oai:arXiv.org:2506.14318v4
- eess.IV
- cs.CV
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Amirreza Fateh, Yasin Rezvani, Sara Moayedi, Sadjad Rezvani, Fatemeh Fateh, Mansoor Fateh, Vahid Abolghasemi
-
-
- ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language Models for Audio Generation and Editing
- https://arxiv.org/abs/2506.21448
- arXiv:2506.21448v3 Announce Type: replace-cross
-Abstract: While end-to-end video-to-audio generation has greatly improved, producing high-fidelity audio that authentically captures the nuances of visual content remains challenging. Like professionals in the creative industries, this generation requires sophisticated reasoning about items such as visual dynamics, acoustic environments, and temporal relationships. We present ThinkSound, a novel framework that leverages Chain-of-Thought (CoT) reasoning to enable stepwise, interactive audio generation and editing for videos. Our approach decomposes the process into three complementary stages: foundational foley generation that creates semantically coherent soundscapes, interactive object-centric refinement through precise user interactions, and targeted editing guided by natural language instructions. At each stage, a multimodal large language model generates contextually aligned CoT reasoning that guides a unified audio foundation model. Furthermore, we introduce AudioCoT, a comprehensive dataset with structured reasoning annotations that establishes connections between visual content, textual descriptions, and sound synthesis. Experiments demonstrate that ThinkSound achieves state-of-the-art performance in video-to-audio generation across both audio metrics and CoT metrics, and excels in the out-of-distribution Movie Gen Audio benchmark. The project page is available at https://ThinkSound-Project.github.io.
- oai:arXiv.org:2506.21448v3
- eess.AS
- cs.CV
- cs.SD
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Huadai Liu, Kaicheng Luo, Jialei Wang, Wen Wang, Qian Chen, Zhou Zhao, Wei Xue
-
-
- Flow matching for reaction pathway generation
- https://arxiv.org/abs/2507.10530
- arXiv:2507.10530v4 Announce Type: replace-cross
-Abstract: Elucidating reaction mechanisms hinges on efficiently generating transition states (TSs), products, and complete reaction networks. Recent generative models, such as diffusion models for TS sampling and sequence-based architectures for product generation, offer faster alternatives to quantum-chemistry searches. However, diffusion models remain constrained by their stochastic differential equation (SDE) dynamics, which suffer from inefficiency and limited controllability. We show that flow matching, a deterministic ordinary differential equation (ODE) formulation, can replace SDE-based diffusion for molecular and reaction generation. We introduce MolGEN, a conditional flow-matching framework that learns an optimal transport path to transport Gaussian priors to target chemical distributions. On benchmarks used by TSDiff and OA-ReactDiff, MolGEN surpasses these baselines in TS geometry accuracy and barrier-height prediction while reducing sampling to sub-second inference. MolGEN also supports open-ended product generation with competitive top-k accuracy and avoids mass/electron-balance violations common to sequence models. In a realistic test on the $\gamma$-ketohydroperoxide decomposition network, MolGEN yields higher fractions of valid and intended TSs with markedly fewer quantum-chemistry evaluations than string-based baselines. These results demonstrate that deterministic flow matching provides a unified, accurate, and computationally efficient foundation for molecular generative modeling, signaling that flow matching is the future for molecular generation across chemistry.
- oai:arXiv.org:2507.10530v4
- physics.chem-ph
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Ping Tuo, Jiale Chen, Ju Li
-
-
- High-Precision Modal Analysis of Multimode Waveguides from Amplitudes via Large-Step Nonconvex Optimization
- https://arxiv.org/abs/2507.12299
- arXiv:2507.12299v2 Announce Type: replace-cross
-Abstract: Optimizing multimodal waveguide performance depends on modal analysis; however, existing methods focus predominantly on modal power distribution (MPD) and, limited by experimental hardware and conditions, exhibit low accuracy, poor adaptability, and high computational cost. This work presents a novel framework for comprehensive modal analysis (recovering both power and relative phase) using aperture field (AF) and far field (FF) amplitude measurements. We formulate the modal analysis as a nonconvex optimization problem under a power-normalization constraint and, inspired by recent advances in deep learning, introduce a large-step strategy to solve it. Our method retrieves both the MPD and the modal relative-phase distribution (MRPD). The effectiveness of the proposed method is validated through visualization of the nonconvex optimization process via its loss landscape. Under noiseless conditions, analysis results of $93$ electromagnetic modes indicate that the relative amplitude accuracy $\mathrm{MRE_{Modulus}}$, and the phase accuracy $\mathrm{MAE_{Phase}}$, both reach the level of machine precision. Through noise simulations of the AF and environmental background, the operational principles of the method are demonstrated under signal-to-noise ratio (SNR) conditions ranging from $10~\mathrm{dB}$ to $60~\mathrm{dB}$. Experiments further confirm that error suppression is effectively achieved by increasing the number of sampling points, thereby maintaining high accuracy and strong robustness. Within a unified evaluation framework, the absolute amplitude error $\mathrm{MAE_{Modulus}}$, and the phase error $\mathrm{MAE_{Phase}}$, are as low as $1.633\times10^{-8}$ and $0$, respectively. The accuracy is significantly superior to existing methods, while also exhibiting higher computational efficiency.
- oai:arXiv.org:2507.12299v2
- physics.comp-ph
- cs.SD
- physics.optics
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jingtong Li, Dongting Huang, Minhui Xiong, Mingzhi Li
-
-
- TensorHyper-VQC: A Tensor-Train-Guided Hypernetwork for Robust and Scalable Variational Quantum Computing
- https://arxiv.org/abs/2508.01116
- arXiv:2508.01116v2 Announce Type: replace-cross
-Abstract: Variational Quantum Computing (VQC) faces fundamental scalability barriers, primarily due to the presence of barren plateaus and its sensitivity to quantum noise. To address these challenges, we introduce TensorHyper-VQC, a novel tensor-train (TT)-guided hypernetwork framework that significantly improves the robustness and scalability of VQC. Our framework fully delegates the generation of quantum circuit parameters to a classical TT network, effectively decoupling optimization from quantum hardware. This innovative parameterization mitigates gradient vanishing, enhances noise resilience through structured low-rank representations, and facilitates efficient gradient propagation. Grounded in Neural Tangent Kernel and statistical learning theory, our rigorous theoretical analyses establish strong guarantees on approximation capability, optimization stability, and generalization performance. Extensive empirical results across quantum dot classification, Max-Cut optimization, and molecular quantum simulation tasks demonstrate that TensorHyper-VQC consistently achieves superior performance and robust noise tolerance, including hardware-level validation on a 156-qubit IBM Heron processor. These results position TensorHyper-VQC as a scalable and noise-resilient framework for advancing practical quantum machine learning on near-term devices.
- oai:arXiv.org:2508.01116v2
- quant-ph
- cs.AI
- cs.LG
- stat.ML
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Jun Qi, Chao-Han Yang, Pin-Yu Chen, Min-Hsiu Hsieh
-
-
- Personalized Transcranial Electrical Stimulation: A Review of Computational Modeling and Optimization
- https://arxiv.org/abs/2509.01192
- arXiv:2509.01192v2 Announce Type: replace-cross
-Abstract: Objective. Personalized transcranial electrical stimulation (tES) has gained growing attention due to the substantial inter-individual variability in brain anatomy and physiology. While previous reviews have discussed the physiological mechanisms and clinical applications of tES, there remains a critical gap in up-to-date syntheses focused on the computational modeling frameworks that enable individualized stimulation optimization. Approach. This review presents a comprehensive overview of recent advances in computational techniques supporting personalized tES. We systematically examine developments in forward modeling for simulating individualized electric fields, as well as inverse modeling approaches for optimizing stimulation parameters. We critically evaluate progress in head modeling pipelines, optimization algorithms, and the integration of multimodal brain data. Main results. Recent advances have substantially accelerated the construction of subject-specific head conductor models and expanded the landscape of optimization methods, including multi-objective optimization and brain network-informed optimization. These advances allow for dynamic and individualized stimulation planning, moving beyond empirical trial-and-error approaches. Significance. By integrating the latest developments in computational modeling for personalized tES, this review highlights current challenges, emerging opportunities, and future directions for achieving precision neuromodulation in both research and clinical contexts.
- oai:arXiv.org:2509.01192v2
- q-bio.NC
- cs.CE
- cs.NE
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mo Wang, Kexin Zheng, Yingyue Xin, Xiang Chen, Yiling Liu, Huichun Luo, Jingsheng Tang, Tifei Yuan, Hongkai Wen, Pengfei Wei, Quanying Liu
-
-
- A weak regularity lemma for polynomials
- https://arxiv.org/abs/2509.21536
- arXiv:2509.21536v2 Announce Type: replace-cross
-Abstract: A regularity lemma for polynomials provides a decomposition in terms of a bounded number of approximately independent polynomials. Such regularity lemmas play an important role in numerous results, yet suffer from the familiar shortcoming of having tower-type bounds or worse. In this paper we design a new, weaker regularity lemma with strong bounds. The new regularity lemma in particular provides means to quantitatively study the curves contained in the image of a polynomial map, which is beyond the reach of standard methods.
- Applications include strong bounds for a problem of Karam on generalized rank, as well as a new method to obtain upper bounds for fan-in parameters in arithmetic circuits. For example, we show that if the image of a polynomial map $\mathbf{P} \colon \mathbb{F}^n \to \mathbb{F}^m$ of degree $d$ does not contain a line, then $\mathbf{P}$ can be computed by a depth-$4$ arithmetic formula with bottom fan-in at most $d/2$ and top fan-in at most $(2m)^{C(d)}$ (with $C(d)=2^{(1+o(1))d}$). One implication of our work is a certain ``barrier'' to arithmetic circuit lower bounds, in terms of the smallest degree of a polynomial curve contained in the image of the given polynomial map.
- oai:arXiv.org:2509.21536v2
- math.CO
- cs.CC
- math.AC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Guy Moshkovitz, Dora Woodruff
-
-
- Multivariate Bernoulli Hoeffding Decomposition: From Theory to Sensitivity Analysis
- https://arxiv.org/abs/2510.07088
- arXiv:2510.07088v3 Announce Type: replace-cross
-Abstract: Understanding the behavior of predictive models with random inputs can be achieved through functional decompositions into sub-models that capture interpretable effects of input groups. Building on recent advances in uncertainty quantification, the existence and uniqueness of a generalized Hoeffding decomposition have been established for correlated input variables, using oblique projections onto suitable functional subspaces. This work focuses on the case of Bernoulli inputs and provides a complete analytical characterization of the decomposition. We show that, in this discrete setting, the associated subspaces are one-dimensional and that the decomposition admits a closed-form representation. One of the main contributions of this study is to generalize the classical Fourier--Walsh--Hadamard decomposition for pseudo-Boolean functions to the correlated case, yielding an oblique version when the underlying distribution is not a product measure, and recovering the standard orthogonal form when independence holds. This explicit structure offers a fully interpretable framework, clarifying the contribution of each input combination and theoretically enabling model reverse engineering. From this formulation, explicit sensitivity measures, such as Sobol' indices and Shapley effects, can be directly derived. Numerical experiments illustrate the practical interest of the approach for decision-support problems involving binary features. The paper concludes with perspectives on extending the methodology to high-dimensional settings and to models involving inputs with finite, non-binary support.
- oai:arXiv.org:2510.07088v3
- stat.ML
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Baptiste Ferrere (EDF R\&D PRISME, IMT, SINCLAIR AI Lab), Nicolas Bousquet (EDF R\&D PRISME, SINCLAIR AI Lab, LPSM), Fabrice Gamboa (IMT, ANITI), Jean-Michel Loubes (IMT, ANITI), Joseph Mur\'e (EDF R\&D PRISME)
-
-
- StutterZero and StutterFormer: End-to-End Speech Conversion for Stuttering Transcription and Correction
- https://arxiv.org/abs/2510.18938
- arXiv:2510.18938v2 Announce Type: replace-cross
-Abstract: Over 70 million people worldwide experience stuttering, yet most automatic speech systems misinterpret disfluent utterances or fail to transcribe them accurately. Existing methods for stutter correction rely on handcrafted feature extraction or multi-stage automatic speech recognition (ASR) and text-to-speech (TTS) pipelines, which separate transcription from audio reconstruction and often amplify distortions. This work introduces StutterZero and StutterFormer, the first end-to-end waveform-to-waveform models that directly convert stuttered speech into fluent speech while jointly predicting its transcription. StutterZero employs a convolutional-bidirectional LSTM encoder-decoder with attention, whereas StutterFormer integrates a dual-stream Transformer with shared acoustic-linguistic representations. Both architectures are trained on paired stuttered-fluent data synthesized from the SEP-28K and LibriStutter corpora and evaluated on unseen speakers from the FluencyBank dataset. Across all benchmarks, StutterZero had a 24% decrease in Word Error Rate (WER) and a 31% improvement in semantic similarity (BERTScore) compared to the leading Whisper-Medium model. StutterFormer achieved better results, with a 28% decrease in WER and a 34% improvement in BERTScore. The results validate the feasibility of direct end-to-end stutter-to-fluent speech conversion, offering new opportunities for inclusive human-computer interaction, speech therapy, and accessibility-oriented AI systems.
- oai:arXiv.org:2510.18938v2
- eess.AS
- cs.AI
- cs.CL
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Qianheng Xu
-
-
- Learning noisy tissue dynamics across time scales
- https://arxiv.org/abs/2510.19090
- arXiv:2510.19090v2 Announce Type: replace-cross
-Abstract: Tissue dynamics play a crucial role in biological processes ranging from inflammation to morphogenesis. However, these noisy multicellular dynamics are notoriously hard to predict. Here, we introduce a biomimetic machine learning framework capable of inferring noisy multicellular dynamics directly from experimental movies. This generative model combines graph neural networks, normalizing flows and WaveNet algorithms to represent tissues as neural stochastic differential equations where cells are edges of an evolving graph. Cell interactions are encoded in a dual signaling graph capable of handling signaling cascades. The dual graph architecture of our neural networks reflects the architecture of the underlying biological tissues, substantially reducing the amount of data needed for training, compared to convolutional or fully-connected neural networks. Taking epithelial tissue experiments as a case study, we show that our model not only captures stochastic cell motion but also predicts the evolution of cell states in their division cycle. Finally, we demonstrate that our method can accurately generate the experimental dynamics of developmental systems, such as the fly wing, and cell signaling processes mediated by stochastic ERK waves, paving the way for its use as a digital twin in bioengineering and clinical contexts.
- oai:arXiv.org:2510.19090v2
- cond-mat.soft
- cs.LG
- physics.bio-ph
- q-bio.QM
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ming Han, John Devany, Michel Fruchart, Margaret L. Gardel, Vincenzo Vitelli
-
-
- A Tverberg-type problem of Kalai: Two negative answers to questions of Alon and Smorodinsky, and the power of disjointness
- https://arxiv.org/abs/2510.20770
- arXiv:2510.20770v2 Announce Type: replace-cross
-Abstract: Let $f_r(d,s_1,\ldots,s_r)$ denote the least integer $n$ such that every $n$-point set $P\subseteq\mathbb{R}^d$ admits a partition $P=P_1\cup\cdots\cup P_r$ with the property that for any choice of $s_i$-convex sets $C_i\supseteq P_i$ $(i\in[r])$ one necessarily has $\bigcap_{i=1}^r C_i\neq\emptyset$, where an $s_i$-convex set means a union of $s_i$ convex sets. A recent breakthrough by Alon and Smorodinsky establishes a general upper bound $f_r(d,s_1,\dots,s_r) = O(dr^2\log r \prod_{i=1}^r s_i\cdot \log(\prod_{i=1}^r s_i))$. Specializing to $r=2$ resolves the problem of Kalai from the 1970s. They further singled out two particularly intriguing questions: whether $f_{2}(2,s,s)$ can be improved from $O(s^2\log s)$ to $O(s)$, and whether $f_r(d,s,\ldots,s)\le Poly(r,d,s)$. We answer both in the negative by showing the exponential lower bound $f_{r}(d,s,\ldots,s)> s^{r}$ for any $r\ge 2$, $s\ge 1$ and $d\ge 2r-2$, which matches the upper bound up to a multiplicative $\log{s}$ factor for sufficiently large $s$. Our construction combines a scalloped planar configuration with a direct product of regular $s$-gons on the high-dimensional torus $(\mathbb{S}^1)^{r-2}$. Perhaps surprisingly, if we additionally require that within each block the $s_i$ convex sets are pairwise disjoint, the picture changes markedly. Let $F_r(d,s_1,\ldots,s_r)$ denote this disjoint-union variant of the extremal function. We show: (1) $F_{2}(2,s,s)=O(s\log s)$ by connecting it to a suitable line-separating function in the plane; (2) when $s$ is large, $F_r(d,s,\ldots,s)$ can be bounded by $O_{r,d}(s^{(1-\frac{1}{2^{d}(d+1)})r+1})$ and $O_{d}(r^{3}\log r\cdot s^{2d+3})$, respectively. This builds on a novel connection between the geometric obstruction and hypergraph Tur\'{a}n numbers, in particular, a variant of the Erd\H{o}s box problem.
- oai:arXiv.org:2510.20770v2
- math.CO
- cs.CG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Wenchong Chen, Gennian Ge, Yang Shu, Zhouningxin Wang, Zixiang Xu
-
-
- Using latent representations to link disjoint longitudinal data for mixed-effects regression
- https://arxiv.org/abs/2510.25531
- arXiv:2510.25531v2 Announce Type: replace-cross
-Abstract: Many rare diseases offer limited established treatment options, leading patients to switch therapies when new medications emerge. To analyze the impact of such treatment switches within the low sample size limitations of rare disease trials, it is important to use all available data sources. This, however, is complicated when usage of measurement instruments changes during the observation period, for example when instruments are adapted to specific age ranges. The resulting disjoint longitudinal data trajectories complicate the application of traditional modeling approaches like mixed-effects regression. We tackle this by mapping observations of each instrument to an aligned low-dimensional temporal trajectory, enabling longitudinal modeling across instruments. Specifically, we employ a set of variational autoencoder architectures to embed item values into a shared latent space for each time point. Temporal disease dynamics and treatment switch effects are then captured through a mixed-effects regression model applied to latent representations. To enable statistical inference, we present a novel statistical testing approach that accounts for the joint parameter estimation of mixed-effects regression and variational autoencoders. The methodology is applied to quantify the impact of treatment switches for patients with spinal muscular atrophy. Here, our approach aligns motor performance items from different measurement instruments for mixed-effects regression and maps estimated effects back to the observed item level to quantify the treatment switch effect. Our approach allows for model selection as well as for assessing effects of treatment switching. The results highlight the potential of modeling in joint latent representations for addressing small data challenges.
- oai:arXiv.org:2510.25531v2
- stat.ML
- cs.AI
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Clemens Sch\"achter, Maren Hackenberg, Michelle Pfaffenlehner, F\'elix B. Tambe-Ndonfack, Thorsten Schmidt, Astrid Pechmann, Janbernd Kirschner, Jan Hasenauer, Harald Binder
-
-
- Bridging the Gap between Empirical Welfare Maximization and Conditional Average Treatment Effect Estimation in Policy Learning
- https://arxiv.org/abs/2510.26723
- arXiv:2510.26723v2 Announce Type: replace-cross
-Abstract: The goal of policy learning is to train a policy function that recommends a treatment given covariates to maximize population welfare. There are two major approaches in policy learning: the empirical welfare maximization (EWM) approach and the plug-in approach. The EWM approach is analogous to a classification problem, where one first builds an estimator of the population welfare, which is a functional of policy functions, and then trains a policy by maximizing the estimated welfare. In contrast, the plug-in approach is based on regression, where one first estimates the conditional average treatment effect (CATE) and then recommends the treatment with the highest estimated outcome. This study bridges the gap between the two approaches by showing that both are based on essentially the same optimization problem. In particular, we prove an exact equivalence between EWM and least squares over a reparameterization of the policy class. As a consequence, the two approaches are interchangeable in several respects and share the same theoretical guarantees under common conditions. Leveraging this equivalence, we propose a regularization method for policy learning. The reduction to least squares yields a smooth surrogate that is typically easier to optimize in practice. At the same time, for many natural policy classes the inherent combinatorial hardness of exact EWM generally remains, so the reduction should be viewed as an optimization aid rather than a universal bypass of NP-hardness.
- oai:arXiv.org:2510.26723v2
- stat.ML
- cs.LG
- econ.EM
- math.ST
- stat.ME
- stat.TH
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Masahiro Kato
-
-
- Stochastic representation of solutions for the parabolic Cauchy problem with variable exponent coefficients
- https://arxiv.org/abs/2511.00773
- arXiv:2511.00773v2 Announce Type: replace-cross
-Abstract: In this work, we prove existence and uniqueness of a bounded viscosity solution for the Cauchy problem of degenerate parabolic equations with variable exponent coefficients. We construct the solution directly using the stochastic representation, then verify it satisfies the Cauchy problem. The corresponding SDE, on the other hand, allows the drift and diffusion coefficients to respond nonlinearly to the current state through the state-dependent variable exponents, and thus, extends the expressive power of classical SDEs to better capture complex dynamics. To validate our theoretical framework, we conduct comprehensive numerical experiments comparing finite difference solutions (Crank-Nicolson on logarithmic grids) with Monte Carlo simulations of the SDE.
- oai:arXiv.org:2511.00773v2
- math.AP
- cs.NA
- math.NA
- math.PR
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mustafa Avci
-
-
- Novelty and Impact of Economics Papers
- https://arxiv.org/abs/2511.01211
- arXiv:2511.01211v2 Announce Type: replace-cross
-Abstract: We propose a framework that recasts scientific novelty not as a single attribute of a paper, but as a reflection of its position within the evolving intellectual landscape. We decompose this position into two orthogonal dimensions: \textit{spatial novelty}, which measures a paper's intellectual distinctiveness from its neighbors, and \textit{temporal novelty}, which captures its engagement with a dynamic research frontier. To operationalize these concepts, we leverage Large Language Models to develop semantic isolation metrics that quantify a paper's location relative to the full-text literature. Applying this framework to a large corpus of economics articles, we uncover a fundamental trade-off: these two dimensions predict systematically different outcomes. Temporal novelty primarily predicts citation counts, whereas spatial novelty predicts disruptive impact. This distinction allows us to construct a typology of semantic neighborhoods, identifying four archetypes associated with distinct and predictable impact profiles. Our findings demonstrate that novelty can be understood as a multidimensional construct whose different forms, reflecting a paper's strategic location, have measurable and fundamentally distinct consequences for scientific progress.
- oai:arXiv.org:2511.01211v2
- econ.GN
- cs.CE
- cs.CL
- cs.DL
- q-fin.EC
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Chaofeng Wu
-
-
- LA-MARRVEL: A Knowledge-Grounded and Language-Aware LLM Reranker for AI-MARRVEL in Rare Disease Diagnosis
- https://arxiv.org/abs/2511.02263
- arXiv:2511.02263v2 Announce Type: replace-cross
-Abstract: Diagnosing rare diseases often requires connecting variant-bearing genes to evidence written as unstructured clinical prose, which established pipelines still leave for clinicians to reconcile manually. To this end, we introduce LA-MARRVEL, a knowledge-grounded and language-aware reranking layer that operates on top of AI-MARRVEL: it supplies expert-engineered context, queries a large language model multiple times, and aggregates the resulting partial rankings with a ranked voting method to produce a stable, explainable gene ranking. Evaluated on three real-world cohorts (BG, DDD, UDN), LA-MARRVEL consistently improves Recall@K over AI-MARRVEL and established phenotype-driven tools such as Exomiser and LIRICAL, with especially large gains on cases where the first-stage ranker placed the causal gene lower. Each ranked gene is accompanied by LLM-generated reasoning that integrates phenotypic, inheritance, and variant-level evidence, thereby making the output more interpretable and facilitating clinical review.
- oai:arXiv.org:2511.02263v2
- q-bio.GN
- cs.AI
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Jaeyeon Lee, Hyun-Hwan Jeong, Zhandong Liu
-
-
- RIS-Assisted 3D Spherical Splatting for Object Composition Visualization using Detection Transformers
- https://arxiv.org/abs/2511.02573
- arXiv:2511.02573v2 Announce Type: replace-cross
-Abstract: The pursuit of immersive and structurally aware multimedia experiences has intensified interest in sensing modalities that reconstruct objects beyond the limits of visible light. Conventional optical pipelines degrade under occlusion or low illumination, motivating the use of radio-frequency (RF) sensing, whose electromagnetic waves penetrate materials and encode both geometric and compositional information. Yet, uncontrolled multipath propagation restricts reconstruction accuracy. Recent advances in Programmable Wireless Environments (PWEs) mitigate this limitation by enabling software-defined manipulation of propagation through Reconfigurable Intelligent Surfaces (RISs), thereby providing controllable illumination diversity. Building on this capability, this work introduces a PWE-driven RF framework for three-dimensional object reconstruction using material-aware spherical primitives. The proposed approach combines RIS-enabled field synthesis with a Detection Transformer (DETR) that infers spatial and material parameters directly from extracted RF features. Simulation results confirm the framework's ability to approximate object geometries and classify material composition with an overall accuracy of 79.35%, marking an initial step toward programmable and physically grounded RF-based 3D object composition visualization.
- oai:arXiv.org:2511.02573v2
- eess.SP
- cs.LG
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Anastasios T. Sotiropoulos, Stavros Tsimpoukis, Dimitrios Tyrovolas, Sotiris Ioannidis, Panagiotis D. Diamantoulakis, George K. Karagiannidis, Christos K. Liaskos
-
-
- An accelerated primal-dual flow for linearly constrained multiobjective optimization
- https://arxiv.org/abs/2511.02751
- arXiv:2511.02751v2 Announce Type: replace-cross
-Abstract: In this paper, we propose a continuous-time primal-dual approach for linearly constrained multiobjective optimization problems. A novel dynamical model, called accelerated multiobjective primal-dual flow, is presented with a second-order equation for the primal variable and a first-order equation for the dual variable. It can be viewed as an extension of the accelerated primal-dual flow by Luo [arXiv:2109.12604, 2021] for the single objective case. To facilitate the convergence rate analysis, we introduce a new merit function, which motivates the use of the feasibility violation and the objective gap to measure the weakly Pareto optimality. By using a proper Lyapunov function, we establish the exponential decay rate in the continuous level. After that, we consider an implicit-explicit scheme, which yields an accelerated multiobjective primal-dual method with a quadratic subproblem, and prove the sublinear rates of the feasibility violation and the objective gap, under the convex case and the strongly convex case, respectively. Numerical results are provided to demonstrate the performance of the proposed method.
- oai:arXiv.org:2511.02751v2
- math.OC
- cs.NA
- math.NA
- Thu, 06 Nov 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Luo, Qiaoyuan Shu, Xinmin Yang
-