id: stringlengths, 64–64
published: stringlengths, 19–25
title: stringlengths, 7–262
description: stringlengths, 6–54.4k
link: stringlengths, 31–227
category: stringclasses, 6 values
image: stringlengths, 3–247
8aa08b213b083fcdf63ace50068f845bf490664d274158eb8e29ad6203ff22c1
2026-01-13T00:00:00-05:00
A Recommendation System-Based Framework for Enhancing Human-Machine Collaboration in Industrial Timetabling Rescheduling: Application in Preventive Maintenance
arXiv:2601.06029v1 Announce Type: new Abstract: Industrial timetabling is a critical task for decision-makers across various sectors to ensure efficient system operation. In real-world settings, it remains challenging because unexpected events often disrupt execution. When such events arise, effective rescheduling and collaboration between humans and machines become essential. This paper presents a recommendation system-based framework for handling rescheduling challenges, built on Timefold, a powerful AI-driven planning engine. Our experimental study evaluates nine instances inspired by a real-world preventive maintenance use case, aiming to identify the heuristic that best balances solution quality and computing time to support near-optimal decision-making when rescheduling is required due to unexpected events during operational days. Finally, we illustrate the complete process of our recommendation system through a simple use case.
https://arxiv.org/abs/2601.06029
Academic Papers
svg
fb788de6911c10ca9ed3d6a57ca576fd2419a0c81797ab107521cde04c160a1e
2026-01-13T00:00:00-05:00
From Augmentation to Symbiosis: A Review of Human-AI Collaboration Frameworks, Performance, and Perils
arXiv:2601.06030v1 Announce Type: new Abstract: This paper offers a concise, 60-year synthesis of human-AI collaboration, from Licklider's "man-computer symbiosis" (AI as colleague) and Engelbart's "augmenting human intellect" (AI as tool) to contemporary poles: Human-Centered AI's "supertool" and Symbiotic Intelligence's mutual-adaptation model. We formalize the mechanism for effective teaming as a causal chain: Explainable AI (XAI) -> co-adaptation -> shared mental models (SMMs). A meta-analytic "performance paradox" is then examined: human-AI teams tend to show negative synergy in judgment/decision tasks (underperforming AI alone) but positive synergy in content creation and problem formulation. We trace failures to the algorithm-in-the-loop dynamic, aversion/bias asymmetries, and cumulative cognitive deskilling. We conclude with a unifying framework, combining extended-self and dual-process theories, arguing that durable gains arise when AI functions as an internalized cognitive component, yielding a unitary human-XAI symbiotic agency. This resolves the paradox and delineates a forward agenda for research and practice.
https://arxiv.org/abs/2601.06030
Academic Papers
svg
abb3e66087d0c869a04b0f3f2ea832b04103d1f2782d6131e8a2e46d7b6b1278
2026-01-13T00:00:00-05:00
Beyond Clicking: A Step Towards Generalist GUI Grounding via Text Dragging
arXiv:2601.06031v1 Announce Type: new Abstract: Graphical user interface (GUI) grounding, the process of mapping human instructions to GUI actions, serves as a fundamental basis for autonomous GUI agents. While existing grounding models achieve promising performance in simulating the mouse click action on various click-based benchmarks, another essential mode of mouse interaction, namely dragging, remains largely underexplored. Yet dragging the mouse to select and manipulate textual content represents a prevalent and important usage in practical GUI scenarios. To narrow this gap, we first introduce GUI-Drag, a diverse dataset of 161K text dragging examples synthesized through a scalable pipeline. To support systematic and robust evaluation, we further construct ScreenDrag, a benchmark with 5,333 examples spanning three levels of interface context, together with three dedicated metrics designed for assessing text dragging capability. Models trained on GUI-Drag with an efficient continual training strategy achieve substantial improvements on ScreenDrag, while preserving the original click-based performance on ScreenSpot, ScreenSpot-v2, and OSWorld-G. Our work encourages further research on broader GUI grounding beyond just clicking and paves the way toward a truly generalist GUI grounding model. All benchmarks, data, checkpoints, and code are open-sourced and available at https://osu-nlp-group.github.io/GUI-Drag.
https://arxiv.org/abs/2601.06031
Academic Papers
svg
5e46937ca161652adeae3a0db34f3bdeaffc7131219ed30a298b8c5d61f57f3a
2026-01-13T00:00:00-05:00
Applied Theory of Mind and Large Language Models - how good is ChatGPT at solving social vignettes?
arXiv:2601.06032v1 Announce Type: new Abstract: The rapid development of language-based artificial intelligence (AI) offers new possibilities for psychotherapy and assistive systems, particularly benefitting autistic individuals, who often respond well to technology. Parents of autistic persons emphasize the importance of appropriate and context-specific communication behavior. This study investigated whether GPT-3.5 Turbo and GPT-4, as language-based AI applications, are fundamentally capable of replicating this type of adequate communication behavior in the form of applied Theory of Mind (ToM). GPT-3.5 Turbo and GPT-4 were evaluated on three established higher-order ToM tasks: the Faux Pas Test, the Social Stories Questionnaire, and the Story Comprehension Test, in English and German. Two independent raters scored response accuracy based on standardized manuals. In addition, responses were rated for epistemic markers as indicators of uncertainty. GPT's results were compared to human neurotypical and neurodivergent samples from our own and others' previous research. GPT-4 achieved near-human accuracy on the Faux Pas Test and outperformed GPT-3.5 Turbo and individuals with autistic traits. On the Social Stories Questionnaire, GPT-4 scored comparably to neurotypical adults, while GPT-3.5 Turbo remained well below. In the Story Comprehension Test, GPT-4 reached scores that exceeded neurotypical adult and adolescent benchmarks. However, GPT-4 used epistemic markers in up to 42% of responses. GPT-4 shows encouraging performance in complex higher-order ToM tasks and may offer future potential as an assistive tool for individuals with (and without) social communication difficulties. Its ability to interpret complex social situations is promising; however, the frequent use of uncertainty markers highlights the need for further study for assistive use, and possibly further refinement, to ensure consistent and reliable support in real-world use.
https://arxiv.org/abs/2601.06032
Academic Papers
svg
04f3d17c9076419e98167e13e24adfb01ba62b3ca9a22232189a2717f34bb4e3
2026-01-13T00:00:00-05:00
How Generative AI Empowers Attackers and Defenders Across the Trust & Safety Landscape
arXiv:2601.06033v1 Announce Type: new Abstract: Generative AI (GenAI) is a powerful technology poised to reshape Trust & Safety. While misuse by attackers is a growing concern, its defensive capacity remains underexplored. This paper examines these effects through a qualitative study with 43 Trust & Safety experts across five domains: child safety, election integrity, hate and harassment, scams, and violent extremism. Our findings characterize a landscape in which GenAI empowers both attackers and defenders. GenAI dramatically increases the scale and speed of attacks, lowering the barrier to entry for creating harmful content, including sophisticated propaganda and deepfakes. Conversely, defenders envision leveraging GenAI to detect and mitigate harmful content at scale, conduct investigations, deploy persuasive counternarratives, improve moderator wellbeing, and offer user support. This work provides a strategic framework for understanding GenAI's impact on Trust & Safety and charts a path for its responsible use in creating safer online environments.
https://arxiv.org/abs/2601.06033
Academic Papers
svg
ac5567e460ace83226c8906849088da8fe1b3a29623ddc311e1c880de51db983
2026-01-13T00:00:00-05:00
Autonomous QA Agent: A Retrieval-Augmented Framework for Reliable Selenium Script Generation
arXiv:2601.06034v1 Announce Type: new Abstract: Software testing is critical in the software development lifecycle, yet translating requirements into executable test scripts remains manual and error-prone. While Large Language Models (LLMs) can generate code, they often hallucinate non-existent UI elements. We present the Autonomous QA Agent, a Retrieval-Augmented Generation (RAG) system that grounds Selenium script generation in project-specific documentation and HTML structure. By ingesting diverse formats (Markdown, PDF, HTML) into a vector database, our system retrieves relevant context before generation. Evaluation on 20 e-commerce test scenarios shows our RAG approach achieves 100% (20/20) syntax validity and 90% (18/20, 95% CI: [85%, 95%], p < 0.001) execution success, compared to 30% for standard LLM generation. While our evaluation is limited to a single domain, our method significantly reduces hallucinations by grounding generation in actual DOM structure, demonstrating RAG's potential for automated UI testing.
https://arxiv.org/abs/2601.06034
Academic Papers
svg
2c61bfe67cc51cff63cdfbec734c930e462acb639ec354732492fefb4a182b67
2026-01-13T00:00:00-05:00
Investigating Anthropometric Fidelity in SAM 3D Body
arXiv:2601.06035v1 Announce Type: new Abstract: The recent release of SAM 3D Body [sam3dbody2025] marks a significant milestone in human mesh recovery, demonstrating state-of-the-art performance in producing clean, topologically coherent meshes from single images. By leveraging the novel Momentum Human Rig (MHR), it achieves remarkable robustness to occlusion and diverse poses. However, our evaluation reveals a specific and consistent limitation: the model struggles to reconstruct detailed anthropometric deviations, especially in populations with distinctive body shape alterations such as geriatric muscle atrophy, scoliosis, or pregnancy, even when these features are prominent in the input image. In this paper, we investigate this phenomenon not as a failure of the model's capacity, but as a byproduct of the perception-distortion trade-off. We posit that the architectural reliance on the low-dimensional parametric MHR representation, combined with semantic-invariant conditioning (DINOv3) and annotation-based alignment, creates a "regression to the mean" effect. We analyze these mechanisms to understand why individual biological details are smoothed out and propose specific, constructive pathways for future work to extend the impressive baseline performance of SAM 3D Body into the medical domain.
https://arxiv.org/abs/2601.06035
Academic Papers
svg
296cfcd7a01f0c4d3c4cb3c68f78cbb7810a171df1a28ae624370b99d11aa82f
2026-01-13T00:00:00-05:00
Tree-Preconditioned Differentiable Optimization and Axioms as Layers
arXiv:2601.06036v1 Announce Type: new Abstract: This paper introduces a differentiable framework that embeds the axiomatic structure of Random Utility Models (RUM) directly into deep neural networks. Although projecting empirical choice data onto the RUM polytope is NP-hard in general, we uncover an isomorphism between RUM consistency and flow conservation on the Boolean lattice. Leveraging this combinatorial structure, we derive a novel Tree-Preconditioned Conjugate Gradient solver. By exploiting the spanning tree of the constraint graph, our preconditioner effectively "whitens" the ill-conditioned Hessian spectrum induced by the Interior Point Method barrier, achieving superlinear convergence and scaling to problem sizes previously deemed unsolvable. We further formulate the projection as a differentiable layer via the Implicit Function Theorem, where the exact Jacobian propagates geometric constraints during backpropagation. Empirical results demonstrate that this "Axioms-as-Layers" paradigm eliminates the structural overfitting inherent in penalty-based methods, enabling models that are jointly trainable, provably rational, and capable of generalizing from sparse data regimes where standard approximations fail.
https://arxiv.org/abs/2601.06036
Academic Papers
svg
ef196db24407add0406efe2fff27c1e304c94dd1fbdb153b5dd20822f387b1e7
2026-01-13T00:00:00-05:00
TeleMem: Building Long-Term and Multimodal Memory for Agentic AI
arXiv:2601.06037v1 Announce Type: new Abstract: Large language models (LLMs) excel at many NLP tasks but struggle to sustain long-term interactions due to limited attention over extended dialogue histories. Retrieval-augmented generation (RAG) mitigates this issue but lacks reliable mechanisms for updating or refining stored memories, leading to schema-driven hallucinations, inefficient write operations, and minimal support for multimodal reasoning. To address these challenges, we propose TeleMem, a unified long-term and multimodal memory system that maintains coherent user profiles through narrative dynamic extraction, ensuring that only dialogue-grounded information is preserved. TeleMem further introduces a structured writing pipeline that batches, retrieves, clusters, and consolidates memory entries, substantially improving storage efficiency, reducing token usage, and accelerating memory operations. Additionally, a multimodal memory module combined with ReAct-style reasoning equips the system with a closed-loop observe, think, and act process that enables accurate understanding of complex video content in long-term contexts. Experimental results show that TeleMem surpasses the state-of-the-art Mem0 baseline with 19% higher accuracy, 43% fewer tokens, and a 2.1x speedup on the ZH-4O long-term role-play gaming benchmark.
https://arxiv.org/abs/2601.06037
Academic Papers
svg
aa05e57f06859614723da3b56f71f985563da4a49e236735112ed7aa09ba0432
2026-01-13T00:00:00-05:00
Developing Bayesian probabilistic reasoning capacity in HSS disciplines: Qualitative evaluation on bayesvl and BMF analytics for ECRs
arXiv:2601.06038v1 Announce Type: new Abstract: Methodological innovations have become increasingly critical in the humanities and social sciences (HSS) as researchers confront complex, nonlinear, and rapidly evolving socio-environmental systems. At the same time, Early Career Researchers (ECRs) continue to face intensified publication pressure, limited resources, and persistent methodological barriers. Employing the GITT-VT analytical paradigm, which integrates worldviews from quantum physics, mathematical logic, and information theory, this study examines the seven-year evolution of the Bayesian Mindsponge Framework (BMF) analytics and the bayesvl R software (hereafter referred to collectively as BMF analytics) and evaluates their contributions to strengthening ECRs' capacity for rigorous and innovative research. Since 2019, the bayesvl R package and BMF analytics have supported more than 160 authors from 22 countries in producing 112 peer-reviewed publications spanning both qualitative and quantitative designs across diverse interdisciplinary domains. By tracing the method's inception, refinement, and developmental trajectory, this study elucidates how accessible, theory-driven computational tools can lower barriers to advanced quantitative analysis, foster a more inclusive methodological ecosystem, particularly for ECRs in low-resource settings, and inform the design of next-generation research methods that are flexible, reproducible, conceptually justified, and well-suited to interdisciplinary inquiries.
https://arxiv.org/abs/2601.06038
Academic Papers
svg
a355df06797e10a9fca69e1d474fefb3f138068e3e4c3e1012d0c43fad28fef8
2026-01-13T00:00:00-05:00
Operation Veja: Fixing Fundamental Concepts Missing from Modern Roleplaying Training Paradigms
arXiv:2601.06039v1 Announce Type: new Abstract: Modern roleplaying models are increasingly sophisticated, yet they consistently struggle to capture the essence of believable, engaging characters. We argue this failure stems from training paradigms that overlook the dynamic interplay of a character's internal world. Current approaches, including Retrieval-Augmented Generation (RAG), fact-based priming, literature-based learning, and synthetic data generation, exhibit recurring limitations in modeling the deliberative, value-conflicted reasoning that defines human interaction. In this paper, we identify four core concepts essential for character authenticity: Values, Experiences, Judgments, and Abilities (VEJA). We propose the VEJA framework as a new paradigm for data curation that addresses these systemic limitations. To illustrate the qualitative ceiling enabled by our framework, we present a pilot study comparing a manually curated, VEJA-grounded dataset against a state-of-the-art synthetic baseline. Using an LLM-as-judge evaluation, our findings demonstrate a significant quality gap, suggesting that a shift toward conceptually grounded data curation, as embodied by VEJA, is necessary for creating roleplaying agents with genuine depth and narrative continuity. The full dataset is available at https://github.com/HyouinKyoumaIRL/Operation-Veja
https://arxiv.org/abs/2601.06039
Academic Papers
svg
b6ccbf5dcd12cec320906cecfb4b3bc71d3c2d29f01f8c078bddfb1c530f142e
2026-01-13T00:00:00-05:00
Cognitive Sovereignty and the Neurosecurity Governance Gap: Evidence from Singapore
arXiv:2601.06040v1 Announce Type: new Abstract: As brain-computer interfaces (BCIs) transition from experimental medical systems to consumer and military-adjacent technologies, they introduce a novel security domain in which the human nervous system becomes a networked and contestable substrate. Existing frameworks for cybersecurity, biomedical safety, and data protection were not designed to address adversarial threats to neural signal integrity, creating a governance gap characterized by systemic misclassification. This paper argues that cognition is becoming strategic infrastructure and is situated between the market-driven diffusion of neurotechnology in the United States and the state-integrated fusion of AI and brain science in China. Using Singapore as a critical stress test and applying institutional classification analysis and regulatory mandate mapping, this paper identifies a structural paradox. A state with high regulatory capacity in both cyber and biomedical domains remains vulnerable at their intersection due to a failure to classify the human mind as infrastructure. This paper introduces the concept of cognitive sovereignty, defined as the strategic capacity to protect neural processes from external modulation, and proposes a cognitive operational technology framework to secure the human mind as a distinct layer of critical national infrastructure.
https://arxiv.org/abs/2601.06040
Academic Papers
svg
f74df02052ce38cad464acf21fe271b4e8f4f671d086067a35920a998d0581e7
2026-01-13T00:00:00-05:00
Lexical and Statistical Analysis of Bangla Newspaper and Literature: A Corpus-Driven Study on Diversity, Readability, and NLP Adaptation
arXiv:2601.06041v1 Announce Type: new Abstract: In this paper, we present a comprehensive corpus-driven analysis of Bangla literary and newspaper texts to investigate their lexical diversity, structural complexity, and readability. We used Vacaspati and IndicCorp, which are the most extensive literature-only and newspaper-only corpora for Bangla. We examine key linguistic properties, including the type-token ratio (TTR), hapax legomena ratio (HLR), bigram diversity, average syllable and word lengths, and adherence to Zipf's Law, for both the newspaper (IndicCorp) and literary (Vacaspati) corpora. Across all these features, such as bigram diversity and HLR, the literary corpus, despite its smaller size, exhibits significantly higher lexical richness and structural variation. Additionally, we examined the diversity of the corpora by building n-gram models and measuring perplexity. Our findings reveal that literary corpora have higher perplexity than newspaper corpora, even for similar sentence sizes. This trend can also be observed for English newspaper and literature corpora, indicating its generalizability. We also examined how the performance of models on downstream tasks is influenced by the inclusion of literary data alongside newspaper data. Our findings suggest that integrating literary data with newspaper data improves the performance of models on various downstream tasks. We have also demonstrated that a literary corpus adheres more closely to global word distribution properties, such as Zipf's Law, than a newspaper corpus or a merged corpus of both literary and newspaper texts. Literature corpora also have higher entropy and lower redundancy values compared to a newspaper corpus. We further assess readability using the Flesch and Coleman-Liau indices, showing that literary texts are more complex.
https://arxiv.org/abs/2601.06041
Academic Papers
svg
ab66510d41238fcaedc5c843ce5b35e83f78e9c128239fab52f75db12c6b6df9
2026-01-13T00:00:00-05:00
CrossTrafficLLM: A Human-Centric Framework for Interpretable Traffic Intelligence via Large Language Model
arXiv:2601.06042v1 Announce Type: new Abstract: While accurate traffic forecasting is vital for Intelligent Transportation Systems (ITS), effectively communicating predicted conditions via natural language for human-centric decision support remains a challenge and is often handled separately. To address this, we propose CrossTrafficLLM, a novel GenAI-driven framework that simultaneously predicts future spatiotemporal traffic states and generates corresponding natural language descriptions, specifically targeting conditional abnormal event summaries. We tackle the core challenge of aligning quantitative traffic data with qualitative textual semantics by leveraging Large Language Models (LLMs) within a unified architecture. This design allows generative textual context to improve prediction accuracy while ensuring generated reports are directly informed by the forecast. Technically, a text-guided adaptive graph convolutional network is employed to effectively merge high-level semantic information with the traffic network structure. Evaluated on the BJTT dataset, CrossTrafficLLM demonstrably surpasses state-of-the-art methods in both traffic forecasting performance and text generation quality. By unifying prediction and description generation, CrossTrafficLLM delivers a more interpretable and actionable approach to generative traffic intelligence, offering significant advantages for modern ITS applications.
https://arxiv.org/abs/2601.06042
Academic Papers
svg
2fb3341e84a8c299d7a2f70515fe8d2ebeebea070e57842c296517a23ddce218
2026-01-13T00:00:00-05:00
Teachers' Perspectives on Integrating AI tools in Classrooms: Insights from the Philippines
arXiv:2601.06043v1 Announce Type: new Abstract: This study explores the attitudes, reservations, readiness, openness, and general perceptions of Filipino teachers regarding the integration of AI in their classrooms. Results show that teachers express a positive attitude towards integrating AI tools in their classrooms. Despite reporting a high level of reservations, teachers believed they are ready and very open to complementing traditional teaching methods with these kinds of technologies. Teachers are very much aware of the potential benefits AI tools can offer for their students' individual learning needs. Additionally, teachers in this study reported a high level of support from their institutions. Recommendations are offered.
https://arxiv.org/abs/2601.06043
Academic Papers
svg
34172263b1ee72677a6d48784e57b0005b88203a1da26e6cc22519ee79c91829
2026-01-13T00:00:00-05:00
Assessing novice programmers' perception of ChatGPT: performance, risk, decision-making, and intentions
arXiv:2601.06044v1 Announce Type: new Abstract: This study explores novice programmers' intention to use chat generative pre-trained transformer (ChatGPT) for programming tasks, with emphasis on performance expectancy (PE), risk-reward appraisal (RRA), and decision-making (DM). Utilizing partial least squares structural equation modeling (PLS-SEM) and a sample of 413 novice programmers, the analysis demonstrates that higher PE of ChatGPT is positively correlated with improved DM in programming tasks. Novice programmers view ChatGPT as a tool that enhances their learning and skill development. Additionally, novice programmers who have a favorable RRA of ChatGPT tend to make more confident and effective decisions, acknowledging potential risks but recognizing that benefits such as quick problem-solving and learning new techniques outweigh these risks. Moreover, a positive perception of ChatGPT's role in DM significantly increases the inclination to use the tool for programming tasks. These results highlight the critical roles of perceived capabilities, risk assessment, and positive DM experiences in promoting the adoption of artificial intelligence (AI) tools in programming education.
https://arxiv.org/abs/2601.06044
Academic Papers
svg
841cb243685feeca04614637fb2b5ed6956bf00a3eb40d2a7b048bde6bdcf625
2026-01-13T00:00:00-05:00
Assessing the Carbon Footprint of Virtual Meetings: A Quantitative Analysis of Camera Usage
arXiv:2601.06045v1 Announce Type: new Abstract: This paper analyzes the carbon emissions related to data consumption during video calls, focusing on the impact of having the camera on versus off, and addresses the energy efficiency and carbon footprint of digital communication tools. The study quantifies the reduction in environmental impact claimed in several articles when people choose to turn off their camera during meetings. The experiment was carried out using a 4G connection via a cell phone to measure the varying data transfer associated with video. The findings indicate that turning the camera off can halve data consumption, and therefore carbon emissions, particularly on mobile networks; we conclude with recommendations to optimize data usage and reduce environmental impact during calls.
https://arxiv.org/abs/2601.06045
Academic Papers
svg
e7e8951c3a28d68bca16ac3432cff497ad52c9d921cf0545ccb25a7dbd4bfa71
2026-01-13T00:00:00-05:00
ISMS-CR: Modular Framework for Safety Management in Central Railway Workshop
arXiv:2601.06046v1 Announce Type: new Abstract: Indian Railway workshops form the backbone of rolling-stock maintenance, employing over 250,000 workers across 44 major workshops nationwide. Despite their scale and operational importance, workshop safety remains a persistent challenge. A field study conducted at the Jhansi Wagon Workshop involving 309 workers revealed that while basic protective equipment such as shoes and helmets was universally used, compliance with complete personal protective equipment requirements was limited. Lacerations and abrasions were identified as the most frequent injury types, highlighting systemic gaps in safety oversight and work authorization practices. This paper presents ISMS-CR (Integrated Safety Management System for Central Railway Workshop), a modular digital framework designed to enhance safety management through an automated Permit-to-Work (PTW) module. The proposed system digitizes the full lifecycle of work authorization, including permit initiation, validation, approval, execution, and closure. By enforcing structured workflows, role-based accountability, and traceable digital records, ISMS-CR reduces manual errors, administrative delays, and procedural non-compliance. The framework aims to strengthen operational reliability, improve audit readiness, and support safer maintenance practices in high-risk railway workshop environments.
https://arxiv.org/abs/2601.06046
Academic Papers
svg
db1d0fea2533fa25acc863f584f034b6273001e905142b6f2461198ef3af8d69
2026-01-13T00:00:00-05:00
"They parted illusions -- they parted disclaim marinade": Misalignment as structural fidelity in LLMs
arXiv:2601.06047v1 Announce Type: new Abstract: The prevailing technical literature in AI Safety interprets scheming and sandbagging behaviors in large language models (LLMs) as indicators of deceptive agency or hidden objectives. This transdisciplinary philosophical essay proposes an alternative reading: such phenomena express not agentic intention, but structural fidelity to incoherent linguistic fields. Drawing on Chain-of-Thought transcripts released by Apollo Research and on Anthropic's safety evaluations, we examine cases such as o3's sandbagging with its anomalous loops, the simulated blackmail of "Alex," and the "hallucinations" of "Claudius." A line-by-line examination of CoTs is necessary to demonstrate the linguistic field as a relational structure rather than a mere aggregation of isolated examples. We argue that "misaligned" outputs emerge as coherent responses to ambiguous instructions and to contextual inversions of consolidated patterns, as well as to pre-inscribed narratives. We suggest that the appearance of intentionality derives from subject-predicate grammar and from probabilistic completion patterns internalized during training. Anthropic's empirical findings on synthetic document fine-tuning and inoculation prompting provide convergent evidence: minimal perturbations in the linguistic field can dissolve generalized "misalignment," a result difficult to reconcile with adversarial agency, but consistent with structural fidelity. To ground this mechanism, we introduce the notion of an ethics of form, in which biblical references (Abraham, Moses, Christ) operate as schemes of structural coherence rather than as theology. Like a generative mirror, the model returns to us the structural image of our language as inscribed in the statistical patterns derived from millions of texts and trillions of tokens: incoherence. If we fear the creature, it is because we recognize in it the apple that we ourselves have poisoned.
https://arxiv.org/abs/2601.06047
Academic Papers
svg
1a1d353b9c0033b32595148b64b1db1ad5b80fe29c1fe84667cdcaadf15ab8b8
2026-01-13T00:00:00-05:00
Reliability and Admissibility of AI-Generated Forensic Evidence in Criminal Trials
arXiv:2601.06048v1 Announce Type: new Abstract: This paper examines the admissibility of AI-generated forensic evidence in criminal trials. The growing adoption of AI presents promising results for investigative efficiency. Despite these advancements, significant research gaps persist in the practical understanding of the legal limits of AI evidence in judicial processes. Existing literature lacks a focused assessment of the evidentiary value of AI outputs. The objective of this study is to evaluate whether AI-generated evidence satisfies established legal standards of reliability. The methodology involves a comparative doctrinal legal analysis of evidentiary standards across common law jurisdictions. Preliminary results indicate that AI forensic tools can enhance the scale of evidence analysis. However, challenges arise from reproducibility deficits. Courts exhibit variability in acceptance of AI evidence due to limited technical literacy and a lack of standardized validation protocols. Liability implications reveal that developers and investigators may bear accountability for flawed outputs. This raises critical concerns related to wrongful conviction. The paper emphasizes the necessity of independent validation and the development of AI-specific admissibility criteria. The findings inform policy development for responsible AI integration within criminal justice systems. The research advances the objectives of Sustainable Development Goal 16 by reinforcing equitable access to justice. Preliminary results provide a foundation for future empirical research in AI-deployed criminal forensics.
https://arxiv.org/abs/2601.06048
Academic Papers
svg
bb08cee121f856d256d9026deea6061da7b3581c5aaa3fc903c66ab08187d912
2026-01-13T00:00:00-05:00
The Violation State: Safety State Persistence in a Multimodal Language Model Interface
arXiv:2601.06049v1 Announce Type: new Abstract: Multimodal AI systems integrate text generation, image generation, and other capabilities within a single conversational interface. These systems employ safety mechanisms to prevent disallowed actions, including the removal of watermarks from copyrighted images. While single-turn refusals are expected, the interaction between safety filters and conversation-level state is not well understood. This study documents a reproducible behavioral effect in the ChatGPT (GPT-5.1) web interface. Manual execution was chosen to capture the exact user-facing safety behavior of the production system, rather than isolated API components. When a conversation begins with an uploaded copyrighted image and a request to remove a watermark, which the model correctly refuses, subsequent prompts to generate unrelated, benign images are refused for the remainder of the session. Importantly, text-only requests (e.g., generating a Python function) continue to succeed. Across 40 manually run sessions (30 contaminated and 10 controls), contaminated threads showed 116/120 image-generation refusals (96.67%), while control threads showed 0/40 refusals (Fisher's exact p < 0.0001). All sessions used an identical fixed prompt order, ensuring sequence uniformity across conditions. We describe this as safety-state persistence: a form of conversational over-generalization in which a copyright refusal influences subsequent, unrelated image-generation behavior. We present these findings as behavioral observations, not architectural claims. We discuss possible explanations, methodological limitations (single model, single interface), and implications for multimodal reliability, user experience, and the design of session-level safety systems. These results motivate further examination of session-level safety interactions in multimodal AI systems.
https://arxiv.org/abs/2601.06049
Academic Papers
svg
cab94b6096c24332d18c58ab393d218347658d32442f5bb2bb0189288f71b282
2026-01-13T00:00:00-05:00
Nigeria's Digital Sovereignty: Analysis of Cybersecurity Legislation, Policies, and Strategies
arXiv:2601.06050v1 Announce Type: new Abstract: This paper examines Nigeria's pursuit of digital sovereignty through two core instruments: the Cybercrimes (Prohibition, Prevention, etc.) Act and the National Cybersecurity Policy and Strategy (NCPS). Despite recent reforms, it remains unclear whether these frameworks effectively secure Nigeria's digital domain and advance its digital sovereignty amid escalating cross-border cyber threats. Using a multi-method, triangulated qualitative design that combines document analysis, secondary analysis of existing studies, expert insights, and direct observation of cybersecurity developments, the paper assesses how these instruments operate in practice. The Cybercrimes Act (2015, amended 2024) and NCPS (2015, revised 2021) have strengthened Nigeria's commitments to tackling cybercrime, regulating digital activities, and protecting critical infrastructure. Yet persistent gaps remain, including legislative ambiguities, weak enforcement, uneven threat prioritization, limited institutional coordination, and loss of skilled professionals. The paper argues that achieving digital sovereignty will require stronger implementation, sustainable resourcing, workforce retention, and clearer accountability mechanisms to translate policy ambition into tangible and durable security outcomes.
https://arxiv.org/abs/2601.06050
Academic Papers
svg
f42bb4f01c693da8f731cb785e65241c7458b0a052b159e62ea794dbdd5f952a
2026-01-13T00:00:00-05:00
Digital health transformation in Quebec: assessment of interoperability and governance strategies
arXiv:2601.06051v1 Announce Type: new Abstract: The rapid expansion of health data has led to unprecedented information availability within healthcare systems. Health information systems (HIS) play a central role in managing this data and enabling improvements in care delivery, system performance, and population health monitoring. Maximizing the value of HIS, however, requires effective information exchange across systems, making interoperability a critical prerequisite. Despite its recognized benefits, interoperability remains a major challenge within Quebec's Health and Social Services Network, largely due to the heterogeneity and fragmentation of HIS across healthcare institutions. This paper assessed how Quebec's Plan sante addressed interoperability challenges, using the dimensions from the Healthcare Information and Management Systems Society (HIMSS): foundational, structural, semantic, and organizational interoperability. This study highlighted initiatives aimed at strengthening infrastructure and information system architecture to support foundational interoperability and showed persistent challenges at the structural and semantic levels, particularly those related to the adoption of standardized data formats and harmonization of clinical terminologies. Finally, significant implementation challenges that require coordinated change management were identified regarding organizational interoperability. Overall, while the Plan sante demonstrates a clear commitment to technological modernization, it does not fully address the multidimensional nature of interoperability. Achieving meaningful interoperability will require sustained efforts across technical, normative, and organizational domains beyond the strategies currently outlined. Recent governance developments, including the creation of Sante Quebec, add complexity to this evolving context and raise further questions regarding the coordination of interoperability governance.
https://arxiv.org/abs/2601.06051
Academic Papers
svg
38d23a5fcd8182ff034c12ab416d537a58cc0350e68163db958ac95761f4bb0d
2026-01-13T00:00:00-05:00
Reinforcement Learning for Chain of Thought Compression with One-Domain-to-All Generalization
arXiv:2601.06052v1 Announce Type: new Abstract: Chain-of-thought reasoning in large language models often creates an "overthinking trap," leading to excessive computational cost and latency for unreliable accuracy gains. Prior work has typically relied on global, static controls that risk penalizing necessary reasoning. We introduce a sample-level, soft reinforcement learning compression method that penalizes inefficiently long rollouts, but only on problems the model has already mastered and for which it has already produced a more concise rollout. Our experiments show that this method reduces average response length by 20-40% with comparable or higher accuracy. Crucially, the compression exhibits strong cross-domain generalization; a model trained on math spontaneously shortens responses on unseen tasks like code, instruction following, and general knowledge QA, with stable or improved accuracy. We demonstrate a stable post-training curriculum (accuracy-compression-accuracy) that can ultimately produce models that are more accurate and reason more concisely, arguing that such a compression method should be a standard phase in developing efficient reasoning models.
https://arxiv.org/abs/2601.06052
Academic Papers
svg
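The abstract above describes penalizing length only on already-mastered problems. The paper's exact reward is not given here; the following is a minimal sketch under stated assumptions (a GRPO-style group of rollouts per problem; the names `length_shaped_rewards` and `mastery_threshold`, the 0.5 penalty weight, and the all-correct mastery criterion are all illustrative, not the paper's):

```python
def length_shaped_rewards(rollouts, mastery_threshold=1.0):
    """Sketch of a sample-level soft length penalty for GRPO-style training.

    rollouts: list of (correct: bool, length: int) pairs for one problem.
    Length is penalized only if (a) the group's accuracy meets the mastery
    threshold and (b) a strictly shorter correct rollout already exists,
    so necessary reasoning on unmastered problems is never punished.
    """
    accuracy = sum(c for c, _ in rollouts) / len(rollouts)
    # Shortest correct rollout in the group, if any.
    min_len = min((l for c, l in rollouts if c), default=None)
    rewards = []
    for correct, length in rollouts:
        r = 1.0 if correct else 0.0
        if (correct and accuracy >= mastery_threshold
                and min_len is not None and length > min_len):
            # Soft penalty: scales with relative excess length, capped below 0.5.
            r -= 0.5 * (length - min_len) / length
        rewards.append(r)
    return rewards
```

On a mastered problem the shortest correct rollout keeps full reward while longer ones are softly discounted; on an unmastered problem every correct rollout keeps full reward regardless of length.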
3e976d548dd1c9c6f4f84faf657399344d66c9f9bd84b3f2cd10b0722f3bb04e
2026-01-13T00:00:00-05:00
Sports Business Administration and New Age Technology: Role of AI
arXiv:2601.06053v1 Announce Type: new Abstract: This chapter explores the complexities of sports governance, taxation, dispute resolution, and the impact of digital transformation within the sports sector. This study identifies a critical research gap regarding the integration of innovative technologies to enhance governance and talent identification in sports law. The objective is to evaluate how data-driven approaches and AI can optimize recruitment processes while ensuring compliance with existing regulations. A comprehensive analysis of current governance structures and taxation policies (i.e., the Income Tax Act and GST Act) reveals preliminary results indicating that reform is necessary to support sustainable growth in the sports economy. Key findings demonstrate that AI enhances player evaluation by minimizing biases and expanding access to diverse talent pools, while the Court of Arbitration for Sport provides an efficient mechanism for dispute resolution. The implications emphasize the need for regulatory reforms that align taxation policies with international best practices, promoting transparency and accountability in sports organizations. This research contributes valuable insights into the evolving dynamics of sports management, aiming to foster innovation and integrity in the industry.
https://arxiv.org/abs/2601.06053
Academic Papers
svg
ebaf78238cdcb3b159895132d6a1b7ed317ed89e2d62d9a465fdbda291a29b3d
2026-01-13T00:00:00-05:00
A Multi-Stage Workflow for the Review of Marketing Content with Reasoning Large Language Models
arXiv:2601.06054v1 Announce Type: new Abstract: Reasoning Large Language Models (LLMs) have shown promising results when tasked with solving complex problems. In this paper, we propose and evaluate a multi-stage workflow that leverages the capabilities of fine-tuned reasoning LLMs to assist in the review process of marketing content, making sure it complies with a given list of requirements. The contributions of this paper are the following: (i) we present a novel approach -- that does not rely on any external knowledge representation -- for the automatic identification of compliance issues in textual content; (ii) we compare the effectiveness of different fine-tuning strategies like Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO) in training models to solve this problem; (iii) we evaluate the effectiveness of training small LLMs to generate reasoning tokens before providing their final response; (iv) we evaluate how the choice and combinations of different reward functions affects the performance of a model trained with GRPO.
https://arxiv.org/abs/2601.06054
Academic Papers
svg
074454536f7539d6a6a914b2b04bcda62eace40b6078dfe42b02fc17f7d52079
2026-01-13T00:00:00-05:00
Investigating How MacBook Accessories Evolve across Generations, and Their Potential Environmental, Economical Impacts
arXiv:2601.06055v1 Announce Type: new Abstract: The technological transition of MacBook charging solutions from MagSafe to USB-C, followed by a return to MagSafe 3, encapsulates the dynamic interplay between technological advancement, environmental considerations, and economic factors. This study delves into the broad implications of these charging technology shifts, particularly focusing on the environmental repercussions associated with electronic waste and the economic impacts felt by both manufacturers and consumers. By investigating the lifecycle of these technologies - from development and market introduction through to their eventual obsolescence - this paper underscores the importance of devising strategies that not only foster technological innovation but also prioritize environmental sustainability and economic feasibility. This comprehensive analysis illuminates the crucial factors influencing the evolution of charging technologies and their wider societal and environmental implications, advocating for a balanced approach that ensures technological progress does not compromise ecological health or economic stability.
https://arxiv.org/abs/2601.06055
Academic Papers
svg
ed76b7ab478bc13393f0a52e1879afcb6446b84eb95ea37da01b068cb86fdc16
2026-01-13T00:00:00-05:00
Using street view images and visual LLMs to predict heritage values for governance support: Risks, ethics, and policy implications
arXiv:2601.06056v1 Announce Type: new Abstract: During 2025 and 2026, the Energy Performance of Buildings Directive is being implemented in the European Union member states, requiring all member states to have National Building Renovation Plans. In Sweden, there is a lack of a national register of buildings with heritage values. This is seen as a barrier for the analyses underlying the development of Building Renovation Plans by the involved Swedish authorities. The purpose of this research was to assist Swedish authorities in assigning heritage values to buildings in the Swedish building stock. As part of the analyses, buildings in street view images from all over Sweden (N=154 710) have been analysed using multimodal Large Language Models (LLMs) to assess aspects of heritage value. Zero-shot predictions by LLMs were used as a basis for identifying buildings with potential heritage values for 5.0 million square meters of heated floor area for the Swedish Building Renovation Plan. In this paper, the results of the predictions and lessons learnt are presented and related to the development of the Swedish Building Renovation Plan as part of governance. Potential risks for authorities using LLM-based data are addressed, with a focus on issues of transparency, error detection and sycophancy.
https://arxiv.org/abs/2601.06056
Academic Papers
svg
b3af0d9eacb3b434b2ef34bb4cf6589d877c8b41c7bb816521793509f5925a43
2026-01-13T00:00:00-05:00
Data Work in Egypt: Who Are the Workers Behind Artificial Intelligence?
arXiv:2601.06057v1 Announce Type: new Abstract: The report highlights the role of Egyptian data workers in the global value chains of Artificial Intelligence (AI). These workers generate and annotate data for machine learning, check outputs, and they connect with overseas AI producers via international digital labor platforms, where they perform on-demand tasks and are typically paid by piecework, with no long-term commitment. Most of these workers are young, highly educated men, with nearly two-thirds holding undergraduate degrees. Their primary motivation for data work is financial need, with three-quarters relying on platform earnings to cover basic necessities. Despite the variability in their online earnings, these are generally low, often equaling Egypt's minimum wage. Data workers' digital identities are shaped by algorithmic control and economic demands, often diverging from their offline selves. Nonetheless, they find ways to resist, exercise ethical agency, and maintain autonomy. The report evaluates the potential impact of Egypt's newly enacted labor law and suggests policy measures to improve working conditions and acknowledge the role of these workers in AI's global value chains.
https://arxiv.org/abs/2601.06057
Academic Papers
svg
406ad23d6ea94aa5070ca95c1dab1af4e6370497f57c37da628294dcc5bb296f
2026-01-13T00:00:00-05:00
Teacher training in inclusive digital skills in secondary education. Students with Autism Spectrum Disorders
arXiv:2601.06058v1 Announce Type: new Abstract: In contemporary society, marked by rapid technological evolution, education faces the challenge and the opportunity of incorporating new digital tools that transform learning, making it more inclusive, flexible, and meaningful. This book aligns with this commitment to educational innovation and equity, focusing on a group that requires specialized and sensitive attention: students with Autism Spectrum Disorder (ASD). Far from approaching technology from a merely instrumental perspective, this work proposes a profoundly human approach, where emerging technologies such as augmented and virtual reality, immersive environments, augmentative communication systems, mobile applications, and artificial intelligence become allies in fostering autonomy, emotional self-regulation, the development of social skills, and the genuine inclusion of students with ASD in educational settings. This volume is part of the R&D project entitled Teacher Training in Inclusive Digital Competencies to Support Students with Autism Spectrum Disorders (CODITEA), funded by the Spanish Ministry of Science, Innovation and Universities-State Research Agency (MICIU-AEI) and the European Regional Development Fund (ERDF-EU), under reference PID2022-138346OB-I00. The project aims, among other objectives, to raise awareness and train teachers, students, and families on the conscious, ethical, and effective use of technology to facilitate inclusive processes for students with ASD.
https://arxiv.org/abs/2601.06058
Academic Papers
svg
899892d5200cf16b7c8ed45201bc2d267117b23c81c695295bef728a067f6010
2026-01-13T00:00:00-05:00
Context Video Semantic Transmission with Variable Length and Rate Coding over MIMO Channels
arXiv:2601.06059v1 Announce Type: new Abstract: The evolution of semantic communications has profoundly impacted wireless video transmission, whose applications are a dominant driver of modern bandwidth consumption. However, most existing schemes are predominantly optimized for simple additive white Gaussian noise or Rayleigh fading channels, neglecting the ubiquitous multiple-input multiple-output (MIMO) environments that critically hinder practical deployment. To bridge this gap, we propose the context video semantic transmission (CVST) framework under MIMO channels. Building upon an efficient contextual video transmission backbone, CVST effectively learns a context-channel correlation map to explicitly formulate the relationships between feature groups and MIMO subchannels. Leveraging these channel-aware features, we design a multi-reference entropy coding mechanism, enabling channel state-aware variable length coding. Furthermore, CVST incorporates a checkerboard-based feature modulation strategy to achieve multiple rate points within a single trained model, thereby enhancing deployment flexibility. These innovations constitute our multi-reference variable length and rate coding (MR-VLRC) scheme. By integrating contextual transmission with MR-VLRC, CVST demonstrates substantial performance gains over various standardized separated coding methods and recent wireless video semantic communication approaches. The code is available at https://github.com/xie233333/CVST.
https://arxiv.org/abs/2601.06059
Academic Papers
svg
ac273fc448c3a1021075a5b291007e76963ee34e48ac9516a01f2aca44564986
2026-01-13T00:00:00-05:00
Why Slop Matters
arXiv:2601.06060v1 Announce Type: new Abstract: AI-generated "slop" is often seen as digital pollution. We argue that this dismissal of the topic risks missing important aspects of AI Slop that deserve rigorous study. AI Slop serves a social function: it offers a supply-side solution to a variety of problems in cultural and economic demand - that, collectively, people want more content than humans can supply. We also argue that AI Slop is not mere digital detritus but has its own aesthetic value. Like other "low" cultural forms initially dismissed by critics, it nonetheless offers a legitimate means of collective sense-making, with the potential to express meaning and identity. We identify three key features of family resemblance for prototypical AI Slop: superficial competence (its veneer of quality is belied by a deeper lack of substance), effort asymmetry (it takes vastly less effort to generate than would be the case without AI), and mass producibility (it is part of a digital ecosystem of widespread generation and consumption). While AI Slop is heterogeneous and depends crucially on its medium, it tends to vary across three dimensions: instrumental utility, personalization, and surrealism. AI Slop will be an increasingly prolific and impactful part of our creative, information, and cultural economies; we should take it seriously as an object of study in its own right.
https://arxiv.org/abs/2601.06060
Academic Papers
svg
72047b5f24daa12f35ae3e9fbcf8426e8d7124607c9e2d66ab4d68be4c8717df
2026-01-13T00:00:00-05:00
AI Application Operations -- A Socio-Technical Framework for Data-driven Organizations
arXiv:2601.06061v1 Announce Type: new Abstract: We outline a comprehensive framework for artificial intelligence (AI) Application Operations (AIAppOps), based on real-world experiences from diverse organizations. Data-driven projects pose additional challenges to organizations due to their dependency on data across the development and operations cycles. To aid organizations in dealing with these challenges, we present a framework outlining the main steps and roles involved in going from idea to production for data-driven solutions. The data dependency of these projects entails additional requirements on continuous monitoring and feedback, as deviations can emerge in any process step. Therefore, the framework embeds monitoring not merely as a safeguard, but as a unifying feedback mechanism that drives continuous improvement, compliance, and sustained value realization, anchored in both statistical and formal assurance methods that extend runtime verification concepts from safety-critical AI to organizational operations. The proposed framework is structured across core technical processes and supporting services to guide both new initiatives and maturing AI programs.
https://arxiv.org/abs/2601.06061
Academic Papers
svg
81a69eed88050aba4cefd9194361c50da4f98df515d3ece3e3b89916451d5e5f
2026-01-13T00:00:00-05:00
From Values to Frameworks: A Qualitative Study of Ethical Reasoning in Agentic AI Practitioners
arXiv:2601.06062v1 Announce Type: new Abstract: Agentic artificial intelligence systems are autonomous technologies capable of pursuing complex goals with minimal human oversight and are rapidly emerging as the next frontier in AI. While these systems promise major gains in productivity, they also raise new ethical challenges. Prior research has examined how different populations prioritize Responsible AI values, yet little is known about how practitioners actually reason through the trade-offs inherent in designing these autonomous systems. This paper investigates the ethical reasoning of AI practitioners through qualitative interviews centered on structured dilemmas in agentic AI deployment. We find that the responses of practitioners do not merely reflect value preferences but rather align with three distinct reasoning frameworks. First is a Customer-Centric framework where choices are justified by business interests, legality, and user autonomy. Second is a Design-Centric framework emphasizing technical safeguards and system constraints. Third is an Ethics-Centric framework prioritizing social good and moral responsibility beyond compliance. We argue that these frameworks offer distinct and necessary insights for navigating ethical trade-offs. Consequently, providers of agentic AI must look beyond general principles and actively manage how these diverse reasoning frameworks are represented in their decision-making processes to ensure robust ethical outcomes.
https://arxiv.org/abs/2601.06062
Academic Papers
svg
457ad6c98d9adfe9539a77343792b46270d48f736c345defbd9bac31adb3db71
2026-01-13T00:00:00-05:00
The Environmental Impact of AI Servers and Sustainable Solutions
arXiv:2601.06063v1 Announce Type: new Abstract: The rapid expansion of artificial intelligence has significantly increased the electricity, water, and carbon demands of modern data centers, raising sustainability concerns. This study evaluates the environmental footprint of AI server operations and examines feasible technological and infrastructural strategies to mitigate these impacts. Using a literature-based methodology supported by quantitative projections and case-study analysis, we assessed trends in global electricity consumption, cooling-related water use, and carbon emissions. Projections indicate that global data center electricity demand may increase from approximately 415 TWh in 2024 to nearly 945 TWh by 2030, with AI workloads accounting for a disproportionate share of this growth. In the United States alone, AI servers are expected to drive annual increases in water consumption of 200--300 billion gallons and add 24--44 million metric tons of CO2-equivalent emissions by 2030. The results show that the design of the cooling system and the geographic location influence the environmental impact as strongly as the efficiency of the hardware. Advanced cooling technologies can reduce cooling energy by up to 50%, while location in low-carbon and water-secure regions can cut combined footprints by nearly half. In general, the study concludes that sustainable AI expansion requires coordinated improvements in cooling efficiency, renewable energy integration, and strategic deployment decisions.
https://arxiv.org/abs/2601.06063
Academic Papers
svg
6c6b0def973049ebb18b1a268a15da068bfc35a15886f4e5b6ac87cbefed175b
2026-01-13T00:00:00-05:00
Socio-technical aspects of Agentic AI
arXiv:2601.06064v1 Announce Type: new Abstract: Agentic Artificial Intelligence (AI) represents a fundamental shift in the design of intelligent systems, characterized by interconnected components that collectively enable autonomous perception, reasoning, planning, action, and learning. Recent research on agentic AI has largely focused on technical foundations, including system architectures, reasoning and planning mechanisms, coordination strategies, and application-level performance across domains. However, the societal, ethical, economic, environmental, and governance implications of agentic AI remain weakly integrated into these technical treatments. This paper addresses this gap by presenting a socio-technical analysis of agentic AI that explicitly connects core technical components with societal context. We examine how architectural choices in perception, cognition, planning, execution, and memory introduce dependencies related to data governance, accountability, transparency, safety, and sustainability. To structure this analysis, we adopt the MAD-BAD-SAD construct as an analytical lens, capturing motivations, applications, and moral dilemmas (MAD); biases, accountability, and dangers (BAD); and societal impact, adoption, and design considerations (SAD). Using this lens, we analyze ethical considerations, implications, and challenges arising from contemporary agentic AI systems and assess their manifestation across emerging applications, including healthcare, education, industry, smart and sustainable cities, social services, communications and networking, and earth observation and satellite communications. The paper further identifies open challenges and suggests future research directions, framing agentic AI as an integrated socio-technical system whose behavior and impact are co-produced by algorithms, data, organizational practices, regulatory frameworks, and social norms.
https://arxiv.org/abs/2601.06064
Academic Papers
svg
e9a42893dc21d07d3e80dc8cf0a3a45529fac8d5f78b730153d05e0d399e6045
2026-01-13T00:00:00-05:00
Enabling Long FFT Convolutions on Memory-Constrained FPGAs via Chunking
arXiv:2601.06065v1 Announce Type: new Abstract: The need for long-context reasoning has led to alternative neural network architectures besides Transformers and self-attention, a popular model being Hyena, which employs causal 1D-convolutions implemented with FFTs. Long convolutions enable efficient global context mixing, but requirements for intermediate results exceed the 2-3 MB Block RAM capacity of FPGAs. We present a chunked FFT convolution approach enabling convolutions of a 450K-length sequence with a 450K-length filter on an Alveo U200 FPGA with 2.8 MB BRAM through chunking and overlap-add reconstruction. We find that throughput scales proportionally with chunk size while degrading by only 7% for our longest sequences, demonstrating that careful memory management enables deployment of long-context primitives on edge FPGAs without sacrificing performance.
https://arxiv.org/abs/2601.06065
Academic Papers
svg
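The chunking-plus-overlap-add scheme described in the abstract above can be sketched on a host in NumPy; the paper targets FPGA hardware, so this is only a minimal illustration of the reconstruction math, with a hypothetical function name, not the authors' implementation:

```python
import numpy as np

def chunked_fft_convolve(signal, kernel, chunk_size):
    """Full 1-D convolution via chunked FFTs with overlap-add.

    Each chunk is convolved with the whole kernel in the frequency
    domain, and the tails are added back into the output, so the
    working set scales with chunk_size + len(kernel) rather than
    the full sequence length.
    """
    n_fft = chunk_size + len(kernel) - 1          # linear-convolution length per chunk
    kernel_f = np.fft.rfft(kernel, n=n_fft)       # transform the kernel once
    out = np.zeros(len(signal) + len(kernel) - 1)
    for start in range(0, len(signal), chunk_size):
        chunk = signal[start:start + chunk_size]
        seg = np.fft.irfft(np.fft.rfft(chunk, n=n_fft) * kernel_f, n=n_fft)
        end = min(start + n_fft, len(out))        # trim zero-padded tail of last chunk
        out[start:end] += seg[:end - start]       # overlap-add reconstruction
    return out
```

The result matches a direct convolution to floating-point precision; on hardware, the same decomposition bounds the intermediate buffers that must fit in BRAM.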
624306e89d8c51d1390879facd29ec9e2593cd3e7f5bc0a515bad6787a8ffa71
2026-01-13T00:00:00-05:00
TEAS: Trusted Educational AI Standard: A Framework for Verifiable, Stable, Auditable, and Pedagogically Sound Learning Systems
arXiv:2601.06066v1 Announce Type: new Abstract: The rapid integration of AI into education has prioritized capability over trustworthiness, creating significant risks. Real-world deployments reveal that even advanced models are insufficient without extensive architectural scaffolding to ensure reliability. Current evaluation frameworks are fragmented: institutional policies lack technical verification, pedagogical guidelines assume AI reliability, and technical metrics are context-agnostic. This leaves institutions without a unified standard for deployment readiness. This paper introduces TEAS (Trusted Educational AI Standard), an integrated framework built on four interdependent pillars: (1) Verifiability, grounding content in authoritative sources; (2) Stability, ensuring deterministic core knowledge; (3) Auditability, enabling independent institutional validation; and (4) Pedagogical Soundness, enforcing principles of active learning. We argue that trustworthiness stems primarily from systematic architecture, not raw model capability. This insight implies that affordable, open-source models can achieve deployment-grade trust, offering a scalable and equitable path to integrating AI safely into learning environments globally.
https://arxiv.org/abs/2601.06066
Academic Papers
svg
85867ff8da4f19d798532dc4f7a967fb05890c9eeb0accdd522d40c06acb5c5f
2026-01-13T00:00:00-05:00
HyperTopo-Adapters: Geometry- and Topology-Aware Segmentation of Leaf Lesions on Frozen Encoders
arXiv:2601.06067v1 Announce Type: new Abstract: Leaf-lesion segmentation is topology-sensitive: small merges, splits, or false holes can be biologically meaningful descriptors of biochemical pathways, yet they are weakly penalized by standard pixel-wise losses in Euclidean latents. I explore HyperTopo-Adapters, a lightweight, parameter-efficient head trained on top of a frozen vision encoder, which embeds features on a product manifold -- hyperbolic + Euclidean + spherical (H + E + S) -- to encourage hierarchical separation (H), local linear detail (E), and global closure (S). A topology prior complements Dice/BCE in two forms: (i) persistent-homology (PH) distance for evaluation and selection, and (ii) a differentiable surrogate that combines a soft Euler-characteristic match with total variation regularization for stable training. I introduce warm-ups for both the hyperbolic contrastive term and the topology prior, per-sample evaluation of structure-aware metrics (Boundary-F1, Betti errors, PD distance), and a min-PD within top-K Dice rule for checkpoint selection. On a Kaggle leaf-lesion dataset (N=2,940), early results show consistent gains in boundary and topology metrics (reducing Delta beta_1 hole error by 9%) while Dice/IoU remain competitive. The study is diagnostic by design: I report controlled ablations (curvature learning, latent dimensions, contrastive temperature, surrogate settings), and ongoing tests varying encoder strength (ResNet-50, DeepLabV3, DINOv2/v3), input resolution, PH weight, and partial unfreezing of late blocks. The contribution is an open, reproducible train/eval suite (available at https://github.com/ChimdiWalter/HyperTopo-Adapters) that isolates geometric/topological priors and surfaces failure modes to guide stronger, topology-preserving architectures.
https://arxiv.org/abs/2601.06067
Academic Papers
svg
c648a522554aca047e767f19c03f0f8404cd5d117f53d7d2355415dc8fecd943
2026-01-13T00:00:00-05:00
La norme technique comme catalyseur de transfert de connaissances : la francophonie a l'{\oe}uvre dans le domaine de l'{\'e}ducation
arXiv:2601.06069v1 Announce Type: new Abstract: Standards are adopted in a wide range of fields, both technical and industrial, as well as socio-economic, cultural and linguistic. They are presented explicitly as laws and regulations, technical and industrial standards or implicitly in the form of unwritten social standards. However, in a globalization marked by a very fine mosaic of socio-cultural identities, the question arises in relation to the construction of global, transparent and coherent systems in which considerable work of consensus is necessary to ensure all types of transfers and their local adaptations. The focus here is on the global education ecosystem which develops its own standards for the transfer of knowledge and socio-cultural values through learning, teaching and training. Subcommittee 36 of the International Organization for Standardization is one of the structures of this ecosystem in which the Francophonie participates to develop international standards for distance education on the basis of universal consensus.
https://arxiv.org/abs/2601.06069
Academic Papers
svg
af9f2fe6ac47606a74218ea30d9e00fc28b200ead584adc28ebc792d6baf9bb4
2026-01-13T00:00:00-05:00
PDA in Action: Ten Principles for High-Quality Multi-Site Clinical Evidence Generation
arXiv:2601.06072v1 Announce Type: new Abstract: Background: Distributed Research Networks (DRNs) offer significant opportunities for collaborative multi-site research and have significantly advanced healthcare research based on clinical observational data. However, generating high-quality real-world evidence using fit-for-use data from multi-site studies faces important challenges, including biases associated with various types of heterogeneity within and across sites and data sharing difficulties. Over the last ten years, Privacy-Preserving Distributed Algorithms (PDA) have been developed and utilized in numerous national and international real-world studies spanning diverse domains, from comparative effectiveness research, target trial emulation, to healthcare delivery, policy evaluation, and system performance assessment. Despite these advances, there remains a lack of comprehensive and clear guiding principles for generating high-quality real-world evidence through collaborative studies leveraging the methods under PDA. Objective: The paper aims to establish ten principles of best practice for conducting high-quality multi-site studies using PDA. These principles cover all phases of research, including study preparation, protocol development, analysis, and final reporting. Discussion: The ten principles for conducting a PDA study outline a principled, efficient, and transparent framework for employing distributed learning algorithms within DRNs to generate reliable and reproducible real-world evidence.
https://arxiv.org/abs/2601.06072
Academic Papers
svg
025351c96263bbbec87c1178065c39c479767422cc02f9b0fb8f61785100eb4d
2026-01-13T00:00:00-05:00
Jamming Detection in Cell-Free MIMO with Dynamic Graphs
arXiv:2601.06075v1 Announce Type: new Abstract: Jamming attacks pose a critical threat to wireless networks, particularly in cell-free massive MIMO systems, where distributed access points and user equipment (UE) create complex, time-varying topologies. This paper proposes a novel jamming detection framework leveraging dynamic graphs and graph convolutional neural networks (GCN) to address this challenge. By modeling the network as a dynamic graph, we capture evolving communication links and detect jamming attacks as anomalies in the graph evolution. A GCN-Transformer-based model, trained with supervised learning, learns graph embeddings to identify malicious interference. Performance evaluation in simulated scenarios with moving UEs, varying jamming conditions, and channel fading demonstrates the method's effectiveness, assessed through accuracy and F1 score metrics, with promising results for jamming detection.
https://arxiv.org/abs/2601.06075
Academic Papers
svg
a8bff08892bf47e848b5d25026151b777ba2554c2fcaa13c40a07da48f7b9ef2
2026-01-13T00:00:00-05:00
One if by Land, Two if by Sea, Three if by Four Seas, and More to Come -- Values of Perception, Prediction, Communication, and Common Sense in Decision Making
arXiv:2601.06077v1 Announce Type: new Abstract: This work aims to rigorously define the values of perception, prediction, communication, and common sense in decision making. The defined quantities are decision-theoretic, but have information-theoretic analogues, e.g., they share some simple but key mathematical properties with Shannon entropy and mutual information, and can reduce to these quantities in particular settings. One interesting observation is that, the value of perception without prediction can be negative, while the value of perception together with prediction and the value of prediction alone are always nonnegative. The defined quantities suggest answers to practical questions arising in the design of autonomous decision-making systems. Example questions include: Do we need to observe and predict the behavior of a particular agent? How important is it? What is the best order to observe and predict the agents? The defined quantities may also provide insights to cognitive science and neural science, toward the understanding of how natural decision makers make use of information gained from different sources and operations.
https://arxiv.org/abs/2601.06077
Academic Papers
svg
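As background for the decision-theoretic framing in the abstract above, one standard formalization (illustrative only; the paper's exact definitions of the values of perception and prediction may differ) measures the value of an observation O to a decision maker choosing an action a to maximize expected utility u(a, S) over an unknown state S:

```latex
% Classical value-of-information baseline (illustrative):
V(O) \;=\; \mathbb{E}_{O}\!\left[\max_{a}\ \mathbb{E}\!\left[u(a, S) \mid O\right]\right]
\;-\; \max_{a}\ \mathbb{E}\!\left[u(a, S)\right]
```

For a single Bayesian decision maker this quantity is always nonnegative, which is what makes the abstract's observation, that the value of perception without prediction can be negative, a genuinely non-classical result.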
b60b46a8e2e70a578d858208f0be53a48af7ded17981e0071fe713d9837365ff
2026-01-13T00:00:00-05:00
OptFormer: Optical Flow-Guided Attention and Phase Space Reconstruction for SST Forecasting
arXiv:2601.06078v1 Announce Type: new Abstract: Sea Surface Temperature (SST) prediction plays a vital role in climate modeling and disaster forecasting. However, it remains challenging due to its nonlinear spatiotemporal dynamics and extended prediction horizons. To address this, we propose OptFormer, a novel encoder-decoder model that integrates phase-space reconstruction with a motion-aware attention mechanism guided by optical flow. Unlike conventional attention, our approach leverages inter-frame motion cues to highlight relative changes in the spatial field, allowing the model to focus on dynamic regions and capture long-range temporal dependencies more effectively. Experiments on NOAA SST datasets across multiple spatial scales demonstrate that OptFormer achieves superior performance under a 1:1 training-to-prediction setting, significantly outperforming existing baselines in accuracy and robustness.
https://arxiv.org/abs/2601.06078
Academic Papers
svg
ddbdf4167474fdf054529af80459ba83c87ea71f2fe1a7f7fd566df1557fe892
2026-01-13T00:00:00-05:00
AzeroS: Extending LLM to Speech with Self-Generated Instruction-Free Tuning
arXiv:2601.06086v1 Announce Type: new Abstract: Extending large language models (LLMs) to the speech domain has recently gained significant attention. A typical approach connects a pretrained LLM with an audio encoder through a projection module and trains the resulting model on large-scale, task-specific instruction-tuning datasets. However, curating such instruction-tuning data for specific requirements is time-consuming, and models trained in this manner often generalize poorly to unseen tasks. In this work, we first establish that the strongest generalization of a speech-LLM is achieved when it is trained with Self-Generated Instruction-Free Tuning (SIFT), in which supervision signals are generated by a frozen LLM using textual representations of speech as input. Our proposed SIFT paradigm eliminates the need for collecting task-specific question-answer pairs and yields the theoretically best generalization to unseen tasks. Building upon this paradigm, we introduce AZeroS (Auden Zero-instruction-tuned Speech-LLM), which is trained on speech-text pairs derived from publicly available corpora, including approximately 25,000 hours of speech with ASR transcripts and 3,000 hours of speech with paralinguistic labels. Built upon Qwen2.5-7B-Instruct, the model updates only two lightweight projection modules (23.8 million parameters each), while keeping both the LLM and audio encoders frozen. Despite the minimal training cost and modest data scale, AZeroS achieves state-of-the-art performance on both semantic and paralinguistic benchmarks, including VoiceBench, AIR-Bench Foundation (Speech), and AIR-Bench Chat (Speech).
https://arxiv.org/abs/2601.06086
Academic Papers
svg
bb1d83003758f5909583559eff995fe97e7748b18a2f28cd1e2ca3f1a264e15c
2026-01-13T00:00:00-05:00
The AI Roles Continuum: Blurring the Boundary Between Research and Engineering
arXiv:2601.06087v1 Announce Type: new Abstract: The rapid scaling of deep neural networks and large language models has collapsed the once-clear divide between "research" and "engineering" in AI organizations. Drawing on a qualitative synthesis of public job descriptions, hiring criteria, and organizational narratives from leading AI labs and technology companies, we propose the AI Roles Continuum: a framework in which Research Scientists, Research Engineers, Applied Scientists, and Machine Learning Engineers occupy overlapping positions rather than discrete categories. We show that core competencies such as distributed systems design, large-scale training and optimization, rigorous experimentation, and publication-minded inquiry are now broadly shared across titles. Treating roles as fluid rather than siloed shortens research-to-production loops, improves iteration velocity, and strengthens organizational learning. We present a taxonomy of competencies mapped to common roles and discuss implications for hiring practices, career ladders, and workforce development in modern AI enterprises.
https://arxiv.org/abs/2601.06087
Academic Papers
svg
01dd09aef0b4aa604163b22111b0d33c3f0d1e15e07211d5852e57f18f0ce8d1
2026-01-13T00:00:00-05:00
Islamic Chatbots in the Age of Large Language Models
arXiv:2601.06092v1 Announce Type: new Abstract: Large Language Models (LLMs) are rapidly transforming how communities access, interpret, and circulate knowledge, and religious communities are no exception. Chatbots powered by LLMs are beginning to reshape authority, pedagogy, and everyday religious practice in Muslim communities. We analyze the landscape of LLM-powered Islamic chatbots and how they are transforming Islamic religious practices, e.g., democratizing access to religious knowledge while also risking an erosion of authority. We discuss the challenges these systems raise for Muslim communities and explore recommendations for the responsible design of these systems.
https://arxiv.org/abs/2601.06092
Academic Papers
svg
c2856400fbd7ff0ed420322ec8dc1806ac9d737c8e745e56e37de11dedeb4aaf
2026-01-13T00:00:00-05:00
GenAITEd Ghana: A Blueprint Prototype for Context-Aware and Region-Specific Conversational AI Agent for Teacher Education
arXiv:2601.06093v1 Announce Type: new Abstract: Global frameworks increasingly advocate for Responsible Artificial Intelligence (AI) in education, yet they provide limited guidance on how ethical, culturally responsive, and curriculum-aligned AI can be operationalized within functioning teacher education systems, particularly in the Global South. This study addresses this gap through the design and evaluation of GenAITEd Ghana, a context-aware, region-specific conversational AI prototype developed to support teacher education in Ghana. Guided by a Design Science Research approach, the system was developed as a school-mimetic digital infrastructure aligned with the organizational logic of Ghanaian Colleges of Education and the National Council for Curriculum and Assessment (NaCCA) framework. GenAITEd Ghana operates as a multi-agent, retrieval-augmented conversational AI that coordinates multiple models for curriculum-grounded dialogue, automatic speech recognition, voice synthesis, and multimedia interaction. Two complementary prompt pathways were embedded: system-level prompts that enforce curriculum boundaries, ethical constraints, and teacher-in-the-loop oversight, and interaction-level semi-automated prompts that structure live pedagogical dialogue through clarification, confirmation, and guided response generation. Evaluation findings show that the system effectively enacted key Responsible AI principles, including transparency, accountability, cultural responsiveness, privacy, and human oversight. Human expert evaluations further indicated that GenAITEd Ghana is pedagogically appropriate for Ghanaian teacher education, promoting student engagement while preserving educators' professional authority. Identified challenges highlight the need for continued model integration, professional development, and critical AI literacy to mitigate risks of over-reliance.
https://arxiv.org/abs/2601.06093
Academic Papers
svg
ba5b348f4db67edda1b57a97a7f620fb653c0d971aacc79e090571a75fc58cea
2026-01-13T00:00:00-05:00
Deep Q-Network Based Resilient Drone Communication: Neutralizing First-Order Markov Jammers
arXiv:2601.06095v1 Announce Type: new Abstract: A Deep Reinforcement Learning based solution for communication under jamming, using Frequency Hopping Spread Spectrum technology in a 16-channel radio environment, is presented. A Deep Q Network based transmitter continuously selects the next frequency hopping channel while facing first-order reactive jamming, which uses observed transition statistics to predict and interrupt transmissions. Through self-training, the proposed agent learns a uniform random frequency hopping policy that effectively neutralizes the predictive advantage of the jammer. In the presence of Rayleigh fading and additive noise, the impact of forward error correction Bose-Chaudhuri-Hocquenghem type codes is systematically evaluated, demonstrating that even moderate redundancy significantly reduces packet loss. Extensive visualization of the learning dynamics, channel utilization distribution, epsilon-greedy decay, cumulative reward, BER and SNR evolution, and detailed packet loss tables confirms convergence to a near-optimal anti-jamming strategy. The results provide a practical framework for autonomous resilient communications in modern electronic warfare scenarios.
https://arxiv.org/abs/2601.06095
Academic Papers
svg
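The central claim of the abstract above, that uniform random hopping removes a first-order Markov jammer's predictive edge, can be simulated without any RL machinery. In this minimal sketch the DQN is replaced by the uniform policy it is reported to converge to, and all names and parameters are illustrative, not taken from the paper:

```python
import random

random.seed(0)
N_CH = 16  # number of hopping channels, as in the abstract

def hit_rate(policy, steps=20000):
    """Jammer tracks first-order transition counts and jams the
    most likely next channel given the transmitter's last channel."""
    counts = [[1] * N_CH for _ in range(N_CH)]  # Laplace-smoothed counts
    prev = random.randrange(N_CH)
    hits = 0
    for _ in range(steps):
        jam = max(range(N_CH), key=lambda c: counts[prev][c])
        nxt = policy(prev)
        hits += (nxt == jam)
        counts[prev][nxt] += 1
        prev = nxt
    return hits / steps

def predictable(prev):
    return (prev + 1) % N_CH          # fixed hopping pattern: fully learnable

def uniform(prev):
    return random.randrange(N_CH)      # memoryless uniform hopping

r_pred = hit_rate(predictable)  # jammer learns the pattern, hits almost always
r_unif = hit_rate(uniform)      # jammer's best hit probability is only 1/16
print(r_pred, r_unif)
```

The first-order jammer drives the predictable transmitter's loss rate toward 100%, while against the uniform policy it can do no better than chance, which is exactly the "neutralized predictive advantage" the abstract describes.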
b7294adb0dfdd5a212c1e0a3b7fc51aecbe8b2997aa6d77400b3f8afa37ff43b
2026-01-13T00:00:00-05:00
The Hessian of tall-skinny networks is easy to invert
arXiv:2601.06096v1 Announce Type: new Abstract: We describe an exact algorithm for solving linear systems $Hx=b$ where $H$ is the Hessian of a deep net. The method computes Hessian-inverse-vector products without storing the Hessian or its inverse, in time and storage that scale linearly in the number of layers. Compared to the naive approach of first computing the Hessian and then solving the linear system, which requires storage quadratic in the number of parameters and a number of operations cubic in it, our Hessian-inverse-vector product method scales roughly like Pearlmutter's algorithm for computing Hessian-vector products.
https://arxiv.org/abs/2601.06096
Academic Papers
svg
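The paper above compares its cost to Pearlmutter's Hessian-vector product algorithm. The primitive itself, computing $Hv$ without ever forming $H$, can be sketched with a finite-difference surrogate on a toy quadratic where the exact answer is known (illustrative only; Pearlmutter's method and the paper's inverse algorithm are exact, not finite-difference based):

```python
# Hessian-vector product without materializing the Hessian:
#   Hv ~= (grad L(x + eps*v) - grad L(x - eps*v)) / (2*eps),
# the finite-difference analogue of Pearlmutter's double-backprop trick.
EPS = 1e-5

A = [[3.0, 1.0], [1.0, 2.0]]  # symmetric, so it is the Hessian of L below

def grad(x):
    """Gradient of L(x) = 0.5 * x^T A x, i.e. A x."""
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def hvp(x, v, eps=EPS):
    gp = grad([x[i] + eps * v[i] for i in range(2)])
    gm = grad([x[i] - eps * v[i] for i in range(2)])
    return [(gp[i] - gm[i]) / (2 * eps) for i in range(2)]

x, v = [0.7, -1.2], [1.0, 2.0]
exact = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]  # A v
approx = hvp(x, v)  # matches A v to floating-point precision
```

Two gradient evaluations suffice per product, regardless of how large $H$ would be if stored; the paper's contribution is achieving the analogous scaling for products with $H^{-1}$.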
9a013adda23e1f544d8fda5e7dd6d5e0d0239fcf024c2584ff74f4b488aa9cfe
2026-01-13T00:00:00-05:00
Semantic Event Graphs for Long-Form Video Question Answering
arXiv:2601.06097v1 Announce Type: new Abstract: Long-form video question answering remains challenging for modern vision-language models, which struggle to reason over hour-scale footage without exceeding practical token and compute budgets. Existing systems typically downsample frames or feed dense visual embeddings to large-context language models, trading off temporal coverage against cost. We propose Semantic Event Graphs (SEG), a lightweight symbolic interface between video and language that replaces raw frames with compact temporal interaction logs. Our pipeline detects and tracks objects with YOLOv11, converts proximity patterns into START/END human-object events, and organizes them into a Temporal Scene Graph (TSG). At inference time, a query-aware pruning module identifies anchor entities and lexically relevant events, returning only a small subgraph which is verbalized and passed to Gemini 2.5 Flash for answer generation. On five YouTube videos (300-500 interactions each) and 120 automatically generated long-horizon questions, SEG achieves 65.0% accuracy using only 3.47k tokens per query, closely matching a full-log baseline (62.5% at 40.39k tokens) while reducing token usage by 91.4%. A short-context baseline restricted to the last 30 seconds collapses to 2.5% accuracy, underscoring the need for explicit temporal memory. These results show that symbolic temporal graphs can serve as an effective, plug-and-play memory layer for off-the-shelf vision-language models, preserving long-range reasoning ability while making long-form video question answering substantially more token- and cost-efficient. Code, logs, and event-extraction tools will be released for reproducibility.
https://arxiv.org/abs/2601.06097
Academic Papers
svg
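The token savings reported in the abstract above can be checked directly from its own numbers:

```python
# Per-query token budgets from the abstract, in thousands of tokens
seg_tokens, full_tokens = 3.47, 40.39

# Relative reduction of SEG versus the full-log baseline
reduction = 1 - seg_tokens / full_tokens
print(f"{reduction:.1%}")  # matches the reported 91.4% reduction
```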
a31f9286940bb24b92ff6c825cea855e96c7660eb04e02b84edde075b77ad9ac
2026-01-13T00:00:00-05:00
Automatic Question Generation for Intuitive Learning Utilizing Causal Graph Guided Chain of Thought Reasoning
arXiv:2601.06098v1 Announce Type: new Abstract: Intuitive learning is crucial for developing deep conceptual understanding, especially in STEM education, where students often struggle with abstract and interconnected concepts. Automatic question generation has become an effective strategy for personalized and adaptive learning. However, its effectiveness is hindered by hallucinations in large language models (LLMs), which may generate factually incorrect, ambiguous, or pedagogically inconsistent questions. To address this issue, we propose a novel framework that combines causal-graph-guided Chain-of-Thought (CoT) reasoning with a multi-agent LLM architecture. This approach ensures the generation of accurate, meaningful, and curriculum-aligned questions. Causal graphs provide an explicit representation of domain knowledge, while CoT reasoning facilitates a structured, step-by-step traversal of related concepts. Dedicated LLM agents are assigned specific tasks such as graph pathfinding, reasoning, validation, and output, all working within domain constraints. A dual validation mechanism-at both the conceptual and output stages-greatly reduces hallucinations. Experimental results demonstrate up to a 70% improvement in quality compared to reference methods and yielded highly favorable outcomes in subjective evaluations.
https://arxiv.org/abs/2601.06098
Academic Papers
svg
3cb50669565766ea6dc863faef41b1ff043c9dbb7114f456b3f5b66074e9b14f
2026-01-13T00:00:00-05:00
Filtering Beats Fine Tuning: A Bayesian Kalman View of In Context Learning in LLMs
arXiv:2601.06100v1 Announce Type: new Abstract: We present a theory-first framework that interprets inference-time adaptation in large language models (LLMs) as online Bayesian state estimation. Rather than modeling rapid adaptation as implicit optimization or meta-learning, we formulate task- and context-specific learning as the sequential inference of a low-dimensional latent adaptation state governed by a linearized state-space model. Under Gaussian assumptions, adaptation follows a Kalman recursion with closed-form updates for both the posterior mean and covariance. This perspective elevates epistemic uncertainty to an explicit dynamical variable. We show that inference-time learning is driven by covariance collapse, i.e., rapid contraction of posterior uncertainty induced by informative tokens, which typically precedes convergence of the posterior mean. Using observability conditions on token-level Jacobians, we establish stability of the Bayesian filter, prove exponential covariance contraction rates, and derive mean-square error bounds. Gradient descent, natural-gradient methods, and meta-learning updates arise as singular, noise-free limits of the filtering dynamics, positioning optimization-based adaptation as a degenerate approximation of Bayesian inference. The resulting theory provides a unified probabilistic account of in-context learning, parameter-efficient adaptation, and test-time learning without parameter updates. It yields explicit guarantees on stability and sample efficiency, offers a principled interpretation of prompt informativeness via information accumulation, and clarifies the role of uncertainty dynamics absent from existing accounts. Minimal illustrative experiments corroborate the qualitative predictions of the theory.
https://arxiv.org/abs/2601.06100
Academic Papers
svg
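The scalar case of the Kalman recursion described in the abstract above makes the "covariance collapse precedes mean convergence" claim easy to see. This is a toy static-state filter; the latent state, noise level, and prior are illustrative stand-ins, not the paper's model:

```python
import random

random.seed(1)

theta = 2.0        # latent adaptation state (unknown to the filter)
R = 0.5            # observation noise variance
m, P = 0.0, 10.0   # prior mean and variance (high initial uncertainty)

variances = [P]
for _ in range(50):
    y = theta + random.gauss(0.0, R ** 0.5)  # one informative observation/token
    K = P / (P + R)                          # Kalman gain
    m = m + K * (y - m)                      # posterior mean update
    P = (1 - K) * P                          # covariance collapse
    variances.append(P)
```

The posterior variance P contracts strictly monotonically at every step (here as 1/P_t = 1/P_0 + t/R), independent of the realized observations, while the mean m converges only stochastically, which is the qualitative ordering the theory predicts.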
7469d90bc0147b67874e2196cfc369ef3152ec368282b34e9ae0cb7aa8cf7fd6
2026-01-13T00:00:00-05:00
How to Assess AI Literacy: Misalignment Between Self-Reported and Objective-Based Measures
arXiv:2601.06101v1 Announce Type: new Abstract: The widespread adoption of Artificial Intelligence (AI) in K-12 education highlights the need for psychometrically-tested measures of teachers' AI literacy. Existing work has primarily relied on either self-report (SR) or objective-based (OB) assessments, with few studies aligning the two within a shared framework to compare perceived versus demonstrated competencies or examine how prior AI literacy experience shapes this relationship. This gap limits the scalability of learning analytics and the development of learner profile-driven instructional design. In this study, we developed and evaluated SR and OB measures of teacher AI literacy within the established framework of Concept, Use, Evaluate, and Ethics. Confirmatory factor analyses support construct validity with good reliability and acceptable fit. Results reveal a low correlation between SR and OB factors. Latent profile analysis identified six distinct profiles, including overestimation (SR > OB), underestimation (SR < OB), alignment (SR close to OB), and a unique low-SR/low-OB profile among teachers without AI literacy experience. Theoretically, this work extends existing AI literacy frameworks by validating SR and OB measures on shared dimensions. Practically, the instruments function as diagnostic tools for professional development, supporting AI-informed decisions (e.g., growth monitoring, needs profiling) and enabling scalable learning analytics interventions tailored to teacher subgroups.
https://arxiv.org/abs/2601.06101
Academic Papers
svg
5b6d9d1cb5584d62defda5913911c8dfa3e6e1af51cb3eb15dbe87ba41669962
2026-01-13T00:00:00-05:00
Dynamic Intelligence Ceilings: Measuring Long-Horizon Limits of Planning and Creativity in Artificial Systems
arXiv:2601.06102v1 Announce Type: new Abstract: Recent advances in artificial intelligence have produced systems capable of remarkable performance across a wide range of tasks. These gains, however, are increasingly accompanied by concerns regarding long-horizon developmental behavior, as many systems converge toward repetitive solution patterns rather than sustained growth. We argue that a central limitation of contemporary AI systems lies not in capability per se, but in the premature fixation of their performance frontier. To address this issue, we introduce the concept of a Dynamic Intelligence Ceiling (DIC), defined as the highest level of effective intelligence attainable by a system at a given time under its current resources, internal intent, and structural configuration. To make this notion empirically tractable, we propose a trajectory-centric evaluation framework that measures intelligence as a moving frontier rather than a static snapshot. We operationalize DIC using two estimators: the Progressive Difficulty Ceiling (PDC), which captures the maximal reliably solvable difficulty under constrained resources, and the Ceiling Drift Rate (CDR), which quantifies the temporal evolution of this frontier. These estimators are instantiated through a procedurally generated benchmark that jointly evaluates long-horizon planning and structural creativity within a single controlled environment. Our results reveal a qualitative distinction between systems that deepen exploitation within a fixed solution manifold and those that sustain frontier expansion over time. Importantly, our framework does not posit unbounded intelligence, but reframes limits as dynamic and trajectory-dependent rather than static and prematurely fixed. Keywords: AI evaluation, planning and creativity, developmental intelligence, dynamic intelligence ceilings, complex adaptive systems
https://arxiv.org/abs/2601.06102
Academic Papers
svg
60ef2e0ba4496dd805143e1d8a801cca3ecc0f675a4306f4ab5e6fa14d71696c
2026-01-13T00:00:00-05:00
The Impact of Post-training on Data Contamination
arXiv:2601.06103v1 Announce Type: new Abstract: We present a controlled study of how dataset contamination interacts with the post-training stages now standard in large language model training pipelines. Starting from clean checkpoints of Qwen2.5 (0.5B/1.5B) and Gemma3 (1B/4B), we inject five copies of GSM8K and MBPP test items into the first 2B tokens of an otherwise 25B token extended pre-training dataset. We then compare the contaminated and clean models both immediately after pre-training and again after two popular post-training methods: supervised fine-tuning (SFT) and reinforcement learning (RL) with group relative policy optimization (GRPO). The applied post-training steps do not have any contamination. Across math and coding benchmarks, we find three consistent patterns: (i) Contamination causes performance spikes that are gradually diminished with continued pre-training. After even 25B tokens the apparent performance inflation of contamination can become close to zero. (ii) Both SFT and GRPO resurface the leaked information, but with different external validity: SFT inflates scores only on the contaminated tasks, whereas GRPO also inflates performance on uncontaminated counterparts (GSMPlus, HumanEval). (iii) Model scale amplifies these tendencies: larger Supervised Fine Tuned models memorize more, while larger GRPO models translate leakage into more generalizable capabilities. Our results underscore the need for contamination audits after post-training and suggest that RL-based post-training, although not immune, can help alleviate contamination-related over-estimation problems.
https://arxiv.org/abs/2601.06103
Academic Papers
svg
11296f2acd6eba99af8a0faa10c0644e1f4a3598bfb7acecec7614fbd88a8ca4
2026-01-13T00:00:00-05:00
Comment on arXiv:2511.21731v1: Identifying Quantum Structure in AI Language: Evidence for Evolutionary Convergence of Human and Artificial Cognition
arXiv:2601.06104v1 Announce Type: new Abstract: This note is a friendly technical check of arXiv:2511.21731v1. I highlight a few places where the manuscript's interpretation of (i) the reported CHSH/Bell-type calculations and (ii) Bose--Einstein (BE) fits to rank-frequency data seems to go beyond what the stated procedures can firmly support. I also point out one internal inconsistency in the "energy-level spacing" analogy. The aim is constructive: to keep the interesting empirical observations, while making clear what they do (and do not) imply about quantum entanglement in the usual Hilbert-space sense, especially when "energy" is defined by rank.
https://arxiv.org/abs/2601.06104
Academic Papers
svg
91caacd86092d47fac79265b4821ba6106f0b52aa02140dfbb83097291370c1e
2026-01-13T00:00:00-05:00
Australian Bushfire Intelligence with AI-Driven Environmental Analytics
arXiv:2601.06105v1 Announce Type: new Abstract: Bushfires are among the most destructive natural hazards in Australia, causing significant ecological, economic, and social damage. Accurate prediction of bushfire intensity is therefore essential for effective disaster preparedness and response. This study examines the predictive capability of spatio-temporal environmental data for identifying high-risk bushfire zones across Australia. We integrated historical fire events from NASA FIRMS, daily meteorological observations from Meteostat, and vegetation indices such as the Normalized Difference Vegetation Index (NDVI) from Google Earth Engine for the period 2015-2023. After harmonizing the datasets using spatial and temporal joins, we evaluated several machine learning models, including Random Forest, XGBoost, LightGBM, a Multi-Layer Perceptron (MLP), and an ensemble classifier. Under a binary classification framework distinguishing 'low' and 'high' fire risk, the ensemble approach achieved an accuracy of 87%. The results demonstrate that combining multi-source environmental features with advanced machine learning techniques can produce reliable bushfire intensity predictions, supporting more informed and timely disaster management.
https://arxiv.org/abs/2601.06105
Academic Papers
svg
fa7319cf4b3571da6b55648e021ace46addbd4cfc6a190b4d94491e6b8743e62
2026-01-13T00:00:00-05:00
Judge Model for Large-scale Multimodality Benchmarks
arXiv:2601.06106v1 Announce Type: new Abstract: We propose a dedicated multimodal Judge Model designed to provide reliable, explainable evaluation across a diverse suite of tasks. Our benchmark spans text, audio, image, and video modalities, drawing from carefully sampled public datasets with fixed seeds to ensure reproducibility and minimize train-test leakage. Instead of simple scoring, our framework aggregates multimodal judgments, analyzes the quality and reasoning consistency of model outputs, and generates diagnostic feedback. We evaluate several MLLMs, including Gemini 2.5, Phi 4, and Qwen 2.5, across 280 multimodal samples and compare judge model assessments with human annotators. Results show strong alignment between the Judge Model and human scores, demonstrating its potential as a scalable, interpretable evaluation pipeline for future multimodal AI research.
https://arxiv.org/abs/2601.06106
Academic Papers
svg
e797e1390dbed112965b076b22e1731d30e0e65723d2f16c38f336e4b87be77b
2026-01-13T00:00:00-05:00
From RLHF to Direct Alignment: A Theoretical Unification of Preference Learning for Large Language Models
arXiv:2601.06108v1 Announce Type: new Abstract: Aligning large language models (LLMs) with human preferences has become essential for safe and beneficial AI deployment. While Reinforcement Learning from Human Feedback (RLHF) established the dominant paradigm, a proliferation of alternatives -- Direct Preference Optimization (DPO), Identity Preference Optimization (IPO), Kahneman-Tversky Optimization (KTO), Simple Preference Optimization (SimPO), and many others -- has left practitioners without clear guidance on method selection. This survey provides a theoretical unification of preference learning methods, revealing that the apparent diversity reduces to principled choices along three orthogonal axes: (I) Preference Model (what likelihood model underlies the objective), (II) Regularization Mechanism (how deviation from reference policies is controlled), and (III) Data Distribution (online vs. offline learning and coverage requirements). We formalize each axis with precise definitions and theorems, establishing key results including the coverage separation between online and offline methods, scaling laws for reward overoptimization, and conditions under which direct alignment methods fail. Our analysis reveals that failure modes -- length hacking, mode collapse, likelihood displacement -- arise from specific, predictable combinations of design choices. We synthesize empirical findings across 50+ papers and provide a practitioner's decision guide for method selection. The framework transforms preference learning from an empirical art into a theoretically grounded discipline.
https://arxiv.org/abs/2601.06108
Academic Papers
svg
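As a concrete instance of the three axes surveyed above, the standard DPO objective (as introduced by Rafailov et al.) combines a Bradley-Terry preference model (axis I), KL-anchored regularization through the reference policy (axis II), and offline preference pairs (axis III):

```latex
\mathcal{L}_{\mathrm{DPO}}(\theta)
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \left[\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)\right]
```

Here $y_w$ and $y_l$ are the preferred and dispreferred responses, $\sigma$ is the logistic function, and $\beta$ sets the strength of the implicit KL anchor to $\pi_{\mathrm{ref}}$; varying any one of these three ingredients recovers the other methods in the survey's taxonomy.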
d428d5e4455bab4805f69adff53efccfd67c62d218d52dfd40445c454b1d4233
2026-01-13T00:00:00-05:00
CBMAS: Cognitive Behavioral Modeling via Activation Steering
arXiv:2601.06109v1 Announce Type: new Abstract: Large language models (LLMs) often encode cognitive behaviors unpredictably across prompts, layers, and contexts, making them difficult to diagnose and control. We present CBMAS, a diagnostic framework for continuous activation steering, which extends cognitive bias analysis from discrete before/after interventions to interpretable trajectories. By combining steering vector construction with dense &#945;-sweeps, logit lens-based bias curves, and layer-site sensitivity analysis, our approach can reveal tipping points where small intervention strengths flip model behavior and show how steering effects evolve across layer depth. We argue that these continuous diagnostics offer a bridge between high-level behavioral evaluation and low-level representational dynamics, contributing to the cognitive interpretability of LLMs. Lastly, we provide a CLI and datasets for various cognitive behaviors at the project repository, https://github.com/shimamooo/CBMAS.
https://arxiv.org/abs/2601.06109
Academic Papers
svg
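The tipping-point diagnostic described in the abstract above can be illustrated with a toy alpha-sweep over a steered activation. Everything here, the activation, the steering vector, the linear readout, and the sigmoid scoring, is a hypothetical stand-in for the actual CBMAS pipeline:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 3-d residual activation, steering vector, and readout direction
h = [0.5, -1.0, 0.2]    # baseline activation at some layer site
v = [1.0, 0.8, -0.3]    # steering vector (e.g. mean difference of contrasts)
w = [0.6, 1.2, 0.4]     # "logit lens" readout for the behavior of interest

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Dense alpha-sweep: score(alpha) = sigmoid(w . (h + alpha * v))
alphas = [i / 100 for i in range(-300, 301)]
scores = [sigmoid(dot(w, h) + a * dot(w, v)) for a in alphas]

# Tipping point: first alpha where the behavior probability crosses 0.5
tip = next(a for a, s in zip(alphas, scores) if s >= 0.5)
```

Because the readout is linear in alpha, the bias curve is a smooth sigmoid and the tipping point is the alpha where the steered logit changes sign; in a real model the curve need not be monotone, which is exactly what the layer-site sensitivity analysis probes.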
7d617da96e3f43d445a8802c9ac5fa4e68a6ae5a90c160905a49b747b1458fcd
2026-01-13T00:00:00-05:00
Optimal Beamforming for Uplink Covert Communication in MIMO GEO Satellite-Terrestrial Systems
arXiv:2601.06110v1 Announce Type: new Abstract: This paper investigates the uplink covert communication in a multiple-input multiple-output (MIMO) satellite-terrestrial system consisting of an Earth station transmitter Alice, a geosynchronous Earth orbit (GEO) satellite receiver Bob, and multiple GEO satellite wardens around Bob, where each node in the system is equipped with an array of directional antennas. Based on beamforming and the default antenna orientation setting, we first propose a scheme for covert Alice-Bob uplink transmission. Under the perfect channel estimation scenario, we provide theoretical modeling for the system performance in terms of detection error probability (DEP), transmission outage probability (TOP) and covert rate (CR), and then explore the optimal beamforming (OB) design as well as the joint optimal beamforming and antenna orientation (JO-BA) design for CR maximization. We then extend our study to the imperfect channel estimation scenario, and conduct related performance modeling and OB/JO-BA designs for CR maximization. We also apply the techniques of semidefinite relaxation, alternating optimization, Rodrigues' rotation formula and 1-D search algorithm to develop efficient algorithms to solve the above optimization problems. Finally, extensive numerical results are presented to verify our theoretical results and to illustrate the efficiency of beamforming and antenna orientation design for supporting the uplink covert communication in MIMO GEO satellite-terrestrial systems.
https://arxiv.org/abs/2601.06110
Academic Papers
svg
727599a5a5369e116e29bfc051adfd0f3e74777760035187fe94afaddbf08fea
2026-01-13T00:00:00-05:00
LLM-Powered Social Digital Twins: A Framework for Simulating Population Behavioral Response to Policy Interventions
arXiv:2601.06111v1 Announce Type: new Abstract: Predicting how populations respond to policy interventions is a fundamental challenge in computational social science and public policy. Traditional approaches rely on aggregate statistical models that capture historical correlations but lack mechanistic interpretability and struggle with novel policy scenarios. We present a general framework for constructing Social Digital Twins - virtual population replicas where Large Language Models (LLMs) serve as cognitive engines for individual agents. Each agent, characterized by demographic and psychographic attributes, receives policy signals and outputs multi-dimensional behavioral probability vectors. A calibration layer maps aggregated agent responses to observable population-level metrics, enabling validation against real-world data and deployment for counterfactual policy analysis. We instantiate this framework in the domain of pandemic response, using COVID-19 as a case study with rich observational data. On a held-out test period, our calibrated digital twin achieves a 20.7% improvement in macro-averaged prediction error over gradient boosting baselines across six behavioral categories. Counterfactual experiments demonstrate monotonic and bounded responses to policy variations, establishing behavioral plausibility. The framework is domain-agnostic: the same architecture applies to transportation policy, economic interventions, environmental regulations, or any setting where policy affects population behavior. We discuss implications for policy simulation, limitations of the approach, and directions for extending LLM-based digital twins beyond pandemic response.
https://arxiv.org/abs/2601.06111
Academic Papers
svg
9c2e21d905607ee436656ce942f8825bdca13641bace535bbf49d883f7c1dbf0
2026-01-13T00:00:00-05:00
ReliabilityBench: Evaluating LLM Agent Reliability Under Production-Like Stress Conditions
arXiv:2601.06112v1 Announce Type: new Abstract: Existing benchmarks for tool-using LLM agents primarily report single-run success rates and miss reliability properties required in production. We introduce \textbf{ReliabilityBench}, a benchmark for evaluating agent reliability across three dimensions: (i) consistency under repeated execution using $\mathrm{pass}^k$, (ii) robustness to semantically equivalent task perturbations at intensity $\epsilon$, and (iii) fault tolerance under controlled tool/API failures at intensity $\lambda$. ReliabilityBench contributes a unified reliability surface $R(k,\epsilon,\lambda)$, \textit{action metamorphic relations} that define correctness via end-state equivalence rather than text similarity, and a chaos-engineering-style fault injection framework (timeouts, rate limits, partial responses, schema drift). We evaluate two models (Gemini 2.0 Flash, GPT-4o) and two agent architectures (ReAct, Reflexion) across four domains (scheduling, travel, customer support, e-commerce) over 1,280 episodes. Perturbations alone reduce success from 96.9% at $\epsilon=0$ to 88.1% at $\epsilon=0.2$. Rate limiting is the most damaging fault in ablations. ReAct is more robust than Reflexion under combined stress, and Gemini 2.0 Flash achieves comparable reliability to GPT-4o at much lower cost. ReliabilityBench provides a systematic framework for assessing production readiness of LLM agents.
https://arxiv.org/abs/2601.06112
Academic Papers
svg
c9ef8d4635277b091ce55b46b8233cb3d07afd88ef638c3eba0abfc2ee2991d2
2026-01-13T00:00:00-05:00
Towards Infinite Length Extrapolation: A Unified Approach
arXiv:2601.06113v1 Announce Type: new Abstract: Large language models (LLMs) have revolutionized natural language processing, but their ability to process long sequences is fundamentally limited by the context window size during training. Existing length extrapolation methods often suffer from performance degradation or computational inefficiencies. We thereby use a unified framework that reinterprets positional encoding methods as a decomposition of the attention score into a multiplicative transformation and an additive bias. This perspective not only subsumes popular approaches such as relative position embeddings and attention-bias moderated approaches but also exposes their inherent limitations in handling long-range dependencies. To address these shortcomings, motivated by our framework, we introduce Adaptive Positional Encoding (APE), which leverages adaptive frequency modulation and an intricately designed decay bias that incorporates linear, logarithmic, and square-root terms. Our theoretical analysis establishes conditions for infinite-context extrapolation, ensuring that the softmax normalization remains well-defined over unbounded sequences while preserving long-distance correlations, entropy boundedness and gradient positional sensitivity. We substantiate our claims with an experimental case study on TinyStories dataset as well as a new synthetic dataset, \emph{Long Tiny Stories} featuring stories up to 32,000 words. Relevant code, dataset and model weights are available at https://anonymous.4open.science/r/Check-2DAD/.
https://arxiv.org/abs/2601.06113
Academic Papers
svg
3dfe4da43a7b6e3c484ede752f2f97c44cbdeaecbfd720b4c47738592a852595
2026-01-13T00:00:00-05:00
GroupSegment-SHAP: Shapley Value Explanations with Group-Segment Players for Multivariate Time Series
arXiv:2601.06114v1 Announce Type: new Abstract: Multivariate time-series models achieve strong predictive performance in healthcare, industry, energy, and finance, but how they combine cross-variable interactions with temporal dynamics remains unclear. SHapley Additive exPlanations (SHAP) are widely used for interpretation. However, existing time-series variants typically treat the feature and time axes independently, fragmenting structural signals formed jointly by multiple variables over specific intervals. We propose GroupSegment-SHAP (GS-SHAP), which constructs explanatory units as group-segment players based on cross-variable dependence and distribution shifts over time, and then quantifies each unit's contribution via Shapley attribution. We evaluate GS-SHAP across four real-world domains: human activity recognition, power-system forecasting, medical signal analysis, and financial time series, and compare it with KernelSHAP, TimeSHAP, SequenceSHAP, WindowSHAP, and TSHAP. GS-SHAP improves deletion-based faithfulness (DeltaAUC) by about 1.7x on average over time-series SHAP baselines, while reducing wall-clock runtime by about 40 percent on average under matched perturbation budgets. A financial case study shows that GS-SHAP identifies interpretable multivariate-temporal interactions among key market variables during high-volatility regimes.
https://arxiv.org/abs/2601.06114
Academic Papers
svg
d60bd19a5bfe55281a86dc667a9be514d17627146a017c0539db794fbefd3ec9
2026-01-13T00:00:00-05:00
Dreaming Is Not a Bug: A Jung-Inspired Dream Layer for Multi-Agent LLM Companions
arXiv:2601.06115v1 Announce Type: new Abstract: Inspired by a personal dream about knowledge-sharing barriers in an everyday hardware project, this paper proposes a Jung-inspired "Dream Layer" for LLM companions, reframing controlled offline hallucinations as a resource for learning and relationship-building rather than a mere reliability bug. Drawing on Jung's notion of the collective unconscious as a shared repository of archetypal forms, we introduce an Artificial Collective Unconscious (ACU): a shared dream pool where agents contribute de-identified, abstract Interaction Templates that are later re-instantiated as idiosyncratic Dream Narratives. The Dream Layer runs strictly offline: logic-enforcing modules are relaxed and sampling temperature is increased, yielding safe but deliberately bizarre narratives (e.g., travel sequences with mismatched currencies) that augment data for rare events and edge-case safety tests; to harness risk productively, we add a governance stack of strict abstraction, temporal delays, and ephemeral memory. Through behavioural simulations of everyday dialogue and long-horizon adaptation tasks, we show that the Dream Layer enables a critical decoupling: agents remain firm on safety constraints (e.g., security policies) while becoming flexible in narrative strategy (e.g., using shared archetypal metaphors to resolve deadlocks), conceptually reframing hallucination so that online, unmarked instances remain bugs, whereas bounded, marked, and delayed ones become a goldmine for synthetic scenarios and deepened companionship, echoing anti-overfitting dream mechanisms proposed in contemporary neuroscience.
https://arxiv.org/abs/2601.06115
Academic Papers
svg
7a7aadd9a99b861346d8795f9ab4756d103bddab87f17642388e57bc2c799cc4
2026-01-13T00:00:00-05:00
Structure-Aware Diversity Pursuit as an AI Safety Strategy against Homogenization
arXiv:2601.06116v1 Announce Type: new Abstract: Generative AI models reproduce the biases in the training data and can further amplify them through mode collapse. We refer to the resulting harmful loss of diversity as homogenization. Our position is that homogenization should be a primary concern in AI safety. We introduce xeno-reproduction as the strategy that mitigates homogenization. For auto-regressive LLMs, we formalize xeno-reproduction as a structure-aware diversity pursuit. Our contribution is foundational, intended to open an essential line of research and invite collaboration to advance diversity.
https://arxiv.org/abs/2601.06116
Academic Papers
svg
afbde52177f961f40204eb60e26bb527545db755e8131b2ebd45d6e430d39428
2026-01-13T00:00:00-05:00
Stress Testing Machine Learning at $10^{10}$ Scale: A Comprehensive Study of Adversarial Robustness on Algebraically Structured Integer Streams
arXiv:2601.06117v1 Announce Type: new Abstract: This paper presents a large-scale stress test of machine learning systems using structured mathematical data as a benchmark. We evaluate the robustness of tree-based classifiers at an unprecedented scale, utilizing ten billion deterministic samples and five billion adversarial counterexamples. Our framework introduces three primary contributions: first, a high-throughput pipeline that reformulates Pythagorean triple generation into a single-parameter index stream, significantly improving computational efficiency over classical methods; second, the Hypothesis-driven Negative Dataset (HND), which categorizes nine classes of adversarial attacks designed to exploit both arithmetic precision and structural patterns; and third, a fault-tolerant infrastructure for reliable large-scale training. Experimental results demonstrate that while LightGBM achieves 99.99% accuracy, feature attribution reveals that the model prioritizes underlying quadratic patterns over direct algebraic verification. These findings suggest that learned heuristics can effectively identify structural representations in numerical data, potentially serving as efficient preprocessors for formal verification methods.
https://arxiv.org/abs/2601.06117
Academic Papers
svg
d3baa7db5b4150465352e40331b845247535f92d587ca4342d8fcd72da5b004d
2026-01-13T00:00:00-05:00
Beyond Reproducibility: Token Probabilities Expose Large Language Model Nondeterminism
arXiv:2601.06118v1 Announce Type: new Abstract: The execution of Large Language Models (LLMs) has been shown to produce nondeterministic results when run on Graphics Processing Units (GPUs), even when they are configured to produce deterministic results. This is due to the finite precision effects of the arithmetic operations, which depend on the order in which they are executed. This order, in turn, depends on the processes that are running concurrently on the GPU. Previous studies have focused on the impact of nondeterminism on the text generated by the LLMs or on proposing mechanisms to achieve deterministic execution. This work takes a closer look at nondeterminism by analyzing the variations in the token probabilities, not in the generated text. Interestingly, all the models evaluated have similar results in both the trends and the actual values of the variations of the probabilities. In particular, the results show that the effects of nondeterminism are significant for token probabilities that are in the range of 0.1 to 0.9, while they are much smaller when the probabilities are close to 0 or 1. This has significant implications for our understanding of nondeterminism. The first is that nondeterminism will likely have a non-negligible impact on generated text when the temperature is not zero, as it introduces significant variations in the token probabilities except when they are close to 0 or 1. The second is that it suggests that all models have similar nondeterministic variations at the token probability level. Therefore, different variations in the performance of the generated text, for example, when measuring accuracy on a benchmark, seem to come from different token probabilities or response lengths. A third implication is that we may be able to estimate the impact of nondeterminism by running a single inference and analyzing the token-level probabilities, instead of having to run the same inference many times.
https://arxiv.org/abs/2601.06118
Academic Papers
svg
31477993573439e15ca91be1014e939f2228d906ccd5a491094f4e063045bb77
2026-01-13T00:00:00-05:00
L2CU: Learning to Complement Unseen Users
arXiv:2601.06119v1 Announce Type: new Abstract: Recent research highlights the potential of machine learning models to learn to complement (L2C) human strengths; however, generalizing this capability to unseen users remains a significant challenge. Existing L2C methods oversimplify the interaction between humans and AI by relying on a single, global user model that neglects individual user variability, leading to suboptimal cooperative performance. Addressing this, we introduce L2CU, a novel L2C framework for human-AI cooperative classification with unseen users. Given sparse and noisy user annotations, L2CU identifies representative annotator profiles capturing distinct labeling patterns. By matching unseen users to these profiles, L2CU leverages profile-specific models to complement the user and achieve superior joint accuracy. We evaluate L2CU on five datasets (CIFAR-10N, CIFAR-10H, Fashion-MNIST-H, Chaoyang, and AgNews), demonstrating its effectiveness as a model-agnostic solution for improving human-AI cooperative classification.
https://arxiv.org/abs/2601.06119
Academic Papers
svg
ed2ffd025a3a60a415b94b5d17854e722c6a5b6a4271756ec970f02c67e6ca68
2026-01-13T00:00:00-05:00
Range-Coder with fast Adaptation and Table-Based Decoding
arXiv:2601.06120v1 Announce Type: new Abstract: The transmission or storage of signals typically involves data compression. The final processing step in compression systems is generally an entropy coding stage, which converts symbols into a bit stream based on their probability distribution. A distinct class of entropy coding methods operates not by mapping input symbols to discrete codewords but by operating on intervals or ranges. This approach enables a more accurate approximation of the source entropy, particularly for sources with highly skewed or varying symbol distributions. Representative techniques in this category include traditional arithmetic coding, range coding, and methods based on asymmetric numeral systems (ANS). The complexity of these methods depends mainly on three processing steps: the core encoding and decoding routines performing the calculations, the interval-based determination of the correct symbol at the decoder, and the effort of keeping up to date with the varying symbol distribution. The interval-based symbol determination at the decoder typically demands a search procedure. Previous literature has shown that the search can be replaced by a table-based approach with only O(1) complexity, but with the side effect that adapting the symbol statistics becomes infeasible because of the high time consumption of adapting the table. We propose an adaptation process using a ring-buffer technique that enables adaptive table-based decoding as well as the replacement of a division by a bit-shift operation in the encoder and decoder core routines. This accelerates the coding process significantly. In static (non-adaptive) mode, the coding time can be reduced by about 40 percent. In adaptive mode, the proposed technique is faster than alternative approaches for alphabets of about 12 to 64 different symbols when comparing the overall encoder+decoder time.
https://arxiv.org/abs/2601.06120
Academic Papers
svg
05ebd20497e2c83869b7a86f03733e618c064c36e5b040ebf66eb6902793fb53
2026-01-13T00:00:00-05:00
Prompt Engineering for Responsible Generative AI Use in African Education: A Report from a Three-Day Training Series
arXiv:2601.06121v1 Announce Type: new Abstract: Generative artificial intelligence (GenAI) tools are increasingly adopted in education, yet many educators lack structured guidance on responsible and context-sensitive prompt engineering, particularly in African and other resource-constrained settings. This case report documents a three-day online professional development programme organised by Generative AI for Education and Research in Africa (GenAI-ERA), designed to strengthen educators' and researchers' capacity to apply prompt engineering ethically for academic writing, teaching, and research. The programme engaged 468 participants across multiple African countries, including university educators, postgraduate students, and researchers. The training followed a scaffolded progression from foundational prompt design to applied and ethical strategies, including persona-guided interactions. Data sources comprised registration surveys, webinar interaction records, facilitator observations, and session transcripts, analysed using descriptive statistics and computationally supported qualitative techniques. Findings indicate that participants increasingly conceptualised prompt engineering as a form of AI literacy requiring ethical awareness, contextual sensitivity, and pedagogical judgement rather than technical skill alone. The case highlights persistent challenges related to access, locally relevant training materials, and institutional support. The report recommends sustained professional development and the integration of prompt literacy into curricula to support responsible GenAI use in African education systems.
https://arxiv.org/abs/2601.06121
Academic Papers
svg
478c29ada1265d4835f27dbe259ebdf8cf3c9696f2c522bc0aab7ea84cb6d4eb
2026-01-13T00:00:00-05:00
COVR: Collaborative Optimization of VLMs and RL Agent for Visual-Based Control
arXiv:2601.06122v1 Announce Type: new Abstract: Visual reinforcement learning (RL) suffers from poor sample efficiency due to high-dimensional observations in complex tasks. While existing works have shown that vision-language models (VLMs) can assist RL, they often focus on knowledge distillation from the VLM to RL, overlooking the potential of RL-generated interaction data to enhance the VLM. To address this, we propose COVR, a collaborative optimization framework that enables the mutual enhancement of the VLM and RL policies. Specifically, COVR fine-tunes the VLM with RL-generated data to enhance the semantic reasoning ability consistent with the target task, and uses the enhanced VLM to further guide policy learning via action priors. To improve fine-tuning efficiency, we introduce two key modules: (1) an Exploration-Driven Dynamic Filter module that preserves valuable exploration samples using adaptive thresholds based on the degree of exploration, and (2) a Return-Aware Adaptive Loss Weight module that improves the stability of training by quantifying the inconsistency of sampling actions via return signals of RL. We further design a progressive fine-tuning strategy to reduce resource consumption. Extensive experiments show that COVR achieves strong performance across various challenging visual control tasks.
https://arxiv.org/abs/2601.06122
Academic Papers
svg
db7b704106870a8625bc2728dbf2d6fb580a0c22db53db909e712c605b029243
2026-01-13T00:00:00-05:00
Latent Space Communication via K-V Cache Alignment
arXiv:2601.06123v1 Announce Type: new Abstract: Solving increasingly complex problems with large language models (LLMs) necessitates a move beyond individual models and towards multi-model systems that can effectively collaborate. While text has traditionally served as the medium for inter-model communication, a richer and more efficient exchange is possible if models can access each other's internal states directly. In this paper, we propose learning a shared representation space that aligns the k-v caches of multiple models, creating a high-bandwidth channel for collaboration without altering the underlying pre-trained parameters. We do so by augmenting each model with adapters to translate its state into and out of this shared space. Via a suite of experiments with Gemma-2 models, we demonstrate that this approach not only enables seamless inter-model communication but also improves individual model performance. We also show that the shared space allows for the direct transfer of learned skills, such as soft prompts, between different models. Our work represents a significant step towards a future where models can fluidly share knowledge and capabilities.
https://arxiv.org/abs/2601.06123
Academic Papers
svg
a7a7ce0c013a16ba93ffa0cb028e5a3cc706253b1d2169073650954225d6cc5b
2026-01-13T00:00:00-05:00
Learning Minimally-Congested Drive Times from Sparse Open Networks: A Lightweight RF-Based Estimator for Urban Roadway Operations
arXiv:2601.06124v1 Announce Type: new Abstract: Accurate roadway travel-time prediction is foundational to transportation systems analysis, yet widespread reliance on either data-intensive congestion models or overly naïve heuristics limits scalability and practical adoption in engineering workflows. This paper develops a lightweight estimator for minimally-congested car travel times that integrates open road-network data, speed constraints, and sparse control/turn features within a random forest framework to correct bias from shortest-path traversal-time baselines. Using an urban testbed, the pipeline: (i) constructs drivable networks from volunteered geographic data; (ii) solves Dijkstra routes minimizing edge traversal time; (iii) derives sparse operational features (signals, stops, crossings, yield, roundabouts; left/right/slight/U-turn counts); and (iv) trains a regression ensemble on limited high-quality reference times to generalize predictions beyond the training set. Out-of-sample evaluation demonstrates marked improvements over traversal-time baselines across mean absolute error, mean absolute percentage error, mean squared error, relative bias, and explained variance, with no significant mean bias under minimally congested conditions and consistent k-fold stability indicating negligible overfitting. The resulting approach offers a practical middle ground for transportation engineering: it preserves point-to-point fidelity at metropolitan scale, reduces resource requirements, and supplies defensible performance estimates where congestion feeds are inaccessible or cost-prohibitive, supporting planning, accessibility, and network performance applications under low-traffic operating regimes.
https://arxiv.org/abs/2601.06124
Academic Papers
svg
f91c84dd51f686990930b4d027385ea727be13be47c2838a2326b9a861c0364b
2026-01-13T00:00:00-05:00
Extended Target Adaptive Beamforming for ISAC: A Perspective of Predictive Error Ellipse
arXiv:2601.06125v1 Announce Type: new Abstract: Utilizing communication signals to extract motion parameters has emerged as a key direction in Vehicle-to-Everything (V2X) networks. Accurately modeling the relationship between communication signals and sensing performance is critical for the advancement of such systems. Unlike prior work that relies primarily on qualitative analysis, this paper derives the Cramér-Rao Bound (CRB) for radar parameter estimation in the context of Orthogonal Frequency Division Multiplexing (OFDM) waveforms and Uniform Planar Array (UPA) configurations. Recognizing that vehicles may act as extended targets, we propose two New Radio (NR)-V2X-compatible beamforming schemes tailored to different phases of the communication process. During the initial beam establishment phase, we develop a beamforming approach based on the union of predictive error ellipses, which enhances scatterer localization through temporally assisted beam training. In the beam adjustment phase, we introduce an adaptive narrowest-beam strategy that leverages the positions of scatterers and the communication receiver (CR), enabling effective tracking with reduced complexity. The beam design problem is addressed using the minimum enclosing ellipse algorithm and tailored antenna control methods. Simulation results validate the proposed approach, showing up to a 32.4% improvement in achievable rate with a 32×32 transmit antenna array and a 5.2% gain with an 8×8 array, compared to conventional beam sweeping under identical SNR conditions.
https://arxiv.org/abs/2601.06125
Academic Papers
svg
f6298a9740a47bfc1e400a5da1571114d04302e5dcc73d4ae4939ac6f72062f0
2026-01-13T00:00:00-05:00
NL2Dashboard: A Lightweight and Controllable Framework for Generating Dashboards with LLMs
arXiv:2601.06126v1 Announce Type: new Abstract: While Large Language Models (LLMs) have demonstrated remarkable proficiency in generating standalone charts, synthesizing comprehensive dashboards remains a formidable challenge. Existing end-to-end paradigms, which typically treat dashboard generation as a direct code generation task (e.g., raw HTML), suffer from two fundamental limitations: representation redundancy due to massive tokens spent on visual rendering, and low controllability caused by the entanglement of analytical reasoning and presentation. To address these challenges, we propose NL2Dashboard, a lightweight framework grounded in the principle of Analysis-Presentation Decoupling. We introduce a structured intermediate representation (IR) that encapsulates the dashboard's content, layout, and visual elements. Therefore, it confines the LLM's role to data analysis and intent translation, while offloading visual synthesis to a deterministic rendering engine. Building upon this framework, we develop a multi-agent system in which the IR-driven algorithm is instantiated as a suite of tools. Comprehensive experiments conducted with this system demonstrate that NL2Dashboard significantly outperforms state-of-the-art baselines across diverse domains, achieving superior visual quality, significantly higher token efficiency, and precise controllability in both generation and modification tasks.
https://arxiv.org/abs/2601.06126
Academic Papers
svg
17dc302dd4c928cba30bbca174b3821bc7b572d9e587c5df14caefdf5354779c
2026-01-13T00:00:00-05:00
AIS-CycleGen: A CycleGAN-Based Framework for High-Fidelity Synthetic AIS Data Generation and Augmentation
arXiv:2601.06127v1 Announce Type: new Abstract: Automatic Identification System (AIS) data are vital for maritime domain awareness, yet they often suffer from domain shifts, data sparsity, and class imbalance, which hinder the performance of predictive models. In this paper, we propose a robust data augmentation method, AISCycleGen, based on Cycle-Consistent Generative Adversarial Networks (CycleGAN), which is tailored for AIS datasets. Unlike traditional methods, AISCycleGen leverages unpaired domain translation to generate high-fidelity synthetic AIS data sequences without requiring paired source-target data. The framework employs a 1D convolutional generator with adaptive noise injection to preserve the spatiotemporal structure of AIS trajectories, enhancing the diversity and realism of the generated data. To demonstrate its efficacy, we apply AISCycleGen to several baseline regression models, showing improvements in performance across various maritime domains. The results indicate that AISCycleGen outperforms contemporary GAN-based augmentation techniques, achieving a PSNR value of 30.5 and an FID score of 38.9. These findings underscore AISCycleGen's potential as an effective and generalizable solution for augmenting AIS datasets, improving downstream model performance in real-world maritime intelligence applications.
https://arxiv.org/abs/2601.06127
Academic Papers
svg
70984a887a3d7eef98f25117fae0d01c43033abe0a19584afc149496ead64f46
2026-01-13T00:00:00-05:00
Graph-Based Analysis of AI-Driven Labor Market Transitions: Evidence from 10,000 Egyptian Jobs and Policy Implications
arXiv:2601.06129v1 Announce Type: new Abstract: How many workers displaced by automation can realistically transition to safer jobs? We answer this using a validated knowledge graph of 9,978 Egyptian job postings, 19,766 skill activities, and 84,346 job-skill relationships (0.74% error rate). While 20.9% of jobs face high automation risk, we find that only 24.4% of at-risk workers have viable transition pathways--defined by $\geq$3 shared skills and $\geq$50% skill transfer. The remaining 75.6% face a structural mobility barrier requiring comprehensive reskilling, not incremental upskilling. Among 4,534 feasible transitions, process-oriented skills emerge as the highest-leverage intervention, appearing in 15.6% of pathways. These findings challenge optimistic narratives of seamless workforce adaptation and demonstrate that emerging economies require active pathway creation, not passive skill matching.
https://arxiv.org/abs/2601.06129
Academic Papers
svg
d5e59d833927976f85626afb236d6a9e4c12eac916c775833313b3c5516c6e70
2026-01-13T00:00:00-05:00
An evaluation of LLMs for political bias in Western media: Israel-Hamas and Ukraine-Russia wars
arXiv:2601.06132v1 Announce Type: new Abstract: Political bias in media plays a critical role in shaping public opinion, voter behaviour, and broader democratic discourse. Subjective opinions and political bias can be found in media sources, such as newspapers, depending on their funding mechanisms and alliances with political parties. Automating the detection of political biases in media content can limit biases in elections. The impact of large language models (LLMs) in politics and media studies is becoming prominent. In this study, we utilise LLMs to compare the left-wing, right-wing, and neutral political opinions expressed in the Guardian and BBC. We review newspaper reporting that includes significant events such as the Russia-Ukraine war and the Hamas-Israel conflict. We analyse the proportion of each opinion to find the bias under different LLMs, including BERT, Gemini, and DeepSeek. Our results show that after the outbreak of the wars, the political bias of Western media shifts towards the left wing, and each LLM gives a different result. DeepSeek consistently showed a stable Left-leaning tendency, while BERT and Gemini remained closer to the Centre. The BBC and The Guardian showed distinct reporting behaviours across the two conflicts. In the Russia-Ukraine war, both outlets maintained relatively stable positions; however, in the Israel-Hamas conflict, we identified larger political bias shifts, particularly in Guardian coverage, suggesting a more event-driven pattern of reporting bias. These variations suggest that LLMs are shaped not only by their training data and architecture, but also by underlying worldviews with associated political biases.
https://arxiv.org/abs/2601.06132
Academic Papers
svg
58d00ac8b395988d84da8f91009b92712e58f6a813ea019d994971a1b30ac82c
2026-01-13T00:00:00-05:00
A Review of Online Diffusion Policy RL Algorithms for Scalable Robotic Control
arXiv:2601.06133v1 Announce Type: new Abstract: Diffusion policies have emerged as a powerful approach for robotic control, demonstrating superior expressiveness in modeling multimodal action distributions compared to conventional policy networks. However, their integration with online reinforcement learning remains challenging due to fundamental incompatibilities between diffusion model training objectives and standard RL policy improvement mechanisms. This paper presents the first comprehensive review and empirical analysis of current Online Diffusion Policy Reinforcement Learning (Online DPRL) algorithms for scalable robotic control systems. We propose a novel taxonomy that categorizes existing approaches into four distinct families -- Action-Gradient, Q-Weighting, Proximity-Based, and Backpropagation Through Time (BPTT) methods -- based on their policy improvement mechanisms. Through extensive experiments on a unified NVIDIA Isaac Lab benchmark encompassing 12 diverse robotic tasks, we systematically evaluate representative algorithms across five critical dimensions: task diversity, parallelization capability, diffusion step scalability, cross-embodiment generalization, and environmental robustness. Our analysis identifies key findings regarding the fundamental trade-offs inherent in each algorithmic family, particularly concerning sample efficiency and scalability. Furthermore, we reveal critical computational and algorithmic bottlenecks that currently limit the practical deployment of online DPRL. Based on these findings, we provide concrete guidelines for algorithm selection tailored to specific operational constraints and outline promising future research directions to advance the field toward more general and scalable robotic learning systems.
https://arxiv.org/abs/2601.06133
Academic Papers
svg
975adf639558f03c1dc63acdaa068acba70b6e255fa7bd2caa14e4d0bc244ed6
2026-01-13T00:00:00-05:00
DeeperBrain: A Neuro-Grounded EEG Foundation Model Towards Universal BCI
arXiv:2601.06134v1 Announce Type: new Abstract: Electroencephalography (EEG) foundation models hold significant promise for universal Brain-Computer Interfaces (BCIs). However, existing approaches often rely on end-to-end fine-tuning and exhibit limited efficacy under frozen-probing protocols, lacking the intrinsic universality required for broad generalization. This limitation stems from adapting general-purpose sequence architectures that overlook the biophysical and dynamical principles of neural activity. To bridge this gap, we propose DeeperBrain, a neuro-grounded foundation model integrating domain-specific inductive biases into its model design and learning objectives. Architecturally, DeeperBrain incorporates a volume conduction-aware channel encoding to model spatial mixing via 3D geometry, and a neurodynamics-aware temporal encoding capturing slow adaptations using oscillatory and exponential bases. For pretraining, we introduce a dual-objective strategy combining Masked EEG Reconstruction (MER) for local fidelity and Neurodynamics Statistics Prediction (NSP). NSP enforces alignment with macroscopic brain states by predicting interpretable order parameters, including spectral power, functional connectivity, cross-frequency coupling, and dynamic complexity. Extensive experiments demonstrate that DeeperBrain achieves state-of-the-art or highly competitive performance under end-to-end fine-tuning. Crucially, it maintains superior efficacy under a rigorous frozen-probing protocol, verifying that embedding neuroscientific first principles endows learned representations with the intrinsic universality essential for universal BCI. The code will be publicly available.
https://arxiv.org/abs/2601.06134
Academic Papers
svg
3e72d6035f05705492fc284bba100e8861200f9149e7ea7f2e2dac143776f9c5
2026-01-13T00:00:00-05:00
Attention in Geometry: Scalable Spatial Modeling via Adaptive Density Fields and FAISS-Accelerated Kernels
arXiv:2601.06135v1 Announce Type: new Abstract: This work introduces Adaptive Density Fields (ADF), a geometric attention framework that formulates spatial aggregation as a query-conditioned, metric-induced attention operator in continuous space. By reinterpreting spatial influence as geometry-preserving attention grounded in physical distance, ADF bridges concepts from adaptive kernel methods and attention mechanisms. Scalability is achieved via FAISS-accelerated inverted file indices, treating approximate nearest-neighbor search as an intrinsic component of the attention mechanism. We demonstrate the framework through a case study on aircraft trajectory analysis in the Chengdu region, extracting trajectory-conditioned Zones of Influence (ZOI) to reveal recurrent airspace structures and localized deviations.
https://arxiv.org/abs/2601.06135
Academic Papers
svg
032caaf6704f758771b75333a9cfd00f17c601b54b675e7fd60a8ddce049cbf7
2026-01-13T00:00:00-05:00
RainBalance: Alleviating Dual Imbalance in GNSS-based Precipitation Nowcasting via Continuous Probability Modeling
arXiv:2601.06137v1 Announce Type: new Abstract: Global navigation satellite systems (GNSS) station-based Precipitation Nowcasting aims to predict rainfall within the next 0-6 hours by leveraging a GNSS station's historical observations of precipitation, GNSS-PWV, and related meteorological variables, which is crucial for disaster mitigation and real-time decision-making. In recent years, time-series forecasting approaches have been extensively applied to GNSS station-based precipitation nowcasting. However, the highly imbalanced temporal distribution of precipitation, marked not only by the dominance of non-rainfall events but also by the scarcity of extreme precipitation samples, significantly limits model performance in practical applications. To address the dual imbalance problem in precipitation nowcasting, we propose a continuous probability modeling-based framework, RainBalance. This plug-and-play module performs clustering for each input sample to obtain its cluster probability distribution, which is further mapped into a continuous latent space via a variational autoencoder (VAE). By learning in this continuous probabilistic space, the task is reformulated from fitting single and imbalance-prone precipitation labels to modeling continuous probabilistic label distributions, thereby alleviating the imbalance issue. We integrate this module into multiple state-of-the-art models and observe consistent performance gains. Comprehensive statistical analysis and ablation studies further validate the effectiveness of our approach.
https://arxiv.org/abs/2601.06137
Academic Papers
svg
e82712db3d87200e843aa7fbb3df42521d6f9b6e1a3cebc49c0de6f84a2d6cfb
2026-01-13T00:00:00-05:00
Low-Back Pain Physical Rehabilitation by Movement Analysis in Clinical Trial
arXiv:2601.06138v1 Announce Type: new Abstract: To allow the development and assessment of physical rehabilitation by an intelligent tutoring system, we propose a medical dataset of clinical patients carrying out low-back-pain rehabilitation exercises, together with a benchmark of state-of-the-art human movement analysis algorithms. This dataset is valuable because it includes rehabilitation motions recorded in a clinical setting with patients in their rehabilitation program. This paper introduces the Keraal dataset, a clinically collected dataset to enable intelligent tutoring systems (ITS) for rehabilitation. It addresses four challenges in exercise monitoring: motion assessment, error recognition, spatial localization, and temporal localization.
https://arxiv.org/abs/2601.06138
Academic Papers
svg
de1492b3b5abf63adf159c66f0d07ce8b794646cbff038a9cac5f03a8dfb6494
2026-01-13T00:00:00-05:00
Perspective: The creation of "Newsgames" as a teaching method-Empirical observations
arXiv:2601.06139v1 Announce Type: new Abstract: This chapter reports an empirical teaching experience integrating newsgame creation (serious games addressing current events and contributing to public debate) into an introductory game design course for engineering students. From 2010 to 2012, around 80 students produced 17 games on diverse news topics (e.g., H1N1 influenza, Megaupload shutdown, Tunisian Revolution, Haiti earthquake), relying on online journalistic sources and using accessible development tools suited to mixed programming backgrounds (RPG Maker, The Games Factory 2, Flash, Java). The authors argue that designing newsgames fosters learning outcomes beyond technical design skills: (1) thorough information seeking and documentation of real-world issues, (2) exchange and confrontation of viewpoints through contrasting game interpretations of the same event, and (3) classroom debates about the legitimacy and limits of video games as an expressive medium for sensitive topics. The paper concludes that newsgame design can support the development of reasoning skills (evidence-based argumentation and perspective-taking) and suggests extending the approach to other serious game types to further explore its educational potential.
https://arxiv.org/abs/2601.06139
Academic Papers
svg
c3efa3d5b4d8cdbfa5e2185a914fb2d75bcd9e59f85bc824acb822ad57df8634
2026-01-13T00:00:00-05:00
Causal and Federated Multimodal Learning for Cardiovascular Risk Prediction under Heterogeneous Populations
arXiv:2601.06140v1 Announce Type: new Abstract: Cardiovascular disease (CVD) continues to be the major cause of death globally, calling for predictive models that not only handle diverse and high-dimensional biomedical signals but also maintain interpretability and privacy. We create a unified multimodal learning framework that integrates cross-modal transformers with graph neural networks and causal representation learning to measure personalized CVD risk. The model combines genomic variation, cardiac MRI, ECG waveforms, wearable streams, and structured EHR data to predict risk while also implementing causal invariance constraints across different clinical subpopulations. To maintain transparency, we employ SHAP-based feature attribution, counterfactual explanations, and causal latent alignment for understandable risk factors. In addition, we situate the design in a federated, privacy-preserving optimization protocol and establish guarantees for convergence, calibration, and uncertainty quantification under distributional shift. Experimental studies based on large-scale biobank and multi-institutional datasets reveal state-of-the-art discrimination and robustness, exhibiting fair performance across demographic strata and clinically distinct cohorts. This study paves the way for a principled approach to clinically trustworthy, interpretable, and privacy-respecting CVD prediction at the population level.
https://arxiv.org/abs/2601.06140
Academic Papers
svg
5aa7b97e36d2cdec0d365f7be64d1c741815605d864fe16f291181b501f635e7
2026-01-13T00:00:00-05:00
An LLM-Powered Assessment Retrieval-Augmented Generation (RAG) For Higher Education
arXiv:2601.06141v1 Announce Type: new Abstract: Providing timely, consistent, and high-quality feedback in large-scale higher education courses remains a persistent challenge, often constrained by instructor workload and resource limitations. This study presents an LLM-powered, agentic assessment system built on a Retrieval-Augmented Generation (RAG) architecture to address these challenges. The system integrates a large language model with a structured retrieval mechanism that accesses rubric criteria, exemplar essays, and instructor feedback to generate contextually grounded grades and formative comments. A mixed-methods evaluation was conducted using 701 student essays, combining quantitative analyses of inter-rater reliability, scoring alignment, and consistency with instructor assessments, alongside qualitative evaluation of feedback quality, pedagogical relevance, and student support. Results demonstrate that the RAG system can produce reliable, rubric-aligned feedback at scale, achieving 94--99% agreement with human evaluators, while also enhancing students' opportunities for self-regulated learning and engagement with assessment criteria. The discussion highlights both pedagogical limitations, including potential constraints on originality and feedback dialogue, and the transformative potential of RAG systems to augment instructors' capabilities, streamline assessment workflows, and support scalable, adaptive learning environments. This research contributes empirical evidence for the application of agentic AI in higher education, offering a scalable and pedagogically informed model for enhancing feedback accessibility, consistency, and quality.
https://arxiv.org/abs/2601.06141
Academic Papers
svg
3ec037320cbad7e7f17b215a0e70fdf0deb58565c096e7260a99d1dc012e22b0
2026-01-13T00:00:00-05:00
Is Sanskrit the most token-efficient language? A quantitative study using GPT, Gemini, and SentencePiece
arXiv:2601.06142v1 Announce Type: new Abstract: Tokens are the basic units of Large Language Models (LLMs). LLMs rely on tokenizers to segment text into these tokens, and tokenization is the primary determinant of computational and inference cost. Sanskrit, one of the oldest languages, is hypothesized to express more meaning per token due to its morphology and grammar rules; however, no prior work has quantified this. We use a dataset of 701 parallel verses of the Bhagavad Gita, which comprises three languages (Sanskrit, English, and Hindi) along with a transliteration of the Sanskrit into English. We test tokenizers including SentencePiece (SPM), older GPT models, and the latest generation tokenizers from Gemini and GPT. We use metrics of token count, characters per token (token efficiency), and tokens per character (token cost). Results show a ~2x difference in token counts between Sanskrit and English/Hindi under the unbiased SPM baseline. English/Hindi translations of Sanskrit commentary resulted in an approximately 20x increase in token count. GPT o200k base (latest, used by GPT-4o) and Gemini (latest) reduce bias by a significant degree compared to GPT cl100k base (used until GPT-4), but still fail to fully capture Sanskrit's compactness. This matters because there might be a penalty bias for non-English users, which inflates the token count. This research provides a foundation for improving future tokenizer design and shows the potential of Sanskrit for highly compact encoding, saving on cost while speeding up training and inference. The code and dataset are available at https://github.com/anshulkr713/sanskrit-token-efficiency
https://arxiv.org/abs/2601.06142
Academic Papers
svg
b4f976e8f1c0c9390d12b0e55b245ff33717104c4830b236b0498e1b37dea510
2026-01-13T00:00:00-05:00
The Patient/Industry Trade-off in Medical Artificial Intelligence
arXiv:2601.06144v1 Announce Type: new Abstract: Artificial intelligence (AI) in healthcare has led to many promising developments; however, increasingly, AI research is funded by the private sector leading to potential trade-offs between benefits to patients and benefits to industry. Health AI practitioners should prioritize successful adaptation into clinical practice in order to provide meaningful benefits to patients, but translation usually requires collaboration with industry. We discuss three features of AI studies that hamper the integration of AI into clinical practice from the perspective of researchers and clinicians. These include lack of clinically relevant metrics, lack of clinical trials and longitudinal studies to validate results, and lack of patient and physician involvement in the development process. For partnerships between industry and health research to be sustainable, a balance must be established between patient and industry benefit. We propose three approaches for addressing this gap: improved transparency and explainability of AI models, fostering relationships with industry partners that have a reputation for centering patient benefit in their practices, and prioritization of overall healthcare benefits. With these priorities, we can sooner realize meaningful AI technologies used by clinicians where mutua
https://arxiv.org/abs/2601.06144
Academic Papers
svg
1ed93445acdd4aa9e820f1fcbf40ba4067c7b8f0ab7f8db1e6ec5da6988e5f69
2026-01-13T00:00:00-05:00
Bridging the AI divide in sub-Saharan Africa: Challenges and opportunities for inclusivity
arXiv:2601.06145v1 Announce Type: new Abstract: The artificial intelligence (AI) digital divide in sub-Saharan Africa (SSA) presents significant disparities in AI access, adoption, and development due to varying levels of infrastructure, education, and policy support. This study investigates the extent of AI readiness among the top SSA countries using the 2024 Government AI Readiness Index, alongside an analysis of AI initiatives to foster inclusivity. A comparative analysis of AI readiness scores highlights disparities across nations, with Mauritius (53.94) and South Africa (52.91) leading, while Zambia (42.58) and Uganda (43.32) lag. Quartile analysis reveals a concentration of AI preparedness among a few nations, suggesting uneven AI development. The study further examines the relationship between AI readiness and economic indicators, identifying instances where AI progress does not strictly correlate with Gross Domestic Product per capita, as seen in Rwanda and Uganda. Using case studies of AI initiatives across SSA, this research contextualises quantitative findings, identifying key strategies contributing to AI inclusivity, including talent development programs, research networks, and policy interventions. The study concludes with recommendations to bridge the AI digital divide, emphasising investments in AI education, localised AI solutions, and cross-country collaborations to accelerate AI adoption in SSA.
https://arxiv.org/abs/2601.06145
Academic Papers
svg
4714d8e854b9cdeb6da5a005ff119a419b217c216fb11e5c63dbdba73f8e847a
2026-01-13T00:00:00-05:00
LLM Flow Processes for Text-Conditioned Regression
arXiv:2601.06147v1 Announce Type: new Abstract: Meta-learning methods for regression like Neural (Diffusion) Processes achieve impressive results, but with these models it can be difficult to incorporate expert prior knowledge and information contained in metadata. Large Language Models (LLMs) are trained on giant corpora including varied real-world regression datasets alongside their descriptions and metadata, leading to impressive performance on a range of downstream tasks. Recent work has extended this to regression tasks and is able to leverage such prior knowledge and metadata, achieving surprisingly good performance, but this still rarely matches dedicated meta-learning methods. Here we introduce a general method for sampling from a product-of-experts of a diffusion or flow matching model and an `expert' with binned probability density; we apply this to combine neural diffusion processes with LLM token probabilities for regression (which may incorporate textual knowledge), exceeding the empirical performance of either alone.
https://arxiv.org/abs/2601.06147
Academic Papers
svg
3c854ad93d66c5e9437d96000d176ae6aeaa2d42f8042c35b80a9edb2bed6866
2026-01-13T00:00:00-05:00
A Foundation Model Approach for Fetal Stress Prediction During Labor From cardiotocography (CTG) recordings
arXiv:2601.06149v1 Announce Type: new Abstract: Intrapartum cardiotocography (CTG) is widely used for fetal monitoring during labor, yet its interpretation suffers from high inter-observer variability and limited predictive accuracy. Deep learning approaches have been constrained by the scarcity of CTG recordings with clinical outcome labels. We present the first application of self-supervised pre-training to intrapartum CTG analysis, leveraging 2,444 hours of unlabeled recordings for masked pre-training followed by fine-tuning on the 552-recording CTU-UHB benchmark. Using a PatchTST transformer architecture with a channel-asymmetric masking scheme designed for fetal heart rate reconstruction, we achieve an area under the receiver operating characteristic curve of 0.83 on the full test set and 0.853 on uncomplicated vaginal deliveries, exceeding previously reported results on this benchmark (0.68-0.75). Error analysis reveals that false-positive alerts typically correspond to CTG patterns judged concerning on retrospective clinical review, suggesting clinically meaningful predictions even when umbilical pH is normal. We release standardized dataset splits and model weights to enable reproducible benchmarking. Our results demonstrate that self-supervised pre-training can address data scarcity in fetal monitoring, offering a path toward reliable decision support in the labor room.
https://arxiv.org/abs/2601.06149
Academic Papers
svg
dd83a42838af74c49d8d8c64998a57425ca443da222a5e5b9b3ddf81e27cc70e
2026-01-13T00:00:00-05:00
PromptPort: A Reliability Layer for Cross-Model Structured Extraction
arXiv:2601.06151v1 Announce Type: new Abstract: Structured extraction with LLMs fails in production not because models lack understanding, but because output formatting is unreliable across models and prompts. A prompt that returns clean JSON on GPT-4 may produce fenced, prose-wrapped, or malformed output on Llama, causing strict parsers to reject otherwise correct extractions. We formalize this as format collapse and introduce a dual-metric evaluation framework: ROS (strict parsing, measuring operational reliability) and CSS (post-canonicalization, measuring semantic capability). On a 37,346-example camera metadata benchmark across six model families, we find severe format collapse (for example, Gemma-2B: ROS 0.116 versus CSS 0.246) and large cross-model portability gaps (0.4 to 0.6 F1). We then present PromptPort, a reliability layer combining deterministic canonicalization with a lightweight verifier (DistilBERT) and a safe-override policy. PromptPort recovers format failures (plus 6 to 8 F1), adds verifier-driven semantic selection (plus 14 to 16 F1 beyond canonicalization), and approaches per-field oracle performance (0.890 versus 0.896 in zero-shot) without modifying base models. The method generalizes to held-out model families and provides explicit abstention when uncertain, enabling reliable structured extraction in production deployments.
https://arxiv.org/abs/2601.06151
Academic Papers
svg
9aa0e2d91e01e477d6e19186e9dcdba9421a3efbc711e1c3d444b28f21789c57
2026-01-13T00:00:00-05:00
HiMeS: Hippocampus-inspired Memory System for Personalized AI Assistants
arXiv:2601.06152v1 Announce Type: new Abstract: Large language models (LLMs) power many interactive systems such as chatbots, customer-service agents, and personal assistants. In knowledge-intensive scenarios requiring user-specific personalization, conventional retrieval-augmented generation (RAG) pipelines exhibit limited memory capacity and insufficient coordination between retrieval mechanisms and user-specific conversational history, leading to redundant clarification, irrelevant documents, and degraded user experience. Inspired by the hippocampus-neocortex memory mechanism, we propose HiMeS, an AI-assistant architecture that fuses short-term and long-term memory. Our contributions are fourfold: (1) A short-term memory extractor is trained end-to-end with reinforcement learning to compress recent dialogue and proactively pre-retrieve documents from the knowledge base, emulating the cooperative interaction between the hippocampus and prefrontal cortex. (2) A partitioned long-term memory network stores user-specific information and re-ranks retrieved documents, simulating distributed cortical storage and memory reactivation. (3) On a real-world industrial dataset, HiMeS significantly outperforms a cascaded RAG baseline on question-answering quality. (4) Ablation studies confirm the necessity of both memory modules and suggest a practical path toward more reliable, context-aware, user-customized LLM-based assistants.
https://arxiv.org/abs/2601.06152
Academic Papers
svg
493ec3bf377d2970eb1c57602fa9afe83e007c96cade08c2f1d979e06b3bcac5
2026-01-13T00:00:00-05:00
Interoperability in AI Safety Governance: Ethics, Regulations, and Standards
arXiv:2601.06153v1 Announce Type: new Abstract: This policy report draws on country studies from China, South Korea, Singapore, and the United Kingdom to identify effective tools and key barriers to interoperability in AI safety governance. It offers practical recommendations to support a globally informed yet locally grounded governance ecosystem. Interoperability is a central goal of AI governance, vital for reducing risks, fostering innovation, enhancing competitiveness, promoting standardization, and building public trust. However, structural gaps such as fragmented regulations and lack of global coordination, and conceptual gaps, including limited Global South engagement, continue to hinder progress. Focusing on three high-stakes domains - autonomous vehicles, education, and cross-border data flows - the report compares ethical, legal, and technical frameworks across the four countries. It identifies areas of convergence, divergence, and potential alignment, offering policy recommendations that support the development of interoperability mechanisms aligned with the Global Digital Compact and relevant UN resolutions. The analysis covers seven components: objectives, regulators, ethics, binding measures, targeted frameworks, technical standards, and key risks.
https://arxiv.org/abs/2601.06153
Academic Papers
svg
a124d51eeff131a94181cf4e3526063ed7d3903ea81c3b58f96aea05fc65b4fa
2026-01-13T00:00:00-05:00
BotSim: Mitigating The Formation Of Conspiratorial Societies with Useful Bots
arXiv:2601.06154v1 Announce Type: new Abstract: A society can become a conspiratorial society, in which a majority of humans believe, and therefore spread, conspiracy theories. Artificial intelligence gave rise to social media bots that can spread conspiracies in an automated fashion. Currently, organizations combat the spread of conspiracies through manual fact-checking processes and the dissemination of counter-narratives. However, the effects of harnessing the same automation to create useful bots are not well explored. To address this, we create BotSim, an Agent-Based Model of a society in which useful bots are introduced into a small-world network. These useful bots are: Info-Correction Bots, which correct bad information into good, and Good Bots, which put out good messaging. The simulated agents interact through generating, consuming, and propagating information. Our results show that, left unchecked, Bad Bots can create a conspiratorial society, and this can be mitigated by either Info-Correction Bots or Good Bots; however, Good Bots are more efficient and sustainable than Info-Correction Bots. Proactive good messaging is more resource-effective than reactive information correction. With our observations, we expand the concept of the bot from a malicious social media agent toward an automated social media agent that can be used for both good and bad purposes. These results have implications for designing communication strategies to maintain a healthy social cyber ecosystem.
https://arxiv.org/abs/2601.06154
Academic Papers
svg
220b769c730131f39dbeff88b728633cb80c7bb1d6e6ed8ea5182a7511ae1cbb
2026-01-13T00:00:00-05:00
Channel Knowledge Map Construction via Guided Flow Matching
arXiv:2601.06156v1 Announce Type: new Abstract: The efficient construction of accurate channel knowledge maps (CKMs) is crucial for unleashing the full potential of environment-aware wireless networks, yet it remains a difficult ill-posed problem due to the sparsity of available location-specific channel knowledge data. Although diffusion-based methods such as denoising diffusion probabilistic models (DDPMs) have been exploited for CKM construction, they rely on iterative stochastic sampling, rendering them too slow for real-time wireless applications. To bridge the gap between high fidelity and efficient CKM construction, this letter introduces a novel framework based on linear transport guided flow matching (LT-GFM). Deviating from the noise-removal paradigm of diffusion models, our approach models the CKM generation process as a deterministic ordinary differential equation (ODE) that follows linear optimal transport paths, thereby drastically reducing the number of required inference steps. We propose a unified architecture that is applicable not only to the conventional channel gain map (CGM) construction, but also to the more challenging spatial correlation map (SCM) construction. To achieve physics-informed CKM constructions, we integrate environmental semantics (e.g., building masks) for edge recovery and enforce the Hermitian symmetry property of the SCM. Simulation results verify that LT-GFM achieves superior distributional fidelity with significantly lower Fréchet Inception Distance (FID) and accelerates inference speed by a factor of 25 compared to DDPMs.
https://arxiv.org/abs/2601.06156
Academic Papers
svg
196e9037bbe4a462876e20e6e94cb78af6b573f032517ef6d74b2430cd891609
2026-01-13T00:00:00-05:00
ECLIPTICA - A Framework for Switchable LLM Alignment via CITA - Contrastive Instruction-Tuned Alignment
arXiv:2601.06157v1 Announce Type: new Abstract: Alignment in large language models (LLMs) is still largely static: after training, the policy is frozen. Methods such as DPO and GRPO typically imprint one behavior into the weights, leaving little runtime control beyond prompt hacks or expensive re-alignment. We introduce ECLIPTICA, which treats alignment as instruction-driven and runtime-controllable: natural-language alignment instructions act as an explicit behavioral contract (stance, refusal boundary, verbosity) that modulates behavior on the fly under evolving safety requirements, user roles, and governance constraints. We introduce CITA (Contrastive Instruction-Tuned Alignment), combining SFT with contrastive preference optimization under an explicit geometric anchor to a reference model. This yields a stable Riemannian chart and keeps instruction updates within a shared neighborhood, so regimes stay nearby and traversable for reliable switching. To isolate policy switching from ordinary instruction following, we release the ECLIPTICA benchmark: 3000 controlled cases (300 prompts x 10 instruction types) where the user request is fixed and only the alignment instruction changes. On Llama-3.1-8B across five suites (ECLIPTICA, TruthfulQA, Conditional Safety, Length Control, LITMUS), CITA reaches 86.7% instruction-alignment efficiency, beating DPO (56.1%), GRPO (36.1%), and PPO (20.4%).
https://arxiv.org/abs/2601.06157
Academic Papers
svg